Eclipse has an option to warn on assignment to a method's parameter (inside the method), as in:
public void doFoo(int a){
    if (a < 0){
        a = 0; // this will generate a warning
    }
    // do stuff
}
Normally I try to activate (and heed) almost all available compiler warnings, but in this case I'm not really sure whether it's worth it.
I see legitimate cases for changing a parameter in a method (e.g. allowing a parameter to be "unset", such as null, and automatically substituting a default value), but few situations where it would cause problems, except that it might be a bit confusing to reassign a parameter in the middle of the method.
Do you use such warnings? Why / why not?
Note:
Avoiding this warning is of course equivalent to making the method parameter final (except that then it's a compiler error :-)). So the question Why should I use the keyword "final" on a method parameter in Java? might be related.
The confusing part is the reason for the warning. If you reassign a new value to a parameter inside the method (possibly conditionally), then it is no longer clear what a is at any given point. That's why it is seen as good style to leave method parameters unchanged.
For me, as long as you do it early and clearly, it's fine. As you say, doing it buried deep in four conditionals half-way into a 30-line function is less than ideal.
You also obviously have to be careful when doing this with object references, since calling methods on the object you were given may change its state and communicate information back to the caller, but of course if you've subbed in your own placeholder, that information is not communicated.
The flip side is that declaring a new variable and assigning the argument (or a default if argument needs defaulting) to it may well be clearer, and will almost certainly not be less efficient -- any decent compiler (whether the primary compiler or a JIT) will optimize it out when feasible.
Assigning a method parameter is not something most people expect to happen in most methods. Since we read the code with the assumption that parameter values are fixed, an assignment is usually considered poor practice, if only by convention and the principle of least astonishment.
There are always alternatives to assigning method parameters: usually a local temporary copy is just fine. But generally, if you find you need to control the logic of your function through parameter reassignment, it could benefit from refactoring into smaller methods.
Reassigning to the method parameter variable is usually a mistake if the parameter is a reference type.
Consider the following code:
MyObject myObject = new MyObject();
myObject.Foo = "foo";
doFoo(myObject);
// what's the value of myObject.Foo here?

public void doFoo(MyObject myFoo){
    myFoo = new MyObject("Bar");
}
Many people will expect that after the call to doFoo, myObject.Foo will equal "Bar". Of course, it won't, because Java is not pass-by-reference: it passes references by value. A copy of the reference is handed to the method, so reassigning that copy only has an effect in the local scope, not at the call site. This is one of the most commonly misunderstood concepts.
Different compiler warnings can be appropriate for different situations. Sure, some are applicable to most or all situations, but this does not seem to be one of them.
I would think of this particular warning as the compiler giving you the option to be warned about a method parameter being reassigned when you need it, rather than a rule that method parameters should not be reassigned. Your example constitutes a perfectly valid case for it.
I sometimes use it in situations like these:
void countdown(int n)
{
    for (; n > 0; n--) {
        // do something
    }
}
to avoid introducing a variable i in the for loop. Typically I only use this kind of 'trick' in very short functions.
Personally I very much dislike 'correcting' parameters inside a function this way. I prefer to catch these by asserts and make sure that the contract is right.
I usually don't need to assign new values to method parameters.
As to best practices, the warning also avoids confusion when facing code like:
public void foo() {
    int a = 1;
    bar(a);
    System.out.println(a); // still prints 1: bar() only modified its own copy
}

public void bar(int a) {
    a++;
}
You should write code with no side effects: every method should be a function that doesn't change state. Otherwise it's a command, and it can be dangerous.
See the definitions of command and function on the DDD website:
Function:
An operation that computes and returns a result without observable side effects.
Command:
An operation that effects some change to the system (for example, setting a variable). An operation that intentionally creates a side effect.
JavaScript has been a revelation to me. I thought it would be like other classical languages such as C# or Java, but it isn't. The "dynamic world" is tough and unpredictable. I was astonished when I read that functions can receive as many arguments as you like, without any error! I don't like it at all. I want more "staticness"; I want some sort of compile-time errors!
My question is: do I need to worry about that? Is it good practice to throw an exception if the number of arguments passed is greater than what a particular function expects?
function foo(one, two, three)
{
    // Is it good?
    if (arguments.length > arguments.callee.length)
        throw new Error("Wrong quantity of arguments in " + arguments.callee.name + "()");
    /* Stuff */
}
foo(1, 2, 3, 4); // -> Error
foo(1, 2, 3); // -> OK
Should I be concerned about it at all?
Thanks in advance!
You probably should not be concerned. There is no blanket rule on how to handle errors like this. It depends entirely upon the type of error and the type of situation. In some cases, where it's a serious programming error and there is no way to proceed (like insufficient arguments to perform the desired function), it may make sense to throw an exception or return an error from the function. But, in other cases, an extra argument can just be safely ignored and you can continue on your merry way as if that argument was never passed.
As you get used to javascript, you will come to understand that many function arguments can be optional and a single function may be correctly called with zero, one, two or three or even N arguments and the code in the function can adapt appropriately. This actually allows you to do things that are not as easy to do in more "static" languages. It is even possible to adapt to the type of the arguments and do something appropriately based on the type of the argument. While this may sound like heresy to someone that only has experience in hard-typed languages, it can actually be extremely useful.
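To make that concrete, here is a minimal sketch of a function that adapts to what it actually received; the function and parameter names are made up for illustration:

// Adapts to how it is called (hypothetical names):
function getItems(filter, limit) {
    // called as getItems(10): treat a lone number as the limit
    if (typeof filter === "number" && limit === undefined) {
        limit = filter;
        filter = null;
    }
    // called as getItems(): fall back to a default limit
    if (limit === undefined) {
        limit = 25;
    }
    // ... fetch and return up to `limit` items matching `filter`
}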
As you maintain a body of code over time, you will also come to find that it's nice to be able to add an argument to the definition of a function, add code to that function that defaults it to a reasonable value if it isn't passed, and NOT have to change any of the prior code that was using that function, yet a few new places that need that new argument can start using it immediately. Rather than grepping through the entire codebase to fix up every caller of that function, you can just make one change in one file and immediately start using a new argument to the function without changing all the other callers. This is enormously useful.
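A minimal sketch of that maintenance pattern, with made-up names; the new third argument gets a default, so existing call sites keep working untouched:

function formatName(first, last, separator) {
    if (separator === undefined) {
        separator = " "; // default when the new argument is not passed
    }
    return first + separator + last;
}

formatName("Ada", "Lovelace");        // old call sites keep working
formatName("Ada", "Lovelace", ", ");  // new call sites opt in to the new argument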
So, in more direct answer to your question, an extra argument passed to a function is never a serious error in javascript. Your code could just ignore it and proceed. If you want to alert the developer who wrote that code that an unexpected argument was passed, you can notify them somehow (perhaps some warning text on the debug console) in the "debug" version of your function/library, but I see no reason why you should stop execution in the "production" version of your function/library when you can proceed without any harm.
You don't need to worry about this. If you pass too many arguments, the function will just ignore it. You should only throw an error if there are too few arguments. In that case, the function might not be able to run.
I agree that the number of arguments isn't important, and an unused, uncalled argument won't do anything (as long as you type-check the arguments you're getting before you use them). But if you're particularly concerned, you could create a subset of the passed arguments and access that object internally:
function test(arg1, arg2, arg3) {
    var slice = Array.prototype.slice,
        subset = slice.call(arguments, 0, 3); // depending on how many arguments you want
}
Of course this means that you've now got to recover the parameters from the args object, and since surplus arguments seem to be perfectly safe this seems pointless. But it is still an option.
Albeit unnecessary.
When creating a JavaScript function with multiple arguments, I am always confronted with this choice: pass a list of arguments vs. pass an options object.
For example I am writing a function to map a nodeList to an array:
function map(nodeList, callback, thisObject, fromIndex, toIndex){
...
}
I could instead use this:
function map(options){
...
}
where options is an object:
options={
nodeList:...,
callback:...,
thisObject:...,
fromIndex:...,
toIndex:...
}
Which one is the recommended way? Are there guidelines for when to use one vs. the other?
[Update] There seems to be a consensus in favor of the options object, so I'd like to add a comment: one reason why I was tempted to use the list of arguments in my case was to have behavior consistent with JavaScript's built-in Array.prototype.map method.
Like many of the others, I often prefer passing an options object to a function instead of passing a long list of parameters, but it really depends on the exact context.
I use code readability as the litmus test.
For instance, if I have this function call:
checkStringLength(inputStr, 10);
I think that code is quite readable the way it is and passing individual parameters is just fine.
On the other hand, there are functions with calls like this:
initiateTransferProtocol("http", false, 150, 90, null, true, 18);
Completely unreadable unless you do some research. On the other hand, this code reads well:
initiateTransferProtocol({
"protocol": "http",
"sync": false,
"delayBetweenRetries": 150,
"randomVarianceBetweenRetries": 90,
"retryCallback": null,
"log": true,
"maxRetries": 18
});
It is more of an art than a science, but if I had to name rules of thumb:
Use an options parameter if:
You have more than four parameters
Any of the parameters are optional
You've ever had to look up the function to figure out what parameters it takes
If someone ever tries to strangle you while screaming "ARRRRRG!"
Multiple arguments are mostly for obligatory parameters. There's nothing wrong with them.
If you have optional parameters, it gets complicated. If one of them relies on the others, so that they have a certain order (e.g. the fourth one needs the third one), you should still use multiple arguments. Nearly all native EcmaScript and DOM methods work like this. A good example is the open method of XMLHttpRequest, where the last 3 arguments are optional; the rule is "no password without a user" (see also the MDN docs).
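For reference, a sketch of that ordering constraint using XMLHttpRequest.open; the URL and credentials are placeholders:

// XMLHttpRequest.open(method, url, async, user, password):
// the last three arguments are optional, but they only make sense in order,
// so you cannot supply a password without also supplying a user.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/some/resource");                          // required arguments only
xhr.open("GET", "/some/resource", true);                    // plus the optional async flag
xhr.open("GET", "/some/resource", true, "user", "secret");  // user comes before password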
Option objects come in handy in two cases:
You've got so many parameters that it gets confusing: The "naming" will help you, you don't have to worry about the order of them (especially if they may change)
You've got optional parameters. The objects are very flexible, and without any ordering you just pass the things you need and nothing else (or undefineds).
In your case, I'd recommend map(nodeList, callback, options). nodeList and callback are required; the other three arguments come up only occasionally and have reasonable defaults.
Another example is JSON.stringify. You might want to use the space parameter without passing a replacer function; then you have to call it as (…, null, 4). An options object might have been better, although it's not really reasonable for only 2 parameters.
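To illustrate the JSON.stringify case: with a positional signature you have to fill the replacer slot with null just to reach the space parameter.

// JSON.stringify(value, replacer, space): pretty-printing without a replacer
// still requires a null placeholder in the middle position.
JSON.stringify({a: 1, b: [2, 3]}, null, 4);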
Using the 'options as an object' approach is going to be best. You don't have to worry about the order of the properties, and there's more flexibility in what data gets passed (optional parameters, for example).
Creating an object also means the options could be easily used on multiple functions:
options={
nodeList:...,
callback:...,
thisObject:...,
fromIndex:...,
toIndex:...
}
function1(options){
    alert(options.nodeList);
}

function2(options){
    alert(options.fromIndex);
}
It can be good to use both. If your function has one or two required parameters and a bunch of optional ones, make the first two parameters required and the third an optional options hash.
In your example, I'd do map(nodeList, callback, options). nodeList and callback are required, it's fairly easy to tell what's happening just by reading a call to it, and it's like existing map functions. Any other options can be passed as an optional third parameter.
I may be a little late to the party with this response, but I was searching for other developers' opinions on this very topic and came across this thread.
I very much disagree with most of the responders, and side with the 'multiple arguments' approach. My main argument is that it discourages other anti-patterns like "mutating and returning the param object" or "passing the same param object on to other functions". I've worked in codebases which have extensively abused this anti-pattern, and debugging code that does this quickly becomes impossible. I think this is a very JavaScript-specific rule of thumb, since JavaScript is not strongly typed and allows such arbitrarily structured objects.
My personal opinion is that developers should be explicit when calling functions, avoid passing around redundant data, and avoid modifying by reference. It's not that this pattern precludes writing concise, correct code. I just feel it makes it much easier for your project to fall into bad development practices.
Consider the following terrible code:
function main() {
    const x = foo({
        param1: "something",
        param2: "something else",
        param3: "more variables"
    });
    return x;
}

function foo(params) {
    params.param1 = "Something new";
    bar(params);
    return params;
}

function bar(params) {
    params.param2 = "Something else entirely";
    const y = baz(params);
    return params.param2;
}

function baz(params) {
    params.param3 = "Changed my mind";
    return params;
}
Not only does this kind of code require more explicit documentation to specify intent, it also leaves room for vague errors.
What if a developer modifies param1 in bar()? How long do you think it would take, in a codebase of sufficient size, to catch this?
Admittedly, this example is slightly disingenuous because it assumes developers have already committed several anti-patterns by this point. But it shows how passing objects containing parameters allows greater room for error and ambiguity, requiring a greater degree of conscientiousness and observance of const correctness.
Just my two-cents on the issue!
Your comment on the question:
in my example the last three are optional.
So why not do this? (Note: This is fairly raw JavaScript. Normally I'd use a default hash and update it with the options passed in by using Object.extend or jQuery.extend or similar; a sketch of that appears after the example calls below.)
function map(nodeList, callback, options) {
    options = options || {};
    var thisObject = options.thisObject || {};
    var fromIndex = options.fromIndex || 0;
    var toIndex = options.toIndex || 0;
}
So, since it's now much more obvious what's optional and what's not, all of these are valid uses of the function:
map(nodeList, callback);
map(nodeList, callback, {});
map(nodeList, callback, null);
map(nodeList, callback, {
    thisObject: {some: 'object'},
});
map(nodeList, callback, {
    toIndex: 100,
});
map(nodeList, callback, {
    thisObject: {some: 'object'},
    fromIndex: 0,
    toIndex: 100,
});
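As an aside, the 'default hash' variant mentioned above might look roughly like this; it assumes jQuery is available for $.extend and is only a sketch, not a drop-in implementation:

function map(nodeList, callback, options) {
    // merge the caller's options over a hash of defaults; $.extend({}, a, b)
    // copies into a fresh object, so neither input is modified
    var settings = $.extend({
        thisObject: null,
        fromIndex: 0,
        toIndex: nodeList.length
    }, options);
    // ... use settings.thisObject, settings.fromIndex, settings.toIndex
}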
It depends.
Based on my observation of how popular libraries are designed, here are the scenarios where we should use an options object:
The parameter list is long (>4).
Some or all parameters are optional and they don't rely on a certain order.
The parameter list might grow in future API update.
The API will be called from other code and the API name is not clear enough to convey the parameters' meaning, so it needs descriptive parameter names for readability.
And scenarios to use a parameter list:
The parameter list is short (<= 4).
Most or all of the parameters are required.
Optional parameters come in a certain order (e.g. $.get; see the sketch after this list).
It's easy to tell the parameters' meaning from the API name.
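As a concrete illustration of the $.get point above (the URL and data are placeholders):

// jQuery.get( url [, data ] [, success ] [, dataType ] ): the optional
// arguments sit in a fixed relative order, so a call that supplies all of
// them reads positionally rather than by name.
$.get("/items", { page: 2 }, function (result) {
    console.log(result);
}, "json");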
An object is preferable, because if you pass an object it's easy to extend the number of properties in that object, and you don't have to watch the order in which your arguments are passed.
For a function that usually takes some predefined arguments, you are better off with an options object. The opposite example would be a function that accepts an unbounded number of arguments, like: setCSS({height:100},{width:200},{background:"#000"}).
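One possible shape for such a variadic function, using the arguments object; setCSS is the hypothetical example from above, and this sketch simply merges whatever style objects are passed:

function setCSS() {
    var merged = {};
    // walk every style object the caller passed, however many there are
    for (var i = 0; i < arguments.length; i++) {
        var styles = arguments[i];
        for (var prop in styles) {
            if (styles.hasOwnProperty(prop)) {
                merged[prop] = styles[prop];
            }
        }
    }
    return merged; // e.g. {height: 100, width: 200, background: "#000"}
}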
I would look at large JavaScript projects.
With things like Google Maps you will frequently see that instantiated objects require an options object, but functions require parameters. I would think this has to do with optional arguments.
If you need default arguments or optional arguments, an object would probably be better because it is more flexible. But if you don't, normal functional arguments are more explicit.
Javascript has an arguments object too.
https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Functions_and_function_scope/arguments
I am new to Dojo and trying to figure out the difference between these two uses of what seems to be the same thing.
dndController: new dijit.tree.dndSource("dijit.tree.dndSource",{copyOnly:true})
and
dndController: "dijit.tree.dndSource"
The second one works, but when I use the first one, it gives me an error when loading my tree. It says type node is undefined. The reason I want to use the first one though is because I want to set copyOnly to true.
Any answers appreciated.
That parameter expects a constructor function instead of the object you passed. Perhaps the following would work:
dndController: function(arg, params){
    return new dijit.tree.dndSource(
        arg, // pass the first parameter through untouched
        dojo.mixin({}, params, {copyOnly: true})
        // create a copy of the params object, but set copyOnly to true
    );
}
Some explanation:
I actually don't know anything about drag-and-drop on trees. All I did was look at the Tree source code (it's at dijit/Tree.js or something like that) to find out where dndController is used. From that point I could find out that it was supposed to be a function that can receive these two parameters (or a string representing the path to such a function...). The actual dijit.tree.dndSource function that is used I just copied from your question, hoping it would work.
The dojo.mixin function mixes all the objects in its 2nd, 3rd, ... arguments into the first argument. By using a new, empty object as the "receiving" object, we have a neat way to make a shallow copy of params, setting copyOnly without modifying the original params object.
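A quick illustration of what dojo.mixin does here; the property values are made up:

// dojo.mixin(dest, src1, src2, ...) copies properties left to right into dest
// and returns it; starting from {} leaves the original objects untouched.
var params = {accept: ["treeNode"], copyOnly: false};
var copy = dojo.mixin({}, params, {copyOnly: true});
// copy is {accept: ["treeNode"], copyOnly: true}; params is unchanged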
Poring over the release notes for jQuery 1.4, I came across $.noop(), which is:
Description: An empty function. (added in 1.4)
You can use this empty function when you wish to pass around a function that will do nothing.
Perhaps I'm missing something profound here, but what exactly is a practical use of passing around an empty function?
Code examples appreciated.
This function was proposed due to performance issues on embedded systems when using $.ajax, reported on the jQuery-Dev mailing list. You can see the thread.
Basically, they preferred to introduce and use this single empty function, rather than declaring empty anonymous functions all around.
Now this function is internally used in the ajax, event and offset modules.
You can also take a look at the commit where it was introduced.
If you have a function that accepts a function as a parameter, and you don't have any code to give it, you can pass $.noop.
I can't think of any such cases in jQuery where the parameter isn't optional in the first place, though.
Unlike writing function(){}, passing $.noop will not create a new function instance, saving a bit of memory. However, if whatever you're passing it to modifies the function object (eg, funcParam.id = 2), passing $.noop will mess things up.
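A minimal sketch of that caveat; the register function and the id property are just examples of a consumer mutating the function object:

// $.noop is a single shared function instance, so any property a consumer
// sticks onto it is visible everywhere else $.noop is used:
function register(callback) {
    callback.id = 2; // some library tagging the callback it was given
}
register($.noop);
register(function () {}); // a fresh anonymous function gets tagged instead
// $.noop.id is now 2 for every other piece of code that reads it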
Real World Example (well almost):
jQuery.fn.myAwesomeAjax = function(url, complete) {
return jQuery.ajax(url || this.url)
.complete(complete || jQuery.noop);
};
Use it instead of function (){}
Probably if some bad API requires a function as a parameter, and you don't want to do anything in it, this would be a framework-supported way of making that obvious.
I use a couple of plugins which require callbacks, but for some parts I don't actually want to use a certain callback. So, I put in function() {}.
noop is defined in the jQuery source as
noop: function() {}
so it will fit anywhere you'd use a blank function, such as the above example.
The only logical reason is if you're calling a function that does something AND calls another function, and you want the higher-level function to do its thing without calling a parameter function.
Most of the jQuery functions optionally take a parameter function, so you don't have to pass one in. Maybe there's one or two where that's not the case -- or maybe it's to assist developers with their custom code that behaves like this.
If a function requires you pass a function as an argument maybe? It's shorter to say do_something($.noop) than do_something(function(){}).
Although not by much...
...6 characters...
...yeah, that feature looks quite useless actually.
It can be useful if you have a function that supplies functions to other functions.
Example: You have a List of data. Each item has a Button that does something. The "something" can be different for every item. You have a "FunctionFactory" that takes in the item and returns a function. If you don't want the button to do something for whatever reason, then the cleanest way could be to return an empty function, as that way you know that your Factory ALWAYS returns a function.
I don't have a concrete example for jQuery, but I guess this could come in handy when used in an .each or .map block.
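A small sketch of that factory idea in an .each block; buttonActionFactory, item.disabled, openDetails and items are invented names:

// The factory always returns a function, so the caller can attach the result
// unconditionally; $.noop stands in for "do nothing".
function buttonActionFactory(item) {
    if (item.disabled) {
        return $.noop;          // still callable, just does nothing
    }
    return function () {
        openDetails(item.id);   // hypothetical per-item handler
    };
}

$(".item-button").each(function (index) {
    $(this).click(buttonActionFactory(items[index]));
});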
It's purely a convenience/replacement for function(){} in contexts where callbacks are required - I don't think I'll be using it anytime soon.
I bet the jQuery team had quite a laugh when they dropped it in though, also serves a comedic purpose.