I'm currently reading through this jQuery masking plugin to try and understand how it works, and in numerous places the author calls the slice() function without passing any arguments to it. For instance, here the _buffer variable is slice()d, and _buffer.slice() and _buffer seem to hold the same values.
Is there any reason for doing this, or is the author just making the code more complicated than it should be?
//functionality fn
function unmaskedvalue($input, skipDatepickerCheck) {
    var input = $input[0];
    if (tests && (skipDatepickerCheck === true || !$input.hasClass('hasDatepicker'))) {
        var buffer = _buffer.slice();
        checkVal(input, buffer);
        return $.map(buffer, function(element, index) {
            return isMask(index) && element != getBufferElement(_buffer.slice(), index) ? element : null;
        }).join('');
    } else {
        return input._valueGet();
    }
}
The .slice() method makes a (shallow) copy of an array, and takes parameters to indicate which subset of the source array to copy. Calling it with no arguments just copies the entire array. That is:
_buffer.slice();
// is equivalent to
_buffer.slice(0);
// also equivalent to
_buffer.slice(0, _buffer.length);
EDIT: Isn't the start index mandatory? Yes. And no. Sort of. JavaScript references (like MDN) usually say that .slice() requires at least one argument, the start index. Calling .slice() with no arguments is like saying .slice(undefined). In the ECMAScript Language Spec, step 5 in the .slice() algorithm says "Let relativeStart be ToInteger(start)". If you look at the algorithm for the abstract operation ToInteger(), which in turn uses ToNumber(), you'll see that it ends up converting undefined to 0.
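You can verify this in a console; calling .slice(undefined) behaves exactly like .slice(0):
var arr = [1, 2, 3];
arr.slice();          // [1, 2, 3]
arr.slice(undefined); // [1, 2, 3] - the undefined start is coerced to 0
arr.slice(0);         // [1, 2, 3]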
Still, in my own code I would always say .slice(0), not .slice() - to me it seems neater.
array.slice() makes a shallow copy of the array; it is just a shorter form of array.slice(0).
Is there any reason for doing this, or is the author just making the code more complicated than it should be?
Yes, there may be a reason in the following cases (the provided code alone gives us no clue whether they apply):
checkVal() or getBufferElement() modify the content of the arrays passed to them (as second and first argument respectively). In this case the code author wants to prevent the global variable _buffer's content from being modified when calling unmaskedvalue().
The function passed to $.map runs asynchronously. In this case the code author wants to make sure that the passed callback will access the array content as it was during unmaskedvalue() execution (e.g. another event handler could modify _buffer's content after unmaskedvalue() executes and before $.map's callback runs).
If none of the above is the case then, yes, the code would work equally well without .slice(). In that case, maybe the code author wanted to play it safe and avoid bugs from future code changes that could result in unforeseen modifications of _buffer's content.
Note:
When saying: "prevent the global variable _buffer's content from being modified" it means to achieve the following:
_buffer[0].someProp = "new value" would reflect in the copied array.
_buffer[0] = "new value" would not reflect in the copied array.
(For preventing changes also in the first bullet above, array deep clone can be used, but this is out of the discussed context)
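A minimal demonstration of both cases (the variable names are illustrative, not taken from the plugin):
var _buffer = [{ someProp: "old" }, "b", "c"];
var copy = _buffer.slice();

// First bullet: mutating an element's property is visible through both arrays,
// because the shallow copy shares the element references.
_buffer[0].someProp = "new value";
console.log(copy[0].someProp); // "new value"

// Second bullet: reassigning a slot of the original does NOT affect the copy.
_buffer[1] = "new value";
console.log(copy[1]); // "b"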
Note 2:
In ES6
var buffer = _buffer.slice();
can also be written as
var buffer = [..._buffer];
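Note that the spread version is also a shallow copy, just like .slice(); a quick check:
var _buffer = [{ someProp: "x" }];
var buffer = [..._buffer];
console.log(buffer === _buffer);       // false - a new array was created
console.log(buffer[0] === _buffer[0]); // true  - the elements are shared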
Related
Consider the following recursive Fibonacci function that has been optimized using memoization.
There is no other code apart from this.
function memoizeFibonacci(index, cache = []) {
    if (cache[index]) {
        return cache[index]
    } else {
        if (index < 3) return 1
        else {
            cache[index] = memoizeFibonacci(index - 1, cache) + memoizeFibonacci(index - 2, cache)
        }
    }
    console.log(cache)
    return cache[index];
}
memoizeFibonacci(6)
Can someone please explain how the cache array is updated? When viewing the console logs, the cache seems to hold all the previous values from the resolved recursive calls. But to me this doesn't make sense, as the cache is not stored outside memoizeFibonacci, so the scope should not allow this.
Every nested recursive call adds its item to the cache as it resolves, but only during the chain of executions of the recursion.
console.log is asynchronous and probably shows the final result repeatedly. There is no specification for how console.log works, so it can act differently depending on the environment; it's I/O.
Even if you make it work as expected in your environment, it can work differently for another user. A hypothesis based on how console behaves in your algorithm is not sound.
Kyle Simpson references:
Recursion in JS https://github.com/getify/Functional-Light-JS/blob/master/manuscript/ch8.md
Console log https://github.com/getify/You-Dont-Know-JS/blob/1st-ed/async%20%26%20performance/ch1.md#async-console
This has nothing to do with closures. This is simply passing the same array to nested recursive calls.
When you pass an array (or any object for that matter) to a function, the array is not copied, rather a reference to it is passed, so changing the array in the recursive call will affect the same array.
Here is an example of what is basically happening:
function foo(arr) {
arr[0] = "hello";
}
let arr = [];
foo(arr);
console.log(arr); // changed because of the call to foo
Notice that the recursive calls to memoizeFibonacci explicitly pass the cache object as the second parameter, so each recursive call shares the same array as the top-level call, and any changes to the cache object in the recursive calls are reflected in the top-level call as well.
BTW, this type of memoization is not persistent, meaning that these two calls:
memoizeFibonacci(6);
memoizeFibonacci(10);
don't share the same cache object. They each use a different cache array that must be reconstructed from scratch, rather than the call to memoizeFibonacci(10) reusing the cache already built by the call to memoizeFibonacci(6) and appending to it. A more efficient memoization makes use of closures, like in this example: https://stackoverflow.com/a/8548823/9867451
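A minimal sketch of that closure-based approach (my illustration, not the code from the linked answer):
const memoizeFibonacci = (function() {
    const cache = []; // persists across calls because the closure keeps it alive
    return function fib(index) {
        if (cache[index]) return cache[index];
        if (index < 3) return 1;
        return cache[index] = fib(index - 1) + fib(index - 2);
    };
})();

memoizeFibonacci(6);  // fills the cache up to index 6
memoizeFibonacci(10); // reuses the cached values for indices 3..6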
Note: If you are asking why all the console.log(cache) calls print the same exact filled array, that's because they print the same array, and the values you see in the console are not necessarily present at the point of the console.log. Take a look at this other question: array.length is zero, but the array has elements in it. To log exactly what's in the cache object at the time of the console.log, change it to:
console.log(JSON.stringify(cache));
Eclipse has an option to warn on assignment to a method's parameter (inside the method), as in:
public void doFoo(int a){
    if (a < 0){
        a = 0; // this will generate a warning
    }
    // do stuff
}
Normally I try to activate (and heed) almost all available compiler warnings, but in this case I'm not really sure whether it's worth it.
I see legitimate cases for changing a parameter in a method (e.g. allowing a parameter to be "unset" (e.g. null) and automatically substituting a default value), but few situations where it would cause problems, other than that it might be a bit confusing to reassign a parameter in the middle of the method.
Do you use such warnings? Why / why not?
Note:
Avoiding this warning is of course equivalent to making the method parameter final (only then it's a compiler error :-)). So this question Why should I use the keyword "final" on a method parameter in Java? might be related.
The potential for confusion is the reason for the warning. If you reassign a new value to a parameter inside the method (possibly conditionally), it is no longer clear what a refers to. That's why it is considered good style to leave method parameters unchanged.
For me, as long as you do it early and clearly, it's fine. As you say, doing it buried deep in four conditionals half-way into a 30-line function is less than ideal.
You also obviously have to be careful when doing this with object references, since calling methods on the object you were given may change its state and communicate information back to the caller, but of course if you've subbed in your own placeholder, that information is not communicated.
The flip side is that declaring a new variable and assigning the argument (or a default if argument needs defaulting) to it may well be clearer, and will almost certainly not be less efficient -- any decent compiler (whether the primary compiler or a JIT) will optimize it out when feasible.
Assigning a method parameter is not something most people expect to happen in most methods. Since we read the code with the assumption that parameter values are fixed, an assignment is usually considered poor practice, if only by convention and the principle of least astonishment.
There are always alternatives to assigning method parameters: usually a local temporary copy is just fine. But generally, if you find you need to control the logic of your function through parameter reassignment, it could benefit from refactoring into smaller methods.
Reassigning to the method parameter variable is usually a mistake if the parameter is a reference type.
Consider the following code:
MyObject myObject = new MyObject();
myObject.Foo = "foo";
doFoo(myObject);
// what's the value of myObject.Foo here?

public void doFoo(MyObject myFoo){
    myFoo = new MyObject("Bar");
}
Many people will expect that after the call to doFoo, myObject.Foo will equal "Bar". Of course it won't, because Java is not pass-by-reference; it passes references by value. That is to say, a copy of the reference is passed to the method. Reassigning that copy only has an effect in the local scope, not at the call site. This is one of the most commonly misunderstood concepts.
Different compiler warnings can be appropriate for different situations. Sure, some are applicable to most or all situations, but this does not seem to be one of them.
I would think of this particular warning as the compiler giving you the option to be warned about a method parameter being reassigned when you need it, rather than a rule that method parameters should not be reassigned. Your example constitutes a perfectly valid case for it.
I sometimes use it in situations like these:
void countdown(int n)
{
    for (; n > 0; n--) {
        // do something
    }
}
to avoid introducing a variable i in the for loop. Typically I only use these kinds of 'tricks' in very short functions.
Personally I very much dislike 'correcting' parameters inside a function this way. I prefer to catch these with asserts and make sure that the contract is right.
I usually don't need to assign new values to method parameters.
As to best-practices - the warning also avoids confusion when facing code like:
public void foo() {
    int a = 1;
    bar(a);
    System.out.println(a); // prints 1: bar received a copy of the value
}

public void bar(int a) {
    a++;
}
You should write code with no side effects: every method should be a function that does not change state. Otherwise it's a command, and it can be dangerous.
See the definitions for command and function on the DDD website:
Function: An operation that computes and returns a result without observable side effects.
Command: An operation that effects some change to the system (for example, setting a variable). An operation that intentionally creates a side effect.
I'm having some issues using eval on an array to return the 2nd value of the multidimensional array.
az1_array = new Array([33010, 50], [33012, 50], [33013, 50]);
nzone = 33012;

eval(az1_array).forEach(function(t) {
    if (t[0] == nzone) {
        alert(az1_array[t][1]);
    }
});
I am trying to get the 2nd value when this code runs, but it always returns the first value (33010) no matter what I set nzone to. How would I get it to search the entire array and return only the 2nd value for nzone?
First, avoid the use of global variables. Next, eval is unnecessary here, and you shouldn't use it for such things. Third, use Array.prototype.filter and filter out what you want.
console.log(az1_array.filter(function(arr) {
    return arr[0] == nzone;
}));
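If you need just the second value rather than the whole matching sub-array, something like Array.prototype.find (an ES6 method; this snippet is a suggested extension, not part of the original answer) would do:
var match = az1_array.find(function(arr) {
    return arr[0] == nzone; // first sub-array whose first element is nzone
});
if (match) {
    console.log(match[1]); // 50
}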
Why eval isn't suitable here:
eval() is a dangerous function, which executes the code it's passed with the privileges of the caller. If you run eval() with a string that could be affected by a malicious party, you may end up running malicious code on the user's machine with the permissions of your webpage / extension. More importantly, third-party code can see the scope in which eval() was invoked, which can lead to possible attacks in ways to which the similar Function is not susceptible.
Also, it's pretty slow compared to the alternatives.
Either keep most of your code the same and just change two things:
var az1_array = new Array([33010, 50], [33012, 51], [33013, 52]);
nzone = 33012;

az1_array.forEach(function(t) { // remove eval (or clarify why you used it in the first place)
    if (t[0] == nzone) {
        alert(t[1]); // change the alert
    }
});
Or use an already existing method like Array.prototype.filter().
Just wondering what's the difference between the following JS scripts.
[].slice.call(this) vs Array.prototype.slice.call(this);
They seem to do the same thing; can someone explain the difference and which one I should stick with?
Cheers
They are almost identical. Let's examine each of them:
[].slice.call(data)
What it does?
[] - this creates a new Array, same as new Array()
slice - it retrieves the slice method from that array
call(data) - it calls the slice() method, replacing its current context with the data variable, i.e. it uses data as its internal array, not []. This returns our data converted into an Array.
Now, the second one:
Array.prototype.slice.call(data)
This one:
Array.prototype.slice - retrieves the slice method from Array.prototype; in a nutshell, it returns the slice method without any context (internal data).
call(data) - as in the previous variant, calls this method with data as its context.
Conclusion
Both [].slice and Array.prototype.slice return a slice function which you can call on some array; however, this array is passed not as an argument but as the context (using the call method of the function object). In JavaScript, these calls are absolutely the same:
// Calling as a method of 'Hello' string
'Hello'.indexOf('e') // returns 1
// Calling as unbound method
String.prototype.indexOf.call('Hello', 'e') // returns 1
String.prototype.indexOf.apply('Hello', ['e']) // returns 1
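The typical use case for either form is converting an array-like object, such as arguments or a NodeList, into a real array. For instance:
function demo() {
    // arguments is array-like, not a real Array, so it has no slice of its own
    var viaLiteral = [].slice.call(arguments);
    var viaPrototype = Array.prototype.slice.call(arguments);
    console.log(viaLiteral);   // [1, 2, 3]
    console.log(viaPrototype); // [1, 2, 3]
}
demo(1, 2, 3);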
Those methods are almost the same. The first one is more readable, but the second one uses a bit less memory because it doesn't create a temporary initial Array.
I believe that Array doesn't actually create an object, whereas [] is an instance of Array.
Definitely stick with the latter one as the first one is an implicit array construction and the same as:
new Array().slice.call(this)
It's constructing a new array that needs to be garbage collected afterwards, since you don't use it. While both statements "work", the first one does a lot of extra work: first you construct a new instance, then the engine checks whether the instance itself has a slice property, and as it doesn't, the prototype chain is traversed, slice is found on Array.prototype, and only then called. Then you call it with your other array as the scope, essentially rendering the instance you created useless by using your other one.
That's why Array.prototype.slice.call(myarr) is the proper way of accessing the slice method.
Array.prototype.slice.call() seems slightly more performant, which makes sense, because it would be quicker at runtime to "look up" this method. Still, I posit that [].slice.call() is the best choice. Here's the jsperf for reference.
Let's be real: in what case would the negligible difference between the two be a major contributor to poor website performance? Sounds like heaven, actually. The shorter syntax of the literal method is awesome. I'd rather save myself the half-second it'd take to type the prototype variant than be shakily more efficient.
The coffin nail of Array.prototype, some people say, is that it's irresponsible because you risk altering it. But... that's stupid. Figure 1: Array.prototype = function() { console.log('foo') }; Whoopsie... That probably happens all of the time. Seriously, even granting that its legitimacy is intact, it takes me longer to type and is, imo, hideous compared to its sexier literal sister.
The real answer to your real question is that the functionality is exactly the same. Use whichever you will still appreciate on your deathbed.
I was reading the source code for pallet.js and came across this.
var ret = (function(proto) {
    return {
        slice: function(arr, opt_begin, opt_end) {
            return proto.slice.apply(arr, proto.slice.call(arguments, 1));
        },
        extend: function(arr, arr2) {
            proto.push.apply(arr, arr2);
        }
    };
})(Array.prototype);
var slice = ret.slice;
var extend = ret.extend;
Why is this necessary? Why could they not simply write this:
var slice = function(arr, opt_begin, opt_end) {
    return Array.prototype.slice.apply(arr, [opt_begin, opt_end]);
};

var extend = function(arr, arr2) {
    return Array.prototype.push.apply(arr, arr2);
};
EDIT 1:
In response to the duplicate question: I don't think it is a duplicate, but that question definitely does address mine. So it is an optimization. But won't each one only be evaluated once? Is there really a significant improvement here for two function calls?
Also, if we are worried about performance, why are we calling proto.slice.call(arguments, 1) instead of constructing the two-element array [opt_begin, opt_end] by hand? Is slice faster?
Because the syntax is just so much cooler. Plus you can rationalize its use by telling yourself that it's more DRY: you didn't have to type Array.prototype twice.
I can't be sure what the original rationale behind that code was (only the author knows), but I can see a few differences:
proto is a closed-over local variable, while Array is a global. It's possible for a smart enough JavaScript engine to optimize access, because proto is never changed and thus could even be captured by value, not by reference. proto.slice can be faster than Array.prototype.slice because it needs one lookup fewer.
passing opt_begin and opt_end as undefined is not, in general, the same as not passing them. The called function can tell whether a parameter was passed and happens to be undefined, or whether it wasn't passed at all. Using proto.slice.call(arguments, 1) ensures that the parameters are forwarded to slice only if they were actually passed to the closure.
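A quick illustration of that second point (inspect is a hypothetical function, just to demonstrate the general behavior):
function inspect(a, b) {
    // arguments.length reveals how many arguments were actually passed,
    // even when their values are undefined
    console.log(arguments.length, b === undefined);
}

inspect(1);            // logs: 1 true  - b was not passed
inspect(1, undefined); // logs: 2 true  - b was passed, but is undefined

For Array.prototype.slice itself the distinction happens to be harmless (an undefined start or end falls back to the defaults), but forwarding arguments with proto.slice.call(arguments, 1) preserves the exact call shape regardless.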