I just saw a code snippet in MDN about destructuring rest parameters like so:
function f(...[a, b, c]) {
return a + b + c;
}
f(1) // NaN (b and c are undefined)
f(1, 2, 3) // 6
f(1, 2, 3, 4) // 6 (the fourth parameter is not destructured)
the code snippet is in this page: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/rest_parameters
Although the common use case for rest parameters is very clear to me (function foo(...params){/*code*/}), I could not think of a real-world use case for rest parameters the way they are presented in that code snippet. Instead, I think that in that case I should just use a common function definition:
function f(a, b, c) {
return a + b + c;
}
f(1) // NaN (b and c are undefined)
f(1, 2, 3) // 6
f(1, 2, 3, 4) // 6 (the fourth argument is ignored)
Your function f(a, b, c) { … } is indeed the proper way to write this. The only difference between that and the rest+destructuring syntax is that rest parameters do not count toward the number of declared parameters, i.e. f.length == 0.
There really is no good use case for putting an array destructuring pattern as the target of a rest parameter. Just because the syntax allows it doesn't mean that it's useful somewhere. The MDN example probably should've made that more clear.
The example illustrates that rest and destructuring syntaxes are flexible enough to be combined even in such a way.
At the time of writing, neither TypeScript nor stable Babel supported this syntax, presumably because it's of little practical use.
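That f.length difference can be observed directly; a quick sketch:

```javascript
// rest parameters (destructured or not) do not count toward fn.length
function f(...[a, b, c]) { return a + b + c; }
function g(a, b, c) { return a + b + c; }

console.log(f.length); // 0
console.log(g.length); // 3
```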
let's say that we have a function that fetches a customer object like so:
function getCustomer(id) {
  return fetch(`http://myapi.com/customer/${id}`)
    .then((response) => response.json())
    .then((data) => data.customer);
}
and let's say I have a response like that:
{
"customer": {
"id": 1234,
"name": "John Doe",
"latestBadges": [
"Platinum Customer",
"100 Buys",
"Reviewer"
]
}
}
In a more traditional approach I could write a function to show the latest 3 badges like so:
function showLatestBadges(a, b, c) {
console.log(a, b, c);
}
and to use that function, I would need to do:
getCustomer(1234).then((customer) => {
showLatestBadges(
customer.latestBadges[0],
customer.latestBadges[1],
customer.latestBadges[2]
);
});
With this new spread operator, I could do this instead:
getCustomer(1234).then((customer) => {
showLatestBadges(...customer.latestBadges);
});
So, using rest syntax with destructuring in the function definition may look a little useless. But in fact, it CAN be useful in one VERY specific situation:
Let's say we have a legacy system, and the call to the showLatestBadges function is made in hundreds of places without using the spread operator, just like in the old days. Let's also assume that we are using a linting tool that flags unused variables, and that we are running a build process that cares about the linting results: if the linter says something is not right, the build fails.
Let's ALSO ASSUME that for some weird business rule, we now have to show only the first and third badges.
Now, assuming this function call being made in hundreds of places in the legacy system, and we do not have much time available to deliver the implementation of this new business rule, we do not have time to refactor the code for ALL those hundreds of calls.
So, we will now change the function as so:
function showLatestBadges(a, b, c) {
console.log(a, c);
}
But now we have a problem: the build fails because of the unused b variable, and we have to deliver this change by YESTERDAY!!! We have no time to refactor all the hundreds of calls to this function, and we cannot just do a simple find-and-replace in all the spots, because the code is so messy, there are evals all over the place, and unpredictable behavior could happen.
So, one solution is: change the function signature using the spread operator, so the build succeeds, and create a task on the board to do the refactoring.
So, we can change the function as so:
function showLatestBadges(...[a,,c]) {
console.log(a, c);
}
Ok, I know this is a VERY specific situation and that it is very unlikely to happen, but, who knows? ¯\_(ツ)_/¯
Actually the ... syntax works two ways: it's called rest or spread depending on your use case. Both are very powerful, especially for functional approaches. You may always use the spread operator as,
var a = [1,2,3],
b = [4,5,6];
a.push(...b);
which would yield a to be [1,2,3,4,5,6] all at once. At this point one could say that .concat() could do the same. Yes, concat has built-in spread functionality, but a.concat(b) wouldn't affect a; it just creates and returns a new array. In fact, in proper functional languages, treating a as an immutable object is nice for the sake of purity. Yet JS is a weird language: it's believed to be functional, but at the same time it deeply embraces reference types. So, long story short: if you want to keep references to a intact while mutating it, you cannot use a.concat(b), but you can use a.push(...b). Here I have to mention that .push() is not perfectly designed, because it returns the new length, which is rarely useful; it could have returned a. So I end up using the comma operator, like (a.push(...b), a), most of the time.
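A small sketch of that difference:

```javascript
var a = [1, 2, 3],
    b = [4, 5, 6];

var c = a.concat(b); // new array; a is left untouched
console.log(a);      // [1, 2, 3]

var d = (a.push(...b), a); // comma operator: mutate a, then evaluate to a
console.log(d === a);      // true
console.log(a);            // [1, 2, 3, 4, 5, 6]
```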
OK, apart from simple use cases, you may stretch ... further for slightly more complicated but cool-looking implementations, such as a Haskell-esque pattern match to split the head and tail of an array and recurse accordingly.
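For instance, a recursive sum along those lines (my own sketch, not from the original answer):

```javascript
// split head (x) and tail (xs) in the parameter list, recurse on the tail
const sum = ([x, ...xs]) => (x === undefined ? 0 : x + sum(xs));

console.log(sum([1, 2, 3, 4])); // 10
```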
Here is a useful case of spread and rest operators working hand to hand to flatten an arbitrary nested array.
var flat = (x, ...xs) =>
  x !== undefined
    ? [...(Array.isArray(x) ? flat(...x) : [x]), ...flat(...xs)]
    : [];
// checking x !== undefined (rather than truthiness, as originally written)
// keeps falsy elements like 0 or "" from cutting the recursion short
var na = [[1,2],[3,[4,5]],[6,7,[[[8],9]]],10];
var fa = flat(na);
console.log(fa);
This is one of the use cases where I got to use this:
const tail = function([, ...xs]) {
return xs;
}
tail([1,2]); // [2]
const head = ([a]) => a
head([1,2,3,4]) // 1
I'm trying to understand Map objects in JavaScript and how to use them inside an application, but there's something I can't understand, and it led me to this question. Here's my example:
const myMap = new Map();
myMap.set('Name', 'John Doe')
.set(1, function sayHello(user){ console.log(`Hello ${user}`)})
myMap.get('Name'); // output John Doe
myMap.get(1); // output [function: sayHello]
as you can see above, I can set a function inside the Map.
How can I use that function?
What's the point of setting a function in a Map?
Are there any use cases?
I'm confused; I'd appreciate any explanation.
What you've stored in the map is a function object. To understand it better, take a look at the following snippet to observe the difference between sayHello and sayHello("World"). The former is the function object and the latter is an invocation.
const sayHello = (user) => console.log(`Hello ${user}`)
console.log(sayHello);
sayHello("World");
You'd observe that the .get returns you the function object. To see it in action, you need to invoke it with ().
myMap.get(1)("World");
Among other things, maps could help you organize function objects and have, arguably, more readable code. For comparison, check the following implementations.
function calculator(operation, a, b) {
if (operation === "add") {
return a + b;
} else if (operation === "subtract") {
return a - b;
} else if (operation === "multiply") {
return a * b;
}
}
console.log(calculator("add", 5, 10));
console.log(calculator("subtract", 5, 10));
console.log(calculator("multiply", 5, 10));
function calculator(operation, a, b) {
const operations = new Map([
["add", (a, b) => a + b],
["subtract", (a, b) => a - b],
["multiply", (a, b) => a * b],
]);
return operations.get(operation)(a, b);
}
console.log(calculator("add", 5, 10));
console.log(calculator("subtract", 5, 10));
console.log(calculator("multiply", 5, 10));
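One caveat worth adding to the Map-based version: Map.prototype.get returns undefined for missing keys, so an unknown operation would fail with a confusing "not a function" TypeError. A defensive sketch (the error message is my own choice):

```javascript
function calculator(operation, a, b) {
  const operations = new Map([
    ["add", (a, b) => a + b],
    ["subtract", (a, b) => a - b],
    ["multiply", (a, b) => a * b],
  ]);
  const fn = operations.get(operation);
  if (fn === undefined) {
    // fail loudly instead of "operations.get(...) is not a function"
    throw new Error(`Unknown operation: ${operation}`);
  }
  return fn(a, b);
}

console.log(calculator("add", 5, 10)); // 15
```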
1. `myMap.get(1)(userName)`
2. Several: functions are objects that define behaviour. You can pass them as parameters, as callbacks, as transformation filters, etc. Storing them in a Map (or just in a regular object) is simply a matter of getting faster access when looking them up by some key.
3. Lots of them. You can store not only functions in Maps but even whole classes if you want, although in most cases it would be more handy (and almost equally efficient) to just use a regular object.
The point is never to find use cases for a thing, but to have that thing in your toolbox so you can use it as soon as the necessity arises. In this case: when you have a big enough set of key-function pairs.
HINT: If you are curious about more use cases, search for functional programming material.
You need to invoke the function by passing the argument like:
myMap.get(1)("user");
If you want to use the function inside the Map (like the one set above), then call it like this: myMap.get(1)('name')
Map accepts any key type
If an object's key is not a string or symbol, JavaScript implicitly converts it to a string.
In contrast, a Map accepts keys of any type: strings, numbers, booleans, symbols, even objects. Moreover, a Map preserves the key type. That's the Map's main benefit.
There are specific use cases where Maps win the race over objects:
A Map can have keys of any data type: objects, integers, strings, booleans, functions or arrays. In an object, the key must always be a string or a symbol.
A Map is ordered and iterable, whereas an object is not ordered and not directly iterable.
Checking the number of entries in a Map (its size property) is easy compared to counting the keys of an object.
A Map inherits from Map.prototype, which offers all sorts of utility methods and properties that make working with Map objects a lot easier.
With an object, there's a chance of accidentally overwriting (or colliding with) properties inherited from the prototype (e.g. toString, constructor, etc.) when using JavaScript identifiers as key names; in that case, use a Map.
An object cannot be used as the key of another object, so you cannot attach extra information to an object that way; with a Map, the object itself can be the key and the value can hold the extra information.
and much more...
Remember: debugging with Maps can be more painful than with plain objects.
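A small sketch of that object-as-key point (the names here are made up for illustration):

```javascript
// attach metadata to an object without modifying the object itself
const user = { name: "John Doe" };

const metadata = new Map();
metadata.set(user, { lastLogin: "2021-01-01" });

console.log(metadata.get(user).lastLogin); // "2021-01-01"
// a plain object couldn't do this: its keys would be coerced to strings
```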
I have the same question as this one, but in the context of JavaScript.
From Wikipedia:
[a pure function's] return value is the same for the same arguments
It's further claimed there that a pure function is not allowed to have a variation in return value with "mutable reference arguments". In JavaScript, every normal object is passed as a "mutable reference argument". Consider the following example:
const f = (arr) => arr.length
const x = []
console.log( f(x) ) // 0
x.push(1);
console.log( f(x) ) // 1
Is the above proof that f is impure?
Or would you argue that we're not calling f with the "same" argument in the two cases?
I can see how it would make sense to call f impure in a language/environment where other threads could potentially mess with the mutable reference argument while f is executing. But since f is not async, there is no way for this to happen. x is going to stay the same from the moment f is called to when it's done executing. (If I'm understanding correctly, this interpretation seems to be supported by the definition of "same" put forth in § 4.1 of Verifiable Functional Purity in Java.)
Or am I missing something? Is there an example in JavaScript where a function containing no asynchronous code loses the property of referential transparency simply because it's taking a mutable reference, but it would be pure if we used e.g. an Immutable.js data structure instead?
When taking the Wikipedia definition to the letter, a function that takes as argument a reference to a mutable data structure (such as a native Array) is not pure:
Its return value is the same for the same arguments (no variation with local static variables, non-local variables, mutable reference arguments or input streams from I/O devices).
Equivalence
Although this clearly says "no variation with mutable reference arguments", we could maybe say this is open to interpretation and depends on the meaning of "same" and "variation". There are different definitions possible, and so we enter the area of opinion. Quoted from the paper you referred to:
There is not a single obviously right answer to these questions. Determinism is thus a parameterized property: given a definition of what it means for arguments to be equivalent, a method is deterministic if all calls with equivalent arguments return results that are indistinguishable from within the language
The functional purity proposed in the same paper, uses the following definition of equivalence:
Two sets of object references are considered equivalent if they result in identical object graphs
So with that definition, the following two arrays are considered equivalent:
let a = [1];
let b = [1];
But this concept can not really be applied to JavaScript without adding more restrictions. Nor to Java, which is the reason why the authors of the paper refer to a trimmed-down language, called Joe-E:
objects have identity: conceptually, they have an “address”, and we can compare whether two object references point to the same “address” using the == operator. This notion of object identity can expose nondeterminism.
Illustrated in JavaScript:
const compare = (array1, array2) => array1 === array2;
let arr = [1];
let a = compare(arr, arr);
let b = compare(arr, [1]);
console.log(a === b); // false
As the two calls return a different result, even though the arguments had the same shape and content, we should conclude (with this definition of equivalence) that the above function compare is not pure. While in Java you can influence the behaviour of the == operator (Joe-E forbids calling Object.hashCode), and so avoid this from happening, this is not generally possible in JavaScript when comparing objects.
Unintended side effects
Another issue is that JavaScript is not strongly typed, and so a function cannot be certain that the arguments it receives are what they are intended to be. For instance, the following function looks pure:
const add = (a, b) => a + b;
But it can be called in way to give side effects:
const add = (a, b) => a + b;
let i = 0;
let obj = { valueOf() { return i++ } };
let a = add(1, obj);
let b = add(1, obj);
console.log(a === b); // false
The same problem exists with the function in your question:
const f = (arr) => arr.length;
const x = { get length() { return Math.random() } };
let a = f(x);
let b = f(x);
console.log(a === b) // false
In both cases the function unintentionally called an impure function and returned a result that depended on it. While in the first example it is easy to still make the function pure with a typeof check, this is less trivial for your function. We can think of instanceof or Array.isArray, or even some smart deepCompare function, but still, callers can set a strange object's prototype, set its constructor property, replace primitive properties with getters, wrap the object in a proxy, ...etc, etc, and so fool even the smartest equality checkers.
Pragmatism
As in JavaScript there are just too many "loose ends", one has to be pragmatic in order to have a useful definition of "pure", as otherwise almost nothing can be labelled pure.
For example, in practice many will call a function like Array#slice pure, even though it suffers from the problems mentioned above (including ones related to the special this argument).
Conclusion
In JavaScript, when calling a function pure, you will often have to agree on a contract on how the function should be called. The arguments should be of a certain type, and not have (hidden) methods that could be called but that are impure.
One may argue that this goes against the idea behind "pure", which should only be determined by the function definition itself, not the way it eventually might get called.
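As an illustration (my sketch, not from the answer above): one such contract is agreeing to pass frozen plain arrays, which rules out the mutation shown in the question:

```javascript
"use strict";

// minimal sketch of a purity "contract": the caller passes frozen,
// plain arrays, so the observable mutation from the question disappears
const f = (arr) => arr.length;

const x = Object.freeze([1, 2, 3]);
console.log(f(x)); // 3

// x.push(4) would now throw a TypeError instead of changing f's result
```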
The scenario is as follows:
function:
const fun = (a="keep this", b="change this")=>{return a + b};
How can I keep the first default parameter and override the second one? I have several functions that use many default parameters that are called in different ways, so just moving the b param to the first argument will not work. For the sake of simplicity, a would hypothetically be overridden nearly as often as b.
I found answers regarding optional parameters but none showed me how to specify parameter while retaining defaults before said parameter.
I tried calling it in a way similar to Python, with no success:
fun(b="changed");
This can be done by passing exactly what the function sees whenever a parameter is not provided:
((a="keep this", b="change this")=>{return a + b})(undefined, "changed");
Simply passing undefined at a parameter's position will cause the default value to be used. It's not as simple and direct as Python, but it will work in any situation where you know the number of arguments to skip.
Here is an example that can be used on more than one parameter and could be useful in a situation where many parameters need to be skipped:
((a="keep", b="this", c="change this")=>{
return a+b+c
})(...(new Array(2).fill(undefined)), "changed");
It is sufficient to create a new sparse array, without filling it:
const fn = (a = "keep", b = "this", c = "change this") => a + b + c;
console.log(fn(...new Array(2), "changed"));
When you pass no argument to a function, you are effectively passing undefined, so you can just do that explicitly for the first argument.
const f = (a="a", b="b") => console.log(">", a, b);
f(undefined, "new b") // > a new b
another way is to use a plain object with destructuring defaults
const f = ({a="a", b="b"}) => ...
f({b: "b content"})
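One detail worth noting with that pattern: as written above, calling f() with no argument at all throws, because there is nothing to destructure. Defaulting the whole parameter object fixes that (a small sketch):

```javascript
// the `= {}` default makes the function callable with no arguments
const f = ({ a = "a", b = "b" } = {}) => `${a} ${b}`;

console.log(f());               // "a b"
console.log(f({ b: "new b" })); // "a new b"
```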
I stumbled upon this relatively simple arrow function:
var x = ([a, b] = [1, 2], {x: c} = {x: a + b}) => a + b + c;
console.log(x());
I know what it does in general. But why is it written in such a complicated way? I mean, the same thing can be done much more simply and has (IMO) better readability too:
var x = ([a, b] = [1, 2], c = a + b) => a + b + c;
console.log(x());
So could someone tell me the difference between these two notations, or show me a better use case for the first one?
The 2nd argument of your 2nd example is a simple ES6 default initialization, while the 2nd argument of your 1st example is a simple ES6 default initialization combined with destructuring.
But, I assume you already know that.
The other part of your question was: show me a better use case for the first one?
Destructuring is mainly useful when you want to access a key in a huge JavaScript object;
Something like this:
aHugeJavascriptObject = {
key1:'value1',
.
.
.
key999:'value999'
}
Now, one way to access the object's key key999 is aHugeJavascriptObject.key999; instead, you probably want to do
const { key999 } = aHugeJavascriptObject
I also assume that you already also know that.
But I'm afraid that's all there is to your question.
The first notation takes an object with a property x as the second argument. It is destructured and x is extracted as c. If it is not defined, a default object with a property x is used instead:
console.log(x([1, 2], {x: 5}));
Whereas the second one takes a simple primitive argument (probably a Number in this case):
console.log(x([1, 2], 5));
The only difference thus is the second argument that is fed into the function.
So given a function
function foo( a, b ) {
}
Now, if I wanted to swap the values of the arguments a and b, I could write this:
var t = a;
a = b;
b = t;
However, this is an ugly pattern - it requires three statements (three lines of code), plus a local variable. Now, I could bear three statements, but having to declare that annoying helper variable? I would like to avoid that.
So, what would be a better way to do this? (Better as in fewer lines of code (possibly one-liner), or not declaring a local variable.)
I came up with this:
(function ( t ) { a = b; b = t; })( a ); // swap a and b
Live demo: http://jsfiddle.net/J9T22/
So, what can you come up with?
Using a function for it? Seriously?
The easiest is often the best:
var t = a;
a = b;
b = t;
If you use it e.g. for server-side JS (i.e. you only need to support one JavaScript engine) you might also be able to use the destructuring assignment syntax:
[a, b] = [b, a]
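One caveat with this form (my addition, not from the original answer): a statement that starts with [ can trip over automatic semicolon insertion:

```javascript
let a = 1;
let b = 2;

// a line beginning with [ can be (mis)parsed as an index into the
// previous line's expression, so make sure the preceding statement
// is explicitly terminated with a semicolon
[a, b] = [b, a];

console.log(a, b); // 2 1
```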
This is a fun little exercise.
You could do this: a=[b][b=a,0]
var a='a';
var b='b';
a=[b][b=a,0];
alert(a + ', ' + b); // "b, a"
Also +1 from me, ignore the haters ;)
...Oh wait! Is this not a fun little exercise, but actually for real-world use? Then you'd better not do it this way, because it's less readable than var t=a;a=b;b=t!
a=[b][b=a,0]; // wth?
var t=a; a=b; b=t; // ahhh so readable!
But no, seriously, doing it this way actually gives you neat benefits over having to create another variable, because you can do it in line. Var declarations can't usually be part of normal statements, so attempting to do something like (var t=a; a=b; b=t) will just throw a SyntaxError, but (a=[b][b=a,0]) evaluates to a, which could be useful.
It's interesting to discuss things like this because, while doing things in unconventional ways may not be welcome in our production code, it is a great way to learn about the language. And that (I think) is what SO is all about. I rest my case.
In Mozilla's JavaScript 1.7 you could do [a, b] = [b, a].
If you're allowing for Mozilla only, or future ES6 stuff, you can use destructuring assignment:
[a,b] = [b,a]
If the biggest concern is variable environment pollution, you could borrow the arguments object.
arguments[arguments.length] = a;
a = b;
b = arguments[arguments.length];
But this gets a bit long.
Or you could assign an object to an existing parameter:
a = {a:a,b:b};
b = a.a;
a = a.b;
function foo( a, b ) {
a = {a:a,b:b};
b = a.a;
a = a.b;
console.log(a, b); // 'baz' 'bar'
}
foo('bar','baz');
Or eliminate a line like this:
a = {b:b,a:(b=a)};
a = a.b;
Or down to one line:
a = {b:b,a:(b=a)}.b;
Currently in "strict mode" supported implementations, you can do this (if you're actually running in "strict mode"):
a = b; b = arguments[0];
This is because changes to formal parameters has no effect on the arguments object, and vice versa.
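A quick sketch of that decoupling:

```javascript
function swap(a, b) {
  "use strict";
  // in strict mode, the arguments object is decoupled from the named
  // parameters: arguments[0] keeps the original value of a even after
  // the parameter a is reassigned
  a = b;
  b = arguments[0];
  return [a, b];
}

console.log(swap(1, 2)); // [2, 1]
```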
I've read that you should never do this but...
a = 5;
b = 7;
// note: these three statements cannot be chained into one line in JS
a ^= b;
b ^= a;
a ^= b;
They should now have each other's values.
[edit] As per PointedEars' description, this doesn't work as stated when chained into a single statement, but here's the description of how it DOES work... However, as already stated (by others), stick to what's simplest: there's no reason to do this, you will NOT notice any performance gains, and your code will simply become less readable.
http://en.wikipedia.org/wiki/XOR_swap_algorithm
and here it is in action...
http://jsfiddle.net/nHdwH/
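For completeness, the working three-statement version sketched out (integers only):

```javascript
let a = 5;
let b = 7;

a ^= b; // a = 5 ^ 7 = 2
b ^= a; // b = 7 ^ 2 = 5
a ^= b; // a = 2 ^ 5 = 7

console.log(a, b); // 7 5
```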
What's a better way to swap two argument values?
There is no "better" way, only a number of progressively more obscure and confusing ways.
Some people might view those other methods as "clever", and I guess some of them are, but I wouldn't want to work with anybody who thinks they're actually "better" because I would not want to see such methods cropping up in real project code.
The "clever" maths methods only work if you assume integer values, so in my opinion they're wrong since you didn't specify types.
If you find the three statements ugly you could do this:
var t;
t = a, a = b, b = t;
Yes it's virtually the same thing, but at least it puts the swap code all on the same line.
(Having said all that, I think [a, b] = [b, a]; would be "better" if not for the lack of browser support for it.)
If the values are integers, then you can use arithmetic to do the work:
function foo( a, b ) {
a = -(b = (a += b) - b) + a;
console.log(a);
console.log(b);
}
foo(1,2);
See http://www.greywyvern.com/?post=265
a=[a,b];
b=a[0];
a=a[1];
this uses only the variables a and b, but it creates an array,
which is heavier than a temporary variable.
Use the temporary variable.
var t=a;
a=b;
b=t;
Another upvote for sanity.
I slightly disagree with the other people: it can be useful. However, defining it inline is bad, bad, bad, bad, bad. If you are going to do it, it should be a higher-order function:
function flip(fn) {
return function(a, b) {
return fn.call(null, b, a);
};
}
For example:
function log() {
console.log.apply(console, arguments);
}
flip(log)(1, 2); // 2 1
Now this might seem silly, but this kind of stuff happens quite often when you're mapping/reducing/iterating etc. Say you have some function:
function doSomeStuff(index, value) {
// complex stuff happening here
}
And an array:
var arr = ["foo", "bar", "etc"];
If you were to use, for example, map on this array and needed to call doSomeStuff you'd have to hand roll this function:
arr.map(function(value, index) {
return doSomeStuff(index, value);
});
Whilst you could say this isn't bad, it is distracting from what you're trying to do (just call the damn function!). With this higher-order function it can be reduced to:
arr.map(flip(doSomeStuff));
If you wanted a more complete flip function you could:
function flip(fn) {
return function() {
var args = Array.prototype.slice.call(arguments);
return fn.apply(null, args.reverse());
};
}
And now:
flip(log)(1,2,3,4,5); // 5 4 3 2 1
I assume in the real world one would want this swap to be conditional:
[a, b] = c ? [a, b] : [b, a]
You could also replace all instances of a with [a,b][+c] and all b's with [b,a][+c] like:
arr.sort( (a,b) => [a,b][+c] - [b,a][+c] )
or just have a function to call the function:
swapMyFunction = (a,b) => myFunction(b,a)