Weird function syntax - javascript

I saw a weird function that looked something like:
const x = (a) => (b) => a + b;
console.log(x(1)(2))
The output is 3, I understand that it's a function returning a function and both a and b are in the same scope but the questions I have are:
How could this be used in real life?
What's the advantage of not using a function with 2 parameters and using this instead (for a one-line function)?

With this closure, you get back a function with a constant value baked in for later adding.
How could this be used in real life?
You could pass the returned function to map over an array.
What's the advantage of not using a function with 2 parameters and using this instead (for a one-line function)?
It's a cleaner, more functional approach.
const
  x = a => b => a + b,
  add5 = x(5);

console.log([1, 2, 3].map(add5));

Let's give that function a better name:
const add = (a) => (b) => a + b
Then later you can write
[1, 2, 3, 4] .map (add (5)) //=> [6, 7, 8, 9]
which is nicer to read than
[1, 2, 3, 4] .map ((n) => 5 + n) //=> [6, 7, 8, 9]
This is handy in a chain of .then() calls on Promises:
return fetchList (param)
.then (map (add (5)))
.then (filter (lessThan (8)))
.then (average)
(This of course requires curried functions add, lessThan, map, and filter, and some simple average function.)
Compare this to
return fetchList (param)
.then (xs => xs.map (x => add (5, x)))
.then (xs => xs.filter (x => lessThan (8, x)))
.then (average)
Note that the reason that average works the same in both versions of this is that it
takes a single parameter. One major point of currying is to turn a function into one that takes a single parameter. It makes a certain style of coding much easier to perform.
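For reference, here is one minimal sketch of what those curried helpers might look like (my own sketch, not part of the original answer; the names add, lessThan, map, filter, and average come from the chain above, and the Promise.resolve call stands in for fetchList):
// Hedged sketch of the curried helpers assumed by the .then() chain above
const add = (a) => (b) => a + b;
const lessThan = (a) => (b) => b < a;
const map = (fn) => (xs) => xs.map(fn);
const filter = (pred) => (xs) => xs.filter(pred);
const average = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

Promise.resolve([1, 2, 3, 4])   // stand-in for fetchList(param)
  .then(map(add(5)))            // [6, 7, 8, 9]
  .then(filter(lessThan(8)))    // [6, 7]
  .then(average)                // 6.5
  .then(console.log);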

Nina gave an excellent answer. I will provide another, a little more advanced example where such closures help a lot with the clarity of the code. Let's combine functions together into a prefix-checker as below and then re-use it as many times as we want:
//given a word, check if a string s starts with this word
const literal = word => s => s && s.startsWith(word);
//allow to combine 2 literals with OR
const either = (p1, p2) => s => p1(s) || p2(s);
//allow to combine N literals
const any = (...parsers) => parsers.reduce(either);
//create a parser
const check = any(literal('cat'),literal('dog'),literal('cow'));
console.log('cat: ' + check('cat'));
console.log('dog: ' + check('dog is smart'));
console.log('cow: ' + check('cow 123'));
console.log('banana: ' + check('banana'));
In reality, it is a simplified parser-combinator (nope, not yet monadic). Extending this approach, you can create parsers for your own programming language, and it would be maintainable and fast.

Related

Ramda: Confused about pipe

I'm learning functional programming in JS and I'm doing it with Ramda.
I'm trying to make a function that takes parameters and returns a list. Here is the code:
const list = R.unapply(R.identity);
list(1, 2, 3); // => [1, 2, 3]
Now I tried doing this using pipe:
const otherList = R.pipe(R.identity, R.unapply);
otherList(1,2,3);
// => function(){return t(Array.prototype.slice.call(arguments,0))}
Which returns a weird function.
This:
const otherList = R.pipe(R.identity, R.unapply);
otherList(R.identity)(1,2,3); // => [1, 2, 3]
works for some reason.
I know this might be a newbie question, but how would you construct f(g(x)) with pipe, if f is unapply and g is identity?
Read the R.unapply docs. It's a function that takes a function and returns a new function, which can take multiple parameters, collects them into a single array, and passes that array as the parameter to the wrapped function.
So in the 1st case, it converts R.identity to a function that can receive multiple parameters and return them as an array.
In the 2nd case, R.unapply gets the result of R.identity - a single value, not a function. If you pass R.identity as a parameter to the piped function instead, R.unapply gets a function and returns a function, which is similar to the 1st case.
To make R.unapply work with R.pipe, you need to pass R.pipe to R.unapply:
const fn = R.unapply(R.pipe(
  R.identity
))

const result = fn(1, 2, 3)

console.log(result)
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.min.js"></script>
It looks as though you really are thinking of pipe incorrectly.
When you use unapply(identity), you are passing the function identity to unapply.
But when you try pipe(identity, unapply), you get back a function that passes the results of calling identity to unapply.
That this works is mostly a coincidence: pipe(identity, unapply)(identity). Think of it as (...args) => unapply(identity(identity))(...args). Since identity(identity) is just identity, this turns into (...args) => unapply(identity)(...args), which can be simplified to unapply(identity). This only means something important because of the nature of identity.
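A small sketch of that expansion, assuming Ramda is loaded as R:
const f = R.pipe(R.identity, R.unapply);
// f(R.identity) behaves like (...args) => R.unapply(R.identity(R.identity))(...args),
// and R.identity(R.identity) is just R.identity, so it collapses to R.unapply(R.identity)
f(R.identity)(1, 2, 3);          //=> [1, 2, 3]
R.unapply(R.identity)(1, 2, 3);  //=> [1, 2, 3]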
You would use unapply to transform a function that would normally take its arguments as an array into a function that can take any number of positional arguments:
sum([1, 2, 3]); //=> 6
unapply(sum)(1, 2, 3) //=> 6
This allows you to, among many other things, map over any number of positional arguments:
unapply(map(inc))(1, 2) //=> [2, 3]
unapply(map(inc))(1, 2, 3) //=> [2, 3, 4]
unapply(map(inc))(1, 2, 3, 4) //=> [2, 3, 4, 5]
identity will always return its first argument. So unapply(identity)(1,2) is the same as identity([1,2]).
If your end goal was to create a function that returns a list of its arguments, I don't think you needed pipe in the first place. unapply(identity) was already doing that.
However, if what you need to do is to make sure that your pipe gets its parameters as a list, then you simply need to wrap pipe with unapply:
const sumplusplus = unapply(pipe(sum, inc, inc));
sumplusplus(1, 2, 3); //=> 8

Main difference between map and reduce

I used both methods but I am quite confused regarding the usage of both methods.
Is there anything that map can do but reduce cannot, and vice versa?
Note: I know how to use both methods; I'm asking about the main difference between these methods and when each should be used.
Both map and reduce have as input the array and a function you define. They are in some way complementary: map cannot return one single element for an array of multiple elements, while reduce will always return the accumulator you eventually changed.
map
Using map you iterate the elements, and for each element you return an element you want.
For example, if you have an array of numbers and want to get their squares, you can do this:
// A function which calculates the square
const square = x => x * x
// Use `map` to get the square of each number
console.log([1, 2, 3, 4, 5].map(square))
reduce
Using an array as an input, you can get one single element (let's say an Object, or a Number, or another Array) based on the callback function (the first argument) which gets the accumulator and current_element parameters:
const numbers = [1, 2, 3, 4, 5]

// Calculate the sum
console.log(numbers.reduce(function (acc, current) {
  return acc + current
}, 0)) // < Start with 0

// Calculate the product
console.log(numbers.reduce(function (acc, current) {
  return acc * current
}, 1)) // < Start with 1
Which one should you choose when you can do the same thing with both? Try to imagine how the code looks. For the example provided, you can compute the squares array like you mentioned, using reduce:
// Using reduce
[1, 2, 3, 4, 5].reduce(function (acc, current) {
  acc.push(current * current);
  return acc;
}, [])

// Using map
[1, 2, 3, 4, 5].map(x => x * x)
Now, looking at these, obviously the second implementation looks better and it's shorter. Usually you'd choose the cleaner solution, which in this case is map. Of course, you can do it with reduce, but in a nutshell, think which would be shorter and eventually that would be better.
Generally "map" means converting a series of inputs to an equal length series of outputs while "reduce" means converting a series of inputs into a smaller number of outputs.
What people mean by "map-reduce" is usually construed to mean "transform, possibly in parallel, combine serially".
When you "map", you're writing a function that transforms x with f(x) into some new value x1. When you "reduce" you're writing some function g(y) that takes array y and emits array y1.
They produce different results in terms of data structure.
The map() function returns a new array through passing a function over each element in the input array.
This is different to reduce() which takes an array and a function in the same way, but the function takes 2 inputs - an accumulator and a current value.
So reduce() could be used like map() if you always .concat the next output of a function onto the accumulator. However, it is more commonly used to reduce the dimensions of an array: either taking a one-dimensional array and returning a single value, or flattening a two-dimensional array, and so on.
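For instance, a minimal sketch of map-like behaviour built on reduce by concatenating each transformed element onto the accumulator:
// map-like behaviour via reduce: concat each transformed element onto the accumulator
const double = x => x * 2;
[1, 2, 3].reduce((acc, x) => acc.concat(double(x)), []); // [2, 4, 6]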
Let's take a look at these two, one by one.
Map
Map takes a callback and runs it against every element of the array, but what makes it unique is that it generates a new array based on your existing array.
var arr = [1, 2, 3];
var mapped = arr.map(function(elem) {
  return elem * 10;
})
console.log(mapped); // it generates a new array
Reduce
Reduce method of the array object is used to reduce the array to one single value.
var arr = [1, 2, 3];
var sum = arr.reduce(function(sum, elem) {
  return sum + elem;
})
console.log(sum) // reduce the array to one single value
I think this question is a very good question and I can't disagree with the answers but I have the feeling we are missing the point entirely.
Thinking of map and reduce more abstractly can provide us with a LOT of very good insights.
This answer is divided in 3 parts:
Defining and deciding between map and reduce (7 minutes)
Using reduce intentionally (8 minutes)
Bridging map and reduce with transducers (5 minutes)
map or reduce
Common traits
map and reduce are implemented in a meaningful and consistent way on a wide range of objects which are not necessarily collections.
They return a value useful to the surrounding algorithm, and they only care about this value.
Their major role is conveying intent regarding transformation or preservation of structure.
Structure
By "structure" I mean a set of conceptual properties which characterise abstract objects, such as an unordered list or a 2D matrix, and their concretion in data structures.
Note that there can be a disconnect between the two:
an unordered list can be stored as an array, which has the concept of ordering carried by indexed keys;
a 2D matrix can be stored as a TypedArray, which lacks the concept of dimension (or nesting).
map
map is a strict structure-preserving transformation.
It is useful to implement it on other kinds of objects to grasp its semantic value:
class A {
  constructor (value) {
    this.value = value
  }
  map (f) {
    return new A(f(this.value))
  }
}

new A(5).map(x => x * 2); // A { value: 10 }
Objects implementing map can have all kinds of behaviours, but they always return the same kind of object you started with while transforming the values with the supplied callback.
Array.map returns an array of the same length and the same ordering as the original.
On the callback arity
Because it preserves structure, map is viewed as a safe operation, but not every callback is equal.
With a unary callback: map(x => f(x)), each value of the array is totally indifferent to the presence of other values.
Using the other two parameters on the other hand introduces coupling, which may not be true to the original structure.
Imagine removing or reordering the second item in the arrays below: doing it before or after the map would not yield the same result.
Coupling with array size:
[6, 3, 12].map((x, _, a) => x/a.length);
// [2, 1, 4]
Coupling with ordering:
['foo', 'bar', 'baz'].map((x, i) => [i, x]);
// [[0, 'foo'], [1, 'bar'], [2, 'baz']]
Coupling with one specific value:
[1, 5, 3].map((x, _, a) => x/Math.max(...a));
//[ 0.2, 1, 0.6]
Coupling with neighbours:
const smooth = (x, i, a) => {
  const prev = a[i - 1] ?? x;
  const next = a[i + 1] ?? x;
  const average = (prev + x + next) / 3;
  return Math.round((x + average) / 2);
};

[1, 10, 50, 35, 40, 1].map(smooth);
// [ 3, 15, 41, 38, 33, 8 ]
I recommend making it explicit on the call site whether or not these parameters are used.
const transform = (x, i) => x * i;
❌ array.map(transform);
⭕ array.map((x, i) => transform(x, i));
This has other benefits when you use variadic functions with map.
❌ ["1", "2", "3"].map(parseInt);
// [1, NaN, NaN]
⭕ ["1", "2", "3"].map(x => parseInt(x));
// [1, 2, 3]
reduce
reduce sets a value free from its surrounding structure.
Again, let's implement it on a simpler object:
class A {
  constructor (value) {
    this.value = value
  }
  reduce (f, init) {
    return init !== undefined
      ? f(init, this.value)
      : this.value
  }
}
new A(5).reduce(); // 5
const concat = (a, b) => a.concat(b);
new A(5).reduce(concat, []); // [ 5 ]
Whether you leave the value alone or you put it back into something else, the output of reduce can be of any shape. It is literally the opposite of map.
Implications for arrays
Arrays can contain multiple or zero values, which gives rise to two, sometimes conflicting, requirements.
The need to combine
How can we return multiple values with no structure around them?
It is impossible. In order to return only one value, we have two options:
summarising the values into one value;
moving the values into a different structure.
Doesn't it make more sense now?
The need to initialise
What if there is no value to return?
If reduce returned a falsy value, there would be no way to know if the source array was empty or if it contained that falsy value, so unless we provide an initial value, reduce has to throw.
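A quick illustration of that rule:
[].reduce((a, b) => a + b);    // throws a TypeError: reduce of empty array with no initial value
[].reduce((a, b) => a + b, 0); //=> 0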
The true purpose of the reducer
You should be able to guess what the reducer f does in the following snippet:
[a].reduce(f);
[].reduce(f, a);
Nothing. It is not called.
It is the trivial case: a is the single value we want to return, so f is not needed.
This is by the way the reason why we didn't make the reducer mandatory in our class A earlier: because it contained only one value. It is mandatory on arrays because arrays can contain multiple values.
Since the reducer is only called when you have 2 or more values, saying that its sole purpose is to combine them is only a stone's throw away.
On transforming values
On arrays of variable lengths, expecting the reducer to transform the values is dangerous because, as we discovered, it may not be called.
I encourage you to map before you reduce when you need to both transform values and change shape.
It is a good idea to keep these two concerns separate for readability anyway.
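A small sketch of why (squareSum is a name of my own choosing):
// transform in map, combine in reduce: the transform runs even for a single element
const squareSum = xs => xs.map(x => x * x).reduce((a, b) => a + b, 0);
squareSum([3]); //=> 9

// doing the transform inside the reducer silently skips the lone element,
// because the reducer is never called on a one-element array with no initial value
[3].reduce((a, b) => a + b * b); //=> 3, not 9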
When not to use reduce
Because reduce is such a general-purpose tool for structure transformation, I advise you to avoid it when you want an array back and there exists another, more focussed method which does what you want.
Specifically, if you struggle with nested arrays in a map, think of flatMap or flat before reaching for reduce.
At the heart of reduce
a recursive binary operation
Implementing reduce on arrays introduces this feedback loop where the reducer's first argument is the return value of the previous iteration.
Needless to say it looks nothing like map's callback.
We could implement Array.reduce recursively like so:
const reduce = (f, acc, [current, ...rest]) =>
  rest.length == 0
    ? f(acc, current)
    : reduce(f, f(acc, current), rest)
This highlights the binary nature of the reducer f and how its return value becomes the new acc in the next iteration.
I let you convince yourself that the following is true:
reduce(f, a, [b, c, d])
// is equivalent to
f(f(f(a, b), c), d)
// or if you squint a little
((a ❋ b) ❋ c) ❋ d
This should seem familiar: you know arithmetic operations obey rules such as "associativity" or "commutativity". What I want to convey here is that the same kind of rules apply.
While reduce may strip out the surrounding structure, the values are still bound together in an algebraic structure for the duration of the transformation.
the algebra of reducers
Algebraic structures are way out of the scope of this answer, so I will only touch on how they are relevant.
((a ❋ b) ❋ c) ❋ d
Looking at the expression above, it is self-evident that there is a constraint which ties all the values together: ❋ must know how to combine them, the same way + must know how to combine 1 + 2 and, just as importantly, (1 + 2) + 3.
Weakest safe structure
One way to ensure this is to enforce that these values belong to a same set on which the reducer is an "internal" or "closed" binary operation, that is to say: combining any two values from this set with the reducer produces a value which belongs to the same set.
In abstract algebra this is called a magma. You can also look up semi-groups, which are more talked about and are the same thing with associativity (no parentheses required), although reduce doesn't care.
Less safe
Living in a magma is not absolutely necessary: we can imagine a situation where ❋ can combine a and b but not c and b.
An example of this is function composition. One of the following functions returns a string, which constrains the order in which you can combine them:
const a = x => x * 2;
const b = x => x ** 2;
const c = x => x + ' !';
// (a ∘ b) ∘ c
const abc = x => c(b(a(x)));
abc(5); // "100 !"
// (a ∘ c) ∘ b
const acb = x => b(c(a(x)));
acb(5); // NaN
Like many binary operations, function composition can be used as a reducer.
Knowing if we are in a situation where reordering or removing elements from an array could make reduce break is kind of valuable.
So, magmas: not absolutely necessary, but very important.
what about the initial value
Say we want to prevent an exception from being thrown when the array is empty, by introducing an initial value:
array.reduce(f, init)
// which is really the same as doing
[init, ...array].reduce(f)
// or
((init ❋ a) ❋ b) ❋ c...
We now have an additional value. No problem.
"No problem"!? We said the purpose of the reducer was to combine the array values, but init is not a true value: it was forcefully introduced by ourselves, it should not affect the result of reduce.
The question is:
What init should we pick so that f(init, a) or init ❋ a returns a?
We want an initial value which acts as though it was not there. We want a neutral element (or "identity").
You can look up unital magmas or monoids (the same with associativity) which are swear words for magmas equipped with a neutral element.
Some neutral elements
You already know a bunch of neutral elements
numbers.reduce((a, b) => a + b, 0)
numbers.reduce((a, b) => a * b, 1)
booleans.reduce((a, b) => a && b, true)
strings.reduce((a, b) => a.concat(b), "")
arrays.reduce((a, b) => a.concat(b), [])
vec2s.reduce(([u,v], [x,y]) => [u+x,v+y], [0,0])
mat2s.reduce(dot, [[1,0],[0,1]])
You can repeat this pattern for many kinds of abstractions. Note that the neutral element and the computation don't need to be this trivial (extreme example).
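As one more illustration (my own, not from the list above), function composition also fits the pattern, with the identity function as its neutral element:
// function composition as a reducer, with the identity function as the neutral element
const compose2 = (f, g) => x => f(g(x));
const id = x => x;
[x => x + 1, x => x * 3].reduce(compose2, id)(5); //=> 16, i.e. (5 * 3) + 1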
Neutral element hardships
We have to accept the fact that some reductions are only possible for non-empty arrays, and that adding a poor initialiser doesn't fix the problem.
Some examples of reductions gone wrong:
Only partially neutral
// works
numbers.reduce((a, b) => b - a, 0)

// does not work
numbers.reduce((a, b) => a - b, 0)

Subtracting 0 from b returns b, but subtracting b from 0 returns -b.
We say that only "right-identity" is true.
Not every non-commutative operation lacks a symmetrical neutral element, but non-commutativity is a good warning sign.
Out of range
const min = (a, b) => a < b ? a : b;
// Do you really want to return Infinity?
numbers.reduce(min, Infinity)
Infinity is the only initial value which does not change the output of reduce for non-empty arrays, but it is unlikely that we would want it to actually appear in our program.
The neutral element is not some Joker value we add as a convenience. It has to be an allowed value, otherwise it doesn't accomplish anything.
Nonsensical
The reductions below rely on position, but adding an initialiser naturally shifts the first element to the second place, which requires messing with the index in the reducer to maintain the behaviour.
const first = (a, b, i) => !i ? b : a;
things.reduce(first, null);

const camelCase = (a, b, i) => a + (
  !i ? b : b[0].toUpperCase() + b.slice(1)
);
words.reduce(camelCase, '');
It would have been a lot cleaner to embrace the fact the array can't be empty and simplify the definition of the reducers.
Moreover, the initial values are degenerate:
null is not the first element of an empty array.
an empty string is by no means a valid identifier.
There is no way to preserve the notion of "firstness" with an initial value.
conclusion
Algebraic structures can help us think of our programs in a more systematic way. Knowing which one we are dealing with can predict exactly what we can expect from reduce, so I can only advise you to look them up.
One step further
We have seen how map and reduce were so different structure-wise, but it is not as though they were two isolated things.
We can express map in terms of reduce, because it is always possible to rebuild the same structure we started with.
const map = f => (acc, x) => acc.concat(f(x));

const double = x => x * 2;

[1, 2, 3].reduce(map(double), []) // [2, 4, 6]
Pushing it a little further has led to neat tricks such as transducers.
I will not go into much detail about them, but I want you to notice a couple of things which will echo what we have said before.
Transducers
First let's see what problem we are trying to solve
[1, 2, 3, 4]
  .filter(x => x % 2 == 0)
  .map(x => x ** 2)
  .reduce((a, b) => a + b)
// 20
We are iterating 3 times and creating 2 intermediary data structures. This code is declarative, but not efficient. Transducers attempt to reconcile the two.
First a little util for composing functions using reduce, because we are not going to use method chaining:
const composition = (f, g) => x => f(g(x));
const identity = x => x;

const compose = (...functions) => functions.reduce(composition, identity);

// compose(a, b, c) is the same as x => a(b(c(x)))
Now pay attention to the implementation of map and filter below. We are passing in a reducer function instead of concatenating directly.
const map = f => reducer => (acc, x) => reducer(acc, f(x));

const filter = f => reducer => (acc, x) => f(x) ? reducer(acc, x) : acc;
Look at this part more specifically:
reducer => (acc, x) => [...]
After the callback function f is applied, we are left with a function which takes a reducer as input and returns a reducer.
These symmetrical functions are what we pass to compose:
const pipeline = compose(
  filter(x => x % 2 == 0),
  map(x => x ** 2)
);
Remember compose is implemented with reduce: our composition function defined earlier combines our symmetrical functions.
The output of this operation is a function of the same shape: something which expects a reducer and returns a reducer, which means
we have a magma. We can keep composing transformations as long as they have this shape.
we can consume this chain by applying the resulting function with a reducer, which will return a reducer that we can use with reduce
I let you expand the whole thing if you need convincing. If you do so you will notice that transformations will conveniently be applied left to right, which is the opposite direction of compose.
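Here is roughly what that expansion looks like (a sketch using isEven and square as stand-ins for the inline callbacks above):
// pipeline = compose(filter(isEven), map(square))
//          = reducer => filter(isEven)(map(square)(reducer))
//
// so pipeline(finalReducer) builds, from the inside out:
//   step1 = map(square)(finalReducer) = (acc, x) => finalReducer(acc, x ** 2)
//   step2 = filter(isEven)(step1)     = (acc, x) => x % 2 == 0 ? step1(acc, x) : acc
//
// reduce then runs step2, so each element is filtered first, then squared,
// then handed to the final reducer: the transformations apply left to right.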
Alright, let's use this weirdo:
const add = (a, b) => a + b;
const reducer = pipeline(add);
const identity = 0;
[1, 2, 3, 4].reduce(reducer, identity); // 20
We have composed operations as diverse as map, filter and reduce into a single reduce, iterating only once with no intermediary data-structure.
This is no small achievement! And it is not a scheme you can come up with by deciding between map and reduce merely on the basis of the conciseness of the syntax.
Also notice that we have full control over the initial value and the final reducer. We used 0 and add, but we could have used [] and concat (more realistically push performance-wise) or any other data-structure for which we can implement a concat-like operation.
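For instance, a sketch of the same pipeline collecting into an array instead of summing (collect is a name of my own choosing):
// same pipeline, but the final reducer collects values into an array instead of summing them
const collect = (acc, x) => { acc.push(x); return acc; };
[1, 2, 3, 4].reduce(pipeline(collect), []); //=> [4, 16]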
To understand the difference between map, filter and reduce, remember this:
All three methods are applied to arrays, so whenever you want to perform an operation on an array, you will likely be using one of these methods.
All three follow a functional approach, so the original array remains the same: it is not changed; instead, a new array or value is returned.
Map returns a new array with the same number of elements as the original array. Therefore, if the original array has 5 elements, the returned array will also have 5 elements. This method is used whenever we want to make some change to every individual element of an array. You can remember that every element of an array is being mapped to some new value in the output array, hence the name map.
For eg,
var originalArr = [1, 2, 3, 4];
// [1,2,3,4]
var squaredArr = originalArr.map(function(elem) {
  return Math.pow(elem, 2);
});
// [1,4,9,16]
Filter returns a new array with equal/less number of elements than the original array. It returns those elements in the array which have passed some condition. This method is used when we want to apply a filter on the original array therefore the name filter. For eg,
var originalArr = [1, 2, 3, 4];
// [1,2,3,4]
var evenArr = originalArr.filter(function(elem) {
  return elem % 2 == 0;
})
// [2,4]
Reduce returns a single value, unlike a map/filter. Therefore, whenever we want to run an operation on all elements of an array but want a single output using all elements, we use reduce. You can remember an array's output is reduced to a single value therefore the name reduce. For eg,
var originalArr = [1, 2, 3, 4];
// [1,2,3,4]
var sum = originalArr.reduce(function(total, elem) {
  return total + elem;
}, 0)
// 10
The map function executes a given function on each element but reduce executes a function which reduces the array to a single value. I'll give an example of both:
// map function
var arr = [1, 2, 3, 4];
var mappedArr = arr.map((element) => { // [10, 20, 30, 40]
  return element * 10;
})

// reduce function
var arr2 = [1, 2, 3, 4]
var sumOfArr2 = arr2.reduce((total, element) => { // 10
  return total + element;
})
It is true that reduce reduces an array to a single value, but since we can pass an object as initialValue, we can build upon it and end up with a more complex object than what we started with, such as this example where we group items by some criteria. Therefore the term 'reduce' can be slightly misleading as to the capabilities of reduce and thinking of it as necessarily reducing information can be wrong since it could also add information.
let a = [1, 2, 3, 4, 5, 6, 7, 8, 9]

let b = a.reduce((prev, curr) => {
  if (!prev["divisibleBy2"]) {
    prev["divisibleBy2"] = []
  }
  if (curr % 2 === 0) {
    prev["divisibleBy2"].push(curr)
  }
  if (!prev["divisibleBy3"]) {
    prev["divisibleBy3"] = []
  }
  if (curr % 3 === 0) {
    prev["divisibleBy3"].push(curr)
  }
  if (!prev["divisibleBy5"]) {
    prev["divisibleBy5"] = []
  }
  if (curr % 5 === 0) {
    prev["divisibleBy5"].push(curr)
  }
  return prev
}, {})

console.log(b)

Iterating through a function call

After years of writing loops in C++ the tedious way
for(int i=0; i<N; ++i) {
...
}
it becomes quite nice to use iterators
for(it i=v.begin(); i<v.end(); ++i) {
...
}
and ultimately moving to range iterators
for(auto i:v) {
...
}
In JavaScript also the for can be used, in a style nearly identical
(minus the type declaration and the pre/post increment operator) to
the first one above.
Still, in all of these the for is there. The D3.js
library demonstrates an alternative. One can iterate over an array by writing
d3.select("body")
.selectAll("p")
.data([4, 8, 15, 16, 23, 42])
.enter().append("p")
.text(function(d) { return "I’m number " + d + "!"; });
Here the call to enter() effectively takes the place of a for loop. The documentation
explains nicely the client-side view of joins. What I am missing is a
standalone example of the (functional programming?) style of
converting a function call to an iteration.
No doubt this is not unique to D3.js. This is just where I encountered the idiom.
Can you suggest a few lines of standalone JavaScript code that
demonstrate iteration through a function call?
There are at least a couple of built-in functions that come to my mind.
map()
This one is very obvious.
[1, 2, 3]
.map(someNumber => someNumber * someNumber)
.map((powered, index) => index + "::" + powered);
// --> [ "1::1", "2::4", "3::9" ]
Chains well, right? Takes some input and produces the result consisting of elements calculated by applying a function element-wise.
Recommendation: try to use with pure functions whenever possible (produce the same results for same inputs, don't mutate the original collection if possible, nor produce any side effects).
forEach()
This function iterates through all elements of an array too, and applies a function, without returning anything back. Therefore, it can only end a chain of calls, but cannot be used for further chaining.
[1, 2, 3, 4]
.forEach(number => console.info(number));
Recommendation: forEach() is useful when we want to write some code that will result in a side effect per entry in the collection being iterated.
filter()
The filter function uses a predicate to sift the wheat from the chaff. The predicate defines the criteria for the items you want to deal with in the next "stage".
[null, undefined, 0, 1, 2, 3, NaN, "", "You get the idea"]
.filter(Boolean)
.map(filteredElement => filteredElement + "!")
// --> [ "1!", "2!", "3!", "You get the idea!" ]
Recommendation: try to use with pure functions whenever possible. I.e. don't do anything else in filter other than things immediately related to filtration logic itself.
Object.keys() and Object.entries()
These two functions are helpful when we need to iterate over an object's keys or key-value pairs, rather than an array's elements.
const targetObject = { a: 1, b: 2, c: 3 };
Object
.keys(targetObject)
.map(key => key + "=" + targetObject[key])
// --> [ "a=1", "b=2", "c=3" ]
The same result can be achieved like this:
Object
  .entries({ a: 1, b: 2, c: 3 })
  .map(([key, value]) => key + "=" + value)
// --> [ "a=1", "b=2", "c=3" ]
Recommendation: you may want to use Object.hasOwnProperty(...) when working with Object.keys(...). See the documentation for details.
find()
This one is almost trivial. It lets us search for an item that matches a predicate. The search goes left to right and stops at the first match.
[1, 5, 10, 15]
  .find(number => number >= 7)
// --> 10
findIndex() function can be used when we're looking for a position of an element that matches a predicate.
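For example (a tiny illustration of my own):
[1, 5, 10, 15].findIndex(number => number >= 7); // --> 2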
some() and every()
These functions check whether
a) there is at least one element matching a predicate; or
b) each and every element is matching a predicate.
const arrayOfNumbers = [2, 4, 6, 8, 10];
arrayOfNumbers.every(number => number % 2 === 0); // --> true
arrayOfNumbers.every(number => number % 2 === 1); // --> false
arrayOfNumbers.some(number => number > 1); // --> true
arrayOfNumbers.some(number => number <= 1); // --> false
reduce() and reduceRight()
The last one to mention in this quick review is the function that takes a list of things and aggregates it into a single result.
[-1, 0, 1, 2, 3]
.filter(value => value >= 0) // [0, 1, 2, 3]
.map(value => value + 1) // [1, 2, 3, 4]
.reduce((subTotal, currentValue) => subTotal + currentValue, 5);
// --> 15
Recommendation: try to use with pure functions whenever possible.
Universally applicable note on performance. In my benchmarks (don't have them on hand), a hand-written for loop was always faster than forEach, map, and other iterating functions. I do still prefer the functions unless the performance is being severely affected. There are two main reasons for that: 1) it is easier to avoid off-by-one errors; 2) the code is more readable, since each single function defines an independent step in the data processing flow, thus making the code simpler and more maintainable.
I hope, this is an okay overview of some built-in chain-able JavaScript functions. More are described here. Take a look at concat(), sort(), fill(), join(), slice(), reverse() -- I frequently use them too.
If you need something like first() or last(), you will not find them in native functions. Either write your own ones, or use third-party libraries (e.g. lodash, rambda.js).
Here is an example implementation of Array.prototype.forEach:
function foreach(array, cb) {
  for (var i = 0; i < array.length; ++i)
    cb(array[i], i, array);
}

foreach([2, 8, 739, 9, 0], (n, i) =>
  console.log("number: %s\nindex: %s\n", n, i));
Surely I don't have to spoonfeed you, do I?
function array_iterator(array) {
  var i = 0;
  function next() {
    return array[i++];
  }
  function head() {
    return array[i];
  }
  function tail() {
    return array[array.length - 1];
  }
  function more() {
    return i < array.length;
  }
  function done() {
    return !more();
  }
  function reset() {
    i = 0;
  }
  return { next, head, tail, done, more, reset };
}

var nums = [3, 34, 4];
var iter = array_iterator(nums);
while (iter.more()) {
  console.log(iter.next());
}

Can't wrap my head around "lift" in Ramda.js

Looking at the source for Ramda.js, specifically at the "lift" function.
lift
liftN
Here's the given example:
var madd3 = R.lift(R.curry((a, b, c) => a + b + c));
madd3([1,2,3], [1,2,3], [1]); //=> [3, 4, 5, 4, 5, 6, 5, 6, 7]
So the first number of the result is easy: a, b, and c are all the first elements of each array. The second one isn't as easy for me to understand. Are the arguments the second value of each array (2, 2, undefined), or is it the second value of the first array and the first values of the second and third array?
Even disregarding the order of what's happening here, I don't really see the value. If I execute this without lifting it first I will end up with the arrays concatenated as strings. This appears to sort of be working like flatMap but I can't seem to follow the logic behind it.
Bergi's answer is great. But another way to think about this is to get a little more specific. Ramda really needs to include a non-list example in its documentation, as lists don't really capture this.
Let's take a simple function:
var add3 = (a, b, c) => a + b + c;
This operates on three numbers. But what if you had containers holding numbers? Perhaps we have Maybes. We can't simply add them together:
const Just = Maybe.Just, Nothing = Maybe.Nothing;
add3(Just(10), Just(15), Just(17)); //=> ERROR!
(Ok, this is Javascript, it will not actually throw an error here, it will just try to concatenate things it shouldn't... but it definitely doesn't do what you want!)
If we could lift that function up to the level of containers, it would make our life easier. What Bergi pointed out as lift3 is implemented in Ramda with liftN(3, fn), and a gloss, lift(fn) that simply uses the arity of the function supplied. So, we can do:
const madd3 = R.lift(add3);
madd3(Just(10), Just(15), Just(17)); //=> Just(42)
madd3(Just(10), Nothing(), Just(17)); //=> Nothing()
But this lifted function doesn't know anything specific about our containers, only that they implement ap. Ramda implements ap for lists in a way similar to applying the function to the tuples in the crossproduct of the lists, so we can also do this:
madd3([100, 200], [30, 40], [5, 6, 7]);
//=> [135, 136, 137, 145, 146, 147, 235, 236, 237, 245, 246, 247]
That is how I think about lift. It takes a function that works at the level of some values and lifts it up to a function that works at the level of containers of those values.
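To see the ordering of the combinations (which the question asks about), here is a small sketch of Ramda's ap on lists: every function from the first list is applied to every value of the second list, so the first argument varies slowest:
R.ap([x => 'a' + x, x => 'b' + x], [1, 2]); //=> ['a1', 'a2', 'b1', 'b2']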
Thanks to the answers from Scott Sauyet and Bergi, I wrapped my head around it. In doing so, I felt there were still some hoops to jump through to put all the pieces together. I will document some questions I had along the way; I hope it can be of help to some.
Here's the example of R.lift we try to understand:
var madd3 = R.lift((a, b, c) => a + b + c);
madd3([1,2,3], [1,2,3], [1]); //=> [3, 4, 5, 4, 5, 6, 5, 6, 7]
To me, there are three questions to be answered before understanding it.
Fantasy-land's Apply spec (I will refer to it as Apply) and what Apply#ap does
Ramda's R.ap implementation and what Array has to do with the Apply spec
What role does currying play in R.lift
Understanding the Apply spec
In fantasy-land, an object implements Apply spec when it has an ap method defined (that object also has to implement Functor spec by defining a map method).
The ap method has the following signature:
ap :: Apply f => f a ~> f (a -> b) -> f b
In fantasy-land's type signature notation:
=> declares type constraints, so f in the signature above refers to type Apply
~> declares a method, so ap should be a function declared on an Apply which wraps around a value we refer to as a (we will see in the example below that some fantasy-land implementations of ap are not consistent with this signature, but the idea is the same)
Let's say we have two objects v and u (v = f a; u = f (a -> b)), so the expression v.ap(u) is valid. Some things to notice here:
v and u both implement Apply. v holds a value, u holds a function, but they share the same Apply 'interface' (this will help in understanding the next section below, when it comes to R.ap and Array).
The value a and the function a -> b are ignorant of Apply; the function just transforms the value a. It's the Apply that puts the value and the function inside the container, and ap that extracts them, invokes the function on the value, and puts the result back in.
Understanding Ramda's R.ap
The signature of R.ap has two cases:
Apply f => f (a → b) → f a → f b: This is very similar to the signature of Apply#ap in last section, the difference is how ap is invoked (Apply#ap vs. R.ap) and the order of params.
[a → b] → [a] → [b]: This is the version if we replace Apply f with Array. Remember that the value and the function have to be wrapped in the same kind of container (see the previous section)? That's why, when using R.ap with Arrays, the first argument is a list of functions; even if you want to apply only one function, put it in an Array.
Let's look at one example. I'm using Maybe from ramda-fantasy, which implements Apply. One inconsistency here is that Maybe#ap's signature is: ap :: Apply f => f (a -> b) ~> f a -> f b. It seems some other fantasy-land implementations also follow this; however, it shouldn't affect our understanding:
const R = require('ramda');
const Maybe = require('ramda-fantasy').Maybe;
const a = Maybe.of(2);
const plus3 = Maybe.of(x => x + 3);
const b = plus3.ap(a); // invoke Apply#ap
const b2 = R.ap(plus3, a); // invoke R.ap
console.log(b); // Just { value: 5 }
console.log(b2); // Just { value: 5 }
Understanding the example of R.lift
In R.lift's example with arrays, a function with an arity of 3 is passed to R.lift: var madd3 = R.lift((a, b, c) => a + b + c). How does it work with the three arrays [1, 2, 3], [1, 2, 3], [1]? Also note that the function is not curried.
Actually, inside the source code of R.liftN (which R.lift delegates to), the function passed in is automatically curried; R.liftN then iterates through the values (in our case, three arrays), reducing them to a result: in each iteration it invokes ap with the curried function and one value (in our case, one array). It's hard to explain in words, so let's see the equivalent in code:
const R = require('ramda');
const madd3 = (x, y, z) => x + y + z;
// example from R.lift
const result = R.lift(madd3)([1, 2, 3], [1, 2, 3], [1]);
// this is equivalent of the calculation of 'result' above,
// R.liftN uses reduce, but the idea is the same
const result2 = R.ap(R.ap(R.ap([R.curry(madd3)], [1, 2, 3]), [1, 2, 3]), [1]);
console.log(result); // [ 3, 4, 5, 4, 5, 6, 5, 6, 7 ]
console.log(result2); // [ 3, 4, 5, 4, 5, 6, 5, 6, 7 ]
Once the expression of calculating result2 is understood, the example will become clear.
Here's another example, using R.lift on Apply:
const R = require('ramda');
const Maybe = require('ramda-fantasy').Maybe;
const madd3 = (x, y, z) => x + y + z;
const madd3Curried = Maybe.of(R.curry(madd3));
const a = Maybe.of(1);
const b = Maybe.of(2);
const c = Maybe.of(3);
const sumResult = madd3Curried.ap(a).ap(b).ap(c); // invoke #ap on Apply
const sumResult2 = R.ap(R.ap(R.ap(madd3Curried, a), b), c); // invoke R.ap
const sumResult3 = R.lift(madd3)(a, b, c); // invoke R.lift, madd3 is auto-curried
console.log(sumResult); // Just { value: 6 }
console.log(sumResult2); // Just { value: 6 }
console.log(sumResult3); // Just { value: 6 }
A better example, suggested by Scott Sauyet in the comments (he provides quite a few insights there; I suggest you read them), is easier to understand; at the very least it points the reader in the direction of R.lift computing the Cartesian product for Arrays.
var madd3 = R.lift((a, b, c) => a + b + c);
madd3([100, 200], [30, 40, 50], [6, 7]); //=> [136, 137, 146, 147, 156, 157, 236, 237, 246, 247, 256, 257]
Hope this helps.
lift/liftN "lifts" an ordinary function into an Applicative context.
// lift1 :: (a -> b) -> f a -> f b
// lift1 :: (a -> b) -> [a] -> [b]
function lift1(fn) {
  return function(a_x) {
    return R.ap([fn], a_x);
  }
}
Now the type of ap (f (a->b) -> f a -> f b) isn't easy to understand either, but the list example should be understandable.
The interesting thing here is that you pass in a list and get back a list, so you can repeatedly apply this as long as the function(s) in the first list have the correct type:
// lift2 :: (a -> b -> c) -> f a -> f b -> f c
// lift2 :: (a -> b -> c) -> [a] -> [b] -> [c]
function lift2(fn) {
  return function(a_x, a_y) {
    return R.ap(R.ap([fn], a_x), a_y);
  }
}
And lift3, which you implicitly used in your example, works the same - now with ap(ap(ap([fn], a_x), a_y), a_z).
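Following the same pattern, here is a sketch of lift3 applied to the question's example (note that the callback must be curried for the nested ap calls to work):
// lift3 :: (a -> b -> c -> d) -> [a] -> [b] -> [c] -> [d]
function lift3(fn) {
  return function(a_x, a_y, a_z) {
    return R.ap(R.ap(R.ap([fn], a_x), a_y), a_z);
  };
}
lift3(R.curry((a, b, c) => a + b + c))([1, 2, 3], [1, 2, 3], [1]);
//=> [3, 4, 5, 4, 5, 6, 5, 6, 7]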

Writing a parameterless function in Ramda in a point free style?

Consider the working code below:
var randN = x => () => Math.floor(x*Math.random());
var rand10 = randN(10)
times(rand10, 10) // => [6, 3, 7, 0, 9, 1, 7, 2, 6, 0]
randN is a function that takes a number and returns an RNG that, when called, will return a random int in the range [0, N-1]. So it's a factory for specific RNGs.
I've been using ramda.js, and learning functional programming theory, and my question is: Is it possible to rewrite randN in a point free style using ramda?
For example, I could write:
var badAttempt = pipe(multiply(Math.random()), Math.floor)
This would satisfy the "point-free style" requirement, but it fails to behave the same way as randN: calling badAttempt(10) simply returns a single random integer in the range [0, 9], rather than a function that generates such a number when called.
I have not been able to find a combination of ramda functions that enables me to do the rewrite in a point-free style. I can't tell if this is just a failure on my part, or something special about using random, which breaks referential transparency and therefore may be incompatible with a point free style.
update
my own slight variation on the solution, after discussing it with Denys:
randN = pipe(always, of, append(Math.random), useWith(pipe(multiply, Math.floor)), partial(__,[1,1]))
This can be helped with an extra function that wraps a given function so that it is re-evaluated each time the resulting thunk is called.
thunk = fn => R.curryN(fn.length, (...args) => () => fn(...args))
The only purpose of this function would be to cause some side effect within the given fn function.
Once we have thunk function, we can define randN like so:
randN = thunk(R.pipe(S.S(R.multiply, Math.random), Math.floor))
R.times(randN(10), 5) // e.g. [1, 6, 9, 4, 5]
Note: S.S here is the S combinator from Sanctuary which does effectively the same thing as R.converge(multiply, [Math.random, identity]).
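If you would rather avoid the Sanctuary dependency, here is a Ramda-only sketch of the same thing (randN2 is just a name to avoid clashing with the definition above):
// same idea without Sanctuary, using R.converge instead of S.S
const randN2 = thunk(R.pipe(
  R.converge(R.multiply, [Math.random, R.identity]),
  Math.floor
));
R.times(randN2(10), 5) // e.g. [2, 7, 0, 9, 4]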
I do however only recommend going with a point-free solution if it actually improves the readability of a function.
I don't know if it's a good idea to learn functional programming using a specific library, because the characteristics of a lib and the functional paradigm will mix inevitably. In practice, however, Ramda is incredibly useful. It bridges the gap between the imperative reality and the functional Fantasy Land in Javascript :D
Here's a manual approach:
// a few generic, reusable functions:
const comp = f => g => x => f(g(x)); // mathematical function composition
const comp2 = comp(comp)(comp); // composes binary functions
const flip = f => x => y => f(y)(x); // flips arguments
const mul = y => x => x * y; // first class operator function
// the actual point-free function:
const randN = comp2(Math.floor)(flip(comp(mul)(Math.random)));
let rand10 = randN(10); // RNG
for (let i = 0; i < 10; i++) console.log(rand10());
It's worth mentioning that randN is impure, since generating random numbers is impure by definition.
var randN = R.converge(R.partial, [R.wrap(R.pipe(R.converge(R.multiply, [Math.random, R.identity]), Math.floor), R.identity), R.of])
var rand10 = randN(10)
alert(R.times(rand10, 10)) // => [3, 1, 7, 5, 7, 5, 8, 4, 7, 2]
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.19.1/ramda.js"></script>
