I have a pattern in my code that keeps recurring and seems like it should be pretty common, but I can't for the life of me figure out what it's called or whether there are common ways of handling it: mapping using a function that takes an argument that is itself the result of a function taking the mapped element as an argument.
Here's the pattern itself. I've named the function I want mapply (map-apply), but that seems like the wrong name:
const mapply = (outer, inner) => el => outer(inner(el))(el)
What is this actually called? How can I achieve it in idiomatic Ramda? It just seems like it has to be a thing in the world with smart people telling me how to handle it.
My use case is doing some basic quasi-Newtonian physics work, applying forces to objects. To calculate some forces, you need some information about the object—location, mass, velocity, etc. A (very) simplified example:
const g = Vector.create(0, 1),
      gravity = ({ mass }) => Vector.multiply(mass)(g),
      applyForce = force => body => {
        const { mass } = body,
              acceleration = Vector.divide(mass)(force)
        return R.merge(body, { acceleration })
      }
//...
const gravitated = R.map(mapply(applyForce, gravity))(bodies)
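For concreteness, here's a dependency-free sketch of the same example, with a minimal stand-in for the Vector helper (its real API is assumed; the names here are just illustrative):

```javascript
// Minimal stand-in Vector helper (illustrative only)
const Vector = {
  create: (x, y) => ({ x, y }),
  multiply: k => v => ({ x: v.x * k, y: v.y * k }),
  divide: k => v => ({ x: v.x / k, y: v.y / k }),
};

const mapply = (outer, inner) => el => outer(inner(el))(el);

const g = Vector.create(0, 1);
const gravity = ({ mass }) => Vector.multiply(mass)(g);
const applyForce = force => body =>
  ({ ...body, acceleration: Vector.divide(body.mass)(force) });

const bodies = [{ mass: 2 }, { mass: 5 }];
const gravitated = bodies.map(mapply(applyForce, gravity));
// each acceleration comes out as g itself: (mass * g) / mass
```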
Can somebody tell me: What is this? How would you Ramda-fy it? What pitfalls, edge cases, difficulties should I watch out for? What are the smart ways to handle it?
(I've searched and searched—SO, Ramda's GitHub repo, some other functional programming resources. But perhaps my Google-fu just isn't where it needs to be. Apologies if I have overlooked something obvious. Thanks!)
This is a composition. It is specifically compose (or pipe, if you're into being backwards).
In math (consider, say, single-variable calculus), you would have some statement like f(x), signifying that there is some function, f, which transforms x, and that the transformation shall be described elsewhere...
Then you get into craziness when you see (g º f)(x), read "g of f" (among other descriptions).
(g º f)(x) == g(f(x))
Look familiar?
const compose = (g, f) => x => g(f(x));
Of course, you can extend this paradigm by using composed functions as operations inside of composed functions.
const tripleAddOneAndHalve = compose(halve, compose(add1, triple));
tripleAddOneAndHalve(3); // 5
For a variadic version of this, you can do one of two things, depending on whether you'd like to get deeper into function composition, or straighten out just a little bit.
// easier for most people to follow
// easier for most people to follow
const compose = (...fs) => x =>
  fs.reduceRight((x, f) => f(x), x);

// bakes many a noodle
const compose = (...fs) =>
  fs.reduceRight((f, g) => x => g(f(x)));
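A quick sanity check of the variadic version, reusing the earlier halve/add1/triple functions:

```javascript
const compose = (...fs) => x =>
  fs.reduceRight((acc, f) => f(acc), x);

const triple = n => n * 3;
const add1 = n => n + 1;
const halve = n => n / 2;

const tripleAddOneAndHalve = compose(halve, add1, triple);
tripleAddOneAndHalve(3); // → 5
```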
But now, take something like a curried (or partially applied) map, for instance:
const curry = (f, ...initialArgs) => (...additionalArgs) => {
  const arity = f.length;
  const args = [...initialArgs, ...additionalArgs];
  return args.length >= arity ? f(...args) : curry(f, ...args);
};

const map = curry((transform, functor) =>
  functor.map(transform));

const reduce = curry((reducer, seed, reducible) =>
  reducible.reduce(reducer, seed));

const concat = (a, b) => a.concat(b);

const flatMap = curry((transform, arr) =>
  arr.map(transform).reduce(concat, []));
You can do some spiffy things:
const calculateCombinedAge = compose(
  reduce((total, age) => total + age, 0),
  map(employee => employee.age),
  flatMap(team => team.members));

const totalAge = calculateCombinedAge([{
  teamName: "A",
  members: [{ name: "Bob", age: 32 }, { name: "Sally", age: 20 }],
}, {
  teamName: "B",
  members: [{ name: "Doug", age: 35 }, { name: "Hannah", age: 41 }],
}]); // 128
Pretty powerful stuff. Of course, all of this is available in Ramda, too.
const mapply0 = (outer, inner) => el => outer(inner(el))(el);

const mapply1 = (outer, inner) => R.converge(
  R.uncurryN(2, outer),
  [
    inner,
    R.identity,
  ],
);

const mapply2 = R.useWith(
  R.converge,
  [
    R.uncurryN(2),
    R.prepend(R.__, [R.identity]),
  ],
);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.24.1/ramda.min.js"></script>
I haven't tested this but it will probably work.
The first is your function.
The second uses converge to pass el through the inner function and through the identity function, and then pass both results into an uncurried version of outer.
R.uncurryN(2, outer) behaves like outer(inner(el), el), which means converge can supply both parameters at once.
The third might be a step too far, but it's fun anyway: you call converge with an uncurried version of outer as the first parameter and an array containing inner and the identity as the second. useWith does exactly this, which removes explicit function definitions from the solution entirely.
I'm not sure if this is what you were looking for but these are the 3 ways of writing it I found.
Paraphrased from the comments on the question:
mapply is, actually, chain:
R.chain(f, g)(x); //=> f(g(x), x)
Well, mostly. In this case, note that x must be an array.
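To see why, here is a minimal function-monad chain (a sketch only; Ramda's R.chain also dispatches to arrays and other FantasyLand types, which is why the solution below wraps gravity with R.of):

```javascript
// chain for plain functions: chain(f, g)(x) = f(g(x))(x)
const chainFn = (f, g) => x => f(g(x))(x);

// mapply from the question has exactly this shape:
const mapply = (outer, inner) => el => outer(inner(el))(el);

const outer = a => b => [a, b];
const inner = x => x + 1;

chainFn(outer, inner)(1); // → [2, 1]
mapply(outer, inner)(1);  // → [2, 1]
```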
My solution to the problem, then, is:
const gravitated = R.map(
R.chain(applyForce, R.compose(R.of, gravity))
)(bodies)
The Ramda documentation for chain is not terribly helpful in this case: it reads simply, "chain maps a function over a list and concatenates the results." (ramdajs.com/docs/#chain)
The answer is lurking in the second example there, where two functions are passed to chain and partially applied. I could not see that until after reading these answers here.
(Thanks to ftor, bergi, and Scott Sauyet.)
Related
I'm trying to practice function composition using Ramda, but I'm wondering if it's overkill and I need some advice on this.
Given the array below:
const articles = [
  {
    title: 'Everything Sucks',
    url: 'http://do.wn/sucks.html',
    author: {
      name: 'Debbie Downer',
      email: 'debbie@do.wn'
    }
  },
  {
    title: 'If You Please',
    url: 'http://www.geocities.com/milq',
    author: {
      name: 'Caspar Milquetoast',
      email: 'hello@me.com'
    }
  }
];
Make a boolean function that says whether a given person wrote any of the articles.
Sample function invocation
isAuthor('New Guy', articles) // should return false
isAuthor('Debbie Downer', articles) // should return true
My Solution
First I create a function to grab the author name as below
const get = _.curry(function(x, obj) { return obj[x]; });
const names = _.map(_.compose(get('name'), get('author'))); // ['Debbie Downer', 'Caspar Milquetoast']
Now that I have a function names ready to be used, I will try to construct the isAuthor function, and below are two of my attempts:
Attempt 1: without composition
const isAuthor = function(name, articles) {
  return _.contains(name, names(articles));
};
Attempt 2: with composition
const isAuthor = function(name, articles) {
  return _.compose(
    _.contains(name), names
  )(articles);
};
Both attempts work with correct results. This question is asked solely from a functional-programming perspective, as I'm totally new to this domain, and I wonder which attempt is preferable to the other. Hopefully this question will not get closed as opinion-based; I sincerely think it is not, as I'm seeking established practice in the FP world.
Also feel free to provide any alternatives than the two attempts I've made, thanks!
Ramda has a set of functions to work with nested properties. In your case we could combine pathEq and any:
pathEq returns true if a property at given path is equal to given value.
any applies a predicate function to each element of a list until it is satisfied.
The isAuthor function takes a name first then returns a function that takes a list:
const {compose, any, pathEq} = R;
const isAuthor = compose(any, pathEq(['author', 'name']));
isAuthor('Debbie Downer')(articles);
//=> true
isAuthor('John Doe')(articles);
//=> false
But this isn't necessarily better than:
const {curry} = R;
const isAuthor = curry((x, xs) => xs.some(y => y?.author?.name === x));
isAuthor('Debbie Downer')(articles);
//=> true
isAuthor('John Doe')(articles);
//=> false
Both of your solutions seem like overkill, at least in my opinion.
Without using any libraries, a solution could be as simple as this for example:
const isAuthor = (name, articles) => articles.some((article) => article.author.name === name)
Functional programming is cool, but it doesn't mean you should make things unnecessarily complex.
From your proposals the first one seems a lot more readable than the second one.
Think of Ramda as a tool that helps you write in a certain way, and not something that dictates how you write your code. Ramda (disclaimer: I'm one of the founders) allows you to write in a more declarative fashion, using nice compositions and pipelines. But that doesn't mean that you need to use it everywhere you could.
The simple vanilla JS answer from tuomokar is probably all you need, but Ramda does have tools that might make that seem more readable to some.
So we could write something like this:
const isAuthor = (name) => (articles) =>
includes (name) (map (path (['author', 'name'])) (articles))
const articles = [{title: 'Everything Sucks', url: 'http://do.wn/sucks.html', author: {name: 'Debbie Downer', email: 'debbie@do.wn'}}, {title: 'If You Please', url: 'http://www.geocities.com/milq', author: {name: 'Caspar Milquetoast', email: 'hello@me.com'}}]
console .log (isAuthor ('New Guy') (articles))
console .log (isAuthor ('Debbie Downer') (articles))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js"></script>
<script>const {includes, map, path} = R </script>
Note that your get is built in to Ramda as prop, and that your names can be written as map (path (['author', 'name'])).
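For comparison, path itself is easy to sketch without the library (a simplified stand-in; Ramda's real path handles more edge cases):

```javascript
// Simplified stand-in for Ramda's path: walk a list of keys, returning
// undefined as soon as anything along the way is null/undefined
const path = keys => obj =>
  keys.reduce((acc, k) => (acc == null ? undefined : acc[k]), obj);

const authorName = path(['author', 'name']);

authorName({ author: { name: 'Debbie Downer' } }); // → 'Debbie Downer'
authorName({ title: 'no author here' });           // → undefined
```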
We could make this point-free, on our own, or using a tool like http://pointfree.io/. It might look like this:
const isAuthor = compose ((o (__, map (path (['author', 'name'])))), contains)
But I find that far less readable than any of the other suggestions. I wouldn't bother.
I'm new to functional programming and I'm trying to rewrite some code to make it more functional-ish to grasp the concepts. Just now I've discovered the Array.prototype.reduce() function and used it to create an object of arrays of combinations (I used a for loop before that). However, I'm not sure about something. Look at this code:
const sortedCombinations = combinations.reduce(
  (accum, comb) => {
    if (accum[comb.strength]) {
      accum[comb.strength].push(comb);
    } else {
      accum[comb.strength] = [comb];
    }
    return accum;
  },
  {}
);
Obviously, this function mutates its argument accum, so it is not considered pure. On the other hand, the reduce function, if I understand it correctly, discards the accumulator from each iteration and doesn't use it after calling the callback function. Still, it's not a pure function. I can rewrite it like this:
const sortedCombinations = combinations.reduce(
  (accum, comb) => {
    const tempAccum = Object.assign({}, accum);
    if (tempAccum[comb.strength]) {
      tempAccum[comb.strength].push(comb);
    } else {
      tempAccum[comb.strength] = [comb];
    }
    return tempAccum;
  },
  {}
);
Now, in my understanding, this function is considered pure. However, it creates a new object every iteration, which consumes some time, and, obviously, memory.
So the question is: which variant is better and why? Is purity really so important that I should sacrifice performance and memory to achieve it? Or maybe I'm missing something, and there is some better option?
TL;DR: It isn't, if you own the accumulator.
It's quite common in JavaScript to use the spread operator to create nice looking one-liner reducing functions. Developers often claim that it also makes their functions pure in the process.
const foo = xs => xs.reduce((acc, x) => ({...acc, [x.a]: x}), {});
//------------------------------------------------------------^
// (initial acc value)
But let's think about it for a second... What could possibly go wrong if you mutated acc? e.g.,
const foo = xs => xs.reduce((acc, x) => {
acc[x.a] = x;
return acc;
}, {});
Absolutely nothing.
The initial value of acc is an empty literal object created on the fly. Using the spread operator is only a "cosmetic" choice at this point. Both functions are pure.
Immutability is a trait, not a process per se, meaning that cloning data to achieve immutability is most likely both a naive and an inefficient approach to it. Most people forget that the spread operator only does a shallow clone anyway!
I wrote this article a little while ago where I claim that mutation and functional programming don't have to be mutually exclusive and I also show that using the spread operator isn't a trivial choice to make.
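A short illustration of the shallow-clone point:

```javascript
const original = { nested: { n: 1 } };
const copy = { ...original }; // new top-level object...
copy.nested.n = 2;            // ...but `nested` is still shared

original.nested.n; // → 2 (the "copy" wrote through to the original)
```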
Creating a new object on every iteration is common practice, and sometimes recommended, despite any potential performance issues.
(EDIT:) I guess that is because, if you want to give only one piece of general advice, copying is less likely to cause problems than mutating. Performance starts to become a "real" issue once you have more than, let's say, about 1000 iterations. (For more details see my update below.)
You can make your function pure in e.g. in this way:
const sortedCombinations = combinations.reduce(
  (accum, comb) => {
    return {
      ...accum,
      [comb.strength]: [
        ...(accum[comb.strength] || []),
        comb
      ]
    };
  },
  {}
);
Purity might become more important if your state and reducer is defined somewhere else:
const myReducer = (accum, comb) => {
  return {
    ...accum,
    [comb.strength]: [
      ...(accum[comb.strength] || []),
      comb
    ]
  };
};

const initialState = {};

const sortedCombinations = combinations.reduce( myReducer, initialState );
const otherSortedCombinations = otherCombinations.reduce( myReducer, initialState );
const otherThing = otherList.reduce( otherReducer, initialState );
Update (2021-08-22):
preface to this update
As stated in the comments (and also mentioned in the question), copying on every iteration is of course less performant. And I admit that in many cases I technically can't see any disadvantage to mutating the accumulator (if you know what you are doing!). Actually, thinking about it again, inspired by the comments and other answers, I have changed my mind a bit and will consider mutating more often now, at least where I don't see any risk of somebody misunderstanding my code later. But then again, the question was explicitly about purity... anyway, here are some more details:
purity
(Disclaimer: I must admit here that I know about React, but I don't know much about "the world of functional programming" and its arguments about the advantages, e.g. in Haskell.)
Using this "pure" approach is a tradeoff: you lose performance, and you gain code that is easier to understand and less coupled.
E.g. in React, with many nested Components, you can always rely on the consistent state of the current component.
You know it will not be changed anywhere outside, except if you have passed down some 'onChange' callback explicitly.
If you define an object, you know for sure it will always stay unchanged. If you need a modified version, you make a new variable assignment; this way it is obvious that you are working with a new version of the data from there on, and any code that might use the old object will not be affected:
const myObject = { a1: 1, a2: 2, a3: 3 };        // <-- stays unchanged
// ... much other code ...
const myOtherObject = modifySomehow( myObject ); // <-- new version of the data
Pros, Cons, and Caveats
I can't give general advice on which way (copy or mutate) is "the better one". Mutating is more performant, but it can cause lots of hard-to-debug problems if you aren't absolutely sure what's happening, at least in somewhat complex scenarios.
1. problem with non-pure reducer
As already mentioned in my original answer, a non-pure function might unintentionally change some outside state:
var initialValue = { a1: 1, a2: 2, a3: 3, a4: 4 };
var newKeys = [ 'n1', 'n2', 'n3' ];

var result = newKeys.reduce( (acc, key) => {
  acc[key] = 'new ' + key;
  return acc;
}, initialValue);

console.log( 'result:', result );             // We are interested in the 'result',
console.log( 'initialValue:', initialValue ); // but the initialValue has also changed.
Somebody might argue that you can copy the initial value beforehand:
var result = newKeys.reduce( (acc, key) => {
  acc[key] = 'new ' + key;
  return acc;
}, { ...initialValue }); // <-- copy beforehand
But this might be even less efficient in cases where e.g. the object is very big and nested, the reducer is called often, and maybe there are multiple conditionally applied small modifications inside the reducer, each changing only a little (think of useReducer in React, or the Redux reducer).
2. shallow copies
Another answer stated correctly that even with the supposedly pure approach there might still be a reference to the original object. And this is indeed something to be aware of, but problems only arise if you do not follow the 'immutable' approach consistently enough:
var initialValue = { a1: { value: '11'}, a2: { value: '22'} }; // <-- an object with nested 'non-primitive' values

var newObject = Object.keys(initialValue).reduce( (acc, key) => {
  return {
    ...acc,
    ['newkey_' + key]: initialValue[key], // <-- copies a reference to the original object
  };
}, {}); // <-- starting with an empty new object, expected to be 'pure'

newObject.newkey_a1.value = 'new ref value'; // <-- changes the value via the reference
console.log( initialValue.a1 ); // <-- initialValue has changed as well
This is not a problem, if it is taken care that no references are copied (which might be not trivial sometimes):
var initialValue = { a1: { value: '11'}, a2: { value: '22'} };

var newObject = Object.keys(initialValue).reduce( (acc, key) => {
  return {
    ...acc,
    ['newkey_' + key]: { value: initialValue[key].value }, // <-- copies the value
  };
}, {});

newObject.newkey_a1.value = 'new ref value';
console.log( initialValue.a1 ); // <-- initialValue has not changed
3. performance
The performance is no problem with a few elements, but if the object has several thousand items, the performance becomes indeed a significant issue:
// create a large object
var myObject = {};
for( var i = 0; i < 10000; i++ ){ myObject['key' + i] = i; }

// copying 10000 items takes seconds (the total time grows quadratically!)
// (create a new object 10000 times, with 1, 2, 3, ..., 10000 properties each)
console.time('copy');
var result = Object.keys(myObject).reduce( (acc, key) => {
  return {
    ...acc,
    [key]: myObject[key] * 2
  };
}, {});
console.timeEnd('copy');

// mutating 10000 items takes milliseconds (the time grows linearly)
console.time('mutate');
var result = Object.keys(myObject).reduce( (acc, key) => {
  acc[key] = myObject[key] * 2;
  return acc;
}, {});
console.timeEnd('mutate');
My team is moving from Lodash to Ramda and entering the deeper parts of Functional Programming style. We've been experimenting more with compose, etc, and have run into this pattern:
const myFunc = state => obj => id => R.compose(
  R.isNil,
  getOtherStuff(obj),
  getStuff(state)(obj)
)(id)
(We can of course omit the => id and (id) parts. Added for clarity.)
In other words, we have lots of functions in our app (it's React+Redux for some context) where we need to compose functions that take similar arguments or where the last function needs to get all its arguments before passing on to the next function in the compose line. In the example I gave, that would be id then obj then state for getStuff.
If it weren't for the getOtherStuff function, we could R.curry the myFunc.
Is there an elegant solution to this that would be point-free? This seems a common enough pattern in FP.
Here's one rationale for not pushing point-free too far. I managed to make a point-free version of the above, but I can't really understand it, and I really doubt that most readers of my code would either. Here it is:
const myFunc2 = o (o (o (isNil)), o (liftN (2, o) (getOtherStuff), getStuff))
Note that o is just a (Ramda-curried) binary version of Ramda's usual variadic compose function.
I didn't really figure this out. I cheated. If you can read Haskell code and write some basic things with it, you can use the wonderful Pointfree.io site to convert pointed code into point-free.
I entered this Haskell version of your function:
\state -> \obj -> \id -> isNil (getOtherStuff obj (getStuff state obj id))
and got back this:
((isNil .) .) . liftM2 (.) getOtherStuff . getStuff
which, with a little stumbling, I was able to convert to the version above. I knew I'd have to use o rather than compose, but it took a little while to understand that I'd have to use liftN (2, o) rather than just lift (o). I still haven't tried to figure out why, but Haskell really wouldn't understand Ramda's magic currying, and I'm guessing it has to do with that.
This snippet shows it in action, with your functions stubbed out.
const isNil = (x) =>
  `isNil (${x})`

const getStuff = (state) => (obj) => (id) =>
  `getStuff (${state}) (${obj}) (${id})`

const getOtherStuff = (obj) => (x) =>
  `getOtherStuff (${obj}) (${x})`

const myFunc = state => obj => id => R.compose(
  isNil,
  getOtherStuff (obj),
  getStuff (state) (obj)
)(id)

const myFunc2 = o (o (o (isNil)), o (liftN (2, o) (getOtherStuff), getStuff))

console .log ('Original :   ', myFunc ('state') ('obj') ('id'))
console .log ('Point-free : ', myFunc2 ('state') ('obj') ('id'))
console .log ('Original : ', myFunc ('state') ('obj') ('id'))
console .log ('Point-free : ', myFunc2 ('state') ('obj') ('id'))
.as-console-wrapper {min-height: 100% !important; top: 0}
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script>
<script> const {o, liftN} = R </script>
Not worth it
While this is very interesting, I would never use that in production code. Reading it over now, I'm starting to get it. But I will have forgotten it in a month, and many readers would probably never understand.
Point-free can lead to some elegant code. But it's worth using only when it does so; when it obscures your intent, skip it.
I don't know why you can't curry though:
const myFunc = curry((state, obj) => R.compose(
  R.isNil,
  getOtherStuff(obj),
  getStuff(state)(obj)
));
or
const myFunc = curry((state, obj, id) => R.compose(
  R.isNil,
  getOtherStuff(obj),
  getStuff(state)(obj)
)(id));
I am not sure I see a point-free solution here (as it stands). There are some less intuitive combinators that may apply. The other thing I would consider is whether the getStuff and getOtherStuff functions have their signatures in the correct order. Maybe it'd be better if they were defined in this order: obj, state, id.
The problem is that obj is needed in two different functions. Perhaps restate getStuff to return a pair and getOtherStuff to take a pair:
const myFunc = R.compose(
  R.isNil,       // val2 -> boolean
  snd,           // (obj, val2) -> val2
  getOtherStuff, // (obj, val) -> (obj, val2)
  getStuff       // (obj, state, id) -> (obj, val)
);

myFunc(obj)(state)(id)
I have found it helpful to think of multiple parameter functions as functions that take a single parameter which happens to be a tuple of some sort.
getStuff = curry((obj, state, id) => {
  const val = null;
  return R.pair(obj, val);
});

getOtherStuff = curry((myPair) => {
  const obj = fst(myPair);
  const val2 = null;
  return R.pair(obj, val2);
});

fst = ([f, _]) => f
snd = ([_, s]) => s
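Filling the stubs in with dummy values, the pair-threading idea can be sketched end to end like this (getStuff and getOtherStuff here are illustrative stand-ins, not the asker's real functions):

```javascript
const fst = ([f, _]) => f;
const snd = ([_, s]) => s;
const isNil = x => x == null;

// stand-ins: getStuff produces [obj, val], getOtherStuff maps it to [obj, val2]
const getStuff = obj => state => id => [obj, `${state}:${id}`];
const getOtherStuff = pair => [fst(pair), null]; // pretend the second lookup misses

const myFunc = obj => state => id =>
  isNil(snd(getOtherStuff(getStuff(obj)(state)(id))));

myFunc('obj')('state')('id'); // → true
```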
=====
Update per the question on combinators. From http://www.angelfire.com/tx4/cus/combinator/birds.html there is the starling (S) combinator:
λa.λb.λc.(ac)(bc)
Written in a more ES6 way:
const S = a => b => c => a(c)(b(c))
Or: a function that takes three parameters a, b, and c. We apply a to c, leaving a new function; we apply b to c, leaving a value; and that value is immediately passed to the function that resulted from applying a to c.
In your example we could write it like
S(getOtherStuff)(getStuff)(obj)
But that might not work, now that I look at it, because getStuff isn't fully satisfied before being applied to getOtherStuff... You can start to piece together a solution to a puzzle, which is sometimes fun, but it's also not something you want in your production code. There is the book https://en.wikipedia.org/wiki/To_Mock_a_Mockingbird; people like it, though it is challenging for me.
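For the curious, here is the starling in fully curried form next to mapply from the earlier question, which has the same shape with the two arguments to outer supplied in the opposite order:

```javascript
// S combinator: S(a)(b)(c) = a(c)(b(c))
const S = a => b => c => a(c)(b(c));

// mapply(outer, inner)(el) = outer(inner(el))(el): inner's result goes first
const mapply = (outer, inner) => el => outer(inner(el))(el);

const pair = a => b => [a, b];
const inc = x => x + 1;

S(pair)(inc)(1);      // → [1, 2]
mapply(pair, inc)(1); // → [2, 1]
```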
My biggest advice: start thinking about all functions as unary.
I have the following data structure:
const cluster = {
  processes: [
    { color: 'test', x: 0, y: 0 },
    ...
  ],
};
And now I want to write a function with the following signature:
// getProcess :: (Cluster, number) -> Process
getProcess(cluster, 0);
// => { color: 'test', x: 0, y: 0 }
Well, I tried to use ramdajs for this:
const getProcess = R.compose(R.flip(R.nth), R.prop('processes'));
It works fine as getProcess(cluster)(0), but getProcess(cluster, 0) returns a function.
Is there a way to solve this problem with Ramda, or maybe a more correct implementation?
You can use R.uncurryN to achieve this, which just takes the number of arguments you want to uncurry along with the curried function.
const getProcess = R.uncurryN(2, R.compose(R.flip(R.nth), R.prop('processes')));
This works with all curried functions, whether produced by Ramda or explicitly like the following.
R.uncurryN(2, x => y => x + y)
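As a rough mental model (not Ramda's actual implementation), a fixed two-argument uncurry can be written by hand:

```javascript
// Collapse a curried two-argument function into a binary one
const uncurry2 = f => (a, b) => f(a)(b);

const curriedAdd = x => y => x + y;
const add = uncurry2(curriedAdd);

add(2, 3); // → 5
```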
An alternative way to write this succinctly is with R.useWith, though I tend to find the use of useWith less readable than the alternatives.
const getProcess = R.useWith(R.nth, [R.identity, R.prop('processes')])
getProcess(0, cluster)
Sometimes the more direct approach is preferable:
const getProcess = R.curry(
  (pos, entity) => R.path(['processes', pos], entity)
);
In Professor Frisby Introduces Composable Functional JavaScript the identity functor was introduced:
const Box = x =>
  ({
    map: f => Box(f(x)),
    fold: f => f(x) // for testing
  })
I spent the better part of the day understanding functors and why the above JavaScript code is actually the identity functor. So I thought I would alter it to get a "real" functor that is not the identity functor. I came up with this:
const Endo = x =>
  ({
    map: f => Endo(f(x).split('')),
    fold: f => f(x).split('') // for testing
  })
My reasoning is that with Box, Id_Box: Box -> Box and Id_Box f = f. Endo would also map to itself but Endo(f): Endo(x) -> Endo(y) (if f: x -> y).
Am I on the right track?
EDIT:
Replaced string with the more generic x as it was in the original examples.
As pointed out in this answer, for our purposes as programmers we can treat all functors as endofunctors so don't get too caught up on the differences.
As for what a functor is, in brief it is:
1. a data structure (Box in your example)
2. that can support a mapping operation (think Array.prototype.map)
3. and that mapping operation respects identity: xs === xs.map(x => x)
4. ...and composition: xs.map(f).map(g) === xs.map(g . f), where . is function composition.
That's it. No more, no less. Looking at your Box, it's a data structure that has a map function (check 1 & 2) and that map function looks like it should respect identity and composition (check 3 & 4). So it's a functor. But it doesn't do anything, which is why it's the identity functor. The fold function isn't strictly necessary, it just provides a way to 'unwrap' the box.
For a useful functor, let's look at JavaScript arrays. Arrays actually do something: namely, they contain multiple values rather than just a single one. If an array could only have one element, it'd be your Box. For our purposes we'll pretend that they can only hold values of the same type, to simplify things. So an array is a data structure, that has a map function, that respects identity and composition.
let plus = x => y => x + y;
let mult = x => y => x * y;
let plus2 = plus(2);
let times3 = mult(3);
let id = x => x;
let compose = (...fs) => arg => fs.reduceRight((x, f) => f(x), arg);
// Here we need to stringify the arrays as JS will compare on
// ref rather than value. I'm omitting it after the first for
// brevity, but know that it's necessary.
[1,2,3].map(plus2).toString() === [3,4,5].toString(); // true
[1,2,3].map(id) === [1,2,3]; // true
[1,2,3].map(plus2).map(times3) === [1,2,3].map(compose(times3, plus2)); // true
So when we map a function over a functor (array) we get back another instance of the same functor (a new Array) with the function applied to whatever the functor (array) was holding.
So now lets look at another ubiquitous JavaScript data structure, the object. There's no built in map function for objects. Can we make them a functor? Assume again that the object is homogenous (only has keys to one type of value, in this example Number):
let mapOverObj = obj => f => {
  return Object.entries(obj).reduce((newObj, [key, value]) => {
    newObj[key] = f(value);
    return newObj;
  }, {});
};
let foo = { 'bar': 2 };
let fooPrime = mapOverObj(foo)(plus2); // { 'bar': 4 }
And you can continue on to test that the function accurately (as far as is possible in JavaScript) supports identity and composition to satisfy the functor laws.
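Here is one way to spot-check both laws for mapOverObj (comparing via JSON.stringify, since JS objects compare by reference):

```javascript
const plus2 = x => x + 2;
const times3 = x => x * 3;
const id = x => x;
const compose2 = (g, f) => x => g(f(x));

const mapOverObj = obj => f =>
  Object.entries(obj).reduce((newObj, [key, value]) => {
    newObj[key] = f(value);
    return newObj;
  }, {});

const foo = { bar: 2, baz: 5 };

// identity law: mapping id changes nothing
JSON.stringify(mapOverObj(foo)(id)) === JSON.stringify(foo); // → true

// composition law: mapping twice equals mapping the composition once
JSON.stringify(mapOverObj(mapOverObj(foo)(plus2))(times3)) ===
  JSON.stringify(mapOverObj(foo)(compose2(times3, plus2))); // → true
```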