Is mutating accumulator in reduce function considered bad practice? - javascript

I'm new to functional programming and I'm trying to rewrite some code to make it more functional-ish to grasp the concepts. Just now I've discovered the Array.reduce() function and used it to create an object of arrays of combinations (I used a for loop before that). However, I'm not sure about something. Look at this code:
const sortedCombinations = combinations.reduce(
    (accum, comb) => {
        if (accum[comb.strength]) {
            accum[comb.strength].push(comb);
        } else {
            accum[comb.strength] = [comb];
        }
        return accum;
    },
    {}
);
Obviously, this function mutates its argument accum, so it is not considered pure. On the other hand, if I understand it correctly, reduce discards the accumulator from every iteration and doesn't use it after calling the callback function. Still, it's not a pure function. I can rewrite it like this:
const sortedCombinations = combinations.reduce(
    (accum, comb) => {
        const tempAccum = Object.assign({}, accum);
        if (tempAccum[comb.strength]) {
            tempAccum[comb.strength].push(comb);
        } else {
            tempAccum[comb.strength] = [comb];
        }
        return tempAccum;
    },
    {}
);
Now, in my understanding, this function is considered pure. However, it creates a new object every iteration, which consumes some time, and, obviously, memory.
So the question is: which variant is better and why? Is purity really so important that I should sacrifice performance and memory to achieve it? Or maybe I'm missing something, and there is some better option?

TL;DR: It isn't, if you own the accumulator.
It's quite common in JavaScript to use the spread operator to create nice-looking one-liner reducing functions. Developers often claim that it also makes their functions pure in the process.
const foo = xs => xs.reduce((acc, x) => ({...acc, [x.a]: x}), {});
//------------------------------------------------------------^
// (initial acc value)
But let's think about it for a second... What could possibly go wrong if you mutated acc? e.g.,
const foo = xs => xs.reduce((acc, x) => {
    acc[x.a] = x;
    return acc;
}, {});
Absolutely nothing.
The initial value of acc is an empty object literal created on the fly. Using the spread operator is only a "cosmetic" choice at this point. Both functions are pure.
Immutability is a trait, not a process per se. Meaning that cloning data to achieve immutability is most likely both a naive and an inefficient approach to it. Most people forget that the spread operator only does a shallow clone anyway!
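A quick illustration of my own (not from the answer) of why the shallow clone caveat matters: nested objects are still shared between the "copy" and the original.
const original = { nested: { count: 1 } };
const copy = { ...original };       // shallow clone: only the top level is copied

copy.nested.count = 2;              // mutates the shared nested object
console.log(original.nested.count); // 2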
I wrote this article a little while ago where I claim that mutation and functional programming don't have to be mutually exclusive and I also show that using the spread operator isn't a trivial choice to make.

Creating a new object on every iteration is common practice, and sometimes recommended, despite any potential performance issues.
(EDIT:) I guess that is because, if you want to give only one general piece of advice, copying is less likely to cause problems than mutating. Performance only starts to become a "real" issue once you have more than, let's say, about 1000 iterations. (For more details see my update below.)
You can make your function pure, e.g. in this way:
const sortedCombinations = combinations.reduce(
    (accum, comb) => {
        return {
            ...accum,
            [comb.strength]: [
                ...(accum[comb.strength] || []),
                comb
            ]
        };
    },
    {}
);
Purity might become more important if your state and reducer are defined somewhere else:
const myReducer = (accum, comb) => {
    return {
        ...accum,
        [comb.strength]: [
            ...(accum[comb.strength] || []),
            comb
        ]
    };
};

const initialState = {};

const sortedCombinations = combinations.reduce( myReducer, initialState );
const otherSortedCombinations = otherCombinations.reduce( myReducer, initialState );
const otherThing = otherList.reduce( otherReducer, initialState );
Update (2021-08-22):
preface to this update
As stated in the comments (and also mentioned in the question), of course copying on every iteration is less performant.
And I admit that in many cases, technically, I can't see any disadvantages of mutating the accumulator (if you know what you are doing!).
Actually, thinking about it again, inspired by the comments and other answers, I have changed my mind a bit, and will consider mutating more often now, at least where I don't see any risk that, e.g., somebody else misunderstands my code later.
But then again, the question was explicitly about purity ... anyway, here are some more details:
purity
(Disclaimer: I must admit here that I know about React, but I don't know much about "the world of functional programming" and its arguments about the advantages, e.g. in Haskell.)
Using this "pure" approach is a tradeoff. You lose performance, and you gain code that is easier to understand and less coupled.
E.g. in React, with many nested Components, you can always rely on the consistent state of the current component.
You know it will not be changed anywhere outside, except if you have passed down some 'onChange' callback explicitly.
If you define an object, you know for sure it will always stay unchanged.
If you need a modified version, you make a new variable assignment; this way it is obvious that you are working with a new version of the data from here on down, and any code that might still use the old object will not be affected:
const myObject = { a1: 1, a2: 2, a3: 3 };        // <-- stays unchanged
// ... much other code ...
const myOtherObject = modifySomehow( myObject ); // <-- new version of the data
Pros, Cons, and Caveats
I can't give general advice on which way (copy or mutate) is "the better one". Mutating is more performant, but it can cause lots of hard-to-debug problems if you aren't absolutely sure what's happening, at least in somewhat complex scenarios.
1. problem with non-pure reducer
As already mentioned in my original answer, a non-pure function
might unintentionally change some outside state:
var initialValue = { a1: 1, a2: 2, a3: 3, a4: 4 };
var newKeys = [ 'n1', 'n2', 'n3' ];

var result = newKeys.reduce( (acc, key) => {
    acc[key] = 'new ' + key;
    return acc;
}, initialValue);

console.log( 'result:', result );             // We are interested in the 'result',
console.log( 'initialValue:', initialValue ); // but the initialValue has also changed.
Somebody might argue that you can copy the initial value beforehand:
var result = newKeys.reduce( (acc, key) => {
    acc[key] = 'new ' + key;
    return acc;
}, { ...initialValue }); // <-- copy beforehand
But this might be even less efficient in cases where, e.g., the object is very big and nested, the reducer is called often, and maybe there are multiple conditionally applied small modifications inside the reducer, each of which only changes very little (think of useReducer in React, or a Redux reducer).
2. shallow copies
Another answer stated correctly that even with the supposedly pure approach there might still be a reference to the original object. And this is indeed something to be aware of, but the problems arise only if you do not follow this 'immutable' approach consistently enough:
var initialValue = { a1: { value: '11'}, a2: { value: '22'} }; // <-- an object with nested 'non-primitive' values

var newObject = Object.keys(initialValue).reduce( (acc, key) => {
    return {
        ...acc,
        ['newkey_' + key]: initialValue[key], // <-- copies a reference to the original object
    };
}, {}); // <-- starting with an empty new object, expected to be 'pure'

newObject.newkey_a1.value = 'new ref value'; // <-- changes the value of the reference
console.log( initialValue.a1 );              // <-- initialValue has changed as well
This is not a problem if you take care that no references are copied (which might not be trivial sometimes):
var initialValue = { a1: { value: '11'}, a2: { value: '22'} };

var newObject = Object.keys(initialValue).reduce( (acc, key) => {
    return {
        ...acc,
        ['newkey_' + key]: { value: initialValue[key].value }, // <-- copies the value
    };
}, {});

newObject.newkey_a1.value = 'new ref value';
console.log( initialValue.a1 ); // <-- initialValue has not changed
3. performance
Performance is no problem with a few elements, but if the object has several thousand items, it does indeed become a significant issue:
// create a large object
var myObject = {}; for( var i = 0; i < 10000; i++ ){ myObject['key' + i] = i; }

// copying 10000 items takes seconds (the work grows quadratically!)
// (creates a new object 10000 times, with 1, 2, 3, ..., 10000 properties respectively)
console.time('copy');
var result = Object.keys(myObject).reduce( (acc, key) => {
    return {
        ...acc,
        [key]: myObject[key] * 2
    };
}, {});
console.timeEnd('copy');

// mutating 10000 items takes milliseconds (the work grows linearly)
console.time('mutate');
var result = Object.keys(myObject).reduce( (acc, key) => {
    acc[key] = myObject[key] * 2;
    return acc;
}, {});
console.timeEnd('mutate');

Related

Ramda selfComposeWhile

The Problem:
I'm learning functional programming
Just kidding, but also...
I have a helper function that composes a function with itself over and over again until some condition is met. Like f(f(f(f(f(f(f(x))))))) or compose(f,f,f,f,f,f,f)(x), except it keeps going unless told to stop.
The way I've implemented it, it doesn't really feel like composition (and perhaps that's the wrong word to use here regardless)
This is my current solution:
const selfComposeWhile = curry(
    (pred, fn, init) => {
        let prevVal = null;
        let nextVal = init;
        while (prevVal == null || pred(prevVal, nextVal)) {
            prevVal = nextVal;
            nextVal = fn(nextVal);
        }
        return nextVal;
    }
);
and here it is in use:
const incOrDec = ifElse(gt(30), inc, dec);

console.log(
    selfComposeWhile(lt, incOrDec, 0)
); // -> 29
I don't want to use recursion as JavaScript doesn't have proper tail recursion and the namesake of this site (Stack Overflow) is a real concern for how I use this.
There's nothing wrong with it as is, but I've been trying to learn functional programming techniques by applying them to a dummy problem and this is one of the few places my code stands out as decidedly imperative.
I also have
useWith(selfComposeWhile, [pipe(nthArg(1), always)]);
That takes a predicate that is only concerned with the nextVal, which seems like the more general case of this.
The Question:
Can anybody think of a more functional (sans recursion) way to write selfComposeWhile and its cousin?
R.unfold does mostly what you want: it accepts a seed value (init) and transforms it; on each iteration it returns the current value and the new seed value. On each iteration you need to decide whether to continue or stop, using a predicate.
The main difference between your function and R.unfold is that the latter produces an array, and this is easily solvable with R.last:
const { curry, pipe, unfold, last } = R

const selfComposeWhile = curry(
    (pred, fn, init) => pipe(
        unfold(n => pred(n, fn(n)) && [n, fn(n)]),
        last
    )(init)
)

const { ifElse, gt, inc, dec, lt } = R

const incOrDec = ifElse(gt(30), inc, dec)

console.log(selfComposeWhile(lt, incOrDec, 0)) // -> 29
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js" integrity="sha512-rZHvUXcc1zWKsxm7rJ8lVQuIr1oOmm7cShlvpV0gWf0RvbcJN6x96al/Rp2L2BI4a4ZkT2/YfVe/8YvB2UHzQw==" crossorigin="anonymous"></script>
The solution I've settled on for now.
I've taken selfComposeWhile and named it unfoldUntil. I'm not sure that's the best name as it doesn't return a list. It is basically R.until where the predicate can access the previous and the next value.
To bring them a bit more in alignment, I've changed my while behavior into until behavior (R.complement the predicate).
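For contrast, R.until's predicate only sees the current value; the one-liner below is adapted from the Ramda docs.
// keep doubling until the value exceeds 100
R.until(R.gt(R.__, 100), R.multiply(2))(1); // => 128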
unfoldUntil
If it were typed:
unfoldUntil: <T>(
pred: (p:T, n:T) => boolean,
fn: (a:T) => T,
init: T
) => T
Implemented
const unfoldUntil = curry(
    (pred, fn, init) => pipe(
        unfold(n =>
            isNil(n)
                ? false
                : call((next = fn(n)) =>
                    pred(n, next)
                        ? [next, null]
                        : [next, next]
                  )
        ),
        last
    )(init)
);
Notes: This will never pass null/undefined into the transformation function (fn). You can use a transformation that returns null as a stopping condition and be returned the previous value. Otherwise, you'll be returned the first value of next that causes the predicate to return true.
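A usage sketch of my own, assuming Ramda is loaded as in the earlier snippet and that the functions used by the implementation (curry, pipe, unfold, isNil, call, last) are in scope; it mirrors the selfComposeWhile example, with the predicate complemented to get "until" behaviour:
const { ifElse, gt, gte, inc, dec } = R;

const incOrDec = ifElse(gt(30), inc, dec);

// stop once the previous value is >= the next one
console.log(unfoldUntil(gte, incOrDec, 0)); // -> 29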

ES6 dispose of the object destructuring left-over (object trimming)

One nice use-case of the destructuring feature when used with rest parameters is that you can get trimmed clones.
var source = { w1: 'val1', w2: 'val2', unwanted1: 'val3', unwanted2: 'val4'};
var {unwanted1, unwanted2, ...target} = source;
console.log(target); // `{ w1: 'val1', w2: 'val2' }` Exactly what you want
However, the side effect is that your scope is now polluted with two variables that you never care to use: unwanted1 and unwanted2.
If _ meant don't care, you could do something like this
var {
    unwanted1: _, // throw away
    unwanted2: _, // throw away
    ...target
} = source;
However, in Javascript _ is a proper identifier.
If used once in that manner (unwanted: _), you'll end up with one unwanted variable called _, which goes against the goal.
If used more than once, like above, an error is issued:
SyntaxError: Identifier '_' has already been declared.
Is there any way I can throw away the undesired artifacts/variables of destructuring?
Of course, the following solutions are always available.
var target = {
    w1: source.w1,
    w2: source.w2,
}
and
var target = {...source};
delete target.unwanted1;
delete target.unwanted2;
However, doing this with destructuring still seems to be the cleanest way if you're cloning an object with many properties and you need to exclude just a couple.
Introducing _, __, ___, etc. to drop 1, 2, 3 or more properties doesn't make much difference, as it still creates the variables which you will never care to use, and moreover it threatens to add a flavor of spaghetti to your code.
However, since you need to indicate explicitly which properties you want to drop, one may consider other object trimming techniques, e.g.
filter unwanted properties
const obj = {prop1: 1, prop2: 2, prop3: 3, prop4: 4, prop5: 5},
      keysToDrop = ['prop2', 'prop3', 'prop4'],
      trimmedObj = Object.fromEntries(
          Object
              .entries(obj)
              .filter(([key, val]) => !keysToDrop.includes(key))
      );

console.log(trimmedObj)
make use of Array.prototype.reduce(), which may even give you a certain performance boost compared to destructuring:
const obj = {prop1: 1, prop2: 2, prop3: 3, prop4: 4, prop5: 5},
      keysToDrop = ['prop2', 'prop3', 'prop4'],
      trimmedObj = Object
          .keys(obj)
          .reduce((r, key) =>
              (!keysToDrop.includes(key) && (r[key] = obj[key]), r), {});

console.log(trimmedObj)
Use _, __, ___ ... or just a method to exclude them: :D haha
function prop(source, excluded) {
    if (source == null) return {};
    var target = {};
    var sourceKeys = Object.keys(source);
    var key, i;
    for (i = 0; i < sourceKeys.length; i++) {
        key = sourceKeys[i];
        if (excluded.indexOf(key) >= 0) continue;
        target[key] = source[key];
    }
    return target;
}

var source = {
    w1: "val1",
    w2: "val2",
    unwanted1: "val3",
    unwanted2: "val4"
};

var target = prop(source, ["unwanted1", "unwanted2"]);
Is there any way I can throw away the undesired artifacts/variables of destructuring?
The only way that's also not too terrible would be to define a function that does the same thing:
const clone = ({unwanted1, unwanted2, ...target}) => target;
const target = clone(source);
The variables are still created but their visibility is limited to the function which terminates immediately.
However, doing this with destructuring still seems to be the cleanest way if you're cloning an object with many properties and you need to exclude just a couple.
The disadvantage of the above approach is that the function is specific to a specific object. You cannot reuse it for other objects. Sure, it's rather small so maybe that's not a big deal. But having a more generic helper function might be easier to understand.
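For instance, a minimal sketch of such a generic helper (the name omit is my own choice; source is the object from the question):
const omit = (keys, obj) =>
    Object.fromEntries(
        Object.entries(obj).filter(([key]) => !keys.includes(key))
    );

const target = omit(['unwanted1', 'unwanted2'], source);
// { w1: 'val1', w2: 'val2' }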

Typescript, turn Array of functions into merged type of all returned values

So I have an array of functions (or actually an object of functions, but it doesn't matter) which return different objects, such as this:
const arr = [
    () => ({ a: "a" }),
    () => ({ b: "b" })
]
and now I want to get a type that contains all the merged values such as:
{
    a: string;
    b: string;
}
I've tried some reduce solutions, but all I've gotten is a type that looks like:
{ a: string } | { b: string }
which isn't what I'm looking for.
Any ideas?
Update 1
The array in the example is a simplification; the actual return values of the functions are unique and therefore need to be kept as-is, so I cannot use a generalized interface such as
interface ReturnValues {
    [key: string]: string;
}
Update 2
The problem is not of a JS kind but of TS and it's types. Ultimately I want to achieve this kind of functionality:
const result = arr.reduce((sum, fn) => Object.assign(sum, fn()), {})
and I want the type of result to be { a: string, b: string } so that I can call result.a and TypeScript will know that this is a string. If the result is { a: string } | { b: string }, then when I call result.a, TypeScript says it is of type any.
Also, for the ease of it, one can assume that there is no overlapping of the returning values of the functions.
You can use Array.reduce:
const arr = [
    () => ({ a: "a" }),
    () => ({ b: "b" })
]

const obj = arr.reduce((acc, cur) => ({ ...acc, ...cur() }), {});
console.log(obj);
Since TypeScript doesn't have proper variadic type support yet (See this issue), the only real way to achieve what you're looking for is this:
const a = [{ a: 1 }, { b: 2 }] as const;

function merge<TA, TB>(a: TA, b: TB): TA & TB;
function merge<TA, TB, TC>(a: TA, b: TB, c: TC): TA & TB & TC;
function merge<TA, TB, TC, TD>(a: TA, b: TB, c: TC, d: TD): TA & TB & TC & TD;
function merge(...list: Array<any>): any {
    return Object.assign({}, ...list);
}

const b = merge(...a);
There are 3 primary methods of "mixing" JavaScript objects.
The process you're looking to achieve is called a "mixin".
The older and more widely used method is to use what's called an extend function.
There are many ways to write an extend function, but they mostly look something like this:
const extend = (obj, mixin) => {
    Object.keys(mixin).forEach(key => obj[key] = mixin[key]);
    return obj;
};
here "obj" is your first object, and "mixin" is the object you want to mix into "obj", the function returns an object that is a mix of the two.
The concept here is quite simple. You loop over the keys of one object, and incrementally assign them to another, a little bit like copying a file on your hard drive.
There is a BIG DRAWBACK with this method though, and that is any properties on the destination object that have a matching name WILL get overwritten.
You can only mix two objects at a time, but you do get control over the loop at every step in case you need to do extra processing (See later on in my answer).
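A small example of my own to illustrate that overwrite behaviour:
// 'name' exists on both objects, so the mixin's value wins
const base = { id: 1, name: 'original' };
console.log(extend(base, { name: 'mixed in', active: true }));
// { id: 1, name: 'mixed in', active: true }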
Newer browsers make it somewhat easier with the Object.assign call:
Object.assign(obj1, mix1, mix2);
Here "obj1" is the final mixed object, and "mix1", "mix2" are your source objects, "obj1" will be a result of "mix1" & "mix2" being combined together.
The MDN article on Object.assign can be found here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign
Like the extend function above, Object.assign WILL overwrite properties in the destination object, but it does have the advantage of doing many at a time. My example above only shows 2 "mix" objects, but you can in theory have as many as you like, and that comes in really useful when you have them all in an array as you do.
With an array you can either map the objects into one function and then use the spread operator available in newer browsers, or you can use for..in to loop over the collection.
If you're using jQuery, you can use its each method, and underscore.js has dozens of ways of looping.
Since you're using TypeScript, you can also combine a lot of this with TypeScript's looping constructs too.
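For example (my own sketch, not from the answer), spreading an array of source objects straight into Object.assign combines them all in one call:
const sources = [{ a: 'a' }, { b: 'b' }, { c: 'c' }];
const combined = Object.assign({}, ...sources); // later entries win on key clashes
console.log(combined); // { a: 'a', b: 'b', c: 'c' }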
There is a 3rd way of merging objects. It's not widely used, but it is gaining traction: the "Flight-Mixin" approach that uses the Array prototype. It looks something like this:
const EnumerableFirstLast = (function () { // function based module pattern.
    const first = function () {
            return this[0];
        },
        last = function () {
            return this[this.length - 1];
        };
    return function () {      // function based Flight-Mixin mechanics ...
        this.first = first;   // ... referring to ...
        this.last = last;     // ... shared code.
    };
}());

EnumerableFirstLast.call(Array.prototype);
The idea here is that the two objects already have the functionality you require on them, so instead of "mixing" them, you're just providing a single interface that delegates to them behind the scenes.
Because you're adding to the Array prototype, you can now do things like the following:
const a = [1, 2, 3];
a.first(); // 1
a.last(); // 3
This might seem as if it's of no use, until you consider that what you've in effect just done is add two new functions to a datatype you cannot normally control. Applied to your own objects, this MIGHT allow you to add functions dynamically that simply grab the values you need to merge in a loop, without too much trouble. It would, however, require a bit of extra planning, which is why I'm adding this as more of an idea for further exploration rather than as part of the solution.
This method is better suited to objects that are largely function based rather than data based, as your objects seem to be.
Irrespective of which mixin method you use though, you will still need to iterate over your array collection with a loop, and you will still need to use spread to get all the keys and properties in one place.
If you consider something like
const myarr = [
    { name: "peter", surname: "shaw" },
    { name: "schagler", surname: "kahn" }
]
The way the spread operator works is to bust those array entries out into individual parts. So for example, IF we had the following function:
function showTwoNames(entryOne, entryTwo) {
    console.log(entryOne.name + " " + entryOne.surname);
    console.log(entryTwo.name + " " + entryTwo.surname);
}
You could call that function with the spread operator as follows:
showTwoNames(...myarr);
If your array had more than 2 entries in it, then the rest would be ignored in this case; the number of entries taken from the array matches the number of parameters the function declares.
You could if you wanted to do the following:
function showTwoNames(entryOne, entryTwo, ...theRest) {
    console.log(entryOne.name + " " + entryOne.surname);
    console.log(entryTwo.name + " " + entryTwo.surname);
    console.log("There are " + theRest.length + " extra entries in the array");
}
Please NOTE that I'm not checking for nulls and undefined or anything here; it should go without saying that you should ALWAYS error-check function parameters, especially in JavaScript/TypeScript code.
The spread operator can in its own right be used to combine objects; it can be simpler to understand than other methods like Object.assign because you simply use it as follows:
var destination = { ...source1, ...source2, ...source3 }; // for as many sources as needed
Like the other methods this will overwrite properties with the same name.
If you need to preserve all properties, even identically named ones, then you have no choice but to use something like an extend function; but instead of just merging directly using a for-each as my first example shows, you'll need to examine each "key" while also looking in the destination to see if that "key" already exists, and rename as required.
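A rough sketch of my own of what such a collision-aware extend could look like (the renaming scheme here is arbitrary):
const extendSafe = (obj, mixin) => {
    Object.keys(mixin).forEach(key => {
        // if the key already exists on the destination, rename instead of overwriting
        const target = key in obj ? key + '_mixin' : key;
        obj[target] = mixin[key];
    });
    return obj;
};

console.log(extendSafe({ name: 'a' }, { name: 'b', age: 3 }));
// { name: 'a', name_mixin: 'b', age: 3 }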
Update RE: the OP's updates
So, being the curious kind I am, I just tried your updated notes on one of my Linux servers. The TypeScript version is 3.8.3, Node is 12.14.1, and it all seems to work just as you expect it to.
I'm using all the latest versions, so it makes me wonder if your problem is maybe a bug in an old version of TS, or a feature that has only just been added in the newest build and is not present in the version you're using.
Maybe try an update and see what happens.
It seems that TypeScript doesn't have a native solution for this. But I found a workaround.
As mentioned in the question, using the reduce method one gets a TS type of { a: string } | { b: string } (and, to be clear, of course also a resulting object of { a: "a", b: "b" }).
However, to get from { a: string } | { b: string } to { a: string, b: string } I used the following snippet to merge the types:
type UnionToIntersection<U> = (U extends any
    ? (k: U) => void
    : never) extends (k: infer I) => void
    ? I
    : never;
So this would be my resulting code:
const arr = [
    () => ({ a: "a" }),
    () => ({ b: "b" })
]

const result = arr.reduce((sum, fn) => Object.assign(sum, fn()))
// Result is now { a: "a", b: "b" }
// but the TS type is '() => ({ a: string } | { b: string })'
type ResultUnion = ReturnType<typeof result>
// ResultUnion = { a: string } | { b: string }
type ResultIntersection = UnionToIntersection<ResultUnion>
// This is where the magic happens
// ResultIntersection = { a: string } & { b: string}
// It's not _exactly_ what I wanted, but it does the trick.
// Done

Mapping using higher-order functions with ramda.js

I have a pattern in my code that keeps recurring and seems like it should be pretty common, but I can't for the life of me figure out what it's called or whether there are common ways of handling it: mapping using a function that takes an argument that is itself the result of a function taking the mapped element as an argument.
Here's the pattern itself. I've named the function I want mapply (map-apply), but that seems like the wrong name:
const mapply = (outer, inner) => el => outer(inner(el))(el)
What is this actually called? How can I achieve it in idiomatic Ramda? It just seems like it has to be a thing in the world with smart people telling me how to handle it.
My use case is doing some basic quasi-Newtonian physics work, applying forces to objects. To calculate some forces, you need some information about the object—location, mass, velocity, etc. A (very) simplified example:
const g = Vector.create(0, 1),
      gravity = ({ mass }) => Vector.multiply(mass)(g),
      applyForce = force => body => {
          const { mass } = body,
                acceleration = Vector.divide(mass)(force)
          return R.merge(body, { acceleration })
      }
//...
const gravitated = R.map(mapply(applyForce, gravity))(bodies)
Can somebody tell me: What is this? How would you Ramda-fy it? What pitfalls, edge cases, difficulties should I watch out for? What are the smart ways to handle it?
(I've searched and searched—SO, Ramda's GitHub repo, some other functional programming resources. But perhaps my Google-fu just isn't where it needs to be. Apologies if I have overlooked something obvious. Thanks!)
This is a composition. It is specifically compose (or pipe, if you're into being backwards).
In math (consider, say, single variable calculus), you would have some statement like fx or f(x) signifying that there is some function, f, which transforms x, and the transformation shall be described elsewhere...
Then you get into craziness, when you see (g º f)(x). "G of F" (or many other descriptions).
(g º f)(x) == g(f(x))
Look familiar?
const compose = (g, f) => x => g(f(x));
Of course, you can extend this paradigm by using composed functions as operations inside of composed functions.
const tripleAddOneAndHalve = compose(halve, compose(add1, triple));
tripleAddOneAndHalve(3); // 5
For a variadic version of this, you can do one of two things, depending on whether you'd like to get deeper into function composition, or straighten out just a little bit.
// easier for most people to follow
const compose = (...fs) => x =>
    fs.reduceRight((x, f) => f(x), x);

// bakes many a noodle
const compose = (...fs) =>
    fs.reduceRight((f, g) => x => g(f(x)));
But now, if you take something like a curried, or partial map, for instance:
const curry = (f, ...initialArgs) => (...additionalArgs) => {
    const arity = f.length;
    const args = [...initialArgs, ...additionalArgs];
    return args.length >= arity ? f(...args) : curry(f, ...args);
};

const map = curry((transform, functor) =>
    functor.map(transform));

const reduce = curry((reducer, seed, reducible) =>
    reducible.reduce(reducer, seed));

const concat = (a, b) => a.concat(b);

const flatMap = curry((transform, arr) =>
    arr.map(transform).reduce(concat, []));
You can do some spiffy things:
const calculateCombinedAge = compose(
    reduce((total, age) => total + age, 0),
    map(employee => employee.age),
    flatMap(team => team.members));

const totalAge = calculateCombinedAge([{
    teamName: "A",
    members: [{ name: "Bob", age: 32 }, { name: "Sally", age: 20 }],
}, {
    teamName: "B",
    members: [{ name: "Doug", age: 35 }, { name: "Hannah", age: 41 }],
}]); // 128
Pretty powerful stuff. Of course, all of this is available in Ramda, too.
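As a rough sketch of my own (not part of the answer), the same pipeline could be expressed with Ramda's built-ins:
const calculateCombinedAgeR = R.pipe(
    R.chain(R.prop('members')), // flatten the teams into one list of members
    R.map(R.prop('age')),       // pick out the ages
    R.sum                       // add them up
);
// calculateCombinedAgeR(teams) gives 128 for the team data above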
const mapply0 = (outer, inner) => el => outer(inner(el))(el);

const mapply1 = (outer, inner) => R.converge(
    R.uncurryN(2, outer),
    [
        inner,
        R.identity,
    ],
);

const mapply2 = R.useWith(
    R.converge,
    [
        R.uncurryN(2),
        R.prepend(R.__, [R.identity]),
    ],
);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.24.1/ramda.min.js"></script>
I haven't tested this but it will probably work.
The first is your function.
The second uses converge to pass 'el' through the inner function and then the identity function, and passes both into an uncurried version of outer.
R.uncurryN(2, outer) works like this: outer(inner(el), el); this means that converge can supply the parameters.
The third might be going too far, but it's fun anyway: you are calling converge with the first parameter as an uncurried version of outer and the second as an array containing inner and the identity. useWith does this, which completely removes function definitions from the solution.
I'm not sure if this is what you were looking for but these are the 3 ways of writing it I found.
Paraphrased from the comments on the question:
mapply is, actually, chain:
R.chain(f, g)(x); //=> f(g(x), x)
Well, mostly. In this case, note that x must be an array.
My solution to the problem, then, is:
const gravitated = R.map(
    R.chain(applyForce, R.compose(R.of, gravity))
)(bodies)
The Ramda documentation for chain is not terribly helpful in this case: it reads simply, "chain maps a function over a list and concatenates the results." (ramdajs.com/docs/#chain)
The answer is lurking in the second example there, where two functions are passed to chain and partially applied. I could not see that until after reading these answers here.
(Thanks to ftor, bergi, and Scott Sauyet.)

Functional Javascript - Convert to dotted format in FP way (uses Ramda)

I am learning functional programming in Javascript and using Ramda. I have this object
var fieldvalues = {
    name: "hello there",
    mobile: "1234",
    meta: { status: "new" },
    comments: [
        { user: "john", comment: "hi" },
        { user: "ram", comment: "hello" }
    ]
};
to be converted like this:
{
comments.0.comment: "hi",
comments.0.user: "john",
comments.1.comment: "hello",
comments.1.user: "ram",
meta.status: "new",
mobile: "1234",
name: "hello there"
}
I have tried this Ramda source, which works.
var _toDotted = function(acc, obj) {
    var key = obj[0], val = obj[1];
    if(typeof(val) != "object") {    // Matching name, mobile etc
        acc[key] = val;
        return acc;
    }
    if(!Array.isArray(val)) {        // Matching meta
        for(var k in val)
            acc[key + "." + k] = val[k];
        return acc;
    }
    // Matching comments
    for(var idx in val) {
        for(var k2 in val[idx]) {
            acc[key + "." + idx + "." + k2] = val[idx][k2];
        }
    }
    return acc;
};

// var toDotted = R.pipe(R.toPairs, R.reduce(_toDotted, {}));
var toDotted = R.pipe(R.toPairs, R.curry( function(obj) {
    return R.reduce(_toDotted, {}, obj);
}));

console.log(toDotted(fieldvalues));
However, I am not sure if this is close to Functional programming methods. It just seems to be wrapped around some functional code.
Any ideas or pointers, where I can make this more functional way of writing this code.
The code snippet available here.
UPDATE 1
Updated the code to solve a problem, where the old data was getting tagged along.
Thanks
A functional approach would
use recursion to deal with arbitrarily shaped data
use multiple tiny functions as building blocks
use pattern matching on the data to choose the computation on a case-by-case basis
Whether you pass through a mutable object as an accumulator (for performance) or copy properties around (for purity) doesn't really matter, as long as the end result (on your public API) is immutable. Actually there's a nice third way that you already used: association lists (key-value pairs), which will simplify dealing with the object structure in Ramda.
const primitive = (keys, val) => [R.pair(keys.join("."), val)];
const array = (keys, arr) => R.addIndex(R.chain)((v, i) => dot(R.append(i, keys), v), arr);
const object = (keys, obj) => R.chain(([k, v]) => dot(R.append(k, keys), v), R.toPairs(obj));
const dot = (keys, val) =>
    (Object(val) !== val
        ? primitive
        : Array.isArray(val)
            ? array
            : object
    )(keys, val);

const toDotted = x => R.fromPairs(dot([], x))
Alternatively to concatenating the keys and passing them as arguments, you can also map R.prepend(key) over the result of each dot call.
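A sketch of my own of that alternative: here each dot call returns pairs whose keys are arrays, each level prepends its own key to its children's results, and the keys are joined only at the very end.
const dot2 = val =>
    Object(val) !== val
        ? [R.pair([], val)] // primitive: empty key path
        : R.chain(
              ([k, v]) =>
                  R.map(
                      R.over(R.lensIndex(0), R.prepend(String(k))), // prepend this level's key
                      dot2(v)
                  ),
              Array.isArray(val)
                  ? val.map((v, i) => R.pair(String(i), v))
                  : R.toPairs(val)
          );

const toDotted2 = x =>
    R.fromPairs(R.map(R.over(R.lensIndex(0), R.join('.')), dot2(x)));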
Your solution is hard-coded to have inherent knowledge of the data structure (the nested for loops). A better solution would know nothing about the input data and still give you the expected result.
Either way, this is a pretty weird problem, but I was particularly bored so I figured I'd give it a shot. I mostly find this a completely pointless exercise because I cannot picture a scenario where the expected output could ever be better than the input.
This isn't a Ramda solution because there's no reason for it to be. You should understand the solution as a simple recursive procedure. If you can understand it, converting it to a sugary Ramda solution is trivial.
// determine if input is object
const isObject = x => Object(x) === x

// flatten object
const oflatten = (data) => {
    let loop = (namespace, acc, data) => {
        if (Array.isArray(data))
            data.forEach((v, k) =>
                loop(namespace.concat([k]), acc, v))
        else if (isObject(data))
            Object.keys(data).forEach(k =>
                loop(namespace.concat([k]), acc, data[k]))
        else
            Object.assign(acc, {[namespace.join('.')]: data})
        return acc
    }
    return loop([], {}, data)
}

// example data
var fieldvalues = {
    name: "hello there",
    mobile: "1234",
    meta: {status: "new"},
    comments: [
        {user: "john", comment: "hi"},
        {user: "ram", comment: "hello"}
    ]
}

// show me the money ...
console.log(oflatten(fieldvalues))
Total function
oflatten is reasonably robust and will work on any input. Even when the input is an array, a primitive value, or undefined. You can be certain you will always get an object as output.
// array input example
console.log(oflatten(['a', 'b', 'c']))
// {
// "0": "a",
// "1": "b",
// "2": "c"
// }
// primitive value example
console.log(oflatten(5))
// {
// "": 5
// }
// undefined example
console.log(oflatten())
// {
// "": undefined
// }
How it works …
It takes an input of any kind, then …
It starts the loop with two state variables: namespace and acc. acc is your return value and is always initialized with an empty object {}. And namespace keeps track of the nesting keys and is always initialized with an empty array, [].
Notice I don't use a String for the namespace, because a root namespace of '' prepended to any key would always give you .somekey. That is not the case when you use a root namespace of [].
Using the same example, [].concat(['somekey']).join('.') will give you the proper key, 'somekey'.
Similarly, ['meta'].concat(['status']).join('.') will give you 'meta.status'. See? Using an array for the key computation will make this a lot easier.
The loop has a third parameter, data, the current value we are processing. The first loop iteration will always be the original input
We do a simple case analysis on data's type. This is necessary because JavaScript doesn't have pattern matching. Just because we're using an if/else doesn't mean it's not a functional paradigm.
If data is an Array, we want to iterate through the array, and recursively call loop on each of the child values. We pass along the value's key as namespace.concat([k]) which will become the new namespace for the nested call. Notice, that nothing gets assigned to acc at this point. We only want to assign to acc when we have reached a value and until then, we're just building up the namespace.
If the data is an Object, we iterate through it just like we did with an Array. There's a separate case analysis for this because the looping syntax for objects is slightly different from that for arrays. Otherwise, it's doing the exact same thing.
If the data is neither an Array nor an Object, we've reached a value. At this point we can assign the data value to acc using the built-up namespace as the key. Because we're done building the namespace for this key, all we have to do to compute the final key is namespace.join('.') and everything works out.
The resulting object will always have as many pairs as values that were found in the original object.
