ES6: calling a function in a map loop - JavaScript

The following code works fine.
function getProducts(params) {
  return params.productQuantities
    .map(prod => ({
      purchaseOrderLine: null,
      haulerCostCode: getOrderLine(params, prod).haulCostCode,
      productCostCode: getOrderLine(params, prod).productCostCode,
      typeOfWork: getOrderLine(params, prod).productCostCode,
    }))
    .reduce((accumulator, currentValue) => {
      accumulator.push(currentValue);
      return accumulator;
    }, []);
}

function getOrderLine(params, ticketLine) {
  return params.orderDetail.order.orderLineItems
    .find(orderLine => orderLine.id == ticketLine.id);
}
My question is how do I avoid multiple calls to getOrderLine()?

Use a block body instead of a concise expression body for the arrow function, so the result of getOrderLine() can be stored once:
.map(prod => {
  const o = getOrderLine(params, prod);
  return {
    purchaseOrderLine: null,
    haulerCostCode: o.haulCostCode,
    productCostCode: o.productCostCode,
    typeOfWork: o.productCostCode,
  };
})

You could use function composition -
const comp = (f, g) =>
  x => f(g(x))

const getOrderLine = params => ticketLine =>
  params.orderDetail.order.orderLineItems
    .find(orderLine => orderLine.id == ticketLine.id)

const makeProduct = orderLine =>
  ( { purchaseOrderLine: null
    , haulerCostCode: orderLine.haulCostCode
    , productCostCode: orderLine.productCostCode
    , typeOfWork: orderLine.productCostCode
    }
  )

const getProducts = params =>
  params.productQuantities
    .map(comp(makeProduct, getOrderLine(params)))
I removed the reduce bit because it doesn't make any sense: map already creates a new array.
There are other serious problems here, though. These functions dig into object properties, sometimes three levels deep, which creates tight coupling in your code base. See the Law of Demeter.
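A minimal sketch of one way to loosen that coupling, assuming the caller can hand the order line items over directly (the reshaped parameters are hypothetical, reusing comp and makeProduct from above):
// Hypothetical refactor: the helpers only need the line items and the
// product quantities, so accept exactly those instead of digging through
// params.orderDetail.order.* internally.
const getOrderLine = orderLineItems => ticketLine =>
  orderLineItems.find(orderLine => orderLine.id == ticketLine.id)

const getProducts = (orderLineItems, productQuantities) =>
  productQuantities
    .map(comp(makeProduct, getOrderLine(orderLineItems)))

// The one call site that still knows the full shape digs it out once:
// getProducts(params.orderDetail.order.orderLineItems, params.productQuantities)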

How to simplify function which returns an object?

I have a function which returns an object, but I don't like that I have to declare it first and then use the forEach method:
export default (data) => {
  const keysWithDotsObject = {};
  Object.keys(data).forEach((keyWithDot) => {
    Object.keys(data[keyWithDot]).forEach((key) => {
      keysWithDotsObject[`${keyWithDot}.${key}`] = data[keyWithDot][key];
    });
  });
  return keysWithDotsObject;
};
I think there should be something like this
export default (data) => {
  const keysWithDotsObject = Object.keys(data).map((keyWithDot) => {
    Object.keys(data[keyWithDot]).map((key) => ({
      [`${keyWithDot}.${key}`]: data[keyWithDot][key],
    }));
  });
  return keysWithDotsObject;
};
But for some reason, it doesn't work.
PS: In this part --
[`${keyWithDot}.${key}`]
-- I'm trying to create a key with a name separated by a dot (I don't like that, but that's what the back-end wants me to do)
Input:
Query1 = {
  locus_ids: [25, 26],
  microorganism_ids: [12],
};
Output:
Query1.locus_ids: [25, 26],
Query1.microorganism_ids: [12]
I would also welcome any suggestions on how to write more readable code.
Did you consider using reduce?
export default (data) => Object.keys(data).reduce((acc, keyWithDot) => (
  // the comma operator runs the forEach for its side effect, then yields acc
  Object.keys(data[keyWithDot]).forEach((key) => {
    acc[`${keyWithDot}.${key}`] = data[keyWithDot][key];
  }),
  acc
), {});
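A quick check with the sample input from the question (the module and import name are hypothetical, since the function is a default export):
import flattenKeys from './flattenKeys'; // hypothetical name for the default export

flattenKeys({ Query1: { locus_ids: [25, 26], microorganism_ids: [12] } });
// => { 'Query1.locus_ids': [25, 26], 'Query1.microorganism_ids': [12] }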
You can also use Object.fromEntries; map and flatMap should do the job:
export default (data) =>
  Object.fromEntries(
    Object.keys(data).flatMap((keyWithDot) =>
      Object.keys(data[keyWithDot]).map((key) => [`${keyWithDot}.${key}`, data[keyWithDot][key]])
    )
  );
First, you build an array of [key, value] pairs for each subentry; flatMap then flattens those into a single array of entries, and with Object.fromEntries, you make a new object!
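For the same sample input, the entries produced by the flatMap step (before Object.fromEntries is applied) would look like this:
[
  ['Query1.locus_ids', [25, 26]],
  ['Query1.microorganism_ids', [12]],
]
// Object.fromEntries then turns these pairs into
// { 'Query1.locus_ids': [25, 26], 'Query1.microorganism_ids': [12] }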
What if the backend decides to add one more level of nesting? I would choose to go with a recursive function that accounts for that:
function flattenObject(data) {
  return Object.fromEntries(
    Object.entries(data).flatMap(([key, value]) => {
      if (Array.isArray(value) || typeof value !== 'object') {
        // The condition might need to be changed depending on the expected data types
        return [[key, value]];
      }
      return Object.entries(flattenObject(value))
        .map(([suffix, nestedValue]) => [`${key}.${suffix}`, nestedValue]);
    })
  );
}
This works even for inputs such as:
{
  query1: {
    nested: {
      test: true
    }
  },
  query2: [1, 2, 3]
}
The above example results in:
{
  "query1.nested.test": true,
  "query2": [1, 2, 3]
}

Do I need to use PrevState even if I spread the state into a variable?

I am testing some code to try and understand the race condition regarding the use of setState().
My code can be found here.
My code is below:
import React from "react";

export default class App extends React.Component {
  state = {
    id: "",
    ids: [{ id: 7 }, { id: 14 }]
  };
  // here is where I create the id numbers
  uniqueIdCreatorHandler = incrementAmount => {
    let ids = [...this.state.ids];
    let highestId = 0;
    if (ids.length > 0) {
      highestId = ids
        .map(value => {
          return value.id;
        })
        .reduce((a, b) => {
          return Math.max(a, b);
        });
    }
    let newId = highestId + incrementAmount;
    ids.push({ id: newId });
    this.setState({ ids: ids });
  };
  idDeleterHanlder = currentIndex => {
    let ids = this.state.ids;
    ids.splice(currentIndex, 1);
    this.setState({ ids: ids });
  };
  // below is when I test performing the function twice, in order to figure if the result would be a race condition
  double = (firstIncrementAmount, secondIncrementAmount) => {
    this.uniqueIdCreatorHandler(firstIncrementAmount);
    this.uniqueIdCreatorHandler(secondIncrementAmount);
  };
  render() {
    let ids = this.state.ids.map((id, index) => {
      return (
        <p onClick={() => this.idDeleterHanlder(index)} key={id.id}>
          id:{id.id}
        </p>
      );
    });
    return (
      <div className="App">
        <button onClick={() => this.uniqueIdCreatorHandler(1)}>
          Push new id
        </button>
        <button onClick={() => this.double(1, 2)}>Add some Ids</button>
        <p>all ids below:</p>
        {ids}
      </div>
    );
  }
}
When invoking the double function via the second button, only the secondIncrementAmount takes effect. You can test it by changing the argument values in the call made in the onClick handler.
I think that I should somehow use prevState in this.setState in order to fix this.
How could I avoid this issue here? This matter started at Code Review, but I did not work out how to fix it.
There is also a recommendation to spread the mapped ids into Math.max, and I could not figure out how and why to do it. Isn't creating the new array by mapping the spread key values safe enough?
.splice and .push mutate the array, so the current state no longer matches the currently rendered version. Instead, use .slice (or .filter) and [...old, new] for immutable state updates:
deleteId = index => {
  this.setState(({ ids }) => ({ ids: ids.filter((id, i) => i !== index) }));
};

uniqueIdCreatorHandler = increment => {
  const highest = Math.max(0, ...this.state.ids.map(it => it.id));
  this.setState(({ ids }) => ({ ids: [...ids, { id: highest + increment }] }));
};
setState can be asynchronous, batching up multiple changes and then applying them all at once. So when you spread the state you might be spreading an old version of the state and throwing out a change that should have happened.
The function version of setState avoids this. React guarantees that you will be passed the most recent state, even if there's some other state update that you didn't know about. And then you can produce the new state based on that.
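A minimal sketch of that difference, using a hypothetical counter rather than the ids from the question:
import React from "react";

class Counter extends React.Component {
  state = { count: 0 };

  incrementTwiceBroken = () => {
    // Object form: both calls read the same stale this.state.count,
    // so two batched updates collapse into a single +1.
    this.setState({ count: this.state.count + 1 });
    this.setState({ count: this.state.count + 1 });
  };

  incrementTwice = () => {
    // Updater form: React passes each updater the latest pending state,
    // so the two increments compose to +2 as intended.
    this.setState(prevState => ({ count: prevState.count + 1 }));
    this.setState(prevState => ({ count: prevState.count + 1 }));
  };

  render() {
    return <p>{this.state.count}</p>;
  }
}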
There is also a recommendation to spread the mapped ids into Math.max and I could not figure out how and Why to do it
That's just to simplify the code for finding the max. Math.max can be passed an arbitrary number of arguments, rather than just two at a time, so you don't need to use reduce to get the maximum of an array.
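For example, with the ids from the initial state:
const ids = [{ id: 7 }, { id: 14 }];

// two at a time, via reduce:
ids.map(value => value.id).reduce((a, b) => Math.max(a, b)); // 14

// spreading the whole mapped array into Math.max instead:
Math.max(...ids.map(value => value.id)); // 14

Applying that to the handler, together with the functional form of setState: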
uniqueIdCreatorHandler = incrementAmount => {
  this.setState(prevState => {
    let ids = [...prevState.ids];
    let highestId = Math.max(...ids.map(value => value.id));
    let newId = highestId + incrementAmount;
    ids.push({ id: newId });
    return { ids: ids }; // the updater must return the new state
  });
};
This isn't the most elegant solution, but you can pass a callback to setState (see https://reactjs.org/docs/react-component.html#setstate).
If you modify uniqueIdCreatorHandler like this:
uniqueIdCreatorHandler = (incrementAmount, next) => {
  let ids = [...this.state.ids];
  let highestId = 0;
  if (ids.length > 0) {
    highestId = ids
      .map(value => {
        return value.id;
      })
      .reduce((a, b) => {
        return Math.max(a, b);
      });
  }
  let newId = highestId + incrementAmount;
  ids.push({ id: newId });
  this.setState({ ids: ids }, next); // next will be called once the setState is finished
};
You can call it inside double like this.
double = (firstIncrementAmount, secondIncrementAmount) => {
  this.uniqueIdCreatorHandler(
    firstIncrementAmount,
    () => this.uniqueIdCreatorHandler(secondIncrementAmount)
  );
};

Refactor JavaScript ES6

I need help with refactoring the block of code below. I was asked to avoid using let and to use const. How can I use const here, since I need to return all the options that have a possible match id?
const findRecordExists = (options, possibleMatchId) => {
  let item;
  options.forEach(option => {
    option.waivers.forEach(waiver => {
      if (waiver.waiverNameId === possibleMatchId) {
        item = option;
      }
    });
  });
  return item;
};
An example of options would be:
options: [{
  name: 'Abc',
  waivers: [ {waiverNameId: 1}, {waiverNameId: 2} ]
}]
Use filter to iterate over the options array, returning whether .some of the waiverNameIds match:
const findRecordExists = (options, possibleMatchId) => {
  return options.filter(
    ({ waivers }) => waivers.some(
      ({ waiverNameId }) => waiverNameId === possibleMatchId
    )
  );
};
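With the sample options from the question, this version returns an array of every matching option (and an empty array when nothing matches):
findRecordExists(options, 2); // => [{ name: 'Abc', waivers: [{ waiverNameId: 1 }, { waiverNameId: 2 }] }]
findRecordExists(options, 3); // => []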
Or, if you don't like destructuring:
const findRecordExists = (options, possibleMatchId) => {
  return options.filter(
    option => option.waivers.some(
      waiver => waiver.waiverNameId === possibleMatchId
    )
  );
};
Since the result is being immediately returned from the findRecordExists function, there isn't even any need for an intermediate item (or items) variable.
That's okay.
Using const to declare an identifier only makes the value of the identifier unchangeable if that value is a JavaScript primitive, e.g. a number or a boolean.
If the value of the identifier is an object or an array (an array is a type of object in JavaScript), using const to declare it doesn't mean that the value of that object cannot be changed. It only means that the identifier cannot be reassigned.
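For example (a hypothetical list):
const list = [];
list.push('new entry');     // fine: we mutate the array the identifier points to
list = ['something else'];  // TypeError: Assignment to constant variable.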
To refactor your code using const, use the code listing below
const findRecordExists = (options, possibleMatchId) => {
  const optionsWithPossibleMatches = [];
  options.forEach(option => {
    option.waivers.forEach(waiver => {
      if (waiver.waiverNameId === possibleMatchId) {
        optionsWithPossibleMatches.push(option);
      }
    });
  });
  return optionsWithPossibleMatches;
};
If you want to skip intermediate steps of creating variables to store each option that matches your condition, you can use the filter method as prescribed by #CertainPerformance
You can refactor using the find method. This simplifies the code and avoids the item variable. Note that find returns only the first matching option, whereas the filter approach above returns all of them.
const options = [
  {
    name: "Abc",
    waivers: [{ waiverNameId: 1 }, { waiverNameId: 2 }]
  }
];

const findRecordExists = (options, possibleMatchId) =>
  options.find(option =>
    option.waivers.find(waiver => waiver.waiverNameId === possibleMatchId)
  );

console.log(findRecordExists(options, 2));
console.log(findRecordExists(options, 3));

JavaScript: in a chain, how to get the size of the previous array?

When chaining filter and reduce, how do I get the size of the filtered array? I need that size for tailoring responsive-design CSS.
My (simplified) code uses the 3rd and 4th parameters of the callback:
json.articles
  .filter(a => keep(a, name))
  .reduce((el, a, i, t) =>
    i === 0
      ? DOMtag('section', {'class': `media${t.length}`})
          .appendChild(DOMtag(`article`, content(a, 0))).parentElement
      : el.appendChild(DOMtag(`article`, content(a, i))).parentElement,
    null);
or, even simpler (thanks to luc and his lazy evaluation suggestion below):
json.articles
  .filter(a => keep(a, name))
  .reduce((el, a, i, t) =>
    (el || DOMtag('section', {'class': `media${t.length}`}))
      .appendChild(DOMtag(`article`, content(a, i))).parentElement,
    null);
Both versions work, but if someone has an idea about binding this somehow, it would be possible to use the initial value of the accumulator, such as:
json.articles
  .filter(a => keep(a, name))
  .reduce((el, a) =>
    el.appendChild(DOMtag(`article`, content(a, i))).parentElement,
    DOMtag('section', {'class': `media${this.length}`}));
Any idea?
There is no way to access the filtered array's length at the moment of passing the initial value to reduce, since that expression is evaluated before reduce is called.
You can, though, simplify the contents of your function as follows:
// Sample data
const json = {articles: [{name: 'test'}, {}, {name: 'test2'}, {}]};

// functional helper function to illustrate the example
function DOMTag(tagName, {content, className}) {
  const tag = document.createElement(tagName);
  if (content) {
    tag.appendChild(document.createTextNode(content));
  }
  if (className) {
    tag.className = className;
  }
  return tag;
}

// Start of actual answer code:
const section = json.articles
  .filter(a => a.name)
  .map(a => DOMTag(`article`, {content: a.name}))
  .reduce((p, c, i, a) =>
    (p || DOMTag('section', {className: `media${a.length}`}))
      .appendChild(c).parentElement
  , null);

section.className = `media${section.childNodes.length}`

// Validation
console.log(section.outerHTML);
Another option that would be clean is to set the className outside of the chain:
// Sample data
const json = {articles: [{name: 'test'}, {}, {name: 'test2'}, {}]};

// functional helper function to illustrate the example
function DOMTag(tagName, content) {
  const tag = document.createElement(tagName);
  if (content) {
    tag.appendChild(document.createTextNode(content));
  }
  return tag;
}

// Start of actual answer code:
const section = json.articles
  .filter(a => a.name)
  .map(el => DOMTag('article', el.name))
  .reduce((p, c) => (p.appendChild(c), p), DOMTag('section'))

section.className = `media${section.childNodes.length}`

// Validation
console.log(section.outerHTML);
Option 2: use a function to abstract the creation of the container:
// Sample data
const json = {articles: [{name: 'test'}, {}, {name: 'test2'}, {}]};

// functional helper function to illustrate the example
function DOMTag(tagName, {content, className}) {
  const tag = document.createElement(tagName);
  if (content) {
    tag.appendChild(document.createTextNode(content));
  }
  if (className) {
    tag.className = className;
  }
  return tag;
}

// Start of actual answer code:
function section(children) {
  const section = DOMTag('section', {
    className: `media${children.length}`
  });
  children.forEach(c => section.appendChild(c));
  return section;
}

const s = section(
  json.articles
    .filter(a => a.name)
    .map(el => DOMTag('article', {content: el.name}))
)

// Validation
console.log(s.outerHTML);
Option 3: more functions! If you absolutely must use reduce:
// Sample data
const json = {articles: [{name: 'test'}, {}, {name: 'test2'}, {}]};

// functional helper function to illustrate the example
function DOMTag(tagName, {content, className}) {
  const tag = document.createElement(tagName);
  if (content) {
    tag.appendChild(document.createTextNode(content));
  }
  if (className) {
    tag.className = className;
  }
  return tag;
}

// Start of actual answer code:
function defaultSection() {
  let m = false;
  // Returns a function that lazily creates the <section> the first time it is
  // called (once the filtered array's length is known) and reuses it afterwards.
  return (a = []) => {
    if (!m) {
      m = DOMTag('section', {className: `media${a.length}`});
    }
    return m;
  };
}

const section = json.articles
  .filter(a => a.name)
  .map(el => DOMTag('article', {content: el.name}))
  .reduce((p, c, i, a) => {
    p(a).appendChild(c);
    return p;
  }, defaultSection())();

// Validation
console.log(section.outerHTML);

How do I map & filter this in a point-free style

Dear StackOverflowers…
I have a set of posts:
const posts = [
  { title: 'post1', tags: ['all', 'half', 'third', 'quarter', 'sixth']},
  { title: 'post2', tags: ['all', 'half', 'third', 'quarter', 'sixth']},
  { title: 'post3', tags: ['all', 'half', 'third', 'quarter']},
  { title: 'post4', tags: ['all', 'half', 'third']},
  { title: 'post5', tags: ['all', 'half']},
  { title: 'post6', tags: ['all', 'half']},
  { title: 'post7', tags: ['all']},
  { title: 'post8', tags: ['all']},
  { title: 'post9', tags: ['all']},
  { title: 'post10', tags: ['all']},
  { title: 'post11', tags: ['all']},
  { title: 'post12', tags: ['all']}
];
And an ever-increasing set of utility functions:
const map = f => list => list.map(f);
const filter = f => list => list.filter(f);
const reduce = f => y => xs => xs.reduce((y,x)=> f(y)(x), y);
const pipe = (fn,...fns) => (...args) => fns.reduce( (acc, f) => f(acc), fn(...args));
const comp = (...fns) => pipe(...fns.reverse()); // const comp = (f, g) => x => f(g(x));
const prop = prop => obj => obj[prop];
const propEq = v => p => obj => prop(p)(obj) === v;
const flatten = reduce(y=> x=> y.concat(Array.isArray(x) ? flatten (x) : x)) ([]);
const unique = list => list.filter((v, i, a) => a.indexOf(v) === i);
const add = a => b => a + b;
const addO = a => b => Object.assign(a, b);
const log = x => console.log(x);
And I would like to massage the data into the format:
[
  { title: 'sixth', posts: [array of post objects that all have tag 'sixth'] },
  { title: 'quarter', posts: [array of post objects that all have tag 'quarter'] },
  { title: 'third', posts: [array of post objects that all have tag 'third'] },
  etc...
]
Using a point-free style, utilising just the reusable, compact utility functions.
I can get the unique tags from all the posts:
const tagsFor = comp(
  unique,
  flatten,
  map(prop('tags'))
);

tagsFor(posts);
And I can work out how to achieve what I want using map & filter:
tagsFor(posts).map(function(tag) {
  return {
    title: tag,
    posts: posts.filter(function(post) {
      return post.tags.some(t => t === tag);
    })
  };
});
I just can’t seem to get my head around achieving this in a tacit manner.
Any pointers would be gratefully received...
I can see the influence of some of my other answers in your current work ^_^ #Bergi is giving you good advice, too. Just keep making generic procedures and composing them together.
I just can’t seem to get my head around achieving this in a tacit manner.
Well the goal shouldn't be to go completely point-free. Often times you will end up with really weird comp (comp (f)) and comp (f) (comp (g)) stuff that is really hard to grok when you come back to it later.
We can still make a couple improvements with your code tho
This is the code we are changing
// your original code
tagsFor(posts).map(function(tag) {
  return {
    title: tag,
    posts: posts.filter(function(post) {
      return post.tags.some(t => t === tag);
    })
  };
});
This is the updated code
// yay
tagsFor(posts).map(makeTag(posts));
// OR
map (makeTag (posts)) (tagsFor (posts));
Here are the utilities:
const comp = f => g => x => f (g (x));
const apply = f => x => f (x);
const eq = x => y => y === x;
const some = f => xs => xs.some(apply(f));
const filter = f => xs => xs.filter(apply(f));
const postHasTag = tag => comp (some (eq (tag))) (prop ('tags'));
const makeTag = posts => tag => ({
  title: tag,
  posts: filter (postHasTag (tag)) (posts)
});
Of course this is just one way to do it. Let me know if this helps or if you have any other questions !
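A quick usage sketch with the posts from the question and the helpers above:
makeTag (posts) ('half');
// => { title: 'half',
//      posts: [ /* post1 .. post6: every post whose tags include 'half' */ ] }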
"Ever-increasing set of utility functions"
It might feel overwhelming to have lots of utility functions, but you should be watching out for some that feel like you're duplicating behaviours.
Take this one for example ...
const propEq = v => p => obj => prop(p)(obj) === v;
3 parameters doesn't mean it's a bad function, but it should at least cause you to think twice about it and make sure they're required. Remember, it becomes harder to compose functions with more parameters, so you should be thinking carefully about the order of parameters too. Anyway, this propEq function should be raising a red flag for you.
const eq = x => y => y === x;
const prop = x => y => y[x];
const propEq = p => x => comp (eq(x)) (prop(p))
Once you have eq defined as a function, you should be able to compose it when you encounter the uncomposable === in your other functions. This goes for all operators in JavaScript.
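For instance, a quick usage sketch of the refactored propEq (the role/admin example values are made up):
const isAdmin = propEq ('role') ('admin');

isAdmin ({ role: 'admin' }); // => true
isAdmin ({ role: 'guest' }); // => false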
As a little challenge, take a look at your reduce, pipe, and comp and see if you can remove a couple points. If you're getting stuck, let me know.
So, with many thanks to #naomik for the restructuring and #Bergi for leading me down the rabbit hole of combinatory logic, this is what I came up with…
First off, tagsFor is collecting all the unique entries of some nested arrays into a single array, which sounds like generic functionality rather than something specific to any particular problem, so I rewrote it to:
const collectUniq = (p) => comp( // is this what flatMap does?
  unique,
  flatten,
  map(prop(p))
);
So taking #naomik’s input we have:
const hasTag = tag => comp( // somePropEq?
  some(eq(tag)),
  prop('tags')
);

const makeTag = files => tag => ({
  title: tag,
  posts: filter (hasTag(tag)) (files)
});

const buildTags = comp(
  map(makeTag(posts)),
  collectUniq('tags')
);
The problem for any tacit solution is that the data (posts) is buried in makeTag in map.
SKI calculus and BCKW logic give us a useful set of combinatory logic functions, which I'll just leave here:
const I = x => x; // id
const B = f => g => x => f(g(x)); // compose <$>
const K = x => y => x; // pure
const C = f => x => y => f(y)(x); // flip
const W = f => x => f(x)(x); // join
const S = f => g => x => f(x)(g(x)); // sub <*>
We could alias these to id, comp, pure, flip, etc., but in this instance I don't think it helps grok anything.
So, let’s dig out posts with B (compose):
const buildTags = comp(
  B(map)(makeTag)(posts),
  collectUniq('tags')
);
And now we can see it is in the form of f(x)(g(x)), where f = B(map)(makeTag), g = collectUniq('tags'), and x = posts:
const buildTags = S(B(map)(makeTag))(collectUniq('tags'));
Now it’s tacit, declarative, and easy to grok (to my mind anyway)
Right, somebody get me a beer that took me 3 DAYS! (ouch)
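A quick check (note the tag order follows first appearance in the posts, so it differs from the ordering sketched in the question):
buildTags(posts);
// => [ { title: 'all',     posts: [ /* all 12 posts   */ ] },
//      { title: 'half',    posts: [ /* post1 .. post6 */ ] },
//      { title: 'third',   posts: [ /* post1 .. post4 */ ] },
//      { title: 'quarter', posts: [ /* post1 .. post3 */ ] },
//      { title: 'sixth',   posts: [ /* post1 .. post2 */ ] } ]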
