I am trying to switch my programming style from imperative to declarative, but one thing keeps bugging me: performance when it comes to loops. For example, I have an original DATA array, and after manipulating it I want to produce 3 outcomes: itemsHash, namesHash, and rangeItemsHash.
// original data
const DATA = [
{id: 1, name: 'Alan', date: '2021-01-01', age: 0},
{id: 2, name: 'Ben', date: '1980-02-02', age: 41},
{id: 3, name: 'Clara', date: '1959-03-03', age: 61},
]
...
// expected outcome
// itemsHash => {
// 1: {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
// 2: {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
// 3: {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
// }
// namesHash => {1: 'Alan', 2: 'Ben', 3: 'Clara'}
// rangeItemsHash => {
// minor: [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}],
// junior: [{id: 2, name: 'Ben', date: '1980-02-02', age: 41}],
// senior: [{id: 3, name: 'Clara', date: '1959-03-03', age: 61}],
// }
// imperative way
const itemsHash = {}
const namesHash = {}
const rangeItemsHash = {}
DATA.forEach(person => {
  itemsHash[person.id] = person;
  namesHash[person.id] = person.name;
  if (person.age > 60) {
    if (typeof rangeItemsHash['senior'] === 'undefined') {
      rangeItemsHash['senior'] = []
    }
    rangeItemsHash['senior'].push(person)
  }
  else if (person.age > 21) {
    if (typeof rangeItemsHash['junior'] === 'undefined') {
      rangeItemsHash['junior'] = []
    }
    rangeItemsHash['junior'].push(person)
  }
  else {
    if (typeof rangeItemsHash['minor'] === 'undefined') {
      rangeItemsHash['minor'] = []
    }
    rangeItemsHash['minor'].push(person)
  }
})
// declarative way
const itemsHash = R.indexBy(R.prop('id'))(DATA);
const namesHash = R.compose(R.map(R.prop('name')),R.indexBy(R.prop('id')))(DATA);
const gt21 = R.gt(R.__, 21);
const lt60 = R.lte(R.__, 60);
const isMinor = R.lt(R.__, 21);
const isJunior = R.both(gt21, lt60);
const isSenior = R.gt(R.__, 60);
const groups = {minor: isMinor, junior: isJunior, senior: isSenior };
const rangeItemsHash = R.map((method => R.filter(R.compose(method, R.prop('age')))(DATA)))(groups)
To achieve the expected outcome, the imperative version loops only once, while the declarative version loops at least 3 times (itemsHash, namesHash, rangeItemsHash). Which one is better? Is there any trade-off in performance?
I have several responses to this.
First, have you tested to know that performance is a problem? Far too much performance work is done on code that is not even close to being a bottleneck in an application. This often happens at the expense of code simplicity and clarity. So my usual rule is to write the simple and obvious code first, trying not to be stupid about performance, but never worrying overmuch about it. Then, if my application is unacceptably slow, benchmark it to find what parts are causing the largest issues, then optimize those. I've rarely had those places be the equivalent of looping three times rather than one. But of course it could happen.
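(If you do want to check, even a crude measurement will usually settle whether the difference matters at your data sizes. Here is a rough sketch, where convertImperative and convertDeclarative are hypothetical wrappers around the two versions above, and 1e5 iterations is an arbitrary choice.)
// Hypothetical wrappers around the imperative and declarative versions above
console.time('imperative')
for (let i = 0; i < 1e5; i++) convertImperative(DATA)
console.timeEnd('imperative')
console.time('declarative')
for (let i = 0; i < 1e5; i++) convertDeclarative(DATA)
console.timeEnd('declarative')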
If it does, and you really need to do this in a single loop, it's not terribly difficult to do this on top of a reduce call. We could write something like this:
// helper function
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'
// main function
const convert = (people) =>
  people.reduce (({itemsHash, namesHash, rangeItemsHash}, person, _, __, group = ageGroup (person)) => ({
    itemsHash: {...itemsHash, [person.id]: person},
    namesHash: {...namesHash, [person.id]: person.name},
    rangeItemsHash: {...rangeItemsHash, [group]: [...(rangeItemsHash[group] || []), person]}
  }), {itemsHash: {}, namesHash: {}, rangeItemsHash: {}})
// sample data
const data = [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}, {id: 2, name: 'Ben', date: '1980-02-02', age: 41}, {id: 3, name: 'Clara', date: '1959-03-03', age: 61}]
// demo
console .log (JSON .stringify (
convert (data)
, null, 4))
(You can remove the JSON .stringify call to demonstrate that the references are shared between the various output hashes.)
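For instance (a quick check, using the convert and data defined above):
const result = convert (data)
console.log(result.itemsHash[1] === result.rangeItemsHash.minor[0]) //=> true
console.log(result.itemsHash[1] === data[0]) //=> true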
There are two directions I might go from here to clean up this code.
The first would be to use Ramda. It has some functions that would help simplify a few things here. Using R.reduce, we could eliminate the annoying placeholder parameters that I use to allow me to add the default parameter group to the reduce signature, and maintain expressions-over-statements style coding. (We could alternatively do something with R.call.) And using evolve together with functions like assoc and over, we can make this more declarative like this:
// helper function
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'
// main function
const convert = (people) =>
  reduce (
    (acc, person, group = ageGroup (person)) => evolve ({
      itemsHash: assoc (person.id, person),
      namesHash: assoc (person.id, person.name),
      rangeItemsHash: over (lensProp (group), append (person))
    }) (acc),
    {itemsHash: {}, namesHash: {}, rangeItemsHash: {minor: [], junior: [], senior: []}},
    people
  )
// sample data
const data = [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}, {id: 2, name: 'Ben', date: '1980-02-02', age: 41}, {id: 3, name: 'Clara', date: '1959-03-03', age: 61}]
// demo
console .log (JSON .stringify (
convert (data)
, null, 4))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.js"></script>
<script> const {reduce, evolve, assoc, over, lensProp, append} = R </script>
A slight downside to this version over the previous one is the need to predefine the categories senior, junior, and minor in the accumulator. We could certainly write an alternative to lensProp that somehow deals with default values, but that would take us further afield.
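For what it's worth, a sketch of such a helper might look like the following; the name lensPropOr is made up here, but lens, propOr, and assoc are standard Ramda functions:
// Hypothetical helper: like lensProp, but viewing a missing key yields a default value
const lensPropOr = (def, key) => R.lens (R.propOr (def, key), R.assoc (key))
// which would let the accumulator start as a plain {}:
// rangeItemsHash: over (lensPropOr ([], group), append (person))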
The other direction I might go is to note that there is still one potentially serious performance problem in the code, one Rich Snapp called the reduce ({...spread}) anti-pattern. To solve that, we might want to mutate our accumulator object in the reduce callback. Ramda -- by its very philosophic nature -- will not help you with this. But we can define some helper functions that will clean our code up at the same time we address this issue, with something like this:
// utility functions
const push = (x, xs) => ((xs .push (x)), xs) // push x onto xs, returning the array
const put = (k, v, o) => ((o[k] = v), o) // set o[k] = v, returning the object
const appendTo = (k, v, o) => put (k, push (v, o[k] || []), o) // append v to the array at o[k], creating it if needed
// helper function
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'
// main function
const convert = (people) =>
  people.reduce (({itemsHash, namesHash, rangeItemsHash}, person, _, __, group = ageGroup(person)) => ({
    itemsHash: put (person.id, person, itemsHash),
    namesHash: put (person.id, person.name, namesHash),
    rangeItemsHash: appendTo (group, person, rangeItemsHash)
  }), {itemsHash: {}, namesHash: {}, rangeItemsHash: {}})
// sample data
const data = [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}, {id: 2, name: 'Ben', date: '1980-02-02', age: 41}, {id: 3, name: 'Clara', date: '1959-03-03', age: 61}]
// demo
console .log (JSON .stringify (
convert (data)
, null, 4))
But in the end, as already suggested, I would not do this unless performance was provably a problem. I think it's much nicer with Ramda code like this:
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'
const convert = applySpec ({
  itemsHash: indexBy (prop ('id')),
  namesHash: compose (fromPairs, map (props (['id', 'name']))),
  rangeItemsHash: groupBy (ageGroup)
})
const data = [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}, {id: 2, name: 'Ben', date: '1980-02-02', age: 41}, {id: 3, name: 'Clara', date: '1959-03-03', age: 61}]
console .log (JSON .stringify(
convert (data)
, null, 4))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.js"></script>
<script> const {applySpec, indexBy, prop, compose, fromPairs, map, props, groupBy} = R </script>
Here we might want -- for consistency's sake -- to make ageGroup point-free and/or inline it in the main function. That's not hard, and another answer gave an example of that. I personally find it more readable like this. (There's also probably a cleaner version of namesHash, but I'm out of time.)
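For reference, a point-free rendering of ageGroup might look something like this; it is only a sketch, and cond, lt, always, and T are standard Ramda functions not destructured in the snippet above:
// Point-free ageGroup: 60 < age => 'senior', 21 < age => 'junior', otherwise 'minor'
const ageGroup = R.compose (
  R.cond ([
    [R.lt (60), R.always ('senior')],
    [R.lt (21), R.always ('junior')],
    [R.T, R.always ('minor')]
  ]),
  R.prop ('age')
)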
The applySpec version loops three times, exactly what you were worried about. There are times when that might be a problem. But I wouldn't spend much effort on it unless it's a demonstrable problem. Clean code is a useful goal on its own.
Similar to how .map(f).map(g) is equivalent to .map(compose(g, f)), you can compose reducers so that a single pass gives you all the results.
Writing declarative code does not really have anything to do with the decision to loop once or multiple times.
// Reducer logic for all 3 values you're interested in
// id: person
const idIndexReducer = (idIndex, p) =>
({ ...idIndex, [p.id]: p });
// id: name
const idNameIndexReducer = (idNameIndex, p) =>
({ ...idNameIndex, [p.id]: p.name });
// Age
const ageLabel = ({ age }) => age > 60 ? "senior" : age > 40 ? "medior" : "junior";
const ageGroupReducer = (ageGroups, p) => {
const ageKey = ageLabel(p);
return {
...ageGroups,
[ageKey]: (ageGroups[ageKey] || []).concat(p)
}
}
// Combine the reducers
const seed = { idIndex: {}, idNameIndex: {}, ageGroups: {} };
const reducer = ({ idIndex, idNameIndex, ageGroups }, p) => ({
idIndex: idIndexReducer(idIndex, p),
idNameIndex: idNameIndexReducer(idNameIndex, p),
ageGroups: ageGroupReducer(ageGroups, p)
})
const DATA = [
{id: 1, name: 'Alan', date: '2021-01-01', age: 0},
{id: 2, name: 'Ben', date: '1980-02-02', age: 41},
{id: 3, name: 'Clara', date: '1959-03-03', age: 61},
]
// Loop once
console.log(
JSON.stringify(DATA.reduce(reducer, seed), null, 2)
);
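If you wanted to avoid writing the combining reducer by hand, a generic helper could wire it up from the object of reducers. This is only a sketch; the name combineReducers is borrowed from the Redux idiom and is not part of any library used here:
// Build one reducer from an object of { key: reducer } pairs
const combineReducers = (reducers) => (acc, x) =>
  Object.fromEntries(
    Object.entries(reducers).map(([key, r]) => [key, r(acc[key], x)])
  );
const combined = combineReducers({
  idIndex: idIndexReducer,
  idNameIndex: idNameIndexReducer,
  ageGroups: ageGroupReducer,
});
// DATA.reduce(combined, seed) produces the same result as above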
Subjective part: is it worth it? I don't think so. I like simple code, and in my own experience, going from 1 to 3 loops over limited data sets is usually unnoticeable.
So, if using Ramda, I'd stick to:
const { prop, indexBy, map, groupBy, pipe } = R;
const DATA = [
{id: 1, name: 'Alan', date: '2021-01-01', age: 0},
{id: 2, name: 'Ben', date: '1980-02-02', age: 41},
{id: 3, name: 'Clara', date: '1959-03-03', age: 61},
];
const byId = indexBy(prop("id"), DATA);
const nameById = map(prop("name"), byId);
const ageGroups = groupBy(
pipe(
prop("age"),
age => age > 60 ? "senior" : age > 40 ? "medior" : "junior"
),
DATA
);
console.log(JSON.stringify({ byId, nameById, ageGroups }, null, 2))
<script src="https://cdn.jsdelivr.net/npm/ramda@0.27.1/dist/ramda.min.js"></script>
Related
I have an array of objects containing data about persons
const oldArr = [
{
id: 1,
name: 'Alex',
},
{
id: 2,
name: 'John',
},
{
id: 3,
name: 'Jack',
}
]
Then I add data to each element of this array, so that I end up with a new key called money with a value of 20, as follows:
oldArr.map((el, index) => el.money = 20)
and the array becomes like this
...
{
id: 2,
name: 'John',
money: 20
},
...
Now, I have a new array with new data (a new person) but missing the money I added before. (Note that the person with id 2 is no longer there.)
const newArr = [
{
id: 1,
name: 'Alex',
},
{
id: 3,
name: 'Jack',
},
{
id: 4,
name: 'Chris',
},
]
I want to update the old array with new data but also keep the mutated data, and I want the result to end up like this:
const result = [
{
id: 1,
name: 'Alex',
money: 20
},
{
id: 3,
name: 'Jack',
money: 20
},
{
id: 4,
name: 'Chris',
},
]
Thanks for the help.
Just a note: map creates a whole new array, so it doesn't make sense to use it just to mutate the contents. Use forEach or a regular for loop instead.
oldArr.forEach((el) => (el.money = 20));
The following will give you the intended result:
const result = newArr.map(
(newEl) => oldArr.find((el) => el.id === newEl.id) || newEl
);
The OR operator || returns the second operand if the first is falsy.
You can optimize this by mapping items by id instead of brute force searching the old array.
const idMap = new Map();
oldArr.forEach((el) => {
el.money = 20;
idMap.set(el.id, el);
});
const result = newArr.map((newEl) => idMap.get(newEl.id) || newEl);
Stackblitz: https://stackblitz.com/edit/js-f3sw8w?file=index.js
If I understood correctly, you are just trying to iterate through the items of the array, generating a new array with the property "money" added to each one.
If so, map is the best option: just assign the result to a new variable and modify each item before returning it, as below.
const oldArr = [
{
id: 1,
name: "Alex"
},
{
id: 2,
name: "John"
},
{
id: 3,
name: "Jack"
}
];
const newArr = oldArr.map((el) => {
el.money = "20";
return el;
});
console.log(oldArr);
console.log(newArr);
In this way you'll be able to keep both arrays.
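One caveat worth noting: because the callback mutates el, both arrays end up holding the same object references, so changes made through one are visible through the other. If you want the new array to hold independent copies, a spread per element would do it (just a sketch):
// Copies each element, leaving oldArr's objects untouched
const newArrCopy = oldArr.map((el) => ({ ...el, money: "20" }));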
If that wasn't what you meant, please let me know.
Just merge the objects:
const result = oldArr.map((person) => ({
...person,
...newArr.find((cur) => cur.id === person.id),
}));
I'm just learning JS now (late to the party, I know) and I have the following code; I was wondering if it could be written in a cleaner/simpler way.
Also, ideally, instead of using "if (obj.id === 1)" I would like to iterate through the array and add age based on the sequence i.e. [0] would become '32' and so on.
const students = [ // Three objects, each with four properties
{
id: 1,
name: 'Mark',
profession: 'Developer',
skill: 'JavaScript'
},
{
id: 2,
name: 'Ariel',
profession: 'Developer',
skill: 'HTML'
},
{
id: 3,
name: 'Jason',
profession: 'Designer',
skill: 'CSS'
},
];
const studentsWithAge = students.map(obj => {
  if (obj.id === 1) {
    return {...obj, age: '32'};
  } else if (obj.id === 2) {
    return {...obj, age: '26'};
  } else if (obj.id === 3) {
    return {...obj, age: '28'};
  }
  return obj;
});
console.log(studentsWithAge);
// output
// [
// {
// id: 1,
// name: 'Mark',
// profession: 'Developer',
// skill: 'JavaScript',
// age: '32'
// },
// {
// id: 2,
// name: 'Ariel',
// profession: 'Developer',
// skill: 'HTML',
// age: '26'
// },
// {
// id: 3,
// name: 'Jason',
// profession: 'Designer',
// skill: 'CSS',
// age: '28'
// }
// ]
You can map over the array and look each age up by id, like so:
const ages = ['32', '26', '28'];
const studentsWithAge = students.map(obj => ({ ...obj, age: ages[obj.id - 1] }));
You could create an ages array and use the index to map the value to the corresponding object.
const students = [ // Three objects, each with four properties
{
id: 1,
name: 'Mark',
profession: 'Developer',
skill: 'JavaScript'
},
{
id: 2,
name: 'Ariel',
profession: 'Developer',
skill: 'HTML'
},
{
id: 3,
name: 'Jason',
profession: 'Designer',
skill: 'CSS'
},
];
const ages = [32, 26, 28];
const result = students.map((s, i) => {
return { ...s, age: ages[i] }
});
console.log(result);
Your code is correct. Another way to add the ages by id is to use the ids as object keys and the ages as values.
The following code checks whether the id exists in the ages object and, if it does, adds the age to the student. It works exactly like your code.
const ages = {1: '32', 2: '26', 3: '28'};
const studentsWithAge = students.map(obj => {
if(ages[obj.id]) obj.age = ages[obj.id];
return obj;
});
But if you're sure all ids have an age value, simpler code like this could be used:
const ages = {1: '32', 2: '26', 3: '28'};
const studentsWithAge = students.map(obj => ({...obj, age: ages[obj.id]}));
The solution depends on how you store the ages data. Here's an example if you keep the ages data in an array of objects, just like you keep the students data.
This approach is easily extended: you can add any other student-related fields to the objects.
const students = [
{id: 1,name: 'Mark',profession: 'Developer',skill: 'JavaScript'},
{id: 2,name: 'Ariel',profession: 'Developer',skill: 'HTML'},
{id: 3,name: 'Jason',profession: 'Designer',skill: 'CSS'}];
const extraInfo = [{id: 1, age: 32}, {id: 2, age: 26}, {id: 3, age: 33}];
const result = students.map((s)=>
({ ...s, ...extraInfo.find((a) => a.id === s.id) })
);
console.log(result);
I'm trying to evenly distribute several people to different events. For example:
People     Events
John       10.10.2021 (4 people needed)
Jack       11.10.2021 (2 people needed)
Harry      12.10.2021 (1 person needed)
Charlie    13.10.2021 (3 people needed)
Jacob      14.10.2021 (5 people needed)
I want each event to be assigned exactly the number of people it needs. In the best case, every person gets assigned the same number of times.
How can I achieve this in javascript / typescript?
You could use map() to loop through the events and add people to them by slicing the array of people. You could then add logic to distribute them evenly (see the sketch after the snippet).
const people = ["John", "Jack", "Harry", "Charlie", "Jacob"];
const events = [{
date: "10.10.2021",
numPeople: 4
}, {
date: "11.10.2021",
numPeople: 2
}, {
date: "12.10.2021",
numPeople: 1
}, {
date: "13.10.2021",
numPeople: 3
}, {
date: "14.10.2021",
numPeople: 5
}];
const result = events.map((e) => {
return {
...e,
people: people.slice(0, e.numPeople)
}
});
console.log(result);
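If you do want the assignments spread evenly, one simple sketch is a round-robin cursor over the same people array (the variable names here are only illustrative):
// Round-robin assignment: advance a cursor through `people` for every slot needed
let cursor = 0;
const evenResult = events.map((e) => ({
  ...e,
  people: Array.from({ length: e.numPeople }, () => people[cursor++ % people.length]),
}));
// With 15 slots and 5 people, everyone ends up assigned exactly 3 times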
Like this maybe:
const people = ['John', 'Jack', 'Harry', 'Charlie', 'Jacob']
const events = [{date: '10.10.2021', nr: 4}, {date: '11.10.2021', nr: 2},
{date: '12.10.2021', nr: 1}, {date: '13.10.2021', nr: 3}, {date: '14.10.2021', nr: 5}]
const ppl = []
people.forEach(p => ppl.push({'name': p, 'assigned': 0}))
const arr = []
events.forEach(e => {
  let ex = {[e.date]: []}
  arr.push(ex)
  for (let i = 0; i < e.nr; i++) {
    let min = Math.min(...ppl.map(p => p.assigned))
    let pe = ppl.filter(p => p.assigned === min)
    ex[e.date].push(pe[0].name)
    pe[0].assigned++
  }
})
console.log(...arr)
I'm trying to write a function to find the descendants of a person. The data I have is set up in an array and is formatted like this:
{
"id": 159819275,
"firstName": "Jasmine",
"lastName": "Bob",
"gender": "female",
"dob": "12/18/1969",
"height": 58,
"weight": 156,
"eyeColor": "blue",
"occupation": "assistant",
"parents": [409574486, 260451248],
"currentSpouse": 951747547
},
The function I wrote takes in the ID of the person whose descendants I am looking for and the array of people, and creates a new array that holds the descendants.
function findDescendants (id, people, descendantsArray = []) {
  descendantsArray = people.filter(function(el) {
    return el.parents[0] === id || el.parents[1] === id // works if id is a single value, but not if it's an array; otherwise the function is good
  })
  displayPeople(descendantsArray) // calls a method that alerts the descendants' names as a string
  id = []; // resets id and turns it into an array
  for (let i = 0; i < descendantsArray.length; i++) {
    id[i] = descendantsArray[i].id
  } // puts the ids of the listed descendants into the id array
  if (descendantsArray.length === 0) {
    return descendantsArray;
  }
  else {
    findDescendants(id, people, descendantsArray)
  } // passes in the id array, the people data set, and descendantsArray (which contains the children of the person in question)
}
My problem is that on the recursive call, the filter doesn't compare el.parents against all the elements in id, and I'm not sure where to go from here.
descendantsArray = people.filter(function(el) {
  return el.parents[0] === id || el.parents[1] === id // works if id is a single value, but not if it's an array
})
What I would like this statement to do is filter people down to those whose parents array contains any of the elements of the id array. Any help would be appreciated.
You can write something like this:
const getDescendents = (people, id) =>
people .filter (({parents = []}) => parents .includes (id))
.flatMap (person => [person, ... getDescendents (people, person.id)])
If we had a tree like this:
                     Alice + Bob
                           |
               +-----------+------------+
               |                        |
       Charlie + Denise          Edward + Francine
               |                        |
       +-------+-------+                |
       |               |                |
George + Helen   Irwin + Jean        Kayleigh
                       |
                       |
                       |
                     Leroy
and we called getDescendents(people, 1) (where Alice has id 1), we would get back the objects representing these people: [Denise, Helen, Irwin, Leroy, Edward, Kayleigh]. That involves a depth-first, pre-order traversal of the data. If we wanted to sort (by id or whatever), we could do that after the call or -- with some minor performance degradation -- as a final step of the function:
const getDescendents = (people, id) =>
people .filter (({parents = []}) => parents .includes (id))
.flatMap (person => [person, ... getDescendents (people, person.id)])
.sort (({id: a}, {id: b}) => a - b)
const people = [{name: 'Alice', id: 1, parents: []}, {name: 'Bob', id: 2, parents: []}, {name: 'Charlie', id: 3, parents: []}, {name: 'Denise', id: 4, parents: [1, 2]}, {name: 'Edward', id: 5, parents: [1, 2]}, {name: 'Francine', id: 6, parents: []}, {name: 'George', id: 7, parents: []}, {name: 'Helen', id: 8, parents: [3, 4]}, {name: 'Irwin', id: 9, parents: [3, 4]}, {name: 'Jean', id: 10, parents: []}, {name: 'Kayleigh', id: 11, parents: [5, 6]}, {name: 'Leroy', id: 12, parents: [9, 10]}]
people .forEach (
({name, id}) => console .log (
`${name} ==> [${getDescendents(people, id).map(({name}) => name).join(', ')}]`
)
)
console .log ('Denise\'s descendents: ', getDescendents (people, 4))
I have an array of objects in JSON and want to change one object's properties.
For example, assume I have a key field that is unique, plus amount and name props.
My approach is to find the object in the list with findIndex or map, remove it, make a new object, and push that. Is this a good way?
Can you recommend a better approach or functions?
Lenses might be the canonical way to deal with this, although Ramda has a number of alternatives.
const people = [
{id: 1, name: 'fred', age: 28},
{id: 2, name: 'wilma', age: 25},
{id: 3, name: 'barney', age: 27},
{id: 4, name: 'betty', age: 29},
]
const personIdx = name => findIndex(propEq('name', name), people)
const ageLens = idx => lensPath([idx, 'age'])
const wLens = ageLens(personIdx('wilma'))
const newPeople = over(wLens, age => age + 1, people)
//=> [
// {id: 1, name: 'fred', age: 28},
// {id: 2, name: 'wilma', age: 26},
// {id: 3, name: 'barney', age: 27},
// {id: 4, name: 'betty', age: 29},
// ]
Note that although newPeople is a brand new array, it shares as much as it can with the existing people. For instance, newPeople[3] === people[3] //=> true.
Also note that as well as updating a value through this lens using over, we can simply fetch the value using view:
view(wLens, people) //=> 25
Or we could set it to a fixed value with set:
set(wLens, 42, people) //=> new version of `people` with wilma's age at 42
Finally, note that lenses compose. We could have also written this:
const ageLens = idx => compose(lensIndex(idx), lensProp('age'))
Lens composition can be very powerful.
You can see this in action on the Ramda REPL.
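As an aside, one of the non-lens alternatives mentioned above could look like this; it is only a sketch, using the standard Ramda functions map, when, propEq, evolve, and inc:
// Map over the list, transforming only the matching record
const newPeople2 = map (when (propEq ('name', 'wilma'), evolve ({age: inc})), people)
// newPeople2[1].age //=> 26; every other element is the same reference as in `people`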
Something like this maybe?
var org = [
  { name: "one", age: 1 },
  { name: "two", age: 2 },
];

var newArray = org.map((x, index) =>
  index === 1
    ? Object.assign({}, x, { name: "new name" })
    : x
);