change a prop in a list of objects in Ramda.js - JavaScript

I have an array of objects (from JSON) and want to change one object's properties.
For example, assume each object has a unique key field plus amount and name props.
My approach is to find the object in the list with findIndex or map, remove it, build a new object, and push that back in. Is this a good way?
Can you recommend a better approach or functions?

Lenses might be the canonical way to deal with this, although Ramda has a number of alternatives.
const people = [
  {id: 1, name: 'fred', age: 28},
  {id: 2, name: 'wilma', age: 25},
  {id: 3, name: 'barney', age: 27},
  {id: 4, name: 'betty', age: 29},
]

const personIdx = name => findIndex(propEq('name', name), people)
const ageLens = idx => lensPath([idx, 'age'])
const wLens = ageLens(personIdx('wilma'))

const newPeople = over(wLens, age => age + 1, people)
//=> [
//   {id: 1, name: 'fred', age: 28},
//   {id: 2, name: 'wilma', age: 26},
//   {id: 3, name: 'barney', age: 27},
//   {id: 4, name: 'betty', age: 29},
// ]
Note that although newPeople is a brand new object, it shares as much as it can with the existing people. For instance, newPeople[3] === people[3] //=> true.
Also note that as well as adjusting a property with this lens using over, we could simply fetch the value using view:
view(wLens, people) //=> 25
Or we could set it to a fixed value with set:
set(wLens, 42, people) //=> new version of `people` with wilma's age at 42
Finally, note that lenses compose. We could have also written this:
const ageLens = idx => compose(lensIndex(idx), lensProp('age'))
Lens composition can be very powerful.
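For instance, a quick sketch of the composed version in action (wilmaAgeLens is just an illustrative name; this assumes the same people array and Ramda functions in scope):

const wilmaAgeLens = compose(lensIndex(1), lensProp('age'))
view(wilmaAgeLens, people)                 //=> 25
over(wilmaAgeLens, age => age + 1, people) //=> wilma's age becomes 26

Note that, unlike ordinary function composition, composed lenses read left to right: lensIndex(1) focuses on the element first, then lensProp('age') within it.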
You can see this in action on the Ramda REPL.

Something like this maybe?
var org = [
  {name: "one", age: 1},
  {name: "two", age: 2}
];

var newArray = org.map((x, index) =>
  index === 1
    ? Object.assign({}, x, {name: "new name"})
    : x
);
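The same transformation with object spread syntax (ES2018), sketched as an equivalent (newArray2 is just an illustrative name):

var newArray2 = org.map((x, index) =>
  index === 1 ? {...x, name: "new name"} : x
);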

Related

How do I convert an array of objects to an object of objects? [duplicate]

For example, I have the following array of objects:
[{id: 1, name: 'Hana', age: 30}, {id: 2, name: 'Sana', age: 20}, {id: 3, name: 'Kana', age: 30}]
I want to convert it to an object of objects as following:
{0: {id: 1, name: 'Hana', age: 30}, 1: {id: 2, name: 'Sana', age: 20}, 2: {id: 3, name: 'Kana', age: 30}}
Using Object's built-in assign method, you can achieve this:
Object.assign({}, yourArray);
No need to iterate over the array unnecessarily.
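For the sample array above, a quick sketch of what this produces (array indices become the keys; input is just an illustrative name):

const input = [{id: 1, name: 'Hana', age: 30}, {id: 2, name: 'Sana', age: 20}, {id: 3, name: 'Kana', age: 30}];
console.log(Object.assign({}, input));
//=> {0: {id: 1, name: 'Hana', age: 30}, 1: {id: 2, name: 'Sana', age: 20}, 2: {id: 3, name: 'Kana', age: 30}}
// Spread syntax gives the same result: {...input}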
You can easily achieve the result using a simple map function, storing each item in an object as a key/value pair:
const data = [{id:1, name: 'Hana', age: 30}, {id:2, name: 'Sana', age: 20}, {id:3, name: 'Kana', age: 30}]
const resultObj = {}
data.map((obj,index) => resultObj[index] = obj)
console.log(resultObj)
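Using map purely for its side effect works, but it throws away the array that map builds. A side-effect-free sketch using reduce instead (resultObj2 is just an illustrative name):

const resultObj2 = data.reduce((acc, obj, index) => ({...acc, [index]: obj}), {});
console.log(resultObj2);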
You can map the array keyed by a unique value (here I have taken id as the key), then merge the mapped objects into one.
Here is an example:
var arr = [
  {id: 1, name: 'Hana', age: 30},
  {id: 2, name: 'Sana', age: 20},
  {id: 3, name: 'Kana', age: 30}
];

var mapped = arr.map(item => ({
  [item.id]: item
}));
var newObj = Object.assign({}, ...mapped);
console.log(newObj);
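The same result in a single step with Object.fromEntries (ES2019), sketched here:

var newObj2 = Object.fromEntries(arr.map(item => [item.id, item]));
console.log(newObj2); //=> {1: {...}, 2: {...}, 3: {...}}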

How to use includes() or other methods to filter an array down to only the objects that have an 'age' property

For example, I have an array which contains these objects:
const people = [
  {id: 1, name: 'Michael', age: 26},
  {id: 2, name: 'Tom', age: 15},
  {id: 3, name: 'Kevin', age: 56},
  {id: 4, name: 'Christian', year: 1990},
]
Now I need to filter it into another array containing only the objects that have an "age" property (excluding the last one above). Here is my attempt:
const ages = array.filter((el, index, arr) => {
  if (arr.includes('age')) {
    return true
  }
})
console.log(ages)
console.log(ages)
JavaScript's interpreter returns undefined; what's wrong with my code? Thanks to whoever can solve it.
You could run the filter method on the array and, inside the filter condition, check each object's keys for the 'age' attribute. Here's the code:
const ages = array.filter(person => Object.keys(person).includes('age'));
console.log(ages);
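Two equivalent checks, sketched here for comparison, are the in operator and Object.hasOwn (ES2022):

const ages2 = array.filter(person => 'age' in person);
const ages3 = array.filter(person => Object.hasOwn(person, 'age'));

The in operator also matches inherited properties, while Object.hasOwn matches only own properties; for plain data objects like these they behave the same.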

declarative loop vs imperative loop

I am trying to switch my programming style from imperative to declarative, but there is a concept that bugs me: performance when it comes to loops. For example, I have the original DATA, and after manipulating it I wish to get three expected outcomes: itemsHash, namesHash, and rangeItemsHash.
// original data
const DATA = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
]
...
// expected outcome
// itemsHash => {
// 1: {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
// 2: {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
// 3: {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
// }
// namesHash => {1: 'Alan', 2: 'Ben', 3: 'Clara'}
// rangeItemsHash => {
// minor: [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}],
// junior: [{id: 2, name: 'Ben', date: '1980-02-02', age: 41}],
// senior: [{id: 3, name: 'Clara', date: '1959-03-03', age: 61}],
// }
// imperative way
const itemsHash = {}
const namesHash = {}
const rangeItemsHash = {}

DATA.forEach(person => {
  itemsHash[person.id] = person;
  namesHash[person.id] = person.name;
  if (person.age > 60) {
    if (typeof rangeItemsHash['senior'] === 'undefined') {
      rangeItemsHash['senior'] = []
    }
    rangeItemsHash['senior'].push(person)
  } else if (person.age > 21) {
    if (typeof rangeItemsHash['junior'] === 'undefined') {
      rangeItemsHash['junior'] = []
    }
    rangeItemsHash['junior'].push(person)
  } else {
    if (typeof rangeItemsHash['minor'] === 'undefined') {
      rangeItemsHash['minor'] = []
    }
    rangeItemsHash['minor'].push(person)
  }
})
// declarative way
const itemsHash = R.indexBy(R.prop('id'))(DATA);
const namesHash = R.compose(R.map(R.prop('name')), R.indexBy(R.prop('id')))(DATA);

const gt21 = R.gt(R.__, 21);
const lte60 = R.lte(R.__, 60);
const isMinor = R.lt(R.__, 21);
const isJunior = R.both(gt21, lte60);
const isSenior = R.gt(R.__, 60);
const groups = {minor: isMinor, junior: isJunior, senior: isSenior};
const rangeItemsHash = R.map(method => R.filter(R.compose(method, R.prop('age')))(DATA))(groups)
To achieve the expected outcome, the imperative version loops only once, while the declarative version loops at least three times (once each for itemsHash, namesHash, and rangeItemsHash). Which one is better? Is there any performance trade-off?
I have several responses to this.
First, have you tested to know that performance is a problem? Far too much performance work is done on code that is not even close to being a bottleneck in an application. This often happens at the expense of code simplicity and clarity. So my usual rule is to write the simple and obvious code first, trying not to be stupid about performance, but never worrying overmuch about it. Then, if my application is unacceptably slow, benchmark it to find what parts are causing the largest issues, then optimize those. I've rarely had those places be the equivalent of looping three times rather than one. But of course it could happen.
If it does, and you really need to do this in a single loop, it's not terribly difficult to do this on top of a reduce call. We could write something like this:
// helper function
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'
// main function
const convert = (people) =>
  people.reduce(({itemsHash, namesHash, rangeItemsHash}, person, _, __, group = ageGroup(person)) => ({
    itemsHash: {...itemsHash, [person.id]: person},
    namesHash: {...namesHash, [person.id]: person.name},
    rangeItemsHash: {...rangeItemsHash, [group]: [...(rangeItemsHash[group] || []), person]}
  }), {itemsHash: {}, namesHash: {}, rangeItemsHash: {}})
// sample data
const data = [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}, {id: 2, name: 'Ben', date: '1980-02-02', age: 41}, {id: 3, name: 'Clara', date: '1959-03-03', age: 61}]
// demo
console .log (JSON .stringify (
convert (data)
, null, 4))
(You can remove the JSON .stringify call to demonstrate that the references are shared between the various output hashes.)
There are two directions I might go from here to clean up this code.
The first would be to use Ramda. It has some functions that would help simplify a few things here. Using R.reduce, we could eliminate the annoying placeholder parameters that I use to allow me to add the default parameter group to the reduce signature, and maintain expressions-over-statements style coding. (We could alternatively do something with R.call.) And using evolve together with functions like assoc and over, we can make this more declarative like this:
// helper function
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'
// main function
const convert = (people) =>
  reduce(
    (acc, person, group = ageGroup(person)) => evolve({
      itemsHash: assoc(person.id, person),
      namesHash: assoc(person.id, person.name),
      rangeItemsHash: over(lensProp(group), append(person))
    })(acc),
    {itemsHash: {}, namesHash: {}, rangeItemsHash: {minor: [], junior: [], senior: []}},
    people
  )
// sample data
const data = [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}, {id: 2, name: 'Ben', date: '1980-02-02', age: 41}, {id: 3, name: 'Clara', date: '1959-03-03', age: 61}]
// demo
console .log (JSON .stringify (
convert (data)
, null, 4))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.js"></script>
<script> const {reduce, evolve, assoc, over, lensProp, append} = R </script>
A slight downside to this version over the previous one is the need to predefine the categories senior, junior, and minor in the accumulator. We could certainly write an alternative to lensProp that somehow deals with default values, but that would take us further afield.
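A hypothetical sketch of such a helper, built from Ramda's lens, propOr, and assoc (lensPropOr is not a Ramda function):

const lensPropOr = (def, k) => lens(propOr(def, k), assoc(k))
// over(lensPropOr([], group), append(person)) would then append without
// predefining minor/junior/senior in the accumulator.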
The other direction I might go is to note that there is still one potentially serious performance problem in the code, one that Rich Snapp called the reduce ({...spread}) anti-pattern. To solve that, we might want to mutate our accumulator object in the reduce callback. Ramda -- by its very philosophic nature -- will not help you with this. But we can define some helper functions that will clean up our code at the same time we address this issue, with something like this:
// utility functions
const push = (x, xs) => ((xs.push(x)), xs)
const put = (k, v, o) => ((o[k] = v), o)
const appendTo = (k, v, o) => put (k, push (v, o[k] || []), o)
// helper function
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'
// main function
const convert = (people) =>
  people.reduce(({itemsHash, namesHash, rangeItemsHash}, person, _, __, group = ageGroup(person)) => ({
    itemsHash: put(person.id, person, itemsHash),
    namesHash: put(person.id, person.name, namesHash),
    rangeItemsHash: appendTo(group, person, rangeItemsHash)
  }), {itemsHash: {}, namesHash: {}, rangeItemsHash: {}})
// sample data
const data = [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}, {id: 2, name: 'Ben', date: '1980-02-02', age: 41}, {id: 3, name: 'Clara', date: '1959-03-03', age: 61}]
// demo
console .log (JSON .stringify (
convert (data)
, null, 4))
But in the end, as already suggested, I would not do this unless performance was provably a problem. I think it's much nicer with Ramda code like this:
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'
const convert = applySpec({
  itemsHash: indexBy(prop('id')),
  namesHash: compose(fromPairs, map(props(['id', 'name']))),
  rangeItemsHash: groupBy(ageGroup)
})
const data = [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}, {id: 2, name: 'Ben', date: '1980-02-02', age: 41}, {id: 3, name: 'Clara', date: '1959-03-03', age: 61}]
console .log (JSON .stringify(
convert (data)
, null, 4))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.js"></script>
<script> const {applySpec, indexBy, prop, compose, fromPairs, map, props, groupBy} = R </script>
Here we might want -- for consistency's sake -- to make ageGroup point-free and/or inline it in the main function. That's not hard, and another answer gave an example of that. I personally find it more readable like this. (There's also probably a cleaner version of namesHash, but I'm out of time.)
This version loops three times, exactly what you are worried about. There are times when that might be a problem. But I wouldn't spend much effort on that unless it's a demonstrable problem. Clean code is a useful goal on its own.
Similar to how .map(f).map(g) is equivalent to .map(compose(g, f)), you can compose reducers to ensure a single pass gives you all the results.
Writing declarative code does not really have anything to do with the decision to loop once or multiple times.
// Reducer logic for all 3 values you're interested in

// id: person
const idIndexReducer = (idIndex, p) =>
  ({ ...idIndex, [p.id]: p });

// id: name
const idNameIndexReducer = (idNameIndex, p) =>
  ({ ...idNameIndex, [p.id]: p.name });

// Age
const ageLabel = ({ age }) => age > 60 ? "senior" : age > 40 ? "medior" : "junior";
const ageGroupReducer = (ageGroups, p) => {
  const ageKey = ageLabel(p);
  return {
    ...ageGroups,
    [ageKey]: (ageGroups[ageKey] || []).concat(p)
  }
}

// Combine the reducers
const seed = { idIndex: {}, idNameIndex: {}, ageGroups: {} };
const reducer = ({ idIndex, idNameIndex, ageGroups }, p) => ({
  idIndex: idIndexReducer(idIndex, p),
  idNameIndex: idNameIndexReducer(idNameIndex, p),
  ageGroups: ageGroupReducer(ageGroups, p)
})

const DATA = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
]

// Loop once
console.log(
  JSON.stringify(DATA.reduce(reducer, seed), null, 2)
);
Subjective part: is it worth it? I don't think so. I like simple code, and in my own experience, going from one loop to three when working with limited data sets is usually unnoticeable.
So, if using Ramda, I'd stick to:
const { prop, indexBy, map, groupBy, pipe } = R;

const DATA = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
];

const byId = indexBy(prop("id"), DATA);
const nameById = map(prop("name"), byId);
const ageGroups = groupBy(
  pipe(
    prop("age"),
    age => age > 60 ? "senior" : age > 40 ? "medior" : "junior"
  ),
  DATA
);

console.log(JSON.stringify({ byId, nameById, ageGroups }, null, 2))
<script src="https://cdn.jsdelivr.net/npm/ramda#0.27.1/dist/ramda.min.js"></script>

Simple JavaScript Map - Using an Object as the Return

I'm studying the map function and tried to make a contrived example which I thought would work. This code works fine:
let students = [{name: 'Susan', grades: [88, 38, 28]}, {name: 'Robert', grades: [28,97, 17]}];
let newStudents = students.map((el) => el.name);
console.log(newStudents); // [ 'Susan', 'Robert' ]
But what I really wanted was the following in the map function:
let newStudents = students.map((el) => {name: el.name});
// [ undefined, undefined ]
// I assumed to get back the following: [ {name: 'Susan'}, {name: 'Robert'} ]
Why is using an object in the return portion of the map function not allowed?
You need to wrap the object literal in parentheses. Otherwise the braces are parsed as the arrow function's block body, where name: is treated as a label and nothing is returned, hence the undefined values.
let newStudents = students.map((el) => ({name: el.name}));
                                       ^               ^
let students = [{name: 'Susan', grades: [88, 38, 28]}, {name: 'Robert', grades: [28,97, 17]}];
let newStudents = students.map((el) => ({name: el.name}));
console.log(newStudents);

How to modify a subset of data without using variables

Using functional JavaScript libraries like Underscore, Lodash, Ramda, or Immutable.js, if I have some (semi-accurate) data like this:
var data = {
  people: [
    {name: 'Vishwanathan Anand', age: 46},
    {name: 'Garry Kasparov', age: 52},
    {name: 'Magnus Carlsen', age: 25},
  ],
  computers: [
    {name: 'Deep Blue', age: 26},
    {name: 'Deep Fritz', age: 21},
    {name: 'Deep Thought', age: 28},
  ]
}
I wish to transform it to
var data = {
  people: [
    {name: 'Vishwanathan Anand', age: 46, rank: 0},
    {name: 'Garry Kasparov', age: 52, rank: 1},
    {name: 'Magnus Carlsen', age: 25, rank: 2},
  ],
  computers: [
    {name: 'Deep Blue', age: 26},
    {name: 'Deep Fritz', age: 21},
    {name: 'Deep Thought', age: 28},
  ]
}
Note how only the people substructure got rank.
I know I can,
_.extend({
  people: _.map(data.people, (p, i) => {
    p.rank = i;
    return p;
  })
}, {
  computers: data.computers
})
But what if I need to do this without using any variables (no more access to data!) using underscore's chain?
Something like
_.chain(data).subset('people').map((p, i) => {
  p.rank = i;
  return p;
})
NOTE This is a real problem and not a matter of convenience. I am working on a project that involves creating a sort of environment for functional operators and variables are not allowed.
It seems Underscore and the like operate on the entire structure (array/list). Is there any way I can ask it to operate on a substructure while preserving the rest?
This solution is a bit unpleasant but it works for this case.
_.chain(data)
  .mapObject((value, key) => {
    if (key === 'people') {
      return value.map((p, i) => _.extend(p, {rank: i}));
    } else {
      return value;
    }
  })
  .value();
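Note that _.extend(p, {rank: i}) mutates each original person object in place. If the source objects must stay untouched, a sketch of a copying variant:

return value.map((p, i) => _.extend({}, p, {rank: i}));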
With Ramda you can use R.evolve to create a function that accepts a key and a callback (cb), and maps the items at that key to the required form:
const { evolve, addIndex, map } = R
const mapPart = (cb, key) => evolve({
  [key]: addIndex(map)(cb)
})
const data = {"people":[{"name":"Vishwanathan Anand","age":46},{"name":"Garry Kasparov","age":52},{"name":"Magnus Carlsen","age":25}],"computers":[{"name":"Deep Blue","age":26},{"name":"Deep Fritz","age":21},{"name":"Deep Thought","age":28}]}
const result = mapPart((o, rank) => ({ ...o, rank }), 'people')(data)
console.log(result)
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>
