For example, I have the following array of objects:
[{id: 1, name: 'Hana', age: 30}, {id: 2, name: 'Sana', age: 20}, {id: 3, name: 'Kana', age: 30}]
I want to convert it to an object of objects, as follows:
{0: {id: 1, name: 'Hana', age: 30}, 1: {id: 2, name: 'Sana', age: 20}, 2: {id: 3, name: 'Kana', age: 30}}
You can achieve this with Object's built-in assign method:
Object.assign({}, yourArray);
There is no need to iterate over the array at all: Object.assign copies the array's own enumerable properties, so its indices become the keys of the resulting object.
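For instance, a quick sketch with the sample data from the question:
const arr = [{id: 1, name: 'Hana', age: 30}, {id: 2, name: 'Sana', age: 20}, {id: 3, name: 'Kana', age: 30}]

// the array's indices (0, 1, 2) become the object's keys
const obj = Object.assign({}, arr)
console.log(obj) // {0: {id: 1, ...}, 1: {id: 2, ...}, 2: {id: 3, ...}}
Spread syntax ({...arr}) gives the same result.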
You can easily achieve the result with a single pass over the array, storing each item in an object under its index:
const data = [{id: 1, name: 'Hana', age: 30}, {id: 2, name: 'Sana', age: 20}, {id: 3, name: 'Kana', age: 30}]

const resultObj = {}
// forEach is the idiomatic choice here, since the array map would return is discarded
data.forEach((obj, index) => resultObj[index] = obj)
console.log(resultObj)
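If you'd rather build the object in a single expression without an external accumulator, a minimal alternative (assuming ES2019's Object.fromEntries is available):
const data = [{id: 1, name: 'Hana', age: 30}, {id: 2, name: 'Sana', age: 20}, {id: 3, name: 'Kana', age: 30}]

// build [index, object] pairs and assemble them into an object
const resultObj = Object.fromEntries(data.map((obj, index) => [index, obj]))
console.log(resultObj)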
You can map over the array, wrapping each item in an object keyed by a unique value (in this case I have taken id as the key), and then merge those single-key objects into one. Note that this keys the result by id (1, 2, 3) rather than by array index (0, 1, 2).
Here is an example:
var arr = [{
  id: 1,
  name: 'Hana',
  age: 30
}, {
  id: 2,
  name: 'Sana',
  age: 20
}, {
  id: 3,
  name: 'Kana',
  age: 30
}]

var mapped = arr.map(item => ({
  [item.id]: item
}));

var newObj = Object.assign({}, ...mapped);
console.log(newObj);
I am trying to switch my programming style from imperative to declarative, but one concept keeps bugging me: loop performance. For example, I have the original DATA below, and after manipulating it I want to produce three results: itemsHash, namesHash, and rangeItemsHash.
// original data
const DATA = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
]
...
// expected outcome
// itemsHash => {
//   1: {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
//   2: {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
//   3: {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
// }
// namesHash => {1: 'Alan', 2: 'Ben', 3: 'Clara'}
// rangeItemsHash => {
//   minor: [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}],
//   junior: [{id: 2, name: 'Ben', date: '1980-02-02', age: 41}],
//   senior: [{id: 3, name: 'Clara', date: '1959-03-03', age: 61}],
// }
// imperative way
const itemsHash = {}
const namesHash = {}
const rangeItemsHash = {}

DATA.forEach(person => {
  itemsHash[person.id] = person;
  namesHash[person.id] = person.name;

  if (person.age > 60) {
    if (typeof rangeItemsHash['senior'] === 'undefined') {
      rangeItemsHash['senior'] = []
    }
    rangeItemsHash['senior'].push(person)
  } else if (person.age > 21) {
    if (typeof rangeItemsHash['junior'] === 'undefined') {
      rangeItemsHash['junior'] = []
    }
    rangeItemsHash['junior'].push(person)
  } else {
    if (typeof rangeItemsHash['minor'] === 'undefined') {
      rangeItemsHash['minor'] = []
    }
    rangeItemsHash['minor'].push(person)
  }
})
// declarative way
const itemsHash = R.indexBy(R.prop('id'))(DATA);
const namesHash = R.compose(R.map(R.prop('name')), R.indexBy(R.prop('id')))(DATA);

const gt21 = R.gt(R.__, 21);     // age > 21
const lte60 = R.lte(R.__, 60);   // age <= 60
const isMinor = R.lte(R.__, 21); // age <= 21, matching the imperative else-branch
const isJunior = R.both(gt21, lte60);
const isSenior = R.gt(R.__, 60); // age > 60

const groups = {minor: isMinor, junior: isJunior, senior: isSenior};
const rangeItemsHash = R.map(method => R.filter(R.compose(method, R.prop('age')))(DATA))(groups)
To achieve the expected outcome, the imperative version loops only once, while the declarative version loops at least three times (once each for itemsHash, namesHash, and rangeItemsHash). Which one is better? Is there a performance trade-off?
I have several responses to this.
First, have you tested to know that performance is a problem? Far too much performance work is done on code that is not even close to being a bottleneck in an application. This often happens at the expense of code simplicity and clarity. So my usual rule is to write the simple and obvious code first, trying not to be stupid about performance, but never worrying overmuch about it. Then, if my application is unacceptably slow, benchmark it to find what parts are causing the largest issues, then optimize those. I've rarely had those places be the equivalent of looping three times rather than one. But of course it could happen.
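(As a minimal sketch of what such a check might look like, using console.time on a larger generated dataset; convertImperative and convertDeclarative are hypothetical wrappers around the two versions in the question, not functions defined anywhere above:)
// generate a dataset large enough for timing differences to show up
const bigData = Array.from({length: 100000}, (_, i) => ({id: i, name: `person${i}`, date: '2000-01-01', age: i % 90}))

console.time('imperative, one pass')
convertImperative(bigData) // hypothetical wrapper around the forEach version
console.timeEnd('imperative, one pass')

console.time('declarative, three passes')
convertDeclarative(bigData) // hypothetical wrapper around the three Ramda expressions
console.timeEnd('declarative, three passes')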
If it does, and you really need to produce all three results in a single loop, it's not terribly difficult to build this on top of a reduce call. We could write something like this:
// helper function
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'

// main function
const convert = (people) =>
  people.reduce(({itemsHash, namesHash, rangeItemsHash}, person, _, __, group = ageGroup(person)) => ({
    itemsHash: {...itemsHash, [person.id]: person},
    namesHash: {...namesHash, [person.id]: person.name},
    rangeItemsHash: {...rangeItemsHash, [group]: [...(rangeItemsHash[group] || []), person]}
  }), {itemsHash: {}, namesHash: {}, rangeItemsHash: {}})

// sample data
const data = [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}, {id: 2, name: 'Ben', date: '1980-02-02', age: 41}, {id: 3, name: 'Clara', date: '1959-03-03', age: 61}]

// demo
console.log(JSON.stringify(convert(data), null, 4))
(You can remove the JSON.stringify call to demonstrate that the references are shared between the various output hashes.)
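For instance, this quick check (my addition, reusing convert and data from the snippet above) should log true, since the hashes store references to the same person objects rather than copies:
const result = convert(data)

// Alan (age 0) is stored under his id in itemsHash and in rangeItemsHash.minor;
// both slots hold the very same object reference
console.log(result.itemsHash[1] === result.rangeItemsHash.minor[0]) // true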
There are two directions I might go from here to clean up this code.
The first would be to use Ramda. It has some functions that would help simplify a few things here. Using R.reduce, we could eliminate the annoying placeholder parameters that I use to allow me to add the default parameter group to the reduce signature, and maintain an expressions-over-statements coding style. (We could alternatively do something with R.call.) And using evolve together with functions like assoc and over, we can make this more declarative, like so:
// helper function
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'

// main function
const convert = (people) =>
  reduce(
    (acc, person, group = ageGroup(person)) => evolve({
      itemsHash: assoc(person.id, person),
      namesHash: assoc(person.id, person.name),
      rangeItemsHash: over(lensProp(group), append(person))
    })(acc),
    {itemsHash: {}, namesHash: {}, rangeItemsHash: {minor: [], junior: [], senior: []}},
    people
  )

// sample data
const data = [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}, {id: 2, name: 'Ben', date: '1980-02-02', age: 41}, {id: 3, name: 'Clara', date: '1959-03-03', age: 61}]

// demo
console.log(JSON.stringify(convert(data), null, 4))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.js"></script>
<script> const {reduce, evolve, assoc, over, lensProp, append} = R </script>
A slight downside to this version over the previous one is the need to predefine the categories senior, junior, and minor in the accumulator. We could certainly write an alternative to lensProp that somehow deals with default values, but that would take us further afield.
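For what it's worth, one such alternative might look like this (lensPropOr is my own name, not a Ramda built-in; it simply combines R.lens, R.propOr, and R.assoc):
// a lens like lensProp, except its getter falls back to a default value
const lensPropOr = (def, key) => lens(propOr(def, key), assoc(key))

// which would let the accumulator start as plain empty hashes again:
// rangeItemsHash: over(lensPropOr([], group), append(person))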
The other direction I might go is to note that there is still one potentially serious performance problem in the code, one that Rich Snapp called the reduce({...spread}) anti-pattern: spreading the accumulator on every iteration copies it in full, turning a single pass into quadratic work. To solve that, we might want to mutate our accumulator object in the reduce callback. Ramda, by its very philosophic nature, will not help you with this. But we can define some helper functions that will clean our code up at the same time we address this issue, with something like this:
// utility functions
const push = (x, xs) => ((xs.push(x)), xs) // append x to xs in place, returning xs
const put = (k, v, o) => ((o[k] = v), o)   // set o[k] = v in place, returning o
const appendTo = (k, v, o) => put(k, push(v, o[k] || []), o)

// helper function
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'

// main function
const convert = (people) =>
  people.reduce(({itemsHash, namesHash, rangeItemsHash}, person, _, __, group = ageGroup(person)) => ({
    itemsHash: put(person.id, person, itemsHash),
    namesHash: put(person.id, person.name, namesHash),
    rangeItemsHash: appendTo(group, person, rangeItemsHash)
  }), {itemsHash: {}, namesHash: {}, rangeItemsHash: {}})

// sample data
const data = [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}, {id: 2, name: 'Ben', date: '1980-02-02', age: 41}, {id: 3, name: 'Clara', date: '1959-03-03', age: 61}]

// demo
console.log(JSON.stringify(convert(data), null, 4))
But in the end, as already suggested, I would not do this unless performance was provably a problem. I think it's much nicer with Ramda code like this:
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'

const convert = applySpec({
  itemsHash: indexBy(prop('id')),
  namesHash: compose(fromPairs, map(props(['id', 'name']))),
  rangeItemsHash: groupBy(ageGroup)
})

const data = [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}, {id: 2, name: 'Ben', date: '1980-02-02', age: 41}, {id: 3, name: 'Clara', date: '1959-03-03', age: 61}]

console.log(JSON.stringify(convert(data), null, 4))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.js"></script>
<script> const {applySpec, indexBy, prop, compose, fromPairs, map, props, groupBy} = R </script>
Here we might want -- for consistency's sake -- to make ageGroup point-free and/or inline it in the main function. That's not hard, and another answer gave an example of that. I personally find it more readable like this. (There's also probably a cleaner version of namesHash, but I'm out of time.)
This version loops three times, exactly what you are worried about. There are times when that might be a problem. But I wouldn't spend much effort on that unless it's a demonstrable problem. Clean code is a useful goal on its own.
Similar to how .map(f).map(g) is equivalent to .map(compose(g, f)), you can compose reducers to ensure that a single pass gives you all the results.
Writing declarative code does not really have anything to do with the decision to loop once or multiple times.
// Reducer logic for all 3 values you're interested in

// id: person
const idIndexReducer = (idIndex, p) =>
  ({ ...idIndex, [p.id]: p });

// id: name
const idNameIndexReducer = (idNameIndex, p) =>
  ({ ...idNameIndex, [p.id]: p.name });

// age group
const ageLabel = ({ age }) => age > 60 ? "senior" : age > 40 ? "medior" : "junior";
const ageGroupReducer = (ageGroups, p) => {
  const ageKey = ageLabel(p);

  return {
    ...ageGroups,
    [ageKey]: (ageGroups[ageKey] || []).concat(p)
  };
}

// Combine the reducers
const seed = { idIndex: {}, idNameIndex: {}, ageGroups: {} };

const reducer = ({ idIndex, idNameIndex, ageGroups }, p) => ({
  idIndex: idIndexReducer(idIndex, p),
  idNameIndex: idNameIndexReducer(idNameIndex, p),
  ageGroups: ageGroupReducer(ageGroups, p)
})

const DATA = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
]

// Loop once
console.log(
  JSON.stringify(DATA.reduce(reducer, seed), null, 2)
);
Subjective part: is it worth it? I don't think so. I like simple code, and in my own experience, going from one loop to three when working with limited data sets is usually unnoticeable.
So, if using Ramda, I'd stick to:
const { prop, indexBy, map, groupBy, pipe } = R;

const DATA = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
];

const byId = indexBy(prop("id"), DATA);
const nameById = map(prop("name"), byId);
const ageGroups = groupBy(
  pipe(
    prop("age"),
    age => age > 60 ? "senior" : age > 40 ? "medior" : "junior"
  ),
  DATA
);

console.log(JSON.stringify({ byId, nameById, ageGroups }, null, 2))
<script src="https://cdn.jsdelivr.net/npm/ramda#0.27.1/dist/ramda.min.js"></script>
Using a functional JavaScript library like Underscore, Lodash, Ramda, or Immutable.js, if I have some (semi-accurate) data like this:
var data = {
  people: [
    {name: 'Vishwanathan Anand', age: 46},
    {name: 'Garry Kasparov', age: 52},
    {name: 'Magnus Carlsen', age: 25},
  ],
  computers: [
    {name: 'Deep Blue', age: 26},
    {name: 'Deep Fritz', age: 21},
    {name: 'Deep Thought', age: 28},
  ]
}
I wish to transform it to
var data = {
  people: [
    {name: 'Vishwanathan Anand', age: 46, rank: 0},
    {name: 'Garry Kasparov', age: 52, rank: 1},
    {name: 'Magnus Carlsen', age: 25, rank: 2},
  ],
  computers: [
    {name: 'Deep Blue', age: 26},
    {name: 'Deep Fritz', age: 21},
    {name: 'Deep Thought', age: 28},
  ]
}
Note how only the people substructure got a rank.
I know I can do:
_.extend({
  people: _.map(data.people, (p, i) => {
    p.rank = i;
    return p;
  })
}, {
  computers: data.computers
})
But what if I need to do this without using any variables (no more access to data!), using Underscore's chain?
Something like:
_.chain(data).subset('people').map((p, i) => {
  p.rank = i;
  return p;
})
NOTE This is a real problem and not a matter of convenience. I am working on a project that involves creating a sort of environment for functional operators, and variables are not allowed.
It seems Underscore and the like operate on the entire structure (array/list). Is there any way I can ask them to operate on a substructure while preserving the rest?
This solution is a bit unpleasant, but it works for this case.
_.chain(data)
  .mapObject((value, key) => {
    if (key === 'people') {
      return value.map((p, i) => _.extend(p, {rank: i}));
    } else {
      return value;
    }
  })
  .value();
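Note that _.extend(p, {rank: i}) mutates the original person objects in place. If that's a concern, a non-mutating variant (my tweak, not part of the answer above) builds fresh objects instead:
_.chain(data)
  .mapObject((value, key) =>
    key === 'people'
      ? value.map((p, i) => ({...p, rank: i})) // fresh objects; originals untouched
      : value)
  .value();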
With Ramda you can use R.evolve to create a function that accepts a key and a callback (cb), and maps the items under that key to the required form:
const { evolve, addIndex, map } = R

const mapPart = (cb, key) => evolve({
  [key]: addIndex(map)(cb)
})

const data = {"people": [{"name": "Vishwanathan Anand", "age": 46}, {"name": "Garry Kasparov", "age": 52}, {"name": "Magnus Carlsen", "age": 25}], "computers": [{"name": "Deep Blue", "age": 26}, {"name": "Deep Fritz", "age": 21}, {"name": "Deep Thought", "age": 28}]}

const result = mapPart((o, rank) => ({ ...o, rank }), 'people')(data)

console.log(result)
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>