I'd like to transform the following object:
{
'id-1': { prop: 'val1' },
'id-2': { prop: 'val2' },
}
To this array:
[
{ id: 'id-1', prop: 'val1' },
{ id: 'id-2', prop: 'val2' },
]
What I have done so far (it works):
R.pipe(
R.toPairs,
R.map(([id, props]) => ({
id,
...props,
}))
)
I'd like to solve it using Ramda only - if possible.
I'd suggest that solving it "using Ramda only" is a bad design goal, unless this is an exercise in learning Ramda. I'm one of the founders of Ramda and a big fan, but Ramda is only a toolkit meant to simplify your code, to make it easier to work in a certain paradigm.
That said, we could certainly write a point-free version of this using Ramda. The first thing that comes to my mind would be this*:
const transform = pipe(
toPairs,
map(apply(useWith(merge, [objOf('id'), identity])))
)
const data = {'id-1': { prop: 'val1' }, 'id-2': { prop: 'val2'}}
console.log(transform(data))
<script src="https://bundle.run/ramda#0.26.1"></script><script>
const {pipe, toPairs, map, apply, useWith, merge, objOf, identity} = ramda </script>
But this is less readable than your original, not more.
This code:
const transform = pipe(
toPairs,
map(([id, props]) => ({...props, id}))
)
is crystal-clear, whereas that Ramda version requires one to understand Ramda-specific useWith and objOf and slightly obscure apply -- I would hope that map, merge, and identity are clear.
In fact, this code is simple enough that I might well write it as a one-liner, in which case, I switch to compose over pipe:
const transform = compose(map(([id, props]) => ({...props, id})), toPairs)
But I probably wouldn't do so, as I find that multi-line pipe version easier to read.
Finally note that we can do this in a fairly readable way without any Ramda tools at all:
const transform = (data) =>
Object.entries(data).map(
([id, props]) => ({...props, id})
)
If I were already using Ramda in my code-base, I would prefer the pipe version above to this; I think it's somewhat easier to read. But I would never introduce Ramda into a project only for that fairly minor difference.
I worry that people make a fetish over point-free code. It's a tool. Use it when it makes your code more understandable. Skip it when it makes your code more obscure. Here I think you're starting from quite readable code; it's difficult to improve on it.
*Note that identity here is not strictly necessary; you can skip it with no harm. The function generated by useWith without that identity will incorrectly report an arity of 1, but since the function is immediately wrapped with apply and then further placed in the context of receiving a two-element array from toPairs, there is nothing which depends upon that arity. But I find it a good habit to include it regardless.
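To make the footnote's arity point concrete, here is a quick check (assuming the same Ramda functions destructured above):
// Without identity, useWith reports an arity of 1; with it, 2:
useWith(merge, [objOf('id')]).length            //=> 1
useWith(merge, [objOf('id'), identity]).length  //=> 2
// Either way, apply spreads the pair from toPairs correctly:
apply(useWith(merge, [objOf('id'), identity]))(['id-1', { prop: 'val1' }])
//=> { id: 'id-1', prop: 'val1' }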
What about this? Probably less verbose!
const toArray = R.pipe(
R.toPairs,
R.map(
R.apply(R.assoc('id')),
),
);
const data = {
'id-1': { prop: 'val1' },
'id-2': { prop: 'val2' },
};
console.log('result', toArray(data));
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>
I need suggestions on improving a UI component that receives an object containing all data types (string, int, boolean, array, and object) and displays all of them.
Here is an example of the object. PS: this object is also provided in the codesandbox link.
const data = {
foo1: {},
foo9: true,
foo3: "some random string",
foo4: "some random string",
created_at: {
$date: 1637368143.618
},
sources: {
foo1: {
first_date: {
$date: 1637368143.618
}
}
},
download: {
status: "pending"
},
foo8: "some random string",
foo5: "some random string",
foo7: ["sms"],
foo10: "some random string",
foo11: {
bar5: 0,
bar3: null,
bar1: null,
bar2: 0
},
foo28: ["some random string", "some random string2"],
foo19: "some random string"
};
Currently I loop through the object and display the data in a card:
Object.entries(objSorted).map(([key, value])
But first the object is sorted so the data appears in the following order: string, int or boolean, array, and object.
This is the link to the codesandbox: here.
PS: This is more of a request for suggestions than a question; I don't have any problem with the code. I only want to know if the way I'm currently displaying this data is good and user-friendly (I don't think so; that's why I'm asking). If you think there is a better way, you can comment, edit the codesandbox, or create a new one.
Your intuition is correct! We can make this a little easier to understand by decomposing everything in a way similar to what you mentioned. What we really want is better separation of concerns. All of this hand waves details away, but the concepts will be what you take with you.
1. You have data.
2. Sort the data.
3. Loop through the outputs from step 2 and render the cards.
4. Profit?
Steps 1/2. You have data. Why not digest it before rendering? There are tons of ways of doing this, but for the sake of readability let's create a hook and be conscious of our downstream consumers. We can make our ObjectExtract component way simpler if it has well-formatted data... And we can memoize the results so that our digested data is cached and therefore doesn't recompute every render.
// useMemo comes from React: import { useMemo } from "react";
const useCardData = (data) => {
  return useMemo(() => {
    const entries = Object.entries(data)
      .map(([key, value]) => {
        let type = "unknown";
        if (Array.isArray(value)) {
          type = "array";
        } else if (typeof value === "boolean") {
          type = "boolean";
        } // ...and so on for string, number, and object
        return { type, value, key, priority: getTypeSortId(type) };
      })
      .sort((x, y) => x.priority - y.priority);
    return entries;
  }, [data]);
};
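Note that getTypeSortId is not defined in the snippet above; a minimal, hypothetical sketch of it might be:
// Hypothetical helper: lower numbers sort first, matching the
// string -> number/boolean -> array -> object ordering described above.
const getTypeSortId = (type) => {
  const priorities = { string: 0, number: 1, boolean: 1, array: 2, object: 3 };
  return priorities[type] ?? 4; // unknown types sink to the bottom
};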
We digest our data as much as we can. The key is needed later because using an index as a key is almost never what you want: when the order changes, React uses keys to track how rows should be shifted, and numeric indexes from arrays really don't describe the uniqueness of a record. Imagine sorting the records; index 1 could mean completely different things before and after the sort, and your data may or may not show up correctly. We use the object property name as our unique key, since objects do a good job of not allowing duplicate keys.
Step 3. Let's use the hook and render the data.
const CardRenderer = ({ data }) => {
  const cards = useCardData(data);
  return (
    <>
      {cards.map((thing) => {
        switch (thing.type) {
          case "boolean":
            return <BooleanCard key={thing.key} data={thing} />;
          case "string":
            return <StringCard key={thing.key} data={thing} />;
          // ...a case for each remaining type
          default:
            return null;
        }
      })}
    </>
  );
};
Since our data is well formed, we can simply loop over it with a switch. Instead of defining all cases and their results in one component, notice how we just create smaller card components to handle each specific case. Separation of concerns. The boolean card cares about what booleans look like, etc. This makes it SUPER easy to test, since you can test each card individually and then test the output of our CardRenderer to make sure its output is reasonable.
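As an illustration, one such card might look like the hypothetical sketch below (not code from the sandbox):
// Hypothetical card: only knows how to present a boolean entry.
const BooleanCard = ({ data }) => (
  <div className="card">
    <strong>{data.key}</strong>: {data.value ? "true" : "false"}
  </div>
);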
Step 4. Profit. We broke everything down into components. Each piece only cares about a specific responsibility, and that makes it easy to glue things together. It makes it composable. We can test each piece by itself and make sure it's doing the right thing. We can make some really complex things like this while keeping the complexity of each individual piece hidden away.
What I described is SOLID. The examples in the link are in PHP, but I think you'll get the gist. We can and should use the same patterns in React to build really cool things while managing the ever-growing complexity of cooler things.
Below is a JavaScript Array
[
["0x34","2","3"],
["0x35","2","3"],
["0x34","1","3"]
]
Required output:
<0x34>:2:3:1:3,<0x35>:2:3
Please help me. Thanks in advance.
Since there are already answers here, I'll throw in one which seems to do the job. But OP, please take to heart the comment to your post by charlietfl. We expect to see more effort here from those posting questions.
This is how I might do this:
const transform = (input) =>
Object .values (
input .reduce ((a, [k, ...vs]) => ({...a, [k]: `${a [k] || `<${k}>`}:${vs .join (':')}`}), {})
) .join (',')
const input = [["0x34", "2", "3"], ["0x35", "2", "3"], ["0x34", "1", "3"]]
console .log (transform (input))
We use reduce to do our grouping. In that step we also append the secondary elements from our group, so that most everything is done in there. Then we select the values of the resulting object, and join them with commas.
Even better would be to write or use a generic groupBy function as might be found in libraries such as Ramda, lodash, or Underscore, but I'll leave that to you.
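If you do want to roll your own, a bare-bones groupBy could look like this sketch (not any particular library's implementation):
const groupBy = (fn, xs) =>
  xs.reduce((acc, x) => {
    const key = fn(x);             // compute the group key for this element
    (acc[key] = acc[key] || []).push(x);  // start or extend that group's bucket
    return acc;
  }, {});

groupBy(arr => arr[0], [["0x34", "2", "3"], ["0x35", "2", "3"], ["0x34", "1", "3"]])
//=> { '0x34': [["0x34","2","3"], ["0x34","1","3"]], '0x35': [["0x35","2","3"]] }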
Update
A comment suggests a different style of coding:
[A]s a friendly reminder for coding beginners: please don't code like this, unless it's only for a project that no one else works on except you. If not, stick to basic formatting guidelines, create meaningful variable names and don't be afraid to use space to make code more readable.
While I appreciate the intention, I disagree quite strongly. Much of my recent career has involved mentoring and training junior programmers, not novices first learning to code but those in their first few years of professional programming.
One of my first goals is to get them out of their initial comfort zones by exposing them to more advanced concepts, more efficient techniques, more expressive code. The point is to get them to never think of coding as rote. If they've always done it this way, then they're not learning and growing when they do so one more time; they probably won't see the abstractions that can help lead to much better software.
As I mentioned originally, I would in practice build a solution for this atop a function like groupBy. I'm one of the founders of Ramda, and tend to think in its terms. I would probably use some of its tools for this, something like:
const transform = pipe (
groupBy (head),
map (map (tail)),
map (flatten),
toPairs,
map (([k, vs]) => `<${k}>:${vs .join (':')}`),
join (',')
)
const input = [["0x34", "2", "3"], ["0x35", "2", "3"], ["0x34", "1", "3"]]
console .log (transform (input))
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.js"></script>
<script>const {pipe, groupBy, head, map, tail, flatten, toPairs, join} = R </script>
I recommend to my students that they use such libraries, or, even better, build up their own versions of helpful utility functions in order to build their code on top of reusable abstractions.
As to the code presented, there is one thing I would in retrospect do differently regarding spacing. I would choose to break that long line into several, formatting it like this:
const transform = (input) =>
Object .values (
input .reduce ((a, [k, ...vs]) => ({
...a,
[k]: `${a [k] || `<${k}>`}:${vs .join (':')}`
}), {})
) .join (',')
But I don't think that is at all what was meant by the comment. It sounds as though the commenter was suggesting something like this:
const transform = (input) => {
const grouped = input .reduce (
(accumulator, strings) => {
const key = strings [0]
const values = strings .slice (1)
if (! accumulator [key]) {
accumulator [key] = '<' + key + '>'
}
const valuesString = values .join (':')
accumulator [key] = accumulator [key] + ':' + valuesString
return accumulator
},
{}
)
const records = Object .values (grouped)
const results = records .join (',')
return results
}
const input = [["0x34", "2", "3"], ["0x35", "2", "3"], ["0x34", "1", "3"]]
console .log (transform (input))
or something similar using a for/forEach loop in place of reduce. (If this is not what was suggested, #MauriceNino, I apologize, and would like to hear more of what you meant.) If this is what was suggested, then I do disagree.
This does a better job of explaining in detail how to calculate the value, but it's very easy to get lost in the process and forget what it is we want to achieve here. I don't want to spend the time trying to think like the computer. I would much rather have the computer try to think like me.
Take for instance the variable named grouped. Perhaps if we understood the business context better, we could come up with a better name than that, but as it stands, it was the best I could do. What do we gain by knowing, and having to keep in our minds, the variable grouped? It's defined in one statement, used in the next to calculate another temporary variable, records (again, I don't have useful names to give these things), and then ignored after that. And the same goes for records and results. All this is information crowding my head when I'm trying to understand this function.
And it gets worse with the relationship between strings, key, and values. By learning a slight bit of syntax, we can replace
(accumulator, strings) => {
const key = strings [0]
const values = strings .slice (1)
with
(accumulator, [key, ...values]) => {
and not have to try to keep strings in our head. This does reduce line-count, but much more importantly, it keeps the usage of the variables very close to their definitions.
When I write this:
input .reduce ((a, [k, ...vs]) => ({
...a,
[k]: `${a [k] || `<${k}>`}:${vs .join (':')}`
}), {})
I choose the shorter variable names a, k, and vs over the longer ones because, while they are evocative of accumulator, key, and values, they don't force the same sort of assumptions that those longer words do. They have a much greater chance of being applicable to other situations where I might write similar code. Moreover, they are right there in view; I know what they are when I encounter them because their definitions are only one or two lines up from their uses. For some related points, see John DeGoes' excellent Descriptive Variable Names: A Code Smell.
There's a possibility that the reaction was to something here that is more problematic. Rich Snapp posted a great description of something he calls The reduce ({...spread}) anti-pattern. This is a legitimate concern. My version is less performant than it could be because instead of mutating the accumulator the reduce callback returns a new object every time. This is an intentional choice, and I could have avoided it and still stuck with my expression-focused style by using a mutation and a comma operator, but I find that avoiding mutation is extremely useful. If this is found to be a hot-spot in my code-base, then I will change. But I will stick with the simpler approach first.
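That mutating alternative, sketched in the same expression style, might look like this (the comma operator returns the accumulator after the assignment):
input .reduce ((a, [k, ...vs]) =>
  ((a [k] = `${a [k] || `<${k}>`}:${vs .join (':')}`), a), {})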
And I do find my approach simpler. This is not to say that it's more familiar. I use "simple" here in the sense made famous in Rich Hickey's classic talk Simple Made Easy, of having fewer ideas woven together. That's what I try to stress with my students. Familiarity does not make your code better. Simplicity does. And I would argue that the version I first presented (or its spread-out alternative) is significantly simpler than this version.
Ciao, you could find the distinct keys (I mean 0x34, 0x35), then iterate over the input values, filtering the elements on each key. Put the values found into an array and join them to make the result string. Something like:
let input = [
["0x34", "2", "3"],
["0x35", "2", "3"],
["0x34", "1", "3"],
];
let indexes = [...new Set(input.map((val) => val[0]))];
let finalResult = [];
indexes.forEach((index) => {
let result = ["<" + index + ">"];
input
.filter((val) => val[0] === index)
.forEach((el) => {
el.forEach((val) => {
if (val !== index) result.push(val);
});
});
finalResult.push(result.join(":"));
});
console.log(finalResult.join(","));
First you could reduce it to accumulate all arrays with the same key and then join the resulting objects values like so:
const input = [["0x34","2","3"], ["0x35","2","3"], ["0x34","1","3"]];
const result = Object.entries(
input.reduce((acc, arr) => { // Add up the arrays of the same key
const key = arr[0],
values = arr.slice(1);
if(acc[key] == null)
acc[key] = values;
else
acc[key] = [...acc[key], ...values];
return acc;
}, {})
)
.map(([key, values]) => [`<${key}>`, ...values].join(':')); // map it to an array and then join it with a :
console.log(result)
// Or as a whole string
console.log(result.join(','))
reduce() goes over the items and lets you create an accumulated value (or object) out of them
Object.entries() creates an array out of an object
map() transforms an array according to the given function
So the idea in the snippet is that you group the arrays by their first value into an object using reduce(), then iterate over all the resulting key/value pairs using Object.entries(), and in the end create your desired string with map()
The join method creates and returns a new string by concatenating all of the elements in an array:
console.log(
[["0x34","2","3"],
["0x35","2","3"],
["0x34","1","3"]].map(subarray => subarray.join(':')).join(',')
);
I have a complex JSON object. I'm trying to process it to create an array that looks like:
[
[ "Name", "Spec", "Spec" ],
[ "Name", "Spec", "Spec" ]
]
This is where I am stuck:
let array = products.map((product) => {
return [
product.name,
product.attributes
.map((attribute) => (
attribute.spec
))
.reduce((accumulator, currentValue) => {
return accumulator.concat(currentValue);
}, [])
];
});
That gives the result:
[
[ "Name", [ "Spec", "Spec" ] ],
[ "Name", [ "Spec", "Spec" ] ]
]
Admittedly, I don't entirely understand the reduce method and its initialValue argument here. I know using that method can flatten the array at the top-level use of map, but at the deeper level, it seems to do nothing.
I've searched online but have only found answers that involve completely flattening deep arrays. And the flatten() method is not an option due to lack of compatibility.
Can someone please advise on how to flatten the second level only? If possible, I'd like to accomplish this through mutating the array.
You don't need the reducer there - it's only making things unnecessarily complicated. Map the attributes to their spec property, and then use spread:
const array = products.map(({ name, attributes }) => {
const specs = attributes.map(attribute => attribute.spec);
return [name, ...specs];
});
1. Why does this fail?
You put your reduce in the wrong place. You're flattening the list of specs, which was already a flat array. You want to flatten the list that has the name and the list of specs. Here is one possibility:
const array = products.map(prod => [
prod.name,
prod.attributes.map(attr => attr.spec)
].reduce((acc, curr) => acc.concat(curr), []));
2. What's a better solution?
As CertainPerformance points out, there is a simpler version, which I might write slightly differently as
const array = products.map(({name, attributes}) =>
[name, ...attributes.map(attr => attr.spec)]
);
3. What if I need to reuse flatten in other places?
Extract it from the first solution as a reusable function. This is not a full replacement for the new Array flatten method, but it might be all you need:
const flatten = arr => arr.reduce((acc, curr) => acc.concat(curr), [])
const array = products.map(prod => flatten([
prod.name,
prod.attributes.map(attr => attr.spec)
])
)
4. How does that reduce call flatten one level?
We can think of [x, y, z].reduce(fn, initial) as performing these steps:
Call fn(initial, x), yielding value a
Call fn(a, y), yielding value b
Call fn(b, z), yielding value c
Since the array is exhausted, return value c
In other words [x, y, z].reduce(fn, initial) returns fn(fn(fn(initial, x), y), z).
When fn is (acc, val) => acc.concat(val), then we can think of ['name', ['spec1', 'spec2']].reduce(fn, []) as fn(fn([], 'name'), ['spec1', 'spec2']), which is the same as ([].concat('name')).concat(['spec1', 'spec2']), which, of course is ['name', 'spec1', 'spec2'].
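You can check that chain directly in a console:
const fn = (acc, val) => acc.concat(val);
console.log(['name', ['spec1', 'spec2']].reduce(fn, []));
//=> ["name", "spec1", "spec2"]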
5. Was there anything wrong with my question?
I'm glad you asked. :-)
There was one significant failing: you didn't include any sample data. To help with this problem, one had to try to reconstruct your data formats from your code. It would have been easy enough to give a minimal example such as:
const products = [
{name: 'foo', attributes: [{spec: 'bar'}, {spec: 'baz'}]},
{name: 'oof', attributes: [{spec: 'rab'}, {spec: 'zab'}]}
]
with a matching expected output:
[
["foo", "bar", "baz"],
["oof", "rab", "zab"]
]
6. How about my output structure?
Now that you mention it, this seems a strange structure. You might have good reasons for it, but it is odd.
Arrays generally serve two purposes in Javascript. They are either arbitrary-length lists of elements of the same type or they are fixed length lists, with specific types at each index (aka tuples.)
But your structure combines both of these. They are arbitrary- (at least so it seems) length lists, where the first entry is a name, and subsequent ones are specs. While there might be justification for this, you might want to think about whether this structure is particularly useful.
7. How can I do this without mutating?
If possible, I'd like to accomplish this through mutating the array.
I refuse to take part in such horrors.
Seriously, though, immutable data makes for so much easier coding. Is there any real reason you listed that as a requirement?
Functional programming newbie here. I have this object:
{
_id: '2014d5db-55dc-4078-ae87-382c226d0785',
_source: {
phone: '00447827434313',
...
}
}
In the end I want to have it in this format:
{
id: '2014d5db-55dc-4078-ae87-382c226d0785',
phone: '00447827434313',
...
}
Basically extracting _source, and renaming _id to id.
I created the function below, which works, but I'm trying to use only Ramda's functions instead of creating new objects by hand. I assume that's the more "functional" way; let me know if it doesn't really matter.
const test = o => merge(o._source, { id: o._id })
Thanks very much
I don't think there's a particular built-in Ramda function for that. But it's not hard to write one on top of lensPath, view, and map:
const remap = R.curry((desc, obj) => R.map(path => R.view(R.lensPath(path), obj), desc));
const myExtract = remap({
id: ['_id'],
phone: ['_source', 'phone']
});
myExtract(input);
//=> {"id": "2014d5db-55dc-4078-ae87-382c226d0785", "phone": "00447827434313"}
It only works this simply if your output is described as a flat list of fields (of course their properties could themselves be objects.) But one where you pulled from nested paths and pushed to nested paths would not be too much harder to write. The user API would be uglier, though, I imagine.
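For what it's worth, here is a rough, purely illustrative sketch of such a nested variant, with a hypothetical API of [outputPath, inputPath] pairs (which is indeed uglier):
const remapDeep = R.curry((desc, obj) =>
  desc.reduce(
    (acc, [outPath, inPath]) =>
      // copy the value at inPath in obj over to outPath in the result
      R.set(R.lensPath(outPath), R.view(R.lensPath(inPath), obj), acc),
    {}
  )
);

remapDeep([
  [['id'], ['_id']],
  [['contact', 'phone'], ['_source', 'phone']]
], input);
//=> {"id": "2014d5db-55dc-4078-ae87-382c226d0785", "contact": {"phone": "00447827434313"}}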
I don't see any clean way to make this point-free and still retain readability. Perhaps someone else might manage that, but I think this is already pretty nice.
You can see this in action on the Ramda REPL.
I have heard it said that, with new (or at least better) JavaScript language options, we don't need to use forEach anymore. I wonder if this was an indirect way of advocating for functional programming. I have recently encountered an array manipulation algorithm that required the nested use of forEach and am wondering if there is a more functional way of approaching the problem. I was able to re-write the code using reduce but it ended up being longer than the forEach solution. In and of itself, slightly longer code might not be "bad", but this does make me wonder if there is perhaps an even better way to use functional programming to solve this problem.
The problem is as follows: re-work an array of arrays to produce a flattened array of all the original elements, but with each new element now paired with the number of the original sub-array it came from. e.g. Convert [[8,2,4],[5],[1,7]] into [[8,0],[2,0],[4,0],[5,1],[1,2],[7,2]].
(For readers to make sense of many of the initial critical comments to this question, the original wording of the question focused on making the code shorter. My true intention was instead to ask about a better way to approach this problem, leading to the re-worded question above.)
const oldArr = [[8,2,4],[5],[1,7]];
const newArr1 = [];
oldArr.forEach((subArr, subArrNum) => {
subArr.forEach(elmt => {newArr1.push([elmt, subArrNum]);});
});
const newArr2 = oldArr.reduce((accum1, subArr, subArrNum) => accum1.concat(
subArr.reduce((accum2, elmt) => accum2.concat([[elmt, subArrNum]]), [])
), []);
console.log(JSON.stringify(newArr1));
console.log(JSON.stringify(newArr2));
// for the sake of comparing code length, here are the exact same two
// solutions using only single letter variable names with each solution
// compressed onto a single line:
const o = oldArr;
let x,y;
x=[];o.forEach((s,n)=>{s.forEach(e=>{x.push([e,n]);});});
y=o.reduce((a,s,n)=>a.concat(s.reduce((b,e)=>b.concat([[e,n]]),[])),[]);
console.log(JSON.stringify(x));
console.log(JSON.stringify(y));
In the comments on the question, #Ryan provided a solution, labelled as newArray1 in the code below, while #Potter provided a "babel-safe" variant, labelled as newArray2 (with minor changes in variable names and removal of parentheses to make the two functions more easily comparable):
const o = [[8,2,4],[5],[1,7]];
const newArray1 = [].concat ( ...o.map((x,i)=>x.map(y=>[y,i])));
const newArray2 = [].concat.apply([], o.map((x,i)=>x.map(y=>[y,i])));
console.log(JSON.stringify(newArray1));
console.log(JSON.stringify(newArray2));
I think the reduction of clutter in this solution makes it slightly easier to understand than my original functional solution.
Note, however, that Ryan also points out in a further comment on the question a problem with this approach. He writes that [].concat(...x) (and, I presume, also its "babel-safe" variant) "...can cause a stack overflow, for one; try [].concat(...Array(1000000)). Ideally there would be a builtin along the lines of const concat = arrs => arrs.reduce((m, n) => m.concat(n), []); (but maybe more efficient)."
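Spelled out, that suggested reduce-based builtin and its use on this problem would be:
// The reduce-based concat from the quoted comment, applied here:
const oldArr = [[8,2,4],[5],[1,7]];
const concat = arrs => arrs.reduce((m, n) => m.concat(n), []);
const newArray3 = concat(oldArr.map((x, i) => x.map(y => [y, i])));
console.log(JSON.stringify(newArray3)); //=> [[8,0],[2,0],[4,0],[5,1],[1,2],[7,2]]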