I have a complex JSON object. I'm trying to process it to create an array that looks like:
[
[ "Name", "Spec", "Spec" ],
[ "Name", "Spec", "Spec" ]
]
This is where I am stuck:
let array = products.map((product) => {
return [
product.name,
product.attributes
.map((attribute) => (
attribute.spec
))
.reduce((accumulator, currentValue) => {
return accumulator.concat(currentValue);
}, [])
];
});
That gives the result:
[
[ "Name", [ "Spec", "Spec" ] ],
[ "Name", [ "Spec", "Spec" ] ]
]
Admittedly, I don't entirely understand the reduce method and its initialValue argument here. I know that using reduce can flatten the array at the top-level use of map, but at the deeper level it seems to do nothing.
I've searched online but have only found answers that involve completely flattening deeply nested arrays. And the flat() method is not an option due to lack of browser compatibility.
Can someone please advise on how to flatten the second level only? If possible, I'd like to accomplish this through mutating the array.
You don't need the reducer there - it's only making things unnecessarily complicated. Map the attributes to their spec property, and then use spread:
const array = products.map(({ name, attributes }) => {
const specs = attributes.map(attribute => attribute.spec);
return [name, ...specs];
});
1. Why does this fail?
You put your reduce in the wrong place. You're flattening the list of specs, which was already a flat array. You want to flatten the list that has the name and the list of specs. Here is one possibility:
const array = products.map(prod => [
prod.name,
prod.attributes.map(attr => attr.spec)
].reduce((acc, curr) => acc.concat(curr), []));
2. What's a better solution?
As CertainPerformance points out, there is a simpler version, which I might write slightly differently as
const array = products.map(({name, attributes}) =>
[name, ...attributes.map(attr => attr.spec)]
);
3. What if I need to reuse flatten in other places?
Extract it from the first solution as a reusable function. This is not a full replacement for the new Array.prototype.flat method, but it might be all you need:
const flatten = arr => arr.reduce((acc, curr) => acc.concat(curr), [])
const array = products.map(prod => flatten([
prod.name,
prod.attributes.map(attr => attr.spec)
])
)
4. How does that reduce call flatten one level?
We can think of [x, y, z].reduce(fn, initial) as performing these steps
Call fn(initial, x), yielding value a
Call fn(a, y), yielding value b
Call fn(b, z), yielding value c
Since the array is exhausted, return value c
In other words [x, y, z].reduce(fn, initial) returns fn(fn(fn(initial, x), y), z).
When fn is (acc, val) => acc.concat(val), then we can think of ['name', ['spec1', 'spec2']].reduce(fn, []) as fn(fn([], 'name'), ['spec1', 'spec2']), which is the same as ([].concat('name')).concat(['spec1', 'spec2']), which, of course is ['name', 'spec1', 'spec2'].
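To make that trace concrete, here is a runnable sketch of the same steps (fn, a, and b are names introduced just for this illustration):

```javascript
// fn is the same concat-based reducer discussed above
const fn = (acc, val) => acc.concat(val);

// The trace, step by step:
const a = fn([], 'name');            // [].concat('name') -> ['name']
const b = fn(a, ['spec1', 'spec2']); // ['name'].concat([...]) -> ['name', 'spec1', 'spec2']

// The same result in one reduce call:
const flattened = ['name', ['spec1', 'spec2']].reduce(fn, []);
console.log(flattened); // ['name', 'spec1', 'spec2']
```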
5. Was there anything wrong with my question?
I'm glad you asked. :-)
There was one significant failing. You didn't include any sample data. To help with this problem required one to try to reconstruct your data formats from your code. It would have been easy enough to give a minimal example such as:
const products = [
{name: 'foo', attributes: [{spec: 'bar'}, {spec: 'baz'}]},
{name: 'oof', attributes: [{spec: 'rab'}, {spec: 'zab'}]}
]
with a matching expected output:
[
["foo", "bar", "baz"],
["oof", "rab", "zab"]
]
6. How about my output structure?
Now that you mention it, this seems a strange structure. You might have good reasons for it, but it is odd.
Arrays generally serve two purposes in JavaScript. They are either arbitrary-length lists of elements of the same type, or they are fixed-length lists with specific types at each index (aka tuples).
But your structure combines both of these. They are arbitrary-length lists (at least so it seems), where the first entry is a name and the subsequent ones are specs. While there might be justification for this, you might want to think about whether this structure is particularly useful.
7. How can I do this without mutating?
If possible, I'd like to accomplish this through mutating the array.
I refuse to take part in such horrors.
Seriously, though, immutable data makes for so much easier coding. Is there any real reason you listed that as a requirement?
Related
I've already tried to find a solution on Stack Overflow, but I didn't find a suitable answer, so I decided to open a topic to ask:
Let's say we have 2 arrays: one containing "keys" and another one containing "values"
Example:
keys = ["CO2", "Blood", "General", "AnotherKey", ...]
values = [[2,5,4,6], [4,5,6], [1,3,34.5,43.4], [...]]
I have to create JSON with a specific structure like:
[{
name: 'CO2',
data: [2,5,4,6]
}, {
name: 'Blood',
data: [4,5,6]
}, {
name: 'General',
data: [1,3,34.5,43.4]
}, {
...
}]
I've tried to run some tests by myself, like concatenating strings and then encoding them as JSON, but I don't think that is the correct path to follow or a good implementation of it ... I've also taken a look at JSON.parse and JSON.stringify, but I never arrived at a good solution, so I am asking if someone knows the correct way to implement it!
EDIT:
In reality, I didn't find a solution, since "name" and "data" are not strings but objects.
Here's one way to get your desired output:
const keys = ["CO2", "Blood", "General", "AnotherKey"]
const values = [[2,5,4,6], [4,5,6], [1,3,34.5,43.4], [0]]
const output = keys.map((x, i) => {
return {"name": x, "data": values[i]}
})
console.log(output)
However, since you're literally constructing key/value pairs, you should consider whether an object might be a better data format to output:
const keys = ["CO2", "Blood", "General", "AnotherKey"]
const values = [[2,5,4,6], [4,5,6], [1,3,34.5,43.4], [0]]
const output = {}
for (let i=0; i<keys.length; i++) {
output[keys[i]] = values[i]
}
console.log(output)
With this data structure you can easily get the data for any keyword (e.g. output.CO2). With your array structure you would need to iterate over the array every time you wanted to find something in it.
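To make that lookup difference concrete, here is a small comparison sketch (the sample values are invented to match the question's shape):

```javascript
// Object form: direct, constant-time property lookup
const output = { CO2: [2, 5, 4, 6], Blood: [4, 5, 6] };
const co2 = output.CO2;

// Array-of-objects form: every lookup requires a linear scan
const outputArray = [
  { name: 'CO2', data: [2, 5, 4, 6] },
  { name: 'Blood', data: [4, 5, 6] }
];
const co2FromArray = outputArray.find(entry => entry.name === 'CO2').data;

console.log(co2);          // [2, 5, 4, 6]
console.log(co2FromArray); // [2, 5, 4, 6]
```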
(Note: The reason you weren't getting anywhere useful by searching for JSON methods is that nothing in your question has anything to do with JSON; you're just trying to transform some data from one format to another. JSON is a string representation of a data object.)
So first of all, I am not expecting a specific solution to my problem, but instead some insights from more experienced developers that could enlighten me and put me on the right track. As I am not yet experienced enough in algorithms and data structures and I take this as a challenge for myself.
I have n number of arrays, where n >= 2.
They all contain objects and in the end, I want an array that contains only the common elements between all these arrays.
array1 = [{ id: 1 }, { id: 2 }, { id: 6 }, { id: 10 }]
array2 = [{ id: 2 }, { id: 4 }, { id: 10 }]
array3 = [{ id: 2 }, { id: 3 }, { id: 10 }]
arrayOfArrays = [array1, array2, array3]
intersect = [{ id: 2 }, { id: 10 }]
How would one approach this problem? I have read solutions using Divide And Conquer, or Hash tables, and even using the lodash library but I would like to implement my own solution for once and not rely on anything external, and at the same time practice algorithms.
For efficiency, I would start by locating the shortest array. This should be the one you work with. You can run a reduce on the arrayOfArrays to iterate through and return the index of the shortest length.
const shortestIndex = arrayOfArrays.reduce(
  (accumulator, currentArray, currentIndex) =>
    currentArray.length < arrayOfArrays[accumulator].length ? currentIndex : accumulator,
  0
);
Take the shortest array and call the reduce function again, this will iterate through the array and allow you to accumulate a final value. The second parameter is the starting value, which is a new array.
shortestArray.reduce((accumulator, currentObject) => /*TODO*/, [])
For the code, we basically need to loop through the remaining arrays and make sure it exists in all of them. You can use the every function since it will fail fast meaning the first array it doesn't exist in will trigger it to return false.
Inside the every you can call some to check if there is at least one match.
isMatch = remainingArrays.every(array => array.some(object => object.id === currentObject.id))
If it's a match, add it to the accumulator which will be your final result. Otherwise, just return the accumulator.
return isMatch ? [...accumulator, currentObject] : accumulator;
Putting all that together should get you a decent solution. I'm sure there are more optimizations that could be made, but that's where I would start.
reduce
every
some
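Assembled, the pieces described in this answer might look something like the following sketch (variable names are mine, using the sample arrays from the question):

```javascript
const array1 = [{ id: 1 }, { id: 2 }, { id: 6 }, { id: 10 }];
const array2 = [{ id: 2 }, { id: 4 }, { id: 10 }];
const array3 = [{ id: 2 }, { id: 3 }, { id: 10 }];
const arrayOfArrays = [array1, array2, array3];

// Find the index of the shortest array
const shortestIndex = arrayOfArrays.reduce(
  (acc, arr, i) => arr.length < arrayOfArrays[acc].length ? i : acc,
  0
);
const shortestArray = arrayOfArrays[shortestIndex];
const remainingArrays = arrayOfArrays.filter((_, i) => i !== shortestIndex);

// Accumulate objects that appear in every remaining array
const intersection = shortestArray.reduce((acc, obj) => {
  const isMatch = remainingArrays.every(arr =>
    arr.some(o => o.id === obj.id)
  );
  return isMatch ? [...acc, obj] : acc;
}, []);

console.log(intersection); // [{ id: 2 }, { id: 10 }]
```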
The general solution is to iterate through an input and check for each value whether it exists in all of the other inputs. (Time complexity: O(l * n * l) where n is number of arrays and l is the average length of an array)
Following the ideas of the other two answers, we can improve this brute-force approach a bit by
iterating through the smallest input
using a Set for efficient lookup of ids instead of iteration
so it becomes (with O(l * n + min_l * n) = O(n * l))
const arrayOfIdSets = arrayOfArrays.map(arr =>
new Set(arr.map(val => val.id))
);
const smallestArray = arrayOfArrays.reduce((smallest, arr) =>
smallest.length < arr.length ? smallest : arr
);
const intersection = smallestArray.filter(val =>
arrayOfIdSets.every(set => set.has(val.id))
);
A good way to approach these kinds of problems, both in interviews and in regular life, is to think of the most obvious approach you can come up with, no matter how inefficient, and then think about how you can improve it. This is usually called a "brute force" approach.
So for this problem, perhaps an obvious but inefficient approach would be to iterate through every item in array1 and check whether it is in both array2 and array3, noting it down (in another array) if it is. Then repeat for each item in array2 and in array3, making sure to only note down items you haven't noted down before.
We can see that will be inefficient because we'll be looking for a single item in an array many times, which is quite slow for an array. But it'll work!
Now we can get to work improving our solution. One thing to notice is that finding the intersection of 3 arrays is the same as finding the intersection of the third array with the intersection of the first and second array. So we can look for a solution to the simpler problem of the intersection of 2 arrays, to build one of an intersection for 3 arrays.
This is where it's handy to know your data structures. You want to be able to ask the question "does this structure contain a particular element?" as quickly as possible. Think about which structures are good for that kind of lookup (known as search). More experienced engineers have this memorized/learned, but you can reference something like https://www.bigocheatsheet.com/ to see that sets are good at this.
I'll stop there to not give the full solution, but once you've seen that sets are fast at both insertion and search, think about how you can use that to solve your problem.
Below is a JavaScript Array
[
["0x34","2","3"],
["0x35","2","3"],
["0x34","1","3"]
]
Required output:
<0x34>:2:3:1:3,<0x35>:2:3
Please help me. Thanks in advance.
Since there are already answers here, I'll throw in one which seems to do the job. But OP, please take to heart the comment to your post by charlietfl. We expect to see more effort here from those posting questions.
This is how I might do this:
const transform = (input) =>
Object .values (
input .reduce ((a, [k, ...vs]) => ({...a, [k]: `${a [k] || `<${k}>`}:${vs .join (':')}`}), {})
) .join (',')
const input = [["0x34", "2", "3"], ["0x35", "2", "3"], ["0x34", "1", "3"]]
console .log (transform (input))
We use reduce to do our grouping. In that step we also append the secondary elements from our group, so that most everything is done in there. Then we select the values of the resulting object, and join them with commas.
Even better would be to write or use a generic groupBy function as might be found in libraries such as Ramda, lodash, or Underscore, but I'll leave that to you.
Update
A comment suggests a different style of coding:
[A]s a friendly reminder for coding beginners: please don't code like this, unless it's only for a project that no one else works on except you. If not, stick to basic formatting guidelines, create meaningful variable names and don't be afraid to use space to make code more readable.
While I appreciate the intention, I disagree quite strongly. Much of my recent career has involved mentoring and training junior programmers, not novices first learning to code but those in their first few years of professional programming.
One of my first goals is to get them out of their initial comfort zones by exposing them to more advanced concepts, more efficient techniques, more expressive code. The point is to get them to never think of coding as rote. If they've always done it this way, then they're not learning and growing when they do so one more time; they probably won't see the abstractions that can help lead to much better software.
As I mentioned originally, I would in practice build a solution for this atop a function like groupBy. I'm one of the founders of Ramda, and tend to think in its terms. I would probably use some of its tools for this, something like:
const transform = pipe (
groupBy (head),
map (map (tail)),
map (flatten),
toPairs,
map (([k, vs]) => `<${k}>:${vs .join (':')}`),
join (',')
)
const input = [["0x34", "2", "3"], ["0x35", "2", "3"], ["0x34", "1", "3"]]
console .log (transform (input))
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.js"></script>
<script>const {pipe, groupBy, head, map, tail, flatten, toPairs, join} = R </script>
I recommend to my students that they use such libraries, or, even better, build up their own versions of helpful utility functions in order to build their code on top of reusable abstractions.
As to the code presented, there is one thing I would in retrospect do differently regarding spacing. I would choose to break that long line into several, formatting it like this:
const transform = (input) =>
Object .values (
input .reduce ((a, [k, ...vs]) => ({
...a,
[k]: `${a [k] || `<${k}>`}:${vs .join (':')}`
}), {})
) .join (',')
But I don't think that is at all what was meant by the comment. It sounds as though the commenter was suggesting something like this:
const transform = (input) => {
const grouped = input .reduce (
(accumulator, strings) => {
const key = strings [0]
const values = strings .slice (1)
if (! accumulator [key]) {
accumulator [key] = '<' + key + '>'
}
const valuesString = values .join (':')
accumulator [key] = accumulator [key] + ':' + valuesString
return accumulator
},
{}
)
const records = Object .values (grouped)
const results = records .join (',')
return results
}
const input = [["0x34", "2", "3"], ["0x35", "2", "3"], ["0x34", "1", "3"]]
console .log (transform (input))
or something similar using a for/forEach loop in place of reduce. (If this is not what was suggested, #MauriceNino, I apologize, and would like to hear more of what you meant.) If this is what was suggested, then I do disagree.
This does a better job of explaining in detail how to calculate the value, but it's very easy to get lost in the process and forget what it is we want to achieve here. I don't want to spend the time trying to think like the computer. I would much rather have the computer try to think like me.
Take, for instance, the variable named grouped. Perhaps if we understood the business context better, we could come up with a better name, but as it stands, it was the best I could do. What do we gain by knowing and having to keep in our minds the variable grouped? It's defined in one statement, used in the next to calculate another temporary variable, records (again, I don't have useful names to give these things), and then ignored after that. And the same with records and results. All this is information crowding my head when I'm trying to understand this function.
And it gets worse with the relationship with strings, key and value. By learning a slight bit of syntax, we can replace
(accumulator, strings) => {
const key = strings [0]
const values = strings .slice (1)
with
(accumulator, [key, ...values]) => {
and not have to try to keep strings in our head. This does reduce line-count, but much more importantly, it keeps the usage of the variables very close to their definitions.
When I write this:
input .reduce ((a, [k, ...vs]) => ({
...a,
[k]: `${a [k] || `<${k}>`}:${vs .join (':')}`
}), {})
I choose the shorter variable names a, k, and vs over the longer ones because, while they are evocative of accumulator, keys, and values, they don't force the same sort of assumptions that those longer words do. They have a much greater chance of being applicable to other situations where I might write similar code. Moreover, they are right there in view; I know what they are when I encounter them because their definitions are only one or two lines up from their uses. For some related points, see John DeGoes' excellent Descriptive Variable Names: A Code Smell.
There's a possibility that the reaction was to something here that is more problematic. Rich Snapp posted a great description of something he calls The reduce ({...spread}) anti-pattern. This is a legitimate concern. My version is less performant than it could be because instead of mutating the accumulator the reduce callback returns a new object every time. This is an intentional choice, and I could have avoided it and still stuck with my expression-focused style by using a mutation and a comma operator, but I find that avoiding mutation is extremely useful. If this is found to be a hot-spot in my code-base, then I will change. But I will stick with the simpler approach first.
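For the record, the mutate-and-comma-operator variant alluded to above might look something like the following sketch (shown for comparison, not as a recommendation):

```javascript
const transform = (input) =>
  Object.values(
    input.reduce(
      // Mutate the accumulator in place, then return it via the comma operator
      (a, [k, ...vs]) => ((a[k] = `${a[k] || `<${k}>`}:${vs.join(':')}`), a),
      {}
    )
  ).join(',');

const input = [["0x34", "2", "3"], ["0x35", "2", "3"], ["0x34", "1", "3"]];
console.log(transform(input)); // <0x34>:2:3:1:3,<0x35>:2:3
```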
And I do find my approach simpler. This is not to say that it's more familiar. I use "simple" here in the sense made famous in Rich Hickey's classic talk Simple Made Easy: having fewer ideas woven together. That's what I try to stress with my students. Familiarity does not make your code better. Simplicity does. And I would argue that the version I first presented (or its spread-out alternative) is significantly simpler than this version.
Ciao, you could find the distinct indexes (I mean 0x34, 0x35), then iterate over the input values, filtering the elements on each index. Put the values found in an array and join the values to make the result string. Something like:
let input = [
["0x34", "2", "3"],
["0x35", "2", "3"],
["0x34", "1", "3"],
];
let indexes = [...new Set(input.map((val) => val[0]))];
let finalResult = [];
indexes.forEach((index) => {
let result = ["<" + index + ">"];
input
.filter((val) => val[0] === index)
.forEach((el) => {
el.forEach((val) => {
if (val !== index) result.push(val);
});
});
finalResult.push(result.join(":"));
});
console.log(finalResult.join(","));
First you could reduce it to accumulate all arrays with the same key, and then join the resulting object's values like so:
const input = [["0x34","2","3"], ["0x35","2","3"], ["0x34","1","3"]];
const result = Object.entries(
input.reduce((acc, arr) => { // Add up the arrays of the same key
const key = arr[0],
values = arr.slice(1);
if(acc[key] == null)
acc[key] = values;
else
acc[key] = [...acc[key], ...values];
return acc;
}, {})
)
.map(([key, values]) => [`<${key}>`, ...values].join(':')); // map it to an array and then join it with a :
console.log(result)
// Or as a whole string
console.log(result.join(','))
reduce() goes over each item and lets you create an accumulated value (or object) out of them
Object.entries() creates an array out of an object
map() transforms an array according to the given function
So the idea in the snippet is, that you group the arrays by its first value into an object using reduce(), then iterate over all the resulting key/value pairs, using Object.entries() and in the end create your desired string with map()
The join method creates and returns a new string by concatenating all of the elements in an array:
console.log(
[["0x34","2","3"],
["0x35","2","3"],
["0x34","1","3"]].map(subarray => subarray.join(':')).join(',')
);
Although my question stems from DataTables.net, I imagine it is applicable elsewhere:
I retrieve an array-like object from a DataTables-created table like this:
var data = tableInstance.data(); // tableInstance is already a DataTables table instance
But the data, while array-like, is actually an object decorated with the DataTables API, resulting in an "array" that looks something like this (reduced to a fake "brief" version):
[
0: {thing: "stuff"},
1: {thing: "nextStuff"},
$: function(){},
button: function() {},
length: 2
]
I would like to isolate just the actual array. Does anybody spot an elegant way of doing this? The "obvious" way is to just iterate X times, up to data.length. For example, using an "each" iterator, which inherently does just that:
var newData = [];
data.each(function (el, index) {
newData.push(el);
})
But I can't help wondering if there's a better way. Generating the new array (or editing in-place... no requirement for it to be new) by removing unwanted properties, rather than by pushing wanted items into a new array.
Or is this just too much of a micro-optimization (even with tens of thousands of items) to even bother with?
There is a better way. Use Array.from.
const newData = Array.from(data)
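For example, against a plain object shaped like the question's array-like data (this stand-in omits the real DataTables API, so it's only an approximation):

```javascript
// A plain array-like object, similar in shape to the DataTables result:
// indexed entries, some extra decorated properties, and a length
const data = {
  0: { thing: 'stuff' },
  1: { thing: 'nextStuff' },
  $: function () {},
  button: function () {},
  length: 2
};

// Array.from reads indices 0..length-1 and ignores the extra properties
const newData = Array.from(data);
console.log(newData); // [{ thing: 'stuff' }, { thing: 'nextStuff' }]
```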
I found tons of examples of how to turn an objects keys/values into a URL encoded query string, but I need to use an array of objects and turn them into a query string.
I want to use pure JS or lodash to accomplish this task.
I am supplying an array of objects in this format:
[
{ points: "30.09404881287048,-96.064453125" },
{ points: "30.09404881287048,-94.63485717773439" },
{ points: "29.345072482286373,-96.064453125" },
{ points: "29.345072482286373,-94.63485717773439"}
]
I need it to be in this format:
points=30.09404881287048%2C-96.064453125&points=30.09404881287048%2C-94.63485717773439&points=29.345072482286373%2C-96.064453125&points=29.345072482286373%2C-94.63485717773439
I am currently accomplishing this using these two methods:
import { map as _map } from 'lodash';
const createQueryStringFromObject = object =>
_map(object, (value, key) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`)
.join('&');
const createQueryStringFromArray = array => array.reduce((queryString, object, index) => {
return queryString + createQueryStringFromObject(object) + (index < array.length - 1 ? '&' : '');
}, '');
I find the reduce implementation sloppy, though, and I think it could be cleaner and more efficient. Any suggestions?
EDIT: I would like to keep the method generic so it can accept an array of objects with any key and not just specifically the key points. I also would like to keep the createQueryStringFromObject() method because I need it elsewhere on the site.
I like your solution, but I think just using built in prototype methods will clean up the code quite a bit:
list.map(p => `points=${encodeURIComponent(p.points)}`).join('&')
which results in
"points=30.09404881287048%2C-96.064453125&points=30.09404881287048%2C-94.63485717773439&points=29.345072482286373%2C-96.064453125&points=29.345072482286373%2C-94.63485717773439"
I am looking for a generic implementation that is not specific to objects containing the key points. Thanks to melpomene's suggestion to just use another map and join, I gave that a shot and I definitely like that format a lot more. Much shorter and clearer!
const createQueryStringFromArray = array =>
_map(array, object => createQueryStringFromObject(object))
.join('&');
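For reference, a plain-JS sketch of the same pair of functions (swapping lodash's _map for the built-in Array methods, and using only the first two sample points) might look like:

```javascript
// Turn one object's key/value pairs into URL-encoded "key=value" segments
const createQueryStringFromObject = object =>
  Object.entries(object)
    .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`)
    .join('&');

// Map each object to its query string and join with '&'
const createQueryStringFromArray = array =>
  array.map(createQueryStringFromObject).join('&');

const points = [
  { points: '30.09404881287048,-96.064453125' },
  { points: '30.09404881287048,-94.63485717773439' }
];

console.log(createQueryStringFromArray(points));
// points=30.09404881287048%2C-96.064453125&points=30.09404881287048%2C-94.63485717773439
```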