JS Dynamically Generated For Loop - javascript

I am using a 3rd party API that allows me to search for housing properties. Unfortunately the API is not written in a way to allow me to search for a range so I have to make a separate call for each value in the range.
So if I want to search for all the housing properties that have 2 or 3 bedrooms, I have to make a call for 2 bedrooms, then another call for 3 bedrooms. This can get quite tricky, as there are multiple fields that can contain a range of numbers (bedrooms, bathrooms, floors, garage size...).
My brute force JavaScript solution for this is to create a nested for loop that will create an array of all the calls. This is not a scalable solution and I'm looking for a way to dynamically create this for loop or another alternative way to get an array of all my calls.
My current solution:
const searchParams = {
minBedrooms: 2,
maxBedrooms: 4,
minBathrooms: 1,
maxBathrooms: 3,
minFloors: 1,
maxFloors: 1
};
let promises = [];
for (let bedrooms = searchParams.minBedrooms; bedrooms <= searchParams.maxBedrooms; bedrooms++) {
for (let bathrooms = searchParams.minBathrooms; bathrooms <= searchParams.maxBathrooms; bathrooms++) {
for (let floors = searchParams.minFloors; floors <= searchParams.maxFloors; floors++) {
promises.push(callApi(bedrooms, bathrooms, floors));
}
}
}
Promise.all(promises).then(response => {
// do stuff with response
});
Furthermore, the user might not specify one of the search parameters (i.e. number of bedrooms). As a result, the API will not apply that specific filter. My code currently will fail if no bedroom values are passed in, and writing condition statements for each for loop is not a desire of mine.
Any ideas on how to dynamically generate the above nested for loop?
EDIT
My current solution will fail if the user does not specify the number of bedrooms but specifies bathrooms/floors, as the initial for loop will not run. I don't want to resort to condition statements along with lots of nested loops to create my promise array. This is why I feel like I need a dynamically generated for loop.

One way to look at this is as a Cartesian product A × B × C -- for every a in A, b in B, and c in C, you want a tuple (a, b, c).
For example {1, 2} × {3, 4} has 4 resulting tuples: (1, 3), (1, 4), (2, 3), (2, 4).
The easiest way to produce this is to start with just the options in the first set: (1) and (2). Then, for each option in the second set, complete each tuple with the new value:
(1), (2) with 3 added gets (1, 3) and (2, 3)
(1), (2) with 4 added gets (1, 4) and (2, 4)
In code, this might look like this:
// Each key is associated with the set of values that it can take on
const properties = {
"bedrooms": [2, 3],
"bathrooms": [1, 2],
"floors": [1, 2, 3, 4],
}
// Start with a single "empty" tuple
let tuples = [{}]
for (let p in properties) {
// For each property, augment all of the old tuples
let nextTuples = []
for (let option of properties[p]) {
// with each possible option
nextTuples = nextTuples.concat(tuples.map(old => ({[p]: option, ...old})))
}
tuples = nextTuples;
}
with tuples ending up like
[
{
"floors": 1,
"bathrooms": 1,
"bedrooms": 2
},
{
"floors": 1,
"bathrooms": 1,
"bedrooms": 3
},
......
{
"floors": 4,
"bathrooms": 2,
"bedrooms": 3
}
]

Similar to the answer from Curtis F, you can use something like
const properties = {
"bedrooms": {min: 2, max: 3},
"bathrooms": {min: 1, max: 2},
"floors": {min: 1, max: 4},
}
Then build up the "tuples" in a similar manner except that your for loop needs to count from min to max for each object in properties.
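A sketch of that combined approach might look like the following (the property names and ranges are illustrative):

```javascript
// Each key maps to a { min, max } range (illustrative values)
const properties = {
  bedrooms: { min: 2, max: 3 },
  bathrooms: { min: 1, max: 2 },
  floors: { min: 1, max: 4 },
};

// Start with a single "empty" tuple, then extend it one property at a time
let tuples = [{}];
for (const p in properties) {
  const { min, max } = properties[p];
  const nextTuples = [];
  for (let option = min; option <= max; option++) {
    nextTuples.push(...tuples.map(old => ({ ...old, [p]: option })));
  }
  tuples = nextTuples;
}

console.log(tuples.length); // 2 * 2 * 4 = 16 combinations
```

Each entry of tuples is then one parameter object to hand to the API call. A property the user leaves out is simply omitted from properties, so it contributes no loop at all.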

One approach you can use is a recursive function, where each layer will iterate one dimension of your matrix:
const dims = [
{ min: 2, max: 4 },
{ min: 1, max: 3 },
{ min: 1, max: 1 },
];
function matrix(dims, ...args) {
const dim = dims[0];
return dims.length
? [...new Array(dim.max - dim.min + 1)]
.map((_,i) => i + dim.min)
.map(x => matrix(dims.slice(1), ...args, x))
.reduce((a, b) => [...a, ...b], [])
: [callApi(...args)];
}
function callApi(bedrooms, bathrooms, floors) {
return `bedrooms: ${bedrooms}, bathrooms: ${bathrooms}, floors: ${floors}`;
}
console.log(matrix(dims));

Use a recursive call.
Try the following code:
{
function Looping ( parameters, fn, args = [] ) {
if ( parameters.length ) {
let [ [ key, param ], ...pass ] = parameters;
let promises = [];
for ( let i = param.min; i <= param.max; i++ ) {
promises.push( ...Looping( pass, fn, [ ...args, i ] ) );
}
return promises;
}
else {
return [ fn( ...args ) ];
}
}
const searchParams = {
Bedrooms: { min: 2, max: 4 },
Bathrooms: { min: 1, max: 3 },
Floors: { min: 1, max: 1 }
};
function callApi ( a, b, c ) { return Promise.resolve( `Bed: ${a}, Bath: ${b}, Floor: ${c}` ); }
console.time( 'Recursive' );
Promise.all( Looping( Object.entries( searchParams ), ( bedrooms, bathrooms, floors ) => callApi( bedrooms, bathrooms, floors ) ) )
.then( a => {
console.timeEnd( 'Recursive' );
console.log( a );
} );
}
The recursive approach is faster than the mapping approach. The following benchmark compares the two:
( async () => {
await new Promise( resolve => {
console.time( 'map' );
function mm ( a, b ) { let r = []; for ( let i = a; i <= b; i++ ) r.push( i ); return r; }
const properties = {
Bedrooms: mm( 1, 100 ),
Bathrooms: mm( 1, 100 ),
Floors: mm( 1, 100 )
};
// Start with a single "empty" tuple
let tuples = [{}]
for (let p in properties) {
// For each property, augment all of the old tuples
let nextTuples = []
for (let option of properties[p]) {
// with each possible option
nextTuples = nextTuples.concat(tuples.map(old => ({[p]: option, ...old})))
}
tuples = nextTuples;
}
let promises = [];
function callApi ( a, b, c ) { return Promise.resolve( `Bed: ${a}, Bath: ${b}, Floor: ${c}` ); }
for ( const i of tuples ) {
let arg = [];
for ( const [ k, v ] of Object.entries( i ) ) {
arg.push( v );
}
promises.push( callApi( ...arg ) );
}
Promise.all( promises ).then( a => {
console.timeEnd( 'map' );
//console.log( a );
resolve();
} );
} );
await new Promise( resolve => {
function Looping ( parameters, fn, args = [] ) {
if ( parameters.length ) {
let [ [ key, param ], ...pass ] = parameters;
let promises = [];
for ( let i = param.min; i <= param.max; i++ ) {
promises.push( ...Looping( pass, fn, [ ...args, i ] ) );
}
return promises;
}
else {
return [ fn( ...args ) ];
}
}
const searchParams = {
Bedrooms: { min: 1, max: 100 },
Bathrooms: { min: 1, max: 100 },
Floors: { min: 1, max: 100 }
};
function callApi ( a, b, c ) { return Promise.resolve( `Bed: ${a}, Bath: ${b}, Floor: ${c}` ); }
console.time( 'Recursive' );
Promise.all( Looping( Object.entries( searchParams ), ( bedrooms, bathrooms, floors ) => callApi( bedrooms, bathrooms, floors ) ) )
.then( a => {
console.timeEnd( 'Recursive' );
//console.log( a );
resolve();
} );
} );
} )();


How to find the smallest and biggest value from a list of elements, in the most efficient way?

I have two sets of elements: one holds a list of numbers, the other a list of names,
something like this:
A: 4,
B: 3,
C: 2,
A: 5,
C: 3,
And my task is to find the elements with the smallest and the highest value.
I know that I could create an array of objects and sort it with map [{A: 4},{C:2}....]
But I was wondering if there is a more efficient way of doing it,
instead of creating a whole object and using three loops.
Would it be possible to replace it with something more efficient?
Like a set or something where I could just call set.getItemWithMinValue, set.getItemWithMaxValue
and the return would be: C:2, A:5
Sorry for the silly question, I am still learning.
You are going to have to loop, parse the object into its values, and check if the value is greater or less.
var data = [
{ A: 4 },
{ B: 3 },
{ C: 2 },
{ A: 5 },
{ C: 3 },
];
const results = data.reduce((minMax, item) => {
const value = Object.values(item)[0];
if (!minMax) {
minMax = {
min: { value, item },
max: { value, item },
}
} else if (minMax.min.value > value) {
minMax.min = { value, item };
} else if (minMax.max.value < value) {
minMax.max = { value, item };
}
return minMax;
}, null);
console.log(results.min.item);
console.log(results.max.item);
This would be one way of doing it. Caution: the array will be changed (sorted) in the course of the script.
const arr=[{A: 4},{B: 3},{C: 2},{A: 5},{C: 3}],
val=o=>Object.values(o)[0];
arr.sort((a,b)=>val(a)-val(b));
console.log(arr[0],arr[arr.length-1])
if you have this data
const arr = [{A:2},{A: 4},{B: 3},{C: 2},{A: 5},{C: 3}];
you can iterate over it even if you don't know the properties of the objects, like this:
const arr = [{A:2},{A: 4},{B: 3},{C: 2},{A: 5},{C: 3}];
const result = arr.sort((prev, next) => {
const prevProp = Object.getOwnPropertyNames(prev);
const nextProp = Object.getOwnPropertyNames(next);
return prev[prevProp] - next[nextProp]
});
console.log('min',result[0]);
console.log('max',result[result.length - 1]);
You could use a single loop, getting the entries from each object.
This approach expects only a single min and max value.
const
array = [{ A: 4 }, { B: 3 }, { C: 2 }, { A: 5 }, { C: 3 }];
let min, max;
for (const object of array) {
const [[k, v]] = Object.entries(object);
if (!min || min[1] > v) min = [k, v];
if (!max || max[1] < v) max = [k, v];
}
console.log('min', Object.fromEntries([min]));
console.log('max', Object.fromEntries([max]));
This approach respects more than one name with same min or max value.
const
array = [{ A: 4 }, { B: 3 }, { C: 2 }, { A: 5 }, { C: 3 }, { F: 2 }];
let min, max;
for (const object of array) {
const v = Object.values(object)[0];
if (!min || min[1] > v) min = [[object], v];
else if (min[1] === v) min[0].push(object);
if (!max || max[1] < v) max = [[object], v];
else if (max[1] === v) max[0].push(object);
}
console.log('min', min[0]);
console.log('max', max[0]);
This is probably premature optimisation, but I'm going to leave it here in case it's useful to anyone. When you need both the minimum and the maximum, you can save one comparison for every two objects (that is, cut it down from two comparisons per object to three comparisons per two objects) by taking the objects in pairs: compare the pair of objects with each other (one comparison), then compare only the larger one with the accumulated maximum and only the smaller one with the accumulated minimum.
To start the procedure, you have the options of initialising both the maximum and the minimum with the first element, or of initialising the maximum as the larger of the first two elements and the minimum as the smaller of the two. If you know in advance how many elements there are, you can choose one or the other of these depending on whether the number of objects to scan is odd or even, so that the scan will always be over complete pairs.
The code is slightly more complicated, so it's only worth doing if the performance benefit is actually significant in your application. Or, I suppose, as a learning exercise.
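A sketch of that pairwise scan, written for a plain array of numbers to keep the bookkeeping visible (adapting it to the { name: value } objects above is mechanical):

```javascript
// Find both min and max in about 1.5 comparisons per element
// by scanning the values in pairs
function minMax(values) {
  if (values.length === 0) return null;
  let i, lo, hi;
  if (values.length % 2 === 0) {
    // even count: seed the bounds from the first pair
    lo = Math.min(values[0], values[1]);
    hi = Math.max(values[0], values[1]);
    i = 2;
  } else {
    // odd count: seed both bounds from the first element
    lo = hi = values[0];
    i = 1;
  }
  for (; i < values.length; i += 2) {
    // one comparison decides which of the pair is smaller...
    const [small, big] =
      values[i] < values[i + 1] ? [values[i], values[i + 1]] : [values[i + 1], values[i]];
    // ...then one comparison against each running bound
    if (small < lo) lo = small;
    if (big > hi) hi = big;
  }
  return { min: lo, max: hi };
}

console.log(minMax([4, 3, 2, 5, 3])); // { min: 2, max: 5 }
```

The seeding branch ensures the main loop always sees complete pairs, matching the description above.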
You can use Array.prototype.reduce():
const array = [{ A: 4 }, { B: 3 }, { C: 2 }, { A: 5 }, { C: 3 }]
const result = array.reduce((a, c) => {
const [v, max, min] = [c, a.max, a.min].map((o) => Object.values(o)[0])
a.max = max > v ? a.max : c
a.min = min < v ? a.min : c
return a
},
{ max: {}, min: {} })
console.log(result)
Or you can use Array.prototype.sort():
const array = [{ A: 4 }, { B: 3 }, { C: 2 }, { A: 5 }, { C: 3 }]
const arraySorted = array.sort((a, b) => {
const [aValue, bValue] = [a, b].map((o) => Object.values(o)[0])
return bValue - aValue
})
const result = {
max: arraySorted[0],
min: arraySorted[arraySorted.length - 1],
}
console.log(result)

How to create an array containing all combinations of 1-n array items?

We have the following task, to convert the array below called passengerFlights (object with passenger-id-keys and array of flights) to:
(1) an array with all possible combination of passenger-flights:
EXPECTED OUTPUT:
[
["aaa", "ddd", "eee"],
["aaa", "ddd", "fff"],
["bbb", "ddd", "eee"],
["bbb", "ddd", "fff"],
["ccc", "ddd", "eee"],
["ccc", "ddd", "fff"]
]
and (2) with the stipulation that there can be any number of passengers.
The following is an attempt to solve this, first as a static example with three flights. It's not clear (1) the best way to create the array with all possible combinations, and (2) how to solve the 2-n requirement; we assume recursion of some kind.
const passengerFlights = {
777: [
{
_id: "aaa"
},
{
_id: "bbb"
},
{
_id: "ccc"
}
],
888: [
{
_id: "ddd"
}
],
999: [
{
_id: "eee"
},
{
_id: "fff"
}
],
};
const getGroupedFlights = (passengerFlights) => {
let indexPointer = 0;
const indexCounters = [0, 0, 0];
const arr = [];
while (indexCounters[0] <= passengerFlights['777'].length - 1 || indexCounters[1] <= passengerFlights['888'].length - 1 || indexCounters[2] <= passengerFlights['999'].length - 1) {
arr.push([passengerFlights['777'][0]._id, passengerFlights['888'][0]._id, passengerFlights['999'][0]._id]);
if (indexCounters[2] < passengerFlights['999'].length) indexCounters[2]++;
if (indexCounters[2] >= passengerFlights['999'].length - 1 && indexCounters[1] < passengerFlights['888'].length) indexCounters[1]++;
if (indexCounters[1] >= passengerFlights['888'].length - 1 && indexCounters[0] < passengerFlights['777'].length) indexCounters[0]++;
console.log(indexCounters, passengerFlights['888'].length - 1);
}
return arr;
}
const groupedFlights = getGroupedFlights(passengerFlights);
console.log(groupedFlights);
It's just a simple recursive problem....
const
passengerFlights =
{ 777: [ { _id: 'aaa' }, { _id: 'bbb' }, { _id: 'ccc' } ]
, 888: [ { _id: 'ddd' } ]
, 999: [ { _id: 'eee' }, { _id: 'fff' } ]
}
, result = combinations( passengerFlights, '_id' )
;
console.log( showArr(result) )
function combinations( obj, KeyName )
{
let
result = []
, keys = Object.keys(obj) // [ "777", "888", "999" ]
, max = keys.length -1
;
f_recursif_combi(0)
return result
function f_recursif_combi( level, arr = [] )
{
obj[ keys[level] ] // like :passengerFlights['777']
.forEach( elm =>
{
let arr2 = [...arr, elm[KeyName] ]; // arr + elm['_id']
(level < max)
? f_recursif_combi(level +1, arr2 )
: result.push( arr2 )
})
}
}
// ************************************ just to present result...
function showArr(Arr)
{
const r = { '[["': `[ [ '`, '","': `', '`, '"],["': `' ]\n, [ '`, '"]]': `' ]\n]` }
return JSON
.stringify(Arr)
.replace(/\[\[\"|\"\,\"|\"\]\,\[\"|\"\]\]/g,(x)=>r[x])
}
.as-console-wrapper {max-height: 100%!important;top:0 }
I think it's the Cartesian product of flights sets. So this should help:
Cartesian product of multiple arrays in JavaScript
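For reference, a compact sketch of the idea behind that link; the reduce/flatMap formulation below is just one of the variants covered there:

```javascript
// Cartesian product of an array of arrays, built up one dimension at a time
const cartesian = arrays =>
  arrays.reduce(
    (acc, set) => acc.flatMap(tuple => set.map(x => [...tuple, x])),
    [[]]
  );

const passengerFlights = {
  777: [{ _id: "aaa" }, { _id: "bbb" }, { _id: "ccc" }],
  888: [{ _id: "ddd" }],
  999: [{ _id: "eee" }, { _id: "fff" }],
};

const result = cartesian(Object.values(passengerFlights))
  .map(tuple => tuple.map(o => o._id));

console.log(result.length); // 3 * 1 * 2 = 6 combinations
```

Starting the reduce from [[]] (one empty tuple) is what makes each dimension multiply cleanly into the accumulator.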
As another answer suggests, you can use a basic cartesian product function. Use Object.values(passengerFlights) to pass in the array of arrays.
function *product(arrs, p = []) {
if (arrs.length == 0)
yield p
else
for (const value of arrs[0])
yield *product(arrs.slice(1), [...p, value])
}
const passengerFlights =
{777: [{_id: "aaa"},{_id: "bbb"},{_id: "ccc"}],888: [{_id: "ddd"}],999: [{_id: "eee"},{_id: "fff"}]}
for (const p of product(Object.values(passengerFlights)))
console.log(JSON.stringify(p.map(obj => obj._id)))
I used JSON.stringify for easy visualization in the demo
["aaa","ddd","eee"]
["aaa","ddd","fff"]
["bbb","ddd","eee"]
["bbb","ddd","fff"]
["ccc","ddd","eee"]
["ccc","ddd","fff"]
But for your program you will probably prefer Array.from
console.log(
Array.from(
product(Object.values(passengerFlights)),
p => p.map(obj => obj._id)
)
)
[
["aaa","ddd","eee"],
["aaa","ddd","fff"],
["bbb","ddd","eee"],
["bbb","ddd","fff"],
["ccc","ddd","eee"],
["ccc","ddd","fff"]
]
Since the order of the expected result is not important, we can make the program more efficient.
function *product(arrs) {
if (arrs.length == 0)
yield []
else
for (const p of product(arrs.slice(1)))
for (const value of arrs[0])
yield [value, ...p]
}
const passengerFlights =
{777: [{_id: "aaa"},{_id: "bbb"},{_id: "ccc"}],888: [{_id: "ddd"}],999: [{_id: "eee"},{_id: "fff"}]}
for (const p of product(Object.values(passengerFlights)))
console.log(JSON.stringify(p.map(obj => obj._id)))
["aaa","ddd","eee"]
["bbb","ddd","eee"]
["ccc","ddd","eee"]
["aaa","ddd","fff"]
["bbb","ddd","fff"]
["ccc","ddd","fff"]

Functional way to get previous element during map

I have an array which I map over. I need to compare the current element with the previous. I am detecting if the current element is the same as the previous element by comparing their ids and doing something different based on this condition. Is there any purely functional way to do it without doing index math?
items.map((item, index) => {
if(item.id === items[index - 1 > 0 ? index - 1 : 0].id) {
// do something
} else {
// do something else
}
})
The code works but I would like to avoid doing math on the index. Is there any way to do it?
The reduce() function provides a functional way to do what you need. Note that the callback has to return the current element so that it becomes previousValue on the next iteration:
items.reduce((previousValue, currentValue) => {
if(currentValue.id === previousValue.id) {
// do something
} else {
// do something else
}
return currentValue;
});
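For illustration, here is a runnable sketch of that pattern with made-up sample data; the callback returns currentValue so that it becomes previousValue on the next step:

```javascript
const items = [
  { id: 1, label: "a" },
  { id: 1, label: "b" },
  { id: 2, label: "c" },
];

const labels = [];
items.reduce((previousValue, currentValue) => {
  if (currentValue.id === previousValue.id) {
    labels.push(`repeat: ${currentValue.label}`);
  } else {
    labels.push(`new: ${currentValue.label}`);
  }
  // hand the current element on as the next "previous"
  return currentValue;
});

console.log(labels); // [ 'repeat: b', 'new: c' ]
```

One caveat of calling reduce without an initial value: the first element is only ever seen as previousValue, so it produces no entry of its own.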
Are you sure that you want a map? This sounds like an XY problem. If you want to map over adjacent elements of an array then you'd have to define your own function.
const mapAdjacent = (mapping, array) => {
const {length} = array, size = length - 1, result = new Array(size);
for (let i = 0; i < size; i++) result[i] = mapping(array[i], array[i + 1]);
return result;
};
const items = [1, 2, 3, 4, 5];
const result = mapAdjacent((x, y) => [x, y], items);
console.log(result); // [[1, 2], [2, 3], [3, 4], [4, 5]]
Note that this will throw a RangeError if you give it an empty array as input.
const mapAdjacent = (mapping, array) => {
const {length} = array, size = length - 1, result = new Array(size);
for (let i = 0; i < size; i++) result[i] = mapping(array[i], array[i + 1]);
return result;
};
const items = [];
const result = mapAdjacent((x, y) => [x, y], items); // RangeError: Invalid array length
console.log(result);
I think this is good behaviour because you shouldn't be giving mapAdjacent an empty array to begin with.
Here's a purely functional implementation of mapAdjacent which uses reduceRight. As an added bonus, it works for any iterable object.
const mapAdjacent = (mapping, [head, ...tail]) =>
tail.reduceRight((recur, item) => prev =>
[mapping(prev, item), ...recur(item)]
, _ => [])(head);
const items = "hello";
const result = mapAdjacent((x, y) => [x, y], items);
console.log(result); // [['h', 'e'], ['e', 'l'], ['l', 'l'], ['l', 'o']]
Unlike the iterative version, it returns an empty array instead of throwing an error if you give it an empty array as input.
const mapAdjacent = (mapping, [head, ...tail]) =>
tail.reduceRight((recur, item) => prev =>
[mapping(prev, item), ...recur(item)]
, _ => [])(head);
const items = "";
const result = mapAdjacent((x, y) => [x, y], items);
console.log(result); // []
Note that this is an unintended side effect of array destructuring with rest elements in JavaScript. The equivalent Haskell version does raise an exception.
mapAdjacent :: (a -> a -> b) -> [a] -> [b]
mapAdjacent f (x:xs) = foldr (\y g x -> f x y : g y) (const []) xs x
main :: IO ()
main = do
print $ mapAdjacent (,) "hello" -- [('h','e'),('e','l'),('l','l'),('l','o')]
print $ mapAdjacent (,) "" -- Exception: Non-exhaustive patterns in function mapAdjacent
However, returning an empty array might be desirable for this function. It's equivalent to adding the mapAdjacent f [] = [] case in Haskell.
Not a particularly fast implementation, but destructuring assignment makes it quite elegant -
const None =
Symbol ()
const mapAdjacent = (f, [ a = None, b = None, ...more ] = []) =>
a === None || b === None
? []
: [ f (a, b), ...mapAdjacent (f, [ b, ...more ]) ]
const pair = (a, b) =>
[ a, b ]
console.log(mapAdjacent(pair, [ 1, 2, 3 ]))
// [ [ 1, 2 ], [ 2, 3 ] ]
console.log(mapAdjacent(pair, "hello"))
// [ [ h, e ], [ e, l ], [ l, l ], [ l, o ] ]
console.log(mapAdjacent(pair, [ 1 ]))
// []
console.log(mapAdjacent(pair, []))
// []
Or write it as a generator -
const mapAdjacent = function* (f, iter = [])
{ while (iter.length > 1)
{ yield f (...iter.slice(0,2))
iter = iter.slice(1)
}
}
const pair = (a, b) =>
[ a, b ]
console.log(Array.from(mapAdjacent(pair, [ 1, 2, 3 ])))
// [ [ 1, 2 ], [ 2, 3 ] ]
console.log(Array.from(mapAdjacent(pair, "hello")))
// [ [ h, e ], [ e, l ], [ l, l ], [ l, o ] ]
console.log(Array.from(mapAdjacent(pair, [ 1 ])))
// []
console.log(Array.from(mapAdjacent(pair, [])))
// []
As I mentioned in a comment, I would suggest using reduce. Here is an example:
const input = [
{id: 1, value: "Apple Turnover"},
{id: 1, value: "Apple Turnover"},
{id: 2, value: "Banana Bread"},
{id: 3, value: "Chocolate"},
{id: 3, value: "Chocolate"},
{id: 3, value: "Chocolate"},
{id: 1, value: "Apple"},
{id: 4, value: "Danish"},
];
// Desired output: Array of strings equal to values in the above array,
// but with a prefix string of "New: " or "Repeated: " depending on whether
// the id is repeated or not
const reducer = (accumulator, currentValue) => {
let previousValue, descriptions, isRepeatedFromPrevious;
if (accumulator) {
previousValue = accumulator.previousValue;
descriptions = accumulator.descriptions;
isRepeatedFromPrevious = previousValue.id === currentValue.id;
} else {
descriptions = [];
isRepeatedFromPrevious = false;
}
if (isRepeatedFromPrevious) {
// The following line is not purely functional and performs a mutation,
// but maybe we do not care because the mutated object did not exist
// before this reducer ran.
descriptions.push("Repeated: " + currentValue.value);
} else {
// Again, this line is mutative
descriptions.push("New: " + currentValue.value);
}
return { previousValue: currentValue, descriptions }
};
const output = input.reduce(reducer, null).descriptions;
document.getElementById('output').innerText = JSON.stringify(output);
<output id=output></output>

How to remove repeating entries in a massive array (javascript)

I'm trying to graph a huge data set (about 1.6 million points) using Kendo UI. This number is too large, but I have figured out that many of these points are repeating. The data is currently stored in this format:
[ [x,y], [x,y], [x,y]...]
with each x and y being a number, thus each subarray is a point.
The approach I have in mind is to create a second empty array, and then loop through the very long original array, and only push each point to the new one if it isn't already found there.
I tried to use jQuery.inArray(), but it does not seem to work with the 2D array I have here.
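For context, jQuery.inArray uses the same strict-equality matching as Array.prototype.indexOf, and two separately built point arrays are never === to each other, which is why that lookup always fails:

```javascript
// Two distinct arrays with identical contents are different objects,
// so strict-equality lookups (indexOf, and jQuery.inArray which mirrors it) miss them
const a = [1, 2];
const b = [1, 2];
console.log(a === b);        // false
console.log([a].indexOf(b)); // -1: the point is "there" by value, but not by reference
```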
I currently try this:
var datMinified = [];
for( z = 2; z < dat1.length; z++) //I start at 2 because the first 2 elements are strings, disregard this
{
if( !(testContains(datMinified, dat1[z])) )
{
datMinified.push(dat1[z])
}
}
with the helper functions defined as:
function testContains(arr, val)
{
for(i=0;i<arr.length;i++)
{
if( arraysEqual( arr[i], val) )
{
return true;
}
}
return false;
}
and:
function arraysEqual(arr1, arr2)
{
if(! (arr1.length == arr2.length))
{
return false;
}
for( i = 0; i < arr1.length; i++ )
{
if( !(arr1[i] == arr2[i]))
{
return false;
}
}
return true;
}
When I run this script, even with a smaller array of 6 thousand points, it still hangs.
Maybe jQuery is a good solution?
Edit: I was also thinking that there might be some way to tell the browser to not time out and just sit and work through the data?
You have a non-trivial problem but I'm gonna blast right through so ask questions if I lose you somewhere along the line. This solution does not cast the coordinate into a String or serialise it using other techniques like JSON.stringify -
Start with a way to create coordinates -
const Coord = (x, y) =>
[ x, y ]
To demonstrate the solution, I need to construct many random coordinates -
const rand = x =>
Math.floor(Math.random() * x)
const randCoord = x =>
Coord(rand(x), rand(x))
console.log(randCoord(1e3))
// [ 655, 89 ]
Now we make an array of 1 million random coordinates -
const million =
Array.from(Array(1e6), _ => randCoord(1e3))
Now we make a function to filter all of the unique values using DeepMap, a tiny module I developed in this answer.
const uniq = (coords = []) =>
{ const m = new Map
const r = []
for (const c of coords)
if (!DeepMap.has(m, c))
{ DeepMap.set(m, c, true)
r.push(c)
}
return r
}
Because for and DeepMap have excellent performance, uniq can identify all of the unique values in less than one second -
console.time("uniq")
const result = uniq(million)
console.timeEnd("uniq")
console.log("uniq length:", result.length)
console.log("sample:", result.slice(0,10))
// uniq: 535 ms
// uniq length: 631970
// sample:
// [ [ 908, 719 ]
// , [ 532, 967 ]
// , [ 228, 689 ]
// , [ 942, 546 ]
// , [ 716, 180 ]
// , [ 456, 427 ]
// , [ 714, 79 ]
// , [ 315, 480 ]
// , [ 985, 499 ]
// , [ 212, 407 ]
// ]
Expand the snippet below to verify the results in your own browser -
const DeepMap =
{ has: (map, [ k, ...ks ]) =>
ks.length === 0
? map.has(k)
: map.has(k)
? DeepMap.has(map.get(k), ks)
: false
, set: (map, [ k, ...ks ], value) =>
ks.length === 0
? map.set(k, value)
: map.has(k)
? (DeepMap.set(map.get(k), ks, value), map)
: map.set(k, DeepMap.set(new Map, ks, value))
}
const Coord = (x, y) =>
[ x, y ]
const rand = x =>
Math.floor(Math.random() * x)
const randCoord = x =>
Coord(rand(x), rand(x))
const million =
Array.from(Array(1e6), _ => randCoord(1e3))
const uniq = (coords = []) =>
{ const m = new Map
const r = []
for (const c of coords)
if (!DeepMap.has(m, c))
{ DeepMap.set(m, c, true)
r.push(c)
}
return r
}
console.time("uniq")
const result = uniq(million)
console.timeEnd("uniq")
console.log("uniq length:", result.length)
console.log("sample:", result.slice(0,10))
// uniq: 535 ms
// uniq length: 631970
// sample:
// [ [ 908, 719 ]
// , [ 532, 967 ]
// , [ 228, 689 ]
// , [ 942, 546 ]
// , [ 716, 180 ]
// , [ 456, 427 ]
// , [ 714, 79 ]
// , [ 315, 480 ]
// , [ 985, 499 ]
// , [ 212, 407 ]
// ]
By generating smaller random coordinates, we can verify that uniq produces correct output. Below we generate coordinates up to [ 100, 100 ] for a maximum of 10,000 unique coordinates. When you run the program below, because the coordinates are generated at random, it's possible that result.length will be under 10,000, but it should never exceed it - in which case we'd know an invalid (duplicate) coordinate was added -
const million =
Array.from(Array(1e6), _ => randCoord(1e2))
console.time("uniq")
const result = uniq(million)
console.timeEnd("uniq")
console.log("uniq length:", result.length)
console.log("sample:", result.slice(0,10))
// uniq: 173 ms
// uniq length: 10000
// sample:
// [ [ 50, 60 ]
// , [ 18, 69 ]
// , [ 87, 10 ]
// , [ 8, 7 ]
// , [ 91, 41 ]
// , [ 48, 47 ]
// , [ 78, 28 ]
// , [ 39, 12 ]
// , [ 18, 84 ]
// , [ 0, 71 ]
// ]
Expand the snippet below to verify the results in your own browser -
const DeepMap =
{ has: (map, [ k, ...ks ]) =>
ks.length === 0
? map.has(k)
: map.has(k)
? DeepMap.has(map.get(k), ks)
: false
, set: (map, [ k, ...ks ], value) =>
ks.length === 0
? map.set(k, value)
: map.has(k)
? (DeepMap.set(map.get(k), ks, value), map)
: map.set(k, DeepMap.set(new Map, ks, value))
}
const Coord = (x, y) =>
[ x, y ]
const rand = x =>
Math.floor(Math.random() * x)
const randCoord = x =>
Coord(rand(x), rand(x))
const uniq = (coords = []) =>
{ const m = new Map
const r = []
for (const c of coords)
if (!DeepMap.has(m, c))
{ DeepMap.set(m, c, true)
r.push(c)
}
return r
}
const million =
Array.from(Array(1e6), _ => randCoord(1e2))
console.time("uniq")
const result = uniq(million)
console.timeEnd("uniq")
console.log("uniq length:", result.length)
console.log("sample:", result.slice(0,10))
// uniq: 173 ms
// uniq length: 10000
// sample:
// [ [ 50, 60 ]
// , [ 18, 69 ]
// , [ 87, 10 ]
// , [ 8, 7 ]
// , [ 91, 41 ]
// , [ 48, 47 ]
// , [ 78, 28 ]
// , [ 39, 12 ]
// , [ 18, 84 ]
// , [ 0, 71 ]
// ]
Lastly, I'll include the DeepMap module used here -
const DeepMap =
{ has: (map, [ k, ...ks ]) =>
ks.length === 0
? map.has(k)
: map.has(k)
? DeepMap.has(map.get(k), ks)
: false
, set: (map, [ k, ...ks ], value) =>
ks.length === 0
? map.set(k, value)
: map.has(k)
? (DeepMap.set(map.get(k), ks, value), map)
: map.set(k, DeepMap.set(new Map, ks, value))
, get: (map, [ k, ...ks ]) =>
// ...
, entries: function* (map, fields = [])
// ...
}
For a complete implementation, see the linked Q&A. Fwiw, I do think you will find the link interesting as it provides more context for the complexity of this problem.
You could try something like this. It would probably be helpful to do some benchmarking, or to consider doing it server side. That is a lot of data, and you will probably see most browsers hang:
points = ["test", "string", [1,1], [1,2],[1,3],[1,4],[1,2],[1,3],[1,4],[1,5],[1,6],[1,7],[1,8],[2,1],[2,1],[2,2],[1,1],[1,1],[1,1],[1,1],[1,1]];
t={};
unique = points.filter(e=>!(t[e]=e in t));
console.log(unique);
UPDATE
In short: You can use a Set to automatically create a collection of unique values (which is what differentiates Set from Map), if these values are in a suitable (e.g. comparable) format:
let collection = new Set(data.map((point) => point.toString()));
collection = [...collection].map((val) => val.split(','));
these two lines are enough to filter your 1 million + array to unique values in about 1 second. For a lengthier explanation, see the third example =)...
Original Answer
jQuery is mainly for DOM manipulation and helping with (older) browser quirks, not for dealing with big data! So, no, I would not recommend that; plus, it will slow down your processing even more. The question is: can you use modern JS (e.g. generator functions) in your app, or does it have to work in older browsers as well?
I'm not sure how this will go performance-wise with 1+ million entries, but let me know how this works (where data is your datMinified of course):
const data = [
'string',
'string',
[1, 2],
[1, 2],
[2, 3],
[3, 4],
[3, 4],
[4, 5],
];
data.splice(0, 2); // remove your 2 strings at the beginning
console.time('filtering with reduce');
let collection = data.reduce((acc, val) => {
const pointstr = val.toString();
if ( !acc.includes(pointstr) ) {
acc.push(pointstr);
}
return acc;
}, []);
collection.map((point) => point.split(','));
console.timeEnd('filtering with reduce');
console.log(`filtered data has ${collection.length} entries!`);
a generator function could help you to keep memory consumption down (maybe?) =), and it would spare you the .map() part at the end of the above example:
console.time('filtering with generator');
function* filter(arr) {
let filtered = [];
for (var i = 0, l = arr.length; i < l; i++ ) {
const pointstr = arr[i].toString();
if ( !filtered.includes(pointstr) ) {
filtered.push(pointstr);
yield arr[i];
}
}
}
let collection = [];
for (let point of filter(data)) {
collection.push(point);
}
console.timeEnd('filtering with generator');
console.log(`filtered data has ${collection.length} entries!`);
EDIT
both of the above are horrible in terms of performance. Here is a realistic scenario for your use case with 1'000'000 data points, and a significant improvement based on #user633183's suggestion to use a Set or Map. I chose a Set because it represents a collection of unique values, which is exactly what we want: it automatically takes care of the filtering for us (if the data is in the right form to identify duplicates, of course):
const randomBetween = (min,max) => Math.floor(Math.random()*(max-min+1)+min);
var data = Array(1000000);
for (var i = data.length; i; data[--i] = [randomBetween(1,1000), randomBetween(1, 1000)]);
console.log(`unfiltered data has ${data.length} entries!`);
console.time('filtering');
// create the Set with unique values by adding them as strings
// like that the Set will automatically filter duplicates
let collection = new Set(data.map((point) => point.toString()));
console.log(`filtered data has ${collection.size} entries!`);
// we still have to revert the toString() process here
// but we operate on the automatically filtered collection of points
// and this is fast!
collection = [...collection].map((val) => val.split(','));
console.log(`resulting data has ${collection.length} entries!`);
console.timeEnd('filtering');
thanks again #user633183, learned something today =)!
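Since a Map was also mentioned, here is a minimal sketch of that variant (not part of the original benchmark): it keys by the string form but keeps the original point array as the value, so the split(',') reversal step disappears entirely:

```javascript
// Sketch: deduplication with a Map instead of a Set. Keying by the
// string form while storing the original point as the value means we
// never have to reverse toString() afterwards.
const samplePoints = [[1, 2], [1, 2], [3, 4]];

// later duplicates simply overwrite the same key
const uniqueByKey = new Map(samplePoints.map((point) => [point.toString(), point]));
const deduped = [...uniqueByKey.values()];

console.log(`filtered data has ${deduped.length} entries!`);
// deduped contains the original arrays, not strings
```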
another option would be to combine the generator function with a Set like this:
console.time('filtering with generator and Set');
function* filterSet(arr) {
  let filtered = new Set();
  for (let i = 0, l = arr.length; i < l; i++) {
    const pointstr = arr[i].toString();
    if (!filtered.has(pointstr)) {
      filtered.add(pointstr);
      yield arr[i];
    }
  }
}
let collection = [];
for (let point of filterSet(data)) {
  collection.push(point);
}
console.timeEnd('filtering with generator and Set');
console.log(`filtered data has ${collection.length} entries!`);
This again spares you from having to reverse the .toString(), and it is just slightly faster than the "direct" new Set() approach.
To finish this up, here is an admittedly unscientific benchmark on my machine with 100'000 data points:
unfiltered data has 100000 entries!
filtering with reduce: 31946.634ms
filtered data has 95232 entries!
filtering with generator: 39533.802ms
filtered data has 95232 entries!
filtering with generator and Set: 107.893ms
filtered data has 95232 entries!
filtering with Set: 159.894ms
filtered data has 95232 entries!

map add/reduce two array object with same index

I have two arrays of objects, as follows:
var arr1 = [
  { name: 1, value: 10 },
  { name: 2, value: 15 }
];
var arr2 = [
  { name: 3, value: 5 },
  { name: 4, value: 3 }
];
I want to rename the keys and subtract the values of the entries at the same index.
Desired output:
var arr1 = [
  { itemLabel: 1, itemValue: 5 },
  { itemLabel: 2, itemValue: 12 }
];
I'm currently doing it like this:
formatData = arr1.map((row, index) => ({
  itemLabel: row.name,
  itemValue: row.value - arr2[index].value
}))
Is there any better way of doing this?
One-man army
A simple recursive program that handles everything in a single function. There's a clear mixture of concerns here which hurts the function's overall readability. We'll see one remedy for this problem below.
const main = ([x, ...xs], [y, ...ys]) =>
x === undefined || y === undefined
? []
: [ { itemLabel: x.name, itemValue: x.value - y.value } ] .concat (main (xs, ys))
const arr1 =
[ { name: 1, value: 10 }, { name: 2, value: 15 } ]
const arr2 =
[ { name: 3, value: 5 }, { name: 4, value: 3 } ]
console.log (main (arr1, arr2))
// [ { itemLabel: 1, itemValue: 5 },
// { itemLabel: 2, itemValue: 12 } ]
Thinking with types
This part of the answer is influenced by type theory and the Monoid structure from category theory – I won't go too far into it because I think the code can demonstrate itself.
So we have two types in our problem: We'll call them Foo and Bar
Foo – has name, and value fields
Bar – has itemLabel and itemValue fields
We can represent our "types" however we want, but I chose a simple function which constructs an object
const Foo = (name, value) =>
({ name
, value
})
const Bar = (itemLabel, itemValue) =>
({ itemLabel
, itemValue
})
Making values of a type
To construct new values of our type, we just apply our functions to the field values
const arr1 =
[ Foo (1, 10), Foo (2, 15) ]
const arr2 =
[ Foo (3, 5), Foo (4, 3) ]
Let's see the data we have so far
console.log (arr1)
// [ { name: 1, value: 10 },
// { name: 2, value: 15 } ]
console.log (arr2)
// [ { name: 3, value: 5 },
// { name: 4, value: 3 } ]
Some high-level planning
We're off to a great start. We have two arrays of Foo values. Our objective is to work through the two arrays by taking one Foo value from each array, combining them (more on this later), and then moving onto the next pair
const zip = ([ x, ...xs ], [ y, ...ys ]) =>
x === undefined || y === undefined
? []
: [ [ x, y ] ] .concat (zip (xs, ys))
console.log (zip (arr1, arr2))
// [ [ { name: 1, value: 10 },
// { name: 3, value: 5 } ],
// [ { name: 2, value: 15 },
// { name: 4, value: 3 } ] ]
Combining values: concat
With the Foo values properly grouped together, we can now focus more on what that combining process is. Here, I'm going to define a generic concat and then implement it on our Foo type
// generic concat
const concat = (m1, m2) =>
m1.concat (m2)
const Foo = (name, value) =>
({ name
, value
, concat: ({name:_, value:value2}) =>
// keep the name from the first, subtract value2 from value
Foo (name, value - value2)
})
console.log (concat (Foo (1, 10), Foo (3, 5)))
// { name: 1, value: 5, concat: [Function] }
Does concat sound familiar? Array and String are also Monoid types!
concat ([ 1, 2 ], [ 3, 4 ])
// [ 1, 2, 3, 4 ]
concat ('foo', 'bar')
// 'foobar'
Higher-order functions
So now we have a way to combine two Foo values together. The name of the first Foo is kept, and the value properties are subtracted. Now we apply this to each pair in our "zipped" result. Functional programmers love higher-order functions, so you'll appreciate this higher-order harmony
const apply = f => xs =>
f (...xs)
zip (arr1, arr2) .map (apply (concat))
// [ { name: 1, value: 5, concat: [Function] },
// { name: 2, value: 12, concat: [Function] } ]
Transforming types
So now we have the Foo values with the correct name and value values, but we want our final answer to be Bar values. A specialized constructor is all we need
Bar.fromFoo = ({ name, value }) =>
Bar (name, value)
Bar.fromFoo (Foo (1,2))
// { itemLabel: 1, itemValue: 2 }
zip (arr1, arr2)
.map (apply (concat))
.map (Bar.fromFoo)
// [ { itemLabel: 1, itemValue: 5 },
// { itemLabel: 2, itemValue: 12 } ]
Hard work pays off
A beautiful, pure functional expression. Our program reads very nicely; flow and transformation of the data is easy to follow thanks to the declarative style.
// main :: ([Foo], [Foo]) -> [Bar]
const main = (xs, ys) =>
zip (xs, ys)
.map (apply (concat))
.map (Bar.fromFoo)
And a complete code demo, of course
const Foo = (name, value) =>
({ name
, value
, concat: ({name:_, value:value2}) =>
Foo (name, value - value2)
})
const Bar = (itemLabel, itemValue) =>
({ itemLabel
, itemValue
})
Bar.fromFoo = ({ name, value }) =>
Bar (name, value)
const concat = (m1, m2) =>
m1.concat (m2)
const apply = f => xs =>
f (...xs)
const zip = ([ x, ...xs ], [ y, ...ys ]) =>
x === undefined || y === undefined
? []
: [ [ x, y ] ] .concat (zip (xs, ys))
const main = (xs, ys) =>
zip (xs, ys)
.map (apply (concat))
.map (Bar.fromFoo)
const arr1 =
[ Foo (1, 10), Foo (2, 15) ]
const arr2 =
[ Foo (3, 5), Foo (4, 3) ]
console.log (main (arr1, arr2))
// [ { itemLabel: 1, itemValue: 5 },
// { itemLabel: 2, itemValue: 12 } ]
Remarks
Our program above is implemented with a .map-.map chain which means handling and creating intermediate values multiple times. We also created an intermediate array of [[x1,y1], [x2,y2], ...] in our call to zip. Category theory gives us things like equational reasoning so we could replace m.map(f).map(g) with m.map(compose(f,g)) and achieve the same result. So there's room to improve this yet, but I think this is just enough to cut your teeth and start thinking about things in a different way.
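As an illustration of that fusion law, here is a sketch of main rewritten with a small compose helper so the pipeline traverses the zipped array only once (the other definitions are repeated from above so the snippet is self-contained):

```javascript
// compose two functions right-to-left: compose(f, g)(x) === f(g(x))
const compose = (f, g) => x => f(g(x))

// definitions repeated from the answer
const Foo = (name, value) =>
  ({ name
   , value
   , concat: ({ value: value2 }) => Foo(name, value - value2)
   })
const Bar = (itemLabel, itemValue) => ({ itemLabel, itemValue })
Bar.fromFoo = ({ name, value }) => Bar(name, value)
const concat = (m1, m2) => m1.concat(m2)
const apply = f => xs => f(...xs)
const zip = ([x, ...xs], [y, ...ys]) =>
  x === undefined || y === undefined
    ? []
    : [[x, y]].concat(zip(xs, ys))

// one .map instead of two: m.map(f).map(g) === m.map(compose(g, f))
const main = (xs, ys) =>
  zip(xs, ys).map(compose(Bar.fromFoo, apply(concat)))

console.log(main([Foo(1, 10), Foo(2, 15)], [Foo(3, 5), Foo(4, 3)]))
// [ { itemLabel: 1, itemValue: 5 },
//   { itemLabel: 2, itemValue: 12 } ]
```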
Your code is just fine, you could use recursion as well:
var arr1 = [
  { name: 1, value: 10 },
  { name: 2, value: 15 }
];
var arr2 = [
  { name: 3, value: 5 },
  { name: 4, value: 3 }
];
const createObject = (arr1, arr2, ret = []) => {
  if (arr1.length !== arr2.length) {
    throw new Error("Arrays should be the same length.");
  }
  // the base case must come before we read arr1[0],
  // otherwise an empty array would throw
  if (arr1.length === 0) {
    return ret;
  }
  const item = {
    itemLabel: arr1[0].name,
    itemValue: arr1[0].value - arr2[0].value
  };
  return createObject(arr1.slice(1), arr2.slice(1), ret.concat(item));
}
console.log(createObject(arr1,arr2));
Any callback implementing the map or reduce would have to reference arr1 or arr2 from outside its own scope (they are not passed to it as parameters), so strictly speaking it is not pure. But you can easily solve that with partial application:
var arr1 = [
  { name: 1, value: 10 },
  { name: 2, value: 15 }
];
var arr2 = [
  { name: 3, value: 5 },
  { name: 4, value: 3 }
];
const mapFunction = arr2 => (item, index) => {
  return {
    itemLabel: item.name,
    itemValue: item.value - arr2[index].value
  }
}
var createObject = (arr1, arr2) => {
  if (arr1.length !== arr2.length) {
    throw new Error("Arrays should be the same length.")
  }
  const mf = mapFunction(arr2);
  return arr1.map(mf);
}
console.log(createObject(arr1,arr2));
But as CodingIntrigue mentioned in the comment: none of these are any "better" than you've already done.
To make your solution more functional you need to change your anonymous function to a pure (anonymous) function.
A pure function is a function that, given the same input, will always return the same output
The anonymous function depends on the mutable variables arr1 and arr2. That means it depends on outside state, so it doesn't fit the pure function rule.
The following is maybe not the best implementation, but I hope it gives you an idea.
Let's Make it Pure
To make it pure we can pass the variables into the function as arguments
const mapWithObject = (obj2, obj1, index) => ({
itemLabel: obj1.name,
itemValue: obj1.value - obj2[index].value
})
// example call
const result = mapWithObject(arr2, arr1[0], 0)
Ok, but now the function doesn't fit into map anymore because it takes 3 arguments instead of 2.
Let's Curry it
const mapWithObject = obj2 => (obj1, index) => ({
itemLabel: obj1.name,
itemValue: obj1.value - obj2[index].value
})
const mapObject_with_arr2 = mapWithObject(arr2)
// example call
const result = mapObject_with_arr2(arr1[0], 0)
Full Code
const arr1 = [
  { name: 1, value: 10 },
  { name: 2, value: 15 }
]
const arr2 = [
  { name: 3, value: 5 },
  { name: 4, value: 3 }
]
const mapWithObject = obj2 => (obj1, index) => ({
itemLabel: obj1.name,
itemValue: obj1.value - obj2[index].value
})
const mapObject_with_arr2 = mapWithObject(arr2)
const mappedObject = arr1.map(mapObject_with_arr2)
console.log(mappedObject)
If you don't care too much about performance, but want to separate your concerns a bit further, you could use this approach:
Define a function that does the "pairing" between arr1 and arr2
[a, b, c] + [1, 2, 3] -> [ [ a, 1 ], [ b, 2 ], [ c, 3 ] ]
Define a function that clearly shows the merge strategy of two objects
{ a: 1, b: 10 } + { a: 2, b: 20 } -> { a: 1, b: -10 }
Define simple helpers that compose the two so you can pass your original arrays and be returned the desired output in one function call.
Here's an example:
var arr1=[{name:1,value:10},{name:2,value:15}],arr2=[{name:3,value:5},{name:4,value:3}];
// This is a very general method that bundles two
// arrays in an array of pairs. Put it in your utils
// and you can use it everywhere
const pairs = (arr1, arr2) => Array.from(
{ length: Math.max(arr1.length, arr2.length) },
(_, i) => [ arr1[i], arr2[i] ]
);
// This defines our merge strategy for two objects.
// Ideally, you should give it a better name, based
// on what the objects represent
const merge =
(base, ext) => ({
itemLabel: base.name,
itemValue: base.value - ext.value
});
// This is a helper that "applies" our merge method
// to an array of two items.
const mergePair = ([ base, ext ]) => merge(base, ext);
// Another helper that composes `pairs` and `mergePair`
// to allow you to pass your original data.
const mergeArrays = (arr1, arr2) => pairs(arr1, arr2).map(mergePair);
console.log(mergeArrays(arr1, arr2));
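One caveat worth noting: because pairs sizes its result with Math.max, arrays of unequal length yield pairs containing undefined, which would make merge throw. A minimal sketch of a hypothetical truncating variant, assuming that dropping the surplus items is acceptable for your data:

```javascript
// Variant of `pairs` that truncates to the shorter array instead of
// padding the longer one's tail with undefined partners.
const pairsTruncated = (arr1, arr2) => Array.from(
  { length: Math.min(arr1.length, arr2.length) },
  (_, i) => [ arr1[i], arr2[i] ]
);

console.log(pairsTruncated([1, 2, 3], ['a', 'b']));
// [ [ 1, 'a' ], [ 2, 'b' ] ]
```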
