How to find combinations of multidimensional array in order? [duplicate] - javascript

How would you implement the Cartesian product of multiple arrays in JavaScript?
As an example,
cartesian([1, 2], [10, 20], [100, 200, 300])
should return
[
[1, 10, 100],
[1, 10, 200],
[1, 10, 300],
[2, 10, 100],
[2, 10, 200]
...
]

2020 Update: 1-line (!) answer with vanilla JS
Original 2017 Answer: 2-line answer with vanilla JS:
(see updates below)
All of the answers here are overly complicated; most of them take 20 lines of code or even more.
This example uses just two lines of vanilla JavaScript, no lodash, underscore or other libraries:
let f = (a, b) => [].concat(...a.map(a => b.map(b => [].concat(a, b))));
let cartesian = (a, b, ...c) => b ? cartesian(f(a, b), ...c) : a;
Update:
This is the same as above but improved to strictly follow the Airbnb JavaScript Style Guide - validated using ESLint with eslint-config-airbnb-base:
const f = (a, b) => [].concat(...a.map(d => b.map(e => [].concat(d, e))));
const cartesian = (a, b, ...c) => (b ? cartesian(f(a, b), ...c) : a);
Special thanks to ZuBB for letting me know about linter problems with the original code.
Update 2020:
Since I wrote this answer, we have gotten even better built-ins that finally let us reduce (no pun intended) the code to just 1 line!
const cartesian =
(...a) => a.reduce((a, b) => a.flatMap(d => b.map(e => [d, e].flat())));
Special thanks to inker for suggesting the use of reduce.
Special thanks to Bergi for suggesting the use of the newly added flatMap.
Special thanks to ECMAScript 2019 for adding flat and flatMap to the language!
Example
This is the exact example from your question:
let output = cartesian([1,2],[10,20],[100,200,300]);
Output
This is the output of that command:
[ [ 1, 10, 100 ],
[ 1, 10, 200 ],
[ 1, 10, 300 ],
[ 1, 20, 100 ],
[ 1, 20, 200 ],
[ 1, 20, 300 ],
[ 2, 10, 100 ],
[ 2, 10, 200 ],
[ 2, 10, 300 ],
[ 2, 20, 100 ],
[ 2, 20, 200 ],
[ 2, 20, 300 ] ]
Demo
See demos on:
JS Bin with Babel (for old browsers)
JS Bin without Babel (for modern browsers)
Syntax
The syntax that I used here is nothing new.
My example uses the spread operator and the rest parameters - features of JavaScript defined in the 6th edition of the ECMA-262 standard, published in June 2015 and developed much earlier, better known as ES6 or ES2015. See the links below; a tiny illustration of both features follows them:
http://www.ecma-international.org/ecma-262/6.0/
https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Functions/rest_parameters
https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Operators/Spread_operator
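As a tiny standalone illustration of those two features (a sketch, not part of the original answer):
// rest parameters collect the remaining arguments into a real array
const tail = (first, ...rest) => rest;
console.log(tail(1, 2, 3)); // [2, 3]
// spread expands an array back into individual arguments
console.log(Math.max(...[1, 5, 3])); // 5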
The new methods from the Update 2020 example were added in ES2019:
http://www.ecma-international.org/ecma-262/10.0/
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/flat
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/flatMap
It makes code like this so simple that it's a sin not to use it. For old platforms that don't support it natively, you can always use Babel or other tools to transpile it to older syntax. In fact, my example transpiled by Babel is still shorter and simpler than most of the examples here, but that doesn't really matter, because the output of transpilation is not something that you need to understand or maintain; it's just a fact that I found interesting.
Conclusion
There's no need to write hundreds of lines of hard-to-maintain code, and there is no need to use entire libraries for such a simple thing, when two lines of vanilla JavaScript can easily get the job done. As you can see, it really pays off to use modern features of the language, and in cases where you need to support archaic platforms with no native support for those features, you can always use Babel, TypeScript or other tools to transpile the new syntax to the old one.
Don't code like it's 1995
JavaScript evolves, and it does so for a reason. TC39 does an amazing job of designing the language and adding new features, and the browser vendors do an amazing job of implementing those features.
To see the current state of native support of any given feature in the browsers, see:
http://caniuse.com/
https://kangax.github.io/compat-table/
To see the support in Node versions, see:
http://node.green/
To use modern syntax on platforms that don't support it natively, use Babel or TypeScript:
https://babeljs.io/
https://www.typescriptlang.org/

Here is a functional solution to the problem (without any mutable variable!) using reduce and flatten, provided by underscore.js:
function cartesianProductOf() {
return _.reduce(arguments, function(a, b) {
return _.flatten(_.map(a, function(x) {
return _.map(b, function(y) {
return x.concat([y]);
});
}), true);
}, [ [] ]);
}
// [[1,3,"a"],[1,3,"b"],[1,4,"a"],[1,4,"b"],[2,3,"a"],[2,3,"b"],[2,4,"a"],[2,4,"b"]]
console.log(cartesianProductOf([1, 2], [3, 4], ['a']));
<script src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore.js"></script>
Remark: This solution was inspired by http://cwestblog.com/2011/05/02/cartesian-product-of-multiple-arrays/

Here's a modified version of @viebel's code in plain JavaScript, without using any library:
function cartesianProduct(arr) {
return arr.reduce(function(a,b){
return a.map(function(x){
return b.map(function(y){
return x.concat([y]);
})
}).reduce(function(a,b){ return a.concat(b) },[])
}, [[]])
}
var a = cartesianProduct([[1, 2,3], [4, 5,6], [7, 8], [9,10]]);
console.log(JSON.stringify(a));

The following efficient generator function returns the cartesian product of all given iterables:
// Generate cartesian product of given iterables:
function* cartesian(head, ...tail) {
const remainder = tail.length > 0 ? cartesian(...tail) : [[]];
for (let r of remainder) for (let h of head) yield [h, ...r];
}
// Example:
console.log(...cartesian([1, 2], [10, 20], [100, 200, 300]));
It accepts arrays, strings, sets and all other objects implementing the iterable protocol.
Following the specification of the n-ary cartesian product, it yields:
[] if one or more given iterables are empty, e.g. [] or ''
[[a]] if a single iterable containing a single value a is given.
All other cases are handled as expected as demonstrated by the following test cases:
// Generate cartesian product of given iterables:
function* cartesian(head, ...tail) {
const remainder = tail.length > 0 ? cartesian(...tail) : [[]];
for (let r of remainder) for (let h of head) yield [h, ...r];
}
// Test cases:
console.log([...cartesian([])]); // []
console.log([...cartesian([1])]); // [[1]]
console.log([...cartesian([1, 2])]); // [[1], [2]]
console.log([...cartesian([1], [])]); // []
console.log([...cartesian([1, 2], [])]); // []
console.log([...cartesian([1], [2])]); // [[1, 2]]
console.log([...cartesian([1], [2], [3])]); // [[1, 2, 3]]
console.log([...cartesian([1, 2], [3, 4])]); // [[1, 3], [2, 3], [1, 4], [2, 4]]
console.log([...cartesian('')]); // []
console.log([...cartesian('ab', 'c')]); // [['a','c'], ['b', 'c']]
console.log([...cartesian([1, 2], 'ab')]); // [[1, 'a'], [2, 'a'], [1, 'b'], [2, 'b']]
console.log([...cartesian(new Set())]); // []
console.log([...cartesian(new Set([1]))]); // [[1]]
console.log([...cartesian(new Set([1, 1]))]); // [[1]]

It seems the community thinks this to be trivial and/or easy to find a reference implementation. However, upon brief inspection I couldn't find one... either that, or maybe it's just that I like re-inventing the wheel or solving classroom-like programming problems. Either way, it's your lucky day:
function cartProd(paramArray) { // note: paramArray is unused; the input arrays are read via the arguments object below
function addTo(curr, args) {
var i, copy,
rest = args.slice(1),
last = !rest.length,
result = [];
for (i = 0; i < args[0].length; i++) {
copy = curr.slice();
copy.push(args[0][i]);
if (last) {
result.push(copy);
} else {
result = result.concat(addTo(copy, rest));
}
}
return result;
}
return addTo([], Array.prototype.slice.call(arguments));
}
>> console.log(cartProd([1,2], [10,20], [100,200,300]));
>> [
[1, 10, 100], [1, 10, 200], [1, 10, 300], [1, 20, 100],
[1, 20, 200], [1, 20, 300], [2, 10, 100], [2, 10, 200],
[2, 10, 300], [2, 20, 100], [2, 20, 200], [2, 20, 300]
]
Full reference implementation that's relatively efficient… 😁
On efficiency: you could gain some by taking the if out of the loop and having two separate loops, since its condition is constant for a given call and you'd be helping with branch prediction and all that mess, but that point is kind of moot in JavaScript.
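For illustration only, a hedged sketch of that two-loop split (a restructuring of the addTo helper above, not part of the original answer):
function addTo(curr, args) {
  var i, copy,
      rest = args.slice(1),
      result = [];
  if (!rest.length) {
    // last input array: each choice completes a tuple
    for (i = 0; i < args[0].length; i++) {
      copy = curr.slice();
      copy.push(args[0][i]);
      result.push(copy);
    }
  } else {
    // more input arrays remain: recurse for each choice
    for (i = 0; i < args[0].length; i++) {
      copy = curr.slice();
      copy.push(args[0][i]);
      result = result.concat(addTo(copy, rest));
    }
  }
  return result;
}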

Here's a non-fancy, straightforward recursive solution:
function cartesianProduct(a) { // a = array of array
var i, j, l, m, a1, o = [];
if (!a || a.length == 0) return a;
a1 = a.splice(0, 1)[0]; // the first array of a
a = cartesianProduct(a);
for (i = 0, l = a1.length; i < l; i++) {
if (a && a.length)
for (j = 0, m = a.length; j < m; j++)
o.push([a1[i]].concat(a[j]));
else
o.push([a1[i]]);
}
return o;
}
console.log(cartesianProduct([[1, 2], [10, 20], [100, 200, 300]]));
// [
// [1,10,100],[1,10,200],[1,10,300],
// [1,20,100],[1,20,200],[1,20,300],
// [2,10,100],[2,10,200],[2,10,300],
// [2,20,100],[2,20,200],[2,20,300]
// ]

Here is a one-liner using the native ES2019 flatMap. No libraries needed, just a modern browser (or transpiler):
data.reduce((a, b) => a.flatMap(x => b.map(y => [...x, y])), [[]]);
It's essentially a modern version of viebel's answer, without lodash.
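For example, assuming data holds the arrays from the question:
const data = [[1, 2], [10, 20], [100, 200, 300]];
const product = data.reduce((a, b) => a.flatMap(x => b.map(y => [...x, y])), [[]]);
console.log(JSON.stringify(product));
// [[1,10,100],[1,10,200],[1,10,300],[1,20,100], ... ,[2,20,300]]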

Here is a recursive way that uses an ECMAScript 2015 generator function so you don't have to create all of the tuples at once:
function* cartesian() {
let arrays = arguments;
function* doCartesian(i, prod) {
if (i == arrays.length) {
yield prod;
} else {
for (let j = 0; j < arrays[i].length; j++) {
yield* doCartesian(i + 1, prod.concat([arrays[i][j]]));
}
}
}
yield* doCartesian(0, []);
}
console.log(JSON.stringify(Array.from(cartesian([1,2],[10,20],[100,200,300]))));
console.log(JSON.stringify(Array.from(cartesian([[1],[2]],[10,20],[100,200,300]))));

This is a pure ES6 solution using arrow functions:
function cartesianProduct(arr) {
return arr.reduce((a, b) =>
a.map(x => b.map(y => x.concat(y)))
.reduce((a, b) => a.concat(b), []), [[]]);
}
var arr = [[1, 2], [10, 20], [100, 200, 300]];
console.log(JSON.stringify(cartesianProduct(arr)));

functional programming
This question is tagged functional-programming so let's take a look at the List monad:
One application for this monadic list is representing nondeterministic computation. List can hold results for all execution paths in an algorithm...
Well that sounds like a perfect fit for cartesian. JavaScript gives us Array and the monadic binding function is Array.prototype.flatMap, so let's put them to use -
const cartesian = (...all) => {
const loop = (t, a, ...more) =>
a === undefined
? [ t ]
: a.flatMap(x => loop([ ...t, x ], ...more))
return loop([], ...all)
}
console.log(cartesian([1,2], [10,20], [100,200,300]))
[1,10,100]
[1,10,200]
[1,10,300]
[1,20,100]
[1,20,200]
[1,20,300]
[2,10,100]
[2,10,200]
[2,10,300]
[2,20,100]
[2,20,200]
[2,20,300]
more recursion
Other recursive implementations include -
const cartesian = (a, ...more) =>
a == null
? [[]]
: cartesian(...more).flatMap(c => a.map(v => [v,...c]))
for (const p of cartesian([1,2], [10,20], [100,200,300]))
console.log(JSON.stringify(p))
[1,10,100]
[2,10,100]
[1,20,100]
[2,20,100]
[1,10,200]
[2,10,200]
[1,20,200]
[2,20,200]
[1,10,300]
[2,10,300]
[1,20,300]
[2,20,300]
Note the different order above. You can get lexicographic order by inverting the two loops. Be careful, though, not to duplicate work by calling cartesian inside the loop as Nick's answer does -
const bind = (x, f) => f(x)
const cartesian = (a, ...more) =>
a == null
? [[]]
: bind(cartesian(...more), r => a.flatMap(v => r.map(c => [v,...c])))
for (const p of cartesian([1,2], [10,20], [100,200,300]))
console.log(JSON.stringify(p))
[1,10,100]
[1,10,200]
[1,10,300]
[1,20,100]
[1,20,200]
[1,20,300]
[2,10,100]
[2,10,200]
[2,10,300]
[2,20,100]
[2,20,200]
[2,20,300]
generators
Another option is to use generators. A generator is a good fit for combinatorics because the solution space can become very large. Generators offer lazy evaluation so they can be paused/resumed/canceled at any time -
function* cartesian(a, ...more) {
if (a == null) return yield []
for (const v of a)
for (const c of cartesian(...more)) // ⚠️
yield [v, ...c]
}
for (const p of cartesian([1,2], [10,20], [100,200,300]))
console.log(JSON.stringify(p))
[1,10,100]
[1,10,200]
[1,10,300]
[1,20,100]
[1,20,200]
[1,20,300]
[2,10,100]
[2,10,200]
[2,10,300]
[2,20,100]
[2,20,200]
[2,20,300]
Maybe you saw that we called cartesian in a loop in the generator. If you suspect that can be optimized, it can! Here we use a generic tee function that forks any iterator n times -
function* cartesian(a, ...more) {
if (a == null) return yield []
for (const t of tee(cartesian(...more), a.length)) // ✅
for (const v of a)
for (const c of t) // ✅
yield [v, ...c]
}
Where tee is implemented as -
function tee(g, n = 2) {
const memo = []
function* iter(i) {
while (true) {
if (i >= memo.length) {
const w = g.next()
if (w.done) return
memo.push(w.value)
}
else yield memo[i++]
}
}
return Array.from(Array(n), _ => iter(0))
}
Even in small tests, the cartesian generator implemented with tee performs about twice as fast.
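For a quick sanity check of tee on its own (a small sketch, not from the original answer), both forks replay the same values via the shared memo:
function* nums() { yield 1; yield 2; yield 3; }
const [left, right] = tee(nums(), 2);
console.log([...left]);  // [1, 2, 3]
console.log([...right]); // [1, 2, 3]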

Using typical backtracking with ES6 generators:
function cartesianProduct(...arrays) {
let current = new Array(arrays.length);
return (function* backtracking(index) {
if(index == arrays.length) yield current.slice();
else for(let num of arrays[index]) {
current[index] = num;
yield* backtracking(index+1);
}
})(0);
}
for (let item of cartesianProduct([1,2],[10,20],[100,200,300])) {
console.log('[' + item.join(', ') + ']');
}
Below is a similar version compatible with older browsers.
function cartesianProduct(arrays) {
var result = [],
current = new Array(arrays.length);
(function backtracking(index) {
if(index == arrays.length) return result.push(current.slice());
for(var i=0; i<arrays[index].length; ++i) {
current[index] = arrays[index][i];
backtracking(index+1);
}
})(0);
return result;
}
cartesianProduct([[1,2],[10,20],[100,200,300]]).forEach(function(item) {
console.log('[' + item.join(', ') + ']');
});

A single-statement approach, formatted with indentation for better readability.
result = data.reduce(
(a, b) => a.reduce(
(r, v) => r.concat(b.map(w => [].concat(v, w))),
[]
)
);
It takes a single array containing the arrays whose Cartesian product is wanted.
var data = [[1, 2], [10, 20], [100, 200, 300]],
result = data.reduce((a, b) => a.reduce((r, v) => r.concat(b.map(w => [].concat(v, w))), []));
console.log(result.map(a => a.join(' ')));

A CoffeeScript version with Lodash:
_ = require("lodash")
cartesianProduct = ->
return _.reduceRight(arguments, (a,b) ->
_.flatten(_.map(a,(x) -> _.map b, (y) -> x.concat(y)), true)
, [ [] ])

For those who need TypeScript (a reimplementation of @Danny's answer):
/**
* Calculates "Cartesian Product" sets.
* @example
* cartesianProduct([[1,2], [4,8], [16,32]])
* Returns:
* [
* [1, 4, 16],
* [1, 4, 32],
* [1, 8, 16],
* [1, 8, 32],
* [2, 4, 16],
* [2, 4, 32],
* [2, 8, 16],
* [2, 8, 32]
* ]
* @see https://stackoverflow.com/a/36234242/1955709
* @see https://en.wikipedia.org/wiki/Cartesian_product
* @param {T[][]} arr
* @returns {T[][]}
*/
function cartesianProduct<T> (arr: T[][]): T[][] {
return arr.reduce((a, b) => {
return a.map(x => {
return b.map(y => {
return x.concat(y)
})
}).reduce((c, d) => c.concat(d), [])
}, [[]] as T[][])
}

In my particular setting, the "old-fashioned" approach seemed to be more efficient than the methods based on more modern features. Below is the code (including a small comparison with other solutions posted in this thread by @rsp and @sebnukem), should it prove useful to someone else as well.
The idea is the following. Let's say we are constructing the outer product of N arrays, a_1,...,a_N, each of which has m_i components. The outer product of these arrays has M = m_1*m_2*...*m_N elements, and we can identify each of them with an N-dimensional vector whose components are non-negative integers, the i-th component being strictly bounded from above by m_i. For example, the vector (0, 0, ..., 0) corresponds to the combination in which one takes the first element from each array, while (m_1-1, m_2-1, ..., m_N-1) is identified with the combination in which one takes the last element from each array. Thus, in order to construct all M combinations, the function below consecutively constructs all such vectors and for each of them identifies the corresponding combination of the elements of the input arrays.
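For example, with three input arrays of lengths (2, 2, 3), the index vectors are generated in the order (0,0,0), (1,0,0), (0,1,0), (1,1,0), (0,0,1), (1,0,1), (0,1,1), (1,1,1), (0,0,2), and so on: the first component rolls over fastest, exactly like a mixed-radix counter.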
function cartesianProduct(){
const N = arguments.length;
var arr_lengths = Array(N);
var digits = Array(N);
var num_tot = 1;
for(var i = 0; i < N; ++i){
const len = arguments[i].length;
if(!len){
num_tot = 0;
break;
}
digits[i] = 0;
num_tot *= (arr_lengths[i] = len);
}
var ret = Array(num_tot);
for(var num = 0; num < num_tot; ++num){
var item = Array(N);
for(var j = 0; j < N; ++j){ item[j] = arguments[j][digits[j]]; }
ret[num] = item;
for(var idx = 0; idx < N; ++idx){
if(digits[idx] == arr_lengths[idx]-1){
digits[idx] = 0;
}else{
digits[idx] += 1;
break;
}
}
}
return ret;
}
//------------------------------------------------------------------------------
let _f = (a, b) => [].concat(...a.map(a => b.map(b => [].concat(a, b))));
let cartesianProduct_rsp = (a, b, ...c) => b ? cartesianProduct_rsp(_f(a, b), ...c) : a;
//------------------------------------------------------------------------------
function cartesianProduct_sebnukem(a) {
var i, j, l, m, a1, o = [];
if (!a || a.length == 0) return a;
a1 = a.splice(0, 1)[0];
a = cartesianProduct_sebnukem(a);
for (i = 0, l = a1.length; i < l; i++) {
if (a && a.length) for (j = 0, m = a.length; j < m; j++)
o.push([a1[i]].concat(a[j]));
else
o.push([a1[i]]);
}
return o;
}
//------------------------------------------------------------------------------
const L = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
const args = [L, L, L, L, L, L];
let fns = {
'cartesianProduct': function(args){ return cartesianProduct(...args); },
'cartesianProduct_rsp': function(args){ return cartesianProduct_rsp(...args); },
'cartesianProduct_sebnukem': function(args){ return cartesianProduct_sebnukem(args); }
};
Object.keys(fns).forEach(fname => {
console.time(fname);
const ret = fns[fname](args);
console.timeEnd(fname);
});
With node v6.12.2, I get the following timings:
cartesianProduct: 427.378ms
cartesianProduct_rsp: 1710.829ms
cartesianProduct_sebnukem: 593.351ms

You could reduce the 2D array. Use flatMap on the accumulator array to get acc.length x curr.length combinations in each iteration. [].concat(c, n) is used because c is a number in the first iteration and an array afterwards.
const data = [ [1, 2], [10, 20], [100, 200, 300] ];
const output = data.reduce((acc, curr) =>
acc.flatMap(c => curr.map(n => [].concat(c, n)))
)
console.log(JSON.stringify(output))
(This is based on Nina Scholz's answer)

Here's a recursive one-liner that works using only flatMap and map:
const inp = [
[1, 2],
[10, 20],
[100, 200, 300]
];
const cartesian = (first, ...rest) =>
rest.length ? first.flatMap(v => cartesian(...rest).map(c => [v].concat(c)))
: first;
console.log(cartesian(...inp));

A few of the answers under this topic fail when any of the input arrays contains an array item, so you'd better check for that.
Anyway, there is no need for Underscore or Lodash whatsoever. I believe this one should do it with pure JS ES6, as functional as it gets.
This piece of code uses a reduce and a nested map, simply to get the cartesian product of two arrays; however, the second array comes from a recursive call to the same function with one less array, hence a[0].cartesian(...a.slice(1)).
Array.prototype.cartesian = function(...a){
return a.length ? this.reduce((p,c) => (p.push(...a[0].cartesian(...a.slice(1)).map(e => a.length > 1 ? [c,...e] : [c,e])),p),[])
: this;
};
var arr = ['a', 'b', 'c'],
brr = [1,2,3],
crr = [[9],[8],[7]];
console.log(JSON.stringify(arr.cartesian(brr,crr)));

No libraries needed! :)
It needs arrow functions though, and it's probably not that efficient. :/
const flatten = (xs) =>
xs.flat(Infinity)
const binaryCartesianProduct = (xs, ys) =>
xs.map((xi) => ys.map((yi) => [xi, yi])).flat()
const cartesianProduct = (...xss) =>
xss.reduce(binaryCartesianProduct, [[]]).map(flatten)
console.log(cartesianProduct([1,2,3], [1,2,3], [1,2,3]))

Modern JavaScript in just a few lines. No external libraries or dependencies like Lodash.
function cartesian(...arrays) {
return arrays.reduce((a, b) => a.flatMap(x => b.map(y => x.concat([y]))), [ [] ]);
}
console.log(
cartesian([1, 2], [10, 20], [100, 200, 300])
.map(arr => JSON.stringify(arr))
.join('\n')
);

A more readable implementation
function productOfTwo(one, two) {
return one.flatMap(x => two.map(y => [].concat(x, y)));
}
function product(head = [], ...tail) {
if (tail.length === 0) return head;
return productOfTwo(head, product(...tail));
}
const test = product(
[1, 2, 3],
['a', 'b']
);
console.log(JSON.stringify(test));

Another, even more simplified, 2021-style answer using only reduce, map, and concat methods:
const cartesian = (...arr) => arr.reduce((a,c) => a.map(e => c.map(f => e.concat([f]))).reduce((a,c) => a.concat(c), []), [[]]);
console.log(cartesian([1, 2], [10, 20], [100, 200, 300]));

For those happy with a Ramda solution:
import { xprod, flatten } from 'ramda';
const cartessian = (...xs) => xs.reduce(xprod).map(flatten)
Or the same without dependencies, building the two Lego blocks (xprod and flatten) yourself:
const flatten = xs => xs.flat(Infinity); // deep flatten, like Ramda's flatten (a single-level flat() would break for 4 or more input arrays)
const xprod = (xs, ys) => xs.flatMap(x => ys.map(y => [x, y]));
const cartessian = (...xs) => xs.reduce(xprod).map(flatten);
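A quick usage check with the question's input (not part of the original answer):
console.log(cartessian([1, 2], [10, 20], [100, 200, 300]));
// [[1,10,100],[1,10,200],[1,10,300],[1,20,100],[1,20,200],[1,20,300],[2,10,100],[2,10,200],[2,10,300],[2,20,100],[2,20,200],[2,20,300]]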

Just as another option, a really simple implementation using the array's reduce (note that this one handles exactly two arrays and joins each pair into a string):
const array1 = ["day", "month", "year", "time"];
const array2 = ["from", "to"];
const process = (one, two) => [one, two].join(" ");
const product = array1.reduce((result, one) => result.concat(array2.map(two => process(one, two))), []);
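Logging product then shows the joined pairs:
console.log(product);
// ["day from", "day to", "month from", "month to", "year from", "year to", "time from", "time to"]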

A simple "mind and visually friendly" solution.
// t = [i, length]
const moveThreadForwardAt = (t, tCursor) => {
if (tCursor < 0)
return true; // reached end of first array
const newIndex = (t[tCursor][0] + 1) % t[tCursor][1];
t[tCursor][0] = newIndex;
if (newIndex == 0)
return moveThreadForwardAt(t, tCursor - 1);
return false;
}
const cartesianMult = (...args) => {
let result = [];
const t = Array.from(Array(args.length)).map((x, i) => [0, args[i].length]);
let reachedEndOfFirstArray = false;
while (false == reachedEndOfFirstArray) {
result.push(t.map((v, i) => args[i][v[0]]));
reachedEndOfFirstArray = moveThreadForwardAt(t, args.length - 1);
}
return result;
}
// cartesianMult(
// ['a1', 'b1', 'c1'],
// ['a2', 'b2'],
// ['a3', 'b3', 'c3'],
// ['a4', 'b4']
// );
console.log(cartesianMult(
['a1'],
['a2', 'b2'],
['a3', 'b3']
));

Yet another implementation. Not the shortest or fanciest, but fast:
function cartesianProduct() {
var arr = [].slice.call(arguments),
intLength = arr.length,
arrHelper = [1],
arrToReturn = [];
for (var i = arr.length - 1; i >= 0; i--) {
arrHelper.unshift(arrHelper[0] * arr[i].length);
}
for (var i = 0, l = arrHelper[0]; i < l; i++) {
arrToReturn.push([]);
for (var j = 0; j < intLength; j++) {
arrToReturn[i].push(arr[j][(i / arrHelper[j + 1] | 0) % arr[j].length]);
}
}
return arrToReturn;
}
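A quick check with the question's input (not part of the original answer):
console.log(JSON.stringify(cartesianProduct([1, 2], [10, 20], [100, 200, 300])));
// [[1,10,100],[1,10,200],[1,10,300],[1,20,100],[1,20,200],[1,20,300],[2,10,100],[2,10,200],[2,10,300],[2,20,100],[2,20,200],[2,20,300]]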

A simple, modified version of @viebel's code in plain JavaScript:
function cartesianProduct(...arrays) {
return arrays.reduce((a, b) => {
return [].concat(...a.map(x => {
const next = Array.isArray(x) ? x : [x];
return [].concat(b.map(y => next.concat(...[y])));
}));
});
}
const product = cartesianProduct([1, 2], [10, 20], [100, 200, 300]);
console.log(product);
/*
[ [ 1, 10, 100 ],
[ 1, 10, 200 ],
[ 1, 10, 300 ],
[ 1, 20, 100 ],
[ 1, 20, 200 ],
[ 1, 20, 300 ],
[ 2, 10, 100 ],
[ 2, 10, 200 ],
[ 2, 10, 300 ],
[ 2, 20, 100 ],
[ 2, 20, 200 ],
[ 2, 20, 300 ] ];
*/

f=(a,b,c)=>a.flatMap(ai=>b.flatMap(bi=>c.map(ci=>[ai,bi,ci])))
This is for 3 arrays.
Some answers gave a way for any number of arrays; this one can easily be contracted or expanded to fewer or more arrays (see the variants sketched after this answer).
I needed combinations of one set with repetitions, so I could have used:
f(a,a,a)
but used:
f=(a,b,c)=>a.flatMap(a1=>a.flatMap(a2=>a.map(a3=>[a1,a2,a3])))
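For example, the contracted and expanded variants follow the same pattern (the names g and h are hypothetical, used only for illustration):
g = (a, b) => a.flatMap(ai => b.map(bi => [ai, bi])); // 2 arrays
h = (a, b, c, d) => a.flatMap(ai => b.flatMap(bi => c.flatMap(ci => d.map(di => [ai, bi, ci, di])))); // 4 arrays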

A non-recursive approach that adds the ability to filter and modify the products before actually adding them to the result set (a usage sketch of those hooks follows the code).
Note the use of .map rather than .forEach; in some browsers, .map runs faster.
function crossproduct(arrays, rowtest, rowaction) {
// Calculate the number of elements needed in the result
var result_elems = 1,
row_size = arrays.length;
arrays.map(function(array) {
result_elems *= array.length;
});
var temp = new Array(result_elems),
result = [];
// Go through each array and add the appropriate
// element to each element of the temp
var scale_factor = result_elems;
arrays.map(function(array) {
var set_elems = array.length;
scale_factor /= set_elems;
for (var i = result_elems - 1; i >= 0; i--) {
temp[i] = (temp[i] ? temp[i] : []);
var pos = i / scale_factor % set_elems;
// deal with floating point results for indexes,
// this took a little experimenting
if (pos < 1 || pos % 1 <= .5) {
pos = Math.floor(pos);
} else {
pos = Math.min(array.length - 1, Math.ceil(pos));
}
temp[i].push(array[pos]);
if (temp[i].length === row_size) {
var pass = (rowtest ? rowtest(temp[i]) : true);
if (pass) {
if (rowaction) {
result.push(rowaction(temp[i]));
} else {
result.push(temp[i]);
}
}
}
}
});
return result;
}
console.log(
crossproduct([[1, 2], [10, 20], [100, 200, 300]],null,null)
)
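And a hedged sketch of the rowtest/rowaction hooks in use (the predicate and formatter here are made up purely for illustration):
console.log(
  crossproduct(
    [[1, 2], [10, 20], [100, 200, 300]],
    row => (row[0] + row[1] + row[2]) % 2 === 0, // rowtest: keep rows with an even sum
    row => row.join('-')                         // rowaction: turn each kept row into a string
  )
);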

Similar in spirit to the others, but highly readable, IMO.
function productOfTwo(a, b) {
return a.flatMap(c => b.map(d => [c, d].flat()));
}
[['a', 'b', 'c'], ['+', '-'], [1, 2, 3]].reduce(productOfTwo);
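For instance, logging the reduced result:
console.log(JSON.stringify([['a', 'b', 'c'], ['+', '-'], [1, 2, 3]].reduce(productOfTwo)));
// [["a","+",1],["a","+",2],["a","+",3],["a","-",1], ... ,["c","-",3]]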

Related


JavaScript opposite of .flat() [duplicate]

Let's say that I have a JavaScript array that looks like the following:
["Element 1","Element 2","Element 3",...]; // with close to a hundred elements.
What approach would be appropriate to chunk (split) the array into many smaller arrays with, let's say, at most 10 elements each?
The array.slice() method can extract a slice from the beginning, middle, or end of an array for whatever purposes you require, without changing the original array.
const chunkSize = 10;
for (let i = 0; i < array.length; i += chunkSize) {
const chunk = array.slice(i, i + chunkSize);
// do whatever
}
The last chunk may be smaller than chunkSize. For example when given an array of 12 elements the first chunk will have 10 elements, the second chunk only has 2.
Note that a chunkSize of 0 will cause an infinite loop.
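For example, a minimal sketch that collects the chunks (the array contents are assumed from the question):
const array = ["Element 1", "Element 2", "Element 3", "Element 4", "Element 5", "Element 6",
               "Element 7", "Element 8", "Element 9", "Element 10", "Element 11", "Element 12"];
const chunkSize = 10;
const chunks = [];
for (let i = 0; i < array.length; i += chunkSize) {
  chunks.push(array.slice(i, i + chunkSize));
}
console.log(chunks.length); // 2 - a chunk of 10 elements and a chunk of 2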
Here's an ES6 version using reduce:
const perChunk = 2 // items per chunk
const inputArray = ['a','b','c','d','e']
const result = inputArray.reduce((resultArray, item, index) => {
const chunkIndex = Math.floor(index/perChunk)
if(!resultArray[chunkIndex]) {
resultArray[chunkIndex] = [] // start a new chunk
}
resultArray[chunkIndex].push(item)
return resultArray
}, [])
console.log(result); // result: [['a','b'], ['c','d'], ['e']]
And you're ready to chain further map/reduce transformations.
Your input array is left intact.
If you prefer a shorter but less readable version, you can sprinkle some concat into the mix for the same end result:
inputArray.reduce((all,one,i) => {
const ch = Math.floor(i/perChunk);
all[ch] = [].concat((all[ch]||[]),one);
return all
}, [])
You can use the remainder operator to put consecutive items into different chunks (the grouping changes, as sketched below):
const ch = (i % perChunk);
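For example, with that single change the same reduce distributes items round-robin instead of keeping them consecutive (a sketch reusing perChunk and inputArray from above):
const roundRobin = inputArray.reduce((all, one, i) => {
  const ch = i % perChunk; // remainder instead of Math.floor(i / perChunk)
  all[ch] = [].concat(all[ch] || [], one);
  return all;
}, []);
console.log(roundRobin); // [['a','c','e'], ['b','d']]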
Modified from an answer by dbaseman: https://stackoverflow.com/a/10456344/711085
Object.defineProperty(Array.prototype, 'chunk_inefficient', {
value: function(chunkSize) {
var array = this;
return [].concat.apply([],
array.map(function(elem, i) {
return i % chunkSize ? [] : [array.slice(i, i + chunkSize)];
})
);
}
});
console.log(
[1, 2, 3, 4, 5, 6, 7].chunk_inefficient(3)
)
// [[1, 2, 3], [4, 5, 6], [7]]
minor addendum:
I should point out that the above is a not-that-elegant (in my mind) workaround to use Array.map. It basically does the following, where ~ is concatenation:
[[1,2,3]]~[]~[]~[] ~ [[4,5,6]]~[]~[]~[] ~ [[7]]
It has the same asymptotic running time as the method below, but perhaps a worse constant factor due to building empty lists. One could rewrite this as follows (mostly the same as Blazemonger's method, which is why I did not originally submit this answer):
More efficient method:
// refresh page if experimenting and you already defined Array.prototype.chunk
Object.defineProperty(Array.prototype, 'chunk', {
value: function(chunkSize) {
var R = [];
for (var i = 0; i < this.length; i += chunkSize)
R.push(this.slice(i, i + chunkSize));
return R;
}
});
console.log(
[1, 2, 3, 4, 5, 6, 7].chunk(3)
)
My preferred way nowadays is the above, or one of the following:
Array.range = function(n) {
// Array.range(5) --> [0,1,2,3,4]
return Array.apply(null,Array(n)).map((x,i) => i)
};
Object.defineProperty(Array.prototype, 'chunk', {
value: function(n) {
// ACTUAL CODE FOR CHUNKING ARRAY:
return Array.range(Math.ceil(this.length/n)).map((x,i) => this.slice(i*n,i*n+n));
}
});
Demo:
> JSON.stringify( Array.range(10).chunk(3) );
[[0,1,2],[3,4,5],[6,7,8],[9]]
Or if you don't want an Array.range function, it's actually just a one-liner (excluding the fluff):
var ceil = Math.ceil;
Object.defineProperty(Array.prototype, 'chunk', {value: function(n) {
return Array(ceil(this.length/n)).fill().map((_,i) => this.slice(i*n,i*n+n));
}});
or
Object.defineProperty(Array.prototype, 'chunk', {value: function(n) {
return Array.from(Array(ceil(this.length/n)), (_,i)=>this.slice(i*n,i*n+n));
}});
Try to avoid mucking with native prototypes, including Array.prototype, if you don't know who will be consuming your code (3rd parties, coworkers, yourself at a later date, etc.).
There are ways to safely extend prototypes (but not in all browsers) and there are ways to safely consume objects created from extended prototypes, but a better rule of thumb is to follow the Principle of Least Surprise and avoid these practices altogether.
If you have some time, watch Andrew Dupont's JSConf 2011 talk, "Everything is Permitted: Extending Built-ins", for a good discussion about this topic.
But back to the question: while the solutions above will work, they are overly complex and require unnecessary computational overhead. Here is my solution:
function chunk (arr, len) {
var chunks = [],
i = 0,
n = arr.length;
while (i < n) {
chunks.push(arr.slice(i, i += len));
}
return chunks;
}
// Optionally, you can do the following to avoid cluttering the global namespace:
Array.chunk = chunk;
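A quick usage check:
console.log(chunk([1, 2, 3, 4, 5, 6, 7], 3)); // [[1, 2, 3], [4, 5, 6], [7]]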
Using generators
function* chunks(arr, n) {
for (let i = 0; i < arr.length; i += n) {
yield arr.slice(i, i + n);
}
}
let someArray = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
console.log([...chunks(someArray, 2)]) // [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
Can be typed with Typescript like so:
function* chunks<T>(arr: T[], n: number): Generator<T[], void> {
for (let i = 0; i < arr.length; i += n) {
yield arr.slice(i, i + n);
}
}
I tested the different answers on jsperf.com. The results are available here: https://web.archive.org/web/20150909134228/https://jsperf.com/chunk-mtds
And the fastest function (which also works from IE8) is this one:
function chunk(arr, chunkSize) {
if (chunkSize <= 0) throw "Invalid chunk size";
var R = [];
for (var i=0,len=arr.length; i<len; i+=chunkSize)
R.push(arr.slice(i,i+chunkSize));
return R;
}
Splice version using ES6
let [list,chunkSize] = [[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15], 6];
list = [...Array(Math.ceil(list.length / chunkSize))].map(_ => list.splice(0,chunkSize))
console.log(list);
I'd prefer to use splice method:
var chunks = function(array, size) {
var results = [];
while (array.length) {
results.push(array.splice(0, size));
}
return results;
};
Nowadays you can use lodash's chunk function to split the array into smaller arrays: https://lodash.com/docs#chunk. No need to fiddle with loops anymore!
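For example, a quick usage sketch (assuming lodash has been installed and required; the variable name _ is just convention):
const _ = require('lodash');
// _.chunk splits the array into groups of the given size; the last group holds the leftovers
console.log(_.chunk([1, 2, 3, 4, 5, 6, 7], 3));
// => [[1, 2, 3], [4, 5, 6], [7]]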
Old question, new answer! I was actually working with an answer from this question and had a friend improve on it, so here it is:
Array.prototype.chunk = function ( n ) {
if ( !this.length ) {
return [];
}
return [ this.slice( 0, n ) ].concat( this.slice(n).chunk(n) );
};
[1,2,3,4,5,6,7,8,9,0].chunk(3);
> [[1,2,3],[4,5,6],[7,8,9],[0]]
One more solution using Array.prototype.reduce():
const chunk = (array, size) =>
array.reduce((acc, _, i) => {
if (i % size === 0) acc.push(array.slice(i, i + size))
return acc
}, [])
// Usage:
const numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
const chunked = chunk(numbers, 3)
console.log(chunked)
This solution is very similar to the solution by Steve Holgado. However, because this solution doesn't utilize array spreading and doesn't create new arrays in the reducer function, it's faster (see jsPerf test) and subjectively more readable (simpler syntax) than the other solution.
At every nth iteration (where n = size; starting at the first iteration), the accumulator array (acc) is appended with a chunk of the array (array.slice(i, i + size)) and then returned. At other iterations, the accumulator array is returned as-is.
If size is zero, the method returns an empty array. If size is negative, the method returns broken results. So, if needed in your case, you may want to do something about negative or non-positive size values.
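If you want such a guard, a minimal sketch (my addition, not part of the original solution; safeChunk is a made-up name) could wrap the function above:
const safeChunk = (array, size) => {
  // Reject zero, negative and non-integer sizes up front
  if (!Number.isInteger(size) || size < 1) throw new RangeError('size must be a positive integer')
  return chunk(array, size) // the reduce-based chunk defined above
}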
If speed is important in your case, a simple for loop would be faster than using reduce() (see the jsPerf test), and some may find this style more readable as well:
function chunk(array, size) {
// This prevents infinite loops
if (size < 1) throw new Error('Size must be positive')
const result = []
for (let i = 0; i < array.length; i += size) {
result.push(array.slice(i, i + size))
}
return result
}
There have been many answers but this is what I use:
const chunk = (arr, size) =>
arr
.reduce((acc, _, i) =>
(i % size)
? acc
: [...acc, arr.slice(i, i + size)]
, [])
// USAGE
const numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
chunk(numbers, 3)
// [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10]]
First, check for a remainder when dividing the index by the chunk size.
If there is a remainder then just return the accumulator array.
If there is no remainder then the index is divisible by the chunk size, so take a slice from the original array (starting at the current index) and add it to the accumulator array.
So, the returned accumulator array for each iteration of reduce looks something like this:
// 0: [[1, 2, 3]]
// 1: [[1, 2, 3]]
// 2: [[1, 2, 3]]
// 3: [[1, 2, 3], [4, 5, 6]]
// 4: [[1, 2, 3], [4, 5, 6]]
// 5: [[1, 2, 3], [4, 5, 6]]
// 6: [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
// 7: [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
// 8: [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
// 9: [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10]]
I think this is a nice recursive solution with ES6 syntax:
const chunk = function(array, size) {
if (!array.length) {
return [];
}
const head = array.slice(0, size);
const tail = array.slice(size);
return [head, ...chunk(tail, size)];
};
console.log(chunk([1,2,3], 2));
ONE-LINER
const chunk = (a,n)=>[...Array(Math.ceil(a.length/n))].map((_,i)=>a.slice(n*i,n+n*i));
For TypeScript
const chunk = <T>(arr: T[], size: number): T[][] =>
[...Array(Math.ceil(arr.length / size))].map((_, i) =>
arr.slice(size * i, size + size * i)
);
DEMO
const chunk = (a,n)=>[...Array(Math.ceil(a.length/n))].map((_,i)=>a.slice(n*i,n+n*i));
document.write(JSON.stringify(chunk([1, 2, 3, 4], 2)));
Chunk By Number Of Groups
const part=(a,n)=>[...Array(n)].map((_,i)=>a.slice(i*Math.ceil(a.length/n),(i+1)*Math.ceil(a.length/n)));
For TypeScript
const part = <T>(a: T[], n: number): T[][] => {
const b = Math.ceil(a.length / n);
return [...Array(n)].map((_, i) => a.slice(i * b, (i + 1) * b));
};
DEMO
const part = (a, n) => {
const b = Math.ceil(a.length / n);
return [...Array(n)].map((_, i) => a.slice(i * b, (i + 1) * b));
};
document.write(JSON.stringify(part([1, 2, 3, 4, 5, 6], 2))+'<br/>');
document.write(JSON.stringify(part([1, 2, 3, 4, 5, 6, 7], 2)));
Ok, let's start with a fairly tight one:
function chunk(arr, n) {
return arr.slice(0,(arr.length+n-1)/n|0).
map(function(c,i) { return arr.slice(n*i,n*i+n); });
}
Which is used like this:
chunk([1,2,3,4,5,6,7], 2);
Then we have this tight reducer function:
function chunker(p, c, i) {
(p[i/this|0] = p[i/this|0] || []).push(c);
return p;
}
Which is used like this:
[1,2,3,4,5,6,7].reduce(chunker.bind(3),[]);
Since a kitten dies when we bind this to a number, we can do manual currying like this instead:
// Fluent alternative API without prototype hacks.
function chunker(n) {
return function(p, c, i) {
(p[i/n|0] = p[i/n|0] || []).push(c);
return p;
};
}
Which is used like this:
[1,2,3,4,5,6,7].reduce(chunker(3),[]);
Then the still pretty tight function which does it all in one go:
function chunk(arr, n) {
return arr.reduce(function(p, cur, i) {
(p[i/n|0] = p[i/n|0] || []).push(cur);
return p;
},[]);
}
chunk([1,2,3,4,5,6,7], 3);
I aimed at creating a simple non-mutating solution in pure ES6. Peculiarities in JavaScript make it necessary to fill the empty array before mapping :-(
function chunk(a, l) {
return new Array(Math.ceil(a.length / l)).fill(0)
.map((_, n) => a.slice(n*l, n*l + l));
}
This version with recursion seems simpler and more compelling:
function chunk(a, l) {
if (a.length == 0) return [];
else return [a.slice(0, l)].concat(chunk(a.slice(l), l));
}
The ridiculously weak array functions of ES6 make for good puzzles :-)
Created an npm package for this: https://www.npmjs.com/package/array.chunk
var result = [];
for (var i = 0; i < arr.length; i += size) {
result.push(arr.slice(i, size + i));
}
return result;
When using a TypedArray
var result = [];
for (var i = 0; i < arr.length; i += size) {
result.push(arr.subarray(i, size + i));
}
return result;
Use Array.prototype.splice() and keep splicing until the array has no elements left.
Array.prototype.chunk = function(size) {
let result = [];
while(this.length) {
result.push(this.splice(0, size));
}
return result;
}
const arr = [1, 2, 3, 4, 5, 6, 7, 8, 9];
console.log(arr.chunk(2));
Update
Array.prototype.splice() mutates the original array, so after performing chunk() the original array (arr) becomes [].
So if you want to keep the original array untouched, copy the arr data into another array and do the same thing.
Array.prototype.chunk = function(size) {
let data = [...this];
let result = [];
while(data.length) {
result.push(data.splice(0, size));
}
return result;
}
const arr = [1, 2, 3, 4, 5, 6, 7, 8, 9];
console.log('chunked:', arr.chunk(2));
console.log('original', arr);
P.S.: Thanks to @mts-knn for mentioning the matter.
I recommend using lodash. Chunking is one of many useful functions there.
Instructions:
npm i --save lodash
Include in your project:
import * as _ from 'lodash';
Usage:
const arrayOfElements = ["Element 1","Element 2","Element 3", "Element 4", "Element 5","Element 6","Element 7","Element 8","Element 9","Element 10","Element 11","Element 12"]
const chunkedElements = _.chunk(arrayOfElements, 10)
You can find my sample here:
https://playcode.io/659171/
The following ES2015 approach works without having to define a function and directly on anonymous arrays (example with chunk size 2):
[11,22,33,44,55].map((_, i, all) => all.slice(2*i, 2*i+2)).filter(x=>x.length)
If you want to define a function for this, you could do it as follows (improving on K._'s comment on Blazemonger's answer):
const array_chunks = (array, chunk_size) => array
.map((_, i, all) => all.slice(i*chunk_size, (i+1)*chunk_size))
.filter(x => x.length)
If you use ECMAScript version >= 5.1, you can implement a functional version of chunk() using array.reduce() with O(N) complexity:
function chunk(chunkSize, array) {
return array.reduce(function(previous, current) {
var chunk;
if (previous.length === 0 ||
previous[previous.length -1].length === chunkSize) {
chunk = []; // 1
previous.push(chunk); // 2
}
else {
chunk = previous[previous.length -1]; // 3
}
chunk.push(current); // 4
return previous; // 5
}, []); // 6
}
console.log(chunk(2, ['a', 'b', 'c', 'd', 'e']));
// prints [ [ 'a', 'b' ], [ 'c', 'd' ], [ 'e' ] ]
Explanation of each numbered // comment above:
Create a new chunk if the previous value, i.e. the previously returned array of chunks, is empty or if the last previous chunk has chunkSize items
Add the new chunk to the array of existing chunks
Otherwise, the current chunk is the last chunk in the array of chunks
Add the current value to the chunk
Return the modified array of chunks
Initialize the reduction by passing an empty array
Currying based on chunkSize:
var chunk3 = function(array) {
return chunk(3, array);
};
console.log(chunk3(['a', 'b', 'c', 'd', 'e']));
// prints [ [ 'a', 'b', 'c' ], [ 'd', 'e' ] ]
You can add the chunk() function to the global Array object:
Object.defineProperty(Array.prototype, 'chunk', {
value: function(chunkSize) {
return this.reduce(function(previous, current) {
var chunk;
if (previous.length === 0 ||
previous[previous.length -1].length === chunkSize) {
chunk = [];
previous.push(chunk);
}
else {
chunk = previous[previous.length -1];
}
chunk.push(current);
return previous;
}, []);
}
});
console.log(['a', 'b', 'c', 'd', 'e'].chunk(4));
// prints [ [ 'a', 'b', 'c' 'd' ], [ 'e' ] ]
Use chunk from lodash
lodash.chunk(arr,<size>).forEach(chunk=>{
console.log(chunk);
})
JavaScript:
function splitToBulks(arr, bulkSize = 20) {
const bulks = [];
for (let i = 0; i < Math.ceil(arr.length / bulkSize); i++) {
bulks.push(arr.slice(i * bulkSize, (i + 1) * bulkSize));
}
return bulks;
}
console.log(splitToBulks([1, 2, 3, 4, 5, 6, 7], 3));
TypeScript:
function splitToBulks<T>(arr: T[], bulkSize: number = 20): T[][] {
const bulks: T[][] = [];
for (let i = 0; i < Math.ceil(arr.length / bulkSize); i++) {
bulks.push(arr.slice(i * bulkSize, (i + 1) * bulkSize));
}
return bulks;
}
const results = [];
const chunkSize = 10;
// splice mutates the source array, consuming it as it is chunked
while (array.length > 0) {
  results.push(array.splice(0, chunkSize));
}
The one-liner in pure JavaScript:
function chunks(array, size) {
return Array.apply(0,{length: Math.ceil(array.length / size)}).map((_, index) => array.slice(index*size, (index+1)*size))
}
// The following will group letters of the alphabet by 4
console.log(chunks([...Array(26)].map((x,i)=>String.fromCharCode(i + 97)), 4))
Here is an example where I split an array into chunks of 2 elements, simply by splicing chunks out of the array until the original array is empty.
const array = [86,133,87,133,88,133,89,133,90,133];
const new_array = [];
const chunksize = 2;
while (array.length) {
const chunk = array.splice(0,chunksize);
new_array.push(chunk);
}
console.log(new_array)
You can use the Array.prototype.reduce function to do this in one line.
let arr = [1,2,3,4];
function chunk(arr, size)
{
let result = arr.reduce((rows, key, index) => (index % size == 0 ? rows.push([key]) : rows[rows.length-1].push(key)) && rows, []);
return result;
}
console.log(chunk(arr,2));
const array = ['a', 'b', 'c', 'd', 'e'];
const size = 2;
const chunks = [];
while (array.length) {
chunks.push(array.splice(0, size));
}
console.log(chunks);
in coffeescript:
b = (a.splice(0, len) while a.length)
demo
a = [1, 2, 3, 4, 5, 6, 7]
b = (a.splice(0, 2) while a.length)
[ [ 1, 2 ],
[ 3, 4 ],
[ 5, 6 ],
[ 7 ] ]
And this would be my contribution to this topic. I guess .reduce() is the best way.
var segment = (arr, n) => arr.reduce((r,e,i) => i%n ? (r[r.length-1].push(e), r)
: (r.push([e]), r), []),
arr = Array.from({length: 31}).map((_,i) => i+1);
res = segment(arr,7);
console.log(JSON.stringify(res));
But the above implementation is not very efficient, since .reduce() runs through the whole arr array. A more efficient approach (very close to the fastest imperative solution) is to iterate over the result (to-be-chunked) array, since we can calculate its size in advance with Math.ceil(arr.length/n). Once we have the empty result array, like Array(Math.ceil(arr.length/n)).fill(), the rest is to map slices of the arr array into it.
function chunk(arr,n){
var r = Array(Math.ceil(arr.length/n)).fill();
return r.map((e,i) => arr.slice(i*n, i*n+n));
}
arr = Array.from({length: 31},(_,i) => i+1);
res = chunk(arr,7);
console.log(JSON.stringify(res));
So far so good, but we can still simplify the above snippet further.
var chunk = (a,n) => Array.from({length: Math.ceil(a.length/n)}, (_,i) => a.slice(i*n, i*n+n)),
arr = Array.from({length: 31},(_,i) => i+1),
res = chunk(arr,7);
console.log(JSON.stringify(res));

Array values to be display as per the condition [duplicate]

Let's say that I have a JavaScript array looking like the following:
["Element 1","Element 2","Element 3",...]; // with close to a hundred elements.
What approach would be appropriate to chunk (split) the array into many smaller arrays with, let's say, 10 elements at most?
The array.slice() method can extract a slice from the beginning, middle, or end of an array for whatever purposes you require, without changing the original array.
const chunkSize = 10;
for (let i = 0; i < array.length; i += chunkSize) {
const chunk = array.slice(i, i + chunkSize);
// do whatever
}
The last chunk may be smaller than chunkSize. For example, given an array of 12 elements and a chunkSize of 10, the first chunk will have 10 elements and the second only 2.
Note that a chunkSize of 0 will cause an infinite loop.
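For instance, a small sketch (my addition; toChunks is just an illustrative name) that collects the chunks into an array and guards against that case:
function toChunks(array, chunkSize) {
  // Guard against the infinite-loop case mentioned above
  if (chunkSize <= 0) throw new Error('chunkSize must be greater than 0');
  const chunks = [];
  for (let i = 0; i < array.length; i += chunkSize) {
    chunks.push(array.slice(i, i + chunkSize));
  }
  return chunks;
}
console.log(toChunks([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], 10));
// => [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [11, 12]]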
Here's an ES6 version using reduce:
const perChunk = 2 // items per chunk
const inputArray = ['a','b','c','d','e']
const result = inputArray.reduce((resultArray, item, index) => {
const chunkIndex = Math.floor(index/perChunk)
if(!resultArray[chunkIndex]) {
resultArray[chunkIndex] = [] // start a new chunk
}
resultArray[chunkIndex].push(item)
return resultArray
}, [])
console.log(result); // result: [['a','b'], ['c','d'], ['e']]
And you're ready to chain further map/reduce transformations.
Your input array is left intact
If you prefer a shorter but less readable version, you can sprinkle some concat into the mix for the same end result:
inputArray.reduce((all,one,i) => {
const ch = Math.floor(i/perChunk);
all[ch] = [].concat((all[ch]||[]),one);
return all
}, [])

How to find all possible combinations of an Array of Arrays [duplicate]

How would you implement the Cartesian product of multiple arrays in JavaScript?
As an example,
cartesian([1, 2], [10, 20], [100, 200, 300])
should return
[
[1, 10, 100],
[1, 10, 200],
[1, 10, 300],
[2, 10, 100],
[2, 10, 200]
...
]
Here is a functional solution to the problem (without any mutable variable!) using reduce and flatten, provided by underscore.js:
function cartesianProductOf() {
return _.reduce(arguments, function(a, b) {
return _.flatten(_.map(a, function(x) {
return _.map(b, function(y) {
return x.concat([y]);
});
}), true);
}, [ [] ]);
}
// [[1,3,"a"],[1,3,"b"],[1,4,"a"],[1,4,"b"],[2,3,"a"],[2,3,"b"],[2,4,"a"],[2,4,"b"]]
console.log(cartesianProductOf([1, 2], [3, 4], ['a']));
<script src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore.js"></script>
Remark: This solution was inspired by http://cwestblog.com/2011/05/02/cartesian-product-of-multiple-arrays/
Here's a modified version of @viebel's code in plain JavaScript, without using any library:
function cartesianProduct(arr) {
return arr.reduce(function(a,b){
return a.map(function(x){
return b.map(function(y){
return x.concat([y]);
})
}).reduce(function(a,b){ return a.concat(b) },[])
}, [[]])
}
var a = cartesianProduct([[1, 2,3], [4, 5,6], [7, 8], [9,10]]);
console.log(JSON.stringify(a));
The following efficient generator function returns the cartesian product of all given iterables:
// Generate cartesian product of given iterables:
function* cartesian(head, ...tail) {
const remainder = tail.length > 0 ? cartesian(...tail) : [[]];
for (let r of remainder) for (let h of head) yield [h, ...r];
}
// Example:
console.log(...cartesian([1, 2], [10, 20], [100, 200, 300]));
It accepts arrays, strings, sets and all other objects implementing the iterable protocol.
Following the specification of the n-ary cartesian product it yields
[] if one or more given iterables are empty, e.g. [] or ''
[[a]] if a single iterable containing a single value a is given.
All other cases are handled as expected as demonstrated by the following test cases:
// Generate cartesian product of given iterables:
function* cartesian(head, ...tail) {
const remainder = tail.length > 0 ? cartesian(...tail) : [[]];
for (let r of remainder) for (let h of head) yield [h, ...r];
}
// Test cases:
console.log([...cartesian([])]); // []
console.log([...cartesian([1])]); // [[1]]
console.log([...cartesian([1, 2])]); // [[1], [2]]
console.log([...cartesian([1], [])]); // []
console.log([...cartesian([1, 2], [])]); // []
console.log([...cartesian([1], [2])]); // [[1, 2]]
console.log([...cartesian([1], [2], [3])]); // [[1, 2, 3]]
console.log([...cartesian([1, 2], [3, 4])]); // [[1, 3], [2, 3], [1, 4], [2, 4]]
console.log([...cartesian('')]); // []
console.log([...cartesian('ab', 'c')]); // [['a','c'], ['b', 'c']]
console.log([...cartesian([1, 2], 'ab')]); // [[1, 'a'], [2, 'a'], [1, 'b'], [2, 'b']]
console.log([...cartesian(new Set())]); // []
console.log([...cartesian(new Set([1]))]); // [[1]]
console.log([...cartesian(new Set([1, 1]))]); // [[1]]
It seems the community thinks this to be trivial and/or easy to find a reference implementation. However, upon brief inspection I couldn't find one … either that, or maybe it's just that I like re-inventing the wheel or solving classroom-like programming problems. Either way, it's your lucky day:
function cartProd(paramArray) {
function addTo(curr, args) {
var i, copy,
rest = args.slice(1),
last = !rest.length,
result = [];
for (i = 0; i < args[0].length; i++) {
copy = curr.slice();
copy.push(args[0][i]);
if (last) {
result.push(copy);
} else {
result = result.concat(addTo(copy, rest));
}
}
return result;
}
return addTo([], Array.prototype.slice.call(arguments));
}
>> console.log(cartProd([1,2], [10,20], [100,200,300]));
>> [
[1, 10, 100], [1, 10, 200], [1, 10, 300], [1, 20, 100],
[1, 20, 200], [1, 20, 300], [2, 10, 100], [2, 10, 200],
[2, 10, 300], [2, 20, 100], [2, 20, 200], [2, 20, 300]
]
Full reference implementation that's relatively efficient… 😁
On efficiency: You could gain some by taking the if out of the loop and having 2 separate loops since it is technically constant and you'd be helping with branch prediction and all that mess, but that point is kind of moot in JavaScript.
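For what it's worth, here is a rough sketch of that two-loop variant of the addTo helper (my reading of the suggestion, not code from the original answer; addToSplit is an illustrative name):
function addToSplit(curr, args) {
  var i, copy, result = [];
  var rest = args.slice(1);
  if (!rest.length) {
    // Last argument array: each extended copy is a finished tuple
    for (i = 0; i < args[0].length; i++) {
      copy = curr.slice();
      copy.push(args[0][i]);
      result.push(copy);
    }
  } else {
    // More argument arrays remain: recurse with the extended prefix
    for (i = 0; i < args[0].length; i++) {
      copy = curr.slice();
      copy.push(args[0][i]);
      result = result.concat(addToSplit(copy, rest));
    }
  }
  return result;
}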
Here's a non-fancy, straightforward recursive solution:
function cartesianProduct(a) { // a = array of array
var i, j, l, m, a1, o = [];
if (!a || a.length == 0) return a;
a1 = a.splice(0, 1)[0]; // the first array of a
a = cartesianProduct(a);
for (i = 0, l = a1.length; i < l; i++) {
if (a && a.length)
for (j = 0, m = a.length; j < m; j++)
o.push([a1[i]].concat(a[j]));
else
o.push([a1[i]]);
}
return o;
}
console.log(cartesianProduct([[1, 2], [10, 20], [100, 200, 300]]));
// [
// [1,10,100],[1,10,200],[1,10,300],
// [1,20,100],[1,20,200],[1,20,300],
// [2,10,100],[2,10,200],[2,10,300],
// [2,20,100],[2,20,200],[2,20,300]
// ]
Here is a one-liner using the native ES2019 flatMap. No libraries needed, just a modern browser (or transpiler):
data.reduce((a, b) => a.flatMap(x => b.map(y => [...x, y])), [[]]);
It's essentially a modern version of viebel's answer, without lodash.
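A quick usage sketch, with data standing for the array of input arrays:
const data = [[1, 2], [10, 20], [100, 200, 300]];
const product = data.reduce((a, b) => a.flatMap(x => b.map(y => [...x, y])), [[]]);
console.log(JSON.stringify(product));
// [[1,10,100],[1,10,200],[1,10,300],[1,20,100], ... ,[2,20,200],[2,20,300]]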
Here is a recursive way that uses an ECMAScript 2015 generator function so you don't have to create all of the tuples at once:
function* cartesian() {
let arrays = arguments;
function* doCartesian(i, prod) {
if (i == arrays.length) {
yield prod;
} else {
for (let j = 0; j < arrays[i].length; j++) {
yield* doCartesian(i + 1, prod.concat([arrays[i][j]]));
}
}
}
yield* doCartesian(0, []);
}
console.log(JSON.stringify(Array.from(cartesian([1,2],[10,20],[100,200,300]))));
console.log(JSON.stringify(Array.from(cartesian([[1],[2]],[10,20],[100,200,300]))));
This is a pure ES6 solution using arrow functions
function cartesianProduct(arr) {
return arr.reduce((a, b) =>
a.map(x => b.map(y => x.concat(y)))
.reduce((a, b) => a.concat(b), []), [[]]);
}
var arr = [[1, 2], [10, 20], [100, 200, 300]];
console.log(JSON.stringify(cartesianProduct(arr)));
functional programming
This question is tagged functional-programming so let's take a look at the List monad:
One application for this monadic list is representing nondeterministic computation. List can hold results for all execution paths in an algorithm...
Well that sounds like a perfect fit for cartesian. JavaScript gives us Array and the monadic binding function is Array.prototype.flatMap, so let's put them to use -
const cartesian = (...all) => {
const loop = (t, a, ...more) =>
a === undefined
? [ t ]
: a.flatMap(x => loop([ ...t, x ], ...more))
return loop([], ...all)
}
console.log(cartesian([1,2], [10,20], [100,200,300]))
[1,10,100]
[1,10,200]
[1,10,300]
[1,20,100]
[1,20,200]
[1,20,300]
[2,10,100]
[2,10,200]
[2,10,300]
[2,20,100]
[2,20,200]
[2,20,300]
more recursion
Other recursive implementations include -
const cartesian = (a, ...more) =>
a == null
? [[]]
: cartesian(...more).flatMap(c => a.map(v => [v,...c]))
for (const p of cartesian([1,2], [10,20], [100,200,300]))
console.log(JSON.stringify(p))
[1,10,100]
[2,10,100]
[1,20,100]
[2,20,100]
[1,10,200]
[2,10,200]
[1,20,200]
[2,20,200]
[1,10,300]
[2,10,300]
[1,20,300]
[2,20,300]
Note the different order above. You can get lexicographic order by inverting the two loops. Be careful to avoid duplicating work by calling cartesian inside the loop, as Nick's answer does -
const bind = (x, f) => f(x)
const cartesian = (a, ...more) =>
a == null
? [[]]
: bind(cartesian(...more), r => a.flatMap(v => r.map(c => [v,...c])))
for (const p of cartesian([1,2], [10,20], [100,200,300]))
console.log(JSON.stringify(p))
[1,10,100]
[1,10,200]
[1,10,300]
[1,20,100]
[1,20,200]
[1,20,300]
[2,10,100]
[2,10,200]
[2,10,300]
[2,20,100]
[2,20,200]
[2,20,300]
generators
Another option is to use generators. A generator is a good fit for combinatorics because the solution space can become very large. Generators offer lazy evaluation so they can be paused/resumed/canceled at any time -
function* cartesian(a, ...more) {
if (a == null) return yield []
for (const v of a)
for (const c of cartesian(...more)) // ⚠️
yield [v, ...c]
}
for (const p of cartesian([1,2], [10,20], [100,200,300]))
console.log(JSON.stringify(p))
[1,10,100]
[1,10,200]
[1,10,300]
[1,20,100]
[1,20,200]
[1,20,300]
[2,10,100]
[2,10,200]
[2,10,300]
[2,20,100]
[2,20,200]
[2,20,300]
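To illustrate the pause/cancel point, here is a small consumption sketch (my addition) using the generator above; the remaining tuples are simply never computed:
// Take only the first four tuples, then abandon the generator
const firstFour = [];
for (const p of cartesian([1, 2], [10, 20], [100, 200, 300])) {
  firstFour.push(p);
  if (firstFour.length === 4) break;
}
console.log(JSON.stringify(firstFour));
// [[1,10,100],[1,10,200],[1,10,300],[1,20,100]]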
Maybe you saw that we called cartesian in a loop in the generator. If you suspect that can be optimized, it can! Here we use a generic tee function that forks any iterator n times -
function* cartesian(a, ...more) {
if (a == null) return yield []
for (const t of tee(cartesian(...more), a.length)) // ✅
for (const v of a)
for (const c of t) // ✅
yield [v, ...c]
}
Where tee is implemented as -
function tee(g, n = 2) {
const memo = []
function* iter(i) {
while (true) {
if (i >= memo.length) {
const w = g.next()
if (w.done) return
memo.push(w.value)
}
else yield memo[i++]
}
}
return Array.from(Array(n), _ => iter(0))
}
Even in small tests, the cartesian generator implemented with tee performs twice as fast.
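A rough way to check that claim yourself (a sketch only; it assumes you rename the plain generator to cartesianPlain and the tee-based one to cartesianTee so that both can coexist):
const input = [[1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3]];
console.time('plain');
for (const _ of cartesianPlain(...input)) {} // drain the generator
console.timeEnd('plain');
console.time('tee');
for (const _ of cartesianTee(...input)) {} // drain the generator
console.timeEnd('tee');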
Using a typical backtracking with ES6 generators,
function cartesianProduct(...arrays) {
let current = new Array(arrays.length);
return (function* backtracking(index) {
if(index == arrays.length) yield current.slice();
else for(let num of arrays[index]) {
current[index] = num;
yield* backtracking(index+1);
}
})(0);
}
for (let item of cartesianProduct([1,2],[10,20],[100,200,300])) {
console.log('[' + item.join(', ') + ']');
}
Below is a similar version compatible with older browsers.
function cartesianProduct(arrays) {
var result = [],
current = new Array(arrays.length);
(function backtracking(index) {
if(index == arrays.length) return result.push(current.slice());
for(var i=0; i<arrays[index].length; ++i) {
current[index] = arrays[index][i];
backtracking(index+1);
}
})(0);
return result;
}
cartesianProduct([[1,2],[10,20],[100,200,300]]).forEach(function(item) {
console.log('[' + item.join(', ') + ']');
});
A single-statement approach, shown with indentation for better readability.
result = data.reduce(
(a, b) => a.reduce(
(r, v) => r.concat(b.map(w => [].concat(v, w))),
[]
)
);
It takes a single array containing the arrays of wanted Cartesian items.
var data = [[1, 2], [10, 20], [100, 200, 300]],
result = data.reduce((a, b) => a.reduce((r, v) => r.concat(b.map(w => [].concat(v, w))), []));
console.log(result.map(a => a.join(' ')));
A CoffeeScript version with lodash:
_ = require("lodash")
cartesianProduct = ->
return _.reduceRight(arguments, (a,b) ->
_.flatten(_.map(a,(x) -> _.map b, (y) -> x.concat(y)), true)
, [ [] ])
For those who need TypeScript (reimplementing @Danny's answer):
/**
* Calculates "Cartesian Product" sets.
* @example
* cartesianProduct([[1,2], [4,8], [16,32]])
* Returns:
* [
* [1, 4, 16],
* [1, 4, 32],
* [1, 8, 16],
* [1, 8, 32],
* [2, 4, 16],
* [2, 4, 32],
* [2, 8, 16],
* [2, 8, 32]
* ]
* @see https://stackoverflow.com/a/36234242/1955709
* @see https://en.wikipedia.org/wiki/Cartesian_product
* @param arr {T[][]}
* @returns {T[][]}
*/
function cartesianProduct<T> (arr: T[][]): T[][] {
return arr.reduce((a, b) => {
return a.map(x => {
return b.map(y => {
return x.concat(y)
})
}).reduce((c, d) => c.concat(d), [])
}, [[]] as T[][])
}
In my particular setting, the "old-fashioned" approach seemed to be more efficient than the methods based on more modern features. Below is the code (including a small comparison with other solutions posted in this thread by @rsp and @sebnukem), should it prove useful to someone else as well.
The idea is the following. Let's say we are constructing the outer product of N arrays, a_1, ..., a_N, each of which has m_i components. The outer product of these arrays has M = m_1*m_2*...*m_N elements, and we can identify each of them with an N-dimensional vector whose components are non-negative integers, with the i-th component strictly bounded from above by m_i. For example, the vector (0, 0, ..., 0) would correspond to the particular combination within which one takes the first element from each array, while (m_1-1, m_2-1, ..., m_N-1) is identified with the combination where one takes the last element from each array. Thus, in order to construct all M combinations, the function below consecutively constructs all such vectors and for each of them identifies the corresponding combination of the elements of the input arrays.
function cartesianProduct(){
const N = arguments.length;
var arr_lengths = Array(N);
var digits = Array(N);
var num_tot = 1;
for(var i = 0; i < N; ++i){
const len = arguments[i].length;
if(!len){
num_tot = 0;
break;
}
digits[i] = 0;
num_tot *= (arr_lengths[i] = len);
}
var ret = Array(num_tot);
for(var num = 0; num < num_tot; ++num){
var item = Array(N);
for(var j = 0; j < N; ++j){ item[j] = arguments[j][digits[j]]; }
ret[num] = item;
for(var idx = 0; idx < N; ++idx){
if(digits[idx] == arr_lengths[idx]-1){
digits[idx] = 0;
}else{
digits[idx] += 1;
break;
}
}
}
return ret;
}
//------------------------------------------------------------------------------
let _f = (a, b) => [].concat(...a.map(a => b.map(b => [].concat(a, b))));
let cartesianProduct_rsp = (a, b, ...c) => b ? cartesianProduct_rsp(_f(a, b), ...c) : a;
//------------------------------------------------------------------------------
function cartesianProduct_sebnukem(a) {
var i, j, l, m, a1, o = [];
if (!a || a.length == 0) return a;
a1 = a.splice(0, 1)[0];
a = cartesianProduct_sebnukem(a);
for (i = 0, l = a1.length; i < l; i++) {
if (a && a.length) for (j = 0, m = a.length; j < m; j++)
o.push([a1[i]].concat(a[j]));
else
o.push([a1[i]]);
}
return o;
}
//------------------------------------------------------------------------------
const L = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
const args = [L, L, L, L, L, L];
let fns = {
'cartesianProduct': function(args){ return cartesianProduct(...args); },
'cartesianProduct_rsp': function(args){ return cartesianProduct_rsp(...args); },
'cartesianProduct_sebnukem': function(args){ return cartesianProduct_sebnukem(args); }
};
Object.keys(fns).forEach(fname => {
console.time(fname);
const ret = fns[fname](args);
console.timeEnd(fname);
});
With Node v6.12.2, I get the following timings:
cartesianProduct: 427.378ms
cartesianProduct_rsp: 1710.829ms
cartesianProduct_sebnukem: 593.351ms
You could reduce the 2D array. Use flatMap on the accumulator array to get acc.length x curr.length combinations in each loop. [].concat(c, n) is used because c is a number in the first iteration and an array afterwards.
const data = [ [1, 2], [10, 20], [100, 200, 300] ];
const output = data.reduce((acc, curr) =>
acc.flatMap(c => curr.map(n => [].concat(c, n)))
)
console.log(JSON.stringify(output))
(This is based on Nina Scholz's answer)
Here's a recursive one-liner that works using only flatMap and map:
const inp = [
[1, 2],
[10, 20],
[100, 200, 300]
];
const cartesian = (first, ...rest) =>
rest.length ? first.flatMap(v => cartesian(...rest).map(c => [v].concat(c)))
: first;
console.log(cartesian(...inp));
A few of the answers under this topic fail when any of the input arrays contains an array item. You'd better check that; a small comparison sketch follows the demo below.
Anyway, there is no need for underscore or lodash whatsoever. I believe this one should do it with pure JS ES6, as functional as it gets.
This piece of code uses a reduce and a nested map, simply to get the Cartesian product of two arrays; however, the second array comes from a recursive call to the same function with one less array; hence a[0].cartesian(...a.slice(1)).
Array.prototype.cartesian = function(...a){
return a.length ? this.reduce((p,c) => (p.push(...a[0].cartesian(...a.slice(1)).map(e => a.length > 1 ? [c,...e] : [c,e])),p),[])
: this;
};
var arr = ['a', 'b', 'c'],
brr = [1,2,3],
crr = [[9],[8],[7]];
console.log(JSON.stringify(arr.cartesian(brr,crr)));
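To see the difference concretely, here is an illustrative sketch (my addition, not part of the original answer): a pairing step based on .flat(), as used by several of the shorter answers, merges array items into the tuple, while the cartesian method above keeps them nested:
const pairWithFlat = (a, b) => a.flatMap(x => b.map(y => [x, y].flat()));
console.log(JSON.stringify(pairWithFlat([[9], [8]], [1, 2])));
// [[9,1],[9,2],[8,1],[8,2]]  -- the inner [9] and [8] wrappers are flattened away
console.log(JSON.stringify([[9], [8]].cartesian([1, 2])));
// [[[9],1],[[9],2],[[8],1],[[8],2]]  -- nesting preserved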
No libraries needed! :)
Needs arrow functions though and probably not that efficient. :/
const flatten = (xs) =>
xs.flat(Infinity)
const binaryCartesianProduct = (xs, ys) =>
xs.map((xi) => ys.map((yi) => [xi, yi])).flat()
const cartesianProduct = (...xss) =>
xss.reduce(binaryCartesianProduct, [[]]).map(flatten)
console.log(cartesianProduct([1,2,3], [1,2,3], [1,2,3]))
Modern JavaScript in just a few lines. No external libraries or dependencies like Lodash.
function cartesian(...arrays) {
return arrays.reduce((a, b) => a.flatMap(x => b.map(y => x.concat([y]))), [ [] ]);
}
console.log(
cartesian([1, 2], [10, 20], [100, 200, 300])
.map(arr => JSON.stringify(arr))
.join('\n')
);
A more readable implementation
function productOfTwo(one, two) {
return one.flatMap(x => two.map(y => [].concat(x, y)));
}
function product(head = [], ...tail) {
if (tail.length === 0) return head;
return productOfTwo(head, product(...tail));
}
const test = product(
[1, 2, 3],
['a', 'b']
);
console.log(JSON.stringify(test));
Another, even more simplified, 2021-style answer using only reduce, map, and concat methods:
const cartesian = (...arr) => arr.reduce((a,c) => a.map(e => c.map(f => e.concat([f]))).reduce((a,c) => a.concat(c), []), [[]]);
console.log(cartesian([1, 2], [10, 20], [100, 200, 300]));
For those happy with a Ramda solution:
import { xprod, flatten } from 'ramda';
const cartessian = (...xs) => xs.reduce(xprod).map(flatten)
Or the same without dependencies and two lego blocks for free (xprod and flatten):
const flatten = xs => xs.flat();
const xprod = (xs, ys) => xs.flatMap(x => ys.map(y => [x, y]));
const cartessian = (...xs) => xs.reduce(xprod).map(flatten);
Just for variety, a really simple implementation using the array's reduce:
const array1 = ["day", "month", "year", "time"];
const array2 = ["from", "to"];
const process = (one, two) => [one, two].join(" ");
const product = array1.reduce((result, one) => result.concat(array2.map(two => process(one, two))), []);
A simple "mind and visually friendly" solution.
// t = [i, length]
const moveThreadForwardAt = (t, tCursor) => {
if (tCursor < 0)
return true; // reached end of first array
const newIndex = (t[tCursor][0] + 1) % t[tCursor][1];
t[tCursor][0] = newIndex;
if (newIndex == 0)
return moveThreadForwardAt(t, tCursor - 1);
return false;
}
const cartesianMult = (...args) => {
let result = [];
const t = Array.from(Array(args.length)).map((x, i) => [0, args[i].length]);
let reachedEndOfFirstArray = false;
while (false == reachedEndOfFirstArray) {
result.push(t.map((v, i) => args[i][v[0]]));
reachedEndOfFirstArray = moveThreadForwardAt(t, args.length - 1);
}
return result;
}
// cartesianMult(
// ['a1', 'b1', 'c1'],
// ['a2', 'b2'],
// ['a3', 'b3', 'c3'],
// ['a4', 'b4']
// );
console.log(cartesianMult(
['a1'],
['a2', 'b2'],
['a3', 'b3']
));
Yet another implementation. Not the shortest or fanciest, but fast:
function cartesianProduct() {
var arr = [].slice.call(arguments),
intLength = arr.length,
arrHelper = [1],
arrToReturn = [];
for (var i = arr.length - 1; i >= 0; i--) {
arrHelper.unshift(arrHelper[0] * arr[i].length);
}
for (var i = 0, l = arrHelper[0]; i < l; i++) {
arrToReturn.push([]);
for (var j = 0; j < intLength; j++) {
arrToReturn[i].push(arr[j][(i / arrHelper[j + 1] | 0) % arr[j].length]);
}
}
return arrToReturn;
}
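A quick usage sketch for the function above:
console.log(JSON.stringify(cartesianProduct([1, 2], [10, 20], [100, 200, 300])));
// [[1,10,100],[1,10,200],[1,10,300],[1,20,100],[1,20,200],[1,20,300],
//  [2,10,100],[2,10,200],[2,10,300],[2,20,100],[2,20,200],[2,20,300]]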
A simple, modified version of @viebel's code in plain JavaScript:
function cartesianProduct(...arrays) {
return arrays.reduce((a, b) => {
return [].concat(...a.map(x => {
const next = Array.isArray(x) ? x : [x];
return [].concat(b.map(y => next.concat(...[y])));
}));
});
}
const product = cartesianProduct([1, 2], [10, 20], [100, 200, 300]);
console.log(product);
/*
[ [ 1, 10, 100 ],
[ 1, 10, 200 ],
[ 1, 10, 300 ],
[ 1, 20, 100 ],
[ 1, 20, 200 ],
[ 1, 20, 300 ],
[ 2, 10, 100 ],
[ 2, 10, 200 ],
[ 2, 10, 300 ],
[ 2, 20, 100 ],
[ 2, 20, 200 ],
[ 2, 20, 300 ] ];
*/
f=(a,b,c)=>a.flatMap(ai=>b.flatMap(bi=>c.map(ci=>[ai,bi,ci])))
This is for 3 arrays.
Some answers gave a way for any number of arrays.
This can easily contract or expand to fewer or more arrays, as sketched below.
I needed combinations of one set with repetitions, so I could have used:
f(a,a,a)
but used:
f=(a,b,c)=>a.flatMap(a1=>a.flatMap(a2=>a.map(a3=>[a1,a2,a3])))
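A small sketch (my addition) of the contracted and expanded variants, following the same pattern:
const f2 = (a, b) => a.flatMap(ai => b.map(bi => [ai, bi]));
const f4 = (a, b, c, d) =>
  a.flatMap(ai => b.flatMap(bi => c.flatMap(ci => d.map(di => [ai, bi, ci, di]))));
console.log(JSON.stringify(f2([1, 2], [10, 20])));
// [[1,10],[1,20],[2,10],[2,20]]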
A non-recursive approach that adds the ability to filter and modify the products before actually adding them to the result set; a usage sketch of the two hooks follows the demo below.
Note the use of .map rather than .forEach. In some browsers, .map runs faster.
function crossproduct(arrays, rowtest, rowaction) {
// Calculate the number of elements needed in the result
var result_elems = 1,
row_size = arrays.length;
arrays.map(function(array) {
result_elems *= array.length;
});
var temp = new Array(result_elems),
result = [];
// Go through each array and add the appropriate
// element to each element of the temp
var scale_factor = result_elems;
arrays.map(function(array) {
var set_elems = array.length;
scale_factor /= set_elems;
for (var i = result_elems - 1; i >= 0; i--) {
temp[i] = (temp[i] ? temp[i] : []);
var pos = i / scale_factor % set_elems;
// deal with floating point results for indexes,
// this took a little experimenting
if (pos < 1 || pos % 1 <= .5) {
pos = Math.floor(pos);
} else {
pos = Math.min(array.length - 1, Math.ceil(pos));
}
temp[i].push(array[pos]);
if (temp[i].length === row_size) {
var pass = (rowtest ? rowtest(temp[i]) : true);
if (pass) {
if (rowaction) {
result.push(rowaction(temp[i]));
} else {
result.push(temp[i]);
}
}
}
}
});
return result;
}
console.log(
crossproduct([[1, 2], [10, 20], [100, 200, 300]],null,null)
)
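The extra parameters are where this version pays off. A usage sketch based on the signature above, keeping only rows whose first element is 1 and joining each surviving row into a string:
console.log(
  crossproduct(
    [[1, 2], [10, 20], [100, 200, 300]],
    row => row[0] === 1,   // rowtest: keep only rows starting with 1
    row => row.join('-')   // rowaction: turn each kept row into a string
  )
);
// => ["1-20-300", "1-20-200", "1-20-100", "1-10-300", "1-10-200", "1-10-100"]
// (the loop above fills rows from the last index down, hence the order)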
Similar in spirit to the others, but highly readable, in my opinion.
function productOfTwo(a, b) {
return a.flatMap(c => b.map(d => [c, d].flat()));
}
[['a', 'b', 'c'], ['+', '-'], [1, 2, 3]].reduce(productOfTwo);
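Logging the reduced result shows the expected 3 × 2 × 3 = 18 triples:
console.log([['a', 'b', 'c'], ['+', '-'], [1, 2, 3]].reduce(productOfTwo));
// => [['a', '+', 1], ['a', '+', 2], ['a', '+', 3], ['a', '-', 1], ...] (18 triples)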

Here is a functional solution to the problem (without any mutable variables!) using reduce and flatten, provided by underscore.js:
function cartesianProductOf() {
return _.reduce(arguments, function(a, b) {
return _.flatten(_.map(a, function(x) {
return _.map(b, function(y) {
return x.concat([y]);
});
}), true);
}, [ [] ]);
}
// [[1,3,"a"],[1,3,"b"],[1,4,"a"],[1,4,"b"],[2,3,"a"],[2,3,"b"],[2,4,"a"],[2,4,"b"]]
console.log(cartesianProductOf([1, 2], [3, 4], ['a']));
<script src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore.js"></script>
Remark: This solution was inspired by http://cwestblog.com/2011/05/02/cartesian-product-of-multiple-arrays/
Here's a modified version of @viebel's code in plain JavaScript, without using any library:
function cartesianProduct(arr) {
return arr.reduce(function(a,b){
return a.map(function(x){
return b.map(function(y){
return x.concat([y]);
})
}).reduce(function(a,b){ return a.concat(b) },[])
}, [[]])
}
var a = cartesianProduct([[1, 2,3], [4, 5,6], [7, 8], [9,10]]);
console.log(JSON.stringify(a));
The following efficient generator function returns the cartesian product of all given iterables:
// Generate cartesian product of given iterables:
function* cartesian(head, ...tail) {
const remainder = tail.length > 0 ? cartesian(...tail) : [[]];
for (let r of remainder) for (let h of head) yield [h, ...r];
}
// Example:
console.log(...cartesian([1, 2], [10, 20], [100, 200, 300]));
It accepts arrays, strings, sets and all other objects implementing the iterable protocol.
Following the specification of the n-ary cartesian product it yields
[] if one or more given iterables are empty, e.g. [] or ''
[[a]] if a single iterable containing a single value a is given.
All other cases are handled as expected as demonstrated by the following test cases:
// Generate cartesian product of given iterables:
function* cartesian(head, ...tail) {
const remainder = tail.length > 0 ? cartesian(...tail) : [[]];
for (let r of remainder) for (let h of head) yield [h, ...r];
}
// Test cases:
console.log([...cartesian([])]); // []
console.log([...cartesian([1])]); // [[1]]
console.log([...cartesian([1, 2])]); // [[1], [2]]
console.log([...cartesian([1], [])]); // []
console.log([...cartesian([1, 2], [])]); // []
console.log([...cartesian([1], [2])]); // [[1, 2]]
console.log([...cartesian([1], [2], [3])]); // [[1, 2, 3]]
console.log([...cartesian([1, 2], [3, 4])]); // [[1, 3], [2, 3], [1, 4], [2, 4]]
console.log([...cartesian('')]); // []
console.log([...cartesian('ab', 'c')]); // [['a','c'], ['b', 'c']]
console.log([...cartesian([1, 2], 'ab')]); // [[1, 'a'], [2, 'a'], [1, 'b'], [2, 'b']]
console.log([...cartesian(new Set())]); // []
console.log([...cartesian(new Set([1]))]); // [[1]]
console.log([...cartesian(new Set([1, 1]))]); // [[1]]
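Since it is a generator, the product can also be consumed lazily and abandoned early without materializing every tuple; a minimal sketch using the cartesian generator above:
const firstThree = [];
for (const tuple of cartesian([1, 2], [10, 20], [100, 200, 300])) {
  firstThree.push(tuple);
  if (firstThree.length === 3) break; // the remaining tuples are never computed
}
console.log(firstThree); // [[1, 10, 100], [2, 10, 100], [1, 20, 100]]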
It seems the community thinks this is trivial and/or easy to find a reference implementation for. However, upon brief inspection I couldn't find one … either that, or maybe it's just that I like re-inventing the wheel or solving classroom-like programming problems. Either way, it's your lucky day:
function cartProd() { // the input arrays are read from the arguments object below
function addTo(curr, args) {
var i, copy,
rest = args.slice(1),
last = !rest.length,
result = [];
for (i = 0; i < args[0].length; i++) {
copy = curr.slice();
copy.push(args[0][i]);
if (last) {
result.push(copy);
} else {
result = result.concat(addTo(copy, rest));
}
}
return result;
}
return addTo([], Array.prototype.slice.call(arguments));
}
>> console.log(cartProd([1,2], [10,20], [100,200,300]));
>> [
[1, 10, 100], [1, 10, 200], [1, 10, 300], [1, 20, 100],
[1, 20, 200], [1, 20, 300], [2, 10, 100], [2, 10, 200],
[2, 10, 300], [2, 20, 100], [2, 20, 200], [2, 20, 300]
]
Full reference implementation that's relatively efficient… 😁
On efficiency: you could gain a little by taking the if out of the loop and having two separate loops, since the condition is constant for a given call and you'd be helping branch prediction and all that mess, but that point is somewhat moot in JavaScript.
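Still, a sketch of that two-loop refactor of addTo (equivalent in behavior, purely illustrative):
function addTo(curr, args) {
  var i, copy,
      rest = args.slice(1),
      result = [];
  if (!rest.length) {
    // last array: append each value to a copy of the current prefix
    for (i = 0; i < args[0].length; i++) {
      copy = curr.slice();
      copy.push(args[0][i]);
      result.push(copy);
    }
  } else {
    // more arrays remain: recurse with each extended prefix
    for (i = 0; i < args[0].length; i++) {
      copy = curr.slice();
      copy.push(args[0][i]);
      result = result.concat(addTo(copy, rest));
    }
  }
  return result;
}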
Here's a non-fancy, straightforward recursive solution:
function cartesianProduct(a) { // a = array of array
var i, j, l, m, a1, o = [];
if (!a || a.length == 0) return a;
a1 = a.splice(0, 1)[0]; // the first array of a
a = cartesianProduct(a);
for (i = 0, l = a1.length; i < l; i++) {
if (a && a.length)
for (j = 0, m = a.length; j < m; j++)
o.push([a1[i]].concat(a[j]));
else
o.push([a1[i]]);
}
return o;
}
console.log(cartesianProduct([[1, 2], [10, 20], [100, 200, 300]]));
// [
// [1,10,100],[1,10,200],[1,10,300],
// [1,20,100],[1,20,200],[1,20,300],
// [2,10,100],[2,10,200],[2,10,300],
// [2,20,100],[2,20,200],[2,20,300]
// ]
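One caveat: a.splice(0, 1) progressively empties the outer array that is passed in, so pass a shallow copy if the input must stay intact (a defensive-call sketch):
const arrays = [[1, 2], [10, 20], [100, 200, 300]];
const result = cartesianProduct([...arrays]); // copy only the outer array; the inner arrays are never mutated
console.log(arrays.length); // still 3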
Here is a one-liner using the native ES2019 flatMap. No libraries needed, just a modern browser (or transpiler):
const data = [[1, 2], [10, 20], [100, 200, 300]];
data.reduce((a, b) => a.flatMap(x => b.map(y => [...x, y])), [[]]);
It's essentially a modern version of @viebel's answer, without Lodash.
Here is a recursive way that uses an ECMAScript 2015 generator function so you don't have to create all of the tuples at once:
function* cartesian() {
let arrays = arguments;
function* doCartesian(i, prod) {
if (i == arrays.length) {
yield prod;
} else {
for (let j = 0; j < arrays[i].length; j++) {
yield* doCartesian(i + 1, prod.concat([arrays[i][j]]));
}
}
}
yield* doCartesian(0, []);
}
console.log(JSON.stringify(Array.from(cartesian([1,2],[10,20],[100,200,300]))));
console.log(JSON.stringify(Array.from(cartesian([[1],[2]],[10,20],[100,200,300]))));
This is a pure ES6 solution using arrow functions
function cartesianProduct(arr) {
return arr.reduce((a, b) =>
a.map(x => b.map(y => x.concat(y)))
.reduce((a, b) => a.concat(b), []), [[]]);
}
var arr = [[1, 2], [10, 20], [100, 200, 300]];
console.log(JSON.stringify(cartesianProduct(arr)));
functional programming
This question is tagged functional-programming so let's take a look at the List monad:
One application for this monadic list is representing nondeterministic computation. List can hold results for all execution paths in an algorithm...
Well that sounds like a perfect fit for cartesian. JavaScript gives us Array and the monadic binding function is Array.prototype.flatMap, so let's put them to use -
const cartesian = (...all) => {
const loop = (t, a, ...more) =>
a === undefined
? [ t ]
: a.flatMap(x => loop([ ...t, x ], ...more))
return loop([], ...all)
}
console.log(cartesian([1,2], [10,20], [100,200,300]))
[1,10,100]
[1,10,200]
[1,10,300]
[1,20,100]
[1,20,200]
[1,20,300]
[2,10,100]
[2,10,200]
[2,10,300]
[2,20,100]
[2,20,200]
[2,20,300]
more recursion
Other recursive implementations include -
const cartesian = (a, ...more) =>
a == null
? [[]]
: cartesian(...more).flatMap(c => a.map(v => [v,...c]))
for (const p of cartesian([1,2], [10,20], [100,200,300]))
console.log(JSON.stringify(p))
[1,10,100]
[2,10,100]
[1,20,100]
[2,20,100]
[1,10,200]
[2,10,200]
[1,20,200]
[2,20,200]
[1,10,300]
[2,10,300]
[1,20,300]
[2,20,300]
Note the different order above. You can get lexicographic order by inverting the two loops. Be careful to avoid duplicating work: don't call cartesian inside the loop, as Nick's answer does -
const bind = (x, f) => f(x)
const cartesian = (a, ...more) =>
a == null
? [[]]
: bind(cartesian(...more), r => a.flatMap(v => r.map(c => [v,...c])))
for (const p of cartesian([1,2], [10,20], [100,200,300]))
console.log(JSON.stringify(p))
[1,10,100]
[1,10,200]
[1,10,300]
[1,20,100]
[1,20,200]
[1,20,300]
[2,10,100]
[2,10,200]
[2,10,300]
[2,20,100]
[2,20,200]
[2,20,300]
generators
Another option is to use generators. A generator is a good fit for combinatorics because the solution space can become very large. Generators offer lazy evaluation so they can be paused/resumed/canceled at any time -
function* cartesian(a, ...more) {
if (a == null) return yield []
for (const v of a)
for (const c of cartesian(...more)) // ⚠️
yield [v, ...c]
}
for (const p of cartesian([1,2], [10,20], [100,200,300]))
console.log(JSON.stringify(p))
[1,10,100]
[1,10,200]
[1,10,300]
[1,20,100]
[1,20,200]
[1,20,300]
[2,10,100]
[2,10,200]
[2,10,300]
[2,20,100]
[2,20,200]
[2,20,300]
Maybe you saw that we called cartesian in a loop in the generator. If you suspect that can be optimized, it can! Here we use a generic tee function that forks any iterator n times -
function* cartesian(a, ...more) {
  if (a == null) return yield []
  const forks = tee(cartesian(...more), a.length) // ✅ one fork of the sub-product per element of a
  for (const [i, v] of a.entries())
    for (const c of forks[i]) // ✅ each fork is consumed exactly once
      yield [v, ...c]
}
Where tee is implemented as -
function tee(g, n = 2) {
const memo = []
function* iter(i) {
while (true) {
if (i >= memo.length) {
const w = g.next()
if (w.done) return
memo.push(w.value)
}
else yield memo[i++]
}
}
return Array.from(Array(n), _ => iter(0))
}
Even in small tests, the cartesian generator implemented with tee performs about twice as fast.
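A quick check that the tee-based generator yields the same tuples in the same order as the plain generator above (assuming the cartesian and tee definitions above are in scope):
console.log(JSON.stringify([...cartesian([1, 2], [10, 20])]));
// => [[1,10],[1,20],[2,10],[2,20]]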
Using typical backtracking with ES6 generators:
function cartesianProduct(...arrays) {
let current = new Array(arrays.length);
return (function* backtracking(index) {
if(index == arrays.length) yield current.slice();
else for(let num of arrays[index]) {
current[index] = num;
yield* backtracking(index+1);
}
})(0);
}
for (let item of cartesianProduct([1,2],[10,20],[100,200,300])) {
console.log('[' + item.join(', ') + ']');
}
Below is a similar version compatible with older browsers.
function cartesianProduct(arrays) {
var result = [],
current = new Array(arrays.length);
(function backtracking(index) {
if(index == arrays.length) return result.push(current.slice());
for(var i=0; i<arrays[index].length; ++i) {
current[index] = arrays[index][i];
backtracking(index+1);
}
})(0);
return result;
}
cartesianProduct([[1,2],[10,20],[100,200,300]]).forEach(function(item) {
console.log('[' + item.join(', ') + ']');
});
A single-expression approach, formatted with indentation for easier reading.
result = data.reduce(
(a, b) => a.reduce(
(r, v) => r.concat(b.map(w => [].concat(v, w))),
[]
)
);
It takes a single array containing the arrays whose Cartesian product is wanted.
var data = [[1, 2], [10, 20], [100, 200, 300]],
result = data.reduce((a, b) => a.reduce((r, v) => r.concat(b.map(w => [].concat(v, w))), []));
console.log(result.map(a => a.join(' ')));
A CoffeeScript version with Lodash:
_ = require("lodash")
cartesianProduct = ->
return _.reduceRight(arguments, (a,b) ->
_.flatten(_.map(a,(x) -> _.map b, (y) -> x.concat(y)), true)
, [ [] ])
For those who need TypeScript (a reimplementation of @Danny's answer):
/**
* Calculates "Cartesian Product" sets.
* @example
* cartesianProduct([[1,2], [4,8], [16,32]])
* Returns:
* [
* [1, 4, 16],
* [1, 4, 32],
* [1, 8, 16],
* [1, 8, 32],
* [2, 4, 16],
* [2, 4, 32],
* [2, 8, 16],
* [2, 8, 32]
* ]
* @see https://stackoverflow.com/a/36234242/1955709
* @see https://en.wikipedia.org/wiki/Cartesian_product
* @param arr {T[][]}
* @returns {T[][]}
*/
function cartesianProduct<T> (arr: T[][]): T[][] {
return arr.reduce((a, b) => {
return a.map(x => {
return b.map(y => {
return x.concat(y)
})
}).reduce((c, d) => c.concat(d), [])
}, [[]] as T[][])
}
In my particular setting, the "old-fashioned" approach seemed to be more efficient than the methods based on more modern features. Below is the code (including a small comparison with other solutions posted in this thread by @rsp and @sebnukem), should it prove useful to someone else as well.
The idea is the following. Say we are constructing the outer product of N arrays, a_1, ..., a_N, where a_i has m_i components. The outer product of these arrays has M = m_1*m_2*...*m_N elements, and we can identify each of them with an N-dimensional vector whose components are non-negative integers, with the i-th component strictly bounded from above by m_i. For example, the vector (0, 0, ..., 0) corresponds to the combination in which one takes the first element from each array, while (m_1-1, m_2-1, ..., m_N-1) is identified with the combination in which one takes the last element from each array. Thus, in order to construct all M combinations, the function below consecutively constructs all such vectors and, for each of them, identifies the corresponding combination of elements of the input arrays.
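To make the digit-vector idea concrete, here is how the first few counter values map onto combinations for the usual example (illustrative only; the first digit varies fastest, matching the increment loop in the code below):
// arrays: a_1 = [1, 2], a_2 = [10, 20], a_3 = [100, 200, 300]
// digits (d_1, d_2, d_3)  ->  combination [a_1[d_1], a_2[d_2], a_3[d_3]]
// (0, 0, 0)               ->  [1, 10, 100]
// (1, 0, 0)               ->  [2, 10, 100]
// (0, 1, 0)               ->  [1, 20, 100]
// (1, 1, 0)               ->  [2, 20, 100]
// (0, 0, 1)               ->  [1, 10, 200]
// ... and so on, for 2 * 2 * 3 = 12 combinations in total.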
function cartesianProduct(){
const N = arguments.length;
var arr_lengths = Array(N);
var digits = Array(N);
var num_tot = 1;
for(var i = 0; i < N; ++i){
const len = arguments[i].length;
if(!len){
num_tot = 0;
break;
}
digits[i] = 0;
num_tot *= (arr_lengths[i] = len);
}
var ret = Array(num_tot);
for(var num = 0; num < num_tot; ++num){
var item = Array(N);
for(var j = 0; j < N; ++j){ item[j] = arguments[j][digits[j]]; }
ret[num] = item;
for(var idx = 0; idx < N; ++idx){
if(digits[idx] == arr_lengths[idx]-1){
digits[idx] = 0;
}else{
digits[idx] += 1;
break;
}
}
}
return ret;
}
//------------------------------------------------------------------------------
let _f = (a, b) => [].concat(...a.map(a => b.map(b => [].concat(a, b))));
let cartesianProduct_rsp = (a, b, ...c) => b ? cartesianProduct_rsp(_f(a, b), ...c) : a;
//------------------------------------------------------------------------------
function cartesianProduct_sebnukem(a) {
var i, j, l, m, a1, o = [];
if (!a || a.length == 0) return a;
a1 = a.splice(0, 1)[0];
a = cartesianProduct_sebnukem(a);
for (i = 0, l = a1.length; i < l; i++) {
if (a && a.length) for (j = 0, m = a.length; j < m; j++)
o.push([a1[i]].concat(a[j]));
else
o.push([a1[i]]);
}
return o;
}
//------------------------------------------------------------------------------
const L = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
const args = [L, L, L, L, L, L];
let fns = {
'cartesianProduct': function(args){ return cartesianProduct(...args); },
'cartesianProduct_rsp': function(args){ return cartesianProduct_rsp(...args); },
'cartesianProduct_sebnukem': function(args){ return cartesianProduct_sebnukem(args); }
};
Object.keys(fns).forEach(fname => {
console.time(fname);
const ret = fns[fname](args);
console.timeEnd(fname);
});
With Node v6.12.2, I get the following timings:
cartesianProduct: 427.378ms
cartesianProduct_rsp: 1710.829ms
cartesianProduct_sebnukem: 593.351ms
You could reduce the 2D array. Use flatMap on the accumulator array to get acc.length × curr.length combinations in each iteration. [].concat(c, n) is used because c is a number in the first iteration and an array afterwards.
const data = [ [1, 2], [10, 20], [100, 200, 300] ];
const output = data.reduce((acc, curr) =>
acc.flatMap(c => curr.map(n => [].concat(c, n)))
)
console.log(JSON.stringify(output))
(This is based on Nina Scholz's answer)
Here's a recursive one-liner that works using only flatMap and map:
const inp = [
[1, 2],
[10, 20],
[100, 200, 300]
];
const cartesian = (first, ...rest) =>
rest.length ? first.flatMap(v => cartesian(...rest).map(c => [v].concat(c)))
: first;
console.log(cartesian(...inp));
A few of the answers under this topic fail when any of the input arrays contains an array item; you'd better check for that.
Anyway, there is no need for Underscore or Lodash whatsoever. I believe this one should do it with pure ES6 JavaScript, as functional as it gets.
This piece of code uses a reduce and a nested map, simply to get the Cartesian product of two arrays; however, the second array comes from a recursive call to the same function with one fewer array, hence a[0].cartesian(...a.slice(1)).
Array.prototype.cartesian = function(...a){
return a.length ? this.reduce((p,c) => (p.push(...a[0].cartesian(...a.slice(1)).map(e => a.length > 1 ? [c,...e] : [c,e])),p),[])
: this;
};
var arr = ['a', 'b', 'c'],
brr = [1,2,3],
crr = [[9],[8],[7]];
console.log(JSON.stringify(arr.cartesian(brr,crr)));
No libraries needed! :)
Needs arrow functions though and probably not that efficient. :/
const flatten = (xs) =>
xs.flat(Infinity)
const binaryCartesianProduct = (xs, ys) =>
xs.map((xi) => ys.map((yi) => [xi, yi])).flat()
const cartesianProduct = (...xss) =>
xss.reduce(binaryCartesianProduct, [[]]).map(flatten)
console.log(cartesianProduct([1,2,3], [1,2,3], [1,2,3]))
Modern JavaScript in just a few lines. No external libraries or dependencies like Lodash.
function cartesian(...arrays) {
return arrays.reduce((a, b) => a.flatMap(x => b.map(y => x.concat([y]))), [ [] ]);
}
console.log(
cartesian([1, 2], [10, 20], [100, 200, 300])
.map(arr => JSON.stringify(arr))
.join('\n')
);
A more readable implementation
function productOfTwo(one, two) {
return one.flatMap(x => two.map(y => [].concat(x, y)));
}
function product(head = [], ...tail) {
if (tail.length === 0) return head;
return productOfTwo(head, product(...tail));
}
const test = product(
[1, 2, 3],
['a', 'b']
);
console.log(JSON.stringify(test));