Are these pointers working behind the scenes? JavaScript [duplicate]

In order to duplicate an array in JavaScript: Which of the following is faster to use?
Slice method
var dup_array = original_array.slice();
For loop
var dup_array = [];
for (var i = 0, len = original_array.length; i < len; ++i)
  dup_array[i] = original_array[i];
I know both ways do only a shallow copy: if original_array contains references to objects, objects won't be cloned, but only the references will be copied, and therefore both arrays will have references to the same objects.
But this is not the point of this question.
I'm asking only about speed.

There are at least 6 (!) ways to clone an array:
loop
slice
Array.from()
concat
spread syntax (FASTEST)
map A.map(function(e){return e;});
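The six approaches side by side, as a minimal sketch (each produces a shallow copy of a sample array `A`):

```javascript
const A = [1, 2, 3];

// 1. loop
const byLoop = [];
for (let i = 0; i < A.length; i++) byLoop[i] = A[i];

// 2. slice
const bySlice = A.slice();

// 3. Array.from()
const byFrom = Array.from(A);

// 4. concat
const byConcat = [].concat(A);

// 5. spread syntax
const bySpread = [...A];

// 6. map
const byMap = A.map(e => e);

// Every result is a new array object holding the same elements
console.log([byLoop, bySlice, byFrom, byConcat, bySpread, byMap]
  .every(c => c !== A && c.length === A.length && c.every((v, i) => v === A[i]))); // true
```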
There has been a huge benchmarks thread, providing the following information:
for Blink browsers, slice() is the fastest method, concat() is a bit slower, and a while loop is 2.4x slower.
for other browsers, the while loop is the fastest method, since those browsers don't have internal optimizations for slice and concat.
This remained true as of July 2016.
Below are simple scripts that you can copy-paste into your browser's console and run several times to see the picture. They output milliseconds, lower is better.
while loop
n = 1000*1000;
start = + new Date();
a = Array(n);
b = Array(n);
i = a.length;
while(i--) b[i] = a[i];
console.log(new Date() - start);
slice
n = 1000*1000;
start = + new Date();
a = Array(n);
b = a.slice();
console.log(new Date() - start);
Please note that these methods will clone the Array object itself, array contents however are copied by reference and are not deep cloned.
origAr == clonedArr //returns false
origAr[0] == clonedArr[0] //returns true

Technically slice is the fastest way. However, it is even faster if you add the 0 begin index.
myArray.slice(0);
is faster than
myArray.slice();
https://jsben.ch/F0SZ3

What about the ES6 way?
arr2 = [...arr1];

Easiest way to deep clone Array or Object:
var dup_array = JSON.parse(JSON.stringify(original_array))
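Keep in mind that the JSON round-trip only preserves JSON-serializable values; a quick demonstration of what gets mangled:

```javascript
const src = [1, "a", { n: 2 }, new Date(0), undefined, () => {}];
const dup = JSON.parse(JSON.stringify(src));

console.log(dup[3]); // the Date became an ISO string: "1970-01-01T00:00:00.000Z"
console.log(dup[4]); // null: undefined is not representable in JSON arrays
console.log(dup[5]); // null: functions are not representable either
```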

🏁 Fastest Way to Clone an Array
I made this very plain utility function to test the time it takes to clone an array. It is not 100% reliable, but it can give you a rough idea of how long cloning an existing array takes:
function clone(fn) {
const arr = [...Array(1000000)];
console.time('timer');
fn(arr);
console.timeEnd('timer');
}
And tested the different approaches:
1) 5.79ms -> clone(arr => Object.values(arr));
2) 7.23ms -> clone(arr => [].concat(arr));
3) 9.13ms -> clone(arr => arr.slice());
4) 24.04ms -> clone(arr => { const a = []; for (let val of arr) { a.push(val); } return a; });
5) 30.02ms -> clone(arr => [...arr]);
6) 39.72ms -> clone(arr => JSON.parse(JSON.stringify(arr)));
7) 99.80ms -> clone(arr => arr.map(i => i));
8) 259.29ms -> clone(arr => Object.assign([], arr));
9) Maximum call stack size exceeded -> clone(arr => Array.of(...arr));
UPDATE:
Tests were made back in 2018, so today you will most likely get different results with current browsers.
Out of all of those, the only way to deep clone an array is by using JSON.parse(JSON.stringify(arr)).
That said, do not use the above if your array might include functions, as they will be converted to null. Thanks to @GilEpshtain for this update.

var cloned_array = [].concat(target_array);

I put together a quick demo: http://jsbin.com/agugo3/edit
My results on Internet Explorer 8 are 156, 782, and 750, which would indicate slice is much faster in this case.

a.map(e => e) is another alternative for this job. As of today, .map() is very fast (almost as fast as .slice(0)) in Firefox, but not in Chrome.
On the other hand, if an array is multi-dimensional, then since arrays are objects and objects are reference types, neither slice nor concat will be a cure... So one proper way of cloning such an array is to define Array.prototype.clone() as follows.
Array.prototype.clone = function(){
return this.map(e => Array.isArray(e) ? e.clone() : e);
};
var arr = [ 1, 2, 3, 4, [ 1, 2, [ 1, 2, 3 ], 4 , 5], 6 ],
brr = arr.clone();
brr[4][2][1] = "two";
console.log(JSON.stringify(arr));
console.log(JSON.stringify(brr));

The fastest way to clone an array of objects is the spread operator:
var clonedArray = [...originalArray]
or
var clonedArray = originalArray.slice(0); // with the 0 index it's a little bit faster than a plain slice()
but the objects inside that cloned array will still point at the old memory locations; hence changes to the clonedArray objects will also change the originalArray. So
var clonedArray = originalArray.map(({...ele}) => ele)
will not only create a new array, but the objects will be cloned too.
Disclaimer: if you are working with nested objects, the spread operator performs a SHALLOW CLONE. In that case it's better to use
var clonedArray=JSON.parse(JSON.stringify(originalArray));
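Modern engines (current browsers, Node.js 17+) also provide structuredClone(), which deep-clones nested arrays and objects without the JSON round-trip's limitations on Dates, Maps, etc. (it still throws on functions):

```javascript
const original = [{ a: 1 }, [2, [3]]];
const copy = structuredClone(original);

// Mutating a deeply nested value in the copy leaves the original untouched
copy[1][1][0] = 99;
console.log(original[1][1][0]); // 3
console.log(copy[1][1][0]);     // 99
```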

Take a look at: link. It's not about speed, but convenience. Besides, as you can see, you can only use slice(0) on primitive types.
To make an independent copy of an array rather than a copy of the reference to it, you can use the array slice method.
Example:
var oldArray = ["mip", "map", "mop"];
var newArray = oldArray.slice();
To copy or clone an object :
function cloneObject(source) {
  for (var i in source) {
    if (typeof source[i] === 'object' && source[i] !== null) {
      this[i] = new cloneObject(source[i]);
    } else {
      this[i] = source[i];
    }
  }
}
var obj1= {bla:'blabla',foo:'foofoo',etc:'etc'};
var obj2= new cloneObject(obj1);
Source: link

ECMAScript 2015 way with the Spread operator:
Basic examples:
var copyOfOldArray = [...oldArray]
var twoArraysBecomeOne = [...firstArray, ...secondArray]
Try in the browser console:
var oldArray = [1, 2, 3]
var copyOfOldArray = [...oldArray]
console.log(oldArray)
console.log(copyOfOldArray)
var firstArray = [5, 6, 7]
var secondArray = ["a", "b", "c"]
var twoArraysBecomeOne = [...firstArray, ...secondArray]
console.log(twoArraysBecomeOne);
References
6 Great Uses of the Spread Operator
Spread syntax

As @Dan said, "This answer becomes outdated fast. Use benchmarks to check the actual situation." There is one specific result from jsperf that has not had an answer of its own: while:
var i = a.length;
while(i--) { b[i] = a[i]; }
had 960,589 ops/sec, with the runner-up a.concat() at 578,129 ops/sec, which is about 60% of the speed.
This is on the latest Firefox (40), 64-bit.
@aleclarson created a new, more reliable benchmark.

Benchmark time!
function log(data) {
document.getElementById("log").textContent += data + "\n";
}
benchmark = (() => {
time_function = function(ms, f, num) {
var z = 0;
var t = new Date().getTime();
for (z = 0;
((new Date().getTime() - t) < ms); z++)
f(num);
return (z)
}
function clone1(arr) {
return arr.slice(0);
}
function clone2(arr) {
return [...arr]
}
function clone3(arr) {
return [].concat(arr);
}
Array.prototype.clone = function() {
return this.map(e => Array.isArray(e) ? e.clone() : e);
};
function clone4(arr) {
return arr.clone();
}
function benchmark() {
function compare(a, b) {
if (a[1] > b[1]) {
return -1;
}
if (a[1] < b[1]) {
return 1;
}
return 0;
}
funcs = [clone1, clone2, clone3, clone4];
results = [];
funcs.forEach((ff) => {
console.log("Benchmarking: " + ff.name);
var s = time_function(2500, ff, Array(1024));
results.push([ff, s]);
console.log("Score: " + s);
})
return results.sort(compare);
}
return benchmark;
})()
log("Starting benchmark...\n");
res = benchmark();
console.log("Winner: " + res[0][0].name + " !!!");
count = 1;
res.forEach((r) => {
log((count++) + ". " + r[0].name + " score: " + Math.floor(10000 * r[1] / res[0][1]) / 100 + ((count == 2) ? "% *winner*" : "% speed of winner.") + " (" + Math.round(r[1] * 100) / 100 + ")");
});
log("\nWinner code:\n");
log(res[0][0].toString());
<textarea rows="50" cols="80" style="font-size: 16; resize:none; border: none;" id="log"></textarea>
The benchmark runs for about 10 seconds from the moment you start it (2.5 s per clone function).
My results:
Chrome (V8 engine):
1. clone1 score: 100% *winner* (4110764)
2. clone3 score: 74.32% speed of winner. (3055225)
3. clone2 score: 30.75% speed of winner. (1264182)
4. clone4 score: 21.96% speed of winner. (902929)
Firefox (SpiderMonkey Engine):
1. clone1 score: 100% *winner* (8448353)
2. clone3 score: 16.44% speed of winner. (1389241)
3. clone4 score: 5.69% speed of winner. (481162)
4. clone2 score: 2.27% speed of winner. (192433)
Winner code:
function clone1(arr) {
return arr.slice(0);
}
Winner engine:
SpiderMonkey (Mozilla/Firefox)

It depends on the browser. If you look in the blog post Array.prototype.slice vs manual array creation, there is a rough guide to performance of each:

There is a much cleaner solution:
var srcArray = [1, 2, 3];
var clonedArray = srcArray.length === 1 ? [srcArray[0]] : Array.apply(this, srcArray);
The length check is required, because the Array constructor behaves differently when it is called with exactly one argument.
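A minimal illustration of that single-argument quirk:

```javascript
// One numeric argument sets the length (empty slots); it does not create [3]:
console.log(Array(3).length); // 3
console.log(0 in Array(3));   // false (a hole, not an element)

// Two or more arguments become elements:
console.log(Array(1, 2)); // [ 1, 2 ]

// So Array.apply(this, [7]) would produce 7 empty slots instead of [7]:
console.log(Array.apply(null, [7]).length); // 7
```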

Remember .slice() won't deep-copy two-dimensional arrays; the nested arrays are still copied by reference. You'll need a function like this:
function copy(array) {
return array.map(function(arr) {
return arr.slice();
});
}
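Usage, with the helper repeated so the snippet is self-contained. Note it copies exactly two levels, so a third level of nesting would still be shared:

```javascript
function copy(array) {
  return array.map(function (arr) {
    return arr.slice();
  });
}

const grid = [[1, 2], [3, 4]];
const dup = copy(grid);

// The inner rows were copied, so mutating the copy leaves the original intact
dup[0][0] = 99;
console.log(grid[0][0]); // 1
console.log(dup[0][0]);  // 99
```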

It depends on the length of the array. If the array length is <= 1,000,000, the slice and concat methods take approximately the same time. But beyond that range, the concat method wins.
For example, try this code:
var original_array = [];
for(var i = 0; i < 10000000; i ++) {
original_array.push( Math.floor(Math.random() * 1000000 + 1));
}
function a1() {
var dup = [];
var start = Date.now();
dup = original_array.slice();
var end = Date.now();
console.log('slice method takes ' + (end - start) + ' ms');
}
function a2() {
var dup = [];
var start = Date.now();
dup = original_array.concat([]);
var end = Date.now();
console.log('concat method takes ' + (end - start) + ' ms');
}
function a3() {
var dup = [];
var start = Date.now();
for(var i = 0; i < original_array.length; i ++) {
dup.push(original_array[i]);
}
var end = Date.now();
console.log('for loop with push method takes ' + (end - start) + ' ms');
}
function a4() {
var dup = [];
var start = Date.now();
for(var i = 0; i < original_array.length; i ++) {
dup[i] = original_array[i];
}
var end = Date.now();
console.log('for loop with = method takes ' + (end - start) + ' ms');
}
function a5() {
var dup = new Array(original_array.length);
var start = Date.now();
for(var i = 0; i < original_array.length; i ++) {
dup[i] = original_array[i];
}
var end = Date.now();
console.log('for loop with = method and array constructor takes ' + (end - start) + ' ms');
}
}
a1();
a2();
a3();
a4();
a5();
If you set the length of original_array to 1,000,000, the slice and concat methods take approximately the same time (3-4 ms, depending on the random numbers).
If you set the length of original_array to 10,000,000, then the slice method takes over 60 ms and the concat method takes over 20 ms.

In ES6, you can simply utilize the Spread syntax.
Example:
let arr = ['a', 'b', 'c'];
let arr2 = [...arr];
Please note that the spread operator generates a completely new array, so modifying one won't affect the other.
Example:
arr2.push('d') // becomes ['a', 'b', 'c', 'd']
console.log(arr) // while arr retains its values ['a', 'b', 'c']

A simple solution:
original = [1,2,3]
cloned = original.map(x=>x)

const arr = ['1', '2', '3'];
// Old way
const cloneArr = arr.slice();
// ES6 way
const cloneArrES6 = [...arr];
// But the problem with the 3rd approach is that with a multi-dimensional
// array, only the first level is copied
const nums = [
[1, 2],
[10],
];
const cloneNums = [...nums];
// Let's change the first item in the first nested item in our cloned array.
cloneNums[0][0] = '8';
console.log(cloneNums);
// [ [ '8', 2 ], [ 10 ] ]
// NOOooo, the original is also affected
console.log(nums);
// [ [ '8', 2 ], [ 10 ] ]
Note that Array.from also produces only a shallow copy, so it does not avoid this scenario for nested arrays; a deep clone is needed for that. For flat arrays, it is simply another equivalent option:
const arr = ['1', '2', '3'];
const cloneArr = Array.from(arr);
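A quick check of Array.from's behavior with nested arrays confirms it copies only the top level, just like spread:

```javascript
const nested = [[1, 2], [10]];
const viaFrom = Array.from(nested);

// The outer array is new, but the inner arrays are shared
viaFrom[0][0] = 8;
console.log(nested[0][0]); // 8
console.log(viaFrom === nested); // false
```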

There are several ways to clone an array. Basically, cloning falls into two categories:
Shallow copy
Deep copy
Shallow copies only cover the 1st level of the array and the rest are
referenced. If you want a true copy of nested elements in the arrays, you’ll need a
deep clone.
Example :
const arr1 = [1,2,3,4,5,6,7]
// Normal array (shallow copy is enough)
const arr2 = [1,2,3,[4],[[5]],6,7]
// Nested array (deep copy required)
Approach 1 : Using the (...) spread operator (shallow copy)
const newArray = [...arr1] // [1,2,3,4,5,6,7]
Approach 2 : Using the built-in Array slice method (shallow copy)
const newArray = arr1.slice() // [1,2,3,4,5,6,7]
Approach 3 : Using the built-in Array concat method (shallow copy)
const newArray = [].concat(arr1) // [1,2,3,4,5,6,7]
Approach 4 : Using JSON.stringify/parse (deep copy, JSON-serializable data only)
const newArray = JSON.parse(JSON.stringify(arr2)) // [1,2,3,[4],[[5]],6,7]
Approach 5 : Using your own recursive function, or Lodash's _.cloneDeep method (deep copy)
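A minimal sketch of such a recursive function (a hypothetical deepClone helper that handles arrays and plain objects only; no Dates, Maps, or cyclic references):

```javascript
function deepClone(value) {
  if (Array.isArray(value)) return value.map(deepClone);
  if (value !== null && typeof value === 'object') {
    const out = {};
    for (const k of Object.keys(value)) out[k] = deepClone(value[k]);
    return out;
  }
  return value; // primitives (and functions) are returned as-is
}

const a = [1, [2, [3]], { x: { y: 4 } }];
const b = deepClone(a);

// Deep levels are independent copies
b[1][1][0] = 0;
console.log(a[1][1][0]); // 3
```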

Fast ways to duplicate an array in JavaScript, in order:
#1: array1copy = [...array1];
#2: array1copy = array1.slice(0);
#3: array1copy = array1.slice();
If your array's objects contain JSON-non-serializable content (functions, Number.POSITIVE_INFINITY, etc.), avoid
array1copy = JSON.parse(JSON.stringify(array1))
since those values will be silently lost or mangled.

You can use this code. It clones the array in an immutable way, which is a clean approach to array cloning:
const array = [1, 2, 3, 4]
const newArray = [...array]
newArray.push(6)
console.log(array)
console.log(newArray)

If you want a REAL clone of an object/array in JS, with all attributes and sub-objects cloned rather than referenced:
export function clone(arr) {
return JSON.parse(JSON.stringify(arr))
}
All the other operations do not create clones, because they only give you a new root element; the included objects are still referenced, unless you traverse recursively through the object tree.
For a simple copy, those are OK. For operations where the storage address matters, I suggest (and in most other cases too, because it is fast!) converting to a string and back into a completely new object.

If you are talking about slice: it is used to copy elements from an array and create a clone with the same number of elements or fewer.
var arr = [1, 2, 3, 4, 5];
function slc(arr) {
var sliced = arr.slice(0, 5);
// arr.slice(start index, end index (exclusive))
console.log(sliced);
}
slc(arr);


Reduce method of array not reaching last element [duplicate]

This question already has answers here:
Get all unique values in a JavaScript array (remove duplicates)
(91 answers)
Closed 5 years ago.
I have a very simple JavaScript array that may or may not contain duplicates.
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
I need to remove the duplicates and put the unique values in a new array.
I could point to all the code that I've tried but I think it's useless because they don't work. I accept jQuery solutions too.
Similar question:
Get all non-unique values (i.e.: duplicate/more than one occurrence) in an array
TL;DR
Using the Set constructor and the spread syntax:
uniq = [...new Set(array)];
(Note that uniq will be an array: new Set() turns the array into a set, and [...] turns it back into an array again.)
"Smart" but naïve way
uniqueArray = a.filter(function(item, pos) {
return a.indexOf(item) == pos;
})
Basically, we iterate over the array and, for each element, check if the first position of this element in the array is equal to the current position. Obviously, these two positions are different for duplicate elements.
Using the 3rd ("this array") parameter of the filter callback we can avoid a closure of the array variable:
uniqueArray = a.filter(function(item, pos, self) {
return self.indexOf(item) == pos;
})
Although concise, this algorithm is not particularly efficient for large arrays (quadratic time).
Hashtables to the rescue
function uniq(a) {
var seen = {};
return a.filter(function(item) {
return seen.hasOwnProperty(item) ? false : (seen[item] = true);
});
}
This is how it's usually done. The idea is to place each element in a hashtable and then check for its presence instantly. This gives us linear time, but has at least two drawbacks:
since hash keys can only be strings or symbols in JavaScript, this code doesn't distinguish numbers and "numeric strings". That is, uniq([1,"1"]) will return just [1]
for the same reason, all objects will be considered equal: uniq([{foo:1},{foo:2}]) will return just [{foo:1}].
That said, if your arrays contain only primitives and you don't care about types (e.g. it's always numbers), this solution is optimal.
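The two drawbacks above, shown concretely with the same hashtable-based uniq:

```javascript
function uniq(a) {
  var seen = {};
  return a.filter(function (item) {
    return seen.hasOwnProperty(item) ? false : (seen[item] = true);
  });
}

// The number 1 and the string "1" coerce to the same property key:
console.log(uniq([1, "1", 2])); // [ 1, 2 ]

// All plain objects stringify to "[object Object]", so only the first survives:
console.log(uniq([{ foo: 1 }, { foo: 2 }])); // [ { foo: 1 } ]
```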
The best from two worlds
A universal solution combines both approaches: it uses hash lookups for primitives and linear search for objects.
function uniq(a) {
var prims = {"boolean":{}, "number":{}, "string":{}}, objs = [];
return a.filter(function(item) {
var type = typeof item;
if(type in prims)
return prims[type].hasOwnProperty(item) ? false : (prims[type][item] = true);
else
return objs.indexOf(item) >= 0 ? false : objs.push(item);
});
}
sort | uniq
Another option is to sort the array first, and then remove each element equal to the preceding one:
function uniq(a) {
return a.sort().filter(function(item, pos, ary) {
return !pos || item != ary[pos - 1];
});
}
Again, this doesn't work with objects (because all objects are equal for sort). Additionally, we silently change the original array as a side effect - not good! However, if your input is already sorted, this is the way to go (just remove sort from the above).
Unique by...
Sometimes it's desired to uniquify a list based on some criteria other than just equality, for example, to filter out objects that are different but share some property. This can be done elegantly by passing a callback. This "key" callback is applied to each element, and elements with equal "keys" are removed. Since key is expected to return a primitive, a hash table will work fine here:
function uniqBy(a, key) {
var seen = {};
return a.filter(function(item) {
var k = key(item);
return seen.hasOwnProperty(k) ? false : (seen[k] = true);
})
}
A particularly useful key() is JSON.stringify which will remove objects that are physically different, but "look" the same:
a = [[1,2,3], [4,5,6], [1,2,3]]
b = uniqBy(a, JSON.stringify)
console.log(b) // [[1,2,3], [4,5,6]]
If the key is not primitive, you have to resort to the linear search:
function uniqBy(a, key) {
var index = [];
return a.filter(function (item) {
var k = key(item);
return index.indexOf(k) >= 0 ? false : index.push(k);
});
}
In ES6 you can use a Set:
function uniqBy(a, key) {
let seen = new Set();
return a.filter(item => {
let k = key(item);
return seen.has(k) ? false : seen.add(k);
});
}
or a Map:
function uniqBy(a, key) {
return [
...new Map(
a.map(x => [key(x), x])
).values()
]
}
which both also work with non-primitive keys.
First or last?
When removing objects by a key, you might want to keep either the first of "equal" objects or the last one.
Use the Set variant above to keep the first, and the Map to keep the last:
function uniqByKeepFirst(a, key) {
let seen = new Set();
return a.filter(item => {
let k = key(item);
return seen.has(k) ? false : seen.add(k);
});
}
function uniqByKeepLast(a, key) {
return [
...new Map(
a.map(x => [key(x), x])
).values()
]
}
//
data = [
{a:1, u:1},
{a:2, u:2},
{a:3, u:3},
{a:4, u:1},
{a:5, u:2},
{a:6, u:3},
];
console.log(uniqByKeepFirst(data, it => it.u))
console.log(uniqByKeepLast(data, it => it.u))
Libraries
Both underscore and Lo-Dash provide uniq methods. Their algorithms are basically similar to the first snippet above and boil down to this:
var result = [];
a.forEach(function(item) {
if(result.indexOf(item) < 0) {
result.push(item);
}
});
This is quadratic, but there are nice additional goodies, like wrapping native indexOf, ability to uniqify by a key (iteratee in their parlance), and optimizations for already sorted arrays.
If you're using jQuery and can't stand anything without a dollar before it, it goes like this:
$.uniqArray = function(a) {
return $.grep(a, function(item, pos) {
return $.inArray(item, a) === pos;
});
}
which is, again, a variation of the first snippet.
Performance
Function calls are expensive in JavaScript, therefore the above solutions, as concise as they are, are not particularly efficient. For maximal performance, replace filter with a loop and get rid of other function calls:
function uniq_fast(a) {
var seen = {};
var out = [];
var len = a.length;
var j = 0;
for(var i = 0; i < len; i++) {
var item = a[i];
if(seen[item] !== 1) {
seen[item] = 1;
out[j++] = item;
}
}
return out;
}
This chunk of ugly code does the same as the snippet #3 above, but an order of magnitude faster (as of 2017 it's only twice as fast - JS core folks are doing a great job!)
function uniq(a) {
var seen = {};
return a.filter(function(item) {
return seen.hasOwnProperty(item) ? false : (seen[item] = true);
});
}
function uniq_fast(a) {
var seen = {};
var out = [];
var len = a.length;
var j = 0;
for(var i = 0; i < len; i++) {
var item = a[i];
if(seen[item] !== 1) {
seen[item] = 1;
out[j++] = item;
}
}
return out;
}
/////
var r = [0,1,2,3,4,5,6,7,8,9],
a = [],
LEN = 1000,
LOOPS = 1000;
while(LEN--)
a = a.concat(r);
var d = new Date();
for(var i = 0; i < LOOPS; i++)
uniq(a);
document.write('<br>uniq, ms/loop: ' + (new Date() - d)/LOOPS)
var d = new Date();
for(var i = 0; i < LOOPS; i++)
uniq_fast(a);
document.write('<br>uniq_fast, ms/loop: ' + (new Date() - d)/LOOPS)
ES6
ES6 provides the Set object, which makes things a whole lot easier:
function uniq(a) {
return Array.from(new Set(a));
}
or
let uniq = a => [...new Set(a)];
Note that, unlike in Python, ES6 sets are iterated in insertion order, so this code preserves the order of the original array.
However, if you need an array with unique elements, why not use sets right from the beginning?
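For instance, if the collection lives entirely in your own code, a Set can replace the array outright; a minimal sketch:

```javascript
// Keep the collection as a Set from the start: duplicates are
// rejected on insertion, and insertion order is preserved.
const seen = new Set();

for (const name of ["Mike", "Matt", "Nancy", "Matt", "Mike"]) {
  seen.add(name); // a no-op if the value is already present
}

console.log(seen.size); // 3
console.log([...seen]); // ["Mike", "Matt", "Nancy"]
```

Converting back to an array (via spread or Array.from) is then only needed at the boundary where an array is actually required.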
Generators
A "lazy", generator-based version of uniq can be built on the same basis:
take the next value from the argument
if it's been seen already, skip it
otherwise, yield it and add it to the set of already seen values
function* uniqIter(a) {
let seen = new Set();
for (let x of a) {
if (!seen.has(x)) {
seen.add(x);
yield x;
}
}
}
// example:
function* randomsBelow(limit) {
while (1)
yield Math.floor(Math.random() * limit);
}
// note that randomsBelow is endless
count = 20;
limit = 30;
for (let r of uniqIter(randomsBelow(limit))) {
console.log(r);
if (--count === 0)
break
}
// exercise for the reader: what happens if we set `limit` less than `count` and why
Quick and dirty using jQuery:
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
var uniqueNames = [];
$.each(names, function(i, el){
if($.inArray(el, uniqueNames) === -1) uniqueNames.push(el);
});
Got tired of seeing all the bad examples with for-loops or jQuery. JavaScript has the perfect tools for this nowadays: sort, map and reduce.
Uniq reduce while keeping existing order
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
var uniq = names.reduce(function(a,b){
if (a.indexOf(b) < 0 ) a.push(b);
return a;
},[]);
console.log(uniq, names) // [ 'Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Carl' ]
// one liner
return names.reduce(function(a,b){if(a.indexOf(b)<0)a.push(b);return a;},[]);
Faster uniq with sorting
There are probably faster ways but this one is pretty decent.
var uniq = names.slice() // slice makes copy of array before sorting it
.sort(function(a,b){
return a < b ? -1 : a > b ? 1 : 0; // the comparator must return a number, not a boolean
})
.reduce(function(a,b){
if (a.slice(-1)[0] !== b) a.push(b); // slice(-1)[0] means last item in array without removing it (like .pop())
return a;
},[]); // this empty array becomes the starting value for a
// one liner
return names.slice().sort(function(a,b){return a < b ? -1 : a > b ? 1 : 0}).reduce(function(a,b){if (a.slice(-1)[0] !== b) a.push(b);return a;},[]);
Update 2015: ES6 version:
In ES6 you have Sets and Spread which makes it very easy and performant to remove all duplicates:
var uniq = [ ...new Set(names) ]; // [ 'Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Carl' ]
Sort based on occurrence:
Someone asked about ordering the results based on how many unique names there are:
var names = ['Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Nancy', 'Carl']
var uniq = names
.map((name) => {
return {count: 1, name: name}
})
.reduce((a, b) => {
a[b.name] = (a[b.name] || 0) + b.count
return a
}, {})
var sorted = Object.keys(uniq).sort((a, b) => uniq[b] - uniq[a]) // descending by count; the comparator must return a number
console.log(sorted)
Vanilla JS: Remove duplicates using an Object like a Set
You can always try putting it into an object, and then iterating through its keys:
function remove_duplicates(arr) {
var obj = {};
var ret_arr = [];
for (var i = 0; i < arr.length; i++) {
obj[arr[i]] = true;
}
for (var key in obj) {
ret_arr.push(key);
}
return ret_arr;
}
Vanilla JS: Remove duplicates by tracking already seen values (order-safe)
Or, for an order-safe version, use an object to store all previously seen values, and check values against it before adding to an array.
function remove_duplicates_safe(arr) {
var seen = {};
var ret_arr = [];
for (var i = 0; i < arr.length; i++) {
if (!(arr[i] in seen)) {
ret_arr.push(arr[i]);
seen[arr[i]] = true;
}
}
return ret_arr;
}
ECMAScript 6: Use the new Set data structure (order-safe)
ECMAScript 6 adds the new Set Data-Structure, which lets you store values of any type. Set.values returns elements in insertion order.
function remove_duplicates_es6(arr) {
let s = new Set(arr);
let it = s.values();
return Array.from(it);
}
Example usage:
a = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
b = remove_duplicates(a);
// b:
// ["Adam", "Carl", "Jenny", "Matt", "Mike", "Nancy"]
c = remove_duplicates_safe(a);
// c:
// ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]
d = remove_duplicates_es6(a);
// d:
// ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]
A single-line version using the array .filter and .indexOf functions:
arr = arr.filter(function (value, index, array) {
return array.indexOf(value) === index;
});
Use Underscore.js
It's a library with a host of functions for manipulating arrays.
It's the tie to go along with jQuery's tux, and Backbone.js's
suspenders.
_.uniq
_.uniq(array, [isSorted], [iterator]) Alias: unique
Produces a duplicate-free version of the array, using === to test object
equality. If you know in advance that the array is sorted, passing
true for isSorted will run a much faster algorithm. If you want to
compute unique items based on a transformation, pass an iterator
function.
Example
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
alert(_.uniq(names, false));
Note: Lo-Dash (an underscore competitor) also offers a comparable .uniq implementation.
One line:
let names = ['Mike','Matt','Nancy','Adam','Jenny','Nancy','Carl', 'Nancy'];
let dup = [...new Set(names)];
console.log(dup);
You can simply do it in JavaScript, with the help of the second - index - parameter of the filter method:
var a = [2,3,4,5,5,4];
a.filter(function(value, index){ return a.indexOf(value) == index });
or in short hand
a.filter((v,i) => a.indexOf(v) == i)
use Array.filter() like this
var actualArr = ['Apple', 'Apple', 'Banana', 'Mango', 'Strawberry', 'Banana'];
console.log('Actual Array: ' + actualArr);
var filteredArr = actualArr.filter(function(item, index) {
return actualArr.indexOf(item) == index; // filter expects a boolean, not the item itself
});
console.log('Filtered Array: ' + filteredArr);
this can be made shorter in ES6 to
actualArr.filter((item,index,self) => self.indexOf(item)==index);
Here is a nice explanation of Array.filter().
The most concise way to remove duplicates from an array using native JavaScript functions is to use a sequence like below:
vals.sort().reduce(function(a, b){ if (b != a[0]) a.unshift(b); return a }, [])
There's no need for slice or indexOf within the reduce function, like I've seen in other examples! It makes sense to use indexOf along with a filter function though:
vals.filter(function(v, i, a){ return i == a.indexOf(v) })
Yet another ES6(2015) way of doing this that already works on a few browsers is:
Array.from(new Set(vals))
or even using the spread operator:
[...new Set(vals)]
cheers!
The top answers have complexity of O(n²), but this can be done with just O(n) by using an object as a hash:
function getDistinctArray(arr) {
var dups = {};
return arr.filter(function(el) {
var hash = el.valueOf();
var isDup = dups[hash];
dups[hash] = true;
return !isDup;
});
}
This will work for strings, numbers, and dates. If your array contains objects, the above solution won't work because when coerced to a string, they will all have a value of "[object Object]" (or something similar) and that isn't suitable as a lookup value. You can get an O(n) implementation for objects by setting a flag on the object itself:
function getDistinctObjArray(arr) {
var distinctArr = arr.filter(function(el) {
var isDup = el.inArray;
el.inArray = true;
return !isDup;
});
distinctArr.forEach(function(el) {
delete el.inArray;
});
return distinctArr;
}
2019 edit: Modern versions of JavaScript make this a much easier problem to solve. Using Set will work, regardless of whether your array contains objects, strings, numbers, or any other type.
function getDistinctArray(arr) {
return [...new Set(arr)];
}
The implementation is so simple, defining a function is no longer warranted.
Simplest one I've run into so far, in ES6:
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl", "Mike", "Nancy"]
var noDupe = Array.from(new Set(names))
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set
In ECMAScript 6 (aka ECMAScript 2015), Set can be used to filter out duplicates. Then it can be converted back to an array using the spread operator.
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"],
unique = [...new Set(names)];
Solution 1
Array.prototype.unique = function() {
var a = [];
for (var i = 0; i < this.length; i++) {
var current = this[i];
if (a.indexOf(current) < 0) a.push(current);
}
return a;
}
Solution 2 (using Set)
Array.prototype.unique = function() {
return Array.from(new Set(this));
}
Test
var x=[1,2,3,3,2,1];
x.unique() //[1,2,3]
Performance
When I tested both implementation (with and without Set) for performance in chrome, I found that the one with Set is much much faster!
Array.prototype.unique1 = function() {
var a = [];
for (var i = 0; i < this.length; i++) {
var current = this[i];
if (a.indexOf(current) < 0) a.push(current);
}
return a;
}
Array.prototype.unique2 = function() {
return Array.from(new Set(this));
}
var x=[];
for(var i=0;i<10000;i++){
x.push("x"+i);x.push("x"+(i+1));
}
console.time("unique1");
console.log(x.unique1());
console.timeEnd("unique1");
console.time("unique2");
console.log(x.unique2());
console.timeEnd("unique2");
Go for this one:
var uniqueArray = duplicateArray.filter(function(elem, pos) {
return duplicateArray.indexOf(elem) == pos;
});
Now uniqueArray contains no duplicates.
The following is more than 80% faster than the jQuery method listed (see tests below).
It is an answer from a similar question a few years ago. If I come across the person who originally proposed it I will post credit.
Pure JS.
var temp = {};
for (var i = 0; i < array.length; i++)
temp[array[i]] = true;
var r = [];
for (var k in temp)
r.push(k);
return r;
My test case comparison:
http://jsperf.com/remove-duplicate-array-tests
I had done a detailed comparison of duplicate removal at some other question, but having noticed that this is the right place, I just wanted to share it here as well.
I believe this is the best way to do this
var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200],
reduced = Object.keys(myArray.reduce((p,c) => (p[c] = true,p),{}));
console.log(reduced);
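One caveat worth noting about the snippet above: Object.keys always yields strings, so the numbers come back as "100" and "200". If the original type matters, map them back; a sketch, assuming numeric input:

```javascript
var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200];

// Object.keys returns the lookup-table keys as strings...
var asStrings = Object.keys(myArray.reduce((p, c) => (p[c] = true, p), {}));
console.log(asStrings); // ["100", "200"]

// ...so convert back to numbers if you need them
var asNumbers = asStrings.map(Number);
console.log(asNumbers); // [100, 200]
```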
OK, even though this one is O(n) and the others are O(n^2), I was curious to see a benchmark comparison between this reduce / lookup table approach and the filter/indexOf combo (I chose Jeetendra's very nice implementation: https://stackoverflow.com/a/37441144/4543207). I prepared a 100K item array filled with random positive integers in the range 0-9999, and the code removes the duplicates. I repeated the test 10 times, and the averaged results show that they are no match in performance.
In firefox v47 reduce & lut : 14.85ms vs filter & indexOf : 2836ms
In chrome v51 reduce & lut : 23.90ms vs filter & indexOf : 1066ms
Well, OK, so far so good. But let's do it properly this time, in ES6 style. It looks so cool! But as of now, how it will perform against the powerful lut solution is a mystery to me. Let's first see the code and then benchmark it.
var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200],
reduced = [...myArray.reduce((p,c) => p.set(c,true),new Map()).keys()];
console.log(reduced);
Wow, that was short! But how about the performance? It's beautiful. Since the heavy weight of filter / indexOf is lifted off our shoulders, now I can test an array of 1M random items of positive integers in the range 0..99999 to get an average from 10 consecutive tests. I can say this time it's a real match. See the result for yourself :)
var ranar = [],
red1 = a => Object.keys(a.reduce((p,c) => (p[c] = true,p),{})),
red2 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
avg1 = [],
avg2 = [],
ts = 0,
te = 0,
res1 = [],
res2 = [],
count= 10;
for (var i = 0; i<count; i++){
ranar = (new Array(1000000).fill(true)).map(e => Math.floor(Math.random()*100000));
ts = performance.now();
res1 = red1(ranar);
te = performance.now();
avg1.push(te-ts);
ts = performance.now();
res2 = red2(ranar);
te = performance.now();
avg2.push(te-ts);
}
avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;
console.log("reduce & lut took: " + avg1 + "msec");
console.log("map & spread took: " + avg2 + "msec");
Which one would you use? Well, not so fast! Don't be deceived: Map is playing away from home here. Notice that in all of the above cases we fill an array of size n with numbers from a range smaller than n. That is, if we have an array of size 100 and fill it with random numbers 0..9, there are definite duplicates, and almost certainly every number has a duplicate. So what if we fill the array with random numbers from a much wider range? Let's now see Map playing at home. This time it's an array of 100K items, but the random number range is 0..100M. We will do 100 consecutive tests to average the results. OK, let's see the bets! <- no typo
var ranar = [],
red1 = a => Object.keys(a.reduce((p,c) => (p[c] = true,p),{})),
red2 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
avg1 = [],
avg2 = [],
ts = 0,
te = 0,
res1 = [],
res2 = [],
count= 100;
for (var i = 0; i<count; i++){
ranar = (new Array(100000).fill(true)).map(e => Math.floor(Math.random()*100000000));
ts = performance.now();
res1 = red1(ranar);
te = performance.now();
avg1.push(te-ts);
ts = performance.now();
res2 = red2(ranar);
te = performance.now();
avg2.push(te-ts);
}
avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;
console.log("reduce & lut took: " + avg1 + "msec");
console.log("map & spread took: " + avg2 + "msec");
Now this is the spectacular comeback of Map()! Maybe now you can make a better decision when you want to remove the dupes.
Well, OK, we are all happy now. But the lead role always comes last, with some applause. I am sure some of you wonder what the Set object would do. Now that we are open to ES6, and we know Map is the winner of the previous games, let us compare Map with Set as a final. A typical Real Madrid vs Barcelona game this time... or is it? Let's see who will win el clásico :)
var ranar = [],
red1 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
red2 = a => Array.from(new Set(a)),
avg1 = [],
avg2 = [],
ts = 0,
te = 0,
res1 = [],
res2 = [],
count= 100;
for (var i = 0; i<count; i++){
ranar = (new Array(100000).fill(true)).map(e => Math.floor(Math.random()*10000000));
ts = performance.now();
res1 = red1(ranar);
te = performance.now();
avg1.push(te-ts);
ts = performance.now();
res2 = red2(ranar);
te = performance.now();
avg2.push(te-ts);
}
avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;
console.log("map & spread took: " + avg1 + "msec");
console.log("set & A.from took: " + avg2 + "msec");
Wow! Well, unexpectedly, it didn't turn out to be an el clásico at all. More like FC Barcelona against CA Osasuna :))
Here is a simple answer to the question.
var names = ["Alex","Tony","James","Suzane", "Marie", "Laurence", "Alex", "Suzane", "Marie", "Marie", "James", "Tony", "Alex"];
var uniqueNames = [];
for(var i in names){
if(uniqueNames.indexOf(names[i]) === -1){
uniqueNames.push(names[i]);
}
}
A simple but effective technique is to use the filter method in combination with the filter function(value, index){ return this.indexOf(value) == index }.
Code example :
var data = [2,3,4,5,5,4];
var filter = function(value, index){ return this.indexOf(value) == index };
var filteredData = data.filter(filter, data );
document.body.innerHTML = '<pre>' + JSON.stringify(filteredData, null, '\t') + '</pre>';
See also this Fiddle.
So the options are:
let a = [11,22,11,22];
let b = []
b = [ ...new Set(a) ];
// b = [11, 22]
b = Array.from( new Set(a))
// b = [11, 22]
b = a.filter((val,i)=>{
return a.indexOf(val)==i
})
// b = [11, 22]
Here is code that is very simple to understand and works anywhere (even in PhotoshopScript). Check it out!
var peoplenames = new Array("Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl");
peoplenames = unique(peoplenames);
alert(peoplenames);
function unique(array){
var len = array.length;
for(var i = 0; i < len; i++) for(var j = i + 1; j < len; j++)
if(array[j] == array[i]){
array.splice(j,1);
j--;
len--;
}
return array;
}
//*result* peoplenames == ["Mike","Matt","Nancy","Adam","Jenny","Carl"]
Here is a simple method without any special libraries or special functions:
name_list = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
get_uniq = name_list.filter(function(val,ind) { return name_list.indexOf(val) == ind; })
console.log("Original name list:"+name_list.length, name_list)
console.log("\n Unique name list:"+get_uniq.length, get_uniq)
Apart from being a simpler, more terse solution than the current answers (minus the future-looking ES6 ones), I perf tested this and it was much faster as well:
var uniqueArray = dupeArray.filter(function(item, i, self){
return self.lastIndexOf(item) == i;
});
One caveat: Array.lastIndexOf() was added in IE9, so if you need to go lower than that, you'll need to look elsewhere.
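Note the behavioral difference from the indexOf variants elsewhere on this page: lastIndexOf keeps each duplicate's last occurrence, so the result order can differ. A quick comparison:

```javascript
var dupeArray = ["a", "b", "a", "c", "b"];

// indexOf keeps the FIRST occurrence of each value
var keepFirst = dupeArray.filter(function(item, i, self) {
  return self.indexOf(item) == i;
});
console.log(keepFirst); // ["a", "b", "c"]

// lastIndexOf keeps the LAST occurrence of each value
var keepLast = dupeArray.filter(function(item, i, self) {
  return self.lastIndexOf(item) == i;
});
console.log(keepLast);  // ["a", "c", "b"]
```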
Generic Functional Approach
Here is a generic and strictly functional approach with ES2015:
// small, reusable auxiliary functions
const apply = f => a => f(a);
const flip = f => b => a => f(a) (b);
const uncurry = f => (a, b) => f(a) (b);
const push = x => xs => (xs.push(x), xs);
const foldl = f => acc => xs => xs.reduce(uncurry(f), acc);
const some = f => xs => xs.some(apply(f));
// the actual de-duplicate function
const uniqueBy = f => foldl(
acc => x => some(f(x)) (acc)
? acc
: push(x) (acc)
) ([]);
// comparators
const eq = y => x => x === y;
// string equality case insensitive :D
const seqCI = y => x => x.toLowerCase() === y.toLowerCase();
// mock data
const xs = [1,2,3,1,2,3,4];
const ys = ["a", "b", "c", "A", "B", "C", "D"];
console.log( uniqueBy(eq) (xs) );
console.log( uniqueBy(seqCI) (ys) );
We can easily derive unique from uniqueBy, or use the faster implementation utilizing Sets:
const unique = uniqueBy(eq);
// const unique = xs => Array.from(new Set(xs));
Benefits of this approach:
generic solution by using a separate comparator function
declarative and succinct implementation
reuse of other small, generic functions
Performance Considerations
uniqueBy isn't as fast as an imperative implementation with loops, but it is way more expressive due to its genericity.
If you identify uniqueBy as the cause of a concrete performance penalty in your app, replace it with optimized code. That is, write your code first in a functional, declarative way. Afterwards, if you encounter performance issues, try to optimize the code at the locations that are the cause of the problem.
Memory Consumption and Garbage Collection
uniqueBy utilizes mutations (push(x) (acc)) hidden inside its body. It reuses the accumulator instead of throwing it away after each iteration. This reduces memory consumption and GC pressure. Since this side effect is wrapped inside the function, everything outside remains pure.
var newArray = [];
for (var i = 0; i < originalArray.length; i++) {
if (!newArray.includes(originalArray[i])) {
newArray.push(originalArray[i]);
}
}
The following script returns a new array containing only unique values. It works on strings and numbers. No additional libraries are required, only vanilla JS.
Browser support: Chrome (yes), Firefox 1.5 (Gecko 1.8), Internet Explorer 9, Opera (yes), Safari (yes).
https://jsfiddle.net/fzmcgcxv/3/
var duplicates = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl","Mike","Mike","Nancy","Carl"];
var unique = duplicates.filter(function(elem, pos) {
return duplicates.indexOf(elem) == pos;
});
alert(unique);
If by any chance you were using
D3.js
You could do
d3.set(["foo", "bar", "foo", "baz"]).values() ==> ["foo", "bar", "baz"]
https://github.com/mbostock/d3/wiki/Arrays#set_values
A slight modification of thg435's excellent answer to use a custom comparator:
function contains(array, obj) {
for (var i = 0; i < array.length; i++) {
if (isEqual(array[i], obj)) return true;
}
return false;
}
//comparator
function isEqual(obj1, obj2) {
if (obj1.name == obj2.name) return true;
return false;
}
function removeDuplicates(ary) {
var arr = [];
return ary.filter(function(x) {
return !contains(arr, x) && arr.push(x);
});
}
$(document).ready(function() {
var arr1=["dog","dog","fish","cat","cat","fish","apple","orange"]
var arr2=["cat","fish","mango","apple"]
var uniquevalue=[];
var seconduniquevalue=[];
var finalarray=[];
$.each(arr1,function(key,value){
if($.inArray (value,uniquevalue) === -1)
{
uniquevalue.push(value)
}
});
$.each(arr2,function(key,value){
if($.inArray (value,seconduniquevalue) === -1)
{
seconduniquevalue.push(value)
}
});
$.each(uniquevalue,function(ikey,ivalue){
$.each(seconduniquevalue,function(ukey,uvalue){
if( ivalue == uvalue)
{
finalarray.push(ivalue);
}
});
});
alert(finalarray);
});
https://jsfiddle.net/2w0k5tz8/
function remove_duplicates(array_){
var ret_array = new Array();
for (var a = array_.length - 1; a >= 0; a--) {
for (var b = array_.length - 1; b >= 0; b--) {
if(array_[a] == array_[b] && a != b){
delete array_[b];
}
};
if(array_[a] != undefined)
ret_array.push(array_[a]);
};
return ret_array;
}
console.log(remove_duplicates(Array(1,1,1,2,2,2,3,3,3)));
Loop through, remove duplicates, and push the survivors into a clone array, because deleting entries does not reindex the original array.
Loop backward for better performance (your loop won't need to keep re-checking the length of your array).

Removing duplicate value in array using .filter [duplicate]

This question already has answers here:
Get all unique values in a JavaScript array (remove duplicates)
(91 answers)
Closed 5 years ago.
I have a very simple JavaScript array that may or may not contain duplicates.
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
I need to remove the duplicates and put the unique values in a new array.
I could point to all the code that I've tried but I think it's useless because they don't work. I accept jQuery solutions too.
Similar question:
Get all non-unique values (i.e.: duplicate/more than one occurrence) in an array
TL;DR
Using the Set constructor and the spread syntax:
uniq = [...new Set(array)];
( Note that var uniq will be an array... new Set() turns it into a set, but [... ] turns it back into an array again )
"Smart" but naĂŻve way
uniqueArray = a.filter(function(item, pos) {
return a.indexOf(item) == pos;
})
Basically, we iterate over the array and, for each element, check if the first position of this element in the array is equal to the current position. Obviously, these two positions are different for duplicate elements.
Using the 3rd ("this array") parameter of the filter callback we can avoid a closure of the array variable:
uniqueArray = a.filter(function(item, pos, self) {
return self.indexOf(item) == pos;
})
Although concise, this algorithm is not particularly efficient for large arrays (quadratic time).
Hashtables to the rescue
function uniq(a) {
var seen = {};
return a.filter(function(item) {
return seen.hasOwnProperty(item) ? false : (seen[item] = true);
});
}
This is how it's usually done. The idea is to place each element in a hashtable and then check for its presence instantly. This gives us linear time, but has at least two drawbacks:
since hash keys can only be strings or symbols in JavaScript, this code doesn't distinguish numbers and "numeric strings". That is, uniq([1,"1"]) will return just [1]
for the same reason, all objects will be considered equal: uniq([{foo:1},{foo:2}]) will return just [{foo:1}].
That said, if your arrays contain only primitives and you don't care about types (e.g. it's always numbers), this solution is optimal.
The best from two worlds
A universal solution combines both approaches: it uses hash lookups for primitives and linear search for objects.
function uniq(a) {
var prims = {"boolean":{}, "number":{}, "string":{}}, objs = [];
return a.filter(function(item) {
var type = typeof item;
if(type in prims)
return prims[type].hasOwnProperty(item) ? false : (prims[type][item] = true);
else
return objs.indexOf(item) >= 0 ? false : objs.push(item);
});
}
sort | uniq
Another option is to sort the array first, and then remove each element equal to the preceding one:
function uniq(a) {
return a.sort().filter(function(item, pos, ary) {
return !pos || item != ary[pos - 1];
});
}
Again, this doesn't work with objects (because all objects are equal for sort). Additionally, we silently change the original array as a side effect - not good! However, if your input is already sorted, this is the way to go (just remove sort from the above).
Unique by...
Sometimes it's desired to uniquify a list based on some criteria other than just equality, for example, to filter out objects that are different, but share some property. This can be done elegantly by passing a callback. This "key" callback is applied to each element, and elements with equal "keys" are removed. Since key is expected to return a primitive, hash table will work fine here:
function uniqBy(a, key) {
var seen = {};
return a.filter(function(item) {
var k = key(item);
return seen.hasOwnProperty(k) ? false : (seen[k] = true);
})
}
A particularly useful key() is JSON.stringify which will remove objects that are physically different, but "look" the same:
a = [[1,2,3], [4,5,6], [1,2,3]]
b = uniqBy(a, JSON.stringify)
console.log(b) // [[1,2,3], [4,5,6]]
If the key is not primitive, you have to resort to the linear search:
function uniqBy(a, key) {
var index = [];
return a.filter(function (item) {
var k = key(item);
return index.indexOf(k) >= 0 ? false : index.push(k);
});
}
In ES6 you can use a Set:
function uniqBy(a, key) {
let seen = new Set();
return a.filter(item => {
let k = key(item);
return seen.has(k) ? false : seen.add(k);
});
}
or a Map:
function uniqBy(a, key) {
return [
...new Map(
a.map(x => [key(x), x])
).values()
]
}
which both also work with non-primitive keys.
First or last?
When removing objects by a key, you might to want to keep the first of "equal" objects or the last one.
Use the Set variant above to keep the first, and the Map to keep the last:
function uniqByKeepFirst(a, key) {
let seen = new Set();
return a.filter(item => {
let k = key(item);
return seen.has(k) ? false : seen.add(k);
});
}
function uniqByKeepLast(a, key) {
return [
...new Map(
a.map(x => [key(x), x])
).values()
]
}
//
data = [
{a:1, u:1},
{a:2, u:2},
{a:3, u:3},
{a:4, u:1},
{a:5, u:2},
{a:6, u:3},
];
console.log(uniqByKeepFirst(data, it => it.u))
console.log(uniqByKeepLast(data, it => it.u))
Libraries
Both underscore and Lo-Dash provide uniq methods. Their algorithms are basically similar to the first snippet above and boil down to this:
var result = [];
a.forEach(function(item) {
if(result.indexOf(item) < 0) {
result.push(item);
}
});
This is quadratic, but there are nice additional goodies, like wrapping native indexOf, ability to uniqify by a key (iteratee in their parlance), and optimizations for already sorted arrays.
If you're using jQuery and can't stand anything without a dollar before it, it goes like this:
$.uniqArray = function(a) {
return $.grep(a, function(item, pos) {
return $.inArray(item, a) === pos;
});
}
which is, again, a variation of the first snippet.
Performance
Function calls are expensive in JavaScript, therefore the above solutions, as concise as they are, are not particularly efficient. For maximal performance, replace filter with a loop and get rid of other function calls:
function uniq_fast(a) {
var seen = {};
var out = [];
var len = a.length;
var j = 0;
for(var i = 0; i < len; i++) {
var item = a[i];
if(seen[item] !== 1) {
seen[item] = 1;
out[j++] = item;
}
}
return out;
}
This chunk of ugly code does the same as the snippet #3 above, but an order of magnitude faster (as of 2017 it's only twice as fast - JS core folks are doing a great job!)
function uniq(a) {
var seen = {};
return a.filter(function(item) {
return seen.hasOwnProperty(item) ? false : (seen[item] = true);
});
}
function uniq_fast(a) {
var seen = {};
var out = [];
var len = a.length;
var j = 0;
for(var i = 0; i < len; i++) {
var item = a[i];
if(seen[item] !== 1) {
seen[item] = 1;
out[j++] = item;
}
}
return out;
}
/////
var r = [0,1,2,3,4,5,6,7,8,9],
a = [],
LEN = 1000,
LOOPS = 1000;
while(LEN--)
a = a.concat(r);
var d = new Date();
for(var i = 0; i < LOOPS; i++)
uniq(a);
document.write('<br>uniq, ms/loop: ' + (new Date() - d)/LOOPS)
var d = new Date();
for(var i = 0; i < LOOPS; i++)
uniq_fast(a);
document.write('<br>uniq_fast, ms/loop: ' + (new Date() - d)/LOOPS)
ES6
ES6 provides the Set object, which makes things a whole lot easier:
function uniq(a) {
return Array.from(new Set(a));
}
or
let uniq = a => [...new Set(a)];
Note that, unlike in Python, ES6 sets are iterated in insertion order, so this code preserves the order of the original array.
However, if you need an array with unique elements, why not use sets right from the beginning?
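For instance, if the data arrives incrementally, a Set can be used from the start so duplicates never accumulate at all; a small sketch of that idea:

```javascript
// Collect values into a Set as they arrive; duplicate inserts are no-ops.
const seen = new Set();
for (const name of ["Mike", "Matt", "Nancy", "Matt", "Mike"]) {
  seen.add(name);
}

// Convert to an array only at the point where an array is actually needed.
const result = [...seen];
console.log(result); // ["Mike", "Matt", "Nancy"]
```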
Generators
A "lazy", generator-based version of uniq can be built on the same basis:
take the next value from the argument
if it's been seen already, skip it
otherwise, yield it and add it to the set of already seen values
function* uniqIter(a) {
let seen = new Set();
for (let x of a) {
if (!seen.has(x)) {
seen.add(x);
yield x;
}
}
}
// example:
function* randomsBelow(limit) {
while (1)
yield Math.floor(Math.random() * limit);
}
// note that randomsBelow is endless
count = 20;
limit = 30;
for (let r of uniqIter(randomsBelow(limit))) {
console.log(r);
if (--count === 0)
break
}
// exercise for the reader: what happens if we set `limit` less than `count` and why
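The manual count/break bookkeeping above can be factored into a generic take generator (take and naturals are hypothetical helpers, not part of the answer):

```javascript
// Yield at most n values from any iterable, then stop.
function* take(n, iterable) {
  for (const x of iterable) {
    if (n-- <= 0) return;
    yield x;
  }
}

// An endless generator, analogous to randomsBelow above but deterministic.
function* naturals() {
  let i = 0;
  while (true) yield i++;
}

console.log([...take(5, naturals())]); // [0, 1, 2, 3, 4]
```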
Quick and dirty using jQuery:
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
var uniqueNames = [];
$.each(names, function(i, el){
if($.inArray(el, uniqueNames) === -1) uniqueNames.push(el);
});
I got tired of seeing all the bad examples with for-loops or jQuery. JavaScript has the perfect tools for this nowadays: sort, map and reduce.
Uniq reduce while keeping existing order
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
var uniq = names.reduce(function(a,b){
if (a.indexOf(b) < 0 ) a.push(b);
return a;
},[]);
console.log(uniq, names) // [ 'Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Carl' ]
// one liner
var uniq = names.reduce(function(a,b){if(a.indexOf(b)<0)a.push(b);return a;},[]);
Faster uniq with sorting
There are probably faster ways but this one is pretty decent.
var uniq = names.slice() // slice makes copy of array before sorting it
.sort(function(a,b){
return a > b ? 1 : a < b ? -1 : 0; // a comparator must return a number, not a boolean
})
.reduce(function(a,b){
if (a.slice(-1)[0] !== b) a.push(b); // slice(-1)[0] means last item in array without removing it (like .pop())
return a;
},[]); // this empty array becomes the starting value for a
// one liner
var uniq = names.slice().sort(function(a,b){return a > b ? 1 : a < b ? -1 : 0}).reduce(function(a,b){if (a.slice(-1)[0] !== b) a.push(b);return a;},[]);
Update 2015: ES6 version:
In ES6 you have Sets and Spread which makes it very easy and performant to remove all duplicates:
var uniq = [ ...new Set(names) ]; // [ 'Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Carl' ]
Sort based on occurrence:
Someone asked about ordering the results based on how many unique names there are:
var names = ['Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Nancy', 'Carl']
var uniq = names
.map((name) => {
return {count: 1, name: name}
})
.reduce((a, b) => {
a[b.name] = (a[b.name] || 0) + b.count
return a
}, {})
var sorted = Object.keys(uniq).sort((a, b) => uniq[b] - uniq[a]) // descending by count; the comparator must return a number
console.log(sorted)
Vanilla JS: Remove duplicates using an Object like a Set
You can always try putting it into an object, and then iterating through its keys:
function remove_duplicates(arr) {
var obj = {};
var ret_arr = [];
for (var i = 0; i < arr.length; i++) {
obj[arr[i]] = true;
}
for (var key in obj) {
ret_arr.push(key);
}
return ret_arr;
}
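One caveat worth noting with this approach: plain object keys are always strings, so numeric input comes back stringified (the function is repeated here so the snippet runs on its own):

```javascript
// Object keys are strings, so numbers are coerced on the way in.
function remove_duplicates(arr) {
  var obj = {};
  var ret_arr = [];
  for (var i = 0; i < arr.length; i++) {
    obj[arr[i]] = true;
  }
  for (var key in obj) {
    ret_arr.push(key);
  }
  return ret_arr;
}

console.log(remove_duplicates([1, 2, 2, 3])); // ["1", "2", "3"] -- strings, not numbers
```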
Vanilla JS: Remove duplicates by tracking already seen values (order-safe)
Or, for an order-safe version, use an object to store all previously seen values, and check values against it before adding to an array.
function remove_duplicates_safe(arr) {
var seen = {};
var ret_arr = [];
for (var i = 0; i < arr.length; i++) {
if (!(arr[i] in seen)) {
ret_arr.push(arr[i]);
seen[arr[i]] = true;
}
}
return ret_arr;
}
ECMAScript 6: Use the new Set data structure (order-safe)
ECMAScript 6 adds the new Set data structure, which lets you store unique values of any type. Set.prototype.values() returns elements in insertion order.
function remove_duplicates_es6(arr) {
let s = new Set(arr);
let it = s.values();
return Array.from(it);
}
Example usage:
a = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
b = remove_duplicates(a);
// b:
// ["Adam", "Carl", "Jenny", "Matt", "Mike", "Nancy"]
c = remove_duplicates_safe(a);
// c:
// ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]
d = remove_duplicates_es6(a);
// d:
// ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]
A single line version using array .filter and .indexOf function:
arr = arr.filter(function (value, index, array) {
return array.indexOf(value) === index;
});
Use Underscore.js
It's a library with a host of functions for manipulating arrays.
It's the tie to go along with jQuery's tux, and Backbone.js's
suspenders.
_.uniq
_.uniq(array, [isSorted], [iterator]) Alias: unique
Produces a duplicate-free version of the array, using === to test object
equality. If you know in advance that the array is sorted, passing
true for isSorted will run a much faster algorithm. If you want to
compute unique items based on a transformation, pass an iterator
function.
Example
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
alert(_.uniq(names, false));
Note: Lo-Dash (an underscore competitor) also offers a comparable .uniq implementation.
One line:
let names = ['Mike','Matt','Nancy','Adam','Jenny','Nancy','Carl', 'Nancy'];
let dup = [...new Set(names)];
console.log(dup);
You can simply do it in JavaScript, with the help of the second - index - parameter of the filter method:
var a = [2,3,4,5,5,4];
a.filter(function(value, index){ return a.indexOf(value) == index });
or in short hand
a.filter((v,i) => a.indexOf(v) == i)
Use Array.filter() like this:
var actualArr = ['Apple', 'Apple', 'Banana', 'Mango', 'Strawberry', 'Banana'];
console.log('Actual Array: ' + actualArr);
var filteredArr = actualArr.filter(function(item, index) {
return actualArr.indexOf(item) == index; // the callback should return a boolean
});
console.log('Filtered Array: ' + filteredArr);
this can be made shorter in ES6 to
actualArr.filter((item,index,self) => self.indexOf(item)==index);
Here is a nice explanation of Array.filter().
The most concise way to remove duplicates from an array using native JavaScript functions is to use a sequence like below (note that sort() mutates vals, and unshift builds the result in reverse sorted order):
vals.sort().reduce(function(a, b){ if (b != a[0]) a.unshift(b); return a }, [])
There's no need for slice or indexOf within the reduce function, as I've seen in other examples! It makes sense to use indexOf along with a filter function though:
vals.filter(function(v, i, a){ return i == a.indexOf(v) })
Yet another ES6(2015) way of doing this that already works on a few browsers is:
Array.from(new Set(vals))
or even using the spread operator:
[...new Set(vals)]
cheers!
The top answers have complexity of O(n²), but this can be done with just O(n) by using an object as a hash:
function getDistinctArray(arr) {
var dups = {};
return arr.filter(function(el) {
var hash = el.valueOf();
var isDup = dups[hash];
dups[hash] = true;
return !isDup;
});
}
This will work for strings, numbers, and dates. If your array contains objects, the above solution won't work because when coerced to a string, they will all have a value of "[object Object]" (or something similar) and that isn't suitable as a lookup value. You can get an O(n) implementation for objects by setting a flag on the object itself:
function getDistinctObjArray(arr) {
var distinctArr = arr.filter(function(el) {
var isDup = el.inArray;
el.inArray = true;
return !isDup;
});
distinctArr.forEach(function(el) {
delete el.inArray;
});
return distinctArr;
}
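The string-coercion collision described above is easy to demonstrate (getDistinctArray is repeated from above so the snippet is self-contained):

```javascript
// Hash-based dedupe that relies on string coercion of the lookup key.
function getDistinctArray(arr) {
  var dups = {};
  return arr.filter(function(el) {
    var hash = el.valueOf();
    var isDup = dups[hash];
    dups[hash] = true;
    return !isDup;
  });
}

// Two distinct objects coerce to the same key...
var a = { id: 1 }, b = { id: 2 };
console.log(String(a) === String(b));         // true -- both "[object Object]"
console.log(getDistinctArray([a, b]).length); // 1 -- b was wrongly dropped
```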
2019 edit: Modern versions of JavaScript make this a much easier problem to solve. Using Set will work, regardless of whether your array contains objects, strings, numbers, or any other type.
function getDistinctArray(arr) {
return [...new Set(arr)];
}
The implementation is so simple, defining a function is no longer warranted.
The simplest one I've run into so far, in ES6:
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl", "Mike", "Nancy"]
var noDupe = Array.from(new Set(names))
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set
In ECMAScript 6 (aka ECMAScript 2015), Set can be used to filter out duplicates. Then it can be converted back to an array using the spread operator.
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"],
unique = [...new Set(names)];
Solution 1
Array.prototype.unique = function() {
var a = [];
for (var i = 0; i < this.length; i++) {
var current = this[i];
if (a.indexOf(current) < 0) a.push(current);
}
return a;
}
Solution 2 (using Set)
Array.prototype.unique = function() {
return Array.from(new Set(this));
}
Test
var x=[1,2,3,3,2,1];
x.unique() //[1,2,3]
Performance
When I tested both implementations (with and without Set) for performance in Chrome, I found that the one with Set is much, much faster!
Array.prototype.unique1 = function() {
var a = [];
for (var i = 0; i < this.length; i++) {
var current = this[i];
if (a.indexOf(current) < 0) a.push(current);
}
return a;
}
Array.prototype.unique2 = function() {
return Array.from(new Set(this));
}
var x=[];
for(var i=0;i<10000;i++){
x.push("x"+i);x.push("x"+(i+1));
}
console.time("unique1");
console.log(x.unique1());
console.timeEnd("unique1");
console.time("unique2");
console.log(x.unique2());
console.timeEnd("unique2");
Go for this one:
var uniqueArray = duplicateArray.filter(function(elem, pos) {
return duplicateArray.indexOf(elem) == pos;
});
Now uniqueArray contains no duplicates.
The following is more than 80% faster than the jQuery method listed (see tests below).
It is an answer from a similar question a few years ago. If I come across the person who originally proposed it I will post credit.
Pure JS.
function removeDuplicates(array) {
var temp = {};
for (var i = 0; i < array.length; i++)
temp[array[i]] = true;
var r = [];
for (var k in temp)
r.push(k);
return r;
}
My test case comparison:
http://jsperf.com/remove-duplicate-array-tests
I had done a detailed comparison of duplicate removal at some other question, but having noticed that this is the real place, I just wanted to share it here as well.
I believe this is the best way to do this
var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200],
reduced = Object.keys(myArray.reduce((p,c) => (p[c] = true,p),{})); // note: Object.keys yields strings ("100", "200")
console.log(reduced);
OK, even though this one is O(n) and the others are O(n²), I was curious to see a benchmark comparison between this reduce / lookup table approach and the filter/indexOf combo (I chose Jeetendra's very nice implementation, https://stackoverflow.com/a/37441144/4543207). I prepared a 100K-item array filled with random positive integers in the range 0-9999 and removed the duplicates. I repeated the test 10 times; the averaged results show that they are no match in performance.
In firefox v47 reduce & lut : 14.85ms vs filter & indexOf : 2836ms
In chrome v51 reduce & lut : 23.90ms vs filter & indexOf : 1066ms
Well, OK, so far so good. But let's do it properly this time, in ES6 style. It looks so cool! But as of now, how it will perform against the powerful lut solution is a mystery to me. Let's first see the code and then benchmark it.
var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200],
reduced = [...myArray.reduce((p,c) => p.set(c,true),new Map()).keys()];
console.log(reduced);
Wow, that was short! But how about the performance? It's beautiful... Since the heavy weight of filter / indexOf is lifted off our shoulders, I can now test an array of 1M random items of positive integers in the range 0..99999, taking the average of 10 consecutive tests. I can say this time it's a real match. See the result for yourself :)
var ranar = [],
red1 = a => Object.keys(a.reduce((p,c) => (p[c] = true,p),{})),
red2 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
avg1 = [],
avg2 = [],
ts = 0,
te = 0,
res1 = [],
res2 = [],
count= 10;
for (var i = 0; i<count; i++){
ranar = (new Array(1000000).fill(true)).map(e => Math.floor(Math.random()*100000));
ts = performance.now();
res1 = red1(ranar);
te = performance.now();
avg1.push(te-ts);
ts = performance.now();
res2 = red2(ranar);
te = performance.now();
avg2.push(te-ts);
}
avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;
console.log("reduce & lut took: " + avg1 + "msec");
console.log("map & spread took: " + avg2 + "msec");
Which one would you use? Well, not so fast! Don't be deceived: Map is playing away from home here. Look: in all of the above cases we fill an array of size n with numbers in a range smaller than n. I mean, we have an array of size 100 and we fill it with random numbers 0..9, so there are definite duplicates and "almost" definitely each number has a duplicate. How about if we fill an array of size 100 with random numbers 0..9999? Let's now see Map playing at home. This time an array of 100K items, but the random number range is 0..100M. We will do 100 consecutive tests to average the results. OK, let's see the bets..! <- no typo
var ranar = [],
red1 = a => Object.keys(a.reduce((p,c) => (p[c] = true,p),{})),
red2 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
avg1 = [],
avg2 = [],
ts = 0,
te = 0,
res1 = [],
res2 = [],
count= 100;
for (var i = 0; i<count; i++){
ranar = (new Array(100000).fill(true)).map(e => Math.floor(Math.random()*100000000));
ts = performance.now();
res1 = red1(ranar);
te = performance.now();
avg1.push(te-ts);
ts = performance.now();
res2 = red2(ranar);
te = performance.now();
avg2.push(te-ts);
}
avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;
console.log("reduce & lut took: " + avg1 + "msec");
console.log("map & spread took: " + avg2 + "msec");
Now this is the spectacular comeback of Map()! Maybe now you can make a better decision when you want to remove the dupes.
Well, OK, we are all happy now. But the lead role always comes last, with some applause. I am sure some of you wonder what the Set object would do. Now that we are open to ES6 and we know Map is the winner of the previous games, let us compare Map with Set in a final. A typical Real Madrid vs Barcelona game this time... or is it? Let's see who will win el clásico :)
var ranar = [],
red1 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
red2 = a => Array.from(new Set(a)),
avg1 = [],
avg2 = [],
ts = 0,
te = 0,
res1 = [],
res2 = [],
count= 100;
for (var i = 0; i<count; i++){
ranar = (new Array(100000).fill(true)).map(e => Math.floor(Math.random()*10000000));
ts = performance.now();
res1 = red1(ranar);
te = performance.now();
avg1.push(te-ts);
ts = performance.now();
res2 = red2(ranar);
te = performance.now();
avg2.push(te-ts);
}
avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;
console.log("map & spread took: " + avg1 + "msec");
console.log("set & A.from took: " + avg2 + "msec");
Wow, man! Well, unexpectedly it didn't turn out to be an el clásico at all. More like Barcelona FC against CA Osasuna :))
Here is a simple answer to the question.
var names = ["Alex","Tony","James","Suzane", "Marie", "Laurence", "Alex", "Suzane", "Marie", "Marie", "James", "Tony", "Alex"];
var uniqueNames = [];
for(var i in names){
if(uniqueNames.indexOf(names[i]) === -1){
uniqueNames.push(names[i]);
}
}
A simple but effective technique is to use the filter method in combination with the callback function(value, index){ return this.indexOf(value) == index }, passing the array itself as filter's thisArg argument.
Code example :
var data = [2,3,4,5,5,4];
var filter = function(value, index){ return this.indexOf(value) == index };
var filteredData = data.filter(filter, data );
document.body.innerHTML = '<pre>' + JSON.stringify(filteredData, null, '\t') + '</pre>';
See also this Fiddle.
So the options are:
let a = [11,22,11,22];
let b = []
b = [ ...new Set(a) ];
// b = [11, 22]
b = Array.from( new Set(a))
// b = [11, 22]
b = a.filter((val,i)=>{
return a.indexOf(val)==i
})
// b = [11, 22]
Here is code that is very simple to understand and works anywhere (even in PhotoshopScript). Check it!
var peoplenames = new Array("Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl");
peoplenames = unique(peoplenames);
alert(peoplenames);
function unique(array){
var len = array.length;
for(var i = 0; i < len; i++) for(var j = i + 1; j < len; j++)
if(array[j] == array[i]){
array.splice(j,1);
j--;
len--;
}
return array;
}
//*result* peoplenames == ["Mike","Matt","Nancy","Adam","Jenny","Carl"]
Here is a simple method without any special libraries or special functions:
name_list = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
get_uniq = name_list.filter(function(val,ind) { return name_list.indexOf(val) == ind; })
console.log("Original name list:"+name_list.length, name_list)
console.log("\n Unique name list:"+get_uniq.length, get_uniq)
Apart from being a simpler, more terse solution than the current answers (minus the future-looking ES6 ones), I perf tested this and it was much faster as well:
var uniqueArray = dupeArray.filter(function(item, i, self){
return self.lastIndexOf(item) == i;
});
One caveat: Array.lastIndexOf() was added in IE9, so if you need to go lower than that, you'll need to look elsewhere.
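One thing to be aware of: lastIndexOf keeps the last occurrence of each value rather than the first, which changes the order of the result compared with the usual indexOf variant. A quick illustration:

```javascript
var arr = ["a", "b", "a", "c", "b"];

// Keep FIRST occurrences -> values appear in the order they were first seen.
var byFirst = arr.filter(function(item, i, self) {
  return self.indexOf(item) === i;
});

// Keep LAST occurrences -> repeated values shift toward the end.
var byLast = arr.filter(function(item, i, self) {
  return self.lastIndexOf(item) === i;
});

console.log(byFirst); // ["a", "b", "c"]
console.log(byLast);  // ["a", "c", "b"]
```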
Generic Functional Approach
Here is a generic and strictly functional approach with ES2015:
// small, reusable auxiliary functions
const apply = f => a => f(a);
const flip = f => b => a => f(a) (b);
const uncurry = f => (a, b) => f(a) (b);
const push = x => xs => (xs.push(x), xs);
const foldl = f => acc => xs => xs.reduce(uncurry(f), acc);
const some = f => xs => xs.some(apply(f));
// the actual de-duplicate function
const uniqueBy = f => foldl(
acc => x => some(f(x)) (acc)
? acc
: push(x) (acc)
) ([]);
// comparators
const eq = y => x => x === y;
// string equality case insensitive :D
const seqCI = y => x => x.toLowerCase() === y.toLowerCase();
// mock data
const xs = [1,2,3,1,2,3,4];
const ys = ["a", "b", "c", "A", "B", "C", "D"];
console.log( uniqueBy(eq) (xs) );
console.log( uniqueBy(seqCI) (ys) );
We can easily derive unique from uniqueBy or use the faster implementation utilizing Sets:
const unique = uniqueBy(eq);
// const unique = xs => Array.from(new Set(xs));
Benefits of this approach:
generic solution by using a separate comparator function
declarative and succinct implementation
reuse of other small, generic functions
Performance Considerations
uniqueBy isn't as fast as an imperative implementation with loops, but it is way more expressive due to its genericity.
If you identify uniqueBy as the cause of a concrete performance penalty in your app, replace it with optimized code. That is, write your code first in a functional, declarative way. Afterwards, provided that you encounter performance issues, try to optimize the code at the locations that are the cause of the problem.
Memory Consumption and Garbage Collection
uniqueBy utilizes mutations (push(x) (acc)) hidden inside its body. It reuses the accumulator instead of throwing it away after each iteration. This reduces memory consumption and GC pressure. Since this side effect is wrapped inside the function, everything outside remains pure.
var newArray = [];
for (var i = 0; i < originalArray.length; i++) {
if (!newArray.includes(originalArray[i])) {
newArray.push(originalArray[i]);
}
}
The following script returns a new array containing only unique values. It works on string and numbers. No requirement for additional libraries only vanilla JS.
Browser support:
Basic support: Chrome (yes), Firefox 1.5 (Gecko 1.8), Internet Explorer 9, Opera (yes), Safari (yes)
https://jsfiddle.net/fzmcgcxv/3/
var duplicates = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl","Mike","Mike","Nancy","Carl"];
var unique = duplicates.filter(function(elem, pos) {
return duplicates.indexOf(elem) == pos;
});
alert(unique);
If by any chance you were using
D3.js
You could do
d3.set(["foo", "bar", "foo", "baz"]).values() ==> ["foo", "bar", "baz"]
https://github.com/mbostock/d3/wiki/Arrays#set_values
A slight modification of thg435's excellent answer to use a custom comparator:
function contains(array, obj) {
for (var i = 0; i < array.length; i++) {
if (isEqual(array[i], obj)) return true;
}
return false;
}
//comparator
function isEqual(obj1, obj2) {
if (obj1.name == obj2.name) return true;
return false;
}
function removeDuplicates(ary) {
var arr = [];
return ary.filter(function(x) {
return !contains(arr, x) && arr.push(x);
});
}
$(document).ready(function() {
var arr1=["dog","dog","fish","cat","cat","fish","apple","orange"]
var arr2=["cat","fish","mango","apple"]
var uniquevalue=[];
var seconduniquevalue=[];
var finalarray=[];
$.each(arr1,function(key,value){
if($.inArray (value,uniquevalue) === -1)
{
uniquevalue.push(value)
}
});
$.each(arr2,function(key,value){
if($.inArray (value,seconduniquevalue) === -1)
{
seconduniquevalue.push(value)
}
});
$.each(uniquevalue,function(ikey,ivalue){
$.each(seconduniquevalue,function(ukey,uvalue){
if( ivalue == uvalue)
{
finalarray.push(ivalue);
}
});
});
alert(finalarray);
});
https://jsfiddle.net/2w0k5tz8/
function remove_duplicates(array_){
var ret_array = new Array();
for (var a = array_.length - 1; a >= 0; a--) {
for (var b = array_.length - 1; b >= 0; b--) {
if(array_[a] == array_[b] && a != b){
delete array_[b];
}
};
if(array_[a] != undefined)
ret_array.push(array_[a]);
};
return ret_array;
}
console.log(remove_duplicates(Array(1,1,1,2,2,2,3,3,3)));
Loop through, remove duplicates, and push the survivors into a clone array, because the array indices are not re-packed after a delete. Loop backward for better performance (your loop won't need to keep re-checking the length of the array).

Is it faster to iterate / loop over an object or an array?

So I've been doing this ProductSearchPage using React and it has a bunch of filter values that I need to set to filter my product list and show results.
Up until now, I've been handling my product list as an array (even though I'm fetching it as an object, I'm converting it to an array) and I've been using lots of map, forEach and A LOT of filter loops over those arrays over and over again.
I'll get a productList, I'll filter based on category
I'll take the new filteredList and filter based on priceFilters
I'll take the new filteredList and filter based on ratingFilter
And so on for brandFilter, featuresFilters, etc.
I began to think that I might be creating a black hole of iterations and that might hurt my performance at some point. I'm doing client side searching and filtering. We're talking about 2k products maximum.
So I wondered if it would be faster to iterate and filter over an object instead of an array. I would be deleting properties and creating new objects along the way.
So I did this snippet to test:
And to my surprise, the results were a lot in favor of the array loops.
Looping object with for...in: 0.31ms
Looping array with forEach: 0.08ms
Looping array with filter: 0.10ms
Looping array with map: 0.09ms
QUESTION
Is this enough evidence that looping through arrays is faster than looping through objects and I should stick to the forEach, map and filter methods?
NOTE: This is really simplified. In my real case, each product is an object with some properties (some of them are nested properties). So my options are to keep the list as an array of object (like I've been doing so far) or I could keep a big object allProducts with each product as a property of that object. Could this change the results?
const myObject = {};
const myArray = []
for (let i=0; i<=2000; i++) {
myObject['prop'+i] = i;
}
for (let k=0; k<=2000; k++) {
myArray[k] = k;
}
const t0 = window.performance.now();
for (const key in myObject) {
if (myObject[key] % 37 === 0) {
//console.log(myObject[key] + ' is a multiple of 37');
}
}
const t1 = window.performance.now();
console.log('Looping object with for...in: ' + (t1 - t0).toFixed(2) + 'ms');
const t2 = window.performance.now();
myArray.forEach((item) => {
if (item % 37 === 0) {
//console.log(item + ' is a multiple of 37');
}
});
const t3 = window.performance.now();
console.log('Looping array with forEach: ' + (t3 - t2).toFixed(2) + 'ms');
const t4 = window.performance.now();
const newArray = myArray.filter((item) => item % 37 === 0);
const t5 = window.performance.now();
console.log('Looping array with filter: ' + (t5 - t4).toFixed(2) + 'ms');
const t6 = window.performance.now();
const newArray2 = myArray.map((item) => item*2);
const t7 = window.performance.now();
console.log('Looping array with map: ' + (t7 - t6).toFixed(2) + 'ms');
I would be deleting properties and creating new objects along the way.
These operations will likely take orders of magnitude longer than the time it takes to just perform the loop.
Unless of course the way you loop also affects the way you create or delete objects/properties, but I assume we're considering a loop that otherwise does identical instructions.
In the vast majority of cases it's a tiny part of the performance budget (like 1 millionth), and the wrong place to start if you want to optimize a complex application. Just run some profiling tools, get an overview of where the application is spending time, and focus on the slowest parts.
Is this enough evidence that looping through arrays is faster than looping through objects and I should stick to the forEach, map and filter methods?
No, because it's a single simplified example. It doesn't tell you anything about how big a chunk of the performance budget it represents. It's probably also different depending on which JS runtime is used. All you can derive from it is that with 2000 iterations it takes at worst 0.31 ms.
I expanded the example a bit by adding a very small amount of extra work inside the loop. This can then be multiplied to see how fast it starts being more significant than just the loop. See the iteration function below. It internally runs identically for both cases.
If the complexity is set to 0 (run extra work 0 times), it performs just like the results posted in the question. Array is 2 to 4 times faster.
However just running this work once, the difference is almost gone (~0.7ms vs ~0.8ms for me). From 2 times and upwards sometimes array wins, sometimes object, but never by a big margin.
So the difference becomes insignificant once you do pretty much anything at all inside the loop.
const myObject = {};
const myArray = []
const iterations = 2000;
for (let i = 0; i < iterations; i++) {
myObject['prop' + i] = i;
myArray[i] = i;
}
let total = 0;
function iteration(a, complexity) {
const x = {};
for (let i = 0; i < complexity; i++) {
// Do some simple instructions
const rand = Math.random();
x[`${a}~${i}`] = rand;
total += rand;
}
return x;
}
function loopObject(complexity) {
const results = [];
for (const key in myObject) {
results.push(iteration(myObject[key], complexity));
}
return results;
}
function loopArray(complexity) {
const results = [];
myArray.forEach((item) => {
results.push(iteration(item, complexity))
});
return results;
}
const samples = 10;
const decimals = 6;
function test(complexity) {
console.log(`COMPLEXITY ${complexity} (${samples} samples)`)
let arrayTimes = [];
let objectTimes = [];
for (let i = 0; i < samples; i++) {
const tA = performance.now();
const resultArray = loopArray(complexity);
arrayTimes.push(performance.now() - tA);
const tO = performance.now();
const resultObject = loopObject(complexity);
objectTimes.push(performance.now() - tO);
}
const arraySum = arrayTimes.reduce((p, c) => p + c, 0);
const objectSum = objectTimes.reduce((p, c) => p + c, 0);
const arrayWins = arraySum < objectSum;
console.log(
`ARRAY ${arrayWins ? ' (winner)' : ''}
avg: ${(arraySum / samples).toFixed(decimals)} min: ${Math.min(...arrayTimes).toFixed(decimals)} max: ${Math.max(...arrayTimes).toFixed(decimals)}`);
console.log(
`OBJECT ${!arrayWins ? ' (winner)' : ''}
avg: ${(objectSum / samples).toFixed(decimals)} min: ${Math.min(...objectTimes).toFixed(decimals)} max: ${Math.max(...objectTimes).toFixed(decimals)}`);
}
const complexities = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 50, 100];
complexities.forEach(test);

Deleting duplicate nodes in singly linked list in Javascript [duplicate]

This question already has answers here:
Get all unique values in a JavaScript array (remove duplicates)
(91 answers)
Closed 5 years ago.
I have a very simple JavaScript array that may or may not contain duplicates.
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
I need to remove the duplicates and put the unique values in a new array.
I could point to all the code that I've tried but I think it's useless because they don't work. I accept jQuery solutions too.
Similar question:
Get all non-unique values (i.e.: duplicate/more than one occurrence) in an array
TL;DR
Using the Set constructor and the spread syntax:
uniq = [...new Set(array)];
(Note that var uniq will be an array: new Set() turns the input into a set, but [...] turns it back into an array again.)
"Smart" but naïve way
uniqueArray = a.filter(function(item, pos) {
return a.indexOf(item) == pos;
})
Basically, we iterate over the array and, for each element, check if the first position of this element in the array is equal to the current position. Obviously, these two positions are different for duplicate elements.
Using the 3rd ("this array") parameter of the filter callback we can avoid a closure of the array variable:
uniqueArray = a.filter(function(item, pos, self) {
return self.indexOf(item) == pos;
})
Although concise, this algorithm is not particularly efficient for large arrays (quadratic time).
Hashtables to the rescue
function uniq(a) {
var seen = {};
return a.filter(function(item) {
return seen.hasOwnProperty(item) ? false : (seen[item] = true);
});
}
This is how it's usually done. The idea is to place each element in a hashtable and then check for its presence instantly. This gives us linear time, but has at least two drawbacks:
since hash keys can only be strings or symbols in JavaScript, this code doesn't distinguish numbers and "numeric strings". That is, uniq([1,"1"]) will return just [1]
for the same reason, all objects will be considered equal: uniq([{foo:1},{foo:2}]) will return just [{foo:1}].
That said, if your arrays contain only primitives and you don't care about types (e.g. it's always numbers), this solution is optimal.
The best from two worlds
A universal solution combines both approaches: it uses hash lookups for primitives and linear search for objects.
function uniq(a) {
var prims = {"boolean":{}, "number":{}, "string":{}}, objs = [];
return a.filter(function(item) {
var type = typeof item;
if(type in prims)
return prims[type].hasOwnProperty(item) ? false : (prims[type][item] = true);
else
return objs.indexOf(item) >= 0 ? false : objs.push(item);
});
}
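With this hybrid, mixed-type input behaves as you'd hope: 1 and "1" stay distinct, and so do structurally identical objects (uniq is repeated here so the example runs standalone):

```javascript
// Hash lookups for primitives, linear search for objects.
function uniq(a) {
  var prims = {"boolean": {}, "number": {}, "string": {}}, objs = [];
  return a.filter(function(item) {
    var type = typeof item;
    if (type in prims)
      return prims[type].hasOwnProperty(item) ? false : (prims[type][item] = true);
    else
      return objs.indexOf(item) >= 0 ? false : objs.push(item);
  });
}

var o = { foo: 1 };
var res = uniq([1, "1", 1, o, { foo: 1 }, o]);
console.log(res); // [1, "1", {foo: 1}, {foo: 1}] -- types and object identity both respected
```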
sort | uniq
Another option is to sort the array first, and then remove each element equal to the preceding one:
function uniq(a) {
return a.sort().filter(function(item, pos, ary) {
return !pos || item != ary[pos - 1];
});
}
Again, this doesn't work with objects (because all objects are equal for sort). Additionally, we silently change the original array as a side effect - not good! However, if your input is already sorted, this is the way to go (just remove sort from the above).
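If mutating the input is a concern, copying before sorting fixes it; a non-mutating sketch of the same idea (uniqSortedCopy is a made-up name):

```javascript
function uniqSortedCopy(a) {
  // slice() copies first, so the caller's array is left untouched.
  // Note: the default sort compares as strings, which is fine for this input.
  return a.slice().sort().filter(function(item, pos, ary) {
    return !pos || item != ary[pos - 1];
  });
}

var input = [3, 1, 2, 3, 1];
var deduped = uniqSortedCopy(input);
console.log(deduped); // [1, 2, 3]
console.log(input);   // [3, 1, 2, 3, 1] -- unchanged
```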
Unique by...
Sometimes it's desired to uniquify a list based on some criteria other than just equality, for example, to filter out objects that are different, but share some property. This can be done elegantly by passing a callback. This "key" callback is applied to each element, and elements with equal "keys" are removed. Since key is expected to return a primitive, hash table will work fine here:
function uniqBy(a, key) {
var seen = {};
return a.filter(function(item) {
var k = key(item);
return seen.hasOwnProperty(k) ? false : (seen[k] = true);
})
}
A particularly useful key() is JSON.stringify which will remove objects that are physically different, but "look" the same:
a = [[1,2,3], [4,5,6], [1,2,3]]
b = uniqBy(a, JSON.stringify)
console.log(b) // [[1,2,3], [4,5,6]]
If the key is not primitive, you have to resort to the linear search:
function uniqBy(a, key) {
var index = [];
return a.filter(function (item) {
var k = key(item);
return index.indexOf(k) >= 0 ? false : index.push(k);
});
}
In ES6 you can use a Set:
function uniqBy(a, key) {
let seen = new Set();
return a.filter(item => {
let k = key(item);
return seen.has(k) ? false : seen.add(k);
});
}
or a Map:
function uniqBy(a, key) {
return [
...new Map(
a.map(x => [key(x), x])
).values()
]
}
which both also work with non-primitive keys.
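For instance, the Map variant applied to a list of objects keyed by a property (sample data invented here):

```javascript
// Map-based uniqBy: the last value per key wins, keys keep first-seen order
function uniqBy(a, key) {
  return [...new Map(a.map(x => [key(x), x])).values()];
}

const people = [
  { id: 1, name: "Ann" },
  { id: 2, name: "Bob" },
  { id: 1, name: "Ann (duplicate)" },
];

const result = uniqBy(people, p => p.id);
// keeps the last object per id: [{id: 1, name: "Ann (duplicate)"}, {id: 2, name: "Bob"}]
```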
First or last?
When removing objects by a key, you might want to keep the first of "equal" objects or the last one.
Use the Set variant above to keep the first, and the Map to keep the last:
function uniqByKeepFirst(a, key) {
let seen = new Set();
return a.filter(item => {
let k = key(item);
return seen.has(k) ? false : seen.add(k);
});
}
function uniqByKeepLast(a, key) {
return [
...new Map(
a.map(x => [key(x), x])
).values()
]
}
//
data = [
{a:1, u:1},
{a:2, u:2},
{a:3, u:3},
{a:4, u:1},
{a:5, u:2},
{a:6, u:3},
];
console.log(uniqByKeepFirst(data, it => it.u))
console.log(uniqByKeepLast(data, it => it.u))
Libraries
Both underscore and Lo-Dash provide uniq methods. Their algorithms are basically similar to the first snippet above and boil down to this:
var result = [];
a.forEach(function(item) {
if(result.indexOf(item) < 0) {
result.push(item);
}
});
This is quadratic, but there are nice additional goodies, like wrapping native indexOf, ability to uniqify by a key (iteratee in their parlance), and optimizations for already sorted arrays.
If you're using jQuery and can't stand anything without a dollar before it, it goes like this:
$.uniqArray = function(a) {
return $.grep(a, function(item, pos) {
return $.inArray(item, a) === pos;
});
}
which is, again, a variation of the first snippet.
Performance
Function calls are expensive in JavaScript, therefore the above solutions, as concise as they are, are not particularly efficient. For maximal performance, replace filter with a loop and get rid of other function calls:
function uniq_fast(a) {
var seen = {};
var out = [];
var len = a.length;
var j = 0;
for(var i = 0; i < len; i++) {
var item = a[i];
if(seen[item] !== 1) {
seen[item] = 1;
out[j++] = item;
}
}
return out;
}
This chunk of ugly code does the same as snippet #3 above, but an order of magnitude faster (as of 2017 it's only twice as fast - JS core folks are doing a great job!):
function uniq(a) {
var seen = {};
return a.filter(function(item) {
return seen.hasOwnProperty(item) ? false : (seen[item] = true);
});
}
function uniq_fast(a) {
var seen = {};
var out = [];
var len = a.length;
var j = 0;
for(var i = 0; i < len; i++) {
var item = a[i];
if(seen[item] !== 1) {
seen[item] = 1;
out[j++] = item;
}
}
return out;
}
/////
var r = [0,1,2,3,4,5,6,7,8,9],
a = [],
LEN = 1000,
LOOPS = 1000;
while(LEN--)
a = a.concat(r);
var d = new Date();
for(var i = 0; i < LOOPS; i++)
uniq(a);
document.write('<br>uniq, ms/loop: ' + (new Date() - d)/LOOPS)
var d = new Date();
for(var i = 0; i < LOOPS; i++)
uniq_fast(a);
document.write('<br>uniq_fast, ms/loop: ' + (new Date() - d)/LOOPS)
ES6
ES6 provides the Set object, which makes things a whole lot easier:
function uniq(a) {
return Array.from(new Set(a));
}
or
let uniq = a => [...new Set(a)];
Note that, unlike in Python, ES6 sets are iterated in insertion order, so this code preserves the order of the original array.
However, if you need an array with unique elements, why not use sets right from the beginning?
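A quick check of the insertion-order claim, plus the "use a Set from the start" idea (a minimal sketch):

```javascript
const uniq = a => [...new Set(a)];

// first occurrence wins, original order preserved
const result = uniq([3, 1, 3, 2, 1]);
// → [3, 1, 2]

// if unique elements are what you need anyway, collect into a Set up front
const seen = new Set();
for (const x of [3, 1, 3, 2, 1]) seen.add(x); // duplicates ignored automatically
// seen.size === 3
```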
Generators
A "lazy", generator-based version of uniq can be built on the same basis:
take the next value from the argument
if it's been seen already, skip it
otherwise, yield it and add it to the set of already seen values
function* uniqIter(a) {
let seen = new Set();
for (let x of a) {
if (!seen.has(x)) {
seen.add(x);
yield x;
}
}
}
// example:
function* randomsBelow(limit) {
while (1)
yield Math.floor(Math.random() * limit);
}
// note that randomsBelow is endless
count = 20;
limit = 30;
for (let r of uniqIter(randomsBelow(limit))) {
console.log(r);
if (--count === 0)
break
}
// exercise for the reader: what happens if we set `limit` less than `count` and why
Quick and dirty using jQuery:
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
var uniqueNames = [];
$.each(names, function(i, el){
if($.inArray(el, uniqueNames) === -1) uniqueNames.push(el);
});
Got tired of seeing all the bad examples with for-loops or jQuery. JavaScript has the perfect tools for this nowadays: sort, map and reduce.
Uniq reduce while keeping existing order
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
var uniq = names.reduce(function(a,b){
if (a.indexOf(b) < 0 ) a.push(b);
return a;
},[]);
console.log(uniq, names) // [ 'Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Carl' ]
// one liner
return names.reduce(function(a,b){if(a.indexOf(b)<0)a.push(b);return a;},[]);
Faster uniq with sorting
There are probably faster ways but this one is pretty decent.
var uniq = names.slice() // slice makes copy of array before sorting it
.sort(function(a,b){
return a > b ? 1 : a < b ? -1 : 0; // a comparator must return a number, not a boolean
})
.reduce(function(a,b){
if (a.slice(-1)[0] !== b) a.push(b); // slice(-1)[0] means last item in array without removing it (like .pop())
return a;
},[]); // this empty array becomes the starting value for a
// one liner
return names.slice().sort(function(a,b){return a > b ? 1 : a < b ? -1 : 0}).reduce(function(a,b){if (a.slice(-1)[0] !== b) a.push(b);return a;},[]);
Update 2015: ES6 version:
In ES6 you have Sets and Spread which makes it very easy and performant to remove all duplicates:
var uniq = [ ...new Set(names) ]; // [ 'Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Carl' ]
Sort based on occurrence:
Someone asked about ordering the results based on how many unique names there are:
var names = ['Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Nancy', 'Carl']
var uniq = names
.map((name) => {
return {count: 1, name: name}
})
.reduce((a, b) => {
a[b.name] = (a[b.name] || 0) + b.count
return a
}, {})
var sorted = Object.keys(uniq).sort((a, b) => uniq[b] - uniq[a]) // numeric comparator: most frequent first
console.log(sorted)
Vanilla JS: Remove duplicates using an Object like a Set
You can always try putting it into an object, and then iterating through its keys:
function remove_duplicates(arr) {
var obj = {};
var ret_arr = [];
for (var i = 0; i < arr.length; i++) {
obj[arr[i]] = true;
}
for (var key in obj) {
ret_arr.push(key);
}
return ret_arr;
}
Vanilla JS: Remove duplicates by tracking already seen values (order-safe)
Or, for an order-safe version, use an object to store all previously seen values, and check values against it before adding to an array.
function remove_duplicates_safe(arr) {
var seen = {};
var ret_arr = [];
for (var i = 0; i < arr.length; i++) {
if (!Object.prototype.hasOwnProperty.call(seen, arr[i])) { // hasOwnProperty avoids false hits on inherited keys like "toString"
ret_arr.push(arr[i]);
seen[arr[i]] = true;
}
}
return ret_arr;
}
ECMAScript 6: Use the new Set data structure (order-safe)
ECMAScript 6 adds the new Set data structure, which lets you store values of any type. Set.prototype.values returns elements in insertion order.
function remove_duplicates_es6(arr) {
let s = new Set(arr);
let it = s.values();
return Array.from(it);
}
Example usage:
a = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
b = remove_duplicates(a);
// b:
// ["Adam", "Carl", "Jenny", "Matt", "Mike", "Nancy"]
c = remove_duplicates_safe(a);
// c:
// ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]
d = remove_duplicates_es6(a);
// d:
// ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]
A single-line version using the array .filter and .indexOf functions:
arr = arr.filter(function (value, index, array) {
return array.indexOf(value) === index;
});
Use Underscore.js
It's a library with a host of functions for manipulating arrays.
It's the tie to go along with jQuery's tux, and Backbone.js's
suspenders.
_.uniq
_.uniq(array, [isSorted], [iterator]) Alias: unique
Produces a duplicate-free version of the array, using === to test object
equality. If you know in advance that the array is sorted, passing
true for isSorted will run a much faster algorithm. If you want to
compute unique items based on a transformation, pass an iterator
function.
Example
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
alert(_.uniq(names, false));
Note: Lo-Dash (an underscore competitor) also offers a comparable .uniq implementation.
One line:
let names = ['Mike','Matt','Nancy','Adam','Jenny','Nancy','Carl', 'Nancy'];
let dup = [...new Set(names)];
console.log(dup);
You can simply do it in JavaScript, with the help of the second - index - parameter of the filter method:
var a = [2,3,4,5,5,4];
a.filter(function(value, index){ return a.indexOf(value) == index });
or in short hand
a.filter((v,i) => a.indexOf(v) == i)
use Array.filter() like this
var actualArr = ['Apple', 'Apple', 'Banana', 'Mango', 'Strawberry', 'Banana'];
console.log('Actual Array: ' + actualArr);
var filteredArr = actualArr.filter(function(item, index) {
return actualArr.indexOf(item) == index; // return a boolean, not the item, so falsy values aren't dropped
});
console.log('Filtered Array: ' + filteredArr);
this can be made shorter in ES6 to
actualArr.filter((item,index,self) => self.indexOf(item)==index);
Here is a nice explanation of Array.filter()
The most concise way to remove duplicates from an array using native JavaScript functions is to use a sequence like below:
vals.sort().reduce(function(a, b){ if (b != a[0]) a.unshift(b); return a }, [])
There's no need for slice or indexOf within the reduce function, as I've seen in other examples! It makes sense to use it along with a filter function though:
vals.filter(function(v, i, a){ return i == a.indexOf(v) })
Yet another ES6(2015) way of doing this that already works on a few browsers is:
Array.from(new Set(vals))
or even using the spread operator:
[...new Set(vals)]
cheers!
The top answers have complexity of O(n²), but this can be done with just O(n) by using an object as a hash:
function getDistinctArray(arr) {
var dups = {};
return arr.filter(function(el) {
var hash = el.valueOf();
var isDup = dups[hash];
dups[hash] = true;
return !isDup;
});
}
This will work for strings, numbers, and dates. If your array contains objects, the above solution won't work because when coerced to a string, they will all have a value of "[object Object]" (or something similar) and that isn't suitable as a lookup value. You can get an O(n) implementation for objects by setting a flag on the object itself:
function getDistinctObjArray(arr) {
var distinctArr = arr.filter(function(el) {
var isDup = el.inArray;
el.inArray = true;
return !isDup;
});
distinctArr.forEach(function(el) {
delete el.inArray;
});
return distinctArr;
}
2019 edit: Modern versions of JavaScript make this a much easier problem to solve. Using Set will work, regardless of whether your array contains objects, strings, numbers, or any other type.
function getDistinctArray(arr) {
return [...new Set(arr)];
}
The implementation is so simple, defining a function is no longer warranted.
Simplest one I've run into so far, in ES6:
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl", "Mike", "Nancy"]
var noDupe = Array.from(new Set(names))
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set
In ECMAScript 6 (aka ECMAScript 2015), Set can be used to filter out duplicates. Then it can be converted back to an array using the spread operator.
var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"],
unique = [...new Set(names)];
Solution 1
Array.prototype.unique = function() {
var a = [];
for (var i = 0; i < this.length; i++) {
var current = this[i];
if (a.indexOf(current) < 0) a.push(current);
}
return a;
}
Solution 2 (using Set)
Array.prototype.unique = function() {
return Array.from(new Set(this));
}
Test
var x=[1,2,3,3,2,1];
x.unique() //[1,2,3]
Performance
When I tested both implementation (with and without Set) for performance in chrome, I found that the one with Set is much much faster!
Array.prototype.unique1 = function() {
var a = [];
for (var i = 0; i < this.length; i++) {
var current = this[i];
if (a.indexOf(current) < 0) a.push(current);
}
return a;
}
Array.prototype.unique2 = function() {
return Array.from(new Set(this));
}
var x=[];
for(var i=0;i<10000;i++){
x.push("x"+i);x.push("x"+(i+1));
}
console.time("unique1");
console.log(x.unique1());
console.timeEnd("unique1");
console.time("unique2");
console.log(x.unique2());
console.timeEnd("unique2");
Go for this one:
var uniqueArray = duplicateArray.filter(function(elem, pos) {
return duplicateArray.indexOf(elem) == pos;
});
Now uniqueArray contains no duplicates.
The following is more than 80% faster than the jQuery method listed (see tests below).
It is an answer from a similar question a few years ago. If I come across the person who originally proposed it I will post credit.
Pure JS.
function dedupe(array) { // wrapped in a function: a bare `return` is a syntax error at top level
var temp = {};
for (var i = 0; i < array.length; i++)
temp[array[i]] = true;
var r = [];
for (var k in temp)
r.push(k);
return r;
}
My test case comparison:
http://jsperf.com/remove-duplicate-array-tests
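One caveat worth noting with this technique: object keys are always strings, so every value comes back coerced. A small sketch (the wrapper function name is mine):

```javascript
// same temp-object technique as above, wrapped for demonstration
function dedupeViaObjectKeys(array) {
  var temp = {};
  for (var i = 0; i < array.length; i++)
    temp[array[i]] = true;
  var r = [];
  for (var k in temp)
    r.push(k);
  return r;
}

var result = dedupeViaObjectKeys([1, 2, 2, 3]);
// result is ["1", "2", "3"] — strings, not numbers, because object keys are strings
```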
I had done a detailed comparison of dupes removal on another question, but having noticed that this is the real place for it, I just wanted to share it here as well.
I believe this is the best way to do this
var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200],
reduced = Object.keys(myArray.reduce((p,c) => (p[c] = true,p),{}));
console.log(reduced);
OK... even though this one is O(n) and the others are O(n²), I was curious to see a benchmark comparison between this reduce / lookup table and the filter/indexOf combo (I chose Jeetendra's very nice implementation, https://stackoverflow.com/a/37441144/4543207). I prepared a 100K-item array filled with random positive integers in the range 0-9999 and removed the duplicates. I repeated the test 10 times, and the averaged results show that they are no match in performance.
In firefox v47 reduce & lut : 14.85ms vs filter & indexOf : 2836ms
In chrome v51 reduce & lut : 23.90ms vs filter & indexOf : 1066ms
Well, OK, so far so good. But let's do it properly this time, in the ES6 style. It looks so cool..! But as of now, how it will perform against the powerful lut solution is a mystery to me. Let's first see the code and then benchmark it.
var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200],
reduced = [...myArray.reduce((p,c) => p.set(c,true),new Map()).keys()];
console.log(reduced);
Wow, that was short..! But how about the performance? It's beautiful... Since the heavy weight of filter / indexOf is lifted off our shoulders, I can now test an array of 1M random items of positive integers in the range 0..99999, to get an average from 10 consecutive tests. I can say this time it's a real match. See the result for yourself :)
var ranar = [],
red1 = a => Object.keys(a.reduce((p,c) => (p[c] = true,p),{})),
red2 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
avg1 = [],
avg2 = [],
ts = 0,
te = 0,
res1 = [],
res2 = [],
count= 10;
for (var i = 0; i<count; i++){
ranar = (new Array(1000000).fill(true)).map(e => Math.floor(Math.random()*100000));
ts = performance.now();
res1 = red1(ranar);
te = performance.now();
avg1.push(te-ts);
ts = performance.now();
res2 = red2(ranar);
te = performance.now();
avg2.push(te-ts);
}
avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;
console.log("reduce & lut took: " + avg1 + "msec");
console.log("map & spread took: " + avg2 + "msec");
Which one would you use..? Well, not so fast...! Don't be deceived: Map is playing away from home here. Now look... in all of the above cases we fill an array of size n with numbers in a range smaller than n. I mean, we have an array of size 100 and we fill it with random numbers 0..9, so there are definite duplicates and "almost" definitely each number has a duplicate. How about if we fill an array of size 100 with random numbers 0..9999? Let's now see Map playing at home. This time: an array of 100K items, but a random number range of 0..100M. We will do 100 consecutive tests to average the results. OK, let's see the bets..! <- no typo
var ranar = [],
red1 = a => Object.keys(a.reduce((p,c) => (p[c] = true,p),{})),
red2 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
avg1 = [],
avg2 = [],
ts = 0,
te = 0,
res1 = [],
res2 = [],
count= 100;
for (var i = 0; i<count; i++){
ranar = (new Array(100000).fill(true)).map(e => Math.floor(Math.random()*100000000));
ts = performance.now();
res1 = red1(ranar);
te = performance.now();
avg1.push(te-ts);
ts = performance.now();
res2 = red2(ranar);
te = performance.now();
avg2.push(te-ts);
}
avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;
console.log("reduce & lut took: " + avg1 + "msec");
console.log("map & spread took: " + avg2 + "msec");
Now this is the spectacular comeback of Map()..! Maybe now you can make a better decision when you want to remove the dupes.
Well, OK, we are all happy now. But the lead role always comes last, with some applause. I am sure some of you wonder what the Set object would do. Now that we are open to ES6 and we know Map is the winner of the previous games, let us compare Map with Set as a final. A typical Real Madrid vs Barcelona game this time... or is it? Let's see who will win the el clasico :)
var ranar = [],
red1 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
red2 = a => Array.from(new Set(a)),
avg1 = [],
avg2 = [],
ts = 0,
te = 0,
res1 = [],
res2 = [],
count= 100;
for (var i = 0; i<count; i++){
ranar = (new Array(100000).fill(true)).map(e => Math.floor(Math.random()*10000000));
ts = performance.now();
res1 = red1(ranar);
te = performance.now();
avg1.push(te-ts);
ts = performance.now();
res2 = red2(ranar);
te = performance.now();
avg2.push(te-ts);
}
avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;
console.log("map & spread took: " + avg1 + "msec");
console.log("set & A.from took: " + avg2 + "msec");
Wow.. man..! Well unexpectedly it didn't turn out to be an el classico at all. More like Barcelona FC against CA Osasuna :))
Here is a simple answer to the question.
var names = ["Alex","Tony","James","Suzane", "Marie", "Laurence", "Alex", "Suzane", "Marie", "Marie", "James", "Tony", "Alex"];
var uniqueNames = [];
for(var i in names){
if(uniqueNames.indexOf(names[i]) === -1){
uniqueNames.push(names[i]);
}
}
A simple but effective technique, is to use the filter method in combination with the filter function(value, index){ return this.indexOf(value) == index }.
Code example :
var data = [2,3,4,5,5,4];
var filter = function(value, index){ return this.indexOf(value) == index };
var filteredData = data.filter(filter, data );
document.body.innerHTML = '<pre>' + JSON.stringify(filteredData, null, '\t') + '</pre>';
See also this Fiddle.
So the options are:
let a = [11,22,11,22];
let b = []
b = [ ...new Set(a) ];
// b = [11, 22]
b = Array.from( new Set(a))
// b = [11, 22]
b = a.filter((val,i)=>{
return a.indexOf(val)==i
})
// b = [11, 22]
Here is code that is very simple to understand and works anywhere (even in PhotoshopScript). Check it!
var peoplenames = new Array("Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl");
peoplenames = unique(peoplenames);
alert(peoplenames);
function unique(array){
var len = array.length;
for(var i = 0; i < len; i++) for(var j = i + 1; j < len; j++)
if(array[j] == array[i]){
array.splice(j,1);
j--;
len--;
}
return array;
}
//*result* peoplenames == ["Mike","Matt","Nancy","Adam","Jenny","Carl"]
Here is a simple method without any special libraries or special functions:
name_list = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
get_uniq = name_list.filter(function(val,ind) { return name_list.indexOf(val) == ind; })
console.log("Original name list:"+name_list.length, name_list)
console.log("\n Unique name list:"+get_uniq.length, get_uniq)
Apart from being a simpler, more terse solution than the current answers (minus the future-looking ES6 ones), I perf tested this and it was much faster as well:
var uniqueArray = dupeArray.filter(function(item, i, self){
return self.lastIndexOf(item) == i;
});
One caveat: Array.lastIndexOf() was added in IE9, so if you need to go lower than that, you'll need to look elsewhere.
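Another behavioral difference worth noting: unlike the indexOf-based filters above, this keeps the last occurrence of each duplicate, so the order can shift (the wrapper name below is mine):

```javascript
// lastIndexOf keeps each element's final occurrence rather than its first
const uniqueLast = arr =>
  arr.filter((item, i, self) => self.lastIndexOf(item) === i);

const result = uniqueLast(["a", "b", "a", "c"]);
// → ["b", "a", "c"] — the first "a" is dropped, the last one kept
```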
Generic Functional Approach
Here is a generic and strictly functional approach with ES2015:
// small, reusable auxiliary functions
const apply = f => a => f(a);
const flip = f => b => a => f(a) (b);
const uncurry = f => (a, b) => f(a) (b);
const push = x => xs => (xs.push(x), xs);
const foldl = f => acc => xs => xs.reduce(uncurry(f), acc);
const some = f => xs => xs.some(apply(f));
// the actual de-duplicate function
const uniqueBy = f => foldl(
acc => x => some(f(x)) (acc)
? acc
: push(x) (acc)
) ([]);
// comparators
const eq = y => x => x === y;
// string equality case insensitive :D
const seqCI = y => x => x.toLowerCase() === y.toLowerCase();
// mock data
const xs = [1,2,3,1,2,3,4];
const ys = ["a", "b", "c", "A", "B", "C", "D"];
console.log( uniqueBy(eq) (xs) );
console.log( uniqueBy(seqCI) (ys) );
We can easily derive unique from uniqueBy or use the faster implementation utilizing Sets:
const unique = uniqueBy(eq);
// const unique = xs => Array.from(new Set(xs));
Benefits of this approach:
generic solution by using a separate comparator function
declarative and succinct implementation
reuse of other small, generic functions
Performance Considerations
uniqueBy isn't as fast as an imperative implementation with loops, but it is way more expressive due to its genericity.
If you identify uniqueBy as the cause of a concrete performance penalty in your app, replace it with optimized code. That is, write your code first in a functional, declarative way. Afterwards, provided that you encounter performance issues, try to optimize the code at the locations that are the cause of the problem.
Memory Consumption and Garbage Collection
uniqueBy utilizes mutations (push(x) (acc)) hidden inside its body. It reuses the accumulator instead of throwing it away after each iteration. This reduces memory consumption and GC pressure. Since this side effect is wrapped inside the function, everything outside remains pure.
var newArray = [];
for (var i = 0; i < originalArray.length; i++) {
if (!newArray.includes(originalArray[i])) {
newArray.push(originalArray[i]);
}
}
The following script returns a new array containing only unique values. It works on strings and numbers. No additional libraries required, only vanilla JS.
Browser support:
Feature        Chrome  Firefox (Gecko)  Internet Explorer  Opera  Safari
Basic support  (Yes)   1.5 (1.8)        9                  (Yes)  (Yes)
https://jsfiddle.net/fzmcgcxv/3/
var duplicates = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl","Mike","Mike","Nancy","Carl"];
var unique = duplicates.filter(function(elem, pos) {
return duplicates.indexOf(elem) == pos;
});
alert(unique);
If by any chance you were using
D3.js
You could do
d3.set(["foo", "bar", "foo", "baz"]).values() ==> ["foo", "bar", "baz"]
https://github.com/mbostock/d3/wiki/Arrays#set_values
A slight modification of thg435's excellent answer to use a custom comparator:
function contains(array, obj) {
for (var i = 0; i < array.length; i++) {
if (isEqual(array[i], obj)) return true;
}
return false;
}
//comparator
function isEqual(obj1, obj2) {
if (obj1.name == obj2.name) return true;
return false;
}
function removeDuplicates(ary) {
var arr = [];
return ary.filter(function(x) {
return !contains(arr, x) && arr.push(x);
});
}
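Hypothetical usage (the sample data is invented), showing the name-based comparator in action:

```javascript
function contains(array, obj) {
  for (var i = 0; i < array.length; i++) {
    if (isEqual(array[i], obj)) return true;
  }
  return false;
}

// comparator: two objects are "equal" when their names match
function isEqual(obj1, obj2) {
  return obj1.name == obj2.name;
}

function removeDuplicates(ary) {
  var arr = [];
  return ary.filter(function(x) {
    return !contains(arr, x) && arr.push(x);
  });
}

var people = [{ name: "Ann" }, { name: "Bob" }, { name: "Ann" }];
var result = removeDuplicates(people);
// → [{ name: "Ann" }, { name: "Bob" }]
```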
$(document).ready(function() {
var arr1=["dog","dog","fish","cat","cat","fish","apple","orange"]
var arr2=["cat","fish","mango","apple"]
var uniquevalue=[];
var seconduniquevalue=[];
var finalarray=[];
$.each(arr1,function(key,value){
if($.inArray (value,uniquevalue) === -1)
{
uniquevalue.push(value)
}
});
$.each(arr2,function(key,value){
if($.inArray (value,seconduniquevalue) === -1)
{
seconduniquevalue.push(value)
}
});
$.each(uniquevalue,function(ikey,ivalue){
$.each(seconduniquevalue,function(ukey,uvalue){
if( ivalue == uvalue)
{
finalarray.push(ivalue);
}
});
});
alert(finalarray);
});
https://jsfiddle.net/2w0k5tz8/
function remove_duplicates(array_){
var ret_array = new Array();
for (var a = array_.length - 1; a >= 0; a--) {
for (var b = array_.length - 1; b >= 0; b--) {
if(array_[a] == array_[b] && a != b){
delete array_[b];
}
};
if(array_[a] != undefined)
ret_array.push(array_[a]);
};
return ret_array;
}
console.log(remove_duplicates(Array(1,1,1,2,2,2,3,3,3)));
Loop through, remove duplicates, and push the survivors into a clone array, because the original array's indices shift as elements are deleted.
Loop backward for better performance (your loop won't need to keep re-checking the length of your array).