Recursion that shows all the possible outcomes - JavaScript

I'm using JavaScript for my homework. The question requires finding all the combinations of room and meeting allocations with N rooms and m meetings.
For example, if I have 5 rooms and need to allocate to 3 meetings, the outcome will be something like
[1,1,3],[1,2,2],[1,3,1],[2,1,2],[2,2,1] and [3,1,1].
I need to use recursion to solve this question. But my recursion only gives me one outcome rather than all the outcomes.
function partition(num, m) {
    if (m == 1) {
        return num
    } else {
        for (i = 1; i < num; i++) {
            return i + "," + partition(num - i, m - 1)
        }
    }
}
console.log(partition(5, 3))
How to list all the combinations with recursion? I've been struggling with this for a long time. Thank you very much.

Some issues:
Your code uses a single global variable called i. This is not right, as the loop iterations inside the recursion will change the i that the outer loops are using. Always declare your variables in a local scope, so: for (let i ...)
Your function should not build a string through concatenation (+) and return a string, nor should it return a number in the base case, but it should return an array of arrays, just like you have depicted in the example output.
So the base case should return [[num]]. The outer array has just one element, which represents that there is just one partitioning possible, and the inner array specifies what that partitioning is: it just has one room.
Since the recursive call returns an array of arrays, you should iterate that recursive result, and add the current room assignment to form new combinations.
The iteration can stop a bit earlier than you have foreseen, since there must be enough "value" in num - i to fill up the remaining rooms with at least 1.
Here is a solution:
function partition(num, m) {
    if (m == 1) {
        return [[num]]; // return an array of arrays
    } else {
        let collect = []; // Prepare array for collecting the partitions
        // Quit loop when not enough value to distribute in remaining rooms
        for (let i = 1; i <= num - m + 1; i++) {
            // Iterate the arrays that come back from recursion...
            for (let arr of partition(num - i, m - 1)) {
                collect.push([i, ...arr]); // ... and extend them.
            }
        }
        return collect;
    }
}
console.log(partition(5, 3));

It seems like you already know how to generate the sequences, so just describe the rules you used in your head, then work the program backwards from there. Below we describe how to generate fixed-size combinations of size k from any array t:
1. If the amount to choose, k, is zero, yield the empty combination, ().
2. (inductive) k is at least one. If the array t is empty, there is nothing left to choose; stop iteration.
3. (inductive) k is at least one and the array has at least one element. Choose the first element of t and add it to each combination of the sub-problem (t.slice(1), k - 1). Then, without choosing this element, yield from the sub-problem (t.slice(1), k).
function* choosek(t, k) {
    if (k == 0)
        return (yield []) // 1
    else if (t.length == 0)
        return // 2
    else {
        // choose first element // 3
        for (const c of choosek(t.slice(1), k - 1))
            yield [t[0], ...c]
        // skip first element
        yield* choosek(t.slice(1), k)
    }
}
for (const c of choosek(["🔴","🟢","🔵","🟡","⚫️"], 3))
    console.log(c.join(""))
🔴🟢🔵
🔴🟢🟡
🔴🟢⚫️
🔴🔵🟡
🔴🔵⚫️
🔴🟡⚫️
🟢🔵🟡
🟢🔵⚫️
🟢🟡⚫️
🔵🟡⚫️
A benefit of using an array as input instead of a number is that we can generate fixed-sized combinations from any input, not just numerical ones. And because .slice also works on strings, we can actually use string-based inputs too!
for (const c of choosek("ABCDE", 3))
    console.log(c.join(""))
ABC
ABD
ABE
ACD
ACE
ADE
BCD
BCE
BDE
CDE

Related

Function for manually ordering a string

I'm trying to learn how functions work in JS.
This function should order the string, but the result is equal to the string itself.
What am I doing wrong?
a = "awbc"
function f(str) {
let temporary
for (i = 0; i < a.length; i++) {
for (j = 0; j < a.length; j++) {
if (a[j] < a[j + 1]) {
temporary = a[j]
a[j] = a[j + 1]
a[j + 1] = temporary
}
}
}
return a
}
console.log(f(a))
You need to use a replace method on array values. See this for reference:
Replace string in javascript array
Strings are immutable
As already pointed out by Pointy (edit: believe it or not, no pun intended) in the comments, strings are immutable and cannot be changed. But what you can do is create one separate string for each character in your string and put that in an array using the split() method. Then you can sort that array and when it is sorted use join() to create another string consisting of those characters in sorted order.
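A minimal sketch of that split/sort/join idea (ascending order here; the bubble sort below sorts descending):
// Hedged sketch: the default lexicographic sort is fine here because every item is a single character.
const sortString = (str) => str.split("").sort().join("");
console.log(sortString("awbc")); // "abcw"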
Your bubbleSort() implementation
First of all, the algorithm you are trying to implement is called bubble sort and is one of the easy but unfortunately slow sorting algorithms, as it takes O(n²) runtime in the best, average and worst case (without an early-exit check), while good algorithms like merge sort only take O(n * log n).
Nevertheless, if you want to sort using bubbleSort() you need to make some changes to your code.
You are sorting in descending order, so in every iteration of the outer loop the smallest remaining value bubbles to the right-most unsorted position. In the next iteration there is no need to compare into that already sorted tail again, so the inner loop can stop at temporary.length - 1 - i instead of temporary.length - 1. This changes nothing in the time complexity in big-O notation, but it still improves the performance noticeably.
Iteration i of outer loop | temporary (at the start of iteration i)
0 | [ a,w,b,c ]
1 | [ w,b,c,a ]
2 | [ w,c,b,a ]
3 | [ w,c,b,a ]
Also, you are using a function in order to encapsulate functionality, so you should not modify global variables within it. Use the str parameter you pass to it instead of a.
Last but not least, you have the line temporary = a[j], but temporary should hold your array of individual character strings, which you would destroy with this assignment. Create a new variable temp instead to do the swap.
Here is an implementation of bubbleSort() with all those issues addressed.
/**
 * Bubble sort algorithm which sorts the characters in a string in descending order.
 * Best, average and worst case runtime of this bubble sort is O(n²),
 * as we iterate n times over at most (n - i) items:
 * T(n) = Sum{i=0..n}(n - i) = n * (n + 1) / 2 = O(n²)
 * @param {string} str string whose characters should be sorted
 * @returns {string} the sorted string
 */
function bubbleSort(str) {
    const temporary = str.split("");
    // temporary now contains every single character
    console.log("After split:", temporary);
    // in each iteration of the outer loop the "smallest" remaining letter bubbles to the back
    for (let i = 0; i < temporary.length; i++) {
        // the last i positions already hold the smallest letters, so the inner loop can stop before them
        for (let j = 0; j < temporary.length - 1 - i; j++) {
            if (temporary[j] < temporary[j + 1]) {
                // you need another variable here, otherwise you would overwrite temporary
                const temp = temporary[j];
                temporary[j] = temporary[j + 1];
                temporary[j + 1] = temp;
            }
        }
    }
    // now join the characters back together into a string
    console.log("After sorting:", temporary);
    return temporary.join("");
}
console.log(bubbleSort("awbc"));
console.log(bubbleSort("another _string with &)8 w?ird chars"));

How do I efficiently check if an integer is not in a massive array in Javascript

I'm working on an assignment where I have to find the smallest positive integer greater than 0 that is NOT within a huge array containing 100,000 elements. I'm able to do it correctly, but apparently my solution takes way too long and returns a timeout error.
This is my current solution:
function solution(A) {
    let min = 1
    while (A.includes(min) === true) {
        min++
    }
    return min
}
Is there a faster way of doing this that doesn't involve looping through every single element?
Edit: whoops guys, I forgot a key element of this question!
Edit 2: the minimum value would be -2, the maximum value would be 100,000, and they are not in order.
You could take an object and add each wanted value to the object.
Then take the smallest key and check if it is equal to one; if it is, increment until no key is found and return that value. Otherwise return one.
function getSmallest(array) {
    let temp = Object.create(null),
        smallest;
    for (const v of array) if (v > 0) temp[v] = true;
    smallest = +Object.keys(temp)[0];
    if (smallest !== 1) return 1;
    while (temp[++smallest]);
    return smallest;
}
console.log(getSmallest([2, 1, 0])); // 3
console.log(getSmallest([2, 3, 0])); // 1
I do believe this is a test to see if you know search algorithms for an unsorted array. Have you been taught about binary search?
I wrote code for the following statement: "Find the minimal positive integer NOT in an array of numbers that could be anything from -2 to 10000".
const arr = [5, 4, 6, 7, 5, 6, 4, 5];
const p = new Array(10004);
arr.forEach(el => p[el + 2] = 1);
let answer = 10004;
for (let i = 3; i < 10004; ++i) {
    if (!p[i]) {
        answer = i - 2;
        console.log(answer);
        break;
    } else if (i == 10003) {
        console.log('no solution');
    }
}
Sorting the array seems like a good place to start. Then you can loop or binary search from there to find your smallest positive integer.
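A hedged sketch of that sort-then-scan idea (the function name is just for illustration):
function smallestMissingPositive(arr) {
    const sorted = [...arr].sort((a, b) => a - b); // sort a copy ascending
    let candidate = 1;
    for (const v of sorted) {
        if (v === candidate) candidate++;          // candidate exists, try the next one
        else if (v > candidate) break;             // gap found, candidate is missing
    }
    return candidate;
}
console.log(smallestMissingPositive([2, -1, 1, 3, 5])); // 4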

Recursion to find n numbers in a range that add up to a target value

I've written a function that finds all sets of two numbers that sum to a target value, given a range of numbers from 0 to x. I'm trying to rewrite it in a way so that you can get a result of a given length n (not only 2): n numbers that will equal a target when added together.
For example, if the range is 1 to 5 and you want to find all sets of three numbers that add up to 7, the answer would be:
[[1,1,5], [1,2,4], [1,3,3], [2,2,3]]
It looks like the solution is to use a recursive function, but I can't quite figure out how to do this. I've looked at several subset-sum examples on StackOverflow, but none seem to match this particular scenario.
This is not a homework problem. Any help would be appreciated. Here is my findPairs function:
function findPairs(arr, target) {
    var pairs = [];
    var first = 0;
    var last = arr.length - 1;
    while (first <= last) {
        var sum = arr[first] + arr[last];
        if (sum === target) {
            pairs.push([arr[first], arr[last]]);
            first++;
            last--;
        }
        else if (sum < target) {
            first++;
        }
        else {
            last--;
        }
    }
    return pairs;
}
var sample = _.range(11);
console.log(JSON.stringify(findPairs(sample, 12)));
// Returns
// [[2,10],[3,9],[4,8],[5,7],[6,6]]
This example uses the lodash _.range function. Fiddle here: https://jsfiddle.net/tbarmann/muoms1vL/10/
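If you do not want the lodash dependency, an equivalent range can be built with plain JavaScript (an illustrative substitution, not part of the original code):
const sample = Array.from({ length: 11 }, (_, i) => i); // [0, 1, 2, ..., 10], same as _.range(11)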
You could indeed use recursion. Define the third argument as the length of the sub-arrays you are looking for, and define a recursive function inside that function as follows:
function findSums(arr, target, count) {
    var result = [];
    function recurse(start, leftOver, selection) {
        if (leftOver < 0) return; // failure
        if (leftOver === 0 && selection.length == count) {
            result.push(selection); // add solution
            return;
        }
        for (var i = start; i < arr.length; i++) {
            recurse(i, leftOver - arr[i], selection.concat(arr[i]));
        }
    }
    recurse(0, target, []);
    return result;
}
// Demo
var result = findSums([1, 2, 3, 4, 5], 7, 3);
console.log(result);
Remarks
The solution does not require that the input array has consecutive numbers (a range): you might pass it [3,6,4,2,1]. The sub-arrays that are returned will keep the selected elements in order; for example, [3,3,3], [3,4,2] and [6,2,1] could be solutions for targeting 9 with 3 values for that example input (see the example call after these remarks).
The numbers must all be non-negative. If negative values are to be allowed, then the optimisation if (leftOver < 0) return; must be removed from the code.
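For instance, running the function on that example input should produce output along these lines:
console.log(findSums([3, 6, 4, 2, 1], 9, 3));
// e.g. [ [3,3,3], [3,4,2], [6,2,1], [4,4,1] ] -- the order follows the order of the input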
Well, what you are probably looking for is dynamic programming. It is an approach where you define your problem mathematically along with a recursive solution. Then you try to find a solution, using memoization.
I would make a helper function, where the arguments are (range, target, lengthOfResult). Then do something like:
function helper(arr, target, lengthOfResult) {
    if (lengthOfResult === 0) return target === 0 ? [[]] : [];
    var result = [];
    for (var i = 0; i < arr.length; i++) {
        if (target - arr[i] < 0) continue;
        var sub_results = helper(arr, target - arr[i], lengthOfResult - 1);
        for (var res of sub_results) {
            result.push([arr[i]].concat(res));
        }
    }
    return result;
}
So you are changing the question for the helper function to "give me all lists of length - 1 which sum up to target - arr[i]", and append arr[i] to each of them. Then use that to construct the result, for each arr[i].
Basically, in every iteration the length will decrease, so the function will terminate at some point. You subtract from the target whatever number you pick now, so your end result will add up to the desired target.
Note that this algorithm works only with positive numbers. If you can allow negatives, you should remove the if inside the first for-loop.
About the memoization, if you want to allow using each number only once, you could get away with a single array. If you allow reoccurring numbers in the result (like in your example), you probably need a grid; horizontally the target, vertically the lengthOfResult. Then in the beginning of each invocation of the helper method, you check if you already calculated that value. This will save you some recursive calls and make the algorithm not exponential.
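A hedged sketch of that grid-style memoization (the cache key combines the remaining target and the remaining length, i.e. one cell of the grid; the names are illustrative):
function memoHelper(arr, target, lengthOfResult, memo = new Map()) {
    if (lengthOfResult === 0) return target === 0 ? [[]] : []; // base case
    const key = target + "," + lengthOfResult;                 // one grid cell
    if (memo.has(key)) return memo.get(key);                   // reuse an earlier sub-result
    const result = [];
    for (const value of arr) {
        if (target - value < 0) continue;                      // assumes non-negative inputs
        for (const rest of memoHelper(arr, target - value, lengthOfResult - 1, memo)) {
            result.push([value, ...rest]);
        }
    }
    memo.set(key, result);
    return result;
}
console.log(memoHelper([1, 2, 3, 4, 5], 7, 3)); // includes reordered duplicates such as [1,1,5] and [1,5,1]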
function sumPermutations(target, range, number) {
    var result = [];
    function combo(left, group, sum) {
        if (sum > target) return null;
        if (left == 0) {
            if (sum == target) return group;
            return null;
        }
        for (var i = range.min; i <= range.max; i++) {
            var r = combo(left - 1, group.concat(i), sum + i);
            if (r)
                result.push(r);
        }
    }
    combo(number, [], 0);
    return result;
}
console.log(sumPermutations(7, {min: 1, max: 5}, 3));
Note: This gives results with duplicates (all permutations, including those that only differ in order). You can remove duplicates by sorting the arrays, joining their items, and hashing them into a hash object, as in the sketch below.
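A hedged sketch of that de-duplication step, applied to the output above (sort each result, use the joined string as the hash key, keep the first occurrence):
function dedupe(combos) {
    const seen = Object.create(null);
    return combos.filter(combo => {
        const key = [...combo].sort((a, b) => a - b).join(",");
        return seen[key] ? false : (seen[key] = true);
    });
}
console.log(dedupe(sumPermutations(7, { min: 1, max: 5 }, 3)));
// e.g. [ [1,1,5], [1,2,4], [1,3,3], [2,2,3] ] -- one representative per combination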

Algorithm to merge multiple sorted sequences into one sorted sequence in javascript

I am looking for an algorithm to merge multiple sorted sequences, let's say X sorted sequences with n elements each, into one sorted sequence in JavaScript. Can you provide some examples?
Note: I do not want to use any library.
Trying to solve https://icpc.kattis.com/problems/stacking: what will be the minimal number of operations needed to merge sorted arrays, under these conditions:
Split: a single stack can be split into two stacks by lifting any top portion of the stack and putting it aside to form a new stack.
Join: two stacks can be joined by putting one on top of the other. This is allowed only if the bottom plate of the top stack is no larger than the top plate of the bottom stack, that is, the joined stack has to be properly ordered.
History
This problem has been solved for more than a century, going back to Hermann Hollerith and punchcards. Huge sets of punchcards, such as those resulting from a census, were sorted by dividing them into batches, sorting each batch, and then merging the sorted batches--the so-called
"merge sort". Those tape drives you see spinning in 1950's sci-fi movies were most likely merging multiple sorted tapes onto one.
Algorithm
All the algorithms you need can be found at https://en.wikipedia.org/wiki/Merge_algorithm. Writing this in JS is straightforward. More information is available in the question Algorithm for N-way merge. See also this question, which is an almost exact duplicate, although I'm not sure any of the answers are very good.
The naive concat-and-resort approach does not even qualify as an answer to the problem. The somewhat naive take-the-next-minimum-value-from-any-input approach is much better, but not optimal, because it takes more time than necessary to find the next input to take a value from. That is why the best solution uses something called a "min-heap" or a "priority queue".
Simple JS solution
Here's a real simple version, which I make no claim to be optimized, other than in the sense of being able to see what it is doing:
const data = [[1, 3, 5], [2, 4]];
// Merge an array of pre-sorted arrays, based on the given sort criteria.
function merge(arrays, sortFunc) {
    let result = [], next;
    // Add an 'index' property to each array to keep track of where we are in it.
    arrays.forEach(array => array.index = 0);
    // Find the next array to pull from.
    // Just sort the list of arrays by their current value and take the first one.
    function findNext() {
        return arrays.filter(array => array.index < array.length)
            .sort((a, b) => sortFunc(a[a.index], b[b.index]))[0];
    }
    // This is the heart of the algorithm.
    while (next = findNext()) result.push(next[next.index++]);
    return result;
}
function arithAscending(a, b) { return a - b; }
console.log(merge(data, arithAscending));
The above code maintains an index property on each input array to remember where we are. The simplistic alternative would be to shift the element from the front of each array when it is its turn to be merged, but that would be rather inefficient.
Optimizing finding the next array to pull from
This naive implementation of findNext, to find the array to pull the next value from, simply sorts the list of inputs by the first element, and takes the first array in the result. You can optimize this by using a "min-heap" to manage the arrays in sorted order, which removes the need to resort them each time. A min-heap is a tree, consisting of nodes, where each node contains a value which is the minimum of all values below, with left and right nodes giving additional (greater) values, and so on. You can find information on a JS implementation of a min-heap here.
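For concreteness, here is a hedged sketch of what such a min-heap based merge could look like; the MinHeap class and mergeWithHeap function below are illustrative names, not the implementation linked above:
// A binary min-heap of {arr, i} entries, ordered by each array's current front value.
class MinHeap {
    constructor(cmp) { this.cmp = cmp; this.items = []; }
    push(x) {                       // add x, then bubble it up
        const a = this.items;
        a.push(x);
        for (let i = a.length - 1; i > 0;) {
            const parent = (i - 1) >> 1;
            if (this.cmp(a[i], a[parent]) >= 0) break;
            [a[i], a[parent]] = [a[parent], a[i]];
            i = parent;
        }
    }
    pop() {                         // remove and return the minimum, then sift down
        const a = this.items, top = a[0], last = a.pop();
        if (a.length) {
            a[0] = last;
            for (let i = 0;;) {
                let m = i;
                const l = 2 * i + 1, r = l + 1;
                if (l < a.length && this.cmp(a[l], a[m]) < 0) m = l;
                if (r < a.length && this.cmp(a[r], a[m]) < 0) m = r;
                if (m === i) break;
                [a[i], a[m]] = [a[m], a[i]];
                i = m;
            }
        }
        return top;
    }
    get size() { return this.items.length; }
}
// Merge by always pulling from the array whose current front value is smallest.
function mergeWithHeap(arrays, sortFunc) {
    const heap = new MinHeap((x, y) => sortFunc(x.arr[x.i], y.arr[y.i]));
    arrays.forEach(arr => { if (arr.length) heap.push({ arr, i: 0 }); });
    const result = [];
    while (heap.size) {
        const top = heap.pop();
        result.push(top.arr[top.i++]);
        if (top.i < top.arr.length) heap.push(top); // still has values: re-insert
    }
    return result;
}
console.log(mergeWithHeap([[1, 3, 5], [2, 4]], (a, b) => a - b)); // [1, 2, 3, 4, 5]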
A generator solution
It might be slightly cleaner to write this as a generator which takes a list of iterables as inputs, which includes arrays.
// Test data.
const data = [[1, 3, 5], [2, 4]];
// Merge an array of pre-sorted arrays, based on the given sort criteria.
function* merge(iterables, sortFunc) {
    let next;
    // Create iterators, with a "result" property to hold the most recent result.
    const iterators = iterables.map(iterable => {
        const iterator = iterable[Symbol.iterator]();
        iterator.result = iterator.next();
        return iterator;
    });
    // Find the next iterator whose value to use.
    function findNext() {
        return iterators
            .filter(iterator => !iterator.result.done)
            .reduce((ret, cur) => !ret || cur.result.value < ret.result.value ? cur : ret,
                null);
    }
    // This is the heart of the algorithm.
    while (next = findNext()) {
        yield next.result.value;
        next.result = next.next();
    }
}
function arithAscending(a, b) { return a - b; }
console.log(Array.from(merge(data, arithAscending)));
The naive approach is to concatenate all the k sequences and sort the result. But if each sequence has n elements, the cost will be O(k*n*log(k*n)). Too much!
Instead, you can use a priority queue or heap. Like this:
// Assumes a MinPriorityQueue implementation with insert(), empty() and
// findAndDeleteMin(), constructed with a "less than" comparator.
var sorted = [];
var pq = new MinPriorityQueue(function(a, b) {
    return a.number < b.number;
});
var indices = new Array(k).fill(0);
for (var i = 0; i < k; ++i) if (sequences[i].length > 0) {
    pq.insert({number: sequences[i][0], sequence: i});
}
while (!pq.empty()) {
    var min = pq.findAndDeleteMin();
    sorted.push(min.number);
    ++indices[min.sequence];
    if (indices[min.sequence] < sequences[min.sequence].length) pq.insert({
        number: sequences[min.sequence][indices[min.sequence]],
        sequence: min.sequence
    });
}
The priority queue only contains at most k elements simultaneously, one for each sequence. You keep extracting the minimum one, and inserting the following element in that sequence.
With this, the cost will be:
k*n insertions into a heap of at most k elements: O(k*n*log(k))
k*n deletions from a heap of at most k elements: O(k*n*log(k))
Various constant operations for each number: O(k*n)
So the total is only O(k*n*log(k)).
Just add them into one big array and sort it.
You could use a heap, add the first element of each sequence to it, pop the lowest one (that's your first merged element), add the next element from the sequence of the popped element and continue until all sequences are over.
It's much easier to just add them into one big array and sort it, though.
This is a simple JavaScript algorithm I came up with. Hope it helps. It will take any number of sorted arrays and do a merge. I am maintaining an array for the index positions of the arrays. It basically iterates through the current index positions of each array and checks which one holds the minimum. Based on that it picks up the minimum, inserts it into the merged array, and then increments the position index for that particular array. I feel the time complexity can be improved. Will post back if I come up with a better algorithm, possibly using a min heap.
function merge() {
    var mergedArr = [], pos = [], finished = 0;
    for (var i = 0; i < arguments.length; i++) {
        pos[i] = 0;
    }
    while (finished != arguments.length) {
        var min = null, selected;
        for (var i = 0; i < arguments.length; i++) {
            if (pos[i] != arguments[i].length) {
                if (min == null || min > arguments[i][pos[i]]) {
                    min = arguments[i][pos[i]];
                    selected = i;
                }
            }
        }
        mergedArr.push(arguments[selected][pos[selected]]);
        pos[selected]++;
        if (pos[selected] == arguments[selected].length) {
            finished++;
        }
    }
    return mergedArr;
}
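For example, calling the variadic function above with a few sorted arrays:
console.log(merge([1, 4, 7], [2, 5], [0, 3, 6, 8])); // [0, 1, 2, 3, 4, 5, 6, 7, 8]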
This is a beautiful question. Unlike concatenating the arrays and applying a .sort(), a simple pairwise-merge approach with .reduce() never comparison-sorts the full output: each merge step is linear, for roughly O(m²·n) work in total, where m is the number of arrays and n is their average length.
We will handle the arrays one by one. First we will merge the first two arrays and then we will merge the result with the third array and so on.
function mergeSortedArrays(a){
    return a.reduce(function(p, c){
        var pc = 0,
            cc = 0,
            len = p.length < c.length ? p.length : c.length,
            res = [];
        while (p[pc] !== undefined && c[cc] !== undefined) p[pc] < c[cc] ? res.push(p[pc++])
                                                                         : res.push(c[cc++]);
        return p[pc] === undefined ? res.concat(c.slice(cc))
                                   : res.concat(p.slice(pc));
    });
}
var sortedArrays = Array(5).fill().map(_ => Array(~~(Math.random()*5)+5).fill().map(_ => ~~(Math.random()*20)).sort((a,b) => a-b));
sortedComposite = mergeSortedArrays(sortedArrays);
sortedArrays.forEach(a => console.log(JSON.stringify(a)));
console.log(JSON.stringify(sortedComposite));
OK, as per @Mirko Vukušić's comparison of this algorithm with .concat() and .sort(), this algorithm is still the fastest solution with FF, but not with Chrome. The Chrome .sort() is actually very fast and I cannot be sure about its time complexity. I just needed to tune it up a little for JS performance without touching the essence of the algorithm at all. So now it seems to be faster than FF's concat and sort.
function mergeSortedArrays(a){
    return a.reduce(function(p, c){
        var pc = 0,
            pl = p.length,
            cc = 0,
            cl = c.length,
            res = [];
        while (pc < pl && cc < cl) p[pc] < c[cc] ? res.push(p[pc++])
                                                 : res.push(c[cc++]);
        if (cc < cl) while (cc < cl) res.push(c[cc++]);
        else while (pc < pl) res.push(p[pc++]);
        return res;
    });
}
function concatAndSort(a){
    return a.reduce((p, c) => p.concat(c))
            .sort((a, b) => a - b);
}
var sortedArrays = Array(5000).fill().map(_ => Array(~~(Math.random()*5)+5).fill().map(_ => ~~(Math.random()*20)).sort((a,b) => a-b));
console.time("merge");
mergeSorted = mergeSortedArrays(sortedArrays);
console.timeEnd("merge");
console.time("concat");
concatSorted = concatAndSort(sortedArrays);
console.timeEnd("concat");
5000 random sorted arrays of random lengths between 5-10.
es6 syntax:
function mergeAndSort(arrays) {
    return [].concat(...arrays).sort()
}
The function receives an array of arrays to merge and sort.
EDIT: as caught by @Redu, the above code is incorrect. The default sort(), when no sorting function is provided, compares by Unicode string order. The fixed (and slower) code is:
function mergeAndSort(arrays) {
    return [].concat(...arrays).sort((a, b) => a - b)
}

Why is array.push sometimes faster than array[n] = value?

As a side result of testing some code I wrote a small function to compare the speed of using the array.push(value) method vs direct addressing array[n] = value. To my surprise the push method often showed to be faster, especially in Firefox and sometimes in Chrome. Just out of curiosity: does anyone have an explanation for it?
Here's the test (note: rewritten 2023/02/10)
const arrLen = 10_000;
const x = [...Array(10)].map( (_, i) => testArr(arrLen, i));
console.log(`Array length: ${arrLen}\n--------\n${x.join(`\n`)}`);

function testArr(n, action) {
    let arr = [];
    const perfStart = performance.now();
    const methods =
        ` for (; n; n--) arr.push(n)
          for (; i < n; i += 1) { arr[i] = i; }
          for (; i < n; i += 1) arr.push(i)
          while (--n) arr.push(n)
          while (i++ < n) arr.push(n)
          while (--n) arr.splice(0, 0, n)
          while (--n) arr.unshift(n)
          while (++i < n) arr.unshift(i)
          while (--n) arr.splice(n - 1, 0, n)
          while (n--) arr[n] = n`.split(`\n`).map(v => v.trim());
    const report = i => `${methods[i]}: ${
        (performance.now() - perfStart).toFixed(2)} milliseconds`;
    let i = 0;
    switch (action) {
        case 0: for (; n; n--) arr.push(n); break;
        case 1: for (; i < n; i += 1) { arr[i] = i; } break;
        case 2: for (let i = 0; i < n; i += 1) arr.push(i); break;
        case 3: while (--n) arr.push(n); break;
        case 4: while (i++ < n) arr.push(n); break;
        case 5: while (--n) arr.splice(0, 0, n); break;
        case 6: while (--n) arr.unshift(n); break;
        case 7: while (++i < n) arr.unshift(i); break;
        case 8: while (--n) arr.splice(n - 1, 0, n); break;
        default: while (n--) arr[n] = n;
    }
    return report(action);
}
All sorts of factors come into play; most JS implementations use a flat array that converts to sparse storage if that becomes necessary later on.
Basically the decision to become sparse is a heuristic based on what elements are being set, and how much space would be wasted in order to remain flat.
In your case you are setting the last element first, which means the JS engine will see an array that needs to have a length of n but only a single element. If n is large enough this will immediately make the array a sparse array -- in most engines this means that all subsequent insertions will take the slow sparse array case.
You should add an additional test in which you fill the array from index 0 to index n-1 -- it should be much, much faster.
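A hedged sketch of what such an additional ascending-fill test could look like (the helper names here are illustrative, not part of the original test):
function timeFill(n, fill) {
    const t0 = performance.now();
    const arr = fill(n);
    return `${fill.name}: ${(performance.now() - t0).toFixed(2)} ms (length ${arr.length})`;
}
// dense from the start: indices are written in ascending order
function ascendingFill(n) { const a = []; for (let i = 0; i < n; i++) a[i] = i; return a; }
// writes the last index first, which can push the array into sparse mode
function descendingFill(n) { const a = []; for (let i = n - 1; i >= 0; i--) a[i] = i; return a; }
console.log(timeFill(1_000_000, ascendingFill));
console.log(timeFill(1_000_000, descendingFill));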
In response to @Christoph and out of a desire to procrastinate, here's a description of how arrays are (generally) implemented in JS -- specifics vary from JS engine to JS engine but the general principle is the same.
All JS Objects (so not strings, numbers, true, false, undefined, or null) inherit from a base object type -- the exact implementation varies, it could be C++ inheritance, or manually in C (there are benefits to doing it either way) -- the base Object type defines the default property access methods, e.g.
interface Object {
    put(propertyName, value)
    get(propertyName)
  private:
    map properties; // a map (tree, hash table, whatever) from propertyName to value
}
This Object type handles all the standard property access logic, the prototype chain, etc.
Then the Array implementation becomes
interface Array : Object {
    override put(propertyName, value)
    override get(propertyName)
  private:
    map sparseStorage;   // a map between integer indices and values
    value[] flatStorage; // basically a native array of values with a 1:1
                         // correspondence between JS index and storage index
    value length;        // the `length` of the JS array
}
Now when you create an Array in JS the engine creates something akin to the above data structure. When you insert an object into the Array instance the Array's put method checks to see if the property name is an integer (or can be converted into an integer, e.g. "121", "2341", etc.) between 0 and 2^32-1 (or possibly 2^31-1, i forget exactly). If it is not, then the put method is forwarded to the base Object implementation, and the standard [[Put]] logic is done. Otherwise the value is placed into the Array's own storage, if the data is sufficiently compact then the engine will use the flat array storage, in which case insertion (and retrieval) is just a standard array indexing operation, otherwise the engine will convert the array to sparse storage, and put/get use a map to get from propertyName to value location.
I'm honestly not sure if any JS engine currently converts from sparse to flat storage after that conversion occurs.
Anyhoo, that's a fairly high-level overview of what happens and leaves out a number of the more icky details, but that's the general implementation pattern. The specifics of how the additional storage is handled, and how put/get are dispatched, differ from engine to engine -- but this is the clearest way I can really describe the design/implementation.
A minor additional point: while the ES spec refers to propertyName as a string, JS engines tend to specialise on integer lookups as well, so someObject[someInteger] will not convert the integer to a string if you're looking at an object that has integer properties, e.g. Array, String, and DOM types (NodeLists, etc.).
These are the results I get with your test

on Safari:
Array.push(n)                      1,000,000 values: 0.124 sec
Array[n .. 0] = value (descending) 1,000,000 values: 3.697 sec
Array[0 .. n] = value (ascending)  1,000,000 values: 0.073 sec

on FireFox:
Array.push(n)                      1,000,000 values: 0.075 sec
Array[n .. 0] = value (descending) 1,000,000 values: 1.193 sec
Array[0 .. n] = value (ascending)  1,000,000 values: 0.055 sec

on IE7:
Array.push(n)                      1,000,000 values: 2.828 sec
Array[n .. 0] = value (descending) 1,000,000 values: 1.141 sec
Array[0 .. n] = value (ascending)  1,000,000 values: 7.984 sec
According to your test the push method seems to be better on IE7 (huge difference), and since on the other browsers the difference is small, the push method really seems to be the best way to add an element to an array.
But I created another simple test script to check which method is fast for appending values to an array, and the results really surprised me: using Array.length seems to be much faster compared to using Array.push, so I really don't know what to say or think anymore; I'm clueless.
BTW: on my IE7 your script stops and the browser asks me if I want to let it go on (you know, the typical IE message that says: "Stop running this script? ...").
I would recommend reducing the loops a little.
push() is a special case of the more general [[Put]] and therefore can be further optimized:
When calling [[Put]] on an array object, the argument has to be converted to an unsigned integer first because all property names - including array indices - are strings. Then it has to be compared to the length property of the array in order to determine whether or not the length has to be increased. When pushing, no such conversion or comparison has to take place: Just use the current length as array index and increase it.
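To make the length bookkeeping visible (a sketch of the observable semantics only, not of any engine's internals):
const arr = [10, 20];
arr[5] = 60;             // index 5 >= length, so length has to be reconciled to 6
console.log(arr.length); // 6 (positions 2 to 4 are holes)
arr.push(70);            // appended at the current length, which then becomes 7
console.log(arr.length); // 7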
Of course there are other things which will affect the runtime, e.g. calling push() should be slower than calling [[Put]] via [] because the prototype chain has to be checked for the former.
As olliej pointed out: actual ECMAScript implementations will optimize the conversion away, i.e. for numeric property names, no conversion from string to uint is done but just a simple type check. The basic assumption should still hold, though its impact will be less than I originally assumed.
array[n] = value, when the array has previously been initialised with a length (like new Array(n)) and is filled in ascending order, is faster than array.push when n >= 90.
From inspecting the JavaScript source code of your page, your Array[0 .. n] = value (ascending) test does not initialize the array with a length in advance.
So Array.push(n) sometimes comes out ahead on the first run, but on subsequent runs of your test Array[0 .. n] = value (ascending) actually consistently performs best (in both Safari and Chrome).
If the code is modified so it initialises an array with a length in advance, like var array = new Array(n), then the Array[0 .. n] = value (ascending) case shows that array[n] = value performs 4.5x to 9x faster than Array.push(n) in my rudimentary running of this specific test code.
This is consistent with other tests, like the ones @Timo Kähkönen reported. See specifically this revision of the test he mentioned: https://jsperf.com/push-method-vs-setting-via-key/10
The modified code, so you may see how I edited it and initialised the array in a fair manner (not unnecessarily initialising it with a length for the array.push test case):
function testArr(n, doPush) {
    var now = new Date().getTime(),
        duration,
        report = ['<b>.push(n)</b>',
                  '<b>.splice(0,0,n)</b>',
                  '<b>.splice(n-1,0,n)</b>',
                  '<b>[0 .. n] = value</b> (ascending)',
                  '<b>[n .. 0] = value</b> (descending)'];
    doPush = doPush || 5;
    if (doPush === 1) {
        var arr = [];
        while (--n) {
            arr.push(n);
        }
    } else if (doPush === 2) {
        var arr = [];
        while (--n) {
            arr.splice(0, 0, n);
        }
    } else if (doPush === 3) {
        var arr = [];
        while (--n) {
            arr.splice(n - 1, 0, n);
        }
    } else if (doPush === 4) {
        var arr = new Array(n);
        for (var i = 0; i < n; i++) {
            arr[i] = i;
        }
    } else {
        while (--n) {
            var arr = [];
            arr[n] = n;
        }
    }
    /*console.log(report[doPush-1] + '...' + arr.length || 'nopes');*/
    duration = ((new Date().getTime() - now) / 1000);
    $('zebradinges').innerHTML += '<br>Array' + report[doPush - 1] + ' 1.000.000 values: ' + duration + ' sec';
    arr = null;
}
Push adds to the end, while array[n] = value has to reconcile the index n with the array's length (and possibly its storage) first. It probably depends on the browser and its way of handling arrays.
