Different ways to create JavaScript arrays

I want to understand the performance differences between ways of constructing arrays. Running the following program, I am puzzled by the output below:
Time for range0: 521
Time for range1: 149
Time for range2: 1848
Time for range3: 8411
Time for range4: 3487
I don't understand why range3 takes longer than range4, while range1 is faster than range2. Also, the map function seems very inefficient; what is the use of it?
function range0(start, count) {
  var arr = [];
  for (var i = 0; i < count; i++) {
    arr.push(start + i);
  }
  return arr;
}

function range1(start, count) {
  var arr = new Array(count);
  for (var i = 0; i < count; i++) {
    arr[i] = start + i;
  }
  return arr;
}

function range2(start, count) {
  var arr = Array.apply(0, Array(count));
  for (var i = 0; i < count; i++) {
    arr[i] = start + i;
  }
  return arr;
}

function range3(start, count) {
  var arr = new Array(count);
  return arr.map(function(element, index) {
    return index + start;
  });
}

function range4(start, count) {
  var arr = Array.apply(0, Array(count));
  return arr.map(function(element, index) {
    return index + start;
  });
}

function profile(range) {
  var iterations = 100000,
      start = 0, count = 1000,
      startTime, endTime, finalTime;
  startTime = performance.now();
  for (var i = 0; i < iterations; ++i) {
    range(start, count);
  }
  endTime = performance.now();
  finalTime = (endTime - startTime);
  console.log(range.name + ': ' + finalTime + ' ms');
}

[range0, range1, range2, range3, range4].forEach(profile);

I don't understand why range3 takes longer than range4
Me neither. It is a surprising result, given my superficial analysis and the results I obtained by profiling the code. On my computer running Google Chrome 50, range4 is twice as slow as range3.
I'd have to study the JavaScript implementation you are using in order to figure out why that happens.
while range1 is faster than range2.
range1 executes faster because it preallocates the array once and fills it with a plain loop, while range2 performs an extra function application and unnecessary memory allocations.
Also, the map function seems very inefficient; what is the use of it?
The map function is used to compute a new Array based on the values of an existing one.
[1, 2, 3, 4, 5].map(number => number * number);
// [1, 4, 9, 16, 25]
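As an aside, ES6's Array.from covers the question's "dense range" use case directly, without the apply trick (a sketch, assuming an ES6-capable engine):
const range = (start, count) => Array.from({ length: count }, (_, i) => start + i);
console.log(range(10, 5)); // [10, 11, 12, 13, 14]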
On my computer
Time for range0: 783
Time for range1: 287
Time for range2: 10541
Time for range3: 14981
Time for range4: 28243
My results reflect my expectations regarding the performance of each function.
A superficial analysis of each function
range0
Creates an Array and populates it via a loop. It is the simplest, most straightforward code possible, and I suppose it can be understood as the baseline for the performance comparison.
range1
Uses the Array constructor with a length parameter. This greatly optimizes the underlying memory allocation required to store the elements. Since the exact number of elements is known beforehand, the memory does not have to be reallocated as the number of elements grows; the exact amount of memory needed to store all elements can be allocated exactly once, when the Array is instantiated.
range2
Applies the sparse array Array(count) as the arguments list of the Array constructor (the first argument, 0, merely sets this, which Array ignores). Since apply reads the length and every index of that array, this is equivalent to calling Array with count undefined arguments, so the result is a dense array of count elements, each holding undefined, rather than the sparse array that Array(count) alone would give. The intermediate Array(count) and the count-element arguments list are extra allocations that range1 avoids.
Note that Array.call(null, count) would not be a substitute: it is just Array(count) again, a sparse array. The apply form exists precisely to obtain a dense array.
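To see the difference in the console (output formatting varies by engine):
var sparse = Array(3);                // length 3, but no index exists yet
var dense = Array.apply(0, Array(3)); // three real elements, each undefined
console.log(0 in sparse); // false: index 0 is a hole
console.log(0 in dense);  // true: index 0 exists and holds undefined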
range3
Like range1, but uses map with a callback instead of a loop. The initial memory allocation is optimized, but the overhead of the map machinery is likely to be significant.
In addition, map generates a new Array instance. Since that instance also has count elements, it would make sense to optimize that memory allocation as well; however, it is unclear to me whether that actually happens. Nevertheless, two separate memory allocations take place, instead of just one as in range1.
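One detail worth keeping in mind (this is standard ECMAScript behavior, not specific to any engine): map only visits indices that actually exist, so mapping over the sparse array from new Array(count) never invokes the callback at all, and range3 returns an array of holes rather than the intended range.
new Array(3).map(function() { return 1; });             // [ <3 empty items> ] - callback never runs
Array.apply(0, Array(3)).map(function() { return 1; }); // [1, 1, 1]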
range4
Combines all the inefficiencies of range2 and range3.
Amazingly, it executes faster than range3 on your computer. It's unclear to me why; I suppose one would have to investigate your particular JavaScript implementation in order to figure it out.

Related

Time complexity when doing n recursions?

I'm struggling to understand how to effectively determine the time complexity of recursive code. While I can see how we can find the time complexity of code that has two recursive calls (e.g. recursive fib) as O(2^n), I struggle to determine it when there are n recursions. Here is an example:
Quick note: I tried to come up with an easy example. I admit that there's very likely a formula to calculate the size of the powerset and give us the work needed. My question is more generic and should relate to all cases where one recursion can produce n additional recursions.
var subsets = function(nums) {
  const result = [];
  function traverse(arr, start) {
    result.push(arr);
    for (let i = start; i < nums.length; i++) {
      traverse([...arr, nums[i]], i + 1);
    }
  }
  traverse([], 0);
  return result;
};

Is it worth converting an array into a set to search in NodeJS

I would like to know whether it is worth converting an array into a set in order to search, using NodeJS.
My use case is that this search is done a lot of times, but not necessarily on big data sets (the array can go up to ~2000 items from time to time).
I am looking for a specific id in a list.
Which approach is better:
const isPresent = (myArray, id) => {
  return Boolean(myArray.some((arrayElement) => arrayElement.id === id));
}
or
const mySet = new Set(myArray.map((arrayElement) => arrayElement.id))
const isPresent = (mySet, id) => {
  return mySet.has(id);
}
I know that theoretically the second approach is better as it is O(1) and O(n) for the first approach. But can the instantiation of the set offset the gain on small arrays?
@jonrsharpe - particularly for your case, I found that converting an array of 2k items to a Set itself takes ~1.15 ms. No doubt searching a Set is faster than an Array, but in your case this additional conversion can be a little costly.
You can run the code below in your browser console to check; new Set(arr) takes almost ~1.2 ms.
var arr = [], set = new Set(), n = 2000;
for (let i = 0; i < n; i++) {
  arr.push(i);
}
console.time('Set');
set = new Set(arr);
console.timeEnd('Set');
Adding an element to a Set is always costlier.
The code below shows the time required to insert an item into an array vs. a set, and it shows that array insertion is faster than Set insertion.
var arr = [], set = new Set(), n = 2000;
console.time('Array');
for (let i = 0; i < n; i++) {
  arr.push(i);
}
console.timeEnd('Array');
console.time('Set');
for (let i = 0; i < n; i++) {
  set.add(i);
}
console.timeEnd('Set');
I ran the following code to analyze the speed of locating an element in the array and in the set, and found that the set is 8-10 times faster than the array.
You can copy-paste this code into your browser to analyze further:
var arr = [], set = new Set(), n = 100000;
for (let i = 0; i < n; i++) {
  arr.push(i);
  set.add(i);
}
var result;
console.time('Array');
result = arr.indexOf(12313) !== -1;
console.timeEnd('Array');
console.time('Set');
result = set.has(12313);
console.timeEnd('Set');
So for your case array.some is better!
I will offer a different upside for using Set: your code becomes more semantic and easier to understand.
Other than that, this post has a nice comparison - Javascript Set vs. Array performance - but make your own measurements if you really feel that this is your bottleneck. Don't optimise things that are not your bottleneck!
My own heuristic: use an isPresent-like utility for nicer code, but if the check is done in a loop, always construct a Set beforehand, as in the sketch below.
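A minimal sketch of that heuristic (the data here is illustrative): build the Set of ids once, then test membership inside the loop.
const myArray = [{ id: 1 }, { id: 2 }, { id: 3 }];
const candidates = [{ id: 2 }, { id: 99 }];
const knownIds = new Set(myArray.map((el) => el.id)); // built once: O(n)
const isPresent = (id) => knownIds.has(id);           // each check: O(1)
console.log(candidates.map((c) => isPresent(c.id)));  // [true, false]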

RangeError: too many arguments provided for a function call

I found a nice solution for getting HTML comments from the HTML node tree:
var findComments = function(el) {
  var arr = [];
  for (var i = 0; i < el.childNodes.length; i++) {
    var node = el.childNodes[i];
    if (node.nodeType === 8) {
      arr.push(node);
    } else {
      arr.push.apply(arr, findComments(node));
    }
  }
  return arr;
};
var commentNodes = findComments(document);
// whatever you were going to do with the comment...
console.log(commentNodes[0].nodeValue);
from this thread.
All I did was add this small loop to print out all the nodes:
var arr = [];
var findComments = function(el) {
  for (var i = 0; i < el.childNodes.length; i++) {
    var node = el.childNodes[i];
    if (node.nodeType === 8) {
      arr.push(node);
    } else {
      arr.push.apply(arr, findComments(node));
    }
  }
  return arr;
};
var commentNodes = findComments(document);
// I added this
for (var counter = arr.length; counter > 0; counter--) {
  console.log(commentNodes[counter].nodeValue);
}
I keep getting this error message:
RangeError: too many arguments provided for a function call
debugger eval code:9:13
EDIT: I had a typo while pasting; I changed the code from i-- to counter--.
See this comment in the MDN docs about the use of apply to merge arrays:
Do not use this method if the second array (moreVegs in the example) is very large, because the maximum number of parameters that one function can take is limited in practice. See apply() for more details.
And the other note, from the apply() page:
But beware: in using apply this way, you run the risk of exceeding the JavaScript engine's argument length limit. The consequences of applying a function with too many arguments (think more than tens of thousands of arguments) vary across engines (JavaScriptCore has hard-coded argument limit of 65536), because the limit (indeed even the nature of any excessively-large-stack behavior) is unspecified. Some engines will throw an exception. More perniciously, others will arbitrarily limit the number of arguments actually passed to the applied function. To illustrate this latter case: if such an engine had a limit of four arguments (actual limits are of course significantly higher), it would be as if the arguments 5, 6, 2, 3 had been passed to apply in the examples above, rather than the full array.
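One common way to stay clear of that limit (a sketch; any loop that pushes elements one at a time works) is to avoid spreading a whole array as an arguments list:
function append(target, items) {
  // push one element per call instead of items.length arguments at once
  for (var i = 0; i < items.length; i++) {
    target.push(items[i]);
  }
  return target;
}
// e.g. append(arr, findComments(node)) instead of arr.push.apply(arr, findComments(node))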
As arrays start from index 0, the last item in the array is actually at arr.length - 1.
You can fix it with:
for (var counter = arr.length - 1; counter >= 0; counter--)
Notice I've used arr.length - 1 and counter >= 0, as zero is the first index of the array.
Adding the for loop is not the only thing you changed (and see the other answer about fixing that loop too). You also moved the declaration of arr from inside the function to outside, making arr effectively global.
Because of that, each recursive call to findComments() works on the same array, and the .apply() call pushes the entire contents back onto the end of the array every time. After a while, its length exceeds the argument limit of the runtime.
The original function posted at the top of your question has arr declared inside the function, so each recursive call has its own local array to work with. In a document with a great many comment nodes it could still hit that RangeError, however. A corrected version combining both fixes is sketched below.
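Putting the two fixes together (a sketch, not tested against your document):
var findComments = function(el) {
  var arr = []; // local again: each recursive call gets its own array
  for (var i = 0; i < el.childNodes.length; i++) {
    var node = el.childNodes[i];
    if (node.nodeType === 8) {
      arr.push(node);
    } else {
      arr.push.apply(arr, findComments(node));
    }
  }
  return arr;
};
var commentNodes = findComments(document);
for (var counter = commentNodes.length - 1; counter >= 0; counter--) {
  console.log(commentNodes[counter].nodeValue);
}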

JavaScript performance: array of objects, preassignment vs direct use

I have a doubt about how speed can be affected when using arrays of object data: is it better to use them directly, or to preassign their properties to simple vars?
I have an array of, for example, 1000 elements.
Every array item is an object with 10 properties (for example).
And finally I use some of these properties to do 10 calculations.
So I have APPROACH1
var nn = my_array.length;
var a1, a2, a3, a4 ... a10;
var cal1, cal2, .. cal10;
for (var x = 0; x < nn; x++) {
  // assignment
  a1 = my_array[x].data1;
  ..
  a10 = my_array[x].data10;
  // calculations
  cal1 = a1 * a10 + a2 * Math.abs(a3);
  ...
  cal10 = (a8 - a7) * 4 + Math.sqrt(a9);
}
And APPROACH2
var nn = my_array.length;
for (var x = 0; x < nn; x++) {
  // calculations
  cal1 = my_array[x].data1 * my_array[x].data10 + my_array[x].data2 * Math.abs(my_array[x].data3);
  ...
  cal10 = (my_array[x].data8 - my_array[x].data7) * 4 + Math.sqrt(my_array[x].data9);
}
Is assigning a1 ... a10 their values from my_array and then doing the calculations faster than doing the calculations with my_array[x].property directly, or is it the opposite?
I don't know how the 'JS compiler' works...
The short answer is: it depends on your JavaScript engine. There is no right and wrong here, only "this has worked in the past" and "this doesn't seem to speed things up any more".
tl;dr: if I could not run a jsperf test, I would go with the "Cached example" below.
A general rule of thumb is (read: was) that if you are going to use an element in an array more than once, it can be faster to cache it in a local variable, and if you are going to use a property on an object more than once, it should also be cached.
Example:
You have this code:
// Data generation (not discussed here)
function GetLotsOfItems() {
  var ret = [];
  for (var i = 0; i < 1000; i++) {
    ret[i] = { calc1: i * 4, calc2: i * 10, calc3: i / 5 };
  }
  return ret;
}
// Your calculation loop
var myArray = GetLotsOfItems();
for (var i = 0; i < myArray.length; i++) {
  var someResult = myArray[i].calc1 + myArray[i].calc2 + myArray[i].calc3;
}
Depending on your browser (read: this REALLY depends on your browser/its JavaScript engine) you could make this faster in a number of different ways.
You could, for example, cache the element being used in the calculation loop.
Cached example:
// Your cached calculation loop
var myArray = GetLotsOfItems();
var element;
var arrayLen = myArray.length;
for (var i = 0; i < arrayLen; i++) {
  element = myArray[i];
  var someResult = element.calc1 + element.calc2 + element.calc3;
}
You could also take this a step further and run it like this:
var myArray = GetLotsOfItems();
var element;
for (var i = myArray.length; i--;) { // start at the last element, travel backwards to the start
  element = myArray[i];
  var someResult = element.calc1 + element.calc2 + element.calc3;
}
What you do here is start at the last element: the condition block evaluates i and only AFTER that lowers it by one, so the loop body runs for i == length - 1 down to i == 0 (with --i, the body would instead see length - 1 down to 1 and skip index 0). In modern code, however, this is usually slower, because reading an array backwards defeats the run-time and compile-time optimizations that in-order reads allow (those are automatic; you don't need to do anything for them to work). Depending on your JavaScript engine this might not be applicable, and the backwards loop could still be faster.
However, in my experience this runs slower in Chrome than the second "kinda-optimized" version. (I have not tested this in jsperf, but in a CSP solver I wrote two years ago I ended up caching array elements, but not properties, and I ran my loops from 0 to length.)
You should (in most cases) write your code in a way that makes it easy to read and maintain. Caching array elements is, in my opinion, as easy to read (if not easier) as non-cached elements; it might be faster (it is, at least, not slower), and it is quicker to write if you use an IDE with autocomplete for JavaScript :P

JavaScript code for making my browser slow down

I'm writing a library for WebWorkers, and I want to test the difference between running a script in the main page thread versus in one or more workers. The problem is that I can't think, offhand, of a short function that will strain my browser enough to let me observe the difference.
A quick search didn't return much, but it might just be that I don't really know what to search for; usually I try to optimise my code, not make it slower...
I'm looking for algorithms or patterns that can be easily implemented in pure JavaScript, that do not depend on the DOM or XHR, and which can have an argument passed to limit or specify how far the calculation goes (no infinite algorithms); 1 s < avg time < 10 s.
Extra points if it can be built without recursion and if it does not incur a significant memory hog while remaining as processor-intensive as possible.
Try using the obvious (and bad) recursive implementation for the Fibonacci sequence:
function fib(x) {
  if (x <= 0) return 0;
  if (x == 1) return 1;
  return fib(x - 1) + fib(x - 2);
}
Calling it with values of ~30 to ~35 (depending entirely on your system) should produce good "slow down" times in the range you seek. The call stack shouldn't get very deep and the algorithm is something like O(2^n).
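For instance, a quick check in the console (the exact timing is machine-dependent):
console.time('fib');
fib(32); // increase the argument until the run is as slow as you need
console.timeEnd('fib');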
/**
 * Block the CPU for the given number of seconds
 * @param {Number} [seconds]
 */
function slowdown(seconds = 0.5) {
  const start = (new Date()).getTime()
  while ((new Date()).getTime() - start < seconds * 1000) {}
}
slowdown(2)
console.log('done')
Calling this method will slow code down for the given number of seconds (with ~200 ms precision).
Generate an array of numbers in reverse order and sort it.
var slowDown = function(n) {
  var arr = [];
  for (var i = n; i >= 0; i--) {
    arr.push(i);
  }
  arr.sort(function(a, b) {
    return a - b;
  });
  return arr;
}
This can be called like so:
slowDown(100000);
Or whatever number you want to use.
Check out the benchmarking code referenced by the Google V8 JavaScript engine.
For some reason Bogosort comes to mind. Basically it's a sorting algorithm that consists of:
while not list.isInOrder():
    list.randomize()
It has an average complexity of O(n * n!) with little memory, so it should slow things down pretty good.
The main downside is that its running time can be anywhere from O(n) to O(inf) (though really, O(inf) is pretty unlikely).
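A quick JavaScript sketch of the idea (a Fisher-Yates shuffle plus an order check; keep the input small, around 8-10 items, or it may effectively never finish):
function isSorted(a) {
  for (var i = 1; i < a.length; i++) {
    if (a[i - 1] > a[i]) return false;
  }
  return true;
}
function shuffle(a) { // Fisher-Yates shuffle
  for (var i = a.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var t = a[i]; a[i] = a[j]; a[j] = t;
  }
  return a;
}
function bogosort(a) {
  while (!isSorted(a)) shuffle(a);
  return a;
}
bogosort([5, 3, 8, 1, 9, 2, 7, 4]); // average O(n * n!), already slow at this size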
Everyone seems determined to be complicated. Why not this?
function waste_time(amount) {
  for (var i = 0; i < amount; i++);
}
If you're concerned the browser will optimize the loop out of existence entirely, you can make it marginally more complicated:
function waste_time(amount) {
  var tot = 0;
  for (var i = 0; i < amount; i++)
    tot += i;
}
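Since tot is never used, a sufficiently aggressive engine could still discard the whole loop; returning and consuming the result (a small tweak on the version above) makes that much harder:
function waste_time(amount) {
  var tot = 0;
  for (var i = 0; i < amount; i++)
    tot += i;
  return tot; // consuming the result keeps the work observable
}
console.log(waste_time(100000000));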
Compute lots of square roots manually?
function sqrt(number, maxDecimal) {
  var cDecimal = -1;
  var cNumber = 0;
  var direction = -1;
  while (cNumber * cNumber !== number && cDecimal < maxDecimal) {
    direction = -direction;
    cDecimal++;
    while ((cNumber * cNumber - number) / Math.abs(cNumber * cNumber - number) === direction) {
      cNumber += direction * Math.pow(10, -cDecimal);
    }
  }
  return Math.abs(cNumber);
}

function performTest() {
  for (var i = 0; i < 10000; i++) {
    sqrt(i, 3);
  }
}
Maybe this is what you are looking for:
var threadTest = function(durationMs, outputFkt, outputInterval) {
  var startDateTime = (new Date()).getTime(), // comma, not semicolon: keep the following variables local
      counter = 0,
      testDateTime = null,
      since = 0,
      lastSince = -1;
  do {
    testDateTime = (new Date()).getTime();
    counter++;
    since = testDateTime - startDateTime;
    if (typeof outputFkt != 'undefined' && lastSince != since && testDateTime % outputInterval == 0) {
      outputFkt(counter, since);
      lastSince = since;
    }
  } while (durationMs > since);
  if (typeof outputFkt != 'undefined') {
    outputFkt(counter, since);
  }
  return counter;
}
This method simply repeats a check in a loop.
durationMs - the duration it should run, in milliseconds
OPTIONAL:
outputFkt - a callback for logging purposes: function(currentCount, millisecondsSinceStart)
outputInterval - the interval at which the output function will be called
I figured that since you do not want to test a real function, and even NP-hard problems have a ratio between input length and time, this could be an easy way. You can measure performance at any interval and of course receive the number of loops as a return value, so you can easily measure how much threads interfere with each other's performance - with the callback, even on a per-cycle basis.
As an example, here is how I called it (jQuery and DOM usage appear here, but as you can see they are optional):
$(document).ready(function() {
  var outputFkt = function(counter, since) {
    $('body').append('<p>' + counter + ', since ' + since + '</p>');
  };
  threadTest(1000, outputFkt, 20);
});
A last warning: of course this function cannot be more exact than JS itself. Since modern browsers can run many more than one cycle per millisecond, there will be a little tail that gets cut off.
Update
Thinking about it... actually, using the outputFkt callback for more than just output could give great insight. You could pass a method that uses some shared properties, or you could use it to test heavy memory usage.
