for loop vs for loop in reverse: performance - javascript

According to "What's the Fastest Way to Code a Loop in JavaScript?" and "Why is it faster to decrement the iterator toward 0 than to increment?",
a basic for loop is slower than a for loop with a simplified test condition,
i.e.:
console.log("+++++++");
var until = 100000000;

function func1() {
  console.time("basic");
  var until2 = until;
  for (var i = 0; i < until2; i++) {}
  console.timeEnd("basic");
}

function func2() {
  console.time("reverse");
  var until2 = until;
  for (until2; until2--;) {}
  // while (until2--) {}
  console.timeEnd("reverse");
}

func1();
func2();
As you can see, the first function is, contrary to expectations, faster than the second. Did something change since the release of that Oracle article, or did I do something wrong?

Yes, something has changed since the article was released. For one thing, Firefox has gone from version 3 to version 38. With most new browser releases, the performance characteristics of many operations change.
If you try that code in different versions of different browsers on different systems, you will get quite different performance. Different browsers are optimised for different JavaScript code.
As performance differs, and you can't rely on any measurements being useful for very long, there are basically two principles you can follow if you need to optimise JavaScript:
Use the simplest and most common code for each task; that is the code that browser vendors will try hardest to optimise (see the sketch below).
Don't look for the best performance in a specific browser; look for the worst performance in any browser. Test the code in different browsers, and pick a method that doesn't give remarkably bad performance in any of them.
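To make the first principle concrete, here is a minimal sketch of the kind of measurement you can run yourself in the browsers you care about, rather than trusting a published trick. The work() function and the iteration count are invented for illustration; with an empty loop body an engine may optimise the whole loop away, so give it something to do.

function work(i) {
  // stand-in for whatever the loop body really does
  return i % 7;
}

function timeIt(label, fn) {
  console.time(label);
  fn();
  console.timeEnd(label);
}

var n = 10000000;

timeIt("idiomatic forward loop", function () {
  var sum = 0;
  for (var i = 0; i < n; i++) sum += work(i);
  return sum;
});

timeIt("reverse-condition loop", function () {
  var sum = 0;
  for (var i = n; i--;) sum += work(i);
  return sum;
});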

Related

What is the best option to fill an array in JavaScript?

Recently I have been solving some programming problems where I need to fill an array with a default value. I found some approaches but don't know which one is the best option with respect to performance.
For example, if I want to fill an array of size 10 with 0. Here are some options that I ended up with.
Option One
let arr = []
arr.length = 10
arr.fill(0)
console.log(arr)
Option Two
let arr = new Array(10).fill(0)
console.log(arr)
Option Three
let arr = Array(10).fill(0)
console.log(arr)
Option Four
function makeNewArray(size, value) {
  let arr = []
  for (let i = 1; i <= size; i++)
    arr.push(value)
  return arr
}
let arr = makeNewArray(10,0)
console.log(arr)
I am confused about which one is standard to use and takes the least time to run. Or is there any other, better approach?
If the performance really matters for your use case, test it. On my browser, the last is (slightly surprisingly to me) apparently the fastest.
However, all of these can execute millions of times per second (in Chrome on my mid-range laptop), so I'm rather skeptical that this is important to the overall performance of your application. I'd recommend avoiding premature optimization of this sort except in code that you know to be performance-critical. Everywhere else I'd prioritize legibility, so I would write something like new Array(10).fill(0) which is concise, likely to be understood by a JavaScript developer, and plenty fast.
The performance difference between that and the last likely has to do with the JS runtime implementation details. (In my case, that's V8, which doesn't optimize Array.prototype.fill using Torque/CSA builtins that can be optimized at the call site, but instead calls out to C++ code. Likely this is because fill performance hasn't often been an issue in the past.)
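If you do want to measure it yourself, a rough sketch along these lines is enough to compare the options on your own machine. The iteration count is arbitrary, and micro-benchmarks like this are easily skewed by JIT warm-up and dead-code elimination, so treat the numbers as indicative only.

function makeNewArray(size, value) {
  let arr = []
  for (let i = 0; i < size; i++) arr.push(value)
  return arr
}

const candidates = {
  'new Array(10).fill(0)': () => new Array(10).fill(0),
  'Array(10).fill(0)': () => Array(10).fill(0),
  'push loop': () => makeNewArray(10, 0)
}

for (const name of Object.keys(candidates)) {
  const fn = candidates[name]
  const start = performance.now()
  for (let i = 0; i < 1000000; i++) fn()
  console.log(name, (performance.now() - start).toFixed(1) + ' ms')
}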

JavaScript Performance: Multiple variables or one object?

This is just a simple performance question, to help me understand the JavaScript engine.
I was wondering: what is faster, declaring multiple variables for certain values or using one object containing multiple values?
example:
var x = 15;
var y = 300;
vs.
var sizes = { x: 15, y: 300 };
This is just a very simple example; it could of course differ in a real project.
Does this even matter?
A complete answer to that question would be really long, so I'll try to explain only a few things. First, maybe the most important fact: even if you declare a variable with var, it depends where you do that. In the global scope, you implicitly also write that variable into an object; most browsers call it window. So for instance:
// global scope
var x = 15;
console.log( window.x ); // 15
If we do the same thing within the context of a function, things change. Within a function, we write that variable name into its so-called 'Activation Object': an internal object which the JS engine handles for you. All formal parameters, function declarations and variables are stored there.
Now to answer your actual question: within the context of a function, variables declared with var always give the fastest possible access. This is not necessarily true in the global context. The global object is huge, and it's not really fast to access anything within it.
If we store things within an object, it's still very fast, but not as fast as variables declared with var; in particular, access times increase. Nonetheless, we are talking about micro- and nanoseconds here (in modern browser implementations). Older browsers, especially IE6/7, have huge performance penalties when accessing object properties.
If you are really interested in stuff like this, I highly recommend the book 'High Performance JavaScript' by Nicholas C. Zakas. He measured lots of different techniques for accessing and storing data in ECMAScript.
Again, the performance difference between object lookups and variables declared with var is almost not measurable in modern browsers. Older browsers like FF3 or IE6 do show fundamentally slow performance for object lookups/access.
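If you want to see whether the difference is measurable in your own browser, here is a minimal sketch of that comparison (the numbers will vary wildly between engines, and an optimising JIT may flatten the difference entirely):

function withLocals(n) {
  var x = 15, y = 300, sum = 0;
  for (var i = 0; i < n; i++) sum += x + y;
  return sum;
}

function withObject(n) {
  var sizes = { x: 15, y: 300 }, sum = 0;
  for (var i = 0; i < n; i++) sum += sizes.x + sizes.y;
  return sum;
}

console.time("local variables");
withLocals(10000000);
console.timeEnd("local variables");

console.time("object properties");
withObject(10000000);
console.timeEnd("object properties");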
foo_bar is always faster than foo.bar in every modern browser (IE11+/Edge and any version of Chrome, Firefox, and Safari) and Node.js, so long as you see performance as holistic (which I recommend you should). After millions of iterations in a tight loop, foo.bar may approach (but never surpass) the same ops/s as foo_bar thanks to a wealth of correct branch predictions. Notwithstanding, foo.bar incurs far more overhead during both JIT compilation and execution because it is a much more complex operation. JavaScript that features no tight loops benefits even more from using foo_bar because, in comparison, foo.bar would have a much higher overhead-to-savings ratio: there is extra overhead involved in the JIT of foo.bar just to make foo.bar a little faster in a few places. Granted, all JIT engines intelligently try to guess how much effort should be put into optimizing what, in order to minimize needless overhead, but there is still a baseline overhead incurred by processing foo.bar that can never be optimized away.
Why? JavaScript is a highly dynamic language, where there is costly overhead associated with every object. It was originally a tiny scripting language executed line by line and still exhibits line-by-line execution behavior (it's not executed line by line anymore, but, for example, one can do something evil like var a=10;eval('a=20');console.log(a) to log the number 20). JIT compilation is highly constrained by the fact that JavaScript must observe this line-by-line behavior. Not everything can be anticipated by the JIT, so all code must stay slow enough that extraneous code such as the example below runs fine.
(function() {"use strict";
// chronological optimization is very poor because it is so complicated and volatile
var setTimeout=window.setTimeout;
var scope = {};
scope.count = 0;
scope.index = 0;
scope.length = 0;
function increment() {
// The code below is SLOW because JIT cannot assume that the scope object has not changed in the interum
for (scope.index=0, scope.length=17; scope.index<scope.length; scope.index=scope.index+1|0)
scope.count = scope.count + 1|0;
scope.count = scope.count - scope.index + 1|0;
}
setTimeout(function() {
console.log( scope );
}, 713);
for(var i=0;i<192;i=i+1|0)
for (scope.index=11, scope.length=712; scope.index<scope.length; scope.index=scope.index+1|0)
setTimeout(increment, scope.index);
})();
(function() {"use strict";
// chronological optimization is very poor because it is so complicated and volatile
var setTimeout=window.setTimeout;
var scope_count = 0;
var scope_index = 0;
var scope_length = 0;
function increment() {
// The code below is FAST because JIT does not have to use a property cache
for (scope_index=0, scope_length=17; scope_index<scope_length; scope_index=scope_index+1|0)
scope_count = scope_count + 1|0;
scope_count = scope_count - scope_index + 1|0;
}
setTimeout(function() {
console.log({
count: scope_count,
index: scope_index,
length: scope_length
});
}, 713);
for(var i=0;i<192;i=i+1|0)
for (scope_index=4, scope_length=712; scope_index<scope_length; scope_index=scope_index+1|0)
setTimeout(increment, scope_index);
})();
Performing a one-sample z-interval by running each code snippet above 30 times and seeing which one gave a higher count, I am 90% confident that the latter code snippet with pure variable names is faster than the first code snippet with object access between 76.5% and 96.9% of the time. As another way to analyze the data, there is a 0.0000003464% chance that the data I collected was a fluke and the first snippet is actually faster. Thus, I believe it is reasonable to infer that foo_bar is faster than foo.bar because there is less overhead.
Don't get me wrong. Hash maps are very fast because many engines feature advanced property caches, but there will still always be enough extra overhead when using hash maps. Observe.
(function(){"use strict"; // wrap in iife
// This is why you should not pack variables into objects
var performance = window.performance;
var iter = {};
iter.domino = -1; // Once removed, performance topples like a domino
iter.index=16384, iter.length=16384;
console.log(iter);
var startTime = performance.now();
// Warm it up and trick the JIT compiler into false optimizations
for (iter.index=0, iter.length=128; iter.index < iter.length; iter.index=iter.index+1|0)
if (recurse_until(iter, iter.index, 0) !== iter.domino)
throw Error('mismatch!');
// Now that its warmed up, drop the cache off cold and abruptly
for (iter.index=0, iter.length=16384; iter.index < iter.length; iter.index=iter.index+1|0)
if (recurse_until(iter, iter.index, 0) !== iter.domino)
throw Error('mismatch!');
// Now that we have shocked JIT, we should be running much slower now
for (iter.index=0, iter.length=16384; iter.index < iter.length; iter.index=iter.index+1|0)
if (recurse_until(iter, iter.index, 0) !== iter.domino)
throw Error('mismatch!');
var endTime=performance.now();
console.log(iter);
console.log('It took ' + (endTime-startTime));
function recurse_until(obj, _dec, _inc) {
var dec=_dec|0, inc=_inc|0;
var ret = (
dec > (inc<<1) ? recurse_until(null, dec-1|0, inc+1|0) :
inc < 384 ? recurse_until :
// Note: do not do this in production. Dynamic code evaluation is slow and
// can usually be avoided. The code below must be dynamically evaluated to
// ensure we fool the JIT compiler.
recurse_until.constructor(
'return function(obj,x,y){' +
// rotate the indices
'obj.domino=obj.domino+1&7;' +
'if(!obj.domino)' +
'for(var key in obj){' +
'var k=obj[key];' +
'delete obj[key];' +
'obj[key]=k;' +
'break' +
'}' +
'return obj.domino' +
'}'
)()
);
if (obj === null) return ret;
recurse_until = ret;
return obj.domino;
}
})();
For a performance comparison, observe pass-by-reference via an array and local variables.
// This is the correct way to write blazingly fast code
(function(){"use strict"; // wrap in iife
var performance = window.performance;
var iter_domino=[0,0,0]; // Now, domino is a pass-by-reference list
var iter_index=16384, iter_length=16384;
var startTime = performance.now();
// Warm it up and trick the JIT compiler into false optimizations
for (iter_index=0, iter_length=128; iter_index < iter_length; iter_index=iter_index+1|0)
if (recurse_until(iter_domino, iter_index, 0)[0] !== iter_domino[0])
throw Error('mismatch!');
// Now that its warmed up, drop the cache off cold and abruptly
for (iter_index=0, iter_length=16384; iter_index < iter_length; iter_index=iter_index+1|0)
if (recurse_until(iter_domino, iter_index, 0)[0] !== iter_domino[0])
throw Error('mismatch!');
// Now that we have shocked JIT, we should be running much slower now
for (iter_index=0, iter_length=16384; iter_index < iter_length; iter_index=iter_index+1|0)
if (recurse_until(iter_domino, iter_index, 0)[0] !== iter_domino[0])
throw Error('mismatch!');
var endTime=performance.now();
console.log('It took ' + (endTime-startTime));
function recurse_until(iter_domino, _dec, _inc) {
var dec=_dec|0, inc=_inc|0;
var ret = (
dec > (inc<<1) ? recurse_until(null, dec-1|0, inc+1|0) :
inc < 384 ? recurse_until :
// Note: do not do this in production. Dynamic code evaluation is slow and
// can usually be avoided. The code below must be dynamically evaluated to
// ensure we fool the JIT compiler.
recurse_until.constructor(
'return function(iter_domino, x,y){' +
// rotate the indices
'iter_domino[0]=iter_domino[0]+1&7;' +
'if(!iter_domino[0])' +
'iter_domino.push( iter_domino.shift() );' +
'return iter_domino' +
'}'
)()
);
if (iter_domino === null) return ret;
recurse_until = ret;
return iter_domino;
}
})();
JavaScript is very different from other languages in that benchmarks can easily be a performance-sin when misused. What really matters is what should in theory run the fastest accounting for everything in JavaScript. The browser you are running your benchmark in right now may fail to optimize for something that a later version of the browser will optimize for.
Further, browsers are guided in the direction we program. If everyone used CodeA, which makes no performance sense via pure logic but is really fast (44 Kops/s) only in a certain browser, other browsers will lean towards optimizing CodeA, and CodeA may eventually surpass 44 Kops/s in all browsers. On the other hand, if CodeA were really slow in all browsers (9 Kops/s) but very logical performance-wise, browsers would be able to take advantage of that logic and CodeA might soon surpass 900 Kops/s in all browsers. Ascertaining the logical performance of code is very simple and very difficult. One must put oneself in the shoes of the computer and imagine having an infinite amount of paper, an infinite supply of pencils, an infinite amount of time, and no ability to interpret the purpose/intention of the code. How can you structure your code to fare the best under such hypothetical circumstances? For example, hypothetically, the hash-map lookups incurred by foo.bar would be a bit slower than foo_bar because foo.bar requires looking at the table named foo and finding the property named bar. You could put your finger on the location of the bar property to cache it, but the overhead of looking through the table to find bar costs time.
You are definitely micro-optimizing. I wouldn't worry about it until there is a demonstrable performance bottleneck, and you have narrowed the issue down to using multiple vars vs an object with properties.
Thinking about it logically, the object approach requires three allocations: one for the object and one for each property on the object, versus two for just declaring the variables. So the object will have a higher memory footprint. However, it is probably more efficient to pass an object to a method than n > 1 separate variables, since you only need to copy one value (JavaScript is pass-by-value). This also has implications for keeping track of the lexical scoping of the objects; i.e. passing fewer things to methods will use less memory.
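As a small illustration of the passing point (the names here are invented for this example): both calls carry the same data, but the object version passes a single reference while the other copies two separate values.

function areaFromObject(sizes) {
  return sizes.x * sizes.y;
}

function areaFromVars(x, y) {
  return x * y;
}

var sizes = { x: 15, y: 300 };
areaFromObject(sizes);  // one reference is passed
areaFromVars(15, 300);  // two values are passed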
However, I doubt the performance difference will even be quantifiable by any profiler.
Theoretical answers, or questions like "What are you ...hmm... doing, dude?", can of course appear here as answers, but I don't think that's a good approach.
I just created two test benches:
Specific case, http://jsben.ch/SvNyw, for the global scope
It shows, for example, that as of 07/2017, in Chromium-based browsers (Vivaldi, Opera, Google Chrome and others), it is preferable to use var to achieve maximum performance. It is about 25% faster for reading values and 10% faster for writing them.
Under Node.js the results are about the same, because it uses the same JS engine.
In Opera Presto (12.18) the percentage results are similar to those in Chromium-based browsers.
In (modern) Firefox the picture is different and strange. Reading a global-scope var is about the same as reading an object property, while writing a global-scope var is dramatically slower than writing obj.prop (around twice as slow). It looks like a bug.
You are welcome to test under IE/Edge or any other browsers.
Normal case, http://jsben.ch/5UvSZ, for in-function local scope
In both Chromium-based browsers and Mozilla Firefox you can see the huge dominance of plain var performance over object property access. Local simple variables are several times (!) faster than dealing with object properties.
So,
if you need to maximize the performance of some critical JavaScript code:
in the browser - you may be forced to make different optimizations for different browsers. I don't recommend it! Or you can select some "favorite" browser, optimize your code for it, and ignore whatever freezes happen in the other ones. Not very good, but it is one way.
in the browser, again - do you really need to optimize this way? Maybe something is wrong in your algorithm / code logic?
in a high-load Node.js module (or other high-load calculations) - well, try to minimize object "dots", with minimal damage to quality/readability of course - use var.
The safe optimization trick for any case: when you have too many operations on obj.subobj.*, you can do var subobj = obj.subobj; and operate on subobj.*. This can even improve readability.
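For example, a small sketch with invented names:

// Invented example: cache the nested object once instead of repeating
// the full property chain everywhere in a hot path.
var config = { layout: { width: 800, height: 600, margin: 10 } };

var layout = config.layout;                            // one lookup here...
var contentWidth = layout.width - 2 * layout.margin;   // ...cheaper lookups after
var contentHeight = layout.height - 2 * layout.margin;
console.log(contentWidth, contentHeight);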
In any case, think about what you actually need to do, and make real benchmarks of your high-load code.

Javascript toLowerCase() performance versus variable creation

Which is more efficient:
var text="ABCdef";
var lowerVersion=text.toLowerCase();
if (lowerVersion=='abcdef' || lowerVersion=='asdfgh' || lowerVersion=='zxcvbn'){...
or
var text="ABCdef";
if (text.toLowerCase()=='abcdef' || text.toLowerCase()=='asdfgh' || text.toLowerCase()=='zxcvbn'){...
i.e. is variable creation more expensive than running toLowerCase() several times?
Thanks.
This is JavaScript. The answer is going to be: It depends. It depends on what engine you're using, on your data, on the other things in the context, on whether the first or last match matches, on alternate Tuesdays...
But creating variables in JavaScript is very fast. In contrast, the repeated calls version asks the interpreter to make multiple function calls, and function calls (while fast by any real measure) are slow compared to most other operations. The only way that's going to be as fast is if the interpreter can figure out that it can cache the result of the call, which is tricky.
Taking @Felix's performance test and making it pessimistic (e.g., worst case where none of them match) suggests that even Chrome can't optimize it enough to make the repeated function calls not come out worse. I didn't do any comprehensive tests, but Chrome, Firefox, and Opera all came out about 60% slower.
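A rough sketch of that kind of pessimistic test, with an input that matches none of the strings (the iteration count is arbitrary, and an engine may optimize away the empty branches, so take the result as indicative only):

var text = "noMatchHere";

console.time("cached lowercase");
for (var i = 0; i < 1000000; i++) {
  var lowerVersion = text.toLowerCase();
  if (lowerVersion == 'abcdef' || lowerVersion == 'asdfgh' || lowerVersion == 'zxcvbn') {}
}
console.timeEnd("cached lowercase");

console.time("repeated toLowerCase");
for (var i = 0; i < 1000000; i++) {
  if (text.toLowerCase() == 'abcdef' || text.toLowerCase() == 'asdfgh' || text.toLowerCase() == 'zxcvbn') {}
}
console.timeEnd("repeated toLowerCase");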
You have an alternative, of course:
var text="ABCdef";
switch (text.toLowerCase()) {
case 'abcdef':
// ...
break;
case 'asdfgh'
// ...
break;
case 'zxcvbn'
// ...
break;
}
All of this is premature optimisation, though, which is bad enough generally but particularly bad with JavaScript and the varying environments in which it runs complicating things.
The better question is: What's clearer and more maintainable?
Without any doubt, the 2nd implementation will be significantly faster than the 1st one.
Each call to text.toLowerCase() takes roughly O(n) time, so it is 3×O(n) versus O(n).
I have run the test on jsPerf.com and the 2nd snippet is 18% faster.
If you are going to refer to the value more than once, store it as a variable: toLowerCase() can be very slow with long strings.
Caching being faster seems logical (three toLowerCase calls vs one), but a (modern) browser's scripting engine may well do that for you. I don't think it will matter much if the operation only runs once or a few times. It may be a question of taste, but I think assigning a variable is more readable/maintainable.
Alternative may be using a Regular Expression for the check:
var text="ABCdef";
console.log(/^(abcdef|asdfgh|zxcvbn)$/i.test(text)
? `${text} is ok` : `${text} is NOT ok`);

JQuery map vs Javascript map vs For-loop

I'm implementing some code that is a natural fit for map. However, I have a significant number of objects in a list that I'm going to iterate through, so my question is which is the best way to go about this:
var stuff = $.map(listOfMyObjects, someFunction)
var stuff = listOfMyObjects.map(someFunction)
or just
var stuff = new Array();
for (var i = 0; i < listOfMyObjects.length; i++) {
  stuff.push(someFunction(listOfMyObjects[i]));
}
Here is a test case done on jsben.ch: http://jsben.ch/#/BQhED
It shows that a plain for loop is faster than a jQuery map (at least in Chrome).
The latter (for loop) is much faster. I remember seeing a benchmark somewhere but I can't seem to find the link.
If performance is really an issue then I would use the for loop. It doesn't really obscure the code that much.
First of all, plain Objects don't have a native .map() method, nor a .length property, so we are either talking about Arrays or array-like objects (jQuery objects, for instance).
However, there is no faster way to iterate than using a native for, while or do-while loop. All other functional operations perform (guess what) a function call for each iteration, which has a cost.
jQuery's .each() will just perform a for-in loop when an object is passed to it. That is about the fastest way to loop over an object. You could just use a for-in yourself and save the overhead of the call.
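A quick sketch of that, using a throwaway object; the hasOwnProperty check is optional but keeps inherited properties out of the loop:

var data = { foo: 'bar', answer: 42 };

for (var key in data) {
  if (data.hasOwnProperty(key)) {
    console.log(key, data[key]);
  }
}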
Another "good" way in terms of readabilty is to make usage of ES5 features like .keys() and .map(). For instance:
var myObj = {
  foo: 'bar',
  base: 'ball',
  answer: 42,
  illuminati: 23
};

Object.keys(myObj).map(function(prop) {
  console.log(myObj[prop]);
});
Which I think is a very good trade-off in terms of readability, convenience and performance. Of course, you should use an ES5 abstraction library for older browsers.
But again, no way to beat native loops in terms of performance.
+1 for the "test it" answer by Emil :) That's always the right answer.
But yeah, native loops win, and you can do one better by caching the length so the .length reference isn't evaluated each iteration.
for(var i = 0, l = list.length; i < l; i++)
or avoid the extra var by doing it backwards
for(var i = list.length-1; i >= 0; i--)
And, if you can 'inline' the 'someFunction', that will be faster still. Basically, avoid function calls and references as much as you can. But that's only if you are really looking at fine detail. It's unlikely optimizations like this are going to matter much. Always test to find your bottlenecks.
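For instance, if someFunction only wraps a small expression (here a made-up obj.value * 2), inlining it removes one function call per iteration:

// Sample data standing in for the question's listOfMyObjects:
var listOfMyObjects = [{ value: 1 }, { value: 2 }, { value: 3 }];

// Hypothetical: assume someFunction(obj) just returned obj.value * 2.
var stuff = [];
for (var i = 0, l = listOfMyObjects.length; i < l; i++) {
  stuff.push(listOfMyObjects[i].value * 2); // body of someFunction inlined
}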
Create a test case with your HTML/JavaScript code at jsperf.
You will be able to see what works best, and how fast different browsers perform the loops.
I would put my money on the native JavaScript loop, but you never know.

Is javascript str.length calculated every time it is called or just once?

Which way is more efficient? Is there a difference?
This one:
var str = 'abc';
if(str.length == 20) {
//...
}
if(str.length == 25) {
//...
}
// and so on
Or this one:
var str = 'abc';
var length = str.length;
if(length == 20) {
//...
}
if(length == 25) {
//...
}
// and so on
In the browsers where this might actually matter (read: IE) it will be calculated every time, so it's faster to store the value in a local variable.
http://jsperf.com/string-length
It used to be that
var len = someArray.length;
for (var i=0; i<len; i++) {
// ...
}
was faster than
for (var i=0; i<someArray.length; i++) {
// ...
}
but these days V8 (Chrome's JS engine) optimizes the latter to run faster than the former. That's great - just remember, you don't really need to worry about performance in Chrome.
If you're curious to learn more about JavaScript performance, High Performance JavaScript is a solid read. Take its recommendations with a grain of salt, though, since a trick that makes code run faster in IE (6, 7, 8 or even 9) might very well make the code run slower in Chrome or Firefox 4.
The second is by far the safer way to go. In the first you are assuming that it won't get recalculated. In the second you know that it won't. The second isn't always the best way though. It will only work when you know other processes won't affect the length of the array. So with global variables etc. you have to be careful. This can also apply to modifying the contents (length) of an array inside a for loop which stops at the upper bound of the array.
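For example, if the loop itself grows the array, a cached length no longer reflects reality (a small illustration):

var arr = [1, 2, 3];
var len = arr.length;             // cached once: 3
for (var i = 0; i < len; i++) {
  if (arr[i] === 2) arr.push(99); // arr.length is now 4...
}
console.log(arr.length, len);     // 4 3 - the loop still stopped after 3 iterations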
Strings are immutable in JavaScript, so it is unlikely that even a bad implementation of JavaScript would recalculate the length property of the string every time you access it.
You can actually test this yourself using jsperf; using Chrome 12, it actually looks like your first example is faster.
In theory, I would expect that the second code block should be quicker.
However, given that today's JS interpreters are actually highly optimised JIT compilers, I would imagine that they would spot that kind of thing and optimise it.
That should apply to pretty much all browsers in current mainstream use, with the obvious exception of IE8 and lower, where it's anyone's guess how it does it, but it'll be slow either way.
