how performance differs in loop declarations - javascript

How do these two for loops differ in performance, and why? What is the best way to iterate?
var letters = ['a', 'b', 'c', 'd', 'e', 'f', '', '', '', ''];
var start = new Date();
for (var i = 0, abc = letters.length; i < abc; i++) {
    alert(letters[i]);
}
var end = new Date();
alert(end - start);

var letters1 = ['a', 'b', 'c', 'd', 'e', 'f', '', '', '', ''];
var start1 = new Date();
for (var i = 0; i < letters1.length; i++) {
    alert(letters1[i]);
}
var end1 = new Date();
alert(end1 - start1);

As I mentioned in a comment, testing with alerts is absolute nonsense, because each alert pauses the loop.
You can compare JS performance on http://jsperf.com/, for example.
Here's the result:
UPDATE: made some better tests.
Snippet 1 seems to be better because it caches the result of letters.length.
Here's the test: http://jsperf.com/testing-perf-fo-for-loop
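For reference, here is a rough sketch of the same comparison without alerts, using console.time on a larger array (the array size and the labels are made up for illustration):

var letters = [];
for (var k = 0; k < 1000000; k++) letters.push('x');
var sink;

console.time('cached length');
for (var i = 0, len = letters.length; i < len; i++) {
    sink = letters[i]; // just touch the element instead of alerting it
}
console.timeEnd('cached length');

console.time('uncached length');
for (var j = 0; j < letters.length; j++) {
    sink = letters[j];
}
console.timeEnd('uncached length');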

In old browsers, caching the limit of a loop when iterating over objects like arrays was nearly always much faster than getting the limit (e.g. array.length) every time. In modern browsers, not so much.
Caching usually only has a significant performance benefit if the object being iterated is a type that the script engine knows might change during iteration, but doesn't.
For example, if iterating over a live NodeList:
var nodeList = document.getElementsByTagName('*');
for (var i = 0, iLen = nodeList.length; i < iLen; i++) {
    // do stuff
}
is measurably (if not noticeably) faster in browsers than:
for (var i = 0; i < nodeList.length; i++) {
    // do stuff
}
However, as others have noted, the work done inside the loop may be far more significant, so saving a few cycles in the test isn't useful. Still, some developers' standard loop pattern is to always cache the limit whenever it isn't modified inside the loop.
BTW, for measuring time in browsers there is the High Resolution Time specification (also see Performance.now on MDN).
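For instance, a minimal sketch of timing a loop with performance.now(), assuming a browser that supports the High Resolution Time API (the array and its size are arbitrary):

var items = [];
for (var j = 0; j < 1000000; j++) items.push(0);

var t0 = performance.now();
for (var i = 0, len = items.length; i < len; i++) {
    items[i] += 1; // some trivial work per element
}
var t1 = performance.now();

// performance.now() returns fractional milliseconds
console.log('loop took ' + (t1 - t0).toFixed(3) + ' ms');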

Related

for loop vs for loop in reverse: performance

According to What's the Fastest Way to Code a Loop in JavaScript? and Why is it faster to decrement the iterator toward 0 than to increment it,
a basic for loop is slower than a for loop with a simplified test condition,
i.e.:
console.log("+++++++");
var until = 100000000;
function func1() {
console.time("basic")
var until2 = until;
for (var i = 0; i < until2; i++) {}
console.timeEnd("basic")
}
function func2() {
console.time("reverse")
var until2 = until;
for (until2; until2--;) {}
//while(until2--){}
console.timeEnd("reverse")
}
func1();
func2();
As you can see, the first function is, contrary to expectations, faster than the second. Did something change since the release of this Oracle article, or did I do something wrong?
Yes, something has changed since the article was released. Firefox has gone from version 3 to version 38, for one thing. Whenever a new version of a browser is released, the performance of many things changes.
If you try that code in different versions of different browsers on different systems, you will see that you will get quite a difference in performance. Different browsers are optimised for different Javascript code.
As performance differs, and you can't rely on any measurements to be useful for very long, there are basically two principles that you can follow if you need to optimise Javascript:
Use the simplest and most common code for each task; that is the code that browser vendors will try to optimise the most.
Don't look for the best performance in a specific browser; look for the worst performance in any browser. Test the code in different browsers, and pick a method that doesn't give remarkably bad performance in any of them.
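As a sketch of that workflow, here is a hypothetical mini-harness you could paste into each browser's console and compare the logged timings (the names and iteration count are made up):

function time(label, fn) {
    var t0 = Date.now();
    fn();
    console.log(label + ': ' + (Date.now() - t0) + ' ms');
}

var N = 100000000;

time('incrementing', function () {
    for (var i = 0; i < N; i++) {}
});

time('decrementing', function () {
    for (var i = N; i--;) {}
});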

For Loop optimization in Javascript

So I have been taking a computer science course that uses C++ to teach programming concepts. Today I learned a new concept that I was not sure applied to JS: system resources are expended each time something like a string's .length is calculated. It seems like a tiny matter, but it got me thinking about huge arrays and how that could add up. Check out this example and let me know if loop #2 is indeed more efficient than the first. Thanks:
var weekDay = ["Monday", "Tuesday", "Wednesday"];

// for loop #1
for (i = 0; i < weekDay.length; i++) {
    // code code code
}

// for loop #2
for (i = 0; var n = weekDay.length; i < n; i++) {
    // code code code
}
The second approach is faster, but not by much. Also, there is a small syntax error in it (a for loop header only has three clauses); the corrected version is:
for( var i = 0, n = weekDay.length; i < n; i++ ){ ... }
This is rather common in JavaScript code. Please note the importance of declaring all of your variables with var so that they do not step on the wrong scope.
You can see this js performance test here: http://jsperf.com/forloopiterator which shows the results being 24% faster when using the second approach.
First of all, premature optimization is the root of all evil.
Secondly; you are completely correct that loop #2 is more efficient.
Loop #1 would calculate the length of weekDay on every iteration of the loop. This means it would calculate the length 10,000 times for a 10,000-element array.
Loop #2 would calculate the length of weekDay and set the variable n to be the result, and hence we keep the length in a variable rather than recalculating it for every iteration.
Read more about why premature optimization is bad.
This question has been asked a few times...
Is optimizing JavaScript for loops really necessary?
I found the following link very helpful. Basically it depends on browser version and vendor.
In some cases (e.g. IE), yes, #2 is much faster:
http://jsperf.com/loop-iteration-length-comparison-variations
I would always pre-cache the length explicitly, i.e.:
var n = weekDay.length;
var i;
for (i = 0; i < n; i++) {
    do_something();
}
This is a clearer approach anyway, as all variable declarations are 'hoisted' to the top of the function regardless.
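To illustrate that point, a small sketch of var hoisting (the function and variable names are hypothetical):

function demo() {
    console.log(n); // undefined rather than a ReferenceError: the declaration of n is hoisted
    var n = 3;      // ...but the assignment happens here
    console.log(n); // 3
}
demo();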

JQuery map vs Javascript map vs For-loop

I'm implementing some code that is a natural fit for map. However, I have a significant number of objects in a list that I'm going to iterate through, so my question is which is the best way to go about this:
var stuff = $.map(listOfMyObjects, someFunction);
var stuff = listOfMyObjects.map(someFunction);
or just
var stuff = [];
for (var i = 0; i < listOfMyObjects.length; i++) {
    stuff.push(someFunction(listOfMyObjects[i]));
}
Here is a test case done on jsben.ch: http://jsben.ch/#/BQhED
It shows that a plain for loop is faster than a jQuery map (at least in Chrome).
The latter (for loop) is much faster. I remember seeing a benchmark somewhere but I can't seem to find the link.
If performance is really an issue then I would use the for loop. It doesn't really obscure the code that much.
First of all, plain objects don't have a native .map() method, nor a .length property. So we are either talking about arrays or array-like objects (jQuery objects, for instance).
However, there is no faster way to iterate than with a native for, while or do-while loop. All other functional operations perform (guess what) a function call on each iteration, which costs.
jQuery's .each() will just perform a for-in loop when an object is passed to it. That is pretty much the fastest way to loop over an object; you could just use a for-in yourself and save the overhead of the extra call.
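A sketch of that plain for-in loop (the object is hypothetical; the hasOwnProperty guard is a common defensive addition to skip inherited keys):

var scores = { alice: 3, bob: 7 };

for (var key in scores) {
    if (Object.prototype.hasOwnProperty.call(scores, key)) {
        console.log(key + ' = ' + scores[key]);
    }
}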
Another "good" way in terms of readabilty is to make usage of ES5 features like .keys() and .map(). For instance:
var myObj = {
    foo: 'bar',
    base: 'ball',
    answer: 42,
    illuminati: 23
};

Object.keys(myObj).map(function (prop) {
    console.log(myObj[prop]);
});
Which I think is a very good trade-off in terms of readability, convenience and performance. Of course you should use an ES5 abstraction library for oldish browsers.
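As a rough sketch of what such a fallback might look like for Object.keys (simplified; it ignores old engines' enumeration quirks):

if (!Object.keys) {
    Object.keys = function (obj) {
        var keys = [];
        for (var key in obj) {
            if (Object.prototype.hasOwnProperty.call(obj, key)) {
                keys.push(key);
            }
        }
        return keys;
    };
}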
But again, no way to beat native loops in terms of performance.
+1 for the "test it" answer by Emil :) That's always the right answer.
But yeah, native loops win, and you can do one better by caching the length so the .length reference isn't evaluated each iteration.
for(var i = 0, l = list.length; i < l; i++)
or avoid the extra var by doing it backwards
for(var i = list.length-1; i >= 0; i--)
And, if you can 'inline' someFunction, that will be faster still. Basically, avoid function calls and references as much as you can. But that's only if you are really looking at fine detail. It's unlikely that optimizations like this will matter much; always test to find your bottlenecks.
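A sketch of what 'inlining' means here, with a hypothetical someFunction that doubles a number:

function someFunction(x) { return x * 2; }
var list = [1, 2, 3];

// with a function call per iteration:
var out1 = [];
for (var i = 0, l = list.length; i < l; i++) {
    out1.push(someFunction(list[i]));
}

// with the body inlined, avoiding the per-call overhead:
var out2 = [];
for (var j = 0, m = list.length; j < m; j++) {
    out2.push(list[j] * 2);
}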
Create a test case with your HTML/JavaScript code at jsperf.
You will be able to see what works best, and how fast different browsers perform the loops.
I would put my money on the native JavaScript loop, but you never know.

Is javascript str.length calculated every time it is called or just once?

Which way is more efficient? Is there a difference?
This one:
var str = 'abc';
if (str.length == 20) {
    //...
}
if (str.length == 25) {
    //...
}
// and so on
Or this one:
var str = 'abc';
var length = str.length;
if (length == 20) {
    //...
}
if (length == 25) {
    //...
}
// and so on
In the browsers where this might actually matter (read: IE) it will be calculated every time, so it's faster to store the value in a local variable.
http://jsperf.com/string-length
It used to be that
var len = someArray.length;
for (var i = 0; i < len; i++) {
    // ...
}
was faster than
for (var i = 0; i < someArray.length; i++) {
    // ...
}
but these days V8 (Chrome's JS engine) optimizes the latter to run faster than the former. That's great; just remember that you don't really need to worry about performance in Chrome.
If you're curious to learn more about JavaScript performance, High Performance JavaScript is a solid read. Take its recommendations with a grain of salt, though, since a trick that makes code run faster in IE (6, 7, 8 or even 9) might very well make the code run slower in Chrome or Firefox 4.
The second is by far the safer way to go. In the first you are assuming that it won't get recalculated; in the second you know that it won't. The second isn't always the best way, though: it only works when you know other processes won't affect the length of the array, so with global variables etc. you have to be careful. The same applies to modifying the contents (length) of an array inside a for loop which stops at the upper bound of the array.
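A small sketch of that hazard (hypothetical data): caching the length goes stale if the loop itself grows the array.

var queue = ['a', 'b'];

for (var i = 0, len = queue.length; i < len; i++) {
    if (queue[i] === 'a') {
        queue.push('c'); // the array now has 3 elements, but len is still 2
    }
    console.log(queue[i]); // logs 'a', 'b'; 'c' is never visited
}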
Strings are immutable in JavaScript, so it is unlikely that even bad implementations of JavaScript would recalculate the length property of the string every time you access it.
You can actually test this yourself using jsperf; using Chrome 12, it actually looks like your first example is faster.
In theory, I would expect that the second code block should be quicker.
However, given that today's JS interpreters are actually highly optimised JIT compilers, I would imagine that they would spot that kind of thing and optimise it.
That should apply to pretty much all browsers in current mainstream use, with the obvious exception of IE8 and lower, where it's anyone's guess how it does it, but it'll be slow either way.

Is it faster to access a JavaScript array directly?

I was reading an article: Optimizing JavaScript for Execution Speed
And there is a section that says:
Use this code:
for (var i = 0; (p = document.getElementsByTagName("P")[i]); i++)
Instead of:
nl = document.getElementsByTagName("P");
for (var i = 0; i < nl.length; i++) {
    p = nl[i];
}
for performance reasons.
I always used the "wrong" way, according to the article. But am I wrong, or is the article wrong?
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
--Donald Knuth
Personally, I would use your way because it is more readable and easier to maintain. Then I would use a tool such as YSlow to profile the code and iron out the performance bottlenecks.
If you look at it from a language like C#, you'd expect the second statement to be more efficient; however, C# is not an interpreted language.
As the guide states, your browser is optimized to retrieve the right nodes from live lists and does this a lot faster than retrieving them from the "cache" you define in your variable. Also, you have to determine the length on each iteration, which might cause a bit of performance loss as well.
Interpreted languages react differently from compiled languages; they're optimized in different ways.
Good question. I would assume that not calling getElementsByTagName() on every loop iteration would save time. I could see this being faster: you aren't checking the length of the array, just that the value got assigned.
var p;
var ps = document.getElementsByTagName("P");
for (var i = 0; (p = ps[i]); i++) {
    //...
}
Of course this also assumes that none of the values in your array evaluate to "false". A numeric array that may contain a 0 will break this loop.
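A sketch of that pitfall with hypothetical data:

var nums = [1, 2, 0, 3];
var n;

// the assignment-as-condition loop stops at the first falsy value:
for (var i = 0; (n = nums[i]); i++) {
    console.log(n); // logs 1, 2, then stops at the 0
}

// an explicit index test visits every element:
for (var j = 0; j < nums.length; j++) {
    console.log(nums[j]); // logs 1, 2, 0, 3
}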
Interesting. Other references do seem to back up the idea that NodeLists are comparatively heavyweight.
The question is... what's the overhead? Is it enough to bother about? I'm not a fan of premature optimisation. However, this is an interesting case, because it's not just the cost of iteration that's affected: there's extra overhead, as the NodeList must be kept in sync with any changes to the DOM.
With no further evidence I tend to believe the article.
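One hedged middle ground, if the live-synchronisation overhead worries you: copy the NodeList into a plain array up front and iterate that (Array.prototype.slice.call works on NodeLists in most browsers, though old IE needed a manual copy loop):

var live = document.getElementsByTagName('p');
var snapshot = Array.prototype.slice.call(live);

for (var i = 0, len = snapshot.length; i < len; i++) {
    // DOM changes made here no longer affect snapshot
    console.log(snapshot[i].tagName);
}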
No, it will not be faster. Actually, it is nearly total nonsense, since on every step of the for loop you call getElementsByTagName, which is a time-consuming function.
The ideal loop would be as follows:
nl = document.getElementsByTagName("P");
for (var i = nl.length - 1; i >= 0; i--) {
    p = nl[i];
}
EDIT:
I actually tested the two examples you have given in Firebug using console.time, and as everyone suspected, the first one took 1ms whereas the second one took 0ms =)
It makes sense to just assign it directly to the variable; it's probably quicker to write as well. I'd say the article might have some truth to it.
Instead of having to get the length every time, check it against i, and then assign the variable, you simply check whether p was able to be set. This cleans up the code and is probably actually faster.
I typically do this (move the length test outside the for loop)
var nl = document.getElementsByTagName("p");
var nll = nl.length;
for (var i = 0; i < nll; i++) {
    p = nl[i];
}
or for compactness...
var nl = document.getElementsByTagName("p");
for (var i = 0, nll = nl.length; i < nll; i++) {
    p = nl[i];
}
which presumes that the length doesn't change during access (which in my case it doesn't),
but I'd say that running a bunch of performance tests on the article's idea would be the definitive answer.
The author of the article wrote:
In most cases, this is faster than caching the NodeList. In the second example, the browser doesn't need to create the node list object. It needs only to find the element at index i at that exact moment.
As always, it depends; maybe it depends on the number of elements in the NodeList.
For me this approach is not safe if the number of elements can change, as this could cause an index out of bounds.
From the article you link to:
In the second example, the browser doesn't need to create the node list object. It needs only to find the element at index i at that exact moment.
This is nonsense. In the first example, the node list is created and a reference to it is held in a variable. If something happens which causes the node list to change - say you remove a paragraph - then the browser has to do some work to update the node list. However, if your code doesn't cause the list to change, this isn't an issue.
In the second example, far from not needing to create the node list, the browser has to create the node list every time through the loop, then find the element at index i. The fact that a reference to the node list is never assigned to a variable doesn't mean the list doesn't have to be created, as the author seems to think. Object creation is expensive (no matter what the author says about browsers "being optimized for this"), so this is going to be a big performance hit for many applications.
Optimisation is always dependent on the actual real-world usage your application encounters. Articles such as this shouldn't be seen as saying "Always work this way" but as collections of techniques, any one of which might, in some specific set of circumstances, be of value. The second example is, in my opinion, less easy to follow, and that alone puts it in the realm of tricksy techniques that should only be used if there is a proven benefit in a specific case.
(Frankly, I also don't trust advice offered by a programmer who uses a variable name like "nl". If he's too lazy to use meaningful names when writing a tutorial, I'm glad I don't have to maintain his production code.)
It is worthless to discuss theory when actual tests will be more accurate; comparing the two, the second method was clearly faster.
Here is sample code from a benchmark test
var start1 = new Date().getTime();
for (var j = 0; j < 500000; j++) {
    for (var i = 0; (p = document.getElementsByTagName("P")[i]); i++);
}
var end1 = new Date().getTime() - start1;

var start2 = new Date().getTime();
for (var j = 0; j < 500000; j++) {
    nl = document.getElementsByTagName("P");
    for (var i = 0; i < nl.length; i++) {
        p = nl[i];
    }
}
var end2 = new Date().getTime() - start2;

alert("first: " + end1 + "\nsecond: " + end2);
In Chrome the first method took 2324ms while the second took 1986ms.
But note that for 500,000 iterations the difference was only about 300ms, so I wouldn't bother with this at all.
