is concat faster or slower than push - javascript

For this code, what is the best approach in JavaScript?
var output = foo +";"+bar;
or
var output = new Array(foo,bar).join(";");

It doesn't really matter.
There were blogs promoting the first one or the second one, depending on their benchmarks.
But the truth is that javascript engines are heavily optimized and changing, so you won't find a big reproducible and cross-browser difference.
Choose the most readable. Generally it's the first one.
If you really do run this concatenation 10,000 times in a loop, benchmark it in your real code on the browsers your customers use, and pick the winner only if there is a significant difference. Don't forget that JavaScript is fast.
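If you do want to measure it yourself, here is a minimal benchmark sketch (the iteration count is arbitrary, and console.time is just one convenient way to read the clock):
var foo = "abc", bar = "def", output, i;

console.time("concat");
for (i = 0; i < 100000; i++) {
    output = foo + ";" + bar;
}
console.timeEnd("concat");

console.time("join");
for (i = 0; i < 100000; i++) {
    output = [foo, bar].join(";");
}
console.timeEnd("join");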

There are many test cases on http://jsperf.com/ (for example http://jsperf.com/joint-vs-concat) where you can check which is slower.
In my experience it depends on the user's browser (to be more exact, the JS engine).

In my experience, string concatenation is faster than array joins. See these test cases:
http://jsperf.com/array-join-vs-string-connect
http://jsperf.com/join-concat/2

Related

Why for loop with cached length is slower than simple for loop?

I am reading the book "Secrets of the JavaScript Ninja", but I am confused by a figure I found in it. The way I see it, a simple for loop loses time fetching the array's length on every iteration, so caching the length beforehand should improve the speed. But the figure showed the opposite result. Could someone explain why? Thanks.
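For reference, the two loop forms being compared look like this (a sketch; process stands in for whatever the benchmark does with each element):
var myarray = [1, 2, 3];

// Simple loop: myarray.length is read on every iteration.
for (var i = 0; i < myarray.length; i++) {
    process(myarray[i]);
}

// Cached-length loop: the length is read once, up front.
for (var i = 0, max = myarray.length; i < max; i++) {
    process(myarray[i]);
}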
Performance in JS is a question of how much the engine has optimized the code, and how the engine optimizes code depends on the developers who wrote the optimizations. Those developers want average code to run fast, which means the more "normal" your code is, the faster it runs. So maybe iterating over an array the first way is so common that someone heavily optimized it. In any case, the only real conclusion we can draw from that data is: both ways are fast enough that the difference isn't worth caring about.

javascript document.createElement or HTML tags

I am confused about the following two styles of writing code. I want to know which is the more practical and efficient method for inserting HTML tags into the document.
Javascript:
document.getElementById('demoId').innerHTML='<div>hello world and <a>click me</a> also</div>';
OR
var itd = document.createElement('div'),
    ita = document.createElement('a'),
    idt = document.createTextNode('hello world'),
    iat = document.createTextNode('click me'),
    idn = document.createTextNode('also');
ita.appendChild(iat);
itd.appendChild(idt);
itd.appendChild(ita);
itd.appendChild(idn);
document.getElementById('demoId').appendChild(itd);
Which is the fastest and best method?
Well, just think about what each of them is doing. The first one takes the string, parses it, and then calls document.createElement, document.appendChild, etc. (or similar methods) based on the output of the parse. The second reduces the workload on the browser because you're stating directly what you want it to do.
See a comparison on jsPerf
According to jsPerf option 1 is approximately 3 times slower than option 2.
On the topic of maintainability, option 1 is far easier to write, but call me old-fashioned: I'd prefer option 2 as it just feels much safer.
Edit
After a few results started coming in, the differing results piqued my curiosity. I ran the test twice in each of the browsers I have installed. Here is a screenshot of the results from jsPerf after all the tests (operations per second, higher is better).
So browsers seem to differ greatly in how they handle the two techniques. On average, option 2 seems to be the faster, largely due to Chrome's and Safari's big gap in its favour. My conclusion is that you should just use whichever feels the most comfortable or fits best with your organisation's standards, and not worry about which is more efficient.
The main thing this exercise has taught me is to not use IE10 despite the great things I've heard about it.

Optimizing JavaScript loop makes it slower

In the book JavaScript Patterns, Stoyan Stefanov claims that the common way of looping in JavaScript
for (i = 0, max = myarray.length; i < max; i++) {
// do something with myarray[i]
}
can be optimized by using this pattern instead
for (i = myarray.length; i--;) {
// do something with myarray[i]
}
I found that to be interesting, so I decided to test it in the real world by applying the technique to a performance-intensive loop shown in this blog post about doing pixel manipulation with canvas. The benchmarks comparing the regular code with the "optimized" code can be seen here.
The interesting thing is that the supposedly optimized loop is actually slower than the regular way of looping in both Opera and Firefox. Why is that?
This kind of micro-optimization always has very limited validity. Chances are that the VM implementations include optimizations for the "common ways" of doing things that go beyond what you can do on the language level.
Which is why micro-optimizations are usually a waste of time. Beginners tend to obsess over them and end up writing code that is hard to maintain AND slow.
Most of the ways to try to optimise a loop come from C, and from a time when compilers were simpler and processors executed one instruction after the other.
Modern processors run the code very differently, so optimising specific instructions doesn't have the same effect now.
For Javascript the changes are quite rapid now. It has gone from being interpreted to being compiled, which makes a huge performance difference. The compilers are very different between browsers, and they change with each new browser version, so something that is faster in one browser today, may be slower tomorrow.
I have tested some different ways of optimising loops, and currently there is very little difference in performance: http://jsperf.com/loopoptimisations
One thing that can be said for certain though, is that the regular way of writing loops is the most common, so that is what all compilers will be focusing on optimising.
To begin with, I see no reason why the second should be much faster than the first. The difference between comparing with zero and comparing with another number is something that might matter in extremely tight loops in compiled code, but even there it's likely a cargo-cult optimization most of the time. (Read Richard Feynman's "Cargo Cult Science" if you don't get the reference; if nothing else it's a good read, and programming has more than a few cases of the same tendency: copying something that worked well once into a case where there's no real reason to suppose it will help.)
I could see the following being slower:
for (i = 0; i < myarray.length; i++) {
// do something with myarray[i]
}
But I could also see it not being slower, if the engine did the optimisation of hoisting the length check for you, or the implementation was such that checking the length and checking a variable was about equivalent cost anyway.
I could also see either that or the first code example you give, or perhaps both, being something that a given script-engine optimises - it is after all a very common idiom in js, and inherently involves looping, so it would be a sensible thing to try to detect and optimise for in a script engine.
Beyond such conjectures though, we can't really say anything about this beyond "because one works better in that engine than the other, that's why" without getting to the level below javascript and examining the implementation of the engine. Your very results would suggest that the answer won't be the same with each engine (after all, one did correspond more with what you expected).
Now, it's worth noting that in each case the results are quite close to each other anyway. If you found just one or two browsers that are currently reasonably popular where the change did indeed optimise, it could still be worth it.
If you're interested in whether it was ever worth it, or was just an assumption to begin with, you could try to get a copy of Netscape 2 (the first JavaScript browser ever, after all) and run some code to test the approach on it.
Edit: If you do try that sort of experiment, another one worth doing is deliberately buggy loops that overshoot the array bounds by one. One possible engine optimisation is to realise you're walking the array and do a single up-front out-of-range check on where you will end; if so, a loop that will eventually error could give different results.

Does obfuscated javascript slow a browser down?

I have a script which is obfuscated and begins like this:
var _0xfb0b=["\x48\x2E\x31\x36\x28\x22\x4B\x2E
...it continues like that for more than 435,000 chars (the file is 425 kB), and at the end comes this:
while(_0x8b47x3--){if(_0x8b47x4[_0x8b47x3]){_0x8b47x1=_0x8b47x1[_0xfb0b[8]](
new RegExp(_0xfb0b[6]+_0x8b47x5(_0x8b47x3)+_0xfb0b[6],_0xfb0b[7]),
_0x8b47x4[_0x8b47x3]);} ;} ;return _0x8b47x1;}
(_0xfb0b[0],62,2263,_0xfb0b[3][_0xfb0b[2]](_0xfb0b[1])));
My question is: isn't it much harder for a browser to execute that compared to a non-obfuscated script, and if so, how much time am I probably losing because of the obfuscation? Older browsers like IE6, which really aren't that performant at JS, must spend a lot more time on it, right?
It certainly slows things down more significantly on older browsers (specifically when initializing), but it definitely slows them down even afterwards. I had a heavily obfuscated file that took about 1.2 seconds to initialize; unobfuscated, in the same browser and on the same PC, it was about 0.2 seconds. So yes, significant.
It depends on what the obfuscator does.
If it primarily simply renames identifiers, I would expect it to have little impact on performance unless the identifier names it used were artificially long.
If it scrambles control or data flow, it could have arbitrary impact on code execution.
Some control flow scrambling can be done with only constant overhead.
You'll have to investigate the method of obfuscation to know the answer to this. Might be easier to just measure the difference.
The obfuscator you're using seems to just move all string constants into one array and reference the array entries where the strings originally appeared. The strings are obfuscated in the array's source but still come out as plain strings (try console.log(_0xfb0b) to see what I mean).
It does, definitely, slow down the code INITIALIZATION. However, once that array has been initialized, the impact on the script is negligible.
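As a toy illustration of that pattern (not your actual file; _0xdemo is made up), the transformation looks roughly like this:
// Original code:
// console.log("hello");

// After string-array obfuscation; the hex escapes decode back to the same strings:
var _0xdemo = ["\x6C\x6F\x67", "\x68\x65\x6C\x6C\x6F"]; // ["log", "hello"]
console[_0xdemo[0]](_0xdemo[1]); // still just console.log("hello")
Decoding the escape sequences happens once, when the array literal is parsed, which is why the cost is concentrated in initialization.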

Javascript: Optimizing details for Critical/Highly Processed Javascript code

I've been looking through a lot of JavaScript optimization articles, and most of them talk about string concatenation and a few other big ones found here, but I figured there had to be more details you can optimize for when speed is critical and the code in question is processed very heavily.
Say you run this code for some reason: (unlikely, I know, but bear with me)
for( var i = 0; i < 100000000000; i++ ) {
//Do stuff
}
And there's no way of getting around having a loop that big... You're going to want to make sure that all the stuff you're doing in that loop is optimized to the point that you can't optimize it anymore... or your website will hang.
Edit: I'm not necessarily talking about a loop; what about a function that's repeatedly called, such as an onmousemove handler? Although in most cases we shouldn't need to use onmousemove, there are some cases that do. This question is for those cases.
We're using jQuery as our JS library.
So what I would like are tips for optimizing, but only the more uncommon ones, e.g. the speed difference between switch and if-else.
If you'd like to see the more common ones, you can find them here:
Optimizing Javascript for Execution Speed
Javascript Tips and Tricks; Javascript Best Practices
Optimize javascript pre-load of images
How do you optimize your Javascript
Object Oriented Javascript best practices
"And there's no way of getting around having a loop that big... "
In the real world of RIA, you HAVE to get around the big loops. As important as optimization is learning how to break large loops into small loops, and giving time to the browser to deal with its UI. Otherwise you'll give your users a bad experience and they won't come back.
So I'd argue that BEFORE you learn funky JS optimizations, you should know how to break a large loop into chunks called by setTimeout() and display a progress bar (or let animated GIFs loop).
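A minimal sketch of that chunking pattern (the function and parameter names are made up for illustration):
function processInChunks(items, processItem, onDone) {
    var i = 0, CHUNK_SIZE = 1000; // tune so each chunk stays comfortably short

    function nextChunk() {
        var end = Math.min(i + CHUNK_SIZE, items.length);
        for (; i < end; i++) {
            processItem(items[i]);
        }
        if (i < items.length) {
            setTimeout(nextChunk, 0); // yield so the browser can repaint and handle input
        } else if (onDone) {
            onDone();
        }
    }

    nextChunk();
}
Between chunks the UI thread gets a chance to repaint, which is exactly when you'd update that progress bar.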
Perceived speed is often more important than actual speed. The world of the client is different from the world of the server.
When animating, learn how to find out if you're running on a lame browser (usually IE) and try for a worse framerate (or just don't animate). I can get some animations to go 90fps in a good browser but just 15fps in IE. You can test for the browser, but it's usually better to use timeouts and the clock to see how animations are performing.
Also, for genuine speedups, learn how to use web workers in Gears and in newer browsers.
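A minimal sketch using the standard Web Workers API (Gears had its own, slightly different API; worker.js and the message shape here are illustrative):
// main page: hand the heavy loop to a background thread
var worker = new Worker("worker.js");
worker.onmessage = function (e) {
    console.log("result:", e.data);
};
worker.postMessage(100000000);

// worker.js: runs off the UI thread, so the page stays responsive
onmessage = function (e) {
    var sum = 0;
    for (var i = 0; i < e.data; i++) {
        sum += i;
    }
    postMessage(sum);
};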
You can speed up this mofo thus:
for (var i = 100000000; i--;) {
//Do stuff
}
This reverses the loop and checks only i-- instead of evaluating i < 100000000 and testing whether the comparison is true. Performance gain of 50% in most browsers.
Saw this in a great Google Code talk at http://www.youtube.com/watch?v=mHtdZgou0qU
The talk contains some other great tricks.
Good luck!
If it doesn't need to be synchronous, convert the loops into a recursive implementation with setTimeout calls
for( var i = 0; i < 100000000000; i++ ) {
//Do stuff
}
can probably be written as
function doSomething(n) {
    if (n === 0) return; // all iterations done
    // do stuff with iteration n here
    setTimeout(function () { doSomething(n - 1); }, 0);
}
OK, this might not be a good example, but you get the idea. This way, you convert long synchronous operations into an asynchronous operation that doesn't hang the browser. Very useful in certain scenarios where something doesn't need to be done right away.
Using split & join instead of replace:
// str.replace("needle", "hay"); only replaces the first occurrence
str.split("needle").join("hay"); // replaces every occurrence
Store long reference chains in local variables:
function doit() {
//foo.bar.moo.goo();
//alert(foo.bar.moo.x);
var moo = foo.bar.moo;
moo.goo();
alert(moo.x);
}
After seeing a few good answers by the people here, I did some more searching and found a few to add:
These are tips on JavaScript optimization for when you're looking to get down to the very little details, things that in most cases wouldn't matter, but in some will make all the difference:
Switch vs. Else If
A commonly used tactic to wring whatever overhead might be left out of a large group of simple conditional statements is replacing If-Then-Else's with Switch statements.
Just in case you want to see benchmarking, you can find it here.
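For concreteness, the two forms being compared look like this (handleOne and friends are placeholders):
// If-else chain: up to three comparisons are evaluated per run.
if (key === 1) { handleOne(); }
else if (key === 2) { handleTwo(); }
else if (key === 3) { handleThree(); }

// Switch: the engine can dispatch on key directly, e.g. via a jump table.
switch (key) {
    case 1: handleOne(); break;
    case 2: handleTwo(); break;
    case 3: handleThree(); break;
}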
Loop Unrolling
To unroll a loop, you have it do more than one of the same step per iteration and increment the counter variable accordingly. This helps a lot because you then decrease the number of times you are checking the condition for the loop overall. You must be careful when doing this though, because you may end up overshooting bounds.
See details and benchmarking here.
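A sketch of a four-wide unroll; the cleanup loop handles lengths that aren't a multiple of four, which is how you avoid the overshoot the quote warns about (process is a placeholder):
var i = 0, n = myarray.length, limit = n - (n % 4);

// Main loop: four elements per iteration, so the loop condition
// is checked a quarter as often.
for (; i < limit; i += 4) {
    process(myarray[i]);
    process(myarray[i + 1]);
    process(myarray[i + 2]);
    process(myarray[i + 3]);
}

// Cleanup loop for the 0-3 leftover elements.
for (; i < n; i++) {
    process(myarray[i]);
}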
Reverse Loop Counting
Reverse your loop so that it counts down instead of up. I have also seen in various documents about optimization that comparing a number to zero is much quicker than comparing it to another number, so if you decrement and compare to zero it should be faster.
See more details and benchmarking here.
Duff's Device
It's simple, but complicated to grasp at first. Read more about it here.
Make sure to check out the improved version further down that page.
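For reference, a sketch of the common JavaScript adaptation (process and myarray are placeholders, and it assumes the array is non-empty). The switch jumps into the middle of the unrolled body on the first pass to consume the leftover elements; every later pass falls through all eight cases:
var len = myarray.length;
var iterations = Math.ceil(len / 8);
var startAt = len % 8; // leftovers are handled by the first pass
var i = 0;

do {
    switch (startAt) {
        case 0: process(myarray[i++]); // each case falls through to the next
        case 7: process(myarray[i++]);
        case 6: process(myarray[i++]);
        case 5: process(myarray[i++]);
        case 4: process(myarray[i++]);
        case 3: process(myarray[i++]);
        case 2: process(myarray[i++]);
        case 1: process(myarray[i++]);
    }
    startAt = 0; // every pass after the first does all eight
} while (--iterations > 0);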
The majority of this information was quoted directly from here: JavaScript Optimization. It's interesting: since it's such an old site, it looks at optimization from the perspective of the browser processing power available back then. Although the benchmarks recorded there are for IE 5.5 and Netscape 4.73, their benchmarking tools give accurate results for the browser you're using.
For the people who think these details don't matter: I think it says a bit about the way people perceive the power of advancing technologies. Just because our browsers process many times faster than they used to doesn't necessarily mean we should abuse that processing power.
I'm not suggesting you spend hours optimizing two lines of code for 0.005 ms, but if you keep some of these techniques in mind and implement them where appropriate, it will contribute to a faster web. After all, there are still many people using IE6, so it would be wrong to assume everyone's browser can handle the same processing.
Which JavaScript engine are we supposed to be targeting? If you're talking about such extreme optimisation, it makes a big difference. For starters, I'll point out that the array.join() trick for string concatenation is only really applicable to Microsoft's JScript engine; it can actually give worse performance on other JS engines.
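For anyone unfamiliar with that trick: it means accumulating the pieces in an array and joining once at the end, instead of concatenating in a loop (a sketch):
var parts = [];
for (var i = 0; i < 1000; i++) {
    parts.push("chunk " + i); // cheap appends
}
var result = parts.join(""); // one final concatenation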

Categories

Resources