Javascript: Optimizing details for critical/highly processed Javascript code

I've been looking through a lot of JavaScript optimization advice, and most of it covers string concatenation and a few other big ones found here, but I figured there had to be more detailed things you can optimize when speed is critical and a piece of code is executed a huge number of times.
Say you run this code for some reason: (unlikely, I know, but bear with me)
for( var i = 0; i < 100000000000; i++ ) {
//Do stuff
}
And there's no way of getting around having a loop that big... You're going to want to make sure that all the stuff you're doing in that loop is optimized to the point that you can't optimize it anymore... or your website will hang.
Edit: I'm not necessarily talking about a loop; what about a function that's repeatedly called, such as onmousemove? Although in most cases we shouldn't need to use onmousemove, there are some cases where we do. This question is for those cases.
We're using jQuery as our JS library.
So what I would like are tips for optimizing, but only the more uncommon ones
- e.g. speed differences between switch and if-else
If you'd like to see the more common ones, you can find them here:
Optimizing Javascript for Execution Speed
Javascript Tips and Tricks; Javascript Best Practices
Optimize javascript pre-load of images
How do you optimize your Javascript
Object Oriented Javascript best practices

"And there's no way of getting around having a loop that big... "
In the real world of RIA, you HAVE to get around the big loops. As important as optimization is learning how to break large loops into small loops, and giving time to the browser to deal with its UI. Otherwise you'll give your users a bad experience and they won't come back.
So I'd argue that BEFORE you learn funky JS optimizations, you should know how to break a large loop into chunks called by setTimeout() and display a progress bar (or let animated GIFs loop).
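As an illustration (not from the original answer; the names total, CHUNK, processItem and updateProgress are placeholders), breaking a long loop into setTimeout chunks might look like this:
var total = 1000000, CHUNK = 10000, i = 0;
function processItem(n) { /* do stuff with item n */ }
function updateProgress(done, total) { /* e.g. move a progress bar */ }
function runChunk() {
    var end = Math.min(i + CHUNK, total);
    for (; i < end; i++) {
        processItem(i);
    }
    updateProgress(i, total);
    if (i < total) {
        setTimeout(runChunk, 0); // yield so the browser can repaint and handle events
    }
}
runChunk();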
Perceived speed is often more important than actual speed. The world of the client is different from the world of the server.
When animating, learn how to find out if you're running on a lame browser (usually IE) and fall back to a lower framerate (or just don't animate). I can get some animations to run at 90fps in a good browser but only 15fps in IE. You can test for the browser, but it's usually better to use timeouts and the clock to see how the animations are actually performing.
Also, for genuine speedups, learn how to use web workers in Gears and in newer browsers.
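For reference, a minimal Web Worker sketch for browsers that support it (the file name worker.js and the summing task are made up for illustration):
// main page
var worker = new Worker("worker.js");
worker.onmessage = function (e) {
    console.log("result from worker:", e.data);
};
worker.postMessage(100000000); // number of iterations to run off the main thread

// worker.js
onmessage = function (e) {
    var sum = 0;
    for (var i = 0; i < e.data; i++) { sum += i; }
    postMessage(sum); // the UI thread never blocks while this runs
};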

You can speed up this mofo thus:
for (var i = 100000000; i--;) {
//Do stuff
}
This reverses the loop and only checks the truthiness of
i--
instead of evaluating
i < 100000000
on every iteration. The talk claims a performance gain of around 50% in most browsers.
Saw this in a great Google Code talk at http://www.youtube.com/watch?v=mHtdZgou0qU
The talk contains some other great tricks.
Good luck!

If it doesn't need to be synchronous, convert the loops into a recursive implementation with setTimeout calls
for( var i = 0; i < 100000000000; i++ ) {
//Do stuff
}
can probably be written as
function doSomething(n) {
    if (n === 0) return; // base case: all "iterations" are done
    // Do stuff for iteration n here
    setTimeout(function () { doSomething(n - 1); }, 0); // schedule the next step without blocking the browser
}
OK, this might not be a good example, but you get the idea. This way, you convert long synchronous operations into an asynchronous operation that doesn't hang the browser. Very useful in certain scenarios where something doesn't need to be done right away.

Using split & join instead of replace:
//str.replace("needle", "hay");
str.split("needle").join("hay");
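One caveat worth adding here: with a string pattern, replace only swaps the first occurrence, while split/join swaps every occurrence, so the two are only interchangeable for a single match (or when you use a global regex):
var str = "needle in a needle stack";
str.replace("needle", "hay");    // "hay in a needle stack" (first match only)
str.split("needle").join("hay"); // "hay in a hay stack" (all matches)
str.replace(/needle/g, "hay");   // "hay in a hay stack" (all matches, global regex)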

Store long reference chains in local variables:
function doit() {
//foo.bar.moo.goo();
//alert(foo.bar.moo.x);
var moo = foo.bar.moo;
moo.goo();
alert(moo.x);
}

After seeing a few good answers by the people here, I did some more searching and found a few to add:
These are tips for JavaScript optimization when you're looking to get down to the very little details, things that in most cases won't matter, but in some cases will make all the difference:
Switch vs. Else If
A commonly used tactic to wring whatever overhead might be left out of a large group of simple conditional statements is replacing If-Then-Else's with Switch statements.
Just in case you want to see the benchmarking, you can find it here.
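For illustration (the values here are made up), the same dispatch written both ways:
var value = 2, result;
// if-else chain
if (value === 1) { result = "one"; }
else if (value === 2) { result = "two"; }
else if (value === 3) { result = "three"; }
else { result = "other"; }
// equivalent switch
switch (value) {
    case 1: result = "one"; break;
    case 2: result = "two"; break;
    case 3: result = "three"; break;
    default: result = "other";
}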
Loop Unrolling
To unroll a loop, you have it do more than one of the same step per iteration and increment the counter variable accordingly. This helps a lot because you then decrease the number of times you are checking the condition for the loop overall. You must be careful when doing this, though, because you may end up overshooting bounds.
See details and benchmarking here.
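A minimal sketch of unrolling by four (data and handle are placeholder names; it assumes the length is a multiple of 4, otherwise you need to handle the leftover elements separately):
var data = [1, 2, 3, 4, 5, 6, 7, 8];
function handle(x) { /* do stuff with x */ }
// instead of: for (var i = 0; i < data.length; i++) handle(data[i]);
for (var i = 0; i < data.length; i += 4) {
    handle(data[i]);
    handle(data[i + 1]);
    handle(data[i + 2]);
    handle(data[i + 3]);
} // the loop condition is checked a quarter as often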
Reverse Loop Counting
Reverse your loop so that it counts down instead of up. I have also seen in various documents about optimization that comparing a number to zero is much quicker than comparing it to another number, so if you decrement and compare to zero it should be faster.
See more details and benchmarking here.
Duff's Device
It's simple, but complicated to grasp at first. Read more about it here.
Make sure to check out the improved version further down that page.
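A rough sketch of a Duff's-device-style loop in JavaScript (items and process are placeholder names; this follows the common eight-way unrolled pattern rather than any specific version from that page):
var items = [/* ... values ... */];
function process(x) { /* do stuff with x */ }
var i = 0;
var leftover = items.length % 8;
var iterations = Math.floor(items.length / 8);
if (leftover > 0) {
    do {
        process(items[i++]); // handle the elements that don't fit into a block of 8
    } while (--leftover > 0);
}
while (iterations > 0) {
    // eight copies of the body per condition check
    process(items[i++]);
    process(items[i++]);
    process(items[i++]);
    process(items[i++]);
    process(items[i++]);
    process(items[i++]);
    process(items[i++]);
    process(items[i++]);
    iterations--;
}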
The majority of this information was quoted directly from here: JavaScript Optimization. Interestingly, since it's such an old site, it looks at optimization from the perspective of the browser processing power available back then. Although the benchmarks recorded there are for IE 5.5 and Netscape 4.73, their benchmarking tools give accurate results for the browser you're using.
For the people who think these details don't matter: I think it says something about the way people perceive the power of advancing technologies. Just because our browsers process code many times faster than they used to doesn't necessarily mean that we should abuse that processing power.
I'm not suggesting you spend hours optimizing two lines of code for 0.005 ms, but if you keep some of these techniques in mind and implement them where appropriate, it will contribute to a faster web. After all, there are still many people using IE 6, so it would be wrong to assume everyone's browser can handle the same processing.

Which JavaScript engine are we supposed to be targeting? If you're talking about such extreme optimisation, it makes a big difference. For starters, I'll point out that the array.join() trick for string concatenation is only really applicable to Microsoft's JScript engine; it can actually give worse performance on other JS engines.

Related

Why for loop with cached length is slower than simple for loop?

I am reading the book "Secrets of the JavaScript Ninja", but I am confused by a figure I found in it. The way I see it, a simple for loop loses time re-reading the array's length on every iteration, so if the length is cached beforehand, the speed should improve. But the figure showed the opposite result. Could someone explain why? Thanks.
Performance in JS is a question of how well the engine optimizes the code, and how the engine optimizes code depends on the developers who wrote those optimizations. Those developers want average code to run fast, which means the more "normal" your code is, the faster it runs. So maybe iterating over an array in the first way is so common that someone heavily optimized it. Whatever the case, the only real conclusion we can draw from that data is: both ways are fast enough that the difference doesn't matter.

Optimizing JavaScript loop makes it slower

In the book JavaScript Patterns, Stoyan Stefanov claims that the common way of looping in JavaScript
for (i = 0, max = myarray.length; i < max; i++) {
// do something with myarray[i]
}
can be optimized by using this pattern instead
for (i = myarray.length; i--;) {
// do something with myarray[i]
}
I found that interesting, so I decided to test it in the real world by applying the technique to a performance-intensive loop shown in this blog post about doing pixel manipulation with canvas. The benchmarks comparing the regular code with the "optimized" code can be seen here.
The interesting thing is that the supposedly optimized loop is actually slower than the regular way of looping in both Opera and Firefox. Why is that?
This kind of micro-optimization always has very limited validity. Chances are that the VM implementations include optimizations for the "common ways" of doing things that go beyond what you can do on the language level.
Which is why micro-optimizations are usually a waste of time. Beginners tend to obsess over them and end up writing code that is hard to maintain AND slow.
Most of the ways to try to optimise a loop come from C, and from a time when compilers were simpler and processors executed one instruction after the other.
Modern processors run the code very differently, so optimising specific instructions doesn't have the same effect now.
For Javascript the changes are quite rapid now. It has gone from being interpreted to being compiled, which makes a huge performance difference. The compilers are very different between browsers, and they change with each new browser version, so something that is faster in one browser today, may be slower tomorrow.
I have tested some different ways of optimising loops, and currently there is very little difference in performance: http://jsperf.com/loopoptimisations
One thing that can be said for certain though, is that the regular way of writing loops is the most common, so that is what all compilers will be focusing on optimising.
To begin with, I see no reason why the second should be much faster than the first. Comparing with zero versus comparing with another number is something that might make a difference in extremely tight loops in compiled code, but even there it's likely a cargo-cult optimization most of the time (read Richard Feynman's Cargo Cult Science if you don't get the reference; if nothing else it's a good read, but there are also more than a few cases in programming of the same tendency to copy something that worked well once into a case where there's no real reason to suppose it will help).
I could see the following being slower:
for (i = 0; i < myarray.length; i++) {
// do something with myarray[i]
}
But I could also see it not being slower, if the engine did the optimisation of hoisting the length check for you, or the implementation was such that checking the length and checking a variable was about equivalent cost anyway.
I could also see either that or the first code example you give, or perhaps both, being something that a given script-engine optimises - it is after all a very common idiom in js, and inherently involves looping, so it would be a sensible thing to try to detect and optimise for in a script engine.
Beyond such conjectures though, we can't really say anything about this beyond "because one works better in that engine than the other, that's why" without getting to the level below javascript and examining the implementation of the engine. Your very results would suggest that the answer won't be the same with each engine (after all, one did correspond more with what you expected).
Now, it's worth noting that in each case the results are quite close to each other anyway. If you found just one or two browsers that are currently reasonably popular where the change did indeed optimise, it could still be worth it.
If you're interested in whether it was ever worth it, or was just an assumption to begin with, you could try to get a copy of Netscape 2 (the first JavaScript browser ever, after all) and run some code to test the approach on it.
Edit: If you do try that sort of experiment, another one worth trying is deliberately buggy loops that overshoot the array bounds by one. One possible optimisation for the engine is to realise you're walking the array and check just once whether the end point is out of range; if it does that, loops that eventually error could give you different results.

What should and shouldn't I cache in javascript?

I know that it's a good idea to cache objects that will be used many times. But what about if I will use the following many times:
var cacheWindow = window;
var cacheDocument = document;
var cacheNavigator = navigator;
var cacheScreen = screen;
var cacheWindowLocationHash = window.location.hash;
var cacheDocumentBody = document.body;
Maybe it is only good to cache stuff between <html></html>? Please explain.
The point of caching is to avoid either:
- typing long names repeatedly: every one of your examples has a longer name than the original, so you don't get that benefit;
- repeating an expensive operation (or repeating a slightly costly operation, e.g. array.length before a for loop, a very large number of times): there is no sign that you are getting that benefit either.
You probably shouldn't be copying the references to local variables at all.
Pretty hard to say exactly what you should cache.
I wouldn't cache native global objects or things that may change. What you are doing in your example is just creating another reference to the same object.
References to DOM elements should be cached, or else you will spend time searching for them again. The results of functions that perform heavy operations could also be cached.
You can use a profiler and look at the performance of different functions to get a hint about what you should cache.
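A small sketch of the kind of caching that does pay off (the element id and function name are placeholders): look the DOM element up once and reuse the reference.
var resultsList = document.getElementById("results"); // one lookup, cached
function addResult(text) {
    var li = document.createElement("li");
    li.appendChild(document.createTextNode(text));
    resultsList.appendChild(li); // no repeated getElementById on every call
}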
Caching is a double edged sword. If you've got some value that's intensive to calculate or requires a round trip to the server, caching that is a fantastic idea. For pretty much all of the values that you specified in your question, you're not buying yourself anything. You're simply replacing one variable reference with another.
Generally speaking, it's not a good idea to bog yourself down in these micro-optimizations. Optimizing and improving performance is good, but your time is generally better spent looking for the loop that does way too much work, or fixing a bad algorithm, than handling this type of case, where if there's any improvement at all you're only looking at nanoseconds at best - but again, for the values you mentioned you will see absolutely no improvement.
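As a sketch of caching an expensive computation (the function and cache names are made up for illustration):
var resultCache = {};
function expensiveCalculation(n) {
    if (resultCache[n] !== undefined) {
        return resultCache[n]; // already computed: skip the heavy work
    }
    var result = 0;
    for (var i = 0; i < n; i++) { result += Math.sqrt(i); } // stand-in for real work
    resultCache[n] = result;
    return result;
}
expensiveCalculation(1000000); // slow the first time
expensiveCalculation(1000000); // near-instant thereafter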

Does obfuscated javascript slow a browser down?

I have a script which is obfuscated and begins like this:
var _0xfb0b=["\x48\x2E\x31\x36\x28\x22\x4B\x2E
...it continues like that for more than 435,000 chars (the file is 425 kB), and at the end comes this:
while(_0x8b47x3--){if(_0x8b47x4[_0x8b47x3]){_0x8b47x1=_0x8b47x1[_0xfb0b[8]](
new RegExp(_0xfb0b[6]+_0x8b47x5(_0x8b47x3)+_0xfb0b[6],_0xfb0b[7]),
_0x8b47x4[_0x8b47x3]);} ;} ;return _0x8b47x1;}
(_0xfb0b[0],62,2263,_0xfb0b[3][_0xfb0b[2]](_0xfb0b[1])));
My question is: isn't it way harder for a browser to execute that compared to a non-obfuscated script, and if so, how much time am I probably losing because of the obfuscation? Especially older browsers like IE6, which are really not that performant in JS, must spend a lot more time on that, right?
It certainly slows the browser down more significantly on older browsers (specifically when initializing), but it definitely slows it down even afterwards. I had a heavily obfuscated file that took about 1.2 seconds to initialize; unobfuscated, it took about 0.2 seconds in the same browser on the same PC, so the difference is significant.
It depends on what the obfuscator does.
If it primarily simply renames identifiers, I would expect it to have little impact on performance unless the identifier names it used were artificially long.
If it scrambles control or data flow, it could have arbitrary impact on code execution.
Some control flow scrambling can be done with only constant overhead.
You'll have to investigate the method of obfuscation to know the answer to this. Might be easier to just measure the difference.
The obfuscation you're using seems to just store all string constants in one array and put them back into the code where they originally were. The strings are obfuscated inside the array but still come out as plain strings (try console.log(_0xfb0b) to see what I mean).
It does, definitely, slow down the code INITIALIZATION. However, once that array has been initialized, the impact on the script is negligible.
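To make the pattern concrete, here is a hand-written miniature of what such an obfuscator produces (the names and strings are invented; real output uses hex-escaped literals like the ones above):
var _0xabc = ["log", "hello", "world"]; // every string literal hoisted into one array
console[_0xabc[0]](_0xabc[1] + " " + _0xabc[2]); // i.e. console.log("hello world")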

'Fixed' for loop - what is more efficient?

I'm creating a tic-tac-toe game, and one of the functions has to iterate through each of the 9 fields (tic-tac-toe is played on a 3x3 grid). I was wondering what is more efficient (which one is perhaps faster, or what is the preferred way of scripting in such a situation) - using two nested for loops like this:
for (var i = 0; i < 3; i++) {
    for (var j = 0; j < 3; j++) {
        checkField(i, j);
    }
}
or hard-coding it like this:
checkField(0, 0);
checkField(0, 1);
checkField(0, 2);
checkField(1, 0);
checkField(1, 1);
checkField(1, 2);
checkField(2, 0);
checkField(2, 1);
checkField(2, 2);
As there are only 9 combinations, it would be perhaps overkill to use two nested for loops, but then again this is clearer to read. The for loop, however, will increment variables and check whether i and j are smaller than 3 every time as well.
In this example, the time saving at least might be negligible, but what is the preferred way of coding in this case?
Thanks.
Do not hard code 9 lines of the same code!
- Readability
- Flexibility / maintenance
- Code length
This is a premature micro-optimization. In this case, always go for the clearer solution - so use the for loops :) And by the way, think about what happens if tomorrow the grid is 4x4 :)
Time savings: negligible. Probably un-measurable.
Preferred style: nested for loops. Ok, you'll probably never make it a 4x4 or 5x5 or 3d (or 4d!) tic-tac-toe - but it's a good habit to get into. Also easier to see if you forgot something and avoids cut-and-paste errors.
Ironically hard-coding the checks will probably be faster, but (and here's the important bit) meaninglessly so.
As such, what you should really aim for is the maximum clarity of intent for what you're trying to achieve. Also, you should try and make life easier for any future improvements (that may not be carried out by you.) For example, if the tic-tac-toe grid was expanded to 4x4 which solution would be the best?
On this basis I'd be tempted to go with the loop approach along with the appropriate level of commenting, etc.
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
-- Donald Knuth
Definitely the first one. First of all, the performance difference will be absolutely negligible - my guess is the unrolled program will run slower, because it is longer and therefore takes more time to compile/interpret (plus additional client, server and router processing power and bandwidth will be needed).
Secondly, and as a generalization, you don't know which version would be faster. Maybe some interpreter would increment registers for the first version, but load the parameters from memory (waaay slower) for the second one?
Especially in the case of JavaScript, you have absolutely no fixed specification on how fast (future!) interpreters and compilers work, so this "optimization" is absolutely pointless and confusing other programmers working with your code at best.
Please don't hardcode it. Never do something like that.
More than one time? Use a loop.
Also, you are worrying about a problem you don't have, really.
As you mentioned, the time saving will be negligible. Even if you would put that grid to a 100 times 100 square, you still won't see any difference.
If we go a bit larger though, say 10,000 by 10,000, we might see some difference. I wonder what it would be, because compilers and optimisers are very good, and especially with a loop the environment can exploit the knowledge that the function will be called many times to speed things up.
Why don't you try it out and share your results with us?
In practice, however, I would never recommend going for the second approach. Readability and flexibility are far more important than CPU time. And optimising early, as they say, is quite evil in itself, because it obfuscates the code and introduces a lot of unnecessary complexity without really contributing to performance.
