I know that it's a good idea to cache objects that will be used many times. But what if I use the following many times:
var cacheWindow = window;
var cacheDocument = document;
var cacheNavigator = navigator;
var cacheScreen = screen;
var cacheWindowLocationHash = window.location.hash;
var cacheDocumentBody = document.body;
Maybe it is only good to cache stuff between <html></html>? Please explain.
The point of caching is to avoid either:
Typing long names repeatedly. Every one of your examples has a longer name than the original, so you don't get that benefit.
Making an expensive operation repeatedly (or a slightly costly operation, such as array.length before a for loop, a very large number of times). There is no sign that you are getting that benefit either.
You probably shouldn't be copying the references to local variables at all.
Pretty hard to say exactly what you should cache.
I wouldn't cache native global objects or things that may change. What you are doing in your example is just creating another reference to the same object.
References to DOM elements should be cached, or else you will spend time searching for them again. The results of functions that perform heavy operations can also be cached.
You can use a profiler to see the performance of different functions and get a hint about what you should cache.
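For example, here is a minimal sketch of caching a DOM reference (assuming an element with id "status" exists on the page; the names are illustrative):

// Uncached: the DOM is searched on every iteration.
for (var i = 0; i < 1000; i++) {
    document.getElementById("status").innerHTML = "Row " + i;
}

// Cached: the lookup happens once and the reference is reused.
var statusEl = document.getElementById("status");
for (var i = 0; i < 1000; i++) {
    statusEl.innerHTML = "Row " + i;
}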
Caching is a double edged sword. If you've got some value that's intensive to calculate or requires a round trip to the server, caching that is a fantastic idea. For pretty much all of the values that you specified in your question, you're not buying yourself anything. You're simply replacing one variable reference with another.
Generally speaking, it's not a good idea to bog yourself down in these micro-optimizations. Optimizing and improving performance is good, but your time is generally better spent looking for that loop that does way too much work or fixing a bad algorithm than handling this type of case, where any improvement at all is measured in nanoseconds at best. But again, for the values you mentioned you will see absolutely no improvement.
What is the basic difference between these two statements from a memory standpoint? I just want to know whether making objects with new does anything special with regard to memory allocation and garbage collection, or whether both are identical.
I have to load huge binary data into an array, so I want to have an idea.
Another question: can I force deallocation of memory from JavaScript directly, like GC.Collect() in C# or the delete operator?
var x=8;
var y=new Number(8);
Thanks for your help in advance
Difference: essentially none in practice (new Number(8) does create a wrapper object rather than a primitive, but that buys you nothing here).
As for forcing deallocation: no.
(you can set all references to null; but that may be an unnecessary hint to the GC)
Javascript is fully managed and doesn't provide an API like C# to "order" the GC to do stuff. Indeed, you may even find that some objects end up tied to the DOM and aren't deleted until their associated nodes are. And each browser is a different flavour.
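For example, the most you can do for a large buffer is drop your own references and let the engine reclaim the memory whenever it sees fit (a sketch; binaryData is a hypothetical name):

var binaryData = new Array(10000000); // large allocation
// ... use binaryData ...
binaryData = null; // nothing is forced; the array merely becomes eligible for collection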
Consider this javascript code:
var s = "Some string";
s = "More string";
Will the garbage collector (GC) have work to do after this sort of operation?
(I'm wondering whether I should worry about assigning string literals when trying to minimize GC pauses.)
Edit: I'm slightly amused that, although I stated explicitly in my question that I needed to minimize GC, everyone assumed I'm wrong about that. If one really must know the particular details: I've got a game in JavaScript. It runs fine in Chrome, but in Firefox it has semi-frequent pauses that seem to be due to GC. (I've even checked with the MemChaser extension for Firefox, and the pauses coincide exactly with garbage collection.)
Yes, strings need to be garbage-collected, just like any other type of dynamically allocated object. And yes, this is a valid concern as careless allocation of objects inside busy loops can definitely cause performance issues.
However, string values are immutable (unchangeable), and most modern JavaScript implementations use "string interning", that is, they store only one instance of each unique string value. This means that if you have something like this...
var s1 = "abc",
    s2 = "abc";
...only one instance of "abc" will be allocated. This only applies to string values, not String objects.
A couple of things to keep in mind:
Functions like substring, slice, etc. will allocate a new object for each function call (if called with different parameters); see the sketch after this list.
Even though both variables point to the same data in memory, there are still two variables to process when the GC cycle runs. Having too many local variables can also hurt you, as each of them will need to be processed by the GC, adding overhead.
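A sketch of the first point: each slice call below allocates a fresh substring, so hoisting it out of a hot loop avoids repeated allocations (the process function is hypothetical):

var s = "some-prefix:payload";

// Allocates a new substring on every iteration.
for (var i = 0; i < 100000; i++) {
    process(s.slice(12));
}

// Allocates the substring once, then reuses it.
var payload = s.slice(12);
for (var i = 0; i < 100000; i++) {
    process(payload);
}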
Some further reading on writing high-performance JavaScript:
https://developer.mozilla.org/en-US/docs/JavaScript/Memory_Management
https://www.scirra.com/blog/76/how-to-write-low-garbage-real-time-javascript
http://jonraasch.com/blog/10-javascript-performance-boosting-tips-from-nicholas-zakas
Yes, but unless you are doing this in a loop millions of times it won't likely be a factor for you to worry about.
As you already noticed, JavaScript is not one single JavaScript: it runs on different platforms, and each will have different performance characteristics.
So the definite answer to the question "Will the GC have work to do after this sort of operation?" is: maybe. If the script is as short as you've shown it, then a JIT-Compiler might well drop the first string completely. But there's no rule in the language definition that says it has to be that way or the other way. So in the end it's like it is all too often in JavaScript: you have to try it.
The more interesting question might be: how can you avoid garbage collection? The answer: try to minimize the allocation of new objects. Games typically have a fairly constant number of objects, and often no new objects are needed until an old one becomes unused. For strings this can be harder, as they are immutable in JS, so try to replace strings with other (mutable) representations where possible.
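For example, a simple object pool is a common way to do this in games (a minimal sketch; the bullet names are illustrative):

var bulletPool = [];

function acquireBullet() {
    // Reuse a retired bullet if one is available; allocate only as a last resort.
    return bulletPool.length > 0 ? bulletPool.pop() : { x: 0, y: 0, alive: false };
}

function releaseBullet(bullet) {
    bullet.alive = false;
    bulletPool.push(bullet); // keep the reference alive so the GC has nothing to collect
}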
Yes, the garbage collector will have a string object containing "Some string" to get rid of. And, in answer to your question, that string assignment will make work for the GC.
Because strings are immutable and are used a lot, the JS engine has a pretty efficient way of dealing with them. You should not notice any pauses from garbage collecting a few strings. The garbage collector has work to do all the time in the normal course of javascript programming. That's how it's supposed to work.
If you are observing pauses from GC, I rather doubt it's from a few strings. There is more likely a much bigger issue going on. Either you have thousands of objects needing GC or some very complicated task for the GC. We couldn't really speculate on that without study of the overall code.
This should not be a concern unless you are running some enormous loop and dealing with tens of thousands of objects. In that case, one might want to program a little more carefully to minimize the number of intermediate objects that are created. But, absent that level of objects, you should first write clear, reliable code and then optimize for performance only when something has shown you that there is a performance issue to worry about.
To answer your question "I'm wondering whether I should worry about assigning string literals when trying to minimize GC pauses": No.
You really don't need to worry about this sort of thing with regard to garbage collection.
GC is only a concern when creating & destroying huge numbers of Javascript objects, or large numbers of DOM elements.
I'm creating a tic-tac-toe game, and one of the functions has to iterate through each of the 9 fields (tic-tac-toe is played on a 3x3 grid). I was wondering which is more efficient (which one is perhaps faster, or which is the preferred way of scripting in such a situation): using two nested for loops like this:
for (var i = 0; i < 3; i++) {
    for (var j = 0; j < 3; j++) {
        checkField(i, j);
    }
}
or hard-coding it like this:
checkField(0, 0);
checkField(0, 1);
checkField(0, 2);
checkField(1, 0);
checkField(1, 1);
checkField(1, 2);
checkField(2, 0);
checkField(2, 1);
checkField(2, 2);
As there are only 9 combinations, it would perhaps be overkill to use two nested for loops, but then again, this is clearer to read. The for loop, however, will increment variables and check whether i and j are smaller than 3 every time as well.
In this example, the time saving at least might be negligible, but what is the preferred way of coding in this case?
Thanks.
Do not hard code 9 lines of the same code!
Readability
Flexibility / Maintenance
Code Length
This is a premature micro-optimization. In this case, always go for the clearer solution, so use the for loops :) And by the way, think about what happens if tomorrow the grid is 4x4 :)
Time savings: negligible. Probably unmeasurable.
Preferred style: nested for loops. Ok, you'll probably never make it a 4x4 or 5x5 or 3d (or 4d!) tic-tac-toe - but it's a good habit to get into. Also easier to see if you forgot something and avoids cut-and-paste errors.
Ironically hard-coding the checks will probably be faster, but (and here's the important bit) meaninglessly so.
As such, what you should really aim for is the maximum clarity of intent for what you're trying to achieve. Also, you should try and make life easier for any future improvements (that may not be carried out by you.) For example, if the tic-tac-toe grid was expanded to 4x4 which solution would be the best?
On this basis I'd be tempted to go with the loop approach along with the appropriate level of commenting, etc.
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
-- Donald Knuth
Definitely the first one. First of all, the performance difference will be absolutely negligible; my guess is that the unrolled program will actually run slower, because it is longer and so takes more time to compile/interpret (plus, additional client, server, and router processing power and bandwidth will be required).
Secondly, and as a generalization, you don't know which version would be faster. Maybe some interpreter would increment registers for the first version, but load the parameters from memory (waaay slower) for the second one?
Especially in the case of JavaScript, you have absolutely no fixed specification on how fast (future!) interpreters and compilers work, so this "optimization" is absolutely pointless and confusing other programmers working with your code at best.
Please don't hardcode it. Never do something like that.
More than one time? Use a loop.
Also, you are worrying about a problem you don't have, really.
As you mentioned, the time saving will be negligible. Even if you made the grid 100 by 100, you still wouldn't see any difference.
If we go a bit larger though, say 10,000 by 10,000, we might see some difference. I wonder what it would be, because compilers and optimisers are very good, and especially in a loop the environment might speed things up by using the information that the function will be called several times.
Why don't you try it out and share your results with us?
In practise, however, I would never recommend going for the second approach. Readability and flexibility is far more important than CPU time. And optimising early, as they say, is quite evil in itself because it obfuscates the code and introduces a lot of unnecessary complexity without really contributing to performance.
Imagine I had a variable called X.
Let's say every 5 seconds I wanted to make X = true. (it could be either true or false in between these 5 seconds, but gets reset to true when the 5 seconds are up).
Would it be more efficient to check if the value is already true, then if not, reassign it to true? Or just have X = true?
In other words, which would run faster?
if (x == false) {
    x = true;
}
vs
x = true;
On one hand, the first program won't mutate the variable if it doesn't have to. On the other hand, the second program doesn't need to check what X is equal to; it dives straight in.
It nearly always doesn't matter. Write the code that is easiest to understand and maintain. Only optimize it if necessary.
The best way to be sure is to test it. Profile your code.
Which is faster might depend on the browser.
Which is faster depends on whether the variable is usually true or usually false.
Having said that, I'd guess in most scenarios setting a variable without testing it will be faster.
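If you do want to measure it yourself, a crude timing harness is enough to see whether it matters at all (a sketch; the browser's profiler is more reliable):

var x = false;

var t0 = Date.now();
for (var i = 0; i < 10000000; i++) {
    if (x == false) { x = true; }
    x = false; // reset so every iteration does comparable work
}
var withCheck = Date.now() - t0;

var t1 = Date.now();
for (var j = 0; j < 10000000; j++) {
    x = true;
    x = false;
}
var withoutCheck = Date.now() - t1;

alert("check-then-set: " + withCheck + "ms, plain set: " + withoutCheck + "ms");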
Really depends on your data :)
If x == false 90% of the time, then a straight assignment to x would be faster.
This is one of those places where you probably don't want to worry about efficiency, and if you really do, profile it.
Disclaimer/Warning:
This is a micro-optimization, and will never affect the efficiency of your program in a way that is measurable by users. If you turn off all compiler optimizations, and run an excellent profiler, you may be able to quantify the effects - but no user will ever notice.
This is especially true for your situation, where the code in question is only run every few seconds. The time spent profiling would probably be better spent improving other parts of your application.
Also, in these situations readability should always prevail over non-bottleneck micro-optimizations (although my answer below takes only runtime efficiency into account, as requested). Therefore my recommended code for you to use in this situation is x=true, since it's the easiest to read and understand.
Finally, if adding the check will improve speed, the compiler probably already knows that and will do it for you, so you can't go wrong with x=true (that's why you should turn off optimizations before running the profiler).
Answer:
The only true way to figure this out is by profiling. You may find that the 0 test (x==false) basically takes no time at all, and therefore it is worth including due to the time it saves when x turns out to be true. Or you may find that the test takes long enough that it wastes too much time when x turns out to be false.
My guess is that the test is unnecessary. That's because 0-testing and other bitwise operations (and, or, etc.) are all so fast that I usually treat them as taking the same elementary amount of time. And if 0-testing takes the same amount of time as an OR operation (setting to true), then the 0-test is a redundant waste of time. Profiling could prove me wrong of course, and my guess is based on loose assumptions about bitwise operations, so if you choose to run a profiler and figure this out I'd definitely be interested in the results.
The efficiency you are trying to attain by this is minute compared to the efficiency attained by the quality of your overall design.
I've been looking through a lot of JavaScript optimization advice, and most of it talks about string concatenation and a few other big ones found here, but I figured there had to be more details you can optimize for when speed is critical and those pieces of code are executed very frequently.
Say you run this code for some reason: (unlikely, I know, but bear with me)
for (var i = 0; i < 100000000000; i++) {
    // Do stuff
}
And there's no way of getting around having a loop that big... You're going to want to make sure that all the stuff you're doing in that loop is optimized to the point that you can't optimize it anymore... or your website will hang.
Edit: I'm not necessarily talking about a loop; what about a function that's called repeatedly, such as an onmousemove handler? Although in most cases we shouldn't need to use onmousemove, there are some cases that do. This question is for those cases.
We're using jQuery as our JS library.
So what I would like is tips for optimizing, but only the more uncommon ones
- e.g. speed differences between switch and if-else
If you'd like to see the more common ones, you can find them here:
Optimizing Javascript for Execution Speed
Javascript Tips and Tricks; Javascript Best Practices
Optimize javascript pre-load of images
How do you optimize your Javascript
Object Oriented Javascript best practices
"And there's no way of getting around having a loop that big... "
In the real world of RIA, you HAVE to get around the big loops. Just as important as optimization is learning how to break large loops into small loops and giving the browser time to deal with its UI. Otherwise you'll give your users a bad experience and they won't come back.
So I'd argue that BEFORE you learn funky JS optimizations, you should know how to break a large loop into chunks called by setTimeout() and display a progress bar (or let animated GIFs loop).
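A minimal sketch of that chunking pattern (processItem and updateProgressBar are hypothetical):

function processInChunks(total, chunkSize) {
    var i = 0;
    function doChunk() {
        var end = Math.min(i + chunkSize, total);
        for (; i < end; i++) {
            processItem(i); // the per-item work
        }
        if (i < total) {
            updateProgressBar(i / total);
            setTimeout(doChunk, 0); // yield to the browser before the next chunk
        }
    }
    doChunk();
}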
Perceived speed is often more important than actual speed. The world of the client is different from the world of the server.
When animating, learn how to find out if you're running on a lame browser (usually IE) and aim for a lower framerate (or just don't animate). I can get some animations to go 90fps in a good browser but just 15fps in IE. You can test for the browser, but it's usually better to use timeouts and the clock to see how animations are performing.
Also, for genuine speedups, learn how to use web workers in Gears and in newer browsers.
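In browsers that support them, a Web Worker sketch looks roughly like this; the heavy loop runs off the UI thread (worker.js is a hypothetical file name):

// main page: spawn the worker and receive the result without blocking the UI
var worker = new Worker("worker.js");
worker.onmessage = function (e) {
    alert("result: " + e.data);
};
worker.postMessage(100000000); // number of iterations to run

// worker.js: the heavy loop lives here
onmessage = function (e) {
    var sum = 0;
    for (var i = 0; i < e.data; i++) {
        sum += i; // stand-in for real work
    }
    postMessage(sum);
};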
You can speed up this mofo thus:
for (var i = 100000000; i--;) {
    // Do stuff
}
This reverses the loop so that each iteration checks only i-- (i.e. whether the decremented value is still truthy) instead of first evaluating i < 100000000 and then checking that the result is true. Performance gain of 50% in most browsers, according to the talk.
Saw this in a great Google Code talk at http://www.youtube.com/watch?v=mHtdZgou0qU
The talk contains some other great tricks.
Good luck!
If it doesn't need to be synchronous, convert the loop into a recursive implementation with setTimeout calls:
for (var i = 0; i < 100000000000; i++) {
    // Do stuff
}
It can probably be written as:
function doSomething(n) {
    if (n === 0) return; // all iterations done
    // Do stuff for this iteration
    setTimeout(function () { doSomething(n - 1); }, 0);
}
OK, this might not be a good example, but you get the idea. This way, you convert a long synchronous operation into an asynchronous operation that doesn't hang the browser. Very useful in certain scenarios where something doesn't need to be done right away.
Using split & join instead of replace:
// str.replace("needle", "hay"); (a string pattern replaces only the first match)
str.split("needle").join("hay"); // replaces every occurrence
Store long reference chains in local variables:
function doit() {
    // Instead of:
    //   foo.bar.moo.goo();
    //   alert(foo.bar.moo.x);
    var moo = foo.bar.moo;
    moo.goo();
    alert(moo.x);
}
After seeing a few good answers by the people here, I did some more searching and found a few to add:
These are tips for optimizing JavaScript when you're looking to get down to the very little details, things that in most cases won't matter, but in some cases will make all the difference:
Switch vs. Else If
A commonly used tactic to wring whatever overhead might be left out of a large group of simple conditional statements is replacing If-Then-Else's with Switch statements.
Just in case you wanted to see benchmarking, you can find it here.
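For illustration, the transformation looks like this (doA, doB, and doDefault are placeholders):

// If-Then-Else chain:
if (key === "a") { doA(); }
else if (key === "b") { doB(); }
else { doDefault(); }

// Equivalent Switch:
switch (key) {
    case "a": doA(); break;
    case "b": doB(); break;
    default: doDefault();
}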
Loop Unrolling
To unroll a loop, you have it do more than one of the same step per iteration and increment the counter variable accordingly. This helps a lot because you then decrease the number of times you check the loop condition overall. You must be careful when doing this, though, because you may end up overshooting bounds.
See details and benchmarking here.
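A sketch of a 4x unrolled loop (arr and doStuff are placeholders; this version assumes arr.length is a multiple of 4, which is exactly the overshooting hazard mentioned above):

// Plain loop: one condition check per element.
for (var i = 0; i < arr.length; i++) {
    doStuff(arr[i]);
}

// Unrolled 4x: one condition check per four elements.
for (var i = 0; i < arr.length; i += 4) {
    doStuff(arr[i]);
    doStuff(arr[i + 1]);
    doStuff(arr[i + 2]);
    doStuff(arr[i + 3]);
}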
Reverse Loop Counting
Reverse your loop so that it counts down instead of up. I have also seen in various documents about optimization that comparing a number to zero is much quicker than comparing it to another number, so if you decrement and compare to zero it should be faster.
See more details and benchmarking here.
Duff's Device
It's simple, but complicated to grasp at first. Read more about it here.
Make sure to check out the improved version further down that page.
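For reference, the JavaScript formulation usually looks something like this (a sketch assuming items.length > 0; items and doStuff are placeholders). The switch deliberately falls through, so a partial first pass handles the leftover items and every subsequent pass handles eight:

var len = items.length;
var iterations = Math.ceil(len / 8);
var startAt = len % 8;
var i = 0;

do {
    switch (startAt) {
        case 0: doStuff(items[i++]);
        case 7: doStuff(items[i++]);
        case 6: doStuff(items[i++]);
        case 5: doStuff(items[i++]);
        case 4: doStuff(items[i++]);
        case 3: doStuff(items[i++]);
        case 2: doStuff(items[i++]);
        case 1: doStuff(items[i++]);
    }
    startAt = 0;
} while (--iterations > 0);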
The majority of this information was quoted directly from here: JavaScript Optimization. It's interesting: since it's such an old site, it looks at optimization from the perspective of the browser processing power available back then. Although the benchmarks recorded there are for IE 5.5 and Netscape 4.73, their benchmarking tools give accurate results for the browser you're using.
For the people who think these details don't matter, I think it says a bit about the way people perceive the power of our advancing technologies. Just because our browsers process many times faster than they used to doesn't necessarily mean that we should abuse that processing power.
I'm not suggesting you spend hours optimizing two lines of code for 0.005ms, but if you keep some of these techniques in mind and implement them where appropriate, they will contribute to a faster web. After all, there are still many people using IE 6, so it would be wrong to assume everyone's browser can handle the same processing.
Which JavaScript engine are we supposed to be targeting? If you're talking about such extreme optimisation, it makes a big difference. For starters, I'll point out that the array.join() trick for string concatenation is only really applicable to Microsoft's JScript engine; it can actually give worse performance on other JS engines.