'Fixed' for loop - what is more efficient? - javascript

I'm creating a tic-tac-toe game, and one of the functions has to iterate through each of the 9 fields (tic-tac-toe is played on a 3x3 grid). I was wondering what is more efficient (which one is perhaps faster, or what is the preferred way of scripting in such situation) - using two for nested loops like this:
for(var i=0; i<3; i++) {
    for(var j=0; j<3; j++) {
        checkField(i, j);
    }
}
or hard-coding it like this:
checkField(0, 0);
checkField(0, 1);
checkField(0, 2);
checkField(1, 0);
checkField(1, 1);
checkField(1, 2);
checkField(2, 0);
checkField(2, 1);
checkField(2, 2);
As there are only 9 combinations, it might be overkill to use two nested for loops, but then again the loop is clearer to read. The for loop, however, will increment the variables and check whether i and j are smaller than 3 on every iteration as well.
In this example, the time saving at least might be negligible, but what is the preferred way of coding in this case?
Thanks.

Do not hard code 9 lines of the same code!
Readability
Flexibility / Maintenance
Code Length

This is a premature micro-optimization. In this case always go for the clearer solution - so use the for loops :) And by the way, think about what happens if tomorrow the grid is 4x4 :)
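For instance, a minimal sketch of how the loop version generalizes (SIZE is a name introduced here for illustration; checkField is the function from the question):

var SIZE = 3; // board dimension; a 4x4 board only needs SIZE = 4
for (var i = 0; i < SIZE; i++) {
    for (var j = 0; j < SIZE; j++) {
        checkField(i, j);
    }
}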

Time savings: negligible. Probably un-measurable.
Preferred style: nested for loops. Ok, you'll probably never make it a 4x4 or 5x5 or 3d (or 4d!) tic-tac-toe - but it's a good habit to get into. Also easier to see if you forgot something and avoids cut-and-paste errors.

Ironically hard-coding the checks will probably be faster, but (and here's the important bit) meaninglessly so.
As such, what you should really aim for is the maximum clarity of intent for what you're trying to achieve. Also, you should try and make life easier for any future improvements (that may not be carried out by you.) For example, if the tic-tac-toe grid was expanded to 4x4 which solution would be the best?
On this basis I'd be tempted to go with the loop approach along with the appropriate level of commenting, etc.

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
-- Donald Knuth
Definitely the first one. First of all, the performance difference will be absolutely negligible - my guess is the unrolled program will run slower because more time is required to compile/interpret it because it is longer (plus, additional client, server and router processing power and bandwidth will be required).
Secondly, and as a generalization, you don't know which version would be faster. Maybe some interpreter would increment registers for the first version, but load the parameters from memory (waaay slower) for the second one?
Especially in the case of JavaScript, you have absolutely no fixed specification on how fast (future!) interpreters and compilers work, so this "optimization" is absolutely pointless and confusing other programmers working with your code at best.

Please don't hardcode it. Never do something like that.
More than one time? Use a loop.
Also, you are worrying about a problem you don't have, really.

As you mentioned, the time saving will be negligible. Even if you expanded that grid to 100 by 100, you still wouldn't see any difference.
If we go a bit larger though, say 10,000 by 10,000, we might see some difference. I wonder what that might be, because compilers and optimisers are very good, and especially in a loop the environment might speed things up because it has the information that the function will be called several times.
Why don't you try it out and share your results with us?
In practice, however, I would never recommend going for the second approach. Readability and flexibility are far more important than CPU time. And optimising early, as they say, is quite evil in itself because it obfuscates the code and introduces a lot of unnecessary complexity without really contributing to performance.
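If you do want to try it out, here is a rough sketch of a timing harness (console.time/console.timeEnd are standard console methods; checkField is assumed to be defined as in the question, and the repeat count is arbitrary, just large enough to give measurable numbers):

console.time("nested loops");
for (var run = 0; run < 100000; run++) {
    for (var i = 0; i < 3; i++) {
        for (var j = 0; j < 3; j++) {
            checkField(i, j);
        }
    }
}
console.timeEnd("nested loops");

console.time("hard-coded");
for (var run2 = 0; run2 < 100000; run2++) {
    checkField(0, 0); checkField(0, 1); checkField(0, 2);
    checkField(1, 0); checkField(1, 1); checkField(1, 2);
    checkField(2, 0); checkField(2, 1); checkField(2, 2);
}
console.timeEnd("hard-coded");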

It seems that the phrase "Premature Optimization" is the buzz-word of the day. For some reason, iphone programmers in particular seem to think of avoiding premature optimization as a pro-active goal, rather than the natural result of simply avoiding distraction. The problem is, the term is beginning to be applied more and more to cases that are completely inappropriate.
For example, I've seen a growing number of people say not to worry about the complexity of an algorithm, because that's premature optimization (e.g. Help sorting an NSArray across two properties (with NSSortDescriptor?)). Frankly, I think this is just laziness, and an affront to disciplined computer science.
But it has occurred to me that maybe considering the complexity and performance of algorithms is going the way of assembly loop unrolling, and other optimization techniques that are now considered unnecessary.
What do you think? Are we at the point now where deciding between an O(n^n) and O(n!) complexity algorithm is irrelevant? What about O(n) vs O(n*n)?
What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it?
EDIT
I know my description is a bit general, but I'm interested in specific, practical rules or best practices people use to avoid "pre-mature optimization", particularly on the iphone platform.
Answering this requires you to first answer the question of "what is pre-mature optimization?". Since that definition clearly varies so greatly, any meaningful answer requires the author to define the term. That's why I don't really think this is a CW question. Again, if people disagree, I'll change it.
What is premature optimization?
Premature optimization is the process of optimizing your code (usually for performance) before you know whether or not it is worthwhile to do so. An example of premature optimization is optimizing the code before you have profiled it to find out where the performance bottleneck is. An even more extreme example of premature optimization is optimizing before you have run your program and established that it is running too slowly.
Are we at the point now where deciding between an O(n^n) and O(n!) complexity algorithm is irrelevant? What about O(n) vs O(n*n)?
It depends on the size of n and how often your code will get called.
If n is always less than 5 then the asymptotic performance is irrelevant. In this case the size of the constants will matter more. A simple O(n * n) algorithm could beat a more complicated O(n log n) algorithm for small n. Or the measurable difference could be so small that it doesn't matter.
I still think that there are too many people that spend time optimizing the 90% of code that doesn't matter instead of the 10% that does. No-one cares if some code takes 10ms instead of 1ms if that code is hardly ever called. There are times when just doing something simple that works and moving on is a good choice, even though you know that the algorithmic complexity is not optimal.
Every hour you spend optimizing rarely called code is one hour less that you can spend on adding features people actually want.
My vote: most people optimize what they think is the weak point, but they don't profile.
So regardless of how well you know algorithms and regardless of how well you've written your code, you don't know what else is happening outside your module. What do the APIs you've called do behind the scenes? Can you always guarantee that the particular order of operations is the fastest?
This is what is meant by premature optimization. Anything that you think is an optimization but has not been rigorously tested by way of a profiler or other definitive tool (counting clock cycles per operation is not a bad thing, but it only tells you performance characteristics; the number of actual calls usually matters more than raw timing) is a premature optimization.
#k_b says it well above me, and it's what I say too. Make it right, make it simple, then profile, then tweak. Repeat as necessary.
Order of priority:
1. It has to work
2. It has to be maintainable
3. It has to be machine-efficient
That was from the first week of my first programming course. In 1982.
"Premature optimization" is any time Priority 3 has been considered before Priority 1 or 2.
Note that modern programming techniques (abstractions and interfaces) are designed to make this prioritization easier.
The one gray area: during the initial design, you do have to check that your solutions are not inherently cripplingly slow. Otherwise, don't worry about performance until you at least have some working code.
For some people, optimization is part of the fun of writing code, premature or not. I like to optimize, and restrain myself for the sake of legibility. The advice not to optimize so much is for the people that like to optimize.
iphone programmers in particular seem to think of avoiding premature optimization as a pro-active goal
The majority of iPhone code is UI related. There is not much need to optimize. There is a need not to choose a poor design that will result in bad performance, but once you start coding up a good design there is little need for optimization. So in that context, avoiding optimization is a reasonable goal.
What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it?
Using the Agile approach (rapid iterations, with requirements refined through interaction with users) is helpful: the awareness that the current interface will probably change drastically after the next session with the users makes it easier to focus on developing the essential features of the application rather than on performance.
If not, a few iterations where you spent a lot of time optimizing a feature that was entirely discarded after the session with the user should give you the message.
Algorithm complexity, and even choice, is an issue that should be hidden behind an interface. For example, a List is an abstraction that can be implemented various ways with different efficiencies for different situations.
Sometimes avoiding premature optimization can aid design, because if you design with the idea that you will need to optimize later, then you are more inclined to develop at the abstract level (e.g. a list) rather than at the implementation level (e.g. an array or a linked list).
This can result in simpler, more readable code, in addition to avoiding distraction. If you program to the interface, different implementations can be swapped in later to optimize. Optimizing prematurely risks exposing implementation details too early and coupling them to other software components that should not see those details.
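As a rough sketch of what programming to the interface can look like in JavaScript (the names createArrayList, add and forEach are illustrative, not from any particular library):

function createArrayList() {
    var items = [];
    return {
        add: function (x) { items.push(x); },
        forEach: function (fn) { items.forEach(fn); }
    };
}

// Callers depend only on add/forEach, so a linked-list-backed factory exposing
// the same two methods could later replace createArrayList without touching them.
var list = createArrayList();
list.add(1);
list.add(2);
list.forEach(function (x) { console.log(x); });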
What practical rules do you use to consciously or unconsciously avoid it?
One way to avoid unnecessary optimization is to consider the relative cost benefit:
A) Cost for programmer to optimize code + cost to test said optimization + cost of maintaining more complex code resulting from said optimization
vs.
B) Cost to upgrade server on which software runs or simply buy another one (if scalable)
If A >> B consider whether it's the right thing to do. [Ignoring for the moment the environmental cost of B which may or may not be a concern to your organization]
This applies more generally than just to premature optimization but it can help instill in your developers a sense that spending their time doing optimization work is a cost and should not be undertaken unless there is a real measurable difference in something that actually matters like: number of servers required or customer satisfaction through improved response times.
If management can't see the benefit in reduced $ and customers can't see the benefit in better response times, ask yourself why you are doing it.
I think this is a question of common sense. There's a need to understand the big picture, or even what's happening under the hood, to be able to consider when a so-called "premature" move is justified.
I've worked with solutions where web service calls were necessary to calculate new values based on the contents of a local list. The way this was implemented was by making a web request per value. It wouldn't be premature optimization to send several values at a time. The same thing goes for the use of database transactions for multiple operations, i.e. multiple inserts.
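A hedged sketch of the batching idea (the values array, the endpoint name and the payload shape are all invented here for illustration; any real API would differ):

var values = [1, 2, 3];

// Instead of one request per value...
// values.forEach(function (v) { fetch('/api/calc?value=' + v); });
// ...send all values in a single request and get all results back at once:
fetch('/api/calc', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ values: values })
}).then(function (response) {
    return response.json();
}).then(function (results) {
    // assumed: results contains one computed value per input value
});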
When it comes to algorithms, initially the most important thing is to get it right and as simple as possible. Worrying about stack vs heap issues at that point would be madness.

Optimizing JavaScript loop makes it slower

In the book JavaScript Patterns, Stoyan Stefanov claims that the common way of looping in JavaScript
for (i = 0, max = myarray.length; i < max; i++) {
    // do something with myarray[i]
}
can be optimized by using this pattern instead
for (i = myarray.length; i--;) {
    // do something with myarray[i]
}
I found that to be interesting so I decided to test it in the real world by applying the technique to a performance intensive loop showed in this blog post about doing pixel manipulation with canvas. The benchmarks comparing the regular code with the "optimized" code can be seen here.
The interesting thing is that the supposedly optimized loop is actually slower than the regular way of looping in both Opera and Firefox. Why is that?
This kind of micro-optimization always has very limited validity. Chances are that the VM implementations include optimizations for the "common ways" of doing things that go beyond what you can do on the language level.
Which is why micro-optimizations are usually a waste of time. Beginners tend to obsess over them and end up writing code that is hard to maintain AND slow.
Most of the ways to try to optimise a loop come from C, and from a time when compilers were simpler and processors executed one instruction after the other.
Modern processors run the code very differently, so optimising specific instructions doesn't have the same effect now.
For Javascript the changes are quite rapid now. It has gone from being interpreted to being compiled, which makes a huge performance difference. The compilers are very different between browsers, and they change with each new browser version, so something that is faster in one browser today, may be slower tomorrow.
I have tested some different ways of optimising loops, and currently there is very little difference in performance: http://jsperf.com/loopoptimisations
One thing that can be said for certain though, is that the regular way of writing loops is the most common, so that is what all compilers will be focusing on optimising.
To begin with, I see no reason why the second should be much faster than the first. The difference between comparing with zero and comparing with another number is something that might matter in extremely tight loops in compiled code, but even there it's likely a cargo-cult optimisation most of the time (read Richard Feynman's "Cargo Cult Science" if you don't get the reference; if nothing else it's a good read, but there are also more than a few cases in programming of the same tendency to copy something that worked well once into a case where there's no real reason to suppose it will help).
I could see the following being slower:
for (i = 0; i < myarray.length; i++) {
    // do something with myarray[i]
}
But I could also see it not being slower, if the engine did the optimisation of hoisting the length check for you, or the implementation was such that checking the length and checking a variable was about equivalent cost anyway.
I could also see either that or the first code example you give, or perhaps both, being something that a given script-engine optimises - it is after all a very common idiom in js, and inherently involves looping, so it would be a sensible thing to try to detect and optimise for in a script engine.
Beyond such conjectures though, we can't really say anything about this beyond "because one works better in that engine than the other, that's why" without getting to the level below javascript and examining the implementation of the engine. Your very results would suggest that the answer won't be the same with each engine (after all, one did correspond more with what you expected).
Now, it's worth noting that in each case the results are quite close to each other anyway. If you found just one or two browsers that are currently reasonably popular where the change did indeed optimise, it could still be worth it.
If you're interested in whether it was ever worth it, or was just an assumption to begin with, you could try to get a copy of Netscape 2 (the first JavaScript browser ever, after all) and run some code to test the approach on it.
Edit: If you do try that sort of experiment, another one is to try deliberately buggy loops that overshoot the array bounds by one. One possible optimisation for the engine is to realise you're walking the array and check once, up front, whether where you will end is out of range; if so, the two loop styles could give different results when the loop would eventually error.

Circular reference breakers for JSON written in JavaScript

I know only one, the cycle.js from Crockford's JSON-JS, but it is recursive and appears very slow: it takes 2-5 seconds to JSON.stringify(JSON.decycle(random_graph_with_30_vertices)) and hits the recursion depth limit for larger graphs. Are there better non-recursive alternatives?
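To make the problem concrete, here is a minimal cyclic structure of the kind being serialized (JSON.decycle comes from the cycle.js mentioned above; the exact output shown in the comment is an assumption about its path-reference format):

var node = { name: "a" };
node.self = node;            // the object now references itself

// JSON.stringify(node);     // throws: Converting circular structure to JSON

// With cycle.js loaded, the cycle is replaced by a path reference, roughly:
// JSON.stringify(JSON.decycle(node)) -> '{"name":"a","self":{"$ref":"$"}}'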
Try Cereal
It is not recursive. The output format is less readable, but it's still actually JSON. I believe it's fairly fast, but I've not benchmarked it against cycle. It has been used in anger in a few projects. It also solves more than just cycle-detection which may or may not be to your liking.

what's more efficient? checking == or just mutating the variable?

Imagine I had a variable called X.
Let's say every 5 seconds I wanted to make X = true. (it could be either true or false in between these 5 seconds, but gets reset to true when the 5 seconds are up).
Would it be more efficient to check if the value is already true, then if not, reassign it to true? Or just have X = true?
in other words, which would run faster?
if (x == false) {
    x = true;
}
vs
x = true;
On one hand, the first program won't mutate the variable if it doesn't have to. On the other hand, the second program doesn't need to check what X is equal to; it dives straight in.
It nearly always doesn't matter. Write the code that is easiest to understand and maintain. Only optimize it if necessary.
The best way to be sure is to test it. Profile your code.
Which is faster might depend on the browser.
Which is faster depends on whether the variable is usually true or usually false.
Having said that, I'd guess in most scenarios setting a variable without testing it will be faster.
Really depends on your data :)
If x == false 90% of the time, then a straight assignment to x would be faster.
This is one of those places where you probably don't want to worry about efficiency, and if you really do, profile it.
Disclaimer/Warning:
This is a micro-optimization, and will never affect the efficiency of your program in a way that is measurable by users. If you turn off all compiler optimizations, and run an excellent profiler, you may be able to quantify the effects - but no user will ever notice.
This is especially true for your situation, where the code in question is only run every few seconds. The time spent profiling would probably be better spent improving other parts of your application.
Also, in these situations readability should always prevail over non-bottleneck micro-optimizations (although my answer below takes only runtime efficiency into account, as requested). Therefore my recommended code for you to use in this situation is x=true, since it's the easiest to read and understand.
Finally, if adding the check will improve speed, the compiler probably already knows that and will do it for you, so you can't go wrong with x=true (that's why you should turn off optimizations before running the profiler).
Answer:
The only true way to figure this out is by profiling. You may find that the 0 test (x==false) basically takes no time at all, and therefore it is worth including due to the time it saves when x turns out to be true. Or you may find that the test takes long enough that it wastes too much time when x turns out to be false.
My guess is that the test is unnecessary. That's because 0-testing and other bitwise operations (and, or, etc.) are all so fast that I usually treat them as taking the same elementary amount of time. And if 0-testing takes the same amount of time as an OR operation (setting to true), then the 0-test is a redundant waste of time. Profiling could prove me wrong of course, and my guess is based on loose assumptions about bitwise operations, so if you choose to run a profiler and figure this out I'd definitely be interested in the results.
The efficiency you are trying to attain here is minute compared to the efficiency attained through the quality of your overall design.

Javascript: Optimizing details for Critical/Highly Processed Javascript code

I've been looking through a lot of JavaScript optimization advice, and most of it talks about string concatenation and a few other big ones found here, but I figured there had to be more details you can optimize when speed is critical and those pieces of code are executed very heavily.
Say you run this code for some reason: (unlikely, I know, but bear with me)
for( var i = 0; i < 100000000000; i++ ) {
    //Do stuff
}
And there's no way of getting around having a loop that big... You're going to want to make sure that all the stuff you're doing in that loop is optimized to the point that you can't optimize it anymore... or your website will hang.
Edit: I'm not necessarily talking about a loop; what about a function that's repeatedly called, such as onmousemove? Although in most cases we shouldn't need to use onmousemove, there are some cases that do. This question is for those cases.
Using JQuery as our JS library
So what I would like is tips for optimizing, but only the more uncommon ones
- ie. Speed differences between switch or if-else
If you'd like to see the more common ones, you can find them here:
Optimizing Javascript for Execution Speed
Javascript Tips and Tricks; Javascript Best Practices
Optimize javascript pre-load of images
How do you optimize your Javascript
Object Oriented Javascript best practices
"And there's no way of getting around having a loop that big... "
In the real world of RIA, you HAVE to get around the big loops. As important as optimization is learning how to break large loops into small loops, and giving time to the browser to deal with its UI. Otherwise you'll give your users a bad experience and they won't come back.
So I'd argue that BEFORE you learn funky JS optimizations, you should know how to break a large loop into chunks called by setTimeout() and display a progress bar (or let animated GIFs loop).
Perceived speed is often more important than actual speed. The world of the client is different from the world of the server.
When animating, learn how to find out if you're running on a lame browser (usually IE) and settle for a lower framerate (or just don't animate). I can get some animations to go 90fps in a good browser but just 15fps in IE. You can test for the browser, but it's usually better to use timeouts and the clock to see how animations are performing.
Also, for genuine speedups, learn how to use web workers in Gears and in newer browsers.
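A rough sketch of breaking a long loop into chunks with setTimeout, as described above (processInChunks, chunkSize, doStuff and onDone are placeholder names, not real APIs):

function processInChunks(total, chunkSize, doStuff, onDone) {
    var i = 0;
    function runChunk() {
        var end = Math.min(i + chunkSize, total);
        for (; i < end; i++) {
            doStuff(i);
        }
        // a progress bar could be updated here; the browser gets to repaint between chunks
        if (i < total) {
            setTimeout(runChunk, 0);   // yield to the event loop, then continue
        } else if (onDone) {
            onDone();
        }
    }
    runChunk();
}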
You can speed up this mofo thus:
for (var i = 100000000; i--;) {
    //Do stuff
}
This reverses the loop and checks only i-- instead of evaluating i < 100000000 and testing whether that comparison is true. Performance gain of 50% in most browsers.
Saw this in a great Google Code talk at http://www.youtube.com/watch?v=mHtdZgou0qU
The talk contains some other great tricks.
Good luck!
If it doesn't need to be synchronous, convert the loops into a recursive implementation with setTimeout calls
for( var i = 0; i < 100000000000; i++ ) {
    //Do stuff
}
can probably be written as
function doSomething(n) {
    if (n === 0) return;                              // all iterations done
    //Do stuff with the current value of n here
    setTimeout(function () { doSomething(n - 1); }, 0);
}
OK, this might not be a good example, but you get the idea. This way, you convert long synchronous operations into an asynchronous operation that doesn't hang the browser. Very useful in certain scenarios where something doesn't need to be done right away.
Using split & join instead of replace:
//str.replace("needle", "hay");
str.split("needle").join("hay");
Store long reference chains in local variables:
function doit() {
    //foo.bar.moo.goo();
    //alert(foo.bar.moo.x);
    var moo = foo.bar.moo;
    moo.goo();
    alert(moo.x);
}
After seeing a few good answers by the people here, I did some more searching and found a few to add:
These are tips on Javascript optimizing when you're looking to get down to the very little details, things that in most cases wouldn't matter, but some it will make all the difference:
Switch vs. Else If
A commonly used tactic to wring whatever overhead might be left out of a large group of simple conditional statements is replacing If-Then-Else's with Switch statements.
Just in case you wanted to see benchmarking, you can find it here.
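For reference, the same dispatch written both ways (key and the handler functions are illustrative placeholders):

// if-else chain:
if (key === "a") { handleA(); }
else if (key === "b") { handleB(); }
else if (key === "c") { handleC(); }
else { handleDefault(); }

// switch version:
switch (key) {
    case "a": handleA(); break;
    case "b": handleB(); break;
    case "c": handleC(); break;
    default:  handleDefault();
}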
Loop Unrolling
To unroll a loop, you have it do more than one of the same step per iteration and increment the counter variable accordingly. This helps a lot because you then decrease the number of times you are checking the condition for the loop overall. You must be careful when doing this though, because you may end up overshooting bounds.
See details and benchmarking here.
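A minimal sketch of unrolling four iterations per pass (items and process are placeholders; the leftover items are handled first so the unrolled loop never overshoots the bounds):

var n = items.length;
var i = 0;
var remainder = n % 4;
while (remainder--) {
    process(items[i++]);     // handle the items that don't fit a full block of four
}
while (i < n) {
    process(items[i++]);     // four items per condition check
    process(items[i++]);
    process(items[i++]);
    process(items[i++]);
}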
Reverse Loop Counting
Reverse your loop so that it counts down instead of up. I have also seen in various documents about optimization that comparing a number to zero is much quicker than comparing it to another number, so if you decrement and compare to zero it should be faster.
See more details and benchmarking here.
Duff's Device
It's simple, but complicated to grasp at first. Read more about it here.
Make sure to check out the improved version further down that page.
The majority of this information was quoted directly from here: JavaScript Optimization. It's interesting, since it's such an old site it looks at optimization from the perspective of the browser processing power they had back then. Although the benchmarks they have recorded there are for IE 5.5 and Netscape 4.73, their benchmarking tools give accurate results for the browser you're using.
For the people who think these details don't matter: I think it says something about the way we perceive the power of the advancing technologies we have. Just because our browsers process many times faster than they used to doesn't necessarily mean we should abuse that processing power.
I'm not suggesting you spend hours optimizing two lines of code for 0.005ms, but if you keep some of these techniques in mind and implement them where appropriate, it will contribute to a faster web. After all, there are still many people using IE 6, so it would be wrong to assume everyone's browser can handle the same processing.
Which JavaScript engine are we supposed to be targeting? If you're talking about such extreme optimisation, it makes a big difference. For starters, I'll point out that the array.join() trick for string concatenation is only really applicable to Microsoft's JScript engine; it can actually give worse performance on other JS engines.
