Javascript efficiency: 'for' vs 'forEach' [closed] - javascript

What is the current standard in 2017 in JavaScript for for() loops vs .forEach?
I am currently working my way through Colt Steele's "Web Dev Bootcamp" on Udemy and he favours forEach over for in his teaching. I have, however, searched for various things during the exercises as part of the coursework, and I keep finding recommendations to use a for loop rather than forEach. Most people seem to state that the for loop is more efficient.
Is this something that has changed since the course was written (circa 2015), or are there really pros and cons to each which one will learn with more experience?
Any advice would be greatly appreciated.

for
for loops are much more efficient. The for loop is a looping construct specifically designed to iterate while a condition is true, while also offering a stepping mechanism (generally to increment the iterator). Example:
for (var i = 0, n = arr.length; i < n; ++i) {
  // do something with arr[i]
}
This isn't to suggest that for-loops will always be more efficient, just that JS engines and browsers have optimized them to be so. Over the years there have been compromises as to which looping construct is more efficient (for, while, reduce, reverse-while, etc) -- different browsers and JS engines have their own implementations that offer different methodologies to produce the same results. As browsers further optimize to meet performance demands, theoretically [].forEach could be implemented in such a way that it's faster or comparable to a for.
Benefits:
efficient
early loop termination (honors break and continue; see the sketch after this list)
condition control (i < n can be anything and is not bound to an array's size)
variable scoping (var i leaves i available after the loop ends)
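For instance, a minimal sketch of early termination with break (the array contents here are just an illustration):
var arr = [3, 7, 42, 9];
var found = -1;
for (var i = 0, n = arr.length; i < n; ++i) {
  if (arr[i] === 42) {
    found = i;
    break; // stop looping as soon as the value is located
  }
}
console.log(found); // 2
// [].forEach has no break; you would need .some() or an exception to stop early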
forEach
.forEach is a method that primarily iterates over arrays (other enumerables, such as Map and Set objects, provide their own forEach as well). It is newer and produces code that is subjectively easier to read. Example:
[].forEach((val, index) => {
  // do something with val
});
Benefits:
does not involve variable setup (iterates over each element of the array)
functions/arrow-functions scope the variable to the block
In the example above, val is a parameter of the newly created function, so any variable named val declared before the loop keeps its value after the loop ends (see the sketch after this list).
subjectively more maintainable as it may be easier to identify what the code is doing -- it's iterating over an enumerable; whereas a for-loop could be used for any number of looping schemes
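A minimal sketch of that scoping behavior (the values are just an illustration):
var val = 'outside';
[10, 20, 30].forEach((val, index) => {
  // this val is the callback's own parameter, scoped to the function
  console.log(index, val);
});
console.log(val); // still 'outside' after the loop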
Performance
Performance is a tricky topic which generally requires some experience when it comes to forethought or approach. In order to determine ahead of time (while developing) how much optimization may be required, a programmer needs prior experience with the problem case, as well as a good understanding of the potential solutions.
Using jQuery may be too slow in some cases (an experienced developer may know that), whereas in other cases it is a non-issue, and the library's cross-browser compliance and the ease of performing other tasks (e.g., AJAX, event handling) are worth the development (and maintenance) time saved.
Another example: if performance and optimization were everything, there would be no code other than machine code or assembly. Obviously that isn't the case, as there are many different high-level and low-level languages, each with its own tradeoffs. These tradeoffs include, but are not limited to, specialization, ease and speed of development, ease and speed of maintenance, optimized code, error-free code, and so on.
Approach
If you don't have a good understanding of whether something will require optimized code, it's generally a good rule of thumb to write maintainable code first. From there, you can test and pinpoint what needs more attention when it's required.
That said, certain obvious optimizations should be part of general practice and should not require any thought. For instance, consider the following loop:
for (var i=0; i < arr.length; ++i ){}
For each iteration of the loop, JavaScript retrieves arr.length, a key lookup that costs operations on every cycle. There is no reason why this shouldn't be:
for (var i=0, n=arr.length; i < n; ++i){}
This does the same thing, but only retrieves arr.length once, caching the value in a variable and optimizing your code.

Related

Javascript performance improvement by the way variable is accessed [duplicate]

It seems that the phrase "Premature Optimization" is the buzzword of the day. For some reason, iPhone programmers in particular seem to think of avoiding premature optimization as a pro-active goal, rather than the natural result of simply avoiding distraction. The problem is, the term is beginning to be applied more and more to cases that are completely inappropriate.
For example, I've seen a growing number of people say not to worry about the complexity of an algorithm, because that's premature optimization (e.g. Help sorting an NSArray across two properties (with NSSortDescriptor?)). Frankly, I think this is just laziness, and appalling to disciplined computer science.
But it has occurred to me that maybe considering the complexity and performance of algorithms is going the way of assembly loop unrolling, and other optimization techniques that are now considered unnecessary.
What do you think? Are we at the point now where deciding between an O(n^n) and O(n!) complexity algorithm is irrelevant? What about O(n) vs O(n*n)?
What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it?
EDIT
I know my description is a bit general, but I'm interested in specific, practical rules or best practices people use to avoid "premature optimization", particularly on the iPhone platform.
Answering this requires you to first answer the question of "what is pre-mature optimization?". Since that definition clearly varies so greatly, any meaningful answer requires the author to define the term. That's why I don't really think this is a CW question. Again, if people disagree, I'll change it.
What is premature optimization?
Premature optimization is the process of optimizing your code (usually for performance) before you know whether or not it is worthwhile to do so. An example of premature optimization is optimizing the code before you have profiled it to find out where the performance bottleneck is. An even more extreme example of premature optimization is optimizing before you have run your program and established that it is running too slowly.
Are we at the point now where deciding between an O(n^n) and O(n!) complexity algorithm is irrelevant? What about O(n) vs O(n*n)?
It depends on the size of n and how often your code will get called.
If n is always less than 5 then the asymptotic performance is irrelevant. In this case the size of the constants will matter more. A simple O(n * n) algorithm could beat a more complicated O(n log n) algorithm for small n. Or the measurable difference could be so small that it doesn't matter.
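As a toy harness (my own sketch under assumed inputs, not a measurement), one could compare a simple quadratic sort against the engine's built-in sort on tiny arrays and time it on the target engine:
// Simple O(n * n) insertion sort
function insertionSort(a) {
  for (var i = 1; i < a.length; i++) {
    var v = a[i], j = i - 1;
    while (j >= 0 && a[j] > v) { a[j + 1] = a[j]; j--; }
    a[j + 1] = v;
  }
  return a;
}
var small = [5, 3, 9, 1, 7]; // n is tiny, so constant factors dominate
console.time('insertion');
for (var k = 0; k < 100000; k++) insertionSort(small.slice());
console.timeEnd('insertion');
console.time('built-in');
for (var k = 0; k < 100000; k++) small.slice().sort(function (a, b) { return a - b; });
console.timeEnd('built-in');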
I still think that there are too many people that spend time optimizing the 90% of code that doesn't matter instead of the 10% that does. No-one cares if some code takes 10ms instead of 1ms if that code is hardly ever called. There are times when just doing something simple that works and moving on is a good choice, even though you know that the algorithmic complexity is not optimal.
Every hour you spend optimizing rarely called code is one hour less that you can spend on adding features people actually want.
My vote: most people optimize what they think is the weak point, but they don't profile.
So regardless of how well you know algorithms and regardless of how well you've written your code, you don't know what else is happening outside your module. What do the APIs you've called do behind the scenes? Can you always guarantee that the particular order of operations is the fastest?
This is what is meant by premature optimization. Anything that you think is an optimization but that has not been rigorously tested by way of a profiler or another definitive tool (counting clock cycles per operation is not a bad thing, but it only tells you performance characteristics; the actual number of calls usually matters more than the timing) is a premature optimization.
#k_b says it well above me, and it's what I say too. Make it right, make it simple, then profile, then tweak. Repeat as necessary.
Order of priority:
1. It has to work
2. It has to be maintainable
3. It has to be machine-efficient
That was from the first week of my first programming course. In 1982.
"Premature optimization" is any time Priority 3 has been considered before Priority 1 or 2.
Note that modern programming techniques (abstractions and interfaces) are designed to make this prioritization easier.
The one gray area: during the initial design, you do have to check that your solutions are not inherently cripplingly slow. Otherwise, don't worry about performance until you at least have some working code.
For some people, optimization is part of the fun of writing code, premature or not. I like to optimize, but restrain myself for the sake of legibility. The advice not to optimize so much is for the people who like to optimize.
iPhone programmers in particular seem to think of avoiding premature optimization as a pro-active goal
The majority of iPhone code is UI related. There is not much need to optimize. There is a need not to choose a poor design that will result in bad performance, but once you start coding up a good design there is little need for optimization. So in that context, avoiding optimization is a reasonable goal.
What do you consider "premature
optimization"? What practical rules do
you use to consciously or
unconsciously avoid it?
Using the Agile approach (rapid iterations with refinement of requirements through interactions with users) is helpful here: the awareness that the current interface is probably going to change drastically after the next session with the users makes it easier to focus on developing the essential features of the application rather than its performance.
If not, a few iterations where you spent a lot of time optimizing a feature that was entirely discarded after the session with the user should give you the message.
Algorithm complexity, and even choice, is an issue that should be hidden behind an interface. For example, a List is an abstraction that can be implemented various ways with different efficiencies for different situations.
Sometimes avoiding premature optimization can aid design, because if you design with the idea that you will need to optimize later, then you are more inclined to develop at the abstract level (e.g. a list) rather than at the implementation level (e.g. an array or a linked list).
This can result in simpler and more readable code, in addition to avoiding distraction. If you program to the interface, different implementations can be swapped in later to optimize. Prematurely optimizing carries the risk that implementation details may be prematurely exposed and coupled with other software components that should not see these details.
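A minimal JavaScript sketch of programming to such an interface (the names are purely illustrative): callers depend only on the list operations, so the backing implementation can be swapped later if profiling shows a need.
// A tiny "list" abstraction; client code only uses add/get/size.
function createList() {
  var items = []; // today: a plain array
  return {
    add: function (value) { items.push(value); },
    get: function (index) { return items[index]; },
    size: function () { return items.length; }
  };
}
var list = createList();
list.add('a');
list.add('b');
console.log(list.get(1), list.size()); // 'b' 2
// If profiling later shows a bottleneck, createList can be reimplemented
// (e.g. with a linked structure) without touching client code.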
What practical rules do you use to consciously or unconsciously avoid it?
One way to avoid unnecessary optimization is to consider the relative cost benefit:
A) Cost for programmer to optimize code + cost to test said optimization + cost of maintaining more complex code resulting from said optimization
vs.
B) Cost to upgrade server on which software runs or simply buy another one (if scalable)
If A >> B consider whether it's the right thing to do. [Ignoring for the moment the environmental cost of B which may or may not be a concern to your organization]
This applies more generally than just to premature optimization, but it can help instill in your developers a sense that spending their time on optimization work is a cost, and that it should not be undertaken unless there is a real, measurable difference in something that actually matters, such as the number of servers required or customer satisfaction through improved response times.
If management can't see the benefit in reduced $ and customers can't see the benefit in better response times, ask yourself why you are doing it.
I think this is a question of common sense. There's a need to understand the big picture, or even what's happening under the hood, to be able to consider when a so-called "premature" move is justified.
I've worked with solutions where web service calls were necessary to calculate new values based on the contents of a local list. The way this was implemented was by making a web request per value. It wouldn't be premature optimization to send several values at a time. The same thing goes for the use of database transactions for multiple operations, i.e. multiple inserts.
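A rough sketch of the difference (the endpoint names and payload shape are hypothetical, not from the actual project):
var values = [1, 2, 3, 4]; // contents of the local list (illustrative)
// Naive approach: one web request per value, i.e. n round trips
values.forEach(function (v) {
  fetch('/api/calculate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ value: v })
  });
});
// Batched approach: a single request carrying all the values
fetch('/api/calculate-batch', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ values: values })
});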
When it comes to algorithms, initially the most important thing is to get it right and as simple as possible. Worrying about stack vs heap issues at that point would be madness.

JavaScript faster shift, unshift and splice implementation [closed]

I have read this article:
https://gamealchemist.wordpress.com/2013/05/01/lets-get-those-javascript-arrays-to-work-fast/
At the end of point 6 the author says:
Rq about shift/unshift: beware, those are always O(n) operations (meaning: each operation will take a time proportional to the array length). Unless you really need to, you shouldn't use them. Rather build your own rotating array if you need such a feature.
And in the 7th point:
Rq for shift/unshift users: apply the same principle, with two indexes, to avoid copy/reallocation. One index on the left, one on the right, both starting at the middle of the array. Then you'll again be in O(1) time. Better. Don't forget to re-center the indexes when they are ==.
I was wondering what the author means when he says to build your own rotating array with two indexes, one on the left, one on the right, both starting at the middle of the array. How should these considerations be translated into code (the author doesn't give an example for these use cases)?
Could the principles applied to shift and unshift be applied to Array.prototype.splice too?
EDIT: I have an ordered array of x coordinates going from index 0 (lower x values) to n (higher x values). I would need to call myArray.splice(index, 0, item); several times to insert new x coordinates between the existing ones whenever a coordinate is < a higher one and > a lower one (I can easily find the position through a binary search), and I don't want the array re-indexed every time I call splice because myArray has thousands of elements.
Can it be improved using the principles mentioned by the author of the linked article?
Thanks for the attention.
All performance questions must be answered by coding a very specific solution and then measuring the performance of that solution compared to your alternative with representative data in the browsers you care about. There are very few performance questions that can be answered accurately with an abstract question that does not include precise code to be measured.
There are some common sense items like if you're going to put 1000 items in an array, then yes it is probably faster to preallocate the array to the final length and then just fill in the array values rather than call .push() 1000 times. But, if you want to know how much difference there is and whether it's actually relevant in your particular situation, then you will need to code up two comparisons and measure them in multiple browsers in a tool like http://jsperf.com.
The recommendation in that article to create your own .splice() function seems suspect to me without measuring. It seems very possible that a good native code implementation of .splice() could be faster than one written entirely in Javascript. Again, if you really wanted to know, you would have to measure a specific test case.
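For reference, my own reading of the article's "two index" rotating-array idea (which targets shift/unshift rather than splice) is something like the following sketch; this is an assumption about what the author meant, and it is unmeasured against the native methods:
// A minimal deque-like structure: head and tail indexes start in the
// middle so both shift and unshift avoid re-indexing existing entries.
function createDeque(capacity) {
  var buf = new Array(capacity);
  var head = Math.floor(capacity / 2); // index of the first element
  var tail = head;                     // index one past the last element
  return {
    push: function (v) { buf[tail++] = v; },    // like arr.push
    unshift: function (v) { buf[--head] = v; }, // O(1), no copying
    shift: function () { return buf[head++]; }, // O(1), no copying
    get: function (i) { return buf[head + i]; },
    size: function () { return tail - head; }
  };
}
var d = createDeque(16);
d.push(2);
d.unshift(1);
console.log(d.shift(), d.size()); // 1 1
// A real version would need to re-center or grow the buffer when head
// reaches 0 or tail reaches capacity, as the article hints.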
If you have lots of array manipulations to do and you want to end up with a sorted array, it might be faster to just remove items, add new items onto the end of the array and then call .sort() with a custom comparison function when you're done, rather than inserting every new item in sorted order. But, again, which way is faster will depend entirely upon exactly what you are doing, how often you're doing it and which browsers you care about the most. Measure, measure, measure if you really want to know.
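As a rough sketch of that append-then-sort alternative for the coordinate case in the edit (the values are illustrative):
var xs = [1, 4, 9, 16];  // already-sorted x coordinates
var newXs = [3, 10, 2];  // values to merge in
// Instead of splicing each value into place (each splice is O(n)),
// append everything and sort once at the end.
xs.push.apply(xs, newXs);
xs.sort(function (a, b) { return a - b; });
console.log(xs); // [1, 2, 3, 4, 9, 10, 16]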
As to whether your specific situation in your edit can be improved with a custom .splice(), you'd have to code it up both ways with a representative data set and then test in a tool like jsPerf in multiple browsers to answer the question. Since you haven't provided code or data to test, none of us can really answer that for you. There is no generic answer that works for all possible uses of .splice() on all possible data sets in all possible browsers. The devil is in the details, and the details are in all the specifics of your situation.
My guess is that if you're really performance tweaking, you will often find bigger fish to fry in making your overall algorithm smarter (so it has less work to do in the first place) than in trying to rewrite array methods. The goal is to test/measure enough to understand where your performance bottlenecks really are so you can concentrate on the one area that makes the most difference and not spend a lot of time guessing about what might make things faster. I'm always going to code a smart use of the existing array methods and only consider a custom-coded solution when I've proven to myself that I have a bottleneck in one particular operation that actually matters to the experience of my app. Premature optimization will just make the code more complicated and less maintainable and will generally be time poorly spent.
I was thinking about the same problem and ended up using a B+ tree data structure. It takes time and is not easy to implement, but it gives really good results. It can be considered a combination of the good aspects of both arrays and linked lists:
In terms of search performance, it is similar to binary search on an array, and possibly even better (not so sure, but at least it's close).
Modifying the set (insert, delete) does not affect the indexes of all the other elements (the affected range is a very small constant: the length of a block).
I would like to hear your thoughts; you can check this link for a visualization of a B+ tree in action.

Is it better practice to have 2 functions, or 1 complex function? [closed]

I'm curious as to what is considered better practice in the programming community.
On submit, a function is called, where some validation is done, and then a post is made through AJAX. I was considering having one function handle both forms. I would change all element IDs to be the same, differing only by an incrementing number (e.g. txt1, txt2 / txt1Cnt, txt2Cnt / txt1Lbl, txt2Lbl, etc...).
Then I'd add loops and conditional statements to properly handle each form. This sounds like the fun way, which is why I wanted to do it, but now I'm thinking that maybe it would not be considered very efficient, since I wouldn't need to differentiate all the different elements if I just had two functions, especially because the forms are not identical.
So, my question is, is there a rule of thumb when it comes to this sort of thing? Is it better to have fewer functions, or less complexity?
Thanks for your time.
There are several things you need to consider in these cases.
Code reuse - Breaking code into functions which each do one (or a few closely related) things will let you reuse them later.
Code readability - Code can be more readable when you divide it into logical functions, especially when someone else will be dealing with your code.
Performance - If this function is called many times, in most cases it is better to have one function.
A good rule of thumb is the Single Responsibility Principle, which says that a function should do only one thing.
More simple functions, less complexity.
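For the submit case in the question, a sketch of that split might look like this (the element IDs, the validation rule, and the endpoint are hypothetical):
// One function validates, another posts; the submit handler just wires them up.
function validateForm(form) {
  var name = form.querySelector('#txt1').value;
  return name.trim().length > 0; // keep all validation rules in one place
}
function postForm(form) {
  var data = new FormData(form);
  return fetch('/submit', { method: 'POST', body: data });
}
document.querySelector('#form1').addEventListener('submit', function (e) {
  e.preventDefault();
  if (validateForm(e.target)) {
    postForm(e.target);
  }
});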
My rule of thumb is: if there's a chance that I'll need this logic further on for another purpose, then it's better to separate it into its own method or function.
It forces you to spend some extra time abstracting some conditionals to fit the general case, but it saves you from repeating the logic N more times in other use cases.
Generally I like to make sure a single function or method performs a single significant task. This has two benefits:
Improve Code Readability - If you come back to your code in x months' time, when you can no longer remember writing it, it can be less time-consuming to follow and adapt a sequence of shorter functions than a small number of large functions that each perform multiple tasks.
Increase Code Reusability - This applies more to OOP classes, but is also relevant in procedural code. If you write a function that performs a reasonably specific task, you can re-use it in future projects, adapting a few parameters or paths rather than extracting parts of a long, multi-purpose function and rebuilding it for the new project's needs.

three.js why does it use for loops instead of while [duplicate]

Possible Duplicate:
What loop is faster, while or for
I see in three.js the common code pattern found in many languages:
for ( var i = 0, l = something.length; i < l; i++ ) {
  // do some stuff with i
}
but I read that in JavaScript performance can be better by using:
var i = something.length;
while (i--) {
  // do some stuff with i
}
Does this actually improve performance significantly? Is there a reason to prefer one over the other?
Does this actually improve any performance significantly?
No. Not reliably cross-browser (which is to say, across JavaScript implementations).
Moreover, note that in your example the while loop iterates backward (from n - 1 down to 0), while the for loop you quote iterates forward (from 0 up to n - 1). Sometimes that matters.
In general, micro-optimization is rarely appropriate, and this is particularly true in the case of JavaScript, where different implementations in the wild have markedly different performance characteristics. Instead, write code to be clear and maintainable, and address specific performance issues only if they arise, and if they arise address them on your target engines.

Optimizing JavaScript loop makes it slower

In the book JavaScript Patterns, Stoyan Stefanov claims that the common way of looping in JavaScript
for (i = 0, max = myarray.length; i < max; i++) {
// do something with myarray[i]
}
can be optimized by using this pattern instead
for (i = myarray.length; i--;) {
// do something with myarray[i]
}
I found that to be interesting, so I decided to test it in the real world by applying the technique to a performance-intensive loop shown in this blog post about doing pixel manipulation with canvas. The benchmarks comparing the regular code with the "optimized" code can be seen here.
The interesting thing is that the supposedly optimized loop is actually slower than the regular way of looping in both Opera and Firefox. Why is that?
This kind of micro-optimization always has very limited validity. Chances are that the VM implementations include optimizations for the "common ways" of doing things that go beyond what you can do on the language level.
Which is why micro-optimizations are usually a waste of time. Beginners tend to obsess over them and end up writing code that is hard to maintain AND slow.
Most of the ways to try to optimise a loop come from C, and from a time when compilers were simpler and processors executed one instruction after the other.
Modern processors run the code very differently, so optimising specific instructions doesn't have the same effect now.
For Javascript the changes are quite rapid now. It has gone from being interpreted to being compiled, which makes a huge performance difference. The compilers are very different between browsers, and they change with each new browser version, so something that is faster in one browser today, may be slower tomorrow.
I have tested some different ways of optimising loops, and currently there is very little difference in performance: http://jsperf.com/loopoptimisations
One thing that can be said for certain though, is that the regular way of writing loops is the most common, so that is what all compilers will be focusing on optimising.
To begin with, I see no reason why the second should be much faster than the first. The difference between comparing with zero and comparing with another number is something that might make a difference in extremely tight loops in compiled code, but even there it's likely a cargo-cult optimisation most of the time (read Richard Feynman's "Cargo Cult Science" if you don't get the reference; if nothing else it's a good read, but there are also more than a few times in programming where people show a similar tendency to copy something that worked well once into a case where there's no real reason to suppose it will help).
I could see the following being slower:
for (i = 0; i < myarray.length; i++) {
// do something with myarray[i]
}
But I could also see it not being slower, if the engine did the optimisation of hoisting the length check for you, or the implementation was such that checking the length and checking a variable was about equivalent cost anyway.
I could also see either that or the first code example you give, or perhaps both, being something that a given script-engine optimises - it is after all a very common idiom in js, and inherently involves looping, so it would be a sensible thing to try to detect and optimise for in a script engine.
Beyond such conjectures though, we can't really say anything about this beyond "because one works better in that engine than the other, that's why" without getting to the level below javascript and examining the implementation of the engine. Your very results would suggest that the answer won't be the same with each engine (after all, one did correspond more with what you expected).
Now, it's worth noting that in each case the results are quite close to each other anyway. If you found just one or two browsers that are currently reasonably popular where the change did indeed optimise, it could still be worth it.
If you're interested in whether it was ever worth it, or was just an assumption to begin with, you could try to get a copy of Netscape 2 (the first JavaScript browser ever, after all) and run some code to test the approach on it.
Edit: If you do try that sort of experiment, another one is to try deliberately buggy loops that overshoot the array bounds by one. One possible optimisation for the engine is to realise you're walking the array and to check once, up front, whether the end point is out of range. If so, you could get different results when the loop will eventually error.
