When/why to use map/reduce over for loops - javascript

So I am getting into a bit of object manipulation in JavaScript for the first time and I have a question I'm wondering if anyone could answer.
When I have an object I want to manipulate, I could do something along the lines of a few nested for loops; however, there are functions built into JavaScript, like map/reduce/filter, and libraries like lodash/underscore.
I assume the latter (map/reduce/filter and the libraries) are better practice but I'm just curious as to why.
I am doing some pretty basic object manipulation that could be solved with a few well placed for loops to grab and change the right keys/values in the object, but can be easily done with the functions/libraries in JS. Just curious as to how they are better - like better performance/cleaner code/ease of use/whatever else.
Apologies, there is no code. I would very much appreciate anyone helping me understand more here.
Edit - so taking from the examples for map()
I could take the example for javascript.map
var kvArray = [{key:1, value:10}, {key:2, value:20}, {key:3, value: 30}];
var reformattedArray = kvArray.map(function(obj){
  var rObj = {};
  rObj[obj.key] = obj.value;
  return rObj;
});
I could do something like
var kvArray = [{key:1, value:10}, {key:2, value:20}, {key:3, value: 30}];
var reformattedArray = [];
for (var i = 0; i < kvArray.length; i++) {
  // combine both values into an object inside of reformattedArray
  var rObj = {};
  rObj[kvArray[i].key] = kvArray[i].value;
  reformattedArray.push(rObj);
}
The map() version is a lot less code - but are there any other benefits worth knowing about?

I know I'm replying to an old answer but just wanted to point this out for future readers.
Map, reduce and filter functions come from the functional programming world.
These are first-class built-in operators in languages like Lisp, Haskell, and others (ML).
Functional languages tend to prefer running operators over immutable data rather than having the code walk over the data to operate on it (say, loops).
So they provide simpler but powerful interfaces like map, filter and reduce, compared to providing for and while loops.
It also helps them satisfy other requirements like immutability. That's why map gives you back a new array instead of mutating the old one. These are very good from a concurrency point of view, though they may be slower in certain contexts.
This approach usually leads to fewer errors in code in multi-threaded or high concurrency apps.
When multiple actors act on the same piece of data, immutability helps keep code from stepping on each other's toes.
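As a minimal sketch of the difference (the arrays and names here are purely illustrative): the loop mutates the shared array in place, while map leaves the original untouched and hands back a fresh one.
const prices = [10, 20, 30];

// Imperative: mutates the shared array in place
for (let i = 0; i < prices.length; i++) {
  prices[i] = prices[i] * 1.2;
}

// Functional: the original is left alone; anyone else holding a
// reference to basePrices is unaffected
const basePrices = [10, 20, 30];
const withTax = basePrices.map(p => p * 1.2);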
Since javascript tries to be partially functional by providing some functionalities of functional programming languages, it might have made sense to implement map, filter and reduce functions in it too.
YMMV depending on what you are doing with the tools you are given.
If your code works better with a for loop, go for it.
But if you ever find asynchronous code munching on shared data and end up splitting hairs trying to debug a loop, say hi to map, reduce and filter.

.map() allows you to create a new array by iterating over the original array and allowing you to run some sort of custom conversion function. The output from .map() is a new array.
var orig = [1,2,3,4,5];
var squares = orig.map(function(val) {
  return val * val;
});
console.log(squares); // [1,4,9,16,25]
.reduce() allows you to iterate over an array accumulating a single result or object.
var orig = [1,2,3,4,5];
var sum = orig.reduce(function(cum, val) {
  return cum + val;
}, 0);
console.log(sum); // 15
These are specialized iterators. You can use them when this type of output is exactly what you want. They are less flexible than a for loop (for example, you can't stop the iteration in the middle like you can with a for loop), but they are less typing for specific types of operations and for people that know them, they are likely a little easier to see the code's intent.
I have not myself tested the performance of .map() and .reduce() versus a for loop, but have seen tests for .forEach() which showed that .forEach() was actually slower in some browsers. This is perhaps because each iteration of the loop with .forEach() has to call your callback function, whereas in a plain for loop, you do not have to make such a function call (the code can be directly embedded there). In any case, it is rare that this type of performance difference is actually meaningful and you should generally use whichever construct makes clearer, easier to maintain code.
If you really wanted to optimize performance, you would have to write your own test case in a tool like jsperf and then run it in multiple browsers to see which way of doing things was best for your particular situation.
Another advantage of a plain for loop is that it can be used with array-like objects that support indexing, but do not support .reduce() and .map().
And, a for/of loop can be used with any object that implements the iterator protocol such as HTMLCollection.
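A small sketch of the array-like point (the object here is purely illustrative): a plain for loop works on anything with indexes and a length, even though .map() is unavailable on it.
// An array-like object: indexed access and a length, but no .map()
const arrayLike = { 0: 'a', 1: 'b', 2: 'c', length: 3 };

// A plain for loop works directly
for (let i = 0; i < arrayLike.length; i++) {
  console.log(arrayLike[i]);
}

// arrayLike.map(...) would throw; Array.from() bridges the gap
const upper = Array.from(arrayLike).map(s => s.toUpperCase());
console.log(upper); // ['A', 'B', 'C']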

This is like asking if I like basketball or football better. Both have their positives.
If you have 10 developers look at your for loop, 9 out of 10 will know what you are doing right away. Maybe half will have to look up what the map() method is, but then they'll also know what's going on. So in this respect, a for loop is easier for others to read.
On the flip side, map() will save you two or three lines of code.
As far as performance goes, you'll find map() is built internally with something akin to a for loop. You might see a few milliseconds of difference if you run them over large numbers of iterations, but it will never be noticeable to an end user.

forEach(): Executes a provided function (callback) once for each array element. Doesn't return anything (undefined), but the callback is allowed to mutate the calling array.
map(): Executes a provided function (callback) once for each array element and creates a new array from the results. The map() call itself does not mutate the calling array's content (though the callback you pass could).
Conclusion
Use map() when you need to return a new array.
Use forEach() when you want to change the original array.
Use for when you need more control over the iteration (e.g. you want to iterate every third element: i += 3).
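A minimal sketch putting those three recommendations side by side:
const nums = [1, 2, 3, 4, 5, 6];

// map(): returns a new array, leaving nums alone
const doubled = nums.map(n => n * 2);

// forEach(): no return value; the callback mutates nums in place
nums.forEach((n, i, arr) => { arr[i] = n * 2; });

// for: full control over the iteration, e.g. every third element
for (let i = 0; i < nums.length; i += 3) {
  console.log(nums[i]);
}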

Bumped into this while searching for something else, so trying to answer it even though it is an old thread, as the concepts apply no matter what.
If you consider performance and flexibility, the "for" loop generally beats the others, just because it doesn't have the overhead of calling a function for each iteration and can be used for any purpose.
But there are other gains with functions like forEach, map, reduce etc. (let's call them functional methods): mainly readability and maintainability.
Below are a few drawbacks of the for loop:
Introduces new variables to the scope, just for the counter/iteration
Hard-to-debug errors due to unintentional changes to counter variables; this gets more difficult, and mistakes get likelier, the more deeply loops nest within loops
Developers have the habit of naming loop variables i, j, k. It is very easy to lose track of which counter and which inner loop the code is executing once a loop grows beyond a certain number of lines
With ES6 we at least have the limited/local scope introduced by 'let'. But before that, variables introduced by a for loop with var had function scope, causing even more accidental errors (see the sketch below)
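A quick sketch of that last pitfall: with var, every callback closes over the same function-scoped counter, while let gives each iteration its own binding.
// var: one function-scoped i shared by all three callbacks
for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log(i)); // logs 3, 3, 3
}

// let: a fresh block-scoped j per iteration
for (let j = 0; j < 3; j++) {
  setTimeout(() => console.log(j)); // logs 0, 1, 2
}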
To avoid all of this, it is suggested to use functions like forEach, map and reduce when you know what you have to do (not forgetting that most of these functional methods offer immutability). A small sacrifice in performance for the greater good and more succinct code.
With ES6, most of the functional methods are supported by the language itself. They are optimised, and we don't have to rely on libraries like lodash (unless there is a significant performance gain).

Just bumped into this and found that none of the answers highlights one important difference between for-loop and map as to when to use one over the other:
With map you can't break out of an iteration which you can with for-loop.
For example, you can't do this:
const arr = [5, 6, 9, 4];
arr.map(elem => {
  if (elem === 5) {
    break; // This is not allowed
  }
});
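If you do need an early exit, a couple of common workarounds (a sketch, not an exhaustive list): for...of supports break directly, and .some() stops as soon as its callback returns true.
const arr = [5, 6, 9, 4];

// for...of: break works as usual
for (const elem of arr) {
  if (elem === 5) break;
  console.log(elem);
}

// .some(): returning true stops the iteration, acting like break
arr.some(elem => {
  if (elem === 5) return true;
  console.log(elem);
  return false;
});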

Summarising the differences between higher-order array methods (map, reduce, filter, etc. - I'll refer to these as HOMs) vs for loops, and including a few other points:
Counter variables: for loops introduce counter variables that invite errors such as off-by-one errors (OBOE) and block-scoping mistakes (complicated further by the differences between let and var declaration scoping).
Availability: HOMs are only available on objects that are arrays (Array.isArray(obj)); for ... of loops can be used on any object that implements the iterator protocol (which includes arrays).
Early execution exit: there is none for HOMs; loops have the break and return statements for this.
Consecutive async iteration execution: not possible with HOMs. Notice that only the loop can delay between console logging executions:
// Delays for a number of milliseconds
const delay = (ms = 1000) => new Promise(resolve => setTimeout(resolve, ms));

const items = ['a', 'b', 'c'];

const printItem = async (item) => {
  await delay();
  console.log(item);
};

const testForLoopParallelism = async () => {
  for (const item of items) {
    await printItem(item);
  }
};

const testHigherOrderParallelism = () => {
  return Promise.all(items.map(async item => await printItem(item)));
};

const run = async () => {
  // Prints consecutively at a rate of ~1s, for a total of ~3s
  console.time('for');
  await testForLoopParallelism();
  console.timeEnd('for');

  // Prints all concurrently, for a total of ~1s
  console.time('HOM');
  await testHigherOrderParallelism();
  console.timeEnd('HOM');
};

run();
Less importantly but worth noting:
Verbosity: array HOMs may be shorter than for loops
Subjectively easier to read: this can be argued both for HOMs and for for ... of loops
Subjectively better understood: loops may be more well known than HOMs for Javascript newcomers
Performance: May have performance differences that may need to be considered in high performance codebases on a case-by-case basis
Immutability aid: May aid in providing immutability - although it is possible to create mutations using either HOMs or for loops, e.g.,
items.map((item) => {
  items.push(item);
  return `${item}x`;
});

map, reduce, etc are functions that container-like data-structures should implement so that consumers can make use of them without having to understand their internals. These functions accept your logic as input. This allows you to implement changes to the internals of those data-structures without affecting their consumers.
The real reason why map is superior to for loops is that it is much easier to develop with as your application evolves. What if your requirements change such that you now have an object?
import map from 'lodash/fp/map';
import mapValues from 'lodash/fp/mapValues';
const before = map(logic, data);
const after = mapValues(logic, data);
And again, what if your requirements change such that now you have a tree? Well, now you're a good developer that realises that the code that is responsible for traversing the tree should be separated from the business logic. It should be written once, thoroughly tested once, be maintained by the team that owns the data-structure, and accept business logic as input. All so that consumers do not have to worry about its internals.
Instead, consumers should just do this.
import { mapTreeValues } from 'tree'
const before = map(logic, data);
const after = mapTreeValues(logic, data);
Long story short, for should only ever be used by owners of data structures to implement functions such as map, reduce, etc. These functions should never be coupled to business logic, instead they should accept it as input. Remember the single responsibility principle?
Aside: To preempt comparison with custom iterables, there are benefits to these being higher-order functions to improve ease of composition. For example:
getUser().then(updateUser)
getUsers().then(map(updateUser))
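To illustrate the composition point: lodash/fp functions are auto-curried, so map(updateUser) is itself a function that can be dropped straight into a promise chain. A rough sketch, where getUsers and updateUser are hypothetical stand-ins:
import map from 'lodash/fp/map';

// Hypothetical stand-ins for the examples above
const updateUser = user => ({ ...user, active: true });
const getUsers = () => Promise.resolve([{ name: 'a' }, { name: 'b' }]);

// map(updateUser) returns a function awaiting the array,
// so it slots directly into .then()
getUsers().then(map(updateUser));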

Related

Functional or Imperative code for dealing with considerably large array in javascript?

When working with arrays in javascript, what would you choose: the functional way or the imperative way, given that imperative is faster than functional? I am so confused.
Here is the jsPerf test I ran with a plain for loop versus a pair of map and filter.
My two cents:
In general, I prefer working functional way with arrays, since I find functional methods more flexible and easier to read and maintain.
Especially where performance is not critical or there aren't noticeable differences.
Let's say the functional way takes 50x the time of a regular loop. If a regular loop takes 1ms, the functional version takes 50ms, and in most cases that is still okay.
So I wouldn't sacrifice my code for optimization in that case, especially in applications and/or shared repos.
However, when I code videogames, I usually use regular loops. Partly for performance reasons, but also because in that context you usually deal with arrays of bytes, where I find functional programming less flexible.
That said: in JS the problem with array methods is that they aren't lazy. It means, in your case, you're iterating over the array twice because you call two methods (filter and map). In other languages (e.g. Rust) such methods are "lazy": they're not invoked until you actually do something with the iterator, which reduces the performance penalty compared to a regular loop.
There are libraries in JS that support lazy methods (e.g. RxJS on observables), so you might want to check those out if you're looking for something in the middle (saving a bit of perf while still using a functional approach).
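A sketch of the double-iteration point: the filter/map pair walks the data twice and allocates an intermediate array, while a single reduce (or a loop) does the same work in one pass.
const nums = [1, 2, 3, 4, 5, 6];

// Two passes: filter builds an intermediate array, map walks it again
const twoPass = nums.filter(n => n % 2 === 0).map(n => n * n);

// One pass: reduce fuses the two steps, no intermediate array
const onePass = nums.reduce((acc, n) => {
  if (n % 2 === 0) acc.push(n * n);
  return acc;
}, []);

console.log(twoPass); // [4, 16, 36]
console.log(onePass); // [4, 16, 36]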
The difference between Array.map and a for-loop is that the for-loop does nothing more than iterating over the values of the array. Within the body of the loop, you can do whatever with these values. Array.map does more than that. It iterates over the array, creating a new array with the value of the callback invoked on every value.
In my opinion you should prefer a for-loop over Array.map wherever speed matters, since it is quicker. Use Array.map when you want to create a new array with transformed values from the original array.
Basically these two do the same thing:
For-loop:
const array = [1, 2, 3];
const mutatedArray = [];
for (let i = 0; i < array.length; i++) {
  let mutatedValue = array[i] * 2;
  mutatedArray.push(mutatedValue);
}
Array.map:
const array = [1, 2, 3];
const mutatedArray = array.map(x => x * 2);
It is a lot cleaner and quicker to write.

Why does Javascript `iterator.next()` return an object?

Help! I'm learning to love Javascript after programming in C# for quite a while but I'm stuck learning to love the iterable protocol!
Why did Javascript adopt a protocol that requires creating a new object for each iteration? Why have next() return a new object with properties done and value instead of adopting a protocol like C# IEnumerable and IEnumerator which allocates no object at the expense of requiring two calls (one to moveNext to see if the iteration is done, and a second to current to get the value)?
Are there under-the-hood optimizations that skip the allocation of the object return by next()? Hard to imagine given the iterable doesn't know how the object could be used once returned...
Generators don't seem to reuse the next object as illustrated below:
function* generator() {
  yield 0;
  yield 1;
}
var iterator = generator();
var result0 = iterator.next();
var result1 = iterator.next();
console.log(result0.value) // 0
console.log(result1.value) // 1
Hm, here's a clue (thanks to Bergi!):
We will answer one important question later (in Sect. 3.2): Why can iterators (optionally) return a value after the last element? That capability is the reason for elements being wrapped. Otherwise, iterators could simply return a publicly defined sentinel (stop value) after the last element.
And in Sect. 3.2 they discuss using generators as lightweight threads. It seems to say the reason for returning an object from next is so that a value can be returned even when done is true! Whoa. Furthermore, generators can return values in addition to yield- and yield*-ing values, and a value produced by return ends up in value when done is true!
And all this allows for pseudo-threading. And that feature, pseudo-threading, is worth allocating a new object for each time around the loop... Javascript. Always so unexpected!
Although, now that I think about it, allowing yield* to "return" a value to enable pseudo-threading still doesn't justify returning an object. The IEnumerator protocol could be extended to return a value after moveNext() returns false: just add a property hasCurrent, to be tested after the iteration is complete, that when true indicates current holds a valid value...
And the compiler optimizations are non-trivial. This will result in quite wild variance in the performance of an iterator... doesn't that cause problems for library implementors?
All these points are raised in this thread discovered by the friendly SO community. Yet, those arguments didn't seem to hold the day.
However, regardless of returning an object or not, no one is going to be checking for a value after iteration is "complete", right? E.g. most everyone would think the following would log all values returned by an iterator:
function logIteratorValues(iterator) {
  var next;
  while (next = iterator.next(), !next.done)
    console.log(next.value);
}
Except it doesn't, because even on the call where done becomes true the iterator might still have returned a value. Consider:
function* generator() {
  yield 0;
  return 1;
}
var iterator = generator();
var result0 = iterator.next();
var result1 = iterator.next();
console.log(`${result0.value}, ${result0.done}`) // 0, false
console.log(`${result1.value}, ${result1.done}`) // 1, true
Is an iterator that returns a value after it is "done" really an iterator? What is the sound of one hand clapping? It just seems quite odd...
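For what it's worth, for ... of follows the same convention as the loop above and silently discards the final value:
function* generator() {
  yield 0;
  return 1;
}

// for...of stops as soon as done is true and ignores that result's value
for (const v of generator()) {
  console.log(v); // 0 (the returned 1 never appears)
}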
And here is an in-depth post on generators that I enjoyed. Much time is spent controlling the flow of an application as opposed to iterating over the members of a collection.
Another possible explanation is that IEnumerable/IEnumerator requires two interfaces and three methods and the JS community preferred the simplicity of a single method. That way they wouldn't have to introduce the notion of groups of symbolic methods aka interfaces...
Are there under-the-hood optimizations that skip the allocation of the object return by next()?
Yes. Those iterator result objects are small and usually short-lived. Particularly in for … of loops, the compiler can do a trivial escape analysis to see that the object never escapes to user code at all (but only to the internal loop evaluation code). They can be dealt with very efficiently by the garbage collector, or even be allocated directly on the stack.
Here are some sources:
JS inherits its functionally-minded iteration protocol from Python, but with result objects instead of the previously favoured StopIteration exceptions
Performance concerns in the spec discussion (cont'd) were shrugged off. If you implement a custom iterator and it is too slow, try using a generator function
(At least for builtin iterators) these optimisations are already implemented:
The key to great performance for iteration is to make sure that the repeated calls to iterator.next() in the loop are optimized well, and ideally completely avoid the allocation of the iterResult using advanced compiler techniques like store-load propagation, escape analysis and scalar replacement of aggregates. To really shine performance-wise, the optimizing compiler should also completely eliminate the allocation of the iterator itself - the iterable[Symbol.iterator]() call - and operate on the backing-store of the iterable directly.
Bergi answered already, and I've upvoted, I just want to add this:
Why should you even be concerned about new object being returned? It looks like:
{done: boolean, value: any}
You know you are going to use the value anyway, so it's really not an extra memory overhead. What's left? done: boolean and the object itself take up to 8 bytes each, the smallest addressable memory units possible, and can be allocated and processed by the CPU in a few nanoseconds (given the likely-existing V8 optimizations). Now if you still care about wasting that amount of time and memory, then you really should consider switching from JS to something like Rust+WebAssembly.

Why can Array.prototype.forEach not be chained?

I learned today that forEach() returns undefined. What a waste!
If it returned the original array, it would be far more flexible without breaking any existing code. Is there any reason forEach returns undefined?
Is there any way to chain forEach with other methods like map & filter?
For example:
var obj = someThing.keys()
  .filter(someFilter)
  .forEach(passToAnotherObject)
  .map(transformKeys)
  .reduce(reduction)
Wouldn't work, because forEach doesn't want to play nice: it requires you to run all the methods before the forEach again to get the object into the state needed for the forEach.
What you want is known as method cascading via method chaining. Describing them in brief:
Method chaining is when a method returns an object that has another method that you immediately invoke. For example, using jQuery:
$("#person")
.slideDown("slow")
.addClass("grouped")
.css("margin-left", "11px");
Method cascading is when multiple methods are called on the same object. For example, in some languages you can do:
foo
  ..bar()
  ..baz();
Which is equivalent to the following in JavaScript:
foo.bar();
foo.baz();
JavaScript doesn't have any special syntax for method cascading. However, you can simulate method cascading using method chaining if the first method call returns this. For example, in the following code if bar returns this (i.e. foo) then chaining is equivalent to cascading:
foo
  .bar()
  .baz();
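A concrete sketch of that pattern (an illustrative object, not from the question):
const foo = {
  bar() { console.log('bar'); return this; }, // returning this enables chaining
  baz() { console.log('baz'); return this; }
};

// Reads like a cascade, but it's ordinary chaining on this
foo.bar().baz();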
Some methods like filter and map are chainable but not cascadable because they return a new array, but not the original array.
On the other hand the forEach function is not chainable because it doesn't return a new object. Now, the question arises whether forEach should be cascadable or not.
Currently, forEach is not cascadable. However, that's not really a problem as you can simply save the result of the intermediate array in a variable and use that later:
var arr = someThing.keys()
  .filter(someFilter);

arr.forEach(passToAnotherObject);

var obj = arr
  .map(transformKeys)
  .reduce(reduction);
Yes, this solution looks uglier than your desired solution. However, I like it more than your code for several reasons:
It is consistent because chainable methods are not mixed with cascadable methods. Hence, it promotes a functional style of programming (i.e. programming with no side effects).
Cascading is inherently an effectful operation because you are calling a method and ignoring the result. Hence, you're calling the operation for its side effects and not for its result.
On the other hand, chainable functions like map and filter don't have any side effects (if their input function doesn't have any side effects). They are used solely for their results.
In my humble opinion, mixing chainable methods like map and filter with cascadable functions like forEach (if it was cascadable) is sacrilege because it would introduce side effects in an otherwise pure transformation.
It is explicit. As The Zen of Python teaches us, “Explicit is better than implicit.” Method cascading is just syntactic sugar. It is implicit and it comes at a cost. The cost is complexity.
Now, you might argue that my code looks more complex than yours. If so, you would be judging the book by its cover. In their famous paper Out of the Tar Pit, the authors Ben Moseley and Peter Marks describe different types of software complexities.
The second biggest software complexity on their list is complexity caused by explicit concern with control flow. For example:
var obj = someThing.keys()
  .filter(someFilter)
  .forEach(passToAnotherObject)
  .map(transformKeys)
  .reduce(reduction);
The above program is explicitly concerned with control flow because you are explicitly stating that .forEach(passToAnotherObject) should happen before .map(transformKeys), even though it shouldn't have any effect on the overall transformation.
In fact, you can remove it from the equation altogether and it wouldn't make any difference:
var obj = someThing.keys()
  .filter(someFilter)
  .map(transformKeys)
  .reduce(reduction);
This suggests that the .forEach(passToAnotherObject) didn't have any business being in the equation in the first place. Since it's a side effectful operation, it should be kept separate from pure code.
When you write it explicitly as I did above, not only are you separating pure code from side effectful code but also you can choose when to evaluate each computation. For example:
var arr = someThing.keys()
  .filter(someFilter);

var obj = arr
  .map(transformKeys)
  .reduce(reduction);

arr.forEach(passToAnotherObject); // evaluate after pure computation
Yes, you are still explicitly concerned with control flow. However, at least now you know that .forEach(passToAnotherObject) has nothing to do with the other transformations.
Thus, you have eliminated some (but not all) of the complexity caused by explicit concern with control flow.
For these reasons, I believe that the current implementation of forEach is actually beneficial because it prevents you from writing code that introduces complexity due to explicit concern with control flow.
I know from personal experience from when I used to work at BrowserStack that explicit concern with control flow is a big problem in large-scale software applications. It is indeed a real world problem.
It's easy to write complex code because complex code is usually shorter (implicit) code. So it's always tempting to drop in a side effectful function like forEach in the middle of a pure computation because it requires less code refactoring.
However, in the long run it makes your program more complex. Think of what would happen a few years down the line when you quit the company that you work for and somebody else has to maintain your code. Your code now looks like:
var obj = someThing.keys()
  .filter(someFilter)
  .forEach(passToAnotherObject)
  .forEach(doSomething)
  .map(transformKeys)
  .forEach(doSomethingElse)
  .reduce(reduction);
The person reading your code now has to assume that all the additional forEach methods in your chain are essential, put in extra work to understand what each function does, figure out by herself that these extra forEach methods are not essential to compute obj, eliminate them from her mental model of your code and only concentrate on the essential parts.
That's a lot of unnecessary complexity added to your program, and you thought that it was making your program more simple.
It's easy to implement a chainable forEach function:
Array.prototype.forEachChain = function () {
  this.forEach(...arguments);
  return this;
};

const arr = [1, 2, 3, 4];
const dbl = (v, i, a) => {
  a[i] = 2 * v;
};

arr.forEachChain(dbl).forEachChain(dbl);
console.log(arr); // [4, 8, 12, 16]

Why do Javascript Libraries have local references to array methods (push, slice, etc..)?

I have been reading the code of a few javascript libraries. I've noticed that AngularJS and Backbone.js both keep a local reference to array functions. e.g.:
var push = [].push // or array.push
What is the point of doing this when arrays are a construct of the language and should be accessible globally?
Because the Array prototype's functions can be applied to non-arrays.
For example, push'ing items into an array-like object:
var o = { length: 0 };
[].push.call(o, 'hi');
o; //Object {0: "hi", length: 1}
Another common practice is slice'ing the arguments object into a native array:
(function() {
  return [].slice.call(arguments); // [1, 2]
}(1, 2));
As you can see, saving references to these functions reduces the lookup overhead and makes the code smaller and minification-friendly.
In my opinion this is mostly for convenience and readability, as repeatedly writing [].arrayMethod looks rather clunky. The performance and minification boosts are an extra.
Looking through Angular's source, here are the cases I've found:
push is used in the JQLite prototype. Note that jQuery objects have array-like structures, similar to the first example in this answer;
slice is used inside the sliceArgs, concat and bind functions.
Backbone also slices argument objects (at Events#trigger and Underscore methods proxying), it also uses slice at Collection#slice.
I'm thinking library developers are especially keen on making their library resilient to the random things that might be happening on a page. Getting a reference to the correct Array.prototype.push early on, and using closures to keep it out of reach of other code that you as a library writer are not aware of, reduces the chances of something unexpected happening (and makes it much easier to troubleshoot when it does) if other code on the page decides to hijack this built-in method, which JavaScript is very permissive of.
Consider:
function Library() {
  var push = [].push; // reference saved before anyone can tamper with it
  var data = [];
  this.save1 = function (x) { push.call(data, x); };
  this.save2 = function (x) { data.push(x); };
  this.get = function () { console.log(data); };
}
var o = new Library();

// Random on-page code
Array.prototype.push = function (x) { console.info("doSomethingCrazy!"); };

// Let's use the library functionality!
o.save1(1); // uses the saved reference: still works
o.save2(2); // uses the hijacked method: logs "doSomethingCrazy!"
One reason, noted by Douglas Crockford in his lecture The Metamorphosis of Ajax, is that developers of JavaScript libraries can conditionally add utility methods, for example something like string.split, such that it will only be added to an object's prototype if it is not already defined by the standard libraries provided by the browser.
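The pattern looks roughly like this, sketched here with String.prototype.trim, which older browsers lacked:
// Only add the method if the host environment doesn't already provide it
if (typeof String.prototype.trim !== 'function') {
  String.prototype.trim = function () {
    return this.replace(/^\s+|\s+$/g, '');
  };
}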

JavaScript's Statement Performance Questions

Can you guys help me determine the performance difference of each of these
statements? Which one would you use?
Making a new Array using
- var new_list = new Array(); //or
- var new_list = [];
Appending element using
- new_list.push('a')
- new_list[i] = 'a'; (if I know the length)
Ternary operator or if() {} else (){}
Trying to make an isodd function, which is faster:
(! (is_even)) or (x%2!=0)
forEach() or normal iteration
one more
a= b = 3; or b=3; a=b;
[edit: I'm making a Math Library. So any performance-hack discussions are also welcome :) ]
Thanks for your help.
I've always assumed that since (x&1) is a bitwise operation, it would be the fastest way to check for even/odd numbers, rather than checking for the remainder of the number.
Performance characteristics across browsers (especially at the level of individual library functions) can vary dramatically, so it's difficult to give really meaningful answers to these questions.
Anyhoo, just looking at the fast js engines (so Nitro, TraceMonkey, and V8)
[ ] will be faster than new Array -- new Array turns into the following logic
cons = lookup property "Array", if it can't be found, throw an exception
Check to see if cons can be used as a constructor, if not: throw an exception
thisVal = runtime creates a new object directly
res = result of calling cons passing thisVal as the value for this -- which requires logic to distinguish JS functions from standard runtime functions (assuming standard runtime functions aren't implemented in JS, which is the normal case). In this case Array is a native constructor which will create and return a new runtime array object.
if res is undefined or null then the final result is thisVal otherwise the final result is res. In the case of calling Array a new array object will be returned and thisVal will be thrown away
[ ] just tells the JS engine to directly create a new runtime array object immediately, with no additional logic. This means new Array has a large amount of additional (not very cheap) logic, and performs an extra unnecessary object allocation.
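A related reason to prefer the literal, worth noting here: the Array constructor is also overloaded on argument count, which is a classic gotcha.
new Array(3);    // a sparse array of length 3 (three empty slots)
new Array(3, 4); // [3, 4]
[3];             // always a one-element array containing 3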
newlist[newlist.length] = ... is faster (esp. if newlist is not a sparse array), but push is sufficiently common for me to expect engine developers to put quite a bit of effort into improving performance so this could change in time.
If you have a tight enough loop there may be a very slight win for the ternary operator, but arguably that's an engine flaw in the trivial case of a = b ? c : d vs if (b) a = c; else a = d
Just the function call overhead alone will dwarf the cost of more or less any JS operator, at least in the sane cases (eg. you're performing arithmetic on numbers rather than objects)
The foreach syntax isn't yet standardised but its final performance will depend on a large number of details. Often JS semantics result in efficient-looking statements being less efficient -- e.g. for (var i in array) ... is vastly slower than for (var i = 0; i < array.length; i++) ..., as the JS semantics require the in enumeration to build up a list of all properties on the object (including the prototype chain) and then check that each property is still on the object before sending it through the loop. Oh, and the properties need to be converted from integers (in the array case, anyway) into strings, which costs time and memory.
I'd suggest you code a simple script like:
for (var i = 0; i < 1000; i++) {
  // Test your code here.
}
You can benchmark whatever you want that way, possibly adding timing functions before and after the for statement to be more accurate.
Of course you'll need to tweak the upper limit (1000 in this example) depending on the nature of your operations - some will require more iterations, others less.
Both are native constructors, probably no difference.
push is faster; it maps directly to native, whereas [] is evaluative
Probably not much of a difference, but technically, they don't do the same thing, so it's not apples to apples
x%2, skips the function call which is relatively slow
I've heard, though can't find the link at the moment, that iteration is faster than the foreach, which was surprising to me.
Edit: On #5, I believe the reason is related to this, in that foreach is ordered forward, which requires the incrementor to count forward, whereas for loops are ever so infinitesimally faster when they are run backward:
for (var i = a.length; i > -1; i--) {
  // do whatever
}
the above is slightly faster than:
for (var i = 0; i < a.length; i++) {
  // do whatever
}
As other posters suggest, I think doing some rough benchmarking is your best bet... however, I'd also note that you'll probably get very different results from different browsers, since I'm sure most of the questions you're asking come down to specific internal implementation of the language constructs rather than the language itself.
This page says push is slower.
http://dev.opera.com/articles/view/efficient-javascript/?page=2
