Which of these cross-browser Javascript functions performs better? - javascript

As a rule of thumb, which of these methods of writing cross-browser Javascript functions will perform better?
Method 1
function MyFunction()
{
    if (document.browserSpecificProperty)
        doSomethingWith(document.browserSpecificProperty);
    else
        doSomethingWith(document.someOtherProperty);
}
Method 2
var MyFunction;
if (document.browserSpecificProperty) {
    MyFunction = function() {
        doSomethingWith(document.browserSpecificProperty);
    };
} else {
    MyFunction = function() {
        doSomethingWith(document.someOtherProperty);
    };
}
Edit: Upvote for all the fine answers so far. I've fixed the function to a more correct syntax.
Couple of points about the answers so far - whilst in the majority of cases it is a fairly pointless performance enhancement, there are a few reasons one might still want to spend some time analyzing the code:
- It has to run on slow computers, mobile devices, old browsers etc.
- Curiosity
- The same general principle can be used to performance-enhance other scenarios where the evaluation of the IF statement does take some time.

Unless you're doing this a trillion times, it doesn't matter. Go with the one that is more readable and maintainable to you and/or your organization. The productivity gains you will get from writing clean, simple code matters way more than shaving a tenth of a microsecond off your JS execution time.
You should only even start thinking about what performs better when and only when you've written code and it is unacceptably slow. Then you should start tracking down the bottleneck, which will never be something like this. You will never get a measurable performance gain out of switching from one to the other here.

Unfortunately the code above is not actually cross-browser friendly, as it relies on a Mozilla quirk not present in other browsers -- namely that function statements inside branches are treated as function expressions. In browsers that aren't built on Mozilla's engine, the above code will always use the second function definition. I made a simple testcase to demonstrate this here.
Basically the ECMAScript spec says that function statements are treated similarly to var declarations, i.e. they all get hoisted to the top of the current execution scope (e.g. the start of a <script> tag, the start of a function, or the start of an eval block).
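A minimal sketch of that hoisting behaviour (the alert strings are purely illustrative, and modern engines with ES6 block-level function semantics may behave differently):
if (true) {
    function whichOne() { alert('first'); }
} else {
    function whichOne() { alert('second'); }
}
// Engines that hoist both declarations like var let the last one win,
// so this alerts 'second'; Mozilla of that era evaluated the statements
// as expressions inside the branch taken, alerting 'first'.
whichOne();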

To clarify olliej's answer, your second method is technically a syntax error. You could rewrite it this way:
var MyFunction;
if (document.browserSpecificProperty) {
    MyFunction = function() {
        doSomethingWith(document.browserSpecificProperty);
    };
} else {
    MyFunction = function() {
        doSomethingWith(document.someOtherProperty);
    };
}
Which is at least correct syntax, but note that MyFunction would only be available in the scope in which that occurs. (Omit var MyFunction; and instead use window.MyFunction = function() ... if you need it to be global.)
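For example, the global variant that the parenthetical hints at might look like this (a sketch only; the property names are the placeholders from the question):
if (document.browserSpecificProperty) {
    window.MyFunction = function() {
        doSomethingWith(document.browserSpecificProperty);
    };
} else {
    window.MyFunction = function() {
        doSomethingWith(document.someOtherProperty);
    };
}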

Technically, I would say that the second one would perform better, because the if statement is only executed once, rather than every time the function is run.
The difference, however, would be negligible to the point of being meaningless. The performance penalty of a single if statement such as this would be insignificant even compared to the performance penalty of simply calling a function. It would make only a smallish difference even if it is called a million times.
The first one is easier to understand, because it doesn't have the awkwardness of defining the same function twice based on a condition, with both versions behaving differently. That seems to be a recipe for confusion later on.
I wouldn't be the first person to say that unless you are really insane about this optimization thing, you'll get more of a win out of code readability.
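If curiosity gets the better of you, a rough micro-benchmark sketch looks something like this (the iteration count is arbitrary and real numbers vary wildly between engines):
// Time a large number of calls of whichever variant of MyFunction you defined above.
var start = Date.now();
for (var i = 0; i < 1000000; i++) {
    MyFunction();
}
console.log('1,000,000 calls took ' + (Date.now() - start) + ' ms');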

I generally prefer the second version, as the condition only has to be evaluated once and not on every call, but there are times when it's not really feasible because it will hamper readability.
Btw, this is a case where you might want to use the ?: operator, e.g. (taken from production code):
var addEvent =
    document.addEventListener ? function(type, listener) {
        document.addEventListener(type, listener, false);
    } :
    document.attachEvent ? function(type, listener) {
        document.attachEvent('on' + type, listener);
    } :
    throwError; // throwError: a fallback function, defined elsewhere, for environments with neither API
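Using that helper is then the same regardless of browser (the handler body is illustrative):
addEvent('click', function() {
    // runs for clicks anywhere on the document
    console.log('document was clicked');
});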

For your simplified example I would do what's below assuming that your browser property check only needs to be done once:
var MyFunction = (function() {
    var rightProperty = document.browserSpecificProperty || document.someOtherProperty;
    return function doSomethingWith() {
        // use the rightProperty variable in your function
    };
})();
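The feature check runs once, when the immediately-invoked function executes; every later call simply uses the captured rightProperty (trivial usage sketch):
MyFunction(); // no property check happens here any more
MyFunction();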

The performance should be nearly equal!
Think about using frameworks like jQuery to get rid of the browser compatibility problems!
If performance is your main goal, have a look at SlickSpeed! It is a page which benchmarks different JavaScript frameworks!

Related

Avoiding strings and hardcoded function names – advantages?

I recently came across a JavaScript script, in which the author seemed to try to avoid strings inside his code and assigned everything to a variable.
So instead of
document.addEventListener('click', (e) => { /*whatever*/ });
he would write
var doc = document;
var click = 'click';
var EventListener = 'EventListener';
var addEventListener = `add${EventListener}`;
doc[addEventListener](click, (e) => { /*whatever*/ });
While caching document into a variable can be regarded as a micro-optimization, I am really wondering if there is any other benefit of this practice in general - testability, speed, maintenance, anything?
Legacy IE attachEvent should be pretty much dead, so being able to quickly make the script run only in those environments can hardly be regarded as an advantage, I suppose.
The example you give looks pretty strange, and I can't imagine any "good practice" reason for most of those moves. My first guess was that it's the work of someone who wasn't sure what they were doing, although it's odd that they'd also be using ECMAScript 6 syntax.
Another possibility is that this is generated code (e.g. the output of some kind of visual programming tool, or a de-minifier). In that situation it's common to see this sort of excessive factoring because the code is generated from templates that are conservatively written to guard against errors; I'm thinking of the way preprocessor macros in C make liberal use of parentheses.
Sometimes variable declarations are written in a way that makes clear (to the compiler and/or the reader) what type of data the variable holds. For instance, asm.js code uses unnecessary-looking variable declarations as a trick to implement strongly-typed variables on top of regular JS. And sometimes declarations are written as a form of documentation (if you see var foo = Math.PI * 0, that's probably there to tell you that foo is an angle in radians, since otherwise the author would have just written var foo = 0.0). But that still wouldn't explain something like var click='click'.
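For context, this is the kind of unnecessary-looking declaration asm.js-style code uses to pin down types (an illustrative sketch, not code from the script in question):
function add(x, y) {
    x = x | 0;           // declares to the compiler that x is a 32-bit integer
    y = y | 0;
    var sum = 0;         // the integer initialiser types the local variable
    sum = (x + y) | 0;
    return sum | 0;
}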

Using a function or typeof

How much slower or faster is the typeof operator compared to a function call? Or is it negligible and micro-optimising?
if (isNumber(myVar)) {
}
if (typeof myVar === 'number') {
}
Or is it negligible and micro-optimising?
Yes, this is definitely something to worry about if and only if you identify the code in question as being a performance bottleneck, which is really unlikely. It's micro-optimization. Function calls are really, really fast even if they don't get optimized out by the JavaScript engine. I used to worry about function call overhead when Array#forEach first appeared on the scene. Even back then, it wasn't an issue, even on the oldest, slowest JavaScript interpreter I could find: The one in IE6. Details on my blog: foreach and runtime cost
Re whether it takes longer... How long is a piece of string? It totally depends on the JavaScript engine you're using and whether the code in question is identified as a "hot" spot by the engine (assuming it's an engine like V8 that works in stages and optimizes hot spots).
A modern engine is likely to inline that if it becomes important to do so. That is not a guarantee.
Or is it negligible and micro-optimising?
It's negligible and micro-optimizing.
If you want to check if something's a number, I recommend using an isNaN check and then casting to a number.
if (!isNaN(myVar)) {
    myVar = +myVar;
}
In this way, you don't actually care how the value gets treated as a number.
Someone using the API could then choose to pass an object that can be treated as a number:
myVar = {
    valueOf: function () {
        return 5;
    }
};
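Continuing that example, such an object passes the check and coerces cleanly (a small sketch combining the two snippets above):
if (!isNaN(myVar)) { // Number(myVar) calls valueOf and gives 5, so the check passes
    myVar = +myVar;  // myVar is now the primitive number 5
}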

With jQuery/JavaScript Is it slower to declare variables that will only be used once?

I just want to make sure I understand this basic idea correctly, and no matter how I phrase it, I'm not coming up with completely relevant search results.
This is faster:
function () {
    $('#someID').someMethod( $('#someOtherID').data('some-data') );
}
than this:
function () {
    var variableOne = $('#someID'),
        variableTwo = $('#someOtherID').data('some-data');
    variableOne.someMethod( variableTwo );
}
is that correct?
I think the question may be "Is declaring variables slower than not declaring variables?" :facepalm:
The particular case where I questioned this is running a similarly constructed function after an AJAX call where some events must be delegated on to the newly loaded elements.
The answer you will benefit from the most is: it does not make a difference.
Declaring variables and storing data in them is something you do so that you do not have to query that data more than once. Besides this obvious point, using variables rather than chaining everything into one big expression improves readability and makes your code more manageable.
When it comes to performance, I can't think of any scenario where you should debate declaring a variable or not. If there is any absolute difference in speed, it would be so small that it is not a reason to skip the variable and let your code turn into spaghetti.
If you want to use the element $('#someID') again and again, then declaring a variable is useful and caching $('#someID') is recommended:
var x = $('#someID');
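A small sketch of the caching idea (selector and method names are just the placeholders from the question):
var cached = $('#someID');                               // query the DOM once
cached.someMethod($('#someOtherID').data('some-data'));  // reuse the cached jQuery object
cached.someMethod('again, without re-querying the DOM');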

Don't make functions within loops Javascript

I get that there are possibly three hundred of these questions, and I understand why not to do it. If we were looping with, say, a regular for loop, each iteration creates an anonymous function expression, which uses more memory. Instead we take the function outside of the loop, thus giving it a name.
Anonymous Function Iteration Example
var elements = document.getElementsByClassName('elementName');
for (var i = 0; i < elements.length; i++)
{
    elements[i].addEventListener('click', function(e) {
        console.log(e);
    });
}
Named Function Iteration Example
function handleClickEvents(e) {
    console.log(e);
}

var elements = document.getElementsByClassName('elementName');
for (var i = 0; i < elements.length; i++)
{
    elements[i].addEventListener('click', handleClickEvents);
}
The problem here is trying to prove the logic of this to someone, and to be honest my jsperfs are disproving me completely. Please see the test results for yourself here.
So is jsPerf just wrong in the calculations, or is this just a myth busted completely? I see that by running the anonymous function as my event listener I gain speed compared to the latter.
Can anyone enlighten me as to what the deal is here, and if we gain more speed with the first example, why should I even bother with the two extra lines of the second version?
You should not worry about performance -- I can hardly imagine you are adding millions of event listeners.
The second alternative (specifying a function reference) is superior in that the function, once defined, could potentially be used in other places. It requires fewer });, and so is less prone to typos. Perhaps more importantly, it is potentially more readable. Let's take the example of passing a function to Array#filter, to check that a filename is a jpg:
names.filter(function(name) {
    return /\.jpg$/i.test(name);
});
vs.
function isJpeg(name) { return /\.jpg$/i.test(name); }
names.filter(isJpeg);
If you're chaining methods together, the benefits become more obvious:
names . filter(isJpeg) . map(makeThumbnail) . forEach(uploadJpg);
At the end of the day it doesn't really matter and boils down to personal preference, but the one thing that is clear is that performance concerns should not be what drives your decision, except in very specialized situations. A good general rule is to write very short, one-off functions in-line. With ES6 and arrow functions, more functions can be "very short" and be candidates for inlining.
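For instance, with an arrow function the same isJpeg check above becomes short enough that writing it inline is an easy call:
names.filter(name => /\.jpg$/i.test(name));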
By the way, even when writing the function in-line, it's often a good idea to give it a name:
names.filter(function isJpeg(name) { return /\.jpg$/i.test(name); });
That has a couple of benefits. First, it's a form of documentation/comment and helps people read your code. Second, most debuggers and stack traces will do a better job of reporting about the function. Most minifiers will remove the name so there's no production impact.
I believe there is a flaw in your comparison. If you were to reverse the order, putting the anonymous function later in the comparison, it would be slower (http://jsperf.com/best-event-listener-practice/5). The code that runs later will always be slower because so many bindings have already been done before it.

Anonymous functions and memory consumption

In terms of memory consumption, are these equivalent or do we get a new function instance for every object in the latter?
var f = function() { alert(this.animal); };
var items = [];
for (var i = 0; i < 10; ++i)
{
    var item = {"animal": "monkey"};
    item.alertAnimal = f;
    items.push(item);
}
and
var items = [];
for (var i = 0; i < 10; ++i)
{
    var item = {"animal": "monkey"};
    item.alertAnimal = function() { alert(this.animal); };
    items.push(item);
}
EDIT
I'm thinking that in order for closure to work correctly, the second instance would indeed create a new function each pass. Is this correct?
You should prefer the first method, since the second one creates a function every time the interpreter passes that line.
Regarding your edit: we are in the same scope all the time, since JavaScript has function scope instead of block scope, so this might be optimizable, but I have not encountered an implementation that doesn't create it every time. I would recommend not relying on this (probably possible) optimization, since implementations that lack support could easily exceed memory limits if you use this technique extensively (which is bad, since you do not know which implementation will run it, right?).
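A quick way to see the difference for yourself (a minimal sketch; the actual memory cost is engine-dependent):
// With the first version every item shares the single function f:
items[0].alertAnimal === items[1].alertAnimal; // true

// With the second version each iteration created a new function object:
items[0].alertAnimal === items[1].alertAnimal; // false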
I am not an expert, but it seems to me that different JavaScript engines could handle this in different ways.
For example, V8 has something called hidden classes, which could affect memory consumption when accessing the same property. Maybe somebody can confirm or deny this.
