I was doing some benchmarking and found some absurd results that I can't seem to explain.
const a = undefined;
const b = {};
// add tests
suite
  .add('Undefined variable', function () {
    if (a > 0) {
      return true;
    } else {
      return false;
    }
  })
  .add('Undefined property', function () {
    if (b.a > 0) {
      return true;
    } else {
      return false;
    }
  })
Test Results:
Undefined variable x 69,660,401 ops/sec ±2.48% (36 runs sampled)
Undefined property x 994,939,175 ops/sec ±0.85% (40 runs sampled)
Test Results
--------------------------------------------------------------------------
Undefined property : 994939174.67 ops/sec (+1328.27 %)
Undefined variable : 69660400.51 ops/sec ( +0.00 %)
--------------------------------------------------------------------------
Anyone have any idea why the first case (Undefined variable) is so much slower than the other one?
I found similar performance results on a jsbench test: https://jsbench.me/vdku4ert4l/2
(V8 developer here.)
Comparing undefined > 0 always has the same performance. The difference here is that in one of your cases, V8 can optimize away the comparison: for a property access like b.a, it remembers the hidden class of the object(s) seen (i.e. the values of b); that's the key idea of the technique called "inline caching".
V8 takes this idea one step further: if all encountered objects had the same hidden class, and that hidden class didn't have an a property, then when that function gets optimized, V8 takes that experience into account and produces optimized code that assumes that this will still be the case in the future, which in this case enables it to constant-fold away the property load and the comparison. In other words, it optimizes that function to something like:
function undefined_property_optimized() {
  if (b.__hidden_class__ !== kPreviousHiddenClass) Deoptimize;
  return false;
}
Where Deoptimize means: throw away this optimized code and go back to unoptimized code for this function (resuming execution exactly at the right point, of course).
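To see hidden classes in action yourself, here is a minimal sketch (not part of the original answer); it relies on %HaveSameMap and the --allow-natives-syntax flag, which are V8 internals whose availability and behaviour may change between versions:
// Run with: node --allow-natives-syntax hidden-classes.js
const obj1 = { x: 1 };
const obj2 = { x: 2 };
const obj3 = { x: 3, y: 4 };
// Objects created with the same property names in the same order share a
// hidden class (a "map" in V8 terms); adding a property produces a new one.
console.log(%HaveSameMap(obj1, obj2)); // expected: true
console.log(%HaveSameMap(obj1, obj3)); // expected: false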
the first test Undefined variable is being heavily slowed down
No, it's not being slowed down at all. The other case is "cheating", so to speak.
adding const b = { a: undefined }; doesn't change anything
That actually depends a lot on how exactly you run the test. In local testing, with minor modifications to what I force the engine to do, this addition either has no effect, or makes both functions have equal speed.
Rule of thumb #1: when you run a microbenchmark and you see several hundred million operations per second, then the optimizing compiler was able to optimize away pretty much everything, and you're testing an empty (or trivial) function.
Rule of thumb #2: the results of microbenchmarks are difficult to interpret correctly. You may have thought you were measuring property loads here, or > 0 comparisons; both assumptions would be incorrect: in the faster case, there are no properties being loaded and no > 0 comparisons being performed. To make sense of a microbenchmark, you really need to study the generated machine code (and/or other engine internals), to make sure it's testing what you think it's testing.
Rule of thumb #3: modern high-performance JavaScript engines are incredibly complex beasts, and the same snippet of JS will not always have the same performance; it depends heavily on the code around it (both the immediately surrounding lines, and far-away code elsewhere in your app can affect it).
Rule of thumb #4: the results of microbenchmarks almost never carry over to real-world code -- mostly because of the above three rules :-)
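As a concrete illustration of rules #1 and #2, one way to keep V8 from constant-folding the whole test away is to feed the benchmarked function several different object shapes, so the property access stays polymorphic. This is only a sketch, assuming you're using Benchmark.js as above; the shape list and names are made up:
// Cycle through several hidden classes so the inline cache cannot assume a
// single shape and fold the load plus comparison away.
const shapes = [{}, { a: 1 }, { b: 2, a: undefined }, { c: 3 }];
let i = 0;
suite.add('Undefined property, polymorphic', function () {
  const b = shapes[i++ & 3]; // & 3 cycles through indices 0..3
  return b.a > 0;
});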
Side note: when you find yourself writing:
if (some_condition) {
return true;
} else {
return false;
}
then you can just replace that with return some_condition. Probably won't be faster, but makes your code shorter.
Related
How much slower or faster is the typeof operator than a function call? Or is it negligible and micro-optimising?
if (isNumber(myVar)) {
}
if (typeof myVar === 'number') {
}
Or is it negligible and micro-optimising?
Yes, this is definitely something to worry about if and only if you identify the code in question as being a performance bottleneck, which is really unlikely. It's micro-optimization. Function calls are really, really fast even if they don't get optimized out by the JavaScript engine. I used to worry about function call overhead when Array#forEach first appeared on the scene. Even back then, it wasn't an issue, even on the oldest, slowest JavaScript interpreter I could find: The one in IE6. Details on my blog: foreach and runtime cost
Re whether it takes longer... How long is a piece of string? It totally depends on the JavaScript engine you're using and whether the code in question is identified as a "hot" spot by the engine (assuming it's an engine like V8 that works in stages and optimizes hot spots).
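For concreteness, isNumber in the question is presumably just a thin wrapper around typeof, something like this sketch (the name and body are assumptions, not taken from the original post):
var myVar = 42;
// Hypothetical wrapper being compared against the inline typeof check.
function isNumber(value) {
  return typeof value === 'number';
}
if (isNumber(myVar)) {
  // exactly equivalent to: if (typeof myVar === 'number') { ... }
}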
A modern engine is likely to inline that if it becomes important to do so. That is not a guarantee.
Or is it negligible and micro-optimising?
It's negligible and micro-optimizing.
If you want to check if something's a number, I recommend using an isNaN check and then casting to a number.
if (!isNaN(myVar)) {
myVar = +myVar;
}
In this way, you don't actually care how the value gets treated as a number.
Someone using the API could then choose to pass an object that can be treated as a number:
myVar = {
valueOf: function () {
return 5;
}
};
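For example, here is a small illustration of how the isNaN-then-cast approach treats different inputs (the sample values are made up):
// Number-like values all pass the isNaN gate, including the valueOf object above.
var inputs = [5, "5", { valueOf: function () { return 5; } }, "hello"];
inputs.forEach(function (myVar) {
  if (!isNaN(myVar)) {
    myVar = +myVar;                         // cast to an actual number
    console.log(myVar + 1);                 // prints 6 for the first three inputs
  } else {
    console.log('not number-like:', myVar); // only "hello" lands here
  }
});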
Bluebird's util.js file has the following function:
function toFastProperties(obj) {
/*jshint -W027*/
function f() {}
f.prototype = obj;
ASSERT("%HasFastProperties", true, obj);
return f;
eval(obj);
}
For some reason, there's a statement after the return statement, and I'm not sure why it's there.
It also seems to be deliberate, as the author silenced the JSHint warning about it:
Unreachable 'eval' after 'return'. (W027)
What exactly does this function do? Does util.toFastProperties really make an object's properties "faster"?
I've searched through Bluebird's GitHub repository for any comments in the source code or an explanation in their list of issues, but I couldn't find any.
2017 update: First, for readers coming today - here is a version that works with Node 7 (4+):
function enforceFastProperties(o) {
function Sub() {}
Sub.prototype = o;
var receiver = new Sub(); // create an instance
function ic() { return typeof receiver.foo; } // perform access
ic();
ic();
return o;
eval("o" + o); // ensure no dead code elimination
}
Sans one or two small optimizations, all of the below is still valid.
Let's first discuss what it does and why that's faster and then why it works.
What it does
The V8 engine uses two object representations:
Dictionary mode - in which objects are stored as key–value pairs in a hash map.
Fast mode - in which objects are stored like structs, with no computation involved in property access.
Here is a simple demo of the speed difference, using the delete statement to force the objects into slow dictionary mode.
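The demo itself isn't reproduced here; a minimal reconstruction along the same lines could look like the sketch below (run with node --allow-natives-syntax; exact timings, and even whether the delete still triggers dictionary mode, depend on the V8 version, as the 2021 comment further down shows):
// node --allow-natives-syntax demo.js
function buildObject() {
  return { a: 1, b: 2, c: 3 };
}
var fast = buildObject();
var slow = buildObject();
delete slow.a; // historically forced the object into dictionary mode
console.log(%HasFastProperties(fast), %HasFastProperties(slow));
function readLoop(o) {
  var sum = 0;
  for (var i = 0; i < 1e7; i++) sum += o.b + o.c;
  return sum;
}
console.time('fast mode'); readLoop(fast); console.timeEnd('fast mode');
console.time('dict mode'); readLoop(slow); console.timeEnd('dict mode');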
The engine tries to use fast mode whenever possible and generally whenever a lot of property access is performed - however sometimes it gets thrown into dictionary mode. Being in dictionary mode has a big performance penalty so generally it is desirable to put objects in fast mode.
This hack is intended to force the object into fast mode from dictionary mode.
Bluebird's Petka himself talks about it here.
These slides (wayback machine) by Vyacheslav Egorov also mentions it.
The question "Pros and cons of dictionary mode" (https://stackoverflow.com/questions/23455678/pros-and-cons-of-dictionary-mode) and its accepted answer are also related.
This slightly outdated article is still a fairly good read that can give you a good idea on how objects are stored in v8.
Why it's faster
In JavaScript, prototypes typically store functions that are shared among many instances and rarely change much dynamically. For this reason it is very desirable to have them in fast mode to avoid the extra penalty every time a function is called.
For this - v8 will gladly put objects that are the .prototype property of functions in fast mode since they will be shared by every object created by invoking that function as a constructor. This is generally a clever and desirable optimization.
How it works
Let's first go through the code and figure what each line does:
function toFastProperties(obj) {
/*jshint -W027*/ // suppress the "unreachable code" error
function f() {} // declare a new function
f.prototype = obj; // assign obj as its prototype to trigger the optimization
// assert the optimization passes to prevent the code from breaking in the
// future in case this optimization breaks:
ASSERT("%HasFastProperties", true, obj); // requires the "native syntax" flag
return f; // return it
eval(obj); // prevent the function from being optimized through dead code
// elimination or further optimizations. This code is never
// reached but even using eval in unreachable code causes v8
// to not optimize functions.
}
We don't have to find the code ourselves to assert that v8 does this optimization; we can instead read the v8 unit tests:
// Adding this many properties makes it slow.
assertFalse(%HasFastProperties(proto));
DoProtoMagic(proto, set__proto__);
// Making it a prototype makes it fast again.
assertTrue(%HasFastProperties(proto));
Reading and running this test shows us that this optimization indeed works in v8. However - it would be nice to see how.
If we check objects.cc we can find the following function (L9925):
void JSObject::OptimizeAsPrototype(Handle<JSObject> object) {
if (object->IsGlobalObject()) return;
// Make sure prototypes are fast objects and their maps have the bit set
// so they remain fast.
if (!object->HasFastProperties()) {
MigrateSlowToFast(object, 0);
}
}
Now, JSObject::MigrateSlowToFast just explicitly takes the Dictionary and converts it into a fast V8 object. It's a worthwhile read and an interesting insight into v8 object internals - but it's not the subject here. I still warmly recommend that you read it here as it's a good way to learn about v8 objects.
If we check out SetPrototype in objects.cc, we can see that it is called in line 12231:
if (value->IsJSObject()) {
JSObject::OptimizeAsPrototype(Handle<JSObject>::cast(value));
}
Which in turn is called by FunctionSetPrototype, which is what we get with .prototype =.
Doing __proto__ = or .setPrototypeOf would have also worked, but these are ES6 functions, and Bluebird runs on all browsers since Netscape 7, so using them to simplify the code here is out of the question. For example, if we check .setPrototypeOf we can see:
// ES6 section 19.1.2.19.
function ObjectSetPrototypeOf(obj, proto) {
CHECK_OBJECT_COERCIBLE(obj, "Object.setPrototypeOf");
if (proto !== null && !IS_SPEC_OBJECT(proto)) {
throw MakeTypeError("proto_object_or_null", [proto]);
}
if (IS_SPEC_OBJECT(obj)) {
%SetPrototype(obj, proto); // MAKE IT FAST
}
return obj;
}
Which is installed directly on Object:
InstallFunctions($Object, DONT_ENUM, $Array(
...
"setPrototypeOf", ObjectSetPrototypeOf,
...
));
So - we have walked the path from the code Petka wrote to the bare metal. This was nice.
Disclaimer:
Remember this is all implementation detail. People like Petka are optimization freaks. Always remember that premature optimization is the root of all evil 97% of the time. Bluebird does something very basic very often so it gains a lot from these performance hacks - being as fast as callbacks isn't easy. You rarely have to do something like this in code that doesn't power a library.
V8 developer here. The accepted answer is a great explanation; I just wanted to highlight one thing: the so-called "fast" and "slow" property modes are unfortunate misnomers, since each has its pros and cons. Here is a (slightly simplified) overview of the performance of various operations:
                                         | struct-like properties | dictionary properties
adding a property to an object           | --                     | +
deleting a property                      | ---                    | +
reading/writing a property, first time   | -                      | +
reading/writing, cached, monomorphic     | +++                    | +
reading/writing, cached, few shapes      | ++                     | +
reading/writing, cached, many shapes     | --                     | +
colloquial name                          | "fast"                 | "slow"
So as you can see, dictionary properties are actually faster for most of the lines in this table, because they don't care what you do, they just handle everything with solid (though not record-breaking) performance. Struct-like properties are blazing fast for one particular situation (reading/writing the values of existing properties, where every individual place in the code only sees very few distinct object shapes), but the price they pay for that is that all other operations, in particular those that add or remove properties, become much slower.
It just so happens that the special case where struct-like properties have their big advantage (+++) is particularly frequent and really important for many apps' performance, which is why they acquired the "fast" moniker. But it's important to realize that when you delete properties and V8 switches the affected objects to dictionary mode, then it isn't being dumb or trying to be annoying: rather it attempts to give you the best possible performance for what you're doing. We have landed patches in the past that have achieved significant performance improvements by making more objects go to dictionary ("slow") mode sooner when appropriate.
Now, it can happen that your objects would generally benefit from struct-like properties, but something your code does causes V8 to transition them to dictionary properties, and you'd like to undo that; Bluebird had such a case. Still, the name toFastProperties is a bit misleading in its simplicity; a more accurate (though unwieldy) name would be spendTimeOptimizingThisObjectAssumingItsPropertiesWontChange, which would indicate that the operation itself is costly, and it only makes sense in certain limited cases. If someone took away the conclusion "oh, this is great, so I can happily delete properties now, and just call toFastProperties afterwards every time", then that would be a major misunderstanding and cause pretty bad performance degradation.
If you stick with a few simple rules of thumb, you'll never have a reason to even try to force any internal object representation changes:
Use constructors, and initialize all properties in the constructor. (This helps not only your engine, but also understandability and maintainability of your code. Consider that TypeScript doesn't quite force this but strongly encourages it, because it helps engineering productivity.)
Use classes or prototypes to install methods, don't just slap them onto each object instance. (Again, this is a common best practice for many reasons, one of them being that it's faster.)
Avoid delete. When properties come and go, prefer using a Map over the ES5-era "object-as-map" pattern. When an object can toggle into and out of a certain state, prefer boolean (or equivalent) properties (e.g. o.has_state = true; o.has_state = false;) over adding and deleting an indicator property (a short sketch follows below).
When it comes to performance, measure, measure, measure. Before you start sinking time into performance improvements, profile your app to see where the hotspots are. When you implement a change that you hope will make things faster, verify with your real app (or something extremely close to it; not just a 10-line microbenchmark!) that it actually helps.
Lastly, if your team lead tells you "I've heard that there are 'fast' and 'slow' properties, please make sure that all of ours are 'fast'", then point them at this post :-)
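As a small illustration of the third rule above (avoid delete; prefer Map and boolean flags), here is a sketch with made-up names:
// Instead of toggling properties on and off with delete...
const cacheAsObject = {};
cacheAsObject['user:1'] = { name: 'Ada' };
delete cacheAsObject['user:1']; // may push the object into dictionary mode
// ...prefer a Map when keys come and go:
const cache = new Map();
cache.set('user:1', { name: 'Ada' });
cache.delete('user:1'); // designed for this; no object shape is affected
// And prefer a boolean flag over adding and deleting an indicator property:
const task = { done: false };
task.done = true; // rather than: task.doneMarker = 1; ... delete task.doneMarker;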
Reality from 2021 (NodeJS version 12+):
Seems like a huge optimization has been done; objects with deleted fields and sparse arrays don't become slow. Or am I missing something?
// run in Node with enabled flag
// node --allow-natives-syntax script.js
function Point(x, y) {
this.x = x;
this.y = y;
}
var obj1 = new Point(1, 2);
var obj2 = new Point(3, 4);
delete obj2.y;
var arr = [1,2,3]
arr[100] = 100
console.log('obj1 has fast properties:', %HasFastProperties(obj1));
console.log('obj2 has fast properties:', %HasFastProperties(obj2));
console.log('arr has fast properties:', %HasFastProperties(arr));
all three show true
obj1 has fast properties: true
obj2 has fast properties: true
arr has fast properties: true
// run in Node with enabled flag
// node --allow-natives-syntax script.js
function Point(x, y) {
this.x = x;
this.y = y;
}
var obj2 = new Point(3, 4);
console.log('obj has fast properties:', %HasFastProperties(obj2)) // true
delete obj2.y;
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); //true
var obj = {x : 1, y : 2};
console.log('obj has fast properties:', %HasFastProperties(obj)) //true
delete obj.x;
console.log('obj has fast properties:', %HasFastProperties(obj)); // false
Objects created via a constructor function and plain object literals behave differently here.
What is the JavaScript convention for no operation? Like Python's pass statement.
One option is simply an empty function: function() {}
jQuery offers $.noop(), which simply calls the empty function above.
Is it acceptable to simply enter a value of false or 0?
In context... all of these work without throwing an error in Chrome:
var a = 2;
(a === 1) ? alert(1) : function() {};
(a === 1) ? alert(1) : $.noop();
(a === 1) ? alert(1) : false;
(a === 1) ? alert(1) : 0;
EDIT: A lot of people responded with, "don't do this! Change the code structure!" This reminds me of a post where someone asked how to sniff the browser. He received a barrage of posts saying, "DON'T DO THAT! IT'S EVIL," but nobody told him how to sniff the browser. This is not a code review. Imagine that you are dealing with legacy code that can't be changed, and without some function passed in, it will toss an error. Or, simply, that's the way the customer wants it, and they're paying me. So, respectfully, please answer the question: What is the best way to specify a "no operation" function in JavaScript?
EDIT2: How about one of these?
true;
false;
0;
1;
null;
To answer the original question, the most elegant and neat implementation of a noop function in pure Javascript (as is also discussed here) is Function.prototype. This is because:
Function.prototype is a function:
typeof Function.prototype === "function" // returns true
It can be invoked as a function and essentially does nothing as shown here:
setTimeout(function() {
console.log('Start: ', Date.now());
Function.prototype();
console.log('End : ', Date.now());
}, 1000);
Although this is a "true noop" (since most browsers seem to do nothing at all to execute a noop defined this way, and hence save CPU cycles), there might be some performance issues associated with it (as others have also mentioned in comments or in other answers).
That being said, you can easily define your own noop function and, in fact, many libraries and frameworks also provide noop functions. Below are some examples:
var noop = function () {}; // Define your own noop in ES3 or ES5
const noop = () => {}; // Define in ES6 as Lambda (arrow function)
setTimeout(noop, 10000); // Using the predefined noop
setTimeout(function () {} , 10000); // Using directly in ES3 or ES5
setTimeout(() => {} , 10000); // Using directly in ES6 as Lambda (arrow function)
setTimeout(angular.noop, 10000); // Using with AngularJS 1.x
setTimeout(jQuery.noop, 10000); // Using with jQuery
Here is an alphabetical list of various implementations of noop functions (or related discussions or google searches): AngularJS 1.x, Angular 2+ (does not seem to have a native implementation - use your own as shown above), Ember, jQuery, Lodash, NodeJS, Ramda, React (does not seem to have a native implementation - use your own as shown above), RxJS, Underscore.
BOTTOM LINE: Although Function.prototype is an elegant way of expressing a noop in Javascript, there might be some performance issues related to its use. So, you can define and use your own (as shown above) or use one defined by the library/framework that you might be using in your code.
The most concise and performant noop is an empty arrow function: ()=>{}.
Arrow functions work natively in all browsers except IE (there is a babel transform if you must):
()=>{} vs. Function.prototype
()=>{} is 87% faster than Function.prototype in Chrome 67.
()=>{} is 25% faster than Function.prototype in Firefox 60.
()=>{} is 85% faster than Function.prototype in Edge (6/15/2018).
()=>{} is 65% less code than Function.prototype.
The test below heats up using the arrow function to give bias to Function.prototype, yet the arrow function is the clear winner:
const noop = ()=>{};
const noopProto = Function.prototype;
function test (_noop, iterations) {
const before = performance.now();
for(let i = 0; i < iterations; i++) _noop();
const after = performance.now();
const elapsed = after - before;
console.info(`${elapsed.toFixed(4)}MS\t${_noop.toString().replace('\n', '')}\tISNOOP? ${_noop() === undefined}`);
return elapsed;
}
const iterations = 10000000
console.info(`noop time for ${iterations.toLocaleString()} iterations`)
const timings = {
noop: test(noop, iterations),
noopProto: test(noopProto, iterations)
}
const percentFaster = ((timings.noopProto - timings.noop)/timings.noopProto).toLocaleString("en-us", { style: "percent" });
console.info(`()=>{} is ${percentFaster} faster than Function.prototype in the current browser!`)
Whatever you are trying to achieve here, it is wrong. Ternary expressions should not be used as full statements, only within expressions, so the answer to your question is:
none of your suggestions, instead do:
var a = 2;
if (a === 1)
alert(1)
// else do nothing!
then the code is easily understandable, readable and as much efficient as it can get.
Why make it more difficult, when it can be simple?
edit:
So then, does a "no-operation" command basically indicate an inferior code structure?
You're missing my point. All the above is about the ternary expression x ? y : z.
But a no-operation command does not make sense in higher-level languages such as Javascript.
It is usually used, in lower level languages such as assembly or C, as a way to make the processor do nothing for one instruction for timing purposes.
In JS, whether you write 0;, null;, function () {}; or an empty statement, chances are it will be ignored by the interpreter when it is read, but before it gets interpreted, so in the end you'll just make your program load more slowly by a really tiny amount of time. Nota bene: I'm assuming this, as I'm not involved in any widely used JS interpreter, and each interpreter may well have its own strategy.
In case you use something a bit more complicated, like $.noop() or var foo = function () {}; foo(), then the interpreter may make a useless function call that will end up wasting a few bytes of your function stack, and a few cycles.
The only reason I can see for a function such as $.noop() to exist would be to still be able to give a callback function to some event function that would throw an exception if it can't call that callback. But then it's necessarily a function you need to give, and giving it the noop name is a good idea, so you're telling your readers (and that may be you in 6 months) that you purposely passed an empty function.
In the end, there's no such thing as "inferior" or "superior" code structure. You're either right or wrong in the way you use your tools. Using a ternary in your example is like using a hammer when you want to drive a screw. It'll work, but you're not sure you can hang something on that screw.
What could be considered either "inferior" or "superior" is the algorithm and ideas you put in your code. But that's another thing.
There is absolutely no problem or performance penalty of using Function.prototype over () => {}.
The main benefit of Function.prototype is having a singleton function rather than re-defining a new anonymous function each time. It's especially important to use a no-op like Function.prototype when defining default values and memoizing as it gives you a consistent object pointer which never changes.
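For example, here is a sketch of why a stable reference matters for memoization (the scenario and names are illustrative, not taken from the original answer):
// A fresh arrow function is a new object every time, which defeats
// identity-based caching; Function.prototype is always the same pointer.
const results = new WeakMap();
function memoizedRun(callback = Function.prototype) {
  if (!results.has(callback)) {
    results.set(callback, callback());
  }
  return results.get(callback);
}
memoizedRun();         // both calls hit the same cache entry,
memoizedRun();         // because the default is the same object
memoizedRun(() => {}); // each literal is a distinct key,
memoizedRun(() => {}); // so the cache grows with every call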
The reason I'm recommending Function.prototype rather than Function is that they're not the same:
Function() === Function()
// false
Function.prototype() === Function.prototype()
// true
Also, benchmarks from other answers are misleading. In fact, Function.prototype performs faster than () => {} depending on how you write and run the benchmark:
You can’t trust JS benchmarks << Specifically calling out benchmarks on this question.
Don't style your code from benchmarks; do whatever's maintainable and let the interpreter figure out how to optimize in the long run.
I think jQuery noop() is mostly intended to prevent code from crashing by providing a default function when the requested one is not available. For example, considering the following code sample, $.noop is chosen if fakeFunction is not defined, preventing the next call to fn from crashing:
var fn = fakeFunction || $.noop;
fn() // no crash
Also, noop() lets you save memory by avoiding writing the same empty function multiple times all over your code. By the way, $.noop is a bit shorter than function(){} (6 bytes saved per token). So, there is no relationship between your code and the empty function pattern. Use null, false or 0 if you like; in your case there will be no side effect. Furthermore, it's worth noting that this code...
true/false ? alert('boo') : function(){};
... is completely useless since you'll never call the function, and this one...
true/false ? alert('boo') : $.noop();
... is even more useless since you call an empty function, which is exactly the same as...
true/false ? alert('boo') : undefined;
Let's replace the ternary expression with an if statement to see how much it's useless:
if (true/false) {
alert('boo');
} else {
$.noop(); // returns undefined which goes nowhere
}
You could simply write:
if (true/false) alert('boo');
Or even shorter:
true/false && alert('boo');
To finally answer your question, I guess a "conventional no operation" is the one which is never written.
I use:
(0); // nop
To test execution time of this run as:
console.time("mark");
(0); // nop
console.timeEnd("mark");
result: mark: 0.000ms
Using Boolean(10 > 9) can be reduced to simply (10 > 9), which returns true. Coming up with the idea of using a single operand, I fully expected (0); to return false, but it simply returns the argument back, as can be verified by performing this test at the console.
> var a = (0);
< undefined
> a
< 0
Need a succinct way of conditionally executing an expression, including function calls? (No noop necessary.)
true && expression // or `expression()`
Need a valid, callable expression with no side effects?
const noop = () => {}
if (true) noop()
Need a valid, non-callable expression with no side effects?
void 0;
false;
0;
I just read through this article on named function expressions and their incompatibilities with IE <= 8.
I'm curious about one statement in particular:
A common pattern in web development is to “fork” function definitions based on some kind of a feature test, allowing for the best performance.
An example taken from his page:
var contains = (function() {
var docEl = document.documentElement;
if (typeof docEl.compareDocumentPosition != 'undefined') {
return function(el, b) {
return (el.compareDocumentPosition(b) & 16) !== 0;
};
}
else if (typeof docEl.contains != 'undefined') {
return function(el, b) {
return el !== b && el.contains(b);
};
}
return function(el, b) {
if (el === b) return false;
while (el != b && (b = b.parentNode) != null);
return el === b;
};
})();
When I see this, my immediate reaction is that this would be terrible to maintain. Code written this way doesn't really lend itself to being easily understandable.
In this case, instead of conditionally defining a function within another function which is then called immediately after the outer function is declared, one could write a function of nested ifs. It would be longer, but in my opinion easier to understand (though I am coming from C/C++/Java).
I would prefer answers that include some test numbers or explanations on how these functions would differ at run time.
It is very efficient. Notice the (); at the very end. This executes and assigns the result of the outer function to contains immediately. It is much more efficient than executing the underlying logic every time that the function contains is used.
Instead of checking each time contains() is called that compareDocumentPosition exists, this is done once when the code first executes. The fact that compareDocumentPosition exists or doesn't exist won't change, so only checking it once is ideal.
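For contrast, the "check every time" version of the same function would look roughly like this sketch, repeating the feature test on every invocation:
// Same behaviour as the forked version above, but the typeof checks run
// each time contains() is called instead of once up front.
function containsCheckedEveryCall(el, b) {
  var docEl = document.documentElement;
  if (typeof docEl.compareDocumentPosition != 'undefined') {
    return (el.compareDocumentPosition(b) & 16) !== 0;
  } else if (typeof docEl.contains != 'undefined') {
    return el !== b && el.contains(b);
  }
  if (el === b) return false;
  while (el != b && (b = b.parentNode) != null);
  return el === b;
}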
Javascript: how much more efficient is forked function declaration?
Barring any magic optimization done with a JIT/run-time it "costs" the same to invoke any function. Functions are just objects that are often stored in variables (or properties).
How much more "efficient" the version that returns a specialized function-object is depends upon factors including (but not limited to):
the number of times the resultant function is executed (1x = no gain) and
the "cost" of the branch vs. other code (depends) and
the "cost" of creating said closure (very cheap)
For a cheap branch or a low number of execution counts the "efficiency" is diminished. If there is a specific use-case, then benchmark that and you will have "the answer".
When I see this, my immediate reaction is that this would be terrible to maintain. Code written this way doesn't really lend itself to being easily understandable.
This example doesn't necessarily do it justice, IMHO, and is messy for other reasons. I think that giving the anonymous outer function an explicit name -- this can be done even for function expressions -- would help clarify the intent, for instance. Write code to be clean first. Then run a performance analysis (benchmark) and fix as appropriate. Chances are the "slow parts" won't be what was initially expected.
Some of it "not being easy to understand" is just a lack of familiarity with this construct (not trying to imply anything negative here) -- on the other hand, every language I know of has features which are abused in cases where there are cleaner solutions.
In this case, instead of conditionally defining a function within another function which is then called immediately after the outer function is declared, one could write a function of nested ifs. It would be longer, but in my opinion easier to understand (though I am coming from C/C++/Java).
Again, the exact case is sort of messy, IMHO. However, JavaScript is not C/C++/Java, and functions-as-first-class-values and closures do not exist in C/C++/Java (this is a little white lie: closures can be emulated in Java and the newest C++ supports some form of closures AFAIK -- but I don't use C++).
This construct is thus not seen in those other languages because the other languages do not support it easily (or at all) -- it says nothing about the viability of the approach (in JavaScript or elsewhere) in general.
I would prefer answers that include some test numbers or explanations on how these functions would differ at run time.
See above.
Expanding upon the bold section at top:
A function is "just an object" that is "applied" (read: called) with the (...) operator.
function x () {
alert("hi")
}
x() // alerts
window.x() // alerts -- just a property (assumes global scope above)
a = {hello: x}
a.hello() // alerts (still property)
b = a.hello
b() // alerts (still just a value that can be invoked)
Happy coding.
The main advantage as mentioned is speed. Having a single function with nested ifs means the condition needs to be re-evaluated every time the function is called. However, we know that the results of the conditions will never change.
If you are concerned about readability, a similar effect can be achieved in a more readable way:
var contains = (function () {
var docEl = document.documentElement;
if (typeof docEl.compareDocumentPosition != 'undefined') {
return contains_version1;
} else if (typeof docEl.contains != 'undefined') {
return contains_version2;
} else {
return contains_version3;
}
function contains_version1() {
...
}
function contains_version2() {
...
}
function contains_version3() {
...
}
})();
Or:
(function () {
var docEl = document.documentElement;
var contains =
typeof docEl.compareDocumentPosition != 'undefined' ? contains_version1 :
typeof docEl.contains != 'undefined' ? contains_version2 :
contains_version3;
function contains_version1() {
...
}
function contains_version2() {
...
}
function contains_version3() {
...
}
})();
This is a relatively strange construct if you are coming from a pure C background, but it should map easily to known concepts for a C++/Java person. This particular sample is essentially an implementation of a base class with an abstract function, plus 3 derived classes implementing it differently for different browsers. Using "if" or "switch" for such a case is not exactly the best approach in either C++ or Java.
Likely, a set of such functions will be packaged into a "class", and in that case it will closely map to a base class with virtual functions and multiple implementations, one for each browser...
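A sketch of what that packaging might look like in JavaScript (the DomHelpers name and the single method are invented for illustration):
// Pick one "implementation object" up front, much like choosing a concrete
// subclass once and then calling its virtual methods afterwards.
var DomHelpers = (function () {
  var docEl = document.documentElement;
  if (typeof docEl.compareDocumentPosition != 'undefined') {
    return {
      contains: function (el, b) {
        return (el.compareDocumentPosition(b) & 16) !== 0;
      }
      // ...other browser-specific helpers would live here too
    };
  }
  return {
    contains: function (el, b) {
      return el !== b && el.contains(b);
    }
  };
})();
// Call sites never repeat the feature test:
// DomHelpers.contains(parentNode, childNode);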
As a rule of thumb, which of these methods of writing cross-browser Javascript functions will perform better?
Method 1
function MyFunction()
{
if (document.browserSpecificProperty)
doSomethingWith(document.browserSpecificProperty);
else
doSomethingWith(document.someOtherProperty);
}
Method 2
var MyFunction;
if(document.browserSpecificProperty) {
MyFunction = function() {
doSomethingWith(document.browserSpecificProperty);
};
} else {
MyFunction = function() {
doSomethingWith(document.someOtherProperty);
};
}
Edit: Upvotes for all the fine answers so far. I've fixed the function to a more correct syntax.
Couple of points about the answers so far - whilst in the majority of cases it is a fairly pointless performance enhancement, there are a few reasons one might still want to spend some time analyzing the code:
Has to run on slow computers, mobile devices, old browsers etc.
Curiosity.
Use the same general principle to performance-enhance other scenarios where the evaluation of the IF statement does take some time.
Unless you're doing this a trillion times, it doesn't matter. Go with the one that is more readable and maintainable to you and/or your organization. The productivity gains you will get from writing clean, simple code matters way more than shaving a tenth of a microsecond off your JS execution time.
You should only even start thinking about what performs better when and only when you've written code and it is unacceptably slow. Then you should start tracking down the bottleneck, which will never be something like this. You will never get a measurable performance gain out of switching from one to the other here.
Unfortunately the code above is not actually cross-browser friendly as it relies on a mozilla quirk not present in other browsers -- namely that function statements are treated as function expressions inside branches. On browsers that aren't built on mozilla, the above code will always use the second function definition. I made a simple testcase to demonstrate this here.
Basically the ECMAScript spec says that function statements are treated similarly to var declarations, eg. they all get hoisted to the top of the current execution scope (eg. the start of a <script> tag, the start of a function, or the start of an eval block).
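A small sketch of the difference (how function statements inside blocks behave varied historically between engines, which is exactly the quirk described above; function expressions assigned to a variable are predictable everywhere):
// Historically unreliable: some engines hoisted BOTH declarations, so the
// second one could win no matter which branch the condition would select.
if (document.browserSpecificProperty) {
  function myFunction() { /* version A */ }
} else {
  function myFunction() { /* version B */ }
}
// Predictable everywhere: only the assignment in the taken branch runs.
var myFunctionExpr;
if (document.browserSpecificProperty) {
  myFunctionExpr = function () { /* version A */ };
} else {
  myFunctionExpr = function () { /* version B */ };
}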
To clarify olliej's answer, your second method is technically a syntax error. You could rewrite it this way:
var MyFunction;
if(document.browserSpecificProperty) {
MyFunction = function() {
doSomethingWith(document.browserSpecificProperty);
};
} else {
MyFunction = function() {
doSomethingWith(document.someOtherProperty);
};
}
Which is at least correct syntax, but note that MyFunction would only be available in the scope in which that occurs. (Omit var MyFunction;, and preferably use window.MyFunction = function() ... for global.)
Technically, I would say that the second one would perform better, because the if statement is only executed once, rather than every time the function is run.
The difference, however, would be negligible to the point of being meaningless. The performance penalty of a single if statement such as this would be insignificant even compared to the performance penalty of simply calling a function. It would make a smallish difference even if it were called a million times.
The first one is easier to understand, because it doesn't have the awkwardness of defining the same function twice based on a condition, with both versions behaving differently. That seems to be a recipe for confusion later on.
I wouldn't be the first person to say that unless you are really insane about this optimization thing, you'll get more of a win out of code readability.
I generally prefer the second version, as the condition only has to be evaluated once and not on every call, but there are times when it's not really feasible because it will hamper readability.
Btw, this is a case where you might want to use the ?: operator, e.g. (taken from production code):
var addEvent =
document.addEventListener ? function(type, listener) {
document.addEventListener(type, listener, false);
} :
document.attachEvent ? function(type, listener) {
document.attachEvent('on' + type, listener);
} :
throwError;
For your simplified example I would do what's below assuming that your browser property check only needs to be done once:
var MyFunction = (function() {
var rightProperty = document.browserSpecificProperty || document.someOtherProperty;
return function doSomethingWith() {
// use the rightProperty variable in your function
}
})();
The performance should be nearly equal!
Think about using frameworks like jQuery to get rid of the browser compatibility problems!
If performance is your main goal, have a look at SlickSpeed! It is a page which benchmarks different JavaScript frameworks!