I have been looking through the polyfill implementations on the Mozilla Developer Network (MDN) as I require a few of these for a library. I know shim.js exists, but I'm not using that.
It seems that the polyfills are not consistent in code styling. It appears that they are written by the community in an almost "wiki" style.
Take, for example, String.prototype.contains:
if (!('contains' in String.prototype)) {
    String.prototype.contains = function(str, startIndex) {
        return -1 !== String.prototype.indexOf.call(this, str, startIndex);
    };
}
It seems more logical to me to implement it like this:
if (!String.prototype.contains) {
    String.prototype.contains = function(str, startIndex) {
        return this.indexOf(str, startIndex) !== -1;
    };
}
Given that JavaScript is a size-critical language (in that everything should be as small as possible for network transmission), my example should be preferable to the one on MDN, as it saves a few bytes.
As the title suggests, I want to know how reliable the code is on MDN, and should I modify this as necessary to provide really clean, tiny implementations where possible?
It seems that your question refers to the article on String.contains().
Yes, MDN is a wiki so the quality of its content (including code examples) can vary. However, the content on general web topics (as opposed to extension development for example) is usually pretty good. Still, you shouldn't forget to use common sense.
The polyfill suggested on MDN and your version differ in three points (each is illustrated with a short example below):
!('contains' in String.prototype) vs. !String.prototype.contains to check whether a property exists: The former is clearly preferable. The in operator merely looks up a property; there are no side effects. !String.prototype.contains on the other hand will actually retrieve the value of that property and convert it to a boolean value. Not only is this marginally slower, some property values like 0 will be wrongly coerced to false. You probably won't notice the difference with functions but this might become a real issue when polyfilling other property types.
-1 !== foo vs. foo !== -1 for comparisons: This is a matter of taste but some people prefer the former variant. The advantage of always putting the constant first in comparisons is that you won't unintentionally turn a comparison into an assignment: writing -1 = foo when you meant -1 == foo will cause an error. On the other hand, foo = -1 instead of foo == -1 will succeed, and noticing that issue in your code might take a while. Obviously, if you choose to adopt that style you need to use it consistently throughout all your code.
String.prototype.indexOf.call vs. this.indexOf: The former guards against the situation that the indexOf method on the this object is overwritten. As a result, it is closer to the behavior of the native String.contains() function. Consider this example:
var a = "foo";
a.indexOf = function() {something_weird};
alert(a.contains("f"));
The native implementation of String.contains and a polyfill using String.prototype.indexOf.call will work even if this.indexOf is overwritten - a polyfill using this.indexOf however will fail.
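And to illustrate the first two points as well (a sketch, not MDN's code):
// 1. A falsy property value defeats a truthiness check, but not the in operator:
var conf = { retries: 0 };
console.log('retries' in conf); // true  - the property exists
console.log(!conf.retries);     // true  - so a "!obj.prop" guard would wrongly treat it as missing

// 2. Putting the constant first turns an accidental assignment into an immediate error:
// if (foo = -1) { ... }   silently assigns; the condition is always truthy
// if (-1 = foo) { ... }   throws an error right away, so the typo cannot go unnoticed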
Altogether, the code provided on MDN has a few more fail-safes. Whether these are needed in your particular scenario is another question, of course. However, dropping them to save a few bytes is the wrong approach to optimization ("premature optimization is the root of all evil"). Personally, I prefer good style over efficiency unless the difference in performance is known to be relevant.
Related
In Bluebird's util.js file, it has the following function:
function toFastProperties(obj) {
    /*jshint -W027*/
    function f() {}
    f.prototype = obj;
    ASSERT("%HasFastProperties", true, obj);
    return f;
    eval(obj);
}
For some reason, there's a statement after the return statement, and I'm not sure why it's there.
It also seems deliberate, as the author silenced the JSHint warning about it:
Unreachable 'eval' after 'return'. (W027)
What exactly does this function do? Does util.toFastProperties really make an object's properties "faster"?
I've searched through Bluebird's GitHub repository for any comments in the source code or an explanation in their list of issues, but I couldn't find any.
2017 update: First, for readers coming today - here is a version that works with Node 7 (4+):
function enforceFastProperties(o) {
    function Sub() {}
    Sub.prototype = o;
    var receiver = new Sub(); // create an instance
    function ic() { return typeof receiver.foo; } // perform access
    ic();
    ic();
    return o;
    eval("o" + o); // ensure no dead code elimination
}
Sans one or two small optimizations, all of the below is still valid.
Let's first discuss what it does and why that's faster and then why it works.
What it does
The V8 engine uses two object representations:
Dictionary mode - in which objects are stored as key-value pairs in a hash map.
Fast mode - in which objects are stored like structs, so there is no computation involved in property access.
Here is a simple demo that demonstrates the speed difference. Here we use the delete statement to force the objects into slow dictionary mode.
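The demo itself isn't reproduced here; a rough sketch of such a comparison (purely illustrative, absolute numbers vary by engine and version) could look like this:
var fast = { x: 1, y: 2, z: 3 };
var slow = { x: 1, y: 2, z: 3 };
delete slow.x; // deleting a property typically demotes the object to dictionary mode

function sum(obj) {
    var s = 0;
    for (var i = 0; i < 1e7; i++) {
        s += obj.y + obj.z;
    }
    return s;
}

console.time('fast mode');
sum(fast);
console.timeEnd('fast mode');

console.time('dictionary mode');
sum(slow);
console.timeEnd('dictionary mode');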
The engine tries to use fast mode whenever possible and generally whenever a lot of property access is performed - however sometimes it gets thrown into dictionary mode. Being in dictionary mode has a big performance penalty so generally it is desirable to put objects in fast mode.
This hack is intended to force the object into fast mode from dictionary mode.
Bluebird's Petka himself talks about it here.
These slides (wayback machine) by Vyacheslav Egorov also mention it.
The question "*https://stackoverflow.com/questions/23455678/pros-and-cons-of-dictionary-mode*" and its accepted answer are also related.
This slightly outdated article is still a fairly good read that can give you a good idea on how objects are stored in v8.
Why it's faster
In JavaScript prototypes typically store functions shared among many instances and rarely change a lot dynamically. For this reason it is very desirable to have them in fast mode to avoid the extra penalty every time a function is called.
For this - v8 will gladly put objects that are the .prototype property of functions in fast mode since they will be shared by every object created by invoking that function as a constructor. This is generally a clever and desirable optimization.
How it works
Let's first go through the code and figure what each line does:
function toFastProperties(obj) {
    /*jshint -W027*/ // suppress the "unreachable code" error
    function f() {} // declare a new function
    f.prototype = obj; // assign obj as its prototype to trigger the optimization
    // assert the optimization passes to prevent the code from breaking in the
    // future in case this optimization breaks:
    ASSERT("%HasFastProperties", true, obj); // requires the "native syntax" flag
    return f; // return it
    eval(obj); // prevent the function from being optimized through dead code
               // elimination or further optimizations. This code is never
               // reached but even using eval in unreachable code causes v8
               // to not optimize functions.
}
We don't have to find the code ourselves to assert that v8 does this optimization; we can instead read the v8 unit tests:
// Adding this many properties makes it slow.
assertFalse(%HasFastProperties(proto));
DoProtoMagic(proto, set__proto__);
// Making it a prototype makes it fast again.
assertTrue(%HasFastProperties(proto));
Reading and running this test shows us that this optimization indeed works in v8. However - it would be nice to see how.
If we check objects.cc we can find the following function (L9925):
void JSObject::OptimizeAsPrototype(Handle<JSObject> object) {
  if (object->IsGlobalObject()) return;
  // Make sure prototypes are fast objects and their maps have the bit set
  // so they remain fast.
  if (!object->HasFastProperties()) {
    MigrateSlowToFast(object, 0);
  }
}
Now, JSObject::MigrateSlowToFast just explicitly takes the Dictionary and converts it into a fast V8 object. It's a worthwhile read and an interesting insight into v8 object internals - but it's not the subject here. I still warmly recommend that you read it here as it's a good way to learn about v8 objects.
If we check out SetPrototype in objects.cc, we can see that it is called in line 12231:
if (value->IsJSObject()) {
  JSObject::OptimizeAsPrototype(Handle<JSObject>::cast(value));
}
Which in turn is called by FunctionSetPrototype, which is what we get with .prototype =.
Doing __proto__ = or .setPrototypeOf would also have worked, but these are ES6 features and Bluebird runs on all browsers since Netscape 7, so they're out of the question for simplifying the code here. For example, if we check .setPrototypeOf we can see:
// ES6 section 19.1.2.19.
function ObjectSetPrototypeOf(obj, proto) {
  CHECK_OBJECT_COERCIBLE(obj, "Object.setPrototypeOf");
  if (proto !== null && !IS_SPEC_OBJECT(proto)) {
    throw MakeTypeError("proto_object_or_null", [proto]);
  }
  if (IS_SPEC_OBJECT(obj)) {
    %SetPrototype(obj, proto); // MAKE IT FAST
  }
  return obj;
}
Which is installed directly on Object:
InstallFunctions($Object, DONT_ENUM, $Array(
  ...
  "setPrototypeOf", ObjectSetPrototypeOf,
  ...
));
So - we have walked the path from the code Petka wrote to the bare metal. This was nice.
Disclaimer:
Remember this is all implementation detail. People like Petka are optimization freaks. Always remember that premature optimization is the root of all evil 97% of the time. Bluebird does something very basic very often so it gains a lot from these performance hacks - being as fast as callbacks isn't easy. You rarely have to do something like this in code that doesn't power a library.
V8 developer here. The accepted answer is a great explanation, I just wanted to highlight one thing: the so-called "fast" and "slow" property modes are unfortunate misnomers, they each have their pros and cons. Here is a (slightly simplified) overview of the performance of various operations:
                                         | struct-like properties | dictionary properties
adding a property to an object           | --                     | +
deleting a property                      | ---                    | +
reading/writing a property, first time   | -                      | +
reading/writing, cached, monomorphic     | +++                    | +
reading/writing, cached, few shapes      | ++                     | +
reading/writing, cached, many shapes     | --                     | +
colloquial name                          | "fast"                 | "slow"
So as you can see, dictionary properties are actually faster for most of the lines in this table, because they don't care what you do, they just handle everything with solid (though not record-breaking) performance. Struct-like properties are blazing fast for one particular situation (reading/writing the values of existing properties, where every individual place in the code only sees very few distinct object shapes), but the price they pay for that is that all other operations, in particular those that add or remove properties, become much slower.
It just so happens that the special case where struct-like properties have their big advantage (+++) is particularly frequent and really important for many apps' performance, which is why they acquired the "fast" moniker. But it's important to realize that when you delete properties and V8 switches the affected objects to dictionary mode, then it isn't being dumb or trying to be annoying: rather it attempts to give you the best possible performance for what you're doing. We have landed patches in the past that have achieved significant performance improvements by making more objects go to dictionary ("slow") mode sooner when appropriate.
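For a rough idea of what "monomorphic" versus "many shapes" means here (an illustrative sketch, not code from this answer; "shape" refers to the hidden class the engine assigns based on which properties were added and in which order):
// Objects created the same way share a single shape:
function makePoint(x, y) {
    return { x: x, y: y };
}

function sumX(points) {
    var s = 0;
    for (var i = 0; i < points.length; i++) {
        s += points[i].x; // this load only ever sees one shape: monomorphic, very fast
    }
    return s;
}

sumX([makePoint(1, 2), makePoint(3, 4)]);

// Objects built with different property sets or property orders have different shapes,
// so the same load site has to handle many shapes and slows down:
sumX([{ x: 1, y: 2 }, { y: 2, x: 1 }, { x: 1, y: 2, z: 3 }]);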
Now, it can happen that your objects would generally benefit from struct-like properties, but something your code does causes V8 to transition them to dictionary properties, and you'd like to undo that; Bluebird had such a case. Still, the name toFastProperties is a bit misleading in its simplicity; a more accurate (though unwieldy) name would be spendTimeOptimizingThisObjectAssumingItsPropertiesWontChange, which would indicate that the operation itself is costly, and it only makes sense in certain limited cases. If someone took away the conclusion "oh, this is great, so I can happily delete properties now, and just call toFastProperties afterwards every time", then that would be a major misunderstanding and cause pretty bad performance degradation.
If you stick with a few simple rules of thumb, you'll never have a reason to even try to force any internal object representation changes:
Use constructors, and initialize all properties in the constructor. (This helps not only your engine, but also understandability and maintainability of your code. Consider that TypeScript doesn't quite force this but strongly encourages it, because it helps engineering productivity.)
Use classes or prototypes to install methods, don't just slap them onto each object instance. (Again, this is a common best practice for many reasons, one of them being that it's faster.)
Avoid delete. When properties come and go, prefer using a Map over the ES5-era "object-as-map" pattern. When an object can toggle into and out of a certain state, prefer boolean (or equivalent) properties (e.g. o.has_state = true; o.has_state = false;) over adding and deleting an indicator property. (A short sketch follows this list.)
When it comes to performance, measure, measure, measure. Before you start sinking time into performance improvements, profile your app to see where the hotspots are. When you implement a change that you hope will make things faster, verify with your real app (or something extremely close to it; not just a 10-line microbenchmark!) that it actually helps.
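A rough sketch of the delete rule above (names are illustrative):
// Instead of using a plain object as a map and deleting keys from it...
var cacheObj = {};
cacheObj["answer"] = 42;
delete cacheObj["answer"]; // repeated deletes push the object toward dictionary mode

// ...prefer a real Map when keys come and go:
var cache = new Map();
cache.set("answer", 42);
cache.delete("answer"); // Maps are designed for adding and removing entries

// And instead of adding and deleting an indicator property...
var task = { name: "build" };
task.isRunning = true;
delete task.isRunning;

// ...prefer a boolean that is always present and merely toggles:
var task2 = { name: "build", isRunning: false };
task2.isRunning = true;
task2.isRunning = false;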
Lastly, if your team lead tells you "I've heard that there are 'fast' and 'slow' properties, please make sure that all of ours are 'fast'", then point them at this post :-)
Reality from 2021 (NodeJS version 12+).
It seems a huge optimization has been made: objects with deleted fields and sparse arrays don't become slow. Or am I missing something?
// run in Node with enabled flag
// node --allow-natives-syntax script.js
function Point(x, y) {
    this.x = x;
    this.y = y;
}

var obj1 = new Point(1, 2);
var obj2 = new Point(3, 4);
delete obj2.y;

var arr = [1, 2, 3];
arr[100] = 100;

console.log('obj1 has fast properties:', %HasFastProperties(obj1));
console.log('obj2 has fast properties:', %HasFastProperties(obj2));
console.log('arr has fast properties:', %HasFastProperties(arr));
All three show true:
obj1 has fast properties: true
obj2 has fast properties: true
arr has fast properties: true
// run in Node with enabled flag
// node --allow-natives-syntax script.js
function Point(x, y) {
    this.x = x;
    this.y = y;
}

var obj2 = new Point(3, 4);
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); // true
delete obj2.y;
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); // true

var obj = {x: 1, y: 2};
console.log('obj has fast properties:', %HasFastProperties(obj)); // true
delete obj.x;
console.log('obj has fast properties:', %HasFastProperties(obj)); // false
Objects created via a constructor function and plain object literals apparently behave differently here.
After reading through Lodash's code a bit and learning that it uses typeof comparisons more often than not (in its suite of _.is* tools), I ran some tests to confirm that it is faster, which it is, in fact (if marginally).
Discussing my confusion with a fellow developer, he pointed out that in case 1:
var a;
return a === undefined;
Two objects undergo comparison, whereas in case 2 (the faster case):
var a;
return typeof a === 'undefined';
is a far simpler, flat string comparison.
I was always of the thought that undefined inhabited a static place in memory, and all triple equals was doing was comparing that reference. Who's correct (if either)?
JSPerf Test:
http://jsperf.com/testing-for-undefined-lodash
In this case (with var a in scope), both of the two posted pieces of code have identical semantics, as x === undefined is only true when x is undefined and typeof x will only return "undefined" when x is undefined (or not resolved in the execution context).
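The one case where the two idioms genuinely differ is an identifier that was never declared at all (the "not resolved in the execution context" case; notDeclared below is hypothetical and deliberately never declared):
typeof notDeclared === 'undefined'; // true - typeof tolerates unresolved identifiers
notDeclared === undefined;          // ReferenceError: notDeclared is not defined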
That out of the way, we end up with:
undefined === undefined
vs.
"undefined" === "undefined"
So, even in a naive implementation case the "difference" between the two is SameObject(a,b) vs StringEquals(a,b) as per the rules for Strict Equality Comparison.
But modern implementations of JavaScript are quite competitive and, as can be seen, are very well optimized for this case. I don't know of the exact implementation details, but there are at least two different techniques that could allow the performance (in FF/Chrome/Webkit) to be the same for both cases:
ECMAScript implementations may intern strings, such that there is only one "undefined" string. This would make the StringEquals(a,b) effectively the same as SameObject(a,b) which would end up amounting to a "pointer comparison" in the implementation - this alone could explain why the performance is the same.
Additionally, because typeof x === "undefined" is a common idiom, it could be translated during the parsing to end up in a special call that performs the same, say TypeOfUndefined(x). That is, the implementation could bypass the === (and StringEquals) entirely if it so chose.
IE comes out as the "clear loser" and seems to be missing an applicable optimization here.
The implementation of the undefined value is outside the scope of ECMAScript; there are no mandates that it must be a single value/object in the implementation.
In practice, however, undefined (and other special values like null) is most likely implemented as either a singleton (e.g. "one object in memory") or as an immediate value or value flag (such that there is actually no undefined object).
According to ECMAScript (see http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf), if both expressions are strings (typeof returns a string) it performs a string comparison (char by char); otherwise it just compares references (in the case where one of the sides is undefined). There is no way that a string comparison would be more efficient than a simple comparison of two numbers (i.e. addresses in memory). However, it is possible that it is highly optimized (browsers do not follow ECMAScript 100%), so it's really hard to say.
Proper benchmark:
http://jsperf.com/undefined-comparison-reference-vs-string
Note that in your test the initial value of a is undefined (may or may not matter, it seems that it does).
Conclusion: always use
return a === undefined;
It definitely won't be slower and it might be faster. Besides, it seems more natural to compare something to the special undefined value rather than to a string.
EDIT: Based on everyone's feedback, the original version of this question is more design-related than standards-related. Making it more SO-friendly.
Original:
Should a JS primitive be considered "equivalent" to an object-wrapped version of that primitive according to the ECMA standards?
Revised Question
Is there a universal agreement on how to compare primitive-wrapped objects in current JavaScript?
var n = new Number(1),
p = 1;
n === p; // false
typeof n; // "object"
typeof p; // "number"
+n === p; // true, but you need coercion.
EDIT:
As #Pointy commented, ECMA spec (262, S15.1.2.4) describes a Number.isNaN() method that behaves as follows:
Number.isNaN(NaN); // true
Number.isNaN(new Number(NaN)); // false
Number.isNaN(+(new Number(NaN))); // true, but you need coercion.
Apparently, the justification for this behavior is that Number.isNaN returns true only if the argument is already the number value NaN; unlike the global isNaN, it does not coerce new Number(NaN) to a primitive first.
It seems that the performance hit and trickiness in type conversion, etc, of directly using native Object Wrappers as opposed to primitives outweighs the semantic benefits for now.
See this JSPerf.
The short answer to your question is no, there is no consensus on how to compare values in JS because the question is too situational; it depends heavily on your particular circumstances.
But, to give some advice/provide a longer answer .... object versions of primitives are evil (in the "they will cause you lots of bugs sense", not in a moral sense), and should be avoided if possible. Therefore, unless you have a compelling reason to handle both, I'd suggest that you do not account for object-wrapped primitives, and simply stick to un-wrapped primitives in your code.
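A few examples of the kind of trouble wrapped primitives cause (standard JavaScript behavior, not code from this answer):
typeof new Number(1);              // "object", not "number"
new Number(1) === new Number(1);   // false - two distinct objects
new Boolean(false) ? "yes" : "no"; // "yes" - every object is truthy, even a wrapped false
1 + new Number(1);                 // 2 - works, but only via implicit coercion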
Plus, if you don't account for wrapped primitives it should eliminate any need for you to even have an equals method in the first place.
* Edit *
Just saw your latest comment, and if you need to compare arrays then the built-in == and === won't cut it. Even so, I'd recommend making an arrayEquals method rather than just an equals method, as you'll avoid lots of drama by keeping your function as focused as possible and using the built-in JS comparators as much as possible.
And if you do wrap that in some sort of general function, for convenience:
function equals(left, right) {
    if (left.slice && right.slice) { // lame array check
        return arrayEquals(left, right);
    }
    return left == right;
}
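The arrayEquals function used above isn't shown in the answer; a minimal, shallow sketch of it might look like this:
function arrayEquals(left, right) {
    if (left.length !== right.length) {
        return false;
    }
    for (var i = 0; i < left.length; i++) {
        if (left[i] !== right[i]) { // shallow, element-by-element comparison
            return false;
        }
    }
    return true;
}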
I'd still recommend against handling primitive-wrapped objects, unless by "handle" you make your function throw an error if it's passed a primitive-wrapped object. Again, because these objects will only cause you trouble, you should strive to avoid them as much as possible, and not leave yourself openings to introduce bad code.
RECAP:
Ok, it's been a while since I asked this question. As usual, I went and augmented the Object.prototype anyway, in spite of all the valid arguments against it given both here and elsewhere on the web. I guess I'm just that kind of stubborn jerk.
I've tried to come up with a conclusive way of preventing the new method from mucking up any expected behaviour, which proved to be a very tough, but informative, thing to do. I've learned a great many things about JavaScript, not least that I won't be trying anything as brash as messing with the native prototypes again (except for String.prototype.trim for IE < 9).
In this particular case, I don't use any libs, so conflicts were not my main concern. But having dug a little deeper into possible mishaps when playing around with native prototypes, I'm not likely to try this code in combination with any lib.
By looking into this prototype approach, I've come to a better understanding of the model itself. I was treating prototypes as some form of flexible traditional abstract class, making me cling to traditional OOP thinking. This viewpoint doesn't really do the prototype model justice. Douglas Crockford wrote about this pitfall; sadly, the pink background kept me from reading the full article.
I've decided to update this question in the off chance people who read this are tempted to see for themselves. All I can say to that is: by all means, do. I hope you learn a couple of neat things, as I did, before deciding to abandon this rather silly idea. A simple function might work just as well, or better even, especially in this case. After all, the real beauty of it is, that by adding just 3 lines of code, you can use that very same function to augment specific objects' prototypes all the same.
I know I'm about to ask a question that has been around for quite a while, but: why is Object.prototype considered to be off limits? It's there, and it can be augmented all the same, like any other prototype. Why, then, shouldn't you take advantage of this? To my mind, as long as you know what you're doing, there's no reason to steer clear of the Object prototype. Take this method for example:
if (!Object.prototype.getProperties) {
    Object.prototype.getProperties = function(f) {
        "use strict";
        var i, ret;
        f = f || false;
        ret = [];
        for (i in this) {
            if (this.hasOwnProperty(i)) {
                if (f === false && typeof this[i] === 'function') {
                    continue;
                }
                ret.push(i);
            }
        }
        return ret;
    };
}
Basically, it's the same old for...in loop you would either keep safe in a function, or write over and over again. I know it will be added to all objects, and nearly every inheritance chain in JavaScript can be traced back to Object.prototype, but in my script I consider it the lesser of two evils.
Perhaps, someone could do a better job at telling me where I'm wrong than this chap, among others. Whilst looking for reasons people gave NOT to touch the Object's prototype, one thing kept cropping up: it breaks the for..in loop-thingy, but then again: many frameworks do, too, not to mention your own inheritance chains. It is therefore bad practice not to include a .hasOwnProperty check when looping through an object's properties, to my mind.
I also found this rather interesting. Again: one comment is quite unequivocal: extending native prototypes is bad practice, but if the V8 people do it, who am I to say they're wrong? I know, that argument doesn't quite stack up.
The point is: I can't really see a problem with the above code. I like it, use it a lot and so far, it hasn't let me down once. I'm even thinking of attaching a couple more functions to the Object prototype. Unless somebody can tell me why I shouldn't, that is.
The fact is, it's fine as long as you know what you're doing and what the costs are. But it's a big "if". Some examples of the costs:
You'll need to do extensive testing with any library you choose to use with an environment that augments Object.prototype, because the overwhelming convention is that a blank object will have no enumerable properties. By adding an enumerable property to Object.prototype, you're making that convention false. E.g., this is quite common:
var obj = {"a": 1, "b": 2};
var name;
for (name in obj) {
console.log(name);
}
...with the overwhelming convention being that only "a" and "b" will show up, not "getProperties".
Anyone working on the code will have to be schooled in the fact that that convention (above) is not being followed.
You can mitigate the above by using Object.defineProperty (and similar) if supported, but beware that even in 2014, browsers like IE8 that don't support it properly remain in significant use (though we can hope that will change quickly now that XP is officially EOL'd). That's because using Object.defineProperty, you can add non-enumerable properties (ones that don't show up in for-in loops) and so you'll have a lot less trouble (at that point, you're primarily worried about name conflicts) — but it only works on systems that correctly implement Object.defineProperty (and a correct implementation cannot be "shimmed").
In your example, I wouldn't add getProperties to Object.prototype; I'd add it to Object and accept the object as an argument, like ES5 does for getPrototypeOf and similar. (A sketch of both alternatives appears at the end of this answer.)
Be aware that the Prototype library gets a lot of flak for extending Array.prototype because of how that affects for..in loops. And that's just Arrays (which you shouldn't use for..in on anyway, unless you're using the hasOwnProperty guard and quite probably String(Number(name)) === name as well).
...if the V8 people do it, who am I to say they're wrong?
On V8, you can rely on Object.defineProperty, because V8 is an entirely ES5-compliant engine.
Note that even when the properties are non-enumerable, there are issues. Years ago, Prototype (indirectly) defined a filter function on Array.prototype. And it does what you'd expect: Calls an iterator function and creates a new array based on elements the function chooses. Then ECMAScript5 came along and defined Array.prototype.filter to do much the same thing. But there's the rub: Much the same thing. In particular, the signature of the iterator functions that get called is different (ECMAScript5 includes an argument that Prototype didn't). It could have been much worse than that (and I suspect — but cannot prove — that TC39 were aware of Prototype and intentionally avoided too much conflict with it).
So: If you're going to do it, be aware of the risks and costs. The ugly, edge-case bugs you can run into as a result of trying to use off-the-shelf libraries could really cost you time...
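To make the two safer alternatives mentioned above concrete (the static helper on Object, and the non-enumerable defineProperty route), here is a hedged sketch; the names are illustrative, not from the answer:
// 1. Keep the helper off the prototype entirely, as ES5 does with Object.keys and friends:
Object.getProperties = function (obj, includeFunctions) {
    var key, ret = [];
    for (key in obj) {
        if (Object.prototype.hasOwnProperty.call(obj, key)) {
            if (!includeFunctions && typeof obj[key] === 'function') {
                continue;
            }
            ret.push(key);
        }
    }
    return ret;
};

// 2. If you do extend Object.prototype, at least make the property non-enumerable
//    (only reliable on engines with a correct Object.defineProperty):
Object.defineProperty(Object.prototype, 'getProperties', {
    value: function (includeFunctions) {
        return Object.getProperties(this, includeFunctions);
    },
    enumerable: false, // keeps for-in loops clean
    configurable: true,
    writable: true
});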
If frameworks and libraries generally did what you are proposing, it would very soon happen that two different frameworks would define two different functionalities as the same method of Object (or Array, Number... or any of the existing object prototypes). It is therefore better to add such new functionality into its own namespace.
For example... imagine, you would have a library that would serialize objects to json and a library that would serialize them to XML and both would define their functionality as
Object.prototype.serialize = function() { ... }
and you would only be able to use the one that was defined later. So it is better if they don't do this, but instead
JSONSerializingLibrary.seralize = function(obj) { ... }
XMLSerializingLibrary.seralize = function(obj) { ... }
It could also happen that a new functionality is defined in a new ECMAscript standard, or added by a browser vendor. So imagine that your browsers would also add a serialize function. That would again cause conflict with libraries that defined the same function. Even if the libraries' functionality was the same as that which is built in to the browser, the interpreted script functions would override the native function which would, in fact, be faster.
See http://www.websanova.com/tutorials/javascript/extending-javascript-the-right-way
This addresses some, but not all, of the objections raised. The objection about different libraries creating clashing methods can be alleviated by raising an exception if a domain-specific method is already present in Object.prototype. That will at least provide an alert when this undesirable event happens.
Inspired by this post I developed the following which is also available in the comments of the cited page.
!Object.implement && Object.defineProperty(Object.prototype, 'implement', {
    // based on http://www.websanova.com/tutorials/javascript/extending-javascript-the-right-way
    value: function (mthd, fnc, cfg) { // adds fnc to prototype under name mthd
        if (typeof mthd === 'function') { // find mthd from function source
            cfg = fnc, fnc = mthd;
            mthd = (fnc.toString().match(/^function\s+([a-z$_][\w$]+)/i) || [0, ''])[1];
        }
        mthd && !this.prototype[mthd] &&
            Object.defineProperty(this.prototype, mthd, {configurable: !!cfg, value: fnc, enumerable: false});
    }
});
Object.implement(function forEach(fnc) {
    for (var key in this)
        this.hasOwnProperty(key) && fnc(this[key], key, this);
});
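Usage of the helper then looks something like this (illustrative):
// forEach was installed on Object.prototype (non-enumerably) by the snippet above:
({ a: 1, b: 2 }).forEach(function (value, key) {
    console.log(key, value); // logs "a 1" and then "b 2"
});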
I have used this primarily to add standard-defined functions on implementations that do not support them.
I've come across both ways of applying Array prototype methods to a native object:
arr = Array.prototype.slice.call(obj);
arr = [].slice.call(obj);
In similar fashion, getting the true type of a native array-like object:
type = Object.prototype.toString.call(obj);
type = {}.toString.call(obj);
A simple test:
function fn() {
    console.log(
        Array.prototype.slice.call(arguments),
        [].slice.call(arguments),
        Object.prototype.toString.call(arguments),
        {}.toString.call(arguments)
    );
}
fn(0, 1);
Fiddle: http://jsfiddle.net/PhdmN/
They seem identical to me; the first syntax is used more often, but the second is definitely shorter. Are there any shortcomings when using the shorter syntax?
They are identical regarding functionality.
However, the Array object can be overwritten, causing the first method to fail.
//Example:
Array = {};
console.log(typeof Array.prototype.slice); // "undefined"
console.log(typeof [].slice); // "function"
The literal method creates a new Array instance (as opposed to using the Array.prototype method directly). Benchmark of both methods: http://jsperf.com/bbarr-new-array-vs-literal/3
When you're going to use the method many times, the best practice is to cache the method:
var slice = Array.prototype.slice; // commonly used
var slice = [].slice; // if you're concerned about the existence of Array, or if you just like the shorter syntax
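Typical usage of the cached reference might look like this (illustrative):
var slice = Array.prototype.slice;

function argsToArray() {
    return slice.call(arguments); // a real Array copy of the array-like arguments object
}

argsToArray(1, 2, 3); // [1, 2, 3]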
That's an interesting question! Let's pull up the pros (✔️) and cons (❌) for each alternative:
[].slice
✔️: Is typed faster
Two keystrokes, no shift-modifier or anything,
and your linter knows [.slice is a typo.
✔️: Is read faster
You can identify the relevant part (slice) faster.
✔️: Is more popular
56M+ snippets on GitHub (as of late 2018).
✔️: Can't be overwritten
The first part of Rob's answer demonstrates this perfectly.
✔️: Runs faster.
Wait, what? Well, that's actually the whole point of this answer.
Contrary to what you'd think and read pretty much everywhere, [].slice.call(...) does NOT instantiate a new, empty Array just to access its slice property!
Nowadays (it has been so for 5+ years – as of late 2018), the JIT compilation (1) is included everywhere you run JavaScript (unless you're still browsing the Web with IE8 or lower).
This mechanism allows the JS Engine to: (2)
... resolve [].slice directly, and statically, as a direct Array.prototype reference in one shot, with just one configurable property access: slice
Array.prototype.slice
❌: Is typed slower
Typos (e.g.: Array.prorotype.slice) look fine until you try and run the code.
❌: Is less popular
8M+ snippets on GitHub (as of late 2018).
❌: Runs slower
Array.prototype.slice is: (2)
... a lookup through every enclosing scope for an Array reference, walking all scopes up to the global one ... because you can name a variable Array any time you want.
Once the global scope is reached and the native binding found, the engine accesses its prototype and after that its method
...
O(N) scope resolution + 2 property accesses (.prototype and .slice).
✔️: Allows you to seamlessly adapt to whichever coding conventions would strictly prevent you from having a line start with either (, [ or `
Definitely a good thing (sarcastically).
✔️: You won't have to explain why [].slice is better in pretty much every way.
Although now, that would just boil down to clicking the share link below 👌
Disclaimer
Note that, realistically, neither one effectively runs faster than the other. This isn't the bottleneck of your application.
You may want to read A crash course in just-in-time (JIT) compilers
Quoted from Andrea Giammarchi (#WebReflection on Twitter)