RECAP:
Ok, it's been a while since I asked this question. As usual, I went and augmented the Object.prototype anyway, in spite of all the valid arguments against it given both here and elsewhere on the web. I guess I'm just that kind of stubborn jerk.
I've tried to come up with a conclusive way of preventing the new method from mucking up any expected behaviour, which proved to be a very tough, but informative thing to do. I've learned a great many things about JavaScript, not least that I won't be trying anything as brash as messing with the native prototypes again (except for String.prototype.trim for IE < 9).
In this particular case, I don't use any libs, so conflicts were not my main concern. But having dug a little deeper into possible mishaps when playing around with native prototypes, I'm not likely to try this code in combination with any lib.
By looking into this prototype approach, I've come to a better understanding of the model itself. I was treating prototypes as some form of flexible traditional abstract class, making me cling on to traditional OOP thinking. This viewpoint doesn't really do the prototype model justice. Douglas Crockford wrote about this pitfall, sadly the pink background kept me from reading the full article.
I've decided to update this question in the off chance people who read this are tempted to see for themselves. All I can say to that is: by all means, do. I hope you learn a couple of neat things, as I did, before deciding to abandon this rather silly idea. A simple function might work just as well, or better even, especially in this case. After all, the real beauty of it is, that by adding just 3 lines of code, you can use that very same function to augment specific objects' prototypes all the same.
I know I'm about to ask a question that has been around for quite a while, but: Why is Object.prototype considered to be off limits? It's there, and it can be augmented all the same, like any other prototype. Why, then, shouldn't you take advantage of this? To my mind, as long as you know what you're doing, there's no reason to steer clear of the Object prototype. Take this method for example:
if (!Object.prototype.getProperties) {
    Object.prototype.getProperties = function (f) {
        "use strict";
        var i, ret;
        f = f || false;
        ret = [];
        for (i in this) {
            if (this.hasOwnProperty(i)) {
                if (f === false && typeof this[i] === 'function') {
                    continue;
                }
                ret.push(i);
            }
        }
        return ret;
    };
}
Basically, it's the same old for...in loop you would either keep safe in a function or write over and over again. I know it will be added to all objects, since nearly every inheritance chain in JavaScript can be traced back to Object.prototype, but in my script I consider that the lesser of two evils.
Perhaps someone could do a better job of telling me where I'm wrong than this chap, among others. Whilst looking for reasons people gave NOT to touch Object's prototype, one thing kept cropping up: it breaks the for..in loop. But then again, many frameworks do too, not to mention your own inheritance chains. To my mind, it is therefore bad practice not to include a .hasOwnProperty check when looping through an object's properties.
I also found this rather interesting. Again: one comment is quite unequivocal: extending native prototypes is bad practice, but if the V8 people do it, who am I to say they're wrong? I know, that argument doesn't quite stack up.
The point is: I can't really see a problem with the above code. I like it, use it a lot and so far, it hasn't let me down once. I'm even thinking of attaching a couple more functions to the Object prototype. Unless somebody can tell me why I shouldn't, that is.
The fact is, it's fine as long as you know what you're doing and what the costs are. But it's a big "if". Some examples of the costs:
You'll need to do extensive testing with any library you choose to use with an environment that augments Object.prototype, because the overwhelming convention is that a blank object will have no enumerable properties. By adding an enumerable property to Object.prototype, you're making that convention false. E.g., this is quite common:
var obj = {"a": 1, "b": 2};
var name;
for (name in obj) {
    console.log(name);
}
...with the overwhelming convention being that only "a" and "b" will show up, not "getProperties".
Anyone working on the code will have to be schooled in the fact that that convention (above) is not being followed.
You can mitigate the above by using Object.defineProperty (and similar) where supported: it lets you add non-enumerable properties (ones that don't show up in for..in loops), so you'll have a lot less trouble; at that point, you're primarily worried about name conflicts. But beware that even in 2014, browsers like IE8 that don't implement it properly remain in significant use (though we can hope that will change quickly now that XP is officially EOL'd), and a correct implementation cannot be "shimmed".
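For instance, a minimal sketch of that mitigation, reusing the getProperties body from the question (the descriptor fields are standard ES5):

// Hedged sketch: only attempt this where Object.defineProperty exists and works
// on arbitrary objects (IE8 only supports it on DOM nodes).
if (Object.defineProperty) {
    Object.defineProperty(Object.prototype, 'getProperties', {
        value: function (f) { /* same body as in the question */ },
        enumerable: false,   // keeps it out of for..in loops
        writable: true,
        configurable: true
    });
}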
In your example, I wouldn't add getProperties to Object.prototype; I'd add it to Object and accept the object as an argument, like ES5 does for getPrototypeOf and similar.
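A minimal sketch of that alternative, mirroring the ES5 "static" style (the name getProperties comes from the question; the parameter name is illustrative):

Object.getProperties = function (obj, includeFunctions) {
    "use strict";
    var name, ret = [];
    for (name in obj) {
        if (Object.prototype.hasOwnProperty.call(obj, name) &&
                (includeFunctions || typeof obj[name] !== 'function')) {
            ret.push(name);
        }
    }
    return ret;
};

Object.getProperties({a: 1, b: function () {}}); // ["a"]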
Be aware that the Prototype library gets a lot of flak for extending Array.prototype because of how that affects for..in loops. And that's just Arrays (which you shouldn't use for..in on anyway, unless you're using the hasOwnProperty guard and quite probably a String(Number(name)) === name check as well; see the sketch below).
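A sketch of those guards, assuming you really must use for..in on an array:

var arr = ['a', 'b', 'c'];
arr.foo = 'bar'; // an expando property that for..in would also visit

for (var name in arr) {
    // hasOwnProperty filters inherited properties; the String(Number(...)) check
    // filters own properties that aren't genuine array indexes (like "foo").
    if (arr.hasOwnProperty(name) && String(Number(name)) === name) {
        console.log(arr[name]); // "a", "b", "c"
    }
}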
...if the V8 people do it, who am I to say they're wrong?
On V8, you can rely on Object.defineProperty, because V8 is an entirely ES5-compliant engine.
Note that even when the properties are non-enumerable, there are issues. Years ago, Prototype (indirectly) defined a filter function on Array.prototype. And it does what you'd expect: Calls an iterator function and creates a new array based on elements the function chooses. Then ECMAScript5 came along and defined Array.prototype.filter to do much the same thing. But there's the rub: Much the same thing. In particular, the signature of the iterator functions that get called is different (ECMAScript5 includes an argument that Prototype didn't). It could have been much worse than that (and I suspect — but cannot prove — that TC39 were aware of Prototype and intentionally avoided too much conflict with it).
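To illustrate the kind of mismatch (a simplified sketch; the Prototype-era signature shown is an approximation):

// Iterator signatures:
//   Prototype-era:  function (item, index)
//   ECMAScript 5:   function (item, index, array)  <- the extra argument
[1, 2, 3].filter(function (item, index, array) {
    // code written against one signature can misbehave when called with the other
    return item > 1;
}); // => [2, 3]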
So: If you're going to do it, be aware of the risks and costs. The ugly, edge-case bugs you can run into as a result of trying to use off-the-shelf libraries could really cost you time...
If frameworks and libraries generally did what you are proposing, it would very soon happen that two different frameworks would define two different functionalities as the same method of Object (or Array, Number... or any of the existing object prototypes). It is therefore better to add such new functionality into its own namespace.
For example, imagine you had a library that serializes objects to JSON and a library that serializes them to XML, and both defined their functionality as
Object.prototype.serialize = function() { ... }
and you would only be able to use the one that was defined later. So it is better if they don't do this, but instead
JSONSerializingLibrary.serialize = function(obj) { ... }
XMLSerializingLibrary.serialize = function(obj) { ... }
It could also happen that a new functionality is defined in a new ECMAScript standard, or added by a browser vendor. So imagine that browsers also added a serialize function. That would again conflict with libraries that defined the same function. Even if a library's functionality were the same as the built-in one, the interpreted script function would override the native function, which would, in fact, be faster.
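One common way to soften that last point is to install a fallback only when the host doesn't already provide one, so a future native (and faster) implementation wins. A hedged sketch, using the illustrative serialize name from above:

if (typeof Object.prototype.serialize !== 'function') {
    Object.prototype.serialize = function () {
        // library fallback goes here
    };
}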
See http://www.websanova.com/tutorials/javascript/extending-javascript-the-right-way
This addresses some, but not all, of the objections raised. The objection about different libraries creating clashing methods can be alleviated by raising an exception if a domain-specific method is already present in Object.prototype. That will at least provide an alert when this undesirable event happens.
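A minimal sketch of that guard (the method name is illustrative):

if ('getProperties' in Object.prototype) {
    throw new Error("Object.prototype.getProperties is already defined");
}
Object.defineProperty(Object.prototype, 'getProperties', {
    value: function () { /* domain-specific implementation */ },
    enumerable: false
});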
Inspired by this post, I developed the following, which is also available in the comments of the cited page.
!Object.implement && Object.defineProperty(Object.prototype, 'implement', {
    // based on http://www.websanova.com/tutorials/javascript/extending-javascript-the-right-way
    value: function (mthd, fnc, cfg) { // adds fnc to prototype under name mthd
        if (typeof mthd === 'function') { // find mthd from function source
            cfg = fnc, fnc = mthd;
            mthd = (fnc.toString().match(/^function\s+([a-z$_][\w$]+)/i) || [0, ''])[1];
        }
        mthd && !this.prototype[mthd] &&
            Object.defineProperty(this.prototype, mthd, {configurable: !!cfg, value: fnc, enumerable: false});
    }
});
Object.implement(function forEach(fnc) {
    for (var key in this)
        this.hasOwnProperty(key) && fnc(this[key], key, this);
});
I have used this primarily to add standard-defined functions on implementations that do not support them.
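For example, something like this (a sketch: Array inherits implement through the prototype chain, and the !this.prototype[mthd] guard above means a native version is never overridden):

Array.implement(function map(fnc) {
    var ret = [];
    for (var i = 0; i < this.length; i++) {
        ret.push(fnc(this[i], i, this));
    }
    return ret;
});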
Related
Bluebird's util.js file has the following function:
function toFastProperties(obj) {
    /*jshint -W027*/
    function f() {}
    f.prototype = obj;
    ASSERT("%HasFastProperties", true, obj);
    return f;
    eval(obj);
}
For some reason, there's a statement after the return statement, and I'm not sure why it's there.
As well, it seems that it is deliberate, as the author had silenced the JSHint warning about this:
Unreachable 'eval' after 'return'. (W027)
What exactly does this function do? Does util.toFastProperties really make an object's properties "faster"?
I've searched through Bluebird's GitHub repository for any comments in the source code or an explanation in their list of issues, but I couldn't find any.
2017 update: First, for readers coming today - here is a version that works with Node 7 (4+):
function enforceFastProperties(o) {
    function Sub() {}
    Sub.prototype = o;
    var receiver = new Sub(); // create an instance
    function ic() { return typeof receiver.foo; } // perform access
    ic();
    ic();
    return o;
    eval("o" + o); // ensure no dead code elimination
}
Sans one or two small optimizations - all the below is still valid.
Let's first discuss what it does and why that's faster and then why it works.
What it does
The V8 engine uses two object representations:
Dictionary mode - in which objects are stored as key-value pairs in a hash map.
Fast mode - in which objects are stored like structs, with no computation involved in property access.
Here is a simple demo that demonstrates the speed difference; it uses the delete operator to force objects into slow dictionary mode.
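A minimal sketch of such a demo (assuming Node with the native-syntax flag, i.e. node --allow-natives-syntax demo.js):

var fast = {x: 1, y: 2};
var slow = {x: 1, y: 2};
delete slow.x; // delete throws the literal-created object into dictionary mode

console.log(%HasFastProperties(fast)); // true
console.log(%HasFastProperties(slow)); // false (on the V8 of the time; see the 2021 update below)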
The engine tries to use fast mode whenever possible and generally whenever a lot of property access is performed - however sometimes it gets thrown into dictionary mode. Being in dictionary mode has a big performance penalty so generally it is desirable to put objects in fast mode.
This hack is intended to force the object into fast mode from dictionary mode.
Bluebird's Petka himself talks about it here.
These slides (wayback machine) by Vyacheslav Egorov also mention it.
The question "*https://stackoverflow.com/questions/23455678/pros-and-cons-of-dictionary-mode*" and its accepted answer are also related.
This slightly outdated article is still a fairly good read that can give you a good idea on how objects are stored in v8.
Why it's faster
In JavaScript prototypes typically store functions shared among many instances and rarely change a lot dynamically. For this reason it is very desirable to have them in fast mode to avoid the extra penalty every time a function is called.
For this - v8 will gladly put objects that are the .prototype property of functions in fast mode since they will be shared by every object created by invoking that function as a constructor. This is generally a clever and desirable optimization.
How it works
Let's first go through the code and figure what each line does:
function toFastProperties(obj) {
    /*jshint -W027*/                         // suppress the "unreachable code" warning
    function f() {}                          // declare a new function
    f.prototype = obj;                       // assign obj as its prototype to trigger the optimization

    // assert the optimization passes to prevent the code from breaking in the
    // future in case this optimization breaks:
    ASSERT("%HasFastProperties", true, obj); // requires the "native syntax" flag

    return f;                                // return it
    eval(obj);                               // prevent the function from being optimized through dead code
                                             // elimination or further optimizations. This code is never
                                             // reached, but even using eval in unreachable code causes v8
                                             // to not optimize functions.
}
We don't have to find the code ourselves to assert that v8 does this optimization; we can instead read the v8 unit tests:
// Adding this many properties makes it slow.
assertFalse(%HasFastProperties(proto));
DoProtoMagic(proto, set__proto__);
// Making it a prototype makes it fast again.
assertTrue(%HasFastProperties(proto));
Reading and running this test shows us that this optimization indeed works in v8. However - it would be nice to see how.
If we check objects.cc we can find the following function (L9925):
void JSObject::OptimizeAsPrototype(Handle<JSObject> object) {
    if (object->IsGlobalObject()) return;

    // Make sure prototypes are fast objects and their maps have the bit set
    // so they remain fast.
    if (!object->HasFastProperties()) {
        MigrateSlowToFast(object, 0);
    }
}
Now, JSObject::MigrateSlowToFast just explicitly takes the Dictionary and converts it into a fast V8 object. It's a worthwhile read and an interesting insight into v8 object internals - but it's not the subject here. I still warmly recommend that you read it here as it's a good way to learn about v8 objects.
If we check out SetPrototype in objects.cc, we can see that it is called in line 12231:
if (value->IsJSObject()) {
    JSObject::OptimizeAsPrototype(Handle<JSObject>::cast(value));
}
Which in turn is called by FunctionSetPrototype, which is what we get with .prototype =.
Doing __proto__ = or Object.setPrototypeOf would also have worked, but these are ES6 functions and Bluebird runs on all browsers since Netscape 7, so they're out of the question here; the .prototype = assignment keeps the code simple. For example, if we check .setPrototypeOf we can see:
// ES6 section 19.1.2.19.
function ObjectSetPrototypeOf(obj, proto) {
    CHECK_OBJECT_COERCIBLE(obj, "Object.setPrototypeOf");
    if (proto !== null && !IS_SPEC_OBJECT(proto)) {
        throw MakeTypeError("proto_object_or_null", [proto]);
    }
    if (IS_SPEC_OBJECT(obj)) {
        %SetPrototype(obj, proto); // MAKE IT FAST
    }
    return obj;
}
Which is installed directly on Object:
InstallFunctions($Object, DONT_ENUM, $Array(
    ...
    "setPrototypeOf", ObjectSetPrototypeOf,
    ...
));
So - we have walked the path from the code Petka wrote to the bare metal. This was nice.
Disclaimer:
Remember this is all implementation detail. People like Petka are optimization freaks. Always remember that premature optimization is the root of all evil 97% of the time. Bluebird does something very basic very often so it gains a lot from these performance hacks - being as fast as callbacks isn't easy. You rarely have to do something like this in code that doesn't power a library.
V8 developer here. The accepted answer is a great explanation, I just wanted to highlight one thing: the so-called "fast" and "slow" property modes are unfortunate misnomers, they each have their pros and cons. Here is a (slightly simplified) overview of the performance of various operations:
operation                                 | struct-like properties | dictionary properties
adding a property to an object            | --                     | +
deleting a property                       | ---                    | +
reading/writing a property, first time    | -                      | +
reading/writing, cached, monomorphic      | +++                    | +
reading/writing, cached, few shapes       | ++                     | +
reading/writing, cached, many shapes      | --                     | +
colloquial name                           | "fast"                 | "slow"
So as you can see, dictionary properties are actually faster for most of the lines in this table, because they don't care what you do, they just handle everything with solid (though not record-breaking) performance. Struct-like properties are blazing fast for one particular situation (reading/writing the values of existing properties, where every individual place in the code only sees very few distinct object shapes), but the price they pay for that is that all other operations, in particular those that add or remove properties, become much slower.
It just so happens that the special case where struct-like properties have their big advantage (+++) is particularly frequent and really important for many apps' performance, which is why they acquired the "fast" moniker. But it's important to realize that when you delete properties and V8 switches the affected objects to dictionary mode, then it isn't being dumb or trying to be annoying: rather it attempts to give you the best possible performance for what you're doing. We have landed patches in the past that have achieved significant performance improvements by making more objects go to dictionary ("slow") mode sooner when appropriate.
Now, it can happen that your objects would generally benefit from struct-like properties, but something your code does causes V8 to transition them to dictionary properties, and you'd like to undo that; Bluebird had such a case. Still, the name toFastProperties is a bit misleading in its simplicity; a more accurate (though unwieldy) name would be spendTimeOptimizingThisObjectAssumingItsPropertiesWontChange, which would indicate that the operation itself is costly, and it only makes sense in certain limited cases. If someone took away the conclusion "oh, this is great, so I can happily delete properties now, and just call toFastProperties afterwards every time", then that would be a major misunderstanding and cause pretty bad performance degradation.
If you stick with a few simple rules of thumb, you'll never have a reason to even try to force any internal object representation changes:
Use constructors, and initialize all properties in the constructor. (This helps not only your engine, but also understandability and maintainability of your code. Consider that TypeScript doesn't quite force this but strongly encourages it, because it helps engineering productivity.)
Use classes or prototypes to install methods, don't just slap them onto each object instance. (Again, this is a common best practice for many reasons, one of them being that it's faster.)
Avoid delete. When properties come and go, prefer using a Map over the ES5-era "object-as-map" pattern. When an object can toggle into and out of a certain state, prefer boolean (or equivalent) properties (e.g. o.has_state = true; o.has_state = false;) over adding and deleting an indicator property; see the sketch after this list.
When it comes to performance, measure, measure, measure. Before you start sinking time into performance improvements, profile your app to see where the hotspots are. When you implement a change that you hope will make things faster, verify with your real app (or something extremely close to it; not just a 10-line microbenchmark!) that it actually helps.
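A minimal sketch of the delete-avoidance rule (all names are illustrative):

// When keys come and go, use a Map instead of deleting object properties:
const cache = new Map();
cache.set('user:1', {name: 'Ada'}); // add an entry
cache.delete('user:1');             // remove it without de-optimizing any object

// When an object toggles in and out of a state, flip a boolean instead:
const o = {has_state: false};
o.has_state = true;  // enter the state
o.has_state = false; // leave it again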
Lastly, if your team lead tells you "I've heard that there are 'fast' and 'slow' properties, please make sure that all of ours are 'fast'", then point them at this post :-)
Reality from 2021 (Node.js version 12+):
It seems a big optimization has landed; objects with deleted fields and sparse arrays don't become slow. Or am I missing something?
// run in Node with the flag enabled
// node --allow-natives-syntax script.js
function Point(x, y) {
    this.x = x;
    this.y = y;
}

var obj1 = new Point(1, 2);
var obj2 = new Point(3, 4);
delete obj2.y;

var arr = [1, 2, 3];
arr[100] = 100;

console.log('obj1 has fast properties:', %HasFastProperties(obj1));
console.log('obj2 has fast properties:', %HasFastProperties(obj2));
console.log('arr has fast properties:', %HasFastProperties(arr));
All three show true:
obj1 has fast properties: true
obj2 has fast properties: true
arr has fast properties: true
// run in Node with the flag enabled
// node --allow-natives-syntax script.js
function Point(x, y) {
    this.x = x;
    this.y = y;
}

var obj2 = new Point(3, 4);
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); // true
delete obj2.y;
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); // true

var obj = {x: 1, y: 2};
console.log('obj has fast properties:', %HasFastProperties(obj));   // true
delete obj.x;
console.log('obj has fast properties:', %HasFastProperties(obj));   // false
Objects created via a constructor function and object literals behave differently.
I frequently get an array of an objects keys using:
Object.keys(someobject)
I'm comfortable doing this. I understand that Object is the Object constructor function, and keys() is a method of it, and that keys() will return a list of keys on whatever object is given as the first parameter. My question is not how to get the keys of an object - please do not reply with non-answers explaining this.
My question is, why isn't there a more predictable keys() or getKeys() method, or keys instance variable available on Object.prototype, so I can have:
someobject.keys()
or as an instance variable:
someobject.keys
And return the array of keys?
Again, my intention is to understand the design of JavaScript and what purpose the somewhat unintuitive mechanism of fetching keys serves. I don't need help getting keys.
I suppose they don't want too many properties on Object.prototype since your own properties could shadow them.
The more they include, the greater the chance for conflict.
It would be very clumsy to get the keys for this object if keys was on the prototype...
var myObj = {
    keys: ["j498fhfhdl89", "1084jnmzbhgi84", "jf03jbbop021gd"]
};

var keys = Object.prototype.keys.call(myObj);
An example of how introducing potentially shadowed properties can break code.
There seems to be some confusion as to why it's a big deal to add new properties to Object.prototype.
It's not at all difficult to conceive of a bit of code in existence that looks like this...
if (someObject.keys) {
    someObject.keys.push("new value");
} else {
    someObject.keys = ["initial value"];
}
Clearly this code would break if you add a keys function to Object.prototype. The fact that someObject.keys would now be a shadowing property breaks the code that is written to assume that it is not a shadowing property.
Hindsight is 20/20
If you're wondering why keys wasn't part of the original language, so that people would at least be accustomed to coding around it... well I guess they didn't find it necessary, or simply didn't think of it.
There are many possible methods and syntax features that aren't included in the language. That's why we have revisions to the specification, in order to add new features. For example, Array.prototype.forEach is a late addition. But they could add it to Array.prototype, because it doesn't break proper uses of Array.
It's not a realistic expectation that a language should include every possible feature in its 1.0 release.
Since Object.keys does nothing more than return an Array of an Object's enumerable own properties, it's a non-essential addition, that could be achieved with existing language features. It should be no surprise that it wasn't present earlier.
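Indeed, a common shim built from those existing features (ignoring edge cases like IE's DontEnum bug) looks like this:

if (!Object.keys) {
    Object.keys = function (obj) {
        var keys = [], name;
        for (name in obj) {
            if (Object.prototype.hasOwnProperty.call(obj, name)) {
                keys.push(name);
            }
        }
        return keys;
    };
}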
Conclusion
Adding keys to Object.prototype most certainly would break legacy code.
In a tremendously popular language like JavaScript, backward compatibility is most certainly going to be an important consideration. Adding new properties to Object.prototype at this point could prove to be disastrous.
I guess an answer to your question is "Because the committee decided so", but I can already hear you ask "why?" before the end of this sentence.
I read about this recently but I can't find the source right now. What it boiled down to was that in many cases you had to use Object.prototype.keys.call(myobject) anyway, because of the likelihood that myobject.keys was already being used in the object for something else.
I think you will find this archived mail thread interesting, where for example Brendan Eich discuss some aspects of the new methods in ECMAScript 5.
Update:
While digging in the mail-archive I found this:
Topic: Should Object.keys be repositioned as Object.prototype.keys
Discussion: Allen argued that this isn't really a meta layer operation
as it is intended for use in application layer code as an alternative
to for..in for getting a list of enumerable property names. As a
application layer method it belongs on Object.prototype rather than on
the Object constructor. There was general agreement in principle, but
it was pragmatically argued by Doug and Mark that it is too likely
that a user defined object would define its own property named "keys"
which would shadow Object.prototype.keys making it inaccessible for
use on such objects.
Action: Leave it as Object.keys.
Feel free to make your own, but the more properties you add to the Object prototype, the higher chance you'll collide. These collisions will most likely break any third party javascript library, and any code that relies on a for...in loop.
Object.prototype.keys = function () {
    return Object.keys(this);
};

for (var key in {}) {
    // now i'm getting 'keys' in here, wtf?
}

var something = {
    keys: 'foo'
};

something.keys(); // error
I've come across both ways to apply Array prototype methods to a native object:
arr = Array.prototype.slice.call(obj);
arr = [].slice.call(obj);
In similar fashion, getting the true type of a native array-like object:
type = Object.prototype.toString.call(obj);
type = {}.toString.call(obj);
A simple test:
function fn() {
    console.log(
        Array.prototype.slice.call(arguments),
        [].slice.call(arguments),
        Object.prototype.toString.call(arguments),
        {}.toString.call(arguments)
    );
}
fn(0, 1);
Fiddle: http://jsfiddle.net/PhdmN/
They seem identical to me; the first syntax is used more often, but the second is definitely shorter. Are there any shortcomings when using the shorter syntax?
They are identical regarding functionality.
However, the Array object can be overwritten, causing the first method to fail.
// Example:
Array = {};
console.log(typeof Array.prototype); // "undefined" - so Array.prototype.slice would now throw
console.log(typeof [].slice);        // "function"
The literal method creates a new instance of Array (as opposed to accessing Array.prototype directly). Benchmark of both methods: http://jsperf.com/bbarr-new-array-vs-literal/3
When you're going to use the method many times, the best practice is to cache the method:
var slice = Array.prototype.slice; // commonly used
var slice = [].slice;              // if you're concerned about the existence of Array, or just like the shorter syntax
That's an interesting question! Let's pull up the pros (✔️) and cons (❌) for each alternative:
[].slice
✔️: Is typed faster
Two keystrokes, no shift-modifier or anything,
and your linter knows [.slice is a typo.
✔️: Is read faster
You can identify the relevant part (slice) faster.
✔️: Is more popular
56M+ snippets on GitHub (as of late 2018).
✔️: Can't be overwritten
The first part of Rob's answer demonstrates this perfectly.
✔️: Runs faster.
Wait, what? Well, that's actually the whole point of this answer.
Contrary to what you'd think and read pretty much everywhere, [].slice.call(...) does NOT instantiate a new, empty Array just to access its slice property!
Nowadays (it has been so for 5+ years – as of late 2018), the JIT compilation (1) is included everywhere you run JavaScript (unless you're still browsing the Web with IE8 or lower).
This mechanism allows the JS Engine to: (2)
... resolve [].slice directly, and statically, as a direct Array.prototype reference in one shot, with just one configurable property access: slice
Array.prototype.slice
❌: Is typed slower
Typos (e.g.: Array.prorotype.slice) look fine until you try and run the code.
❌: Is less popular
8M+ snippets on GitHub (as of late 2018).
❌: Runs slower
Array.prototype.slice is: (2)
... a lookup through the whole scope chain for an Array reference, walking every scope up to the global one ... because you can name a variable Array any time you want.
Once the global scope is reached and the native found, the engine accesses its prototype and after that its method
...
O(N) scope resolution + 2 property accesses (.prototype and .slice).
✔️: Allows you to seamlessly adapt to whichever coding conventions would strictly prevent you from having a line start with either (, [ or `
Definitely a good thing (sarcastically).
✔️: You won't have to explain why [].slice is better in pretty much every way.
Although now, that would just boil down to clicking the share link below 👌
Disclaimer
Note that, realistically, neither effectively runs faster than the other. This isn't the bottleneck of your application.
You may want to read A crash course in just-in-time (JIT) compilers
Quoted from Andrea Giammarchi (#WebReflection on Twitter)
For a long time I have been throwing around the idea of making my JavaScript more object oriented. I have looked at a few different implementations of this as well but I just cannot decide if it is necessary or not.
What I am trying to answer are the following questions
Is John Resig's simple inheritance structure safe to use for production?
Is there any way to be able to tell how well it has been tested?
Besides Joose what other choices do I have for this purpose? I need one that is easy to use, fast, and robust. It also needs to be compatible with jQuery.
Huh. It looks much more complicated than it needs to be, to me.
Actually, looking more closely, I really take exception to what it is doing in providing this._super() within a method to call the superclass method.
The code introduces a reliance on typeof === 'function' (unreliable for some objects), Function#toString (argh, function decomposition is also unreliable), and deciding whether to wrap based on whether you've used the sequence of bytes _super in the function body (even if you've only used it in a string; and if you try e.g. this['_'+'super'] it'll fail).
And if you're storing properties on your function objects (eg MyClass.myFunction.SOME_PRIVATE_CONSTANT, which you might do to keep namespaces clean) the wrapping will stop you from getting at those properties. And if an exception is thrown in a method and caught in another method of the same object, _super will end up pointing at the wrong thing.
All this is just to make calling your superclass's method-of-the-same name easier. But I don't think that's especially hard to do in JS anyway. It's too clever for its own good, and in the process making the whole less reliable. (Oh, and arguments.callee isn't valid in Strict Mode, though that's not really his fault since that occurred after he posted it.)
Here's what I'm using for classes at the moment. I don't claim that this is the “best” JS class system, because there are loads of different ways of doing it and a bunch of different features you might want to add or not add. But it's very lightweight and aims at being ‘JavaScriptic’, if that's a word. (It isn't.)
Function.prototype.makeSubclass = function () {
    function Class() {
        if (!(this instanceof Class))
            throw 'Constructor function requires new operator';
        if ('_init' in this)
            this._init.apply(this, arguments);
    }
    if (this !== Object) {
        Function.prototype.makeSubclass.nonconstructor.prototype = this.prototype;
        Class.prototype = new Function.prototype.makeSubclass.nonconstructor();
    }
    return Class;
};
Function.prototype.makeSubclass.nonconstructor = function () {};
It provides:
protection against accidental missing new. The alternative is to silently redirect X() to new X() so missing new works. It's a toss-up which is best; I went for explicit error so that one doesn't get used to writing without new and causing problems on other objects not defined like that. Either way is better than the unacceptable JS default of letting this. properties fall onto window and mysteriously going wrong later.
an inheritable _init method, so you don't have to write a constructor-function that does nothing but call the superclass constructor function.
and that's really all.
Here's how you might use it to implement Resig's example:
var Person = Object.makeSubclass();
Person.prototype._init = function (isDancing) {
    this.dancing = isDancing;
};
Person.prototype.dance = function () {
    return this.dancing;
};

var Ninja = Person.makeSubclass();
Ninja.prototype._init = function () {
    Person.prototype._init.call(this, false);
};
Ninja.prototype.swingSword = function () {
    return true;
};

var p = new Person(true);
p.dance(); // => true

var n = new Ninja();
n.dance(); // => false
n.swingSword(); // => true

// Should all be true
p instanceof Person &&
    n instanceof Ninja && n instanceof Person
Superclass-calling is done by specifically naming the method you want and calling it, a bit like in Python. You could add a _super member to the constructor function if you wanted to avoid naming Person again (so you'd say Ninja._super.prototype._init.call, or perhaps Ninja._base._init.call).
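A sketch of that last idea (the _base name is illustrative):

Ninja._base = Person; // stash the superclass on the constructor once

Ninja.prototype._init = function () {
    Ninja._base.prototype._init.call(this, false); // no need to name Person again
};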
JavaScript is prototype based and not class based. My recommendation is not to fight it and declare subtypes the JS way:
MyDerivedObj.prototype = new MySuperObj();
MyDerivedObj.prototype.constructor = MyDerivedObj;
See how far you can get without using inheritance at all. Treat it as a performance hack (to be applied reluctantly where genuinely necessary) rather than a design principle.
In a highly dynamic language like JS, it is rarely necessary to know whether an object is a Person. You just need to know if it has a firstName property or an eatFood method. You don't often need to know if an object is an array; if it has a length property and some other properties named after integers, that's usually good enough (e.g. the Arguments object). "If it walks like a duck and quacks like a duck, it's a duck."
// give back a duck
return {
    walk: function () { ... },
    quack: function () { ... }
};
Yes, if you're making very large numbers of small objects, each with dozens of methods, then by all means assign those methods to the prototype to avoid the overhead of creating dozens of slots in every instance. But treat that as a way of reducing memory overhead - a mere optimisation. And do your users a favour by hiding your use of new behind some kind of factory function, so they don't even need to know how the object is created. They just need to know it has method foo or property bar.
(And note that you won't really be modelling classical inheritance in that scenario. It's merely the equivalent of defining a single class to get the efficiency of a shared vtable.)
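A minimal sketch of that factory idea (all names are illustrative):

function Counter() { this.count = 0; }  // methods live on the prototype (shared vtable)...
Counter.prototype.increment = function () {
    return ++this.count;
};

function createCounter() {              // ...and `new` hides behind a factory
    return new Counter();
}

var c = createCounter(); // callers never see `new`
c.increment();           // they just know it has an increment method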
Adding methods to native JavaScript objects like Object, Function, Array, String, etc. is considered bad practice.
But I can't understand why.
Can somebody shed light on this?
Thanks in advance.
Because you might happen to use a library that defines a function with the same name but working another way.
By overriding it, you break the other library's behaviour, and then you scratch your head in debug mode.
Edit
If you really want to add a method with a very unpleasant name like prependMyCompanyName(...) to the String prototype, I think it's pretty much risk-free from an overriding point of view. But I hope for you that you won't have to type it too often...
Best way to do it is still, in my humble opinion, to define for example, a MyCompanyUtils object (you can find a shortcut like $Utils), and make it have a prepend(str,...) method.
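A sketch of that namespace approach (MyCompanyUtils and prepend are the illustrative names from above):

var MyCompanyUtils = MyCompanyUtils || {};
var $Utils = MyCompanyUtils; // optional shortcut

MyCompanyUtils.prepend = function (str, prefix) {
    return prefix + str;
};

$Utils.prepend("Widget", "Acme"); // "AcmeWidget"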
The two big reasons in my opinion are that:
It makes your code harder to read. You write code once and read it many more times. If your code eventually falls into the hands of another person, he may not immediately know that all of your Objects have a .to_whatever method.
It causes the possibility of namespace conflicts. If you try to put your library which overrides Object.prototype into another library, it may cause issues with other people doing the same thing.
There is also the effect that augmenting the Object prototype has on for...in loops to consider:
Object.prototype.foo = 1;

var obj = {
    bar: 2
};

for (var i in obj) {
    window.alert(i);
}

// Alerts both "foo" and "bar"