Advantage of using Object.create - javascript

Similar to, but different from, this question. The code below is from JavaScript: The Definitive Guide. The author is basically defining an inherit() function that defers to Object.create if it exists, and otherwise falls back to plain old JavaScript inheritance using a throwaway constructor and a swapped prototype.
My question is, since Object.create doesn't exist in plenty of common browsers (IE, for example), what's the point of even trying to use it? It certainly clutters up the code, and one of the commenters on the previous question mentioned that Object.create isn't too fast.
So what's the advantage of adding extra code in order to occasionally utilize this ECMAScript 5 function that may or may not be slower than the "old" way of doing this?
function inherit(p) {
    if (Object.create)            // If Object.create() is defined...
        return Object.create(p);  //   then just use it.
    function f() {}               // Otherwise define a dummy constructor function,
    f.prototype = p;              // set its prototype property to p,
    return new f();               // and use f() to create an "heir" of p.
}
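For reference, a minimal usage sketch of inherit() (the proto and heir names are just illustrative):

var proto = { greet: function () { return 'hello'; } };
var heir = inherit(proto);     // heir's prototype is proto
heir.greet();                  // "hello", found via the prototype chain
heir.hasOwnProperty('greet');  // false: greet is inherited, not an own property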

The speed difference is not very noticeable, since by nature you probably won't be creating that many objects (hundreds or even thousands isn't what I'd call a lot). If you are, and speed is a critical issue, you probably won't be coding in JS anyway. And if neither of those holds, I'm sure that within a few releases of all the popular JS engines the difference will be negligible (it already is in some).
In answer to your question, the reasons aren't speed-related; it's that the design pattern of Object.create is favoured over the old method (for the reasons outlined in that and other answers). It allows for proper use of the ES5 property attributes (which make for more scalable objects, and thus more scalable apps), and it can help with inheritance hierarchies.
It's forward engineering. If we took the line of "well it isn't implemented everywhere so let's not get our feet wet", things would move very slowly. On the contrary, early and ambitious adoption helps the industry move forward, helps business decision makers support new technologies, helps developers improve and perfect new ideas and the supporting frameworks. I'm an advocate for early (but precautionary and still backward-compatible) adoption, because experience shows that waiting for enough people to support a technology can leave you waiting far too long. May IE6 be a lesson to those who think otherwise.
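To make the property-attributes point above concrete, here's a hedged sketch of Object.create's second argument, an ES5 descriptor map (the object and property names are made up):

var base = {
    describe: function () { return this.name + ' #' + this.id; }
};
var widget = Object.create(base, {
    id:   { value: 42, writable: false, enumerable: true },  // read-only
    name: { value: 'widget', writable: true, enumerable: true }
});
widget.id = 99;     // silently ignored (throws a TypeError in strict mode)
widget.describe();  // "widget #42"

None of this is possible with the plain new f() fallback, which is exactly why the inherit() shim prefers Object.create when it exists.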


Is having a nested function inside a constructor memory inefficient? [duplicate]

In JavaScript, we have two ways of making a "class" and giving it public functions.
Method 1:
function MyClass() {
    var privateInstanceVariable = 'foo';
    this.myFunc = function () { alert(privateInstanceVariable); };
}
Method 2:
function MyClass() {}
MyClass.prototype.myFunc = function () {
    alert("I can't use private instance variables. :(");
};
I've read numerous times that using Method 2 is more efficient, as all instances share the same copy of the function rather than each getting its own. Defining functions via the prototype has a huge disadvantage, though: it makes it impossible to have private instance variables.
Even though, in theory, using Method 1 gives each instance of an object its own copy of the function (and thus uses far more memory, not to mention the time required for allocations), is that what actually happens in practice? It seems like an optimization web browsers could easily make would be to recognize this extremely common pattern and have all instances of the object reference the same copy of the functions defined via these "constructor functions". It could then give an instance its own copy of a function only if it is explicitly changed later on.
Any insight - or, even better, real world experience - about performance differences between the two, would be extremely helpful.
See http://jsperf.com/prototype-vs-this
Declaring your methods via the prototype is faster, but whether or not this is relevant is debatable.
If you have a performance bottleneck in your app it is unlikely to be this, unless you happen to be instantiating 10000+ objects on every step of some arbitrary animation, for example.
If performance is a serious concern, and you'd like to micro-optimise, then I would suggest declaring via prototype. Otherwise, just use the pattern that makes most sense to you.
I'll add that, in JavaScript, there is a convention of prefixing properties that are intended to be seen as private with an underscore (e.g. _process()). Most developers will understand and avoid these properties, unless they're willing to forgo the social contract; but in that case you might as well not cater to them. What I mean to say is: you probably don't really need true private variables...
In the new version of Chrome, this.method is about 20% faster than prototype.method, but creating a new object is still slower.
If you can reuse the object instead of always creating a new one, that can be 50%-90% faster than creating new objects. Plus there's the benefit of no garbage collection, which is huge:
http://jsperf.com/prototype-vs-this/59
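To illustrate what reuse can look like, here is a minimal object-pool sketch (all names are made up, and a real pool would need more care):

var pool = [];

function acquire() {
    // Reuse a previously released object if one is available.
    return pool.length ? pool.pop() : { x: 0, y: 0 };
}

function release(obj) {
    obj.x = 0;       // reset state so stale data doesn't leak between uses
    obj.y = 0;
    pool.push(obj);  // keep the object around instead of letting it be collected
}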
It only makes a difference when you're creating lots of instances. Otherwise, the performance of calling the member function is exactly the same in both cases.
I've created a test case on jsperf to demonstrate this:
http://jsperf.com/prototype-vs-this/10
You might not have considered this, but putting the method directly on the object is actually better in one way:
Method invocations are very slightly faster (jsperf) since the prototype chain does not have to be consulted to resolve the method.
However, the speed difference is almost negligible. On top of that, putting a method on a prototype is better in two more impactful ways:
Faster to create instances (jsperf)
Uses less memory
Like James said, this difference can be important if you are instantiating thousands of instances of a class.
That said, I can certainly imagine a JavaScript engine that recognizes that the function you are attaching to each object does not change across instances and thus only keeps one copy of the function in memory, with all instance methods pointing to the shared function. In fact, it seems that Firefox is doing some special optimization like this but Chrome is not.
ASIDE:
You are right that it is impossible to access private instance variables from inside methods on prototypes. So I guess the question you must ask yourself is: do you value being able to make instance variables truly private over utilizing inheritance and prototyping? I personally think that making variables truly private is not that important, and I would just use the underscore prefix (e.g., "this._myVar") to signify that although the variable is public, it should be considered private. That said, in ES6, there is apparently a way to have the best of both worlds!
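The answer doesn't spell out which ES6 feature it means, but the usual candidate is a WeakMap keyed by instance, which lets prototype methods reach per-instance state that never appears on the object itself. A minimal sketch, assuming a WeakMap-capable engine:

var privates = new WeakMap();

function Counter() {
    privates.set(this, { count: 0 });   // per-instance "private" state
}

Counter.prototype.increment = function () {
    return ++privates.get(this).count;  // shared prototype method, private data
};

var c = new Counter();
c.increment();   // 1
Object.keys(c);  // []: the state is invisible on the instance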
You may use this approach and it will allow you to use prototype and access instance variables.
var Person = (function () {
    function Person(age, name) {
        this.age = age;
        this.name = name;
    }

    Person.prototype.showDetails = function () {
        alert('Age: ' + this.age + ' Name: ' + this.name);
    };

    return Person; // This is not referencing `var Person` but the inner Person function
}()); // See Note 1 below
Note 1:
The parentheses at the end invoke the function immediately (a self-invoking function, also known as an IIFE) and assign the result to var Person.
Usage
var p1 = new Person(40, 'George');
var p2 = new Person(55, 'Jerry');
p1.showDetails();
p2.showDetails();
In short, use Method 2 for creating properties/methods that all instances will share. Those are effectively "global" to the class, and any change to them will be reflected across all instances. Use Method 1 for creating instance-specific properties/methods.
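A quick sketch of that sharing behaviour, and why it can surprise you with mutable values (illustrative names):

function Widget() {}
Widget.prototype.tags = [];  // one array, shared by every instance

var a = new Widget();
var b = new Widget();
a.tags.push('x');
b.tags;                      // ['x']: the mutation shows up on every instance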
I wish I had a better reference but for now take a look at this. You can see how I used both methods in the same project for different purposes.
Hope this helps. :)
This answer should be considered an expansion of the rest of the answers filling in missing points. Both personal experience and benchmarks are incorporated.
As far as my experience goes, I use constructors to literally construct my objects religiously, whether the methods are private or not. The main reason is that when I started, that was the easiest immediate approach for me, so it's not a special preference. It might be as simple as the fact that I like visible encapsulation, and prototypes feel a bit disembodied. My private methods get assigned as variables in the constructor scope as well. Although this is my habit and keeps things nicely self-contained, it's not always the best habit, and I do sometimes hit walls. Apart from wacky scenarios with highly dynamic self-assembly driven by configuration objects and code layout, it tends to be the weaker approach in my opinion, particularly if performance is a concern. Knowing that the internals are private is useful, but you can achieve that by other means with the right discipline. Unless performance is a serious consideration, use whatever works best for the task at hand.
Using prototype inheritance and a convention to mark items as private does make debugging easier, since you can then traverse the object graph easily from the console or debugger. On the other hand, such a convention makes obfuscation somewhat harder and makes it easier for others to bolt their own scripts onto your site. This is one of the reasons the private-scope approach gained popularity. It's not true security, but it adds resistance. Unfortunately, a lot of people still think it's a genuine way to program secure JavaScript. Since debuggers have gotten really good, code obfuscation takes its place. If you're looking for security flaws where too much is on the client, it's a design pattern you might want to look out for.
A convention allows you to have protected properties with little fuss. That can be a blessing and a curse. It does ease some inheritance issues, as it is less restrictive. You still have the risk of collision, or the increased cognitive load of considering where else a property might be accessed. Self-assembling objects let you do some strange things to get around a number of inheritance problems, but they can be unconventional. My modules tend to have a rich inner structure where things don't get pulled out until the functionality is needed elsewhere (shared), or exposed unless needed externally. The constructor pattern tends to lead to creating self-contained, sophisticated modules more than simply piecemeal objects. If you want that, then it's fine. Otherwise, if you want a more traditional OOP structure and layout, then I would probably suggest regulating access by convention. In my usage scenarios complex OOP isn't often justified, and modules do the trick.
All of the tests here are minimal. In real-world usage, modules are likely to be more complex, making the hit a lot greater than these tests indicate. It's quite common to have a private variable with multiple methods working on it, and each of those methods adds more overhead on initialisation that you won't get with prototype inheritance. In most cases it doesn't matter, because only a few instances of such objects float around, although cumulatively it might add up.
There is an assumption that prototype methods are slower to call because of the prototype lookup. It's not an unfair assumption; I made it myself until I tested it. In reality it's complex, and some tests suggest that aspect is trivial. Between prototype.m = f, this.m = f, and this.m = function..., the last performs significantly better than the first two, which perform about the same. If the prototype lookup alone were a significant issue, the last two would instead significantly outperform the first. Instead, something else strange is going on, at least where Canary is concerned. It's possible functions are optimised according to what they are members of. A multitude of performance considerations come into play. You also have differences for parameter access and variable access.
Memory capacity. This is not well discussed here. An up-front assumption that's likely to be true is that prototype inheritance will usually be far more memory efficient, and according to my tests it generally is. When you build up your object in your constructor, you can assume that each object will probably have its own instance of each function rather than a shared one, a larger property map for its own personal properties, and likely some overhead to keep the constructor scope open as well. Functions that operate on the private scope are extremely and disproportionately demanding of memory. I find that in a lot of scenarios the proportionate difference in memory is much more significant than the proportionate difference in CPU cycles.
Memory graph. You can also jam up the engine, making GC more expensive. Profilers do tend to show time spent in GC these days. It's not only a problem of allocating and freeing more: you'll also create a larger object graph to traverse, so the GC consumes more cycles. If you create a million objects and then hardly touch them, depending on the engine it might turn out to have more of an ambient performance impact than you expected. I have verified that this at least makes the GC run longer when objects are disposed of. That is, there tends to be a correlation between memory used and the time it takes to GC. However, there are cases where the time is the same regardless of the memory, which indicates that the graph makeup (layers of indirection, item count, etc.) has more impact. That's not something that is always easy to predict.
Not many people use chained prototypes extensively, myself included, I have to admit. Prototype chains can be expensive in theory; someone will have measured the cost, but I haven't. If you instead build your objects entirely in the constructor, with a chain of inheritance in which each constructor calls a parent constructor on itself, then in theory method access should be much faster. On the other hand, you can accomplish the equivalent if it matters (such as flattening the prototypes down the ancestor chain), as long as you don't mind breaking things like hasOwnProperty, and perhaps instanceof, if you really need to. Either way, things start to get complex once you go down this road of performance hacks. You'll probably end up doing things you shouldn't be doing.
Many people don't directly use either approach you've presented. Instead they make their own things using anonymous objects, allowing method sharing any which way (mixins, for example). There are a number of frameworks as well that implement their own strategies for organising modules and objects. These are heavily convention-based custom approaches. For most people, and for you, the first challenge should be organisation rather than performance. This is often complicated in that JavaScript gives many ways of achieving things, versus languages or platforms with more explicit OOP/namespace/module support. When it comes to performance, I would say to avoid major pitfalls first and foremost.
There's a new Symbol type that's supposed to work for private variables and methods. There are a number of ways to use it, and it raises a host of questions related to performance and access. In my tests the performance of Symbols wasn't great compared to everything else, but I never tested them thoroughly.
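For what it's worth, the Symbol-as-private-key pattern alluded to above looks roughly like this (a sketch; note that symbols are hidden from for...in and Object.keys but still discoverable via Object.getOwnPropertySymbols, so this is privacy by convention, not enforcement):

var secret = Symbol('count');

function Counter() {
    this[secret] = 0;  // invisible to for...in and Object.keys
}

Counter.prototype.increment = function () {
    return ++this[secret];
};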
Disclaimers:
There are lots of discussions about performance, and there isn't always a permanently correct answer, as usage scenarios and engines change. Always profile, but also measure in more than one way, as profilers aren't always accurate or reliable. Avoid significant effort on optimisation unless there's a demonstrable problem.
It's probably better instead to include performance checks for sensitive areas in automated testing and to run when browsers update.
Remember that battery life sometimes matters as well as perceptible performance. The slowest solution might turn out faster after running an optimising compiler on it (i.e., a compiler might have a better idea of when restricted-scope variables are accessed than of properties marked as private by convention). Consider backends such as node.js, which can require better latency and throughput than you would often find in the browser. Most people won't need to worry about these things for something like validating a registration form, but the number of diverse scenarios where they might matter is growing.
You have to be careful with memory allocation tracking tools: you need to persist the result. In some cases where I didn't return and persist the data, it was optimised out entirely, or the sample rate was not sufficient between instantiation and dereferencing, leaving me scratching my head as to how an array initialised and filled to a million items registered as 3.4 KiB in the allocation profile.
In the real world, in most cases, the only way to really optimise an application is to write it in the first place so you can measure it. Dozens to hundreds of factors can come into play, if not thousands, in any given scenario. Engines also do things that can lead to asymmetric or non-linear performance characteristics. If you define functions in a constructor, they might be arrow functions or traditional ones; each behaves differently in certain situations, and I have no idea about the other function types. Classes also don't behave the same, in terms of performance, as prototyped constructors that should be equivalent.
You need to be really careful with benchmarks as well. Prototyped classes can have deferred initialisation in various ways, especially if you prototyped your properties as well (advice: don't). This means you can understate initialisation cost and overstate access/property-mutation cost. I have also seen indications of progressive optimisation: in cases where I filled a large array with identical object instances, as the number of instances increased the objects appeared to be incrementally optimised for memory up to a point, after which the remainder were the same. It is also possible that those optimisations significantly impact CPU performance. These things depend heavily not merely on the code you write but on what happens at runtime, such as the number of objects, the variance between objects, and so on.

Revisiting extending native prototypes after ECMAScript 5

Recently, given the changes to defining properties in ECMAScript 5, I have revisited the question of whether we can safely extend the native JavaScript prototypes. In truth, I have been extending prototypes like Array and Function all along, but I avoided doing so with Object, for the obvious reasons. In unit testing with Jasmine, by adding Object.prototype specs to the specs for my own personal framework, extending Object.prototype with non-enumerable functions has appeared to be safe. Data properties like a "type" property, however, with getters/setters that do any unusual processing, have had unintended consequences. There is still the possibility of conflicts with other libraries, though in my work that hardly ever comes up. Nevertheless, as long as the functions are non-enumerable, it looks like extending Object.prototype can be safe.
What do you think? Is it safe to extend Object.prototype now? Please discuss.
Extending objects native to JavaScript might have become a little safer, though many collision concerns still stand. Generally, unless you're extending an object to support standardized behavior from a more recent standard, it really would still be much safer to introduce a wrapper; it is much easier to do things the right way when you're the only one in control.
Speaking of objects native to the environment (DOM elements and nodes, AJAX stuff), the new JS standard still doesn't, and arguably can't, give you any guarantee about interaction with those beyond what's defined in their interface standards. Never forget that they're potentially accessible through many different scripting engines, and thus need not be tailored to the quirks of one specific language like JS. So the recommendation not to extend those still stands as well.
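As a sketch of the wrapper alternative recommended above (the names are illustrative):

function ListWrapper(items) {
    this.items = items;  // the native array stays untouched
}

ListWrapper.prototype.last = function () {
    return this.items[this.items.length - 1];
};

var list = new ListWrapper([1, 2, 3]);
list.last();  // 3, with no risk of colliding with Array.prototype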
The definitive, absolute answer is ...
"It depends." :)
Extending any built in JavaScript object can be perfectly safe or it can be a complete disaster. It depends on what you are doing and how you are doing it.
Use smart practices and common sense, and test the hell out of it.

Should I be using prototypes if I don't require inheritance or optimization

If my site doesn't require any sort of optimization, and my object structure is fairly flat so there is no need for inheritance, should I be using prototypes at all:
MyObject.prototype.<something>
I refer to this question Javascript when to use prototypes and various writings by John Resig who seems to focus on the performance values of prototype.
I find using the module pattern to define objects and their functions easier to read and manage, and, as michielvoo mentions, premature optimization tends to cause more harm than good.
Are there any other benefits or use cases for prototype?
It's a code organization pattern.
There are two good ways to share methods across objects
var oneWay = Object.create(bagOfMethods);
// some value of extend that is sensible
var otherWay = extend({}, bagOfMethods);
The only real advantage prototypes gives is that the link between oneWay and bagOfMethods is live.
Having a live link allows really powerful extensions and monkey patching.
An example is Object.prototype.methodNowLivesOnAllInstances = function () { };
This is a way of doing "asynchronous" programming, you can extend the "class" without worrying whether all instances get the new method.
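A minimal demonstration of that live link versus a copy (standing in for a trivial extend that copies own properties):

var bagOfMethods = { greet: function () { return 'hi'; } };
var oneWay = Object.create(bagOfMethods);      // live link to bagOfMethods
var otherWay = { greet: bagOfMethods.greet };  // snapshot copy of its methods

bagOfMethods.wave = function () { return 'o/'; };
typeof oneWay.wave;    // "function": the new method is visible immediately
typeof otherWay.wave;  // "undefined": the copy never sees it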
A note on the module pattern: it's bloated and not needed. Use a module builder like modul8 or browserify instead.
Well, if you refer to that question, the accepted answer for which starts out by calling it an optimization, then I'd say don't go for premature optimization. You may never reap any benefits (see: YAGNI) and you make your job more difficult, since you yourself (and your team?) prefer the module pattern.

Extending Object.prototype JavaScript

I am not asking if this is okay:
Object.prototype.method = function(){};
This is deemed evil by pretty much everyone, considering it messes up for(var i in obj).
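For anyone who hasn't seen the breakage first-hand, a minimal demonstration:

Object.prototype.method = function () {};
var obj = { a: 1 };
for (var i in obj) {
    console.log(i); // logs "a" and then "method", because assignment creates an enumerable property
}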
The Real Question
Ignoring
Incompetent browsers (browsers that don't support Object.defineProperty)
Potential for property collision or overriding
Assuming you have some incredibly useful method, is this considered wrong/unethical?
Object.defineProperty(Object.prototype, 'methodOnSteriods', {
    value: function () { /* Makes breakfast, solves world peace, takes out trash */ },
    writable: true,
    configurable: true,
    enumerable: false
});
If you believe the above is unethical, why would they even implement the feature in the first place?
UPDATE from 2021
Despite this being the accepted answer, 10 years of experience has taught me this isn't the best idea. Pretty much anything you can do to avoid polluting the global scope is a very very good thing.
Original answer below, for posterity, and because stack overflow will not let me delete an accepted answer.
Original answer from 2011
I think it's fine if it works in your target environment.
Also, I think prototype-extension paranoia is overblown. As long as you use hasOwnProperty() like a good developer, it's all fine. Worst case, you overload that property elsewhere and lose the method, but that's your own fault if you do that.
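For the record, the guard being referred to looks like this:

for (var key in obj) {
    if (obj.hasOwnProperty(key)) {
        // only own properties reach here; anything inherited from
        // Object.prototype (including your added method) is skipped
    }
}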
I'd say this is almost as evil as before. The biggest problem, still the same as before, is that Object.prototype is global. While your method might currently be solving world peace, it might have overwritten someone else's method (one that was guaranteeing galactic peace), or it may be overwritten in the future by some library you have no control over (thereby plunging the world into chaos again).
New versions of JavaScript have lots of features related to properties, such as defining a property to be enumerable or not, and having getters and setters. Object.defineProperty exists to give control over this.
From the Mozilla docs:
This method allows precise addition to or modification of a property on an object. Normal property addition through assignment creates properties which show up during property enumeration (for...in loop), whose values may be changed, and which may be deleted. This method allows these extra details to be changed from their defaults.
This new function is basically required to support the new features and you are supposed to use it on your own stuff. Being able to modify Object.prototype is just a side effect of it also being a "normal" object and is just as evil as before.
Well in "JavaScript: the good parts", there is a similar function, i think that is very usefull to improve javascript base objects (like String, Date, etc..), but just for that.
// Add a method conditionally. From "JavaScript: The Good Parts"
Function.prototype.method = function (name, func) {
    if (!this.prototype[name]) {
        this.prototype[name] = func;
    }
};
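Usage in the book's style, reconstructed from memory, so treat it as a sketch rather than a verbatim quote:

String.method('trim', function () {
    return this.replace(/^\s+|\s+$/g, '');
});

'  padded  '.trim(); // "padded"

Because String is itself a function, method() hangs the new function off String.prototype, and the if guard means it won't clobber a trim that already exists.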
.hasOwnProperty() will exclude inherited properties from iteration, which I personally find is often more annoying than helpful. It largely defeats the usefulness of Object.create(), which is ironic, since the same guy who convinced everyone to do .hasOwnProperty() also promoted Object.create().
Object.prototype should not be extended, for the reasons listed here. If you really do want to extend it, then make the extensions non-iterable.
I realize this flies in the face of all of the published best-practices, but we really should stop “mandating” .hasOwnProperty() on object key iterations and embrace the usefulness of direct object-to-object inheritance.
The short answer is Yes, you should do it.
Before doing it, there are several precautions you need to take:
1. Use hasOwnProperty when iterating an object. (This isn't really a precaution; when iterating an object, I'm already using hasOwnProperty anyway.)
2. Check whether the name already exists on Object.prototype; this is a very safe way to avoid name collisions.
3. Take advantage of Object.defineProperty() to add an extra safeguard layer.
As you can see, it's not very complicated.
Now come the advantages, once you have taken care of the risks/disadvantages:
1. Method chaining: this just makes code more readable and terse, and makes coding more enjoyable. In turn it makes you happier, and your life easier.
2. It solves browser compatibility issues: you are writing a polyfill anyway.
P.S.:
Don't do it when working with a large team for sure.

What is Javascript missing?

Javascript is an incredible language and libraries like jQuery make it almost too easy to use.
What should the original designers of Javascript have included in the language, or what should we be pressuring them into adding to future versions?
Things I'd like to see:-
Some kind of compiled version of the language, so we programmers can catch more of our errors earlier, as well as providing a faster solution for browsers to consume.
optional strict types (e.g., being able to declare a var as a float and keep it that way).
I am no expert on Javascript, so maybe these already exist, but what else should be there? Are there any killer features of other programming languages that you would love to see?
Read JavaScript: The Good Parts by Douglas Crockford, the author of JSLint. It's really impressive, and it covers the bad parts too.
One thing I've always longed for and ached for is some support for hashing. Specifically, let me track metadata about an object without needing to add an expando property on that object.
Java provides Object.hashCode(), which by default uses the underlying memory address; Python provides id(obj) to get the memory address and hash(obj) to be customizable; etc. JavaScript provides neither.
For example, I'm writing a Javascript library that tries to unobtrusively and gracefully enhance some objects you give me (e.g. your <li> elements, or even something unrelated to the DOM). Let's say I need to process each object exactly once. So after I've processed each object, I need a way to "mark it" as seen.
Ideally, I could make my own hashtable or set (either way, implemented as a dictionary) to keep track:
var processed = {};

function process(obj) {
    var key = obj.getHashCode();
    if (processed[key]) {
        return; // already seen
    }
    // process the object...
    processed[key] = true;
}
But since that's not an option, I have to resort to adding a property onto each object:
var SEEN_PROP = "__seen__";

function process(obj) {
    if (obj[SEEN_PROP]) { // or simply obj.__seen__
        return; // already seen
    }
    // process the object...
    obj[SEEN_PROP] = true; // or obj.__seen__ = true
}
But these objects aren't mine, so this makes my script obtrusive. The technique is effectively a hack to work around the fact that I can't get a reliable hash key for any arbitrary object.
Another workaround is to create wrapper objects for everything, but often you need a way to go from the original object to the wrapper object, which requires an expando property on the original object anyway. Plus, that creates a circular reference which causes memory leaks in IE if the original object is a DOM element, so this isn't a safe cross-browser technique.
For developers of Javascript libraries, this is a recurring issue.
What should the original designers of Javascript have included in the language, or what should we be pressuring them into adding to future versions?
They should have gotten together and decided jointly what to implement, rather than competing against each other with slightly different implementations of the language (naming no names), to prevent the immense headache that has ensued for every developer over the past 15 years.
The ability to use arrays/objects as keys without string coercion might've been nice.
Javascript is missing a name that differentiates it from a language it is nothing like.
There are a few little things it could do better.
Choice of + for string concatenation was a mistake. An & would have been better.
It can be frustrating that for( x in list ) iterates over indices, as it makes it difficult to use a literal array. Newer versions have a solution.
Proper scoping would be nice. v1.7 is adding this, but it looks clunky.
The way to do 'private' and 'protected' variables in an object is a little bit obscure and hard to remember as it takes advantage of closures and how they affect scoping. Some syntactic sugar to hide the mechanics of this would be fabulous.
To be honest, many of the problems I routinely trip over are actually DOM quirks, not JavaScript per se. The other big problem, of course, is that recent versions of JavaScript have interesting and useful things, like generators. Unfortunately, most browsers are stuck at 1.5. Apparently only Firefox is forging ahead.
File I/O is missing... though some would say it doesn't really need it.
