JavaScript: Getting Objects to Fall Back to One Another

Here's an ugly bit of JavaScript for which it would be nice to find a workaround.
JavaScript has no classes, and that is a good thing. But it implements fallback between objects in a rather ugly way. The foundational construct should be one object that, when a property lookup on it fails, falls back to another object.
So if we want a to fall back to b, we would want to do something like:
a = {sun:1};
b = {dock:2};
a.__fallback__ = b;
then
a.dock == 2;
But, Javascript instead provides a new operator and prototypes. So we do the far less elegant:
function A(sun) {
this.sun = sun;
};
A.prototype.dock = 2;
a = new A(1);
a.dock == 2;
But aside from elegance, this is also strictly less powerful, because it means that anything created with A gets the same fallback object.
What I would like to do is liberate Javascript from this artificial limitation and have the ability to give any individual object any other individual object as its fallback. That way I could keep the current behavior when it makes sense, but use object-level inheritance when that makes sense.
My initial approach is to create a dummy constructor function:
function setFallback(from_obj, to_obj) {
from_obj.constructor = function () {};
from_obj.constructor.prototype = to_obj;
}
a = {sun:1};
b = {dock:2};
setFallback(a, b);
But unfortunately:
a.dock == undefined;
Any ideas why this doesn't work, or any solutions for an implementation of setFallback?
(I'm running on V8, via node.js, in case this is platform dependent)
Edit:
I've posted a partial solution to this below, that works in the case of V8, but isn't general. I'd still appreciate a more general solution.

You could just use Object.create. It's part of ES5 so it's already available natively in some browsers. I believe it does exactly what you want.
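For instance, here is a minimal sketch of the asker's a/b example using Object.create (assuming an ES5 environment or a polyfill):
var b = { dock: 2 };       // the fallback object
var a = Object.create(b);  // b becomes a's prototype
a.sun = 1;
a.sun;  // 1, an own property
a.dock; // 2, found on b via the prototype chain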

Okay, some more research and cross-platform checking and there's some more information (though not a general solution).
Some implementations have basically what I did for my __fallback__. It is called __proto__ and is about perfect:
a = {sun:1};
b = {dock:2};
a.__proto__ = b;
a.dock == 2;
It seems that what happens when a new object is constructed is roughly this:
a = new Constructor(...args...);
produces behavior roughly equivalent to:
a = {};
a.constructor = Constructor;          // in practice this comes via the prototype
a.__proto__ = Constructor.prototype;
Constructor.call(a, ...args...);
So it is no wonder that coming along later and adjusting an object's constructor or constructor.prototype has no effect, because __proto__ has already been set.
Now for my V8 application I can just use __proto__, but I understand that it isn't exposed in IE's engine (I don't run Windows, so I can't tell). So it is not a general solution to the problem.
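For what it's worth, the portable way to get per-object fallback is to set the prototype at creation time. Here is a sketch of a helper along those lines (withFallback is a made-up name; it assumes you can build the object yourself rather than patch it after the fact):
function withFallback(to_obj, props) {
  var obj = Object.create(to_obj); // to_obj becomes the prototype (ES5)
  for (var k in props) {
    if (props.hasOwnProperty(k)) obj[k] = props[k];
  }
  return obj;
}

var b = { dock: 2 };
var a = withFallback(b, { sun: 1 });
a.dock == 2; // true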

Related

javascript prototype for object

How does the prototype work? Why can't "xc" be accessed from the e object?
Please look at the code below and see the comments; I tested it in Chrome.
var x={a:"xa",b:"xb",c:"xc"};
var e={a:"ea",b:"eb"};
console.log(Object.prototype); // this is {} why? i am expecting it to be null
console.log(e.prototype);
e.prototype=x;
console.log(e.prototype);
console.log(x.c);
console.log(e.c);//this is undefined , why? i am expecting it to be "xc"
console.log(e.a);
console.log(e.b);
console.log(e.prototype.a);
console.log(e.prototype.b);
I first thought this would be useful for CSS merging; later I decided that working out the dependencies and rewriting the CSS is more reasonable. Still, the knowledge is valuable. Thanks very much.
var css = {
'classSelectorExpressionIDOnly1': {
css_Ruls_name1: xxxx,
css_Rulss_name2: xxxx
},
'classSelectorExpressionIDOnlyX': {
css_Ruls_name1: xxxx,
css_Rulss_name9: xxxx
},
'classSelectorExpressionIDOnly2': {
'()inherit': ["classSelectorExpressionIDOnly1", "classSelectorExpressionIDOnlyX"],
css_Ruls_name3: xxxx,
css_Rulss_name5: xxxx
}
}
var mergeResult = Object.create(css.classSelectorExpressionIDOnly2);
for(var entry in mergeResult){
mergeResult[entry]= mergeResult[entry];
}
mergeResult.__proto__=css.classSelectorExpressionIDOnly1;
for(var entry in mergeResult){
mergeResult[entry]= mergeResult[entry];
}
mergeResult.__proto__=css.classSelectorExpressionIDOnlyX;
for(var entry in mergeResult){
mergeResult[entry]= mergeResult[entry];
}
------dependency re-write--------
.classSelectorExpressionIDOnly1,.classSelectorExpressionIDOnly2{
css_Ruls_name1:xxxx,
css_Rulss_name2:xxxx
}
.classSelectorExpressionIDOnlyX,.classSelectorExpressionIDOnly2{
css_Ruls_name1:xxxx,
css_Rulss_name9:xxxx
}
.classSelectorExpressionIDOnly2{
css_Ruls_name3:xxxx,
css_Rulss_name5:xxxx
}
That's not what the .prototype property is for. Despite the name, the .prototype property of functions isn't actually the prototype of the objects you're used to working with. This is one of the hardest things to understand about JavaScript, so it's not just you.
The key to understanding the prototype system in JavaScript is that the new operator creates two objects, not one. I'm going to talk about this in terms of four variables:
[[myPrototype]]
The prototype of an object. Every object theoretically has one (though for some objects, it might be undefined).
[[Constructor]]
The function that is being called with the new operator
[[newObject]]
The object that will eventually be returned
[[newPrototype]]
The object that will become [[newObject]].[[myPrototype]]
Note that these aren't valid JavaScript names (in fact, they're not valid names in most programming languages). All of this happens behind the scenes, and most implementations don't use these names either. I'm doing this to make clear that you can't normally see these objects.
When you use the new operator, JavaScript does roughly the following steps.
Create an object [[newPrototype]].
Set [[newPrototype]].[[myPrototype]] to [[Constructor]].prototype
Create an object [[newObject]].
Set [[newObject]].[[myPrototype]] to [[newPrototype]]
Set [[newObject]].[[myPrototype]].constructor to [[Constructor]]
Call [[Constructor]], with [[newObject]] as "this".
Note that [[newObject]].[[myPrototype]] isn't a perfect match for either [[newObject]] or [[Constructor]].prototype. That's why we need a third object between them: it carries the information you want to inherit (through [[newPrototype]].[[myPrototype]]), but it also carries information specific to the object you're creating (in [[newObject]].constructor).
And so we get to what the .prototype property is for. It's not the function's [[myPrototype]], and it's not the [[myPrototype]] for the objects you create with new. It's actually two levels back in the prototype chain, not one.
I hope this explanation helps you understand what the .prototype property is for. This isn't simple stuff, and not every explanation clicks with everybody. That's part of why we have so many explanations here.
When you first create an object, you can set its prototype directly with Object.create(). This function works with IE9 and higher (plus all other modern browsers), and it can be polyfilled if you need to work with older browsers. To see that prototype later, you use Object.getPrototypeOf(), which also has decent browser support (though IE only supports it in version 9 and higher). Using only these two functions, you might create your objects like this:
var x = {a:"xa",b:"xb",c:"xc"};
var e = Object.create(x);
x.a = "ea";
x.b = "eb";
console.log(Object.getPrototypeOf(Object));
console.log(Object.getPrototypeOf(e));
console.log(x.c);
console.log(e.c);//this is undefined , why? i am expecting it to be "xc"
console.log(e.a);
console.log(e.b);
console.log(Object.getPrototypeOf(e).a);
console.log(Object.getPrototypeOf(e).b);
Once an object has been created, there isn't a standard way to reset its prototype yet. ECMAScript 6 defines one (the Object.setPrototypeOf() function), but so far only Chrome and Firefox support it: IE and Safari do not. Still, if that's OK, you could do things like this:
var x = {a:"xa",b:"xb",c:"xc"};
var e = {a:"ea",b:"eb"};
console.log(Object.getPrototypeOf(x)); // Object.prototype
console.log(Object.getPrototypeOf(e));
Object.setPrototypeOf(e, x);
console.log(Object.getPrototypeOf(e));
console.log(x.c);
console.log(e.c);
console.log(e.a);
console.log(e.b);
console.log(Object.getPrototypeOf(e).a);
console.log(Object.getPrototypeOf(e).b);
There is a non-standard way to reset an existing object's prototype, and it even enjoys good browser support nowadays. To do this, you set the .__proto__ property on any standard object. You could use it like this:
var x = {a:"xa",b:"xb",c:"xc"};
var e = {a:"ea",b:"eb"};
console.log(x.__proto__); // Object.prototype
console.log(e.__proto__);
e.__proto__ = x;
console.log(e.__proto__);
console.log(x.c);
console.log(e.c);
console.log(e.a);
console.log(e.b);
console.log(e.__proto__.a);
console.log(e.__proto__.b);
Now, onto your last question: why is Object.prototype equal to {}, rather than undefined? Because the Object constructor function has a .prototype property, which becomes the default prototype of all Objects created through it. The specs call this object [[ObjectPrototype]], and it's where things like the .hasOwnProperty() function live.
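A quick way to see this for yourself in a console (just an illustration):
var plain = {};
console.log(Object.getPrototypeOf(plain) === Object.prototype); // true
console.log(typeof Object.prototype.hasOwnProperty);            // "function"
console.log(Object.getPrototypeOf(Object.prototype));           // null, the end of the chain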
Have a look here:
https://stackoverflow.com/a/9959753/2768053
After reading that, you will turn your code into this:
var x={a:"xa",b:"xb",c:"xc"};
var e={a:"ea",b:"eb"};
console.log(Object.prototype.__proto__);
console.log(e.__proto__);
e.__proto__=x;
console.log(e.__proto__);
console.log(x.c);
console.log(e.c);
console.log(e.a);
console.log(e.b);
console.log(e.__proto__.a);
console.log(e.__proto__.b);
and you will get the results you expect :)

Substitute __proto__ of DOM Element Object

So basically I would like to extend a certain type of DOM elements by the following code:
var element = document.createElement("div");
var proto = Object.create(HTMLDivElement.prototype);
proto.newMethod = function() {console.log("Good.");};
proto.newConst = Math.PI / 2;
element.__proto__ = proto;
This code works in Chrome, Firefox and IE11 (IE10 not tested, but it will probably work), but I'm not sure whether it is proper JavaScript and whether it will continue to work in the future, because this code is essentially hacking DOM elements, which are partially outside JavaScript. Could someone explain how it works? I don't fully understand it, and I need to know whether this method is robust. Thanks.
OK, to make things clearer, I know I should use Object.create() to specify prototype, but the real problem is that element objects are special and it's impossible to do that. The above code is more like a workaround, and this is why I'm asking this question.
Google's Polymer mutates __proto__ of DOM objects (code, line 259):
function implement(element, definition) {
if (Object.__proto__) {
element.__proto__ = definition.prototype;
} else {
customMixin(element, definition.prototype, definition.native);
element.__proto__ = definition.prototype;
}
}
So, should I trust this method because Google uses it?
From Mozilla Developer Network:
The __proto__ property is deprecated and should not be used. Object.getPrototypeOf should be used instead of the __proto__ getter to determine the [[Prototype]] of an object. Mutating the [[Prototype]] of an object, no matter how this is accomplished, is strongly discouraged, because it is very slow and unavoidably slows down subsequent execution in modern JavaScript implementations. However, Object.setPrototypeOf is provided in ES6 as a very-slightly-preferred alternative to the __proto__ setter.
In general it is bad practice to modify native prototypes like Array, String and even HTMLElement (details are described here), but if you control everything in the current context you can, at your own risk, add functionality to those prototypes to achieve what you want. If you can guarantee that your code does not conflict with other code and the performance footprint is negligible, then you are free to choose your path.
Your approach:
SomeHTMLElementInstance.__proto__ = newPrototype;
// or a general case like:
SomeHTMLElementPrototypeConstructor.prototype.newMethod = function () {
// Do something here
}
Recommended approach:
var SomeElementWrapper = function (someParams) {
this.container = document.createElement('SomeHTMLElement');
}
SomeElementWrapper.prototype.someMethod = function () {
// Do something with this.container without modifying its prototype
}
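A brief usage sketch of that wrapper (the names are placeholders from the snippet above, not a real API):
var widget = new SomeElementWrapper();
widget.someMethod();                         // custom behaviour lives on the wrapper
document.body.appendChild(widget.container); // the untouched native element goes into the DOM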

What makes my.class.js so fast? [closed]

I've been looking at the source code of my.class.js to find out what makes it so fast on Firefox. Here's the snippet of code used to create a class:
my.Class = function () {
var len = arguments.length;
var body = arguments[len - 1];
var SuperClass = len > 1 ? arguments[0] : null;
var hasImplementClasses = len > 2;
var Class, SuperClassEmpty;
if (body.constructor === Object) {
Class = function () {};
} else {
Class = body.constructor;
delete body.constructor;
}
if (SuperClass) {
SuperClassEmpty = function() {};
SuperClassEmpty.prototype = SuperClass.prototype;
Class.prototype = new SuperClassEmpty();
Class.prototype.constructor = Class;
Class.Super = SuperClass;
extend(Class, SuperClass, false);
}
if (hasImplementClasses)
for (var i = 1; i < len - 1; i++)
extend(Class.prototype, arguments[i].prototype, false);
extendClass(Class, body);
return Class;
};
The extend function is simply used to copy the properties of the second object onto the first (optionally overriding existing properties):
var extend = function (obj, extension, override) {
var prop;
if (override === false) {
for (prop in extension)
if (!(prop in obj))
obj[prop] = extension[prop];
} else {
for (prop in extension)
obj[prop] = extension[prop];
if (extension.toString !== Object.prototype.toString)
obj.toString = extension.toString;
}
};
The extendClass function copies all the static properties onto the class, as well as all the public properties onto the prototype of the class:
var extendClass = my.extendClass = function (Class, extension, override) {
if (extension.STATIC) {
extend(Class, extension.STATIC, override);
delete extension.STATIC;
}
extend(Class.prototype, extension, override);
};
This is all pretty straightforward. When you create a class, it simply returns the constructor function you provide it.
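For context, usage presumably looks something like this (a sketch inferred from the posted code, not taken from the library's documentation; the names are made up):
var Person = my.Class({
  STATIC: { SPECIES: 'human' },  // copied onto Person itself by extendClass
  constructor: function (name) { this.name = name; },
  greet: function () { return 'Hi, ' + this.name; }
});

var Employee = my.Class(Person, {
  constructor: function (name, title) {
    Employee.Super.call(this, name); // Super is set by my.Class when a parent is given
    this.title = title;
  },
  describe: function () { return this.greet() + ', ' + this.title; }
});

var e = new Employee('Ada', 'engineer');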
What beats my understanding however is how does creating an instance of this constructor execute faster than creating an instance of the same constructor written in Vapor.js.
This is what I'm trying to understand:
How do constructors of libraries like my.class.js create so many instances so quickly on Firefox? The constructors of the libraries are all very similar. Shouldn't the execution time also be similar?
Why does the way the class is created affect the execution speed of instantiation? Aren't definition and instantiation separate processes?
Where is my.class.js gaining this speed boost from? I don't see any part of the constructor code which should make it execute any faster. In fact traversing a long prototype chain like MyFrenchGuy.Super.prototype.setAddress.call should slow it down significantly.
Is the constructor function being JIT compiled? If so then why aren't the constructor functions of other libraries also being JIT compiled?
I don't mean to offend anyone, but this sort of thing really isn't worth the attention, IMHO. Almost any speed-difference between browsers is down to the JS engine. The V8 engine is very good at memory management, for example; especially when you compare it to IE's JScript engines of old.
Consider the following:
var closure = (function()
{
var closureVar = 'foo',
someVar = 'bar',
returnObject = {publicProp: 'foobar'};
returnObject.getClosureVar = function()
{
return closureVar;
};
return returnObject;
}());
Last time I checked, Chrome actually GC'ed someVar, because it wasn't being referenced by the return value of the IIFE (referenced by closure), whereas both FF and Opera kept the entire function scope in memory.
In this snippet, it doesn't really matter, but for libs that are written using the module-pattern (AFAIK, that's pretty much all of them) that consist of thousands of lines of code, it can make a difference.
Anyway, modern JS-engines are more than just "dumb" parse-and-execute things. As you said: there's JIT compilation going on, but there's also a lot of trickery involved to optimize your code as much as possible. It could very well be that the snippet you posted is written in a way that FF's engine just loves.
It's also quite important to remember that there is some sort of speed-battle going on between Chrome and FF about who has the fastest engine. Last time I checked Mozilla's Rhino engine was said to outperform Google's V8, if that still holds true today, I can't say... Since then, both Google and Mozilla have been working on their engines...
Bottom line: speed differences between various browsers exist - nobody can deny that, but a single point of difference is insignificant: you'll never write a script that does just one thing over and over again. It's the overall performance that matters.
You have to keep in mind that JS is a tricky bugger to benchmark, too: just open your console, write some recursive function, and run it 100 times in FF and Chrome. Compare the time it takes for each recursion, and the overall run. Then wait a couple of hours and try again... sometimes FF might come out on top, whereas other times Chrome might be faster. I've tried it with this function:
var bench = (function()
{
var mark = {start: [new Date()],
end: [undefined]},
i = 0,
rec = function(n)
{
return +(n === 1) || rec(n%2 ? n*3+1 : n/2);
//^^ Unmaintainable, but fun code ^^\\
};
while(i++ < 100)
{//new date at start, call recursive function, new date at end of recursion
mark.start[i] = new Date();
rec(1000);
mark.end[i] = new Date();
}
mark.end[0] = new Date();//after 100 rec calls, first element of start array vs first of end array
return mark;
}());
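Reading the result of that benchmark object is straightforward (just an illustration):
console.log(bench.end[0] - bench.start[0]); // total wall-clock time for all 100 runs, in ms
console.log(bench.end[1] - bench.start[1]); // time for the first run alone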
But now, to get back to your initial question(s):
First off: the snippet you provided doesn't quite compare to, say, jQuery's $.extend method: there's no real cloning going on, let alone deep cloning. It doesn't check for circular references at all, which most other libs I've looked into do. Checking for circular references does slow the entire process down, but it can come in handy from time to time (example 1 below). Part of the performance difference can be explained by the fact that this code simply does less, so it needs less time.
Secondly: Declaring a constructor (classes don't exist in JS) and creating an instance are, indeed, two different things (though declaring a constructor is in itself creating an instance of an object, a Function instance to be exact). The way you write your constructor can make a huge difference, as shown in example 2 below. Again, this is a generalization, and might not apply to certain use cases on certain engines: V8, for example, tends to create a single function object for all instances, even if that function is part of the constructor - or so I'm told.
Thirdly: Traversing a long prototype chain, as you mention, is not as unusual as you might think; far from it, actually. You're constantly traversing chains of two or three prototypes, as shown in example 3. This shouldn't slow you down, as it's just inherent to the way JS resolves function calls and expressions.
Lastly: It's probably being JIT-compiled, but saying that other libs aren't JIT-compiled just doesn't stack up. They might be; then again, they might not. As I said before: different engines perform better at some tasks than others... it might be the case that FF JIT-compiles this code, and other engines don't.
The main reasons I can see why other libs wouldn't be JIT-compiled are: checking for circular references, deep-cloning capabilities, and dependencies (i.e. the extend method is used all over the place, for various reasons).
example 1:
var shallowCloneCircular = function(obj)
{//clone object, check for circular references
function F(){};
var clone, prop;
F.prototype = obj;
clone = new F();
for (prop in obj)
{//only copy properties, inherent to instance, rely on prototype-chain for all others
if (obj.hasOwnProperty(prop))
{//the ternary deals with circular references
clone[prop] = obj[prop] === obj ? clone : obj[prop];//if property is reference to self, make clone reference clone, not the original object!
}
}
return clone;
};
This function clones an object's first level, all objects that are being referenced by a property of the original object, will still be shared. A simple fix would be to simply call the function above recursively, but then you'll have to deal with the nasty business of circular references at all levels:
var circulars = {foo: bar};
circulars.circ1 = circulars;//simple circular reference, we can deal with this
circulars.mess = {gotcha: circulars};//circulars.mess.gotcha ==> circular reference, too
circulars.messier = {messiest: circulars.mess};//oh dear, this is hell
Of course, this isn't the most common of situations, but if you want to write your code defensively, you have to acknowledge the fact that many people write mad code all the time...
Example 2:
function CleanConstructor()
{};
CleanConstructor.prototype.method1 = function()
{
//do stuff...
};
var foo = new CleanConstructor(),
bar = new CleanConstructor();
console.log(foo === bar);//false, we have two separate instances
console.log(foo.method1 === bar.method1);//true: the function-object, referenced by method1 has only been created once.
//as opposed to:
function MessyConstructor()
{
this.method1 = function()
{//do stuff
};
}
var foo = new MessyConstructor(),
bar = new MessyConstructor();
console.log(foo === bar);//false, as before
console.log(foo.method1 === bar.method1);//false! for each instance, a new function object is constructed, too: bad performance!
In theory, declaring the first constructor is slower than the messy way: the function object, referenced by method1 is created before a single instance has been created. The second example doesn't create a method1, except for when the constructor is called. But the downsides are huge: forget the new keyword in the first example, and all you get is a return value of undefined. The second constructor creates a global function object when you omit the new keyword, and of course creates new function objects for each call. You have a constructor (and a prototype) that is, in fact, idling... Which brings us to example 3
example 3:
var foo = [];//create an array - empty
console.log(foo[123]);//logs undefined.
Ok, so what happens behind the scenes: foo references an object, an instance of Array, which in turn inherits from the Object prototype (just try Object.getPrototypeOf(Array.prototype)). It stands to reason, therefore, that an Array instance works in pretty much the same way as any object, so:
foo[123] ===> JS checks the instance for property 123 (which is coerced to a string, BTW)
 ||      --> property not found on the instance, check its prototype (Array.prototype)
 ===========> Array.prototype[123] could not be found, check its prototype
 ||
 ==========> Object.prototype[123]: not found, check its prototype?
 ||
 =======> prototype is null, return undefined
In other words, a chain like you describe isn't too far-fetched or uncommon. It's how JS works, so expecting that to slow things down is like expecting your brain to fry because you're thinking: yes, you can get worn out by thinking too much, but just know when to take a break. Just like in the case of prototype chains: they're great, just know that they are a tad slower, yes...
I'm not entirely sure, but I do know that when programming, it is good practice to make the code as small as possible without sacrificing functionality. I like to call it minimalist code.
This can be a good reason to obfuscate code. Obfuscation shrinks the file by using smaller method and variable names, which makes the code harder to reverse-engineer and faster to download, and can even yield a small performance boost. Google's JavaScript code is intensely obfuscated, and that contributes to its speed.
So in JavaScript, bigger isn't always better. When I find a way I can shrink my code, I implement it immediately, because I know it will benefit performance, even if by the smallest amount.
For example, using the var keyword in a function where the variable isn't needed outside the function helps garbage collection, which provides a very small speed boost versus keeping the variable in memory.
With a library like this that produces "millions of operations per second" (Blaise's words), small performance boosts can add up to a noticeable/measurable difference.
So it is possible that my.class.js is "minimalist coded" or optimized in some manner. It could even be the var keywords.
I hope this helped somewhat. If it didn't help, then I wish you luck in getting a good answer.

Can you extend an object that has access to private properties with a function that can also access those private properties?

I am creating an object inside of an enclosure. Also in the enclosure are private properties that the object's functions can access - and this works as expected.
My issue: I want others to be able to extend my object with functions of their own (functions from a different context), but those functions will also need access to the same private properties - and I have not been able to find a way to make this work.
I've tried various configurations of .call, and also wrapping their function in a new function, amongst other things. I feel like I've gotten close to a solution, but have just fallen short.
Here's a bit of simplified example code that accurately reflects my situation:
//extension object
//fn2 can be any function, with any number of arguments, etc.
var obj1 = {};
obj1.fn2 = function (s1, s2){ console.log(priv); };
//actual object
var obj2 = (function (){
//private property
var priv = "hello world";
//return object
var obj3 = {};
//return object's native fn (works)
obj3.fn = function (s){ console.log(priv); };
//extension happens here - but is obviously not correct
obj3.fn2 = obj1.fn2;
//return object
return obj3;
})();
//try output
obj2.fn("goodbye world"); //works
obj2.fn2("goodbye world", "thx 4 teh phish"); //fails
Any insight would be appreciated. And I totally understand if what I want just isn't possible - but it sure seems like it should be :P
EDIT: Thank you all for the responses. I fully understand that the properties are more easily accessed as public, and that normally inherited objects won't have access to them otherwise. However, since the new function is being attached to the original object I have to believe there's a way to use the original context and not the context the new function was created in.
Now, I'm the first to say that eval is evil - and, in fact, I've never used it, or even considered using it, before. However, I'm trying everything I can think of to make this work - and I stumbled across this (seemingly) working solution:
obj3.fn2 = eval(obj1.fn2.toString());
So, if I check to make sure that obj1.fn2 is a typeof function, is there any way this could be harmful to my code? It doesn't execute the function, so I can't see how - but maybe I'm missing something?
Javascript doesn't have a "protected" analog. You either get super private or completely public. From here you can choose to:
Reconsider your class design, and have the subclasses depend only on the public interface of the parent class.
Add getter and setter functions to the public interface. Not necessarily the best thing though as you might just as well make the properties public (besides best practice issues and whatnot)
Just use public properties instead. This is the "natural" way to do OO inheritance in JavaScript and is usually not a problem if you use a convention like adding an underscore to the beginning of the name. As a bonus you can use the prototypal inheritance feature (it is nice knowing how to use this instead of only closure-based classes):
function Base(){
this._priv = "Hello world"
};
Base.prototype = {
fn: function(){
console.log(this._priv);
}
}
var obj2 = new Base();
obj2.fn = function(){ ... }
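Tying this back to the question: with option 3, a function defined elsewhere and attached to the instance can read the underscore-prefixed property through this. A small sketch of what that buys you:
obj2.fn2 = function (s1, s2) { console.log(this._priv, s1, s2); };
obj2.fn2("goodbye world", "thx 4 teh phish"); // "Hello world goodbye world thx 4 teh phish"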
I hate to answer my own question - seems like a bit of a faux pas - but c'est la vie. (because I woke up French today?)
So, while I found that the eval() solution I presented last night in the edit to my original question does seem to be a valid solution, and a proper use of eval for retaining the object's context within the new function, it is far from perfect.
Firstly, it works in FF, but both IE and Chrome seem to hate it (those were the next ones I tried, and I quit trying others after they both failed). Though I'm sure it could probably be made to work across browsers, it seems like a hassle.
Secondly, it does give quite a bit of power to the new function, and as I look at my code more I do like the idea of controlling exactly what these new functions being added to my object get access to.
Thirdly, eval() is typically pretty slow - and it turns out that .apply() (which is typically faster) just may work well enough.
This is because I realized at some point last night that no new functions on this object will need to set any of the private variables (at least, I'm fairly certain they won't) - and .apply() works fine to pass the values through for them to read.
I'm sure there's more to it than just those 3 things, but for now I think I'm going to go with more of a 'wrapper' solution - something like this:
var f = function (){
var fauxThis = {};
fauxThis.priv = priv;
obj1.fn2.apply(fauxThis, arguments);
};
obj3.fn2 = f;
//(To be placed where I had "obj3.fn2 = obj1.fn2;")
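One caveat worth spelling out (my reading of the snippet above, so treat it as an assumption): the wrapper only helps if the extension function reads the value from this rather than closing over priv, e.g.:
obj1.fn2 = function (s1, s2) { console.log(this.priv, s1, s2); };
// with the wrapper in place inside the enclosure:
obj2.fn2("goodbye world", "thx 4 teh phish"); // "hello world goodbye world thx 4 teh phish"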
I am certainly willing now to consider the use of eval() in very specific cases - and may even revisit this specific use of it before I make my final decision of which direction to take. (especially if I can think of a case where the private value would need to be set)
Thanks all for your input!
The quickest and easiest solution is to prefix any supposedly private properties with the underscore (_).
Personally I like to bottle my private properties into a single object which would be placed on the object, like so:
obj.publicProp = 20;
obj._.privateProp = true;
I wouldn't worry so much about it though; the underscore is basically a universal symbol for private, so those using the script will know that it's private and shouldn't be touched. Or, better yet, just leave it out of the public documentation ;)
There are other methods you can use that emulate "true" protected variables, but they're not the best, as they prevent garbage collection and can be clunky to use.

Is JavaScript's "new" keyword considered harmful?

In another question, a user pointed out that the new keyword was dangerous to use and proposed a solution to object creation that did not use new. I didn't believe that was true, mostly because I've used Prototype, Script.aculo.us and other excellent JavaScript libraries, and every one of them used the new keyword.
In spite of that, yesterday I was watching Douglas Crockford's talk at YUI theater and he said the exactly same thing, that he didn't use the new keyword anymore in his code (Crockford on JavaScript - Act III: Function the Ultimate - 50:23 minutes).
Is it 'bad' to use the new keyword? What are the advantages and disadvantages of using it?
Crockford has done a lot to popularize good JavaScript techniques. His opinionated stance on key elements of the language have sparked many useful discussions. That said, there are far too many people that take each proclamation of "bad" or "harmful" as gospel, refusing to look beyond one man's opinion. It can be a bit frustrating at times.
Use of the functionality provided by the new keyword has several advantages over building each object from scratch:
Prototype inheritance. While often looked at with a mix of suspicion and derision by those accustomed to class-based OO languages, JavaScript's native inheritance technique is a simple and surprisingly effective means of code re-use. And the new keyword is the canonical (and only available cross-platform) means of using it.
Performance. This is a side-effect of #1: if I want to add 10 methods to every object I create, I could just write a creation function that manually assigns each method to each new object... Or, I could assign them to the creation function's prototype and use new to stamp out new objects. Not only is this faster (no code needed for each and every method on the prototype), it avoids ballooning each object with separate properties for each method. On slower machines (or especially, slower JS interpreters) when many objects are being created this can mean a significant savings in time and memory.
And yes, new has one crucial disadvantage, ably described by other answers: if you forget to use it, your code will break without warning. Fortunately, that disadvantage is easily mitigated - simply add a bit of code to the function itself:
function foo()
{
// if user accidentally omits the new keyword, this will
// silently correct the problem...
if ( !(this instanceof foo) )
return new foo();
// constructor logic follows...
}
Now you can have the advantages of new without having to worry about problems caused by accidentally misuse.
John Resig goes into detail on this technique in his Simple "Class" Instantiation post, as well as including a means of building this behavior into your "classes" by default. Definitely worth a read... as is his upcoming book, Secrets of the JavaScript Ninja, which finds hidden gold in this and many other "harmful" features of the JavaScript language (the chapter on with is especially enlightening for those of us who initially dismissed this much-maligned feature as a gimmick).
A general-purpose sanity check
You could even add an assertion to the check if the thought of broken code silently working bothers you. Or, as some commented, use the check to introduce a runtime exception:
if ( !(this instanceof arguments.callee) )
throw new Error("Constructor called as a function");
Note that this snippet is able to avoid hard-coding the constructor function name, as unlike the previous example it has no need to actually instantiate the object - therefore, it can be copied into each target function without modification.
ES5 taketh away
As Sean McMillan, stephenbez and jrh noted, the use of arguments.callee is invalid in ES5's strict mode. So the above pattern will throw an error if you use it in that context.
ES6 and an entirely harmless new
ES6 introduces Classes to JavaScript - no, not in the weird Java-aping way that old-school Crockford did, but in spirit much more like the light-weight way he (and others) later adopted, taking the best parts of prototypal inheritance and baking common patterns into the language itself.
...and part of that includes a safe new:
class foo
{
constructor()
{
// constructor logic that will ONLY be hit
// if properly constructed via new
}
}
// bad invocation
foo(); // throws,
// Uncaught TypeError: class constructors must be invoked with 'new'
But what if you don't want to use the new sugar? What if you just want to update your perfectly fine old-style prototypal code with the sort of safety checks shown above such that they keep working in strict mode?
Well, as Nick Parsons notes, ES6 provides a handy check for that as well, in the form of new.target:
function foo()
{
if ( !(new.target) )
throw new Error("Constructor called as a function");
// constructor logic follows...
}
So whichever approach you choose, you can - with a bit of thought and good hygiene - use new without harm.
I have just read some parts of Crockford's book "JavaScript: The Good Parts". I get the feeling that he considers everything that ever has bitten him as harmful:
About switch fall through:
I never allow switch cases to fall through to the next case. I once found a bug in my code caused by an unintended fall through immediately after having made a vigorous speech about why fall through was sometimes useful. (page 97, ISBN 978-0-596-51774-8)
About ++ and --:
The ++ (increment) and -- (decrement) operators have been known to contribute to bad code by encouraging excessive trickiness. They are second only to faulty architecture in enabling viruses and other security menaces. (page 122)
About new:
If you forget to include the new prefix when calling a constructor function, then this will not be bound to the new object. Sadly, this will be bound to the global object, so instead of augmenting your new object, you will be clobbering global variables. That is really bad. There is no compile warning, and there is no runtime warning. (page 49)
There are more, but I hope you get the picture.
My answer to your question: No, it's not harmful. But if you forget to use it when you should, you could have some problems. If you are developing in a good environment, you will notice that.
In the 5th edition of ECMAScript there is support for strict mode. In strict mode, this is no longer bound to the global object, but to undefined.
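A tiny illustration of that strict-mode behaviour (a sketch; the exact error message varies by engine):
'use strict';
function Foo(name) {
  this.name = name; // TypeError here when called without new: `this` is undefined
}
var oops = Foo('bar'); // forgetting new now fails loudly instead of clobbering a global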
JavaScript being a dynamic language, there are a zillion ways to mess up where another language would stop you.
Avoiding a fundamental language feature such as new on the basis that you might mess up is a bit like removing your shiny new shoes before walking through a minefield just in case you might get your shoes muddy.
I use a convention where function names begin with a lowercase letter and 'functions' that are actually class definitions begin with an uppercase letter. The result is a really quite compelling visual clue that the 'syntax' is wrong:
var o = MyClass(); // This is clearly wrong.
On top of this, good naming habits help. After all, functions do things and therefore there should be a verb in their names, whereas classes represent objects and are nouns and adjectives without any verb.
var o = chair() // Executing chair is daft.
var o = createChair() // Makes sense.
It's interesting how Stack Overflow's syntax colouring has interpreted the code above.
I am newbie to JavaScript so maybe I am just not too experienced in providing a good view point to this. Yet I want to share my view on this "new" thing.
I have come from the C# world where using the keyword "new" is so natural that it is the factory design pattern that looks weird to me.
When I first coded in JavaScript, I didn't realize that there was a "new" keyword, and I wrote code like the one in the YUI pattern; it didn't take me long to run into disaster. I lost track of what a particular line was supposed to be doing when looking back at the code I'd written. More chaotic still, my mind couldn't really move across object instance boundaries when I was "dry-running" the code.
Then I found the "new" keyword which, to me, "separates" things: with the new keyword, it creates things. Without the new keyword, I know I won't confuse it with creating things, unless the function I am invoking gives me strong clues of that.
For instance, with var bar=foo(); I don’t have any clues as what bar could possibly be.... Is it a return value or is it a newly created object? But with var bar = new foo(); I know for sure bar is an object.
Another case for new is what I call Pooh Coding. Winnie-the-Pooh follows his tummy. I say go with the language you are using, not against it.
Chances are that the maintainers of the language will optimize the language for the idioms they try to encourage. If they put a new keyword into the language they probably think it makes sense to be clear when creating a new instance.
Code written following the language's intentions will increase in efficiency with each release. And code avoiding the key constructs of the language will suffer with time.
And this goes well beyond performance. I can't count the times I've heard (or said) "why the hell did they do that?" when finding strange looking code. It often turns out that at the time when the code was written there was some "good" reason for it. Following the Tao of the language is your best insurance for not having your code ridiculed some years from now.
I wrote a post on how to mitigate the problem of calling a constructor without the new keyword.
It's mostly didactic, but it shows how you can create constructors that work with or without new and doesn't require you to add boilerplate code to test this in every constructor.
Constructors without using "new"
Here's the gist of the technique:
/**
* Wraps the passed in constructor so it works with
* or without the new keyword
* @param {Function} realCtor The constructor function.
* Note that this is going to be wrapped
* and should not be used directly
*/
function ctor(realCtor) {
// This is going to be the actual constructor
return function wrapperCtor() {
var obj; // The object that will be created
if (this instanceof wrapperCtor) {
// Called with new
obj = this;
} else {
// Called without new. Create an empty object of the
// correct type without running that constructor
surrogateCtor.prototype = wrapperCtor.prototype;
obj = new surrogateCtor();
}
// Call the real constructor function
realCtor.apply(obj, arguments);
return obj;
}
function surrogateCtor() {}
}
Here's how to use it:
// Create our point constructor
Point = ctor(function(x, y) {
this.x = x;
this.y = y;
});
// This is good
var pt = new Point(20, 30);
// This is OK also
var pt2 = Point(20, 30);
The rationale behind not using the new keyword is simple:
By not using it at all, you avoid the pitfall that comes with accidentally omitting it. The construction pattern that YUI uses, is an example of how you can avoid the new keyword altogether:
var foo = function () {
var pub = { };
return pub;
}
var bar = foo();
Alternatively, you could do this:
function foo() { }
var bar = new foo();
But by doing so you run risk of someone forgetting to use the new keyword, and the this operator being all FUBAR. As far as I know, there isn't any advantage to doing this (other than you being used to it).
At The End Of The Day: It's about being defensive. Can you use the new statement? Yes. Does it make your code more dangerous? Yes.
If you have ever written C++, it's akin to setting pointers to NULL after you delete them.
I think "new" adds clarity to the code. And clarity is worth everything. It is good to know there are pitfalls, but avoiding them by avoiding clarity doesn't seem like the way for me.
Case 1: new isn't required and should be avoided
var str = new String('asd'); // type: object
var str = String('asd'); // type: string
var num = new Number(12); // type: object
var num = Number(12); // type: number
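A short illustration of why the Case 1 wrapper objects surprise people:
typeof new String('asd');    // "object"
new String('asd') === 'asd'; // false: it's a wrapper object, not a primitive
typeof String('asd');        // "string"
String('asd') === 'asd';     // true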
Case 2: new is required, otherwise you'll get an error
new Date().getFullYear(); // correct, returns the current year, i.e. 2010
Date().getFullYear(); // throws an error: without new, Date() returns a string, not a Date object
Here is the briefest summary I could make of the two strongest arguments for and against using the new operator:
Arguments against new
Functions designed to be instantiated as objects using the new operator can have disastrous effects if they are incorrectly invoked as normal functions. A function's code in such a case will be executed in the scope where the function is called, instead of in the scope of a local object as intended. This can cause global variables and properties to get overwritten with disastrous consequences.
Finally, writing function Func(), and then calling Func.prototype and adding stuff to it so that you can call new Func() to construct your object seems ugly to some programmers, who would rather use another style of object inheritance for architectural and stylistic reasons.
For more on this argument check out Douglas Crockford's great and concise book JavaScript: The Good Parts. In fact, check it out anyway.
Arguments in favor of new
Using the new operator along with prototypal assignment is fast.
That stuff about accidentally running a constructor function's code in the global namespace can easily be prevented if you always include a bit of code in your constructor functions to check whether they are being called correctly and, in the cases where they aren't, handle the call appropriately as desired.
See John Resig's post for a simple explanation of this technique, and for a generally deeper explanation of the inheritance model he advocates.
I agree with PEZ and some here.
It seems obvious to me that "new" is self descriptive object creation, where the YUI pattern Greg Dean describes is completely obscured.
The possibility that someone could write var bar = foo; or var bar = baz(); where baz isn't an object-creating method seems far more dangerous.
I think new is evil, not because if you forget to use it by mistake it might cause problems, but because it screws up the inheritance chain, making the language tougher to understand.
JavaScript is prototype-based object-oriented. Hence every object must be created from another object like so: var newObj=Object.create(oldObj). Here oldObj is called the prototype of newObj (hence "prototype-based"). This implies that if a property is not found in newObj then it will be searched in oldObj. newObj by default will thus be an empty object, but due to its prototype chain, it appears to have all the values of oldObj.
On the other hand, if you do var newObj=new oldObj(), the prototype of newObj is oldObj.prototype, which is unnecessarily difficult to understand.
The trick is to use
Object.create=function(proto){
var F = function(){};
F.prototype = proto;
var instance = new F();
return instance;
};
It is inside this function, and only here, that new should be used. After this, simply use the Object.create() method. That method resolves the prototype problem.
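Object creation then looks like this (a small sketch of the style this answer advocates):
var oldObj = { greet: function () { return 'hi'; } };
var newObj = Object.create(oldObj);                    // oldObj is newObj's prototype
console.log(newObj.greet());                           // "hi", found via the prototype chain
console.log(Object.getPrototypeOf(newObj) === oldObj); // true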
In my not-so-humble opinion, "new" is a flawed concept in 2021 JavaScript. It adds words where none are needed. It makes the return value of a function/constructor implicit and forces the use of this in the function/constructor. Adding noise to code is never a good thing.
// With new
function Point(x, y) {
this.x = x
this.y = y
}
let point = new Point(0, 0)
Vs.
// Without new
function Point(x, y) {
return { x, y }
}
let point = Point(0, 0)
