The nature of JavaScript allows its native objects to be completely rewritten. I want to know if there is any real danger in doing so!
Here are some examples of native JavaScript objects:
Object
Function
Number
String
Boolean
Math
RegExp
Array
Let's assume that I want to model these to follow a pattern similar to what you might find in Java (and some other OOP languages), so that Object defines a set of basic functions and every other object inherits from it (this would have to be wired up explicitly by the user, unlike Java, where everything naturally derives from Object).
Example:
Object = null;
function Object() {
    Object.prototype.equals = function(other) {
        return this === other;
    }
    Object.prototype.toString = function() {
        return "Object";
    }
    Object.equals = function(objA, objB) {
        return objA === objB;
    }
}
Boolean = null;
function Boolean() {
}
extend(Boolean, Object); // Assume extend is an inheritance mechanism
Foo = null;
function Foo() {
    Foo.prototype.bar = function() {
        return "Foo.bar";
    }
}
extend(Foo, Object);
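For reference, extend could be classic prototype chaining, something like this (just one possible implementation; the exact mechanism isn't the point of the question):

function extend(child, parent) {
    // chain the child's prototype onto the parent's without calling the parent constructor
    function Surrogate() {}
    Surrogate.prototype = parent.prototype;
    child.prototype = new Surrogate();
    child.prototype.constructor = child;
}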
In this scenario, Object and Boolean now have new implementations. In this respect, what is likely to happen? Am I likely to break things further down the line?
Edit:
I read somewhere that frameworks such as MooTools and Prototype take a similar approach to this; is this correct?
Monkey patching builtin classes like that is a controversial topic. I personally don't like doing it, for two reasons:
Builtin classes are effectively global. This means that if two different modules try to add methods with the same name to the same class, they will conflict, leading to subtle bugs. Even more subtly, if a future version of a browser decides to implement a method with the same name, you are also in trouble.
Adding things to the prototypes of common classes can break code that uses for-in loops without a hasOwnProperty check (people new to JS often do that to objects and arrays, since for-in kind of looks like a foreach loop). If you aren't 100% sure that the code you use is using for-in loops safely then monkeypatching Object.prototype could lead to problems.
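For example, a contrived illustration:

Object.prototype.equals = function(other) { return this === other; };

var data = { a: 1, b: 2 };
for (var key in data) {
    console.log(key);   // logs "a", "b"... and now "equals" too, unless you check hasOwnProperty
}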
That said, there is one situation where I find monkeypatching builtins acceptable, and that is back-porting features from newer browsers to older ones (for example, the forEach method for arrays). In this case you avoid conflicts with future browser versions and aren't likely to catch anyone by surprise. But even then, I would still recommend using a shim from a third party instead of coding it on your own, since there are often many tricky corner cases that are hard to get right.
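For illustration, such a shim is typically guarded by a feature test, roughly like this (a real shim, such as the one on MDN, handles more edge cases):

if (!Array.prototype.forEach) {
    Array.prototype.forEach = function(callback, thisArg) {
        for (var i = 0; i < this.length; i++) {
            if (i in this) {                              // skip holes in sparse arrays
                callback.call(thisArg, this[i], i, this);
            }
        }
    };
}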
There's some level of preference here, but my personal take is that this sort of thing has the potential to become a giant intractable mess.
For example, you start with two projects, A and B, that each decide to implement all sorts of awesome useful fluent methods on String.
Project A has decided that String needs an isEmpty function that returns true if a string is zero-length or is only whitespace.
Project B has decided that String needs an isEmpty function that returns true if a string is zero-length, and an isEmptyOrWhitespace function that returns true if a string is zero-length or is only whitespace.
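In code, the collision might look like this (the method bodies are hypothetical):

// Project A
String.prototype.isEmpty = function() {
    return /^\s*$/.test(this);    // zero-length or only whitespace
};

// Project B
String.prototype.isEmpty = function() {
    return this.length === 0;     // strictly zero-length
};

// Whichever file loads last wins, and "  ".isEmpty() silently changes meaning.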
Now you have a project that wants to use some code from Project A and some code from Project B. Both of them make extensive use of their custom isEmpty functions. Do you have any chance of successfully joining the two? Probably not. You are in a cluster arrangement, so to speak.
Note that this is all very different from extension methods in C#, where you at least have to import the containing static class's namespace to get the extension method, there's no runtime conflict, and you could reasonably consume code from A and B in the same project as long as you didn't import their extension namespaces (hoping that they had the foresight to put their extension classes in separate namespaces for exactly this reason).
The worst case in JS that I know of along these lines is undefined. You can define it.
You're allowed to write things like undefined = 'blah'; at which point you can no longer rely on if (x === undefined). That could easily break something elsewhere in your code (or, of course, in a third-party lib you may be using).
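In other words, something like this (modern engines make the global undefined read-only, but it can still be shadowed by a local variable):

undefined = 'blah';      // old engines really did reassign it; modern ones silently ignore this
var x;
x === undefined;         // no longer a reliable check once undefined has been redefined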
That's completely bonkers, but it definitely shows the dangers of arbitrarily overwriting built-in objects.
See also: http://wtfjs.com/2010/02/15/undefined-is-mutable
For a slightly more sane example, take the Sahi browser testing tool. This tool allows you to write automated scripts for the browser to test your site (similar to Selenium). One problem with doing that is that if your site uses alert() or confirm(), the script would stop running while it waits for user input. Sahi gets around this by overwriting these functions with its own stubs.
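The idea, roughly, is this (a simplified sketch, not Sahi's actual implementation):

var alertMessages = [];

window.alert = function(message) {
    alertMessages.push(message);   // record the call instead of blocking on a dialog
};

window.confirm = function(message) {
    return true;                   // pretend the user clicked OK so the script keeps running
};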
I avoid overriding the default behavior of the built-in objects. It has bitten me a few times, while other times I was fine. A library you can look at for an example is Sugar.js. It's a great library that some folks love, but I generally avoid it simply because it extends the behavior of existing JavaScript objects, much like what you are doing.
I think however that you will find that this is purely opinion and style.
Related
How can I redefine the == operator in V8 for my own classes? For example:
var v = Foo.BAR;
var other = getBar(); // returns a new instance of the same as Foo.BAR
assert(v == other); // I want true
The functions are defined in C++ with V8, not directly in JS. I know it's possible as it has been done for the String class.
V8 developer here.
I know it's possible as it has been done for the String class.
Of course a JavaScript engine can and does define what all the operators do -- that is its job. So I wouldn't say that the == operator has been redefined for strings; it has merely been defined.
If you're willing to modify V8, then you can change the behavior of the == operator. But that's going to be a lot of work, because there isn't just one place where it's defined: you'll have to touch the C++ runtime (start by looking at v8::internal::Object::Equals), the Ignition interpreter (look for TestEquals in src/interpreter/interpreter-generator.cc), and the Turbofan compiler (grep for kJSEqual in src/compiler/ and adapt how it's handled in the various phases, most notably JSTypedLowering::ReduceJSEqual but there are probably other places you'll have to touch as well).
Be aware that this is a massive project; IMHO it is not advisable to go down this path. A particular difficulty will be to get the information you need (specifically, "is this object an instance of one of the classes in question?") to all the places where you'll need it; I don't have a good suggestion for how to accomplish that. Another challenge is that porting your changes to new V8 versions will be quite time-consuming maintenance work.
My recommendation would be to go for a .equals function, defined on precisely the classes that should have it. That's clean and simple, easily maintainable/adaptable, and unsurprising to any other JavaScript developer (including your own future self) reading your code.
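For example, a minimal sketch (class and field names are made up):

class Color {
    constructor(name) {
        this.name = name;
    }
    equals(other) {
        return other instanceof Color && other.name === this.name;
    }
}

const v = new Color("bar");
const other = new Color("bar");   // e.g. what a factory like getBar() might return
console.log(v == other);          // false: reference comparison
console.log(v.equals(other));     // true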
After dabbling with JavaScript for a while, I have become progressively convinced that OOP is not the right way to go, or at least not something to use extensively. Having two or three levels of inheritance is OK, but going full OOP as one would in Java just doesn't seem to fit.
The language supports compositing and delegation natively. I want to use just that. However, I am having trouble replicating certain benefits from OOP.
Namely:
How would I check if an object implements a certain behavior? I have thought of the following methods
Check if the object has a particular method. But this would mean standardizing method names, and if the project is big it can quickly become cumbersome and lead to the Java problem (object.hasMethod('emailRegexValidatorSimpleSuperLongNotConflictingMethodName')...). It would just move the problem of OOP around, not fix it. Furthermore, I could not find information on the performance of looking up whether methods exist.
Store each composited object in an array and check if the object contains the compositor. Something like: object.hasComposite(compositorClass)...But that's also not really elegant and is once again OOP, just not in the standard way.
Have each object carry an "implements" array property, and leave it to the object to say whether it implements a certain behavior, whether through composition or natively. Flexible and simple, but it requires remembering a number of conventions. It is my preferred method so far, but I am still looking.
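Roughly, what I have in mind for that last option (names are just placeholders):

function addBehavior(obj, name, methods) {
    obj.implements = obj.implements || [];
    obj.implements.push(name);            // the object records what it provides
    for (var key in methods) {
        if (methods.hasOwnProperty(key)) {
            obj[key] = methods[key];
        }
    }
}

function implementsBehavior(obj, name) {
    return !!obj.implements && obj.implements.indexOf(name) !== -1;
}

var input = {};
addBehavior(input, "emailValidator", { validate: function(v) { return /@/.test(v); } });
implementsBehavior(input, "emailValidator");   // true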
How would I initialize an object without repeating all the set-up for composited objects? For example, if I have a "textInput" class that uses a certain number of validators, which have to be initialized with variables, and an "emailInput" class that uses the exact same validators, it is cumbersome to repeat the code. And if the interface of the validators changes, the code has to change in every class that uses them. How would I go about setting that up easily? The API I am thinking of should be as simple as doing object.compositors('emailValidator','lengthValidator','...')
Is there any performance loss associated with having most of the functions that run in the app go through an apply()? Since I am going to be using delegation extensively, basic objects will most probably have almost no methods. All methods will be provided by the composited objects.
Any good resource? I have read countless posts about OOP vs delegation, and about the benefits of delegation, etc, but I can't find anything that would discuss "javascript delegation done right", in the scope of a large framework.
edit
Further explanations:
I don't have code yet; I have been working on a framework in pure OOP and I am getting stuck, in need of multiple inheritance. Thus, I decided to drop classes entirely. So I am now merely at the theoretical level and trying to make sense of this.
"Compositing" might be the wrong word; I am referring to the composite pattern, very useful for tree-like structures. It's true that it is rare to have tree structures on the front end (well, save for the DOM of course), but I am developing for node.js
What I mean by "switching from OOP" is that I am going to move away from defining classes, using the "new" operator, and so on; I intend to use anonymous objects and extend them with delegators. Example:
var a = {};
compositor.addDelegates(a,["validator", "accessManager", "databaseObject"]);
So a "class" would be a function with predefined delegators:
function getInputObject(type, validator) {
    var input = {};
    compositor.addDelegates(input, [compositor, renderable("input" + type), "ajaxed"]);
    if (validator) { input.addDelegate(validator); }
    return input;
}
Does that make sense?
1) How would I check if an object implements a certain behavior?
Most people don't bother with testing for method existence like this.
If you want to test for methods in order to branch and do different things depending on whether they are found, then you are probably doing something evil (this kind of instanceof check is usually a code smell in OO code).
If you are just checking whether an object implements an interface for error-checking purposes, then it is not much better than not testing at all and letting an exception be thrown if the method is not found. I don't know anyone who routinely does this checking, but I am sure someone out there is doing it...
2) How would I initialize an object without repeating all the set-up for composited objects?
If you wrap the inner object construction code in a function or class then I think you can avoid most of the repetition and coupling.
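For example, something along these lines (made-up names, loosely based on the validators from the question):

function emailValidator(pattern) {
    return { validate: function(value) { return pattern.test(value); } };
}
function lengthValidator(max) {
    return { validate: function(value) { return value.length <= max; } };
}

// A single place that knows how the shared validators are constructed and configured
function standardValidators() {
    return [emailValidator(/@/), lengthValidator(255)];
}

function makeTextInput()  { return { validators: standardValidators() }; }
function makeEmailInput() { return { validators: standardValidators() }; }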
3) Is there any performance loss associated with having most of the functions that run in the app go through an apply()?
In my experience, I prefer to avoid relying on this unless strictly necessary: this is fiddly, it breaks inside callbacks (which I use extensively for iteration and async work), and it is very easy to forget to set it correctly. I try to use more traditional approaches to composition. For example:
Having each owned object be completely independent, without needing to look at its siblings or owner. This allows me to just call its methods directly and let it be its own this.
Giving the owned objects a reference to their owner, in the form of a property or as a parameter passed to their methods. This allows the composition units to access the owner without depending on this being set correctly.
Using mixins, flattening the separate composition units into a single level. This has big name-clash issues, but it allows everything to see everything else and share the same this. Mixins also decouple the code from changes in the composition structure, since different composition divisions will still flatten to the same mixed object.
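A rough sketch of that last, mixin-based approach (the names are made up):

function mixin(target) {
    for (var i = 1; i < arguments.length; i++) {
        var source = arguments[i];
        for (var key in source) {
            if (source.hasOwnProperty(key)) {
                target[key] = source[key];    // flatten everything onto one object
            }
        }
    }
    return target;
}

var validatable = { validate: function() { return this.value.length > 0; } };
var renderable  = { render: function() { return "<input value='" + this.value + "'>"; } };

var input = mixin({ value: "hi" }, validatable, renderable);
input.validate();   // true -- validate, render, and value all share the same this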
4) Any good resources?
I don't know, so tell me if you find one :)
Some people will tell you that adding prototypes to JavaScript natives is evil. For example:
String.prototype.format = function(format, replacements) {
...
};
Now, for those who agree with that (if you don't, please don't reply with an answer; your opinion is N/A here, and this is not a discussion about prototypes), is adding static methods to natives equally evil? (Hitherto and henceforth, "static" simply means a method whose context isn't an instance.)
For example, given that creating a String.prototype.format is evil, is adding it as a static method an acceptable practice?
String.format = function(format, replacements) {
...
};
How is extending a native with a static method any different, as far as best practices are concerned, from extending a native with a prototype method? Either you're against extending natives in any way or you're not; is there anyone in the camp that holds static extensions are acceptable while prototype extensions are not?
Ask yourself why extending natives is evil.
Common reasons are
1) Not future-proof: what if the standard says "there shall be a String.format"?
2) Not past-proof: adding enumerable properties to prototypes will break bad code.
3) May lead to confusion as to what is standard and what is not.
4) May break bad code (duck typing: looks like a duck, quacks like a --- throws an exception :()
It's simply a matter of weighing up how much you value those reasons. I only care about #1 (future-proofing).
Adding a static method to a native Constructor will not likely have an unexpected impact on someone else's code or the time it takes to construct objects using said Constructor.
However, when you add a prototype method to a native Constructor, every instance (even the ones created behind the scenes for operations like "test".indexOf("t")) will expose your method. Iterating over object properties or testing for capabilities (since we often can't judge an object by its type) gets more difficult.
Let's say you add String.prototype.forEach in your code. That will leak into every module. Now when some other code tests for the forEach method (thinking it's an array in a modern browser), they'll get a string instead--evil.
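A contrived example of the kind of capability test that breaks:

String.prototype.forEach = function(callback) {
    for (var i = 0; i < this.length; i++) {
        callback(this[i], i);
    }
};

function flatten(value) {
    // "if it has forEach, treat it as array-like" -- strings now pass this test too
    if (typeof value.forEach === 'function') {
        var out = [];
        value.forEach(function(item) { out.push(item); });
        return out;
    }
    return [value];
}

flatten("abc");   // ["a", "b", "c"] instead of ["abc"]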
I recently watched Douglas Crockford's JavaScript presentations, where he raves about JavaScript prototype inheritance as if it is the best thing since sliced white bread. Considering Crockford's reputation, it may very well be.
Can someone please tell me what is the downside of JavaScript prototype inheritance? (compared to class inheritance in C# or Java, for example)
In my experience, a significant disadvantage is that you can't mimic Java's "private" member variables by encapsulating a variable within a closure, but still have it accessible to methods subsequently added to the prototype.
i.e.:
function MyObject() {
    var foo = 1;
    this.bar = 2;
}

MyObject.prototype.getFoo = function() {
    // can't access "foo" here!
}

MyObject.prototype.getBar = function() {
    return this.bar; // OK!
}
This confuses OO programmers who are taught to make member variables private.
Things I miss when sub-classing an existing object in Javascript vs. inheriting from a class in C++:
1) No standard (built-into-the-language) way of writing it that looks the same no matter which developer wrote it.
2) Writing your code doesn't naturally produce an interface definition the way the class header file does in C++.
3) There's no standard way to do protected and private member variables or methods. There are some conventions for some things, but again different developers do it differently.
4) There's no compiler step to tell you when you've made foolish typing mistakes in your definition.
5) There's no type-safety when you want it.
Don't get me wrong, there are a zillion advantages to the way javascript prototype inheritance works vs C++, but these are some of the places where I find javascript works less smoothly.
4 and 5 are not strictly related to prototype inheritance, but they come into play when you have a significant-sized project with many modules, many classes and lots of files, and you wish to refactor some classes. In C++, you can change the classes, change as many callers as you can find, and then let the compiler find all the remaining references that need fixing. If you've added parameters, changed types, changed method names, moved methods, etc., the compiler will show you where you need to fix things.
In Javascript, there is no easy way to discover all the pieces of code that need to be changed without literally executing every possible code path to see if you've missed something or made a typo. While this is a general disadvantage of Javascript, I've found it particularly comes into play when refactoring existing classes in a significant-sized project. Near the end of a release cycle in a significant-sized JS project, I decided that I should NOT do a refactoring to fix a problem (even though that was the better solution), because the risk of not finding all possible ramifications of the change was much higher in JS than in C++.
So, consequently, I find it's riskier to make some types of OO-related changes in a JS project.
I think the main danger is that multiple parties can override one another's prototype methods, leading to unexpected behavior.
This is particularly dangerous because so many programmers get excited about prototype "inheritance" (I'd call it extension) and therefore start using it all over the place, adding methods left and right that may have ambiguous or subjective behavior. Ultimately, if left unchecked, this kind of "prototype method proliferation" can lead to very difficult-to-maintain code.
A popular example would be the trim method. It might be implemented something like this by one party:
String.prototype.trim = function() {
    // remove all ' ' characters from the left and right
    return this.replace(/^ +| +$/g, '');
}
Then another party might create a new definition with a completely different signature, taking an argument that specifies the character to trim. Suddenly, all the code that calls trim with no argument no longer trims anything.
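For example, the second definition might look something like this (a hypothetical implementation):

// A later script redefines trim to take the character to strip
String.prototype.trim = function(chr) {
    var pattern = new RegExp('^' + chr + '+|' + chr + '+$', 'g');
    return this.replace(pattern, '');
};

'--hello--'.trim('-');   // "hello" -- works for the new callers
'  hello  '.trim();      // chr is undefined, so the old zero-argument calls no longer strip anything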
Or another party reimplements the method to strip ' ' characters and other forms of white space (e.g., tabs, line breaks). This might go unnoticed for some time but lead to odd behavior down the road.
Depending on the project, these may be considered remote dangers. But they can happen, and from my understanding this is why libraries such as Underscore.js opt to keep all their methods within namespaces rather than add prototype methods.
(Update: Obviously, this is a judgment call. Other libraries--namely, the aptly-named Prototype--do go the prototype route. I'm not trying to say one way is right or wrong, only that this is the argument I've heard against using prototype methods too liberally.)
I miss being able to separate interface from implementation. In languages with an inheritance system that includes concepts like abstract or interface, you could e.g. declare your interface in your domain layer but put the implementation in your infrastructure layer. (Cf. onion architecture.) JavaScript's inheritance system has no way to do something like this.
I'd like to know if my intuitive answer matches up with what the experts think.
What concerns me is that if I have a function in C# (for the sake of discussion) that takes a parameter, any developer who writes code that calls my function immediately knows from the function signature what sort of parameters it takes and what type of value it returns.
With JavaScript "duck-typing", someone could inherit one of my objects and change its member functions and values (Yes, I know that functions are values in JavaScript) in almost any way imaginable so that the object they pass in to my function bears no resemblance to the object I expect my function to be passed.
I feel like there is no good way to make it obvious how a function is supposed to be called.
Javascript is an incredible language and libraries like jQuery make it almost too easy to use.
What should the original designers of Javascript have included in the language, or what should we be pressuring them into adding to future versions?
Things I'd like to see:
Some kind of compiled version of the language, so we programmers can catch more of our errors earlier, as well as providing a faster solution for browsers to consume.
Optional strict types (e.g., being able to declare a var as a float and keep it that way).
I am no expert on Javascript, so maybe these already exist, but what else should be there? Are there any killer features of other programming languages that you would love to see?
Read JavaScript: The Good Parts by Douglas Crockford, the author of JSLint. It's really impressive, and it covers the bad parts too.
One thing I've always longed for and ached for is some support for hashing. Specifically, let me track metadata about an object without needing to add an expando property on that object.
Java provides Object.hashCode(), which by default uses the underlying memory address; Python provides id(obj) to get the memory address and hash(obj) to be customizable; etc. JavaScript provides neither.
For example, I'm writing a Javascript library that tries to unobtrusively and gracefully enhance some objects you give me (e.g. your <li> elements, or even something unrelated to the DOM). Let's say I need to process each object exactly once. So after I've processed each object, I need a way to "mark it" as seen.
Ideally, I could make my own hashtable or set (either way, implemented as a dictionary) to keep track:
var processed = {};
function process(obj) {
    var key = obj.getHashCode();
    if (processed[key]) {
        return; // already seen
    }
    // process the object...
    processed[key] = true;
}
But since that's not an option, I have to resort to adding a property onto each object:
var SEEN_PROP = "__seen__";
function process(obj) {
    if (obj[SEEN_PROP]) { // or simply obj.__seen__
        return; // already seen
    }
    // process the object...
    obj[SEEN_PROP] = true; // or obj.__seen__ = true
}
But these objects aren't mine, so this makes my script obtrusive. The technique is effectively a hack to work around the fact that I can't get a reliable hash key for any arbitrary object.
Another workaround is to create wrapper objects for everything, but often you need a way to go from the original object to the wrapper object, which requires an expando property on the original object anyway. Plus, that creates a circular reference which causes memory leaks in IE if the original object is a DOM element, so this isn't a safe cross-browser technique.
For developers of Javascript libraries, this is a recurring issue.
What should the original designers of Javascript have included in the language, or what should we be pressuring them into adding to future versions?
They should have got together and decided together what to implement, rather than competing against each other with slightly different implementations of the language (naming no names), to prevent the immense headache that has ensued for every developer over the past 15 years.
The ability to use arrays/objects as keys without string coercion might've been nice.
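For example, plain objects coerce every key to a string:

var cache = {};
cache[[1, 2]]   = 'a';    // the array key is coerced to the string "1,2"
cache[{ x: 1 }] = 'b';    // this key becomes "[object Object]"
cache[{ y: 2 }] = 'c';    // ...and so does this one, silently overwriting 'b'
console.log(cache['1,2']);               // 'a'
console.log(cache['[object Object]']);   // 'c'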
Javascript is missing a name that differentiates it from a language it is nothing like.
There are a few little things it could do better.
Choice of + for string concatenation was a mistake. An & would have been better.
It can be frustrating that for( x in list ) iterates over indices, as it makes it difficult to use a literal array. Newer versions have a solution.
Proper scoping would be nice. v1.7 is adding this, but it looks clunky.
The way to do 'private' and 'protected' variables in an object is a little bit obscure and hard to remember as it takes advantage of closures and how they affect scoping. Some syntactic sugar to hide the mechanics of this would be fabulous.
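For what it's worth, the closure-based pattern for 'private' members looks roughly like this:

function makeCounter() {
    var count = 0;                       // 'private': visible only to the closures below
    return {
        increment: function() { count += 1; },
        value: function() { return count; }
    };
}

var counter = makeCounter();
counter.increment();
console.log(counter.value());   // 1
console.log(counter.count);     // undefined -- not reachable from outside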
To be honest, many of the problems I routinely trip over are actually DOM quirks, not JavaScript per se. The other big problem, of course, is that recent versions of JavaScript have interesting and useful things, like generators. Unfortunately, most browsers are stuck at 1.5. Apparently only Firefox is forging ahead.
File I/O is missing, though some would say the language doesn't really need it...