Rx.js, ofObjectChanges, and Object.observe - javascript

This may be too speculative a question for Stack Overflow, but still.
Among all the goodness that Rx.js's Observables are capable of is observing changes to object properties through the ofObjectChanges method. Since ofObjectChanges is implemented using Object.observe, and since Object.observe is on its way out and won't make it into the JavaScript spec, is it a good idea to watch for object property changes using Rx.js's ofObjectChanges? Are there any better ways of observing an object's properties?
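For what it's worth, one Object.observe-free alternative is an ES2015 Proxy with a set trap. A minimal sketch (the observable helper is made up for illustration, and consumers must write through the proxy, not the original object):

function observable(target, onChange) {
    return new Proxy(target, {
        set: function (obj, prop, value) {
            obj[prop] = value;
            onChange(prop, value);
            return true; // signal the assignment succeeded
        }
    });
}

var model = observable({}, function (prop, value) {
    console.log(prop + " changed to " + value);
});
model.count = 1; // logs "count changed to 1"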

Related

Why is it Object.defineProperty() rather than this.defineProperty() (for objects)?

I'm working on a JavaScript project, and was just wondering why an object instance doesn't inherit the defineProperty() and other methods, rather than having to call the superclass (superobject?) Object method.
I've looked at the MDN docs, and there are in fact "non-standard" property methods.
But those are deprecated. Why would the move be to the Object methods?
It seems to me that something like instance.defineProperty(...) is better than Object.defineProperty(instance, ...). I would say the same about some of the other Object methods as well.
It's to avoid collisions - in general, issues with objects that do not have the property with the value that you expect.
Objects in JS are often used as key-value maps, and the keys can be arbitrary strings - for example __defineGetter__, hasOwnProperty or something less special. Now when you want to invoke such a function on an unknown object - hasOwnProperty, say, is often used in generic enumeration functions, where any JSON might be passed in - you can never be sure whether you got an overwritten property (that might not even be a function) or the original one you want, or whether the object inherits the property at all. To avoid this issue (or also this IE bug), you'd have to use Object.prototype.hasOwnProperty.call - and that is ugly.
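A sketch of the failure mode and the verbose workaround:

// An object used as a key-value map may shadow inherited methods
var dict = { hasOwnProperty: "oops", foo: 1 };

// dict.hasOwnProperty("foo") would throw:
// TypeError: dict.hasOwnProperty is not a function

// The safe (but ugly) form works regardless:
Object.prototype.hasOwnProperty.call(dict, "foo"); // true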
So namespacing all those functions on Object is genuinely useful: it's a cleaner API that separates the reflection methods from the object's application interface. This also helps optimisation (by simplifying static analysis) and makes it easier to restrict access to the reflection API in sandboxes - at least that was the design idea.
You might be happy to have a defineProperty around on the prototype, but you can only use it safely when working with known objects. If you still want it (since you know when to use it and when not to), you could use
Object.defineProperty(Object.prototype, "defineProperty", {
    writable: true,
    enumerable: false,
    value: function(prop, descr) {
        return Object.defineProperty(this, prop, descr);
    }
});
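With that patch in place (and assuming you control every object involved), the instance-level call the question wished for works:

var obj = {};
obj.defineProperty("answer", { value: 42 }); // resolved via Object.prototype
console.log(obj.answer); // 42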
It's done like that to avoid collisions - remember, every method on Object.prototype is a method in every single user-defined object, too.
Imagine an object where you'd want a custom method named defineProperty - that would completely break things if Object.defineProperty were on the prototype instead.
Interesting. The only reason I've come up with so far is that people like to rewrite the prototypes, and having this method "hidden" like this might help you avoid some bugs. Especially given the nice method name, which is more likely to get overwritten than, for example, __defineGetter__.
It seems that a lot of features depend on this functionality (link), so it makes sense to make it more global and secure in this context.

Revisiting extending native prototypes after ECMAScript 5

Recently, given the changes to defining properties in ECMAScript 5, I have revisited the question of whether we can safely extend the native JavaScript prototypes. In truth, all along I have extended prototypes like Array and Function, but I avoided doing so with Object, for the obvious reasons. In unit testing with Jasmine, adding Object.prototype specs to the specs for my own personal framework, extending Object.prototype with non-enumerable functions has appeared to be safe. Data properties like a "type" property, however, with getters/setters that do any unusual processing, have had unintended consequences. There is still the possibility of conflicts with other libraries, though in my work that hardly ever comes up. Nevertheless, as long as the functions are not enumerable, it looks like extending Object.prototype can be safe.
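A minimal sketch of the pattern being described; the isEmpty name is just an illustration:

// Added as non-enumerable, so for...in loops and Object.keys stay clean
Object.defineProperty(Object.prototype, "isEmpty", {
    writable: true,
    configurable: true,
    enumerable: false,
    value: function () {
        return Object.keys(this).length === 0;
    }
});

({ a: 1 }).isEmpty(); // false
for (var key in { a: 1 }) console.log(key); // logs only "a"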
What do you think? Is it safe to extend Object.prototype now? Please discuss.
Extending objects native to JavaScript might have become a little safer, though many collision concerns still stand. Generally, unless you're extending an object to support standardized behavior from a more recent standard, it would still be much safer to introduce a wrapper - it is much easier to do things the right way when you're the only one in control.
Speaking of objects native to the environment (DOM elements and nodes, AJAX stuff), the new JS standard still doesn't give you, and arguably can't give you, any guarantees about interaction with those beyond what is defined in their interface standards. Never forget that they're potentially accessible through many different scripting engines and thus don't need to be tailored to the quirks of one specific language, JS. So the recommendation not to extend those still stands as well.
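A minimal sketch of the wrapper approach, with illustrative names:

// Wrap instead of extending Object.prototype:
// only our code controls this interface.
function Wrapped(obj) {
    this.obj = obj;
}
Wrapped.prototype.isEmpty = function () {
    return Object.keys(this.obj).length === 0;
};

new Wrapped({}).isEmpty(); // true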
The definitive, absolute answer is ...
"It depends." :)
Extending any built in JavaScript object can be perfectly safe or it can be a complete disaster. It depends on what you are doing and how you are doing it.
Use smart practices and common sense and test the hell out of it.

What is the hierarchy of classes in JavaScript?

Could anyone point to a reliable model of the JavaScript standard class/prototype cloning inheritance relations?
The "standard prototypes" I refer to are window, navigator, document, and so forth and so on.
If you're referring to the DOM objects provided by browsers, take a look at Mozilla's Gecko DOM reference. Each browser provides its own native objects, though, so you should verify that a certain object is indeed available and works similarly in all the browsers that you want to target.
This is a pretty good one:
http://phrogz.net/js/classes/OOPinJS2.html
As #Radu and #johnH pointed out, there are no "classes" per se, but there is a sort of inheritance through the prototype chain.
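If you want to inspect the hierarchy yourself, you can walk an object's prototype chain in a browser console; the exact chain varies by browser:

// Print the constructor name of each prototype up the chain
var p = document;
while ((p = Object.getPrototypeOf(p)) !== null) {
    console.log(p.constructor && p.constructor.name);
}
// e.g. HTMLDocument, Document, Node, EventTarget, Object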

Why is it frowned upon to modify JavaScript object's prototypes?

I've come across a few comments here and there about how it's frowned upon to modify a JavaScript object's prototype. I personally don't see how it could be a problem. For instance, extending the Array object to have map and include methods, or creating more robust Date methods?
The problem is that a prototype can be modified in several places. For example, one library will add a map method to Array's prototype, and your own code will add the same method but with another purpose. So one implementation will be broken.
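A contrived sketch of such a collision (both "libraries" are hypothetical):

// library-a.js expects map(fn) to transform each element
Array.prototype.map = function (fn) {
    var out = [];
    for (var i = 0; i < this.length; i++) out.push(fn(this[i]));
    return out;
};

// library-b.js later redefines map(key) to pluck a property
Array.prototype.map = function (key) {
    var out = [];
    for (var i = 0; i < this.length; i++) out.push(this[i][key]);
    return out;
};

// Code written against library-a is now silently broken:
[1, 2, 3].map(function (x) { return x * 2; }); // [undefined, undefined, undefined]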
Mostly because of namespace collisions. I know the Prototype framework has had many problems keeping its names different from the ones included natively.
There are two major methods of providing utilities to people..
Prototyping
Adding a function to an Object's prototype. MooTools and Prototype do this.
Advantages:
Super easy access.
Disadvantages:
Can use a lot of system memory. While modern browsers just look the property up on the prototype, some older browsers store a separate copy of each property for each instance of the constructor.
Not necessarily always available.
What I mean by "not available" is this:
Imagine you have a NodeList from document.getElementsByTagName and you want to iterate through them. You can't do..
document.getElementsByTagName('p').map(function () { ... });
..because it's a NodeList, not an Array. The above will give you an error something like: Uncaught TypeError: [object NodeList] doesn't have method 'map'.
I should note that there are very simple ways to convert NodeLists and other Array-like objects into real arrays.
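For example (Array.from requires ES2015; the slice trick also works in older engines):

var nodes = document.getElementsByTagName('p');

// Classic trick: borrow Array.prototype.slice
var arr = Array.prototype.slice.call(nodes);

// ES2015 alternative
var arr2 = Array.from(nodes);

arr.map(function (p) { return p.textContent; }); // now works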
Collecting
Creating a brand new global variable and stockpiling utilities on it. jQuery and Dojo do this.
Advantages:
Always there.
Low memory usage.
Disadvantages:
Not placed quite as nicely.
Can feel awkward to use at times.
With this method you still couldn't do..
document.getElementsByTagName('p').map(function () { ... });
..but you could do..
jQuery.map(document.getElementsByTagName('p'), function () { ... });
..but as pointed out by Matt, in usual use, you would do the above with..
jQuery('p').map(function () { ... });
Which is better?
Ultimately, it's up to you. If you're OK with the risk of being overwritten/overwriting, then I would highly recommend prototyping. It's the style I prefer, and I feel that the risks are worth the results. If you're not as sure about it as me, then collecting is a fine style too. They both have advantages and disadvantages, but all in all, they usually produce the same end result.
As bjornd pointed out, monkey-patching is a problem only when there are multiple libraries involved. Therefore it's not a good practice if you are writing reusable libraries. However, it still remains the best technique out there to iron out cross-browser compatibility issues when using host objects in JavaScript.
See this blog post from 2009 (or the Wayback Machine original) for a real incident when prototype.js and json2.js are used together.
There is an excellent article from Nicholas C. Zakas explaining why this practice is not something any programmer should rely on in a team or customer project (maybe you can do some tweaks for educational purposes, but not for general project use).
Maintainable JavaScript: Don’t modify objects you don’t own:
https://www.nczonline.net/blog/2010/03/02/maintainable-javascript-dont-modify-objects-you-down-own/
In addition to the other answers, an even more permanent problem that can arise from modifying built-in objects is that if the non-standard change gets used on enough sites, future versions of ECMAScript will be unable to define prototype methods using the same name. See here:
This is exactly what happened with Array.prototype.flatten and Array.prototype.contains. In short, the specification was written up for those methods, their proposals got to stage 3, and then browsers started shipping them. But in both cases it was found that there were ancient libraries which patched the built-in Array object with their own methods of the same names but different behavior; as a result, websites broke, the browsers had to back out their implementations of the new methods, and the specification had to be edited. (The methods were renamed.)
For example, there is currently a proposal for String.prototype.replaceAll. If you ship a library which gets widely used, and that library monkeypatches a custom non-standard method onto String.prototype.replaceAll, the replaceAll name will no longer be usable by the specification-writers; it will have to be changed before browsers can implement it.
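A sketch of how that plays out; the library is made up:

// some-old-library.js, shipped years before the proposal
String.prototype.replaceAll = function (map) {
    // Non-standard semantics: replace using a map of substrings
    var s = String(this);
    for (var key in map) s = s.split(key).join(map[key]);
    return s;
};

// The proposed standard call "foo".replaceAll("o", "0") should yield "f00",
// but with the library loaded it returns "foo" - pages are silently broken.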

Detecting additions to a Javascript object's properties

Other than regularly polling for changes, is there any (standard) way to register an event or callback that will be triggered any time a new property is added to a specific object?
Simply put, the answer is no.
Mozilla's JavaScript implementation has an overload for unresolvable methods, but it doesn't work for standard properties; see __noSuchMethod__. Of course, you asked for a standard method, and no other implementations support this as far as I'm aware.
Once upon a time, ActionScript supported the __resolve property. As far as I know, JS has no similar cross-browser construct, but maybe you could simulate it with some simple (but still bloated) accessor functions, like this:
http://bytes.com/topic/javascript/answers/789987-does-javascript-support-some-kind-__resolve-method
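In the spirit of that thread, here's a sketch of the accessor idea for property names you know in advance (the watch helper is made up, and it cannot detect brand-new properties - which is why the answer above remains "no"):

// Define a get/set pair for a known name so writes can be intercepted
function watch(obj, name, onSet) {
    var value = obj[name];
    Object.defineProperty(obj, name, {
        get: function () { return value; },
        set: function (v) { value = v; onSet(name, v); }
    });
}

var o = {};
watch(o, "x", function (name, v) { console.log(name + " = " + v); });
o.x = 1; // logs "x = 1"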
