Referring to JavaScript instance methods with a pound/hash sign

This question is similar to Why are methods in Ruby documentation preceded by a hash sign?
I understand why Ruby instance methods are preceded by a pound sign: it helps differentiate talking about SomeClass#someMethod from SomeObject.someMethod, and it allows rdoc to work. And I understand that the authors of PrototypeJS admire Ruby (with good reason) and so they use the hash mark convention in their documentation.
My question is: is this a standard practice amongst JavaScript developers or is it just Prototype developers who do this?
Asked another way, is it proper for me to refer to instance methods in comments/documentation as SomeClass#someMethod? Or should my documentation refer to SomeClass.someMethod?

No, I have not yet met another JavaScript project that uses this notation.
Something like this is useful in JavaScript, though, because unlike in many languages, Class.methodName refers to classmethods such as String.fromCharCode, not to instance methods, which are what you are more often talking about. The method invoked by myinstance.methodName is not MyClass.methodName but MyClass.prototype.methodName, and MyClass.prototype is an annoyance to keep typing.
(The standard JS library confuses this by making many instance methods also have a corresponding classmethod. But they're different functions.)
is it proper for me to refer to instance methods in comments/documentation as SomeClass#someMethod?
Do what you like/find most readable. There's no standard here.

I think it comes from javadoc.
http://java.sun.com/j2se/1.5.0/docs/tooldocs/windows/javadoc.html#{#link}

Related

Leaking arguments when using Function.apply

I was reading the Optimization killers wiki page in petkaantonov/bluebird on GitHub. When I reached section 3.3, "What is safe arguments usage?", I realized that I might be using bind and apply incorrectly in my project.
The post states:
Be aware that adding properties to functions (e.g. fn.$inject =...) and bound functions (i.e. the result of Function#bind) generate hidden classes and, therefore, are not safe when using #apply.
I used this answer on the question "Use of .apply() with 'new' operator. Is this possible?" to be able to pass an array of arguments to my constructor function, like this:
new (Cls.bind.apply(Cls, arguments))();
This looks suspiciously much like what is described as not safe in the post.
Is this true? Am I going wrong here?
I would simply like to understand if the issue in the post applies to this example case, especially because it might be useful to comment on the answer so others don't make the same error (the post is heavily upvoted so it seems people are using this solution a lot).
Note: I recently found out about the spread operator (awesome), which is a nice alternative to my previous solution:
new Cls(...arguments);

What is this javascript documentation style called?

In the socket.io documentation, they use a notation that doesn't look like JavaScript (even though it's a JavaScript library) and seems a bit out of place.
Examples here: http://socket.io/docs/client-api/ (the page has since changed; here's a Web Archive snapshot from 2014)
This one is clear enough (just specifying types of arguments and return value):
IO(url:String, opts:Object):Socket
But this style I don't recognize at all:
IO#protocol
Manager#timeout(v:Boolean):Manager
I can pretty much figure it out through deduction (though I find it hard to read because it looks so foreign), but where does this style come from and why? Is this from another language (it certainly isn't javascript syntax that I've ever seen)? Is there a name for it? Is there a description of this style of documenting objects, methods, properties?
FYI, the idea to ask this question came because I referred a user here on SO to the socket.io documentation and they came back and said that wasn't javascript, did I have a link to the javascript documentation. I had to explain that it was the javascript documentation, it was just a funky (non-javascript-like) documentation style.
The page in question has since been rewritten to use object.property instead, but I remember that Object#property style, though I don't think it ever had a name.
The problem it's trying to solve is that properties/methods can be available on constructors, like Array.isArray(), as well as on instances, like ['foo','bar'].join(' '). The question is how to denote the latter. There were some competing notations, such as:
array.join(), which is what the socket.io docs are using now
Array.prototype.join (technically correct, but arguably even more confusing than Array#join to anyone who doesn't know how prototypes work in JS)
Array#join(), invented to be clearly different from Array.join syntax, and to avoid confusion with any existing JavaScript syntax.
The Object#property syntax was somewhat popular ten years ago, but it didn't win out in the end, so now it's just confusing when you encounter it.

Why does Underscore.js define function aliases

Underscore.js defines aliases for functions like _.each (alias: forEach) and _.map (alias: collect) and I don't understand why.
I initially thought this was to avoid issues with browsers that didn't implement those functions natively; my thinking was that calling [].map() would throw an error in IE7 and 8 because they didn't implement it natively, but I found that there was no issue since Underscore already defines its own versions.
Then I thought it could have something to do with conflicts with other JS libraries like Prototype that implement similarly named functions, but then I realised that having an alias doesn't actually make a difference in the case of _.map, since Prototype implements .map and .collect, and actually I'd been using Prototype's implementation all along (e.g. this.collection.collect(...)).
So far it doesn't seem to have made any difference and it hasn't created any issues but I'd really like to know why this is happening.
I guess the purpose of aliases is to make the library more familiar to programmers with different backgrounds (e.g. collect and include are used in Ruby, fold in functional languages, etc.).
Also, aliases can improve readability in some cases, for example
list.select(...).reject(...)
"sounds" better than
list.filter(...).reject(...)
If you look at their documentation, you will find it pretty close to the Lodash library (http://lodash.com/), and to jQuery, Backbone and Ruby (as noted on the home page).
My guess is that both of them are made to do the same thing, one on the server (Lodash), the other on the client (Underscore), and to keep the same syntax they have some method aliases.
Also, adding some aliases never hurts, since it reduces errors when you are writing in multiple languages or libraries.

The disadvantages of JavaScript prototype inheritance, what are they?

I recently watched Douglas Crockford's JavaScript presentations, where he raves about JavaScript prototype inheritance as if it is the best thing since sliced white bread. Considering Crockford's reputation, it may very well be.
Can someone please tell me what is the downside of JavaScript prototype inheritance? (compared to class inheritance in C# or Java, for example)
In my experience, a significant disadvantage is that you can't mimic Java's "private" member variables by encapsulating a variable within a closure, but still have it accessible to methods subsequently added to the prototype.
i.e.:
function MyObject() {
  var foo = 1;
  this.bar = 2;
}

MyObject.prototype.getFoo = function() {
  // can't access "foo" here!
};

MyObject.prototype.getBar = function() {
  return this.bar; // OK!
};
This confuses OO programmers who are taught to make member variables private.
Things I miss when sub-classing an existing object in Javascript vs. inheriting from a class in C++:
No standard (built-into-the-language) way of writing it that looks the same no matter which developer wrote it.
Writing your code doesn't naturally produce an interface definition the way the class header file does in C++.
There's no standard way to do protected and private member variables or methods. There are some conventions for some things, but again different developers do it differently.
There's no compiler step to tell you when you've made foolish typing mistakes in your definition.
There's no type-safety when you want it.
Don't get me wrong, there are a zillion advantages to the way javascript prototype inheritance works vs C++, but these are some of the places where I find javascript works less smoothly.
4 and 5 are not strictly related to prototype inheritance, but they come into play when you have a significant-sized project with many modules, many classes and lots of files and you wish to refactor some classes. In C++, you can change the classes, change as many callers as you can find, and then let the compiler find all the remaining references that need fixing. If you've added parameters, changed types, changed method names, moved methods, etc., the compiler will show you where you need to fix things.
In JavaScript, there is no easy way to discover all the pieces of code that need to change without literally executing every possible code path to see if you've missed something or made a typo. While this is a general disadvantage of JavaScript, I've found it particularly comes into play when refactoring existing classes in a significant-sized project. Near the end of a release cycle in a significant-sized JS project, I decided that I should NOT do any refactoring to fix a problem (even though that was the better solution) because the risk of not finding all possible ramifications of that change was much higher in JS than in C++.
So, consequently, I find it's riskier to make some types of OO-related changes in a JS project.
I think the main danger is that multiple parties can override one another's prototype methods, leading to unexpected behavior.
This is particularly dangerous because so many programmers get excited about prototype "inheritance" (I'd call it extension) and therefore start using it all over the place, adding methods left and right that may have ambiguous or subjective behavior. Ultimately, if left unchecked, this kind of "prototype method proliferation" can lead to very difficult-to-maintain code.
A popular example would be the trim method. It might be implemented something like this by one party:
String.prototype.trim = function() {
  // remove all ' ' characters from left & right
  return this.replace(/^ +| +$/g, '');
};
Then another party might create a new definition, with a completely different signature, taking an argument which specifies the character to trim. Suddenly all the code that passes nothing to trim has no effect.
Or another party reimplements the method to strip ' ' characters and other forms of white space (e.g., tabs, line breaks). This might go unnoticed for some time but lead to odd behavior down the road.
Depending on the project, these may be considered remote dangers. But they can happen, and from my understanding this is why libraries such as Underscore.js opt to keep all their methods within namespaces rather than add prototype methods.
(Update: Obviously, this is a judgment call. Other libraries--namely, the aptly-named Prototype--do go the prototype route. I'm not trying to say one way is right or wrong, only that this is the argument I've heard against using prototype methods too liberally.)
I miss being able to separate interface from implementation. In languages with an inheritance system that includes concepts like abstract or interface, you could e.g. declare your interface in your domain layer but put the implementation in your infrastructure layer. (Cf. onion architecture.) JavaScript's inheritance system has no way to do something like this.
I'd like to know if my intuitive answer matches up with what the experts think.
What concerns me is that if I have a function in C# (for the sake of discussion) that takes a parameter, any developer who writes code that calls my function immediately knows from the function signature what sort of parameters it takes and what type of value it returns.
With JavaScript "duck-typing", someone could inherit one of my objects and change its member functions and values (Yes, I know that functions are values in JavaScript) in almost any way imaginable so that the object they pass in to my function bears no resemblance to the object I expect my function to be passed.
I feel like there is no good way to make it obvious how a function is supposed to be called.

Why is it frowned upon to modify JavaScript object's prototypes?

I've come across a few comments here and there about how it's frowned upon to modify a JavaScript object's prototype. I personally don't see how it could be a problem. For instance, what is wrong with extending the Array object to have map and include methods, or with creating more robust Date methods?
The problem is that a prototype can be modified in several places. For example, one library will add a map method to Array's prototype, and your own code will add a method with the same name but a different purpose. So one of the two implementations will be broken.
Mostly because of namespace collisions. I know the Prototype framework has had many problems with keeping their names different from the ones included natively.
There are two major methods of providing utilities to people:
Prototyping
Adding a function to an Object's prototype. MooTools and Prototype do this.
Advantages:
Super easy access.
Disadvantages:
Can use a lot of system memory. While modern browsers just fetch the property from the prototype, some older browsers store a separate copy of each property for each instance of the constructor.
Not necessarily always available.
What I mean by "not available" is this:
Imagine you have a NodeList from document.getElementsByTagName and you want to iterate through them. You can't do..
document.getElementsByTagName('p').map(function () { ... });
..because it's a NodeList, not an Array. The above will give you an error something like: Uncaught TypeError: [object NodeList] doesn't have method 'map'.
I should note that there are very simple ways to convert NodeLists and other array-like objects into real arrays.
Collecting
Creating a brand new global variable and stockpiling utilities on it. jQuery and Dojo do this.
Advantages:
Always there.
Low memory usage.
Disadvantages:
Not placed quite as nicely; the utilities live on a separate namespace object rather than on the values they operate on.
Can feel awkward to use at times.
With this method you still couldn't do..
document.getElementsByTagName('p').map(function () { ... });
..but you could do..
jQuery.map(document.getElementsByTagName('p'), function () { ... });
..but as pointed out by Matt, in usual use, you would do the above with..
jQuery('p').map(function () { ... });
Which is better?
Ultimately, it's up to you. If you're OK with the risk of being overwritten/overwriting, then I would highly recommend prototyping. It's the style I prefer, and I feel that the risks are worth the results. If you're not as sure about it as me, then collecting is a fine style too. They both have advantages and disadvantages, but all in all, they usually produce the same end result.
As bjornd pointed out, monkey-patching is a problem only when there are multiple libraries involved. Therefore it's not a good practice if you are writing reusable libraries. However, it still remains the best technique out there to iron out cross-browser compatibility issues when using host objects in JavaScript.
See this blog post from 2009 (or the Wayback Machine original) for a real incident in which prototype.js and json2.js were used together.
There is an excellent article from Nicholas C. Zakas explaining why this is not a practice that belongs in a team or customer project (you might do some tweaks for educational purposes, but not for general project use).
Maintainable JavaScript: Don’t modify objects you don’t own:
https://www.nczonline.net/blog/2010/03/02/maintainable-javascript-dont-modify-objects-you-down-own/
In addition to the other answers, an even more permanent problem that can arise from modifying built-in objects is that if the non-standard change gets used on enough sites, future versions of ECMAScript will be unable to define prototype methods using the same name.
This is exactly what happened with Array.prototype.flatten and Array.prototype.contains. In short, the specifications for those methods were written up, their proposals got to stage 3, and then browsers started shipping them. But in both cases it was found that there were ancient libraries which patched the built-in Array object with their own methods of the same name and different behavior; as a result, websites broke, the browsers had to back out of their implementations of the new methods, and the specification had to be edited. (The methods were renamed to flat and includes.)
For example, there is currently a proposal for String.prototype.replaceAll. If you ship a library which gets widely used, and that library monkeypatches a custom non-standard method onto String.prototype.replaceAll, the replaceAll name will no longer be usable by the specification-writers; it will have to be changed before browsers can implement it.
