What is the difference between $.proxy() and bind()? - javascript

In 2009, ECMAScript 5 added a built-in bind() function which takes an object as a parameter and returns an identical function in which this will always refer to the object you passed it. (I couldn't find anything that looked like a canonical documentation link.)
How is this different from jQuery's $.proxy() function? Did $.proxy() come first before ECMAScript 5 was released? Is there a particular reason to favor $.proxy(function(){}, this) over function(){}.bind(this)?

proxy came first and you should likely favor bind as it is a standard. The way they are called varies slightly (due to being attached to Function.prototype vs just being a function) but their behavior is the same.
There is a pretty good post here: jQuery.proxy() usage, that ends with that advice.
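For illustration, here is a minimal side-by-side sketch of the two call forms (the user object and greet function are made up, and jQuery is assumed to be loaded as $):

var user = { name: "example" };
function greet() { console.log("Hello, " + this.name); }

// jQuery: a utility function that takes the function and the context.
var proxied = $.proxy(greet, user);

// ES5: a method inherited from Function.prototype.
var bound = greet.bind(user);

proxied(); // "Hello, example"
bound();   // "Hello, example"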

Edit
Please pay no attention to this post (despite being the accepted answer).
Long story short, it was my own fault for making assumptions about the context of the question, rather than just looking up the API docs, and was accepted as the answer before I could realize my own stupidity (making assumptions, without validating them) and delete it.
Matt Whipple's answer is 100% correct, and while I disagree with his statement that real Proxies are useless in JS (they would be fantastic in some low-level concerns), the rest of his statements are flat-out objectively correct (aside from actual dates for .bind vs .proxy, as .bind was in the spec years before it landed consistently in browsers).
Below is my shame, in the stocks for all to see...
Feel free to throw tomatoes at it.
If you want to know why I answered the way I did, read the comments below.
The difference between $({}).proxy() and func.bind({}) is that proxy is a loose connection.
You can detach at any time.
That's sort of what proxies are for.
The invisible-interface between what you want to do and the thing that will actually do it.
For the record, there's also a $.bind() which is not a proxy. That is to say, it fully binds to this, in the same way that func.bind() does, rather than implementing a mediator-system to attach and detach context from functions at-will.

$.proxy came first.
Below is a simple way to preserve a particular context on function call
var myProxy = (function (context, fn) {
    return function () {
        // Forward any arguments and return the result, preserving `context` as `this`.
        return fn.apply(context, arguments);
    };
})(myContext, myFn);
You could easily use this pattern before jQuery introduced $.proxy.
The answer is simple:
bind is the official standard.
Use bind, provided it is supported by the browsers the script needs to run in.
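For example, a minimal feature check along those lines might look like this (a sketch; it assumes jQuery is available as the fallback):

var hasNativeBind = typeof Function.prototype.bind === "function";

function logId() { console.log(this.id); }
var context = { id: 42 };

// Prefer the standard; fall back to $.proxy in older browsers.
var bound = hasNativeBind ? logId.bind(context) : $.proxy(logId, context);
bound(); // 42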

From Underscore bind vs jQuery.proxy vs Native bind
In addition to what is already mentioned, there's another difference between $.proxy() and .bind. Methods bound with $.proxy will return the same reference if called multiple times; jQuery caches functions proxied to an Object.
jsFiddle
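One practical consequence, per jQuery's $.proxy documentation, is that a proxied function carries the original function's unique ID (guid), so jQuery's event system can still unbind it; a function produced by native bind is a brand-new, unrelated function. A rough sketch (it assumes jQuery is loaded as $ and a hypothetical #btn element):

function onClick() { console.log(this.label); }
var ctx = { label: "clicked" };

// Native bind: each call returns a distinct function with no shared ID,
// so this .off() cannot match the handler attached above.
$("#btn").on("click", onClick.bind(ctx));
$("#btn").off("click", onClick.bind(ctx)); // handler is still attached

// $.proxy: the proxy shares the original function's guid, so passing
// the original function (or another proxy of it) unbinds the handler.
$("#btn").on("click", $.proxy(onClick, ctx));
$("#btn").off("click", onClick); // handler is removed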

Here is a test you can try for a performance comparison:
http://jsperf.com/bind-vs-jquery-proxy/5
At this time (October 2014), the performance varies wildly between browsers.
IE 11's native bind is the fastest.
However, in all three browsers I tested, native bind outperforms jQuery's proxy.
And since bind() is standard, I would suggest sticking to it if possible.

Related

Is there a case where using bind would not cover the need for using either call or apply?

I am learning javascript via javascript.info.
I learned about bind (and a lesson before about call and apply)
I see this post inquires about the differences on all 3.
After reading all that, I wonder: is there a scenario where using bind would NOT cover the need for using either call or apply ?
I mean, call and apply are meant for calling the function immediately, while bind is for calling it later on.
Would not the following work in all cases?
let f = function(){/* code here */};
let errorFreeF = f.bind(myContextObj);
errorFreeF();
A less verbose snippet:
(function(){/* code here */}).bind(myContextObj)();
If the answer is that using bind is always safe and indeed covers all scenarios, it seems like call/apply could be deprecated in future JS versions? Or does it perform worse?
You can use .bind as a substitute for .call and .apply. If you wanted, you could convert all uses of .call and .apply with .bind.
The other combinations are true as well. If you really wanted to, you could also replace all instances of those methods with .call, or with .apply.
It's just that it's sometimes inelegant to do so. For similar reasons, even though Array.prototype.forEach could be used to identify an element which matches a condition in an array, Array.prototype.find is more appropriate and requires less code.
seems like the call/apply could be deprecated in future JS versions ?
No, for a few reasons:
They're useful methods to have (they can result in cleaner code than having to write the same thing with .bind)
Using .bind creates a function. In rare cases, the extra overhead resulting from the creation of a function may be a problem. (But .call and .apply only invoke functions, they don't create functions)
The language designers will never implement a change (such as removing .call or .apply) because that will break existing websites, which is avoided at all costs.
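To make the equivalence (and the extra function creation) concrete, here is a small sketch with made-up names:

function describe(prefix, suffix) {
    return prefix + this.name + suffix;
}
var ctx = { name: "widget" };

// Immediate invocation with call/apply: no new function is created.
describe.call(ctx, "<", ">");      // "<widget>"
describe.apply(ctx, ["<", ">"]);   // "<widget>"

// The same result via bind: a throwaway bound function is created first.
describe.bind(ctx)("<", ">");      // "<widget>"
describe.bind(ctx, "<", ">")();    // "<widget>"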

JavaScript: Extending Element prototype

I have seen a lot of discussion regarding extending Element. As far as I can tell, these are the main issues:
It may conflict with other libraries,
It adds undocumented features to DOM routines,
It doesn’t work with legacy IE, and
It may conflict with future changes.
Given a project which references no other libraries, documents changes, and doesn’t give a damn for historical browsers:
Is there any technical reason not to extend the Element prototype? Here is an example of how this is useful:
Element.prototype.toggleAttribute = function (attribute, value) {
    if (value === undefined) value = true;
    if (this.hasAttribute(attribute)) this.removeAttribute(attribute);
    else this.setAttribute(attribute, value);
};
I’ve seen too many comments about the evils of extending prototypes without offering a reasonable explanation.
Note 1: The above example is possibly too obvious, as toggleAttribute is the sort of method which might be added in the future. For discussion, imagine that it’s called manngoToggleAttribute.
Note 2: I have removed a test for whether the method already exists. Even if such a method already exists, it is more predictable to override it. In any case, I am assuming that the point here is that the method has not yet been defined, let alone implemented. That is the point here.
Note 3: I see that there is now a standard method called toggleAttribute which doesn’t behave exactly the same. With modification, the above would be a simple polyfill. This doesn’t change the point of the question.
Is it OK? Technically, yes. Should you extend native APIs? As a rule of thumb, no. Unfortunately, the answer is more complex than that. If you are writing a large framework like Ember or Angular it may be a good idea, because your consumers will benefit from the extra API convenience. But if you're only doing this for yourself, then the rule of thumb is no.
The reasoning is that doing so destabilizes the trust in that object. In other words, by adding, changing, or modifying a native object, it no longer follows the well-understood and documented behavior that anyone else (including your future self) will expect.
This style hides implementation details that can go unnoticed. What is this new method? Is it a secret browser thing? What does it do? I found a bug, do I report it to Google or Microsoft now? A bit exaggerated, but the point is that the truth of the API has changed, and it is unexpected in this one-off case. It makes maintainability require extra thought and understanding that would not be needed if you just used your own function or wrapper object. It also makes changes harder.
Relevant post: Extending builtin natives. Evil or not?
Instead of trying to muck someone else's (or standard) code just use your own.
function toggleAttribute(el, attribute, value) {
    var _value = (value == null ? true : value);
    if (el.hasAttribute(attribute)) {
        el.removeAttribute(attribute);
    } else {
        el.setAttribute(attribute, _value);
    }
}
Now it is safe, composible, portable, and maintainable. Plus other developers (including your future self) won't scratch their heads confused where this magical method that is not documented in any standard or JS API came from.
Do not modify objects you don't own.
Imagine a future standard defines Element.prototype.toggleAttribute. Your code checks if it has a truthy value before assigning your function. So you could end up with the future native function, which may behave differently than what you expected.
Even more, just reading Element.prototype.toggleAttribute might call a getter, which could run some code with undesired side effects. For example, see what happens when you get Element.prototype.id.
You could skip the check and assign your function directly. But that could run a setter, with some undesired side effects, and your function wouldn't be assigned as the property.
You could use a property definition instead of a property assignment. That should be safer... unless Element.prototype has some special [[DefineOwnProperty]] internal method (e.g. is a proxy).
It might fail in lots of ways. Don't do this.
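For concreteness, the "property definition" approach mentioned above would look roughly like this, reusing the question's hypothetical manngoToggleAttribute name (and it is still subject to the caveats just described):

Object.defineProperty(Element.prototype, "manngoToggleAttribute", {
    value: function (attribute, value) {
        if (value === undefined) value = true;
        if (this.hasAttribute(attribute)) this.removeAttribute(attribute);
        else this.setAttribute(attribute, value);
    },
    writable: true,
    configurable: true,
    enumerable: false // built-in prototype methods are non-enumerable too
});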
In my assessment: no
Massively overwriting Element.prototype slows down performance and can conflict with standardization, but a technical reason not to do it does not exist.
I'm using several custom Element.prototype methods.
So far so good, until I observed a weird behaviour:
<!DOCTYPE html>
<html>
<body>
    <script>
        function doThis() {
            alert('window');
        }

        HTMLElement.prototype.doThis = function () {
            alert('HTMLElement.prototype');
        };
    </script>
    <button onclick="doThis()">Do this</button>
</body>
</html>
When the button is clicked, the prototype method is executed instead of the global one.
The browser seems to assume this.doThis(), which is weird: inline event handlers are evaluated with the element itself in their scope chain, so the doThis inherited from HTMLElement.prototype shadows the global function. To work around this, I have to use window.doThis() in the onclick.
It might be better if the W3C came up with a different syntax for calling native vs. custom methods, e.g.
myElem.toggleAttribute() // call native method
myElem->toggleAttribute() // call custom method
Is there any technical reason not to extend the Element prototype?
Absolutely none!
Pardon me:
ABSOLUTELY NONE!
In addition, .__proto__ was practically an illegal (Mozilla-only) prototype extension until yesterday. Today, it's a standard.
P.S.: You should by all means avoid the if(!Element.prototype.toggleAttribute) check; if("toggleAttribute" in Element.prototype) will do.

Leaking arguments when using Function.apply

I was reading Optimization killers in petkaantonov/bluebird on GitHub. When I reached section 3.3, "What is safe arguments usage?", I realized that I might be using bind and apply incorrectly in my project.
The post states:
Be aware that adding properties to functions (e.g. fn.$inject =...) and bound functions (i.e. the result of Function#bind) generate hidden classes and, therefore, are not safe when using #apply.
I used this answer on the question Use of .apply() with 'new' operator. Is this possible? to be able to pass an array of arguments to my constructor function like this:
new (Cls.bind.apply(Cls, arguments))();
This looks suspiciously much like what is described as not safe in the post.
Is this true? Am I going wrong here?
I would simply like to understand if the issue in the post applies to this example case, especially because it might be useful to comment on the answer so others don't make the same error (the post is heavily upvoted so it seems people are using this solution a lot).
Note: I recently found out about the spread operator (awesome), which is a nice alternative to my previous solution:
new Cls(...arguments);
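For completeness, one commonly recommended arguments-safe pattern is to copy arguments into a real array with a plain loop (which only uses arguments.length and arguments[i], the usages the guide lists as safe) before handing it to anything else; combined with the Function.prototype.bind.apply idiom from the linked answer, it looks roughly like this (a sketch; makeInstance is a made-up helper name):

function makeInstance(Cls /*, ...ctorArgs */) {
    // Copy arguments into a real array instead of leaking the arguments object.
    var args = new Array(arguments.length - 1);
    for (var i = 1; i < arguments.length; i++) {
        args[i - 1] = arguments[i];
    }
    // bind's first argument is the thisArg, which `new` ignores, so null is fine.
    return new (Function.prototype.bind.apply(Cls, [null].concat(args)))();
}

// Usage:
function Point(x, y) { this.x = x; this.y = y; }
var p = makeInstance(Point, 1, 2); // Point { x: 1, y: 2 }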

Why is it frowned upon to modify JavaScript object's prototypes?

I've come across a few comments here and there about how it's frowned upon to modify a JavaScript object's prototype? I personally don't see how it could be a problem. For instance extending the Array object to have map and include methods or to create more robust Date methods?
The problem is that a prototype can be modified in several places. For example, one library will add a map method to Array's prototype, and your own code will add the same method but with another purpose. So one implementation will be broken.
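A contrived sketch of that kind of collision (both definitions are deliberately simplistic):

// Library A: the usual "transform each element" semantics.
Array.prototype.map = function (fn) {
    var out = [];
    for (var i = 0; i < this.length; i++) out.push(fn(this[i], i, this));
    return out;
};

// Library B later overwrites it with "just iterate, return nothing" semantics.
Array.prototype.map = function (fn) {
    for (var i = 0; i < this.length; i++) fn(this[i], i, this);
};

// Every caller that relied on A's version is now silently broken.
console.log([1, 2, 3].map(function (x) { return x * 2; })); // undefined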
Mostly because of namespace collisions. I know the Prototype framework has had many problems with keeping their names different from the ones included natively.
There are two major methods of providing utilities to people..
Prototyping
Adding a function to an Object's prototype. MooTools and Prototype do this.
Advantages:
Super easy access.
Disadvantages:
Can use a lot of system memory. While modern browsers just fetch an instance of the property from the constructor, some older browsers store a separate instance of each property for each instance of the constructor.
Not necessarily always available.
What I mean by "not available" is this:
Imagine you have a NodeList from document.getElementsByTagName and you want to iterate through them. You can't do..
document.getElementsByTagName('p').map(function () { ... });
..because it's a NodeList, not an Array. The above will give you an error something like: Uncaught TypeError: [object NodeList] doesn't have method 'map'.
I should note that there are very simple ways to convert NodeLists and other array-like objects into real arrays.
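(For reference, a couple of common conversion patterns; Array.from and the spread syntax assume an ES2015-capable environment.)

var paragraphs = document.getElementsByTagName('p');

// ES5: borrow Array.prototype.slice.
var arr1 = Array.prototype.slice.call(paragraphs);

// ES2015: Array.from or the spread syntax.
var arr2 = Array.from(paragraphs);
var arr3 = [...paragraphs];

arr1.map(function (p) { return p.textContent; });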
Collecting
Creating a brand new global variable and stockpiling utilities on it. jQuery and Dojo do this.
Advantages:
Always there.
Low memory usage.
Disadvantages:
Not placed quite as nicely.
Can feel awkward to use at times.
With this method you still couldn't do..
document.getElementsByTagName('p').map(function () { ... });
..but you could do..
jQuery.map(document.getElementsByTagName('p'), function () { ... });
..but as pointed out by Matt, in usual use, you would do the above with..
jQuery('p').map(function () { ... });
Which is better?
Ultimately, it's up to you. If you're OK with the risk of being overwritten/overwriting, then I would highly recommend prototyping. It's the style I prefer and I feel that the risks are worth the results. If you're not as sure about it as me, then collecting is a fine style too. They both have advantages and disadvantages but all and all, they usually produce the same end result.
As bjornd pointed out, monkey-patching is a problem only when there are multiple libraries involved. Therefore it's not a good practice if you are writing reusable libraries. However, it still remains the best technique out there to iron out cross-browser compatibility issues when using host objects in JavaScript.
See this blog post from 2009 (or the Wayback Machine original) for a real incident that occurred when prototype.js and json2.js were used together.
There is an excellent article from Nicholas C. Zakas explaining why this practice is not something any programmer should rely on in a team or customer project (maybe you can do some tweaks for educational purposes, but not for general project use).
Maintainable JavaScript: Don’t modify objects you don’t own:
https://www.nczonline.net/blog/2010/03/02/maintainable-javascript-dont-modify-objects-you-down-own/
In addition to the other answers, an even more permanent problem that can arise from modifying built-in objects is that if the non-standard change gets used on enough sites, future versions of ECMAScript will be unable to define prototype methods using the same name. See here:
This is exactly what happened with Array.prototype.flatten and Array.prototype.contains. In short, the specifications were written up for those methods, their proposals got to stage 3, and then browsers started shipping them. But in both cases it was found that there were ancient libraries which patched the built-in Array object with their own methods of the same name and different behavior; as a result, websites broke, the browsers had to back out of their implementations of the new methods, and the specification had to be edited. (The methods were renamed.)
For example, there is currently a proposal for String.prototype.replaceAll. If you ship a library which gets widely used, and that library monkeypatches a custom non-standard method onto String.prototype.replaceAll, the replaceAll name will no longer be usable by the specification-writers; it will have to be changed before browsers can implement it.

Detecting additions to a Javascript object's properties

Other than regularly polling for changes, is there any (standard) way to register an event or callback that will be triggered any time a new property is added to a specific object?
Simply put, the answer is no.
Mozilla's JavaScript implementation has an overload for unresolvable methods, but it doesn't work for standard properties, see __noSuchMethod__. Of course, you asked for a standard method and no other implementations support this as far as I'm aware.
Once upon a time, ActionScript supported the __resolve property. As far as I know, JS has no similar crossbrowser construct, but maybe you could simulate it with some simple (but still bloaty) accessor function, like this:
http://bytes.com/topic/javascript/answers/789987-does-javascript-support-some-kind-__resolve-method
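To illustrate the "accessor function" idea, here is a rough sketch (makeWatched and onAdd are invented names); it only notices additions routed through its setter, which is exactly why it is bloaty:

function makeWatched(onAdd) {
    var target = {};
    return {
        set: function (key, value) {
            var isNew = !(key in target);
            target[key] = value;
            if (isNew) onAdd(key, value); // fire the callback only for new properties
            return value;
        },
        get: function (key) {
            return target[key];
        }
    };
}

var obj = makeWatched(function (key) {
    console.log("new property added: " + key);
});
obj.set("foo", 1); // logs "new property added: foo"
obj.set("foo", 2); // no log: the property already exists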
