overloading vs overriding in javascript - javascript

In a recent JavaScript interview I was asked about overloading vs overriding. I know this is a concept in Java. But is there something similar in JavaScript, and if so what would be code examples? My understanding is that overloading isn't common in javascript. Why would you need to use "overloading" in JS?
Overriding is a bit clearer to me - an example of overriding would be subclassing, where you inherit from a super class but override some methods/properties to create unique ones for the subclass.

JavaScript does not support overloading.
JavaScript supports overriding: if you define two functions with the same name, the last one defined overrides the previously defined version, and every call to that name executes the last definition.
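For instance, a minimal sketch of that behaviour (the function name greet is made up for illustration):

// Both declarations use the same name; the second silently replaces the first.
function greet() { return 'hello'; }
function greet() { return 'goodbye'; }

greet(); // 'goodbye' - only the last definition is ever executed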
Read more here: http://blog.mastykarz.nl/overloading-functions-javascript/

There's a nice example of 'faking' JavaScript function overloading here: https://stackoverflow.com/a/457589/2754135
You basically give your function a single parameter that takes an object, and that object can contain any number of named parameters you want.
It's not actually overloading, obviously, because that's not possible in JavaScript.
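A quick sketch of the idea (createUser and its props are invented for illustration):

// One function, one configuration object instead of multiple signatures.
function createUser(config) {
  var name = config.name || 'anonymous';
  var age = typeof config.age === 'number' ? config.age : 0;
  var role = config.role || 'guest';
  return { name: name, age: age, role: role };
}

createUser({ name: 'Ann' });                          // pass only what you need
createUser({ name: 'Bob', age: 30, role: 'admin' });  // or everything at once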

There is no need for the traditional concept of overloading in JavaScript because of its dynamic nature. In more traditional programming languages, such as Java, you can define a method multiple times with different signatures and the language will pick the right one based on the signature of the call: that's called method overloading. Overriding, on the other hand, is the ability to redefine a method of the parent class in the child class.
To emulate overloading in JavaScript, it is common practice to make the last parameter an options object. For example
function yourFunction(firstParam, secondParam, options) {};
options is just a JavaScript object that has the props you want to pass. Then you can use the so-called "options" pattern to check for those props, as in the sketch below.
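A minimal sketch of that pattern (the retries and verbose props are made up for illustration):

function yourFunction(firstParam, secondParam, options) {
  options = options || {};                    // tolerate a missing options object
  var retries = options.retries !== undefined ? options.retries : 3;
  var verbose = !!options.verbose;
  // ... use firstParam, secondParam, retries and verbose ...
}

yourFunction('a', 'b');                                  // defaults apply
yourFunction('a', 'b', { retries: 5, verbose: true });   // override only what you pass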
Overriding is harder in pure JavaScript because of the prototypal nature of the language: when you "extend" a base object with a new one, you can use the .call() function of the parent constructor, passing this, to decorate the newly created object with the parent's props.
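A rough sketch of that pre-ES6 pattern (Animal and Dog are invented names):

// Hypothetical base constructor.
function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function () {
  return this.name + ' makes a noise';
};

// Child constructor: steal the parent constructor, then chain the prototypes.
function Dog(name) {
  Animal.call(this, name);            // decorate the new object with the parent's props
}
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;
Dog.prototype.speak = function () {   // override the parent method
  return this.name + ' barks';
};

new Dog('Rex').speak(); // 'Rex barks'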

While JavaScript does not support overloading in the traditional sense, more than the required number of arguments may be passed to a JavaScript function at any time and accessed through the arguments object. This is functionally similar.
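For example, a small sketch (the sum function is invented for illustration):

// A variadic function reading whatever was passed from the arguments object.
function sum() {
  var total = 0;
  for (var i = 0; i < arguments.length; i++) {
    total += arguments[i];
  }
  return total;
}

sum(1, 2);       // 3
sum(1, 2, 3, 4); // 10 - the declared "signature" never changes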

Related

Call, Bind, Apply Vs Passing The Reference Object As An Argument

Using call and bind is important when setting the value of this to the intended object. But what is the significance of using this instead of just passing the object as an argument to the function?
The question occurred to me after seeing that .bind is not supported in IE8 and below, so I started passing the object as a parameter to the function I am calling. (I know I could use a shim for bind, but that is not the question. The question is about the rationale behind using this.)
What is the purpose of the this, call, apply, bind syntax in JavaScript and what was it a solution to? Wouldn't it be simpler to pass the object as a parameter, both for cross-browser support and for simplicity (since this is commonly misunderstood, or forgotten to be set correctly in code)?
As far as I know, this technique is used in some languages (like Delphi). But using one argument less is more convenient, I guess. It is not a functional style, it is OOP style, and we love it, right? :)
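For comparison, a small sketch of the two styles the question contrasts (the user object and greeting functions are made up):

var user = { name: 'Ann' };

// Style 1: rely on this and set it explicitly at the call site.
function greetThis() { return 'Hi ' + this.name; }
greetThis.call(user);                  // 'Hi Ann'
var greetUser = greetThis.bind(user);  // bind is ES5; older browsers need a shim
greetUser();                           // 'Hi Ann'

// Style 2: pass the object as an ordinary argument.
function greetArg(that) { return 'Hi ' + that.name; }
greetArg(user);                        // 'Hi Ann'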

Delegation VS Concatenation in Javascript

JavaScript lacks a class construct; however, you can still achieve inheritance in many different ways.
You can mimic classes by utilizing prototypes to create constructor functions and thus implementing inheritance via delegation.
The most common way to do this is with the new keyword, but you can also use Object.create().
Alternatively, you can take a concatenative approach and copy behavior that you want your object to inherit directly into the object itself, rather than by pointing its [[prototype]] to another object.
What are the advantages and drawbacks of each pattern in JS?
It seems many experts are advocating the concatenative approach, like Doug Crockford for example. The delegation pattern is clearly more popular at the moment, however.
I know that, in general, the concatenative approach is going to be more flexible, but it will consume more memory because you're copying methods all over the place rather than just referencing a common set.
However, I also know that the V8 engine has ways of optimizing this stuff.
Besides performance, is there anything that you can achieve via delegation that you can't achieve via concatenation? For example, in delegation, I can have a method in a super class that implements some basic functionality, and in my subclass methods I can call the super's method and then do some subclass-specific computations before returning the final result. I can't think of a way to do this with the concatenative approach without just using a different method name.
The main differences are object size and flexibility. Inheriting properties instead of copying them will lead to smaller objects, especially if your method API is relatively large. Also, inheriting properties from a single object that can still be manipulated is much more dynamic (see Does some JavaScript library use dynamic aspects of the prototype system?). It might be more complicated to optimize than inheriting from static objects, but it is still faster than not sharing between objects.
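A small sketch of the two styles under discussion (counterProto and makeCounter are invented names):

// Delegation: the method lives once on a prototype; objects merely link to it.
var counterProto = {
  increment: function () { this.count++; return this.count; }
};
var a = Object.create(counterProto);
a.count = 0;

// Concatenation: each object gets its own copy of the method.
function makeCounter() {
  return {
    count: 0,
    increment: function () { this.count++; return this.count; }
  };
}
var b = makeCounter();

a.increment(); // 1 - the method is found via the [[Prototype]] link
b.increment(); // 1 - the method is an own property of b
a.increment === Object.create(counterProto).increment; // true - shared
b.increment === makeCounter().increment;               // false - copied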

Javascript - Difference between defineProperty and directly defining a function on an object

I recently created my own module for node.js for use with the koa module. It's a translation module like koa-i18n. I've studied other koa modules to see how functions/properties are applied to the koa context/request and some of them use the Object.defineProperty function, but what I did in my module was apply a function directly on 'this'.
So, what is the difference between using
Object.defineProperty(app.context, 'getSomeValue', { ... });
and
return function* (next) { this.getSomeValue = function () { ... } }
I've also come across the node-delegates module which uses the 'apply' function.
Which of these methods is the preferred way of applying a function/property to an existing object and what are the pros and cons?
The defineProperty method has specific advantages over directly setting a property on an object or returning a function object (which in some ways can simulate pseudo-private fields).
You can use defineProperty to define constants, decide whether they are enumerable, and more.
You can check out a similar discussion here - when do you use Object.defineProperty()?
Also check out the examples from the Mozilla Developer Network for this method and the configuration flags that let you decide whether the prop is writable, enumerable, etc. when using defineProperty.
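A short sketch of that difference (the property names direct and hidden are illustrative only):

var obj = {};

obj.direct = function () {};            // writable, enumerable, configurable: all true

Object.defineProperty(obj, 'hidden', {
  value: function () {},
  writable: false,                      // cannot be reassigned
  enumerable: false,                    // skipped by for...in and Object.keys
  configurable: false                   // cannot be deleted or redefined
});

Object.keys(obj); // ['direct'] - 'hidden' does not show up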
apply is a bit different, and I think a better comparison would be with the JavaScript call method. It is similar to call; the main difference is that apply takes its arguments as an array while call takes them individually. See the note here. apply and call can be used to invoke a method dynamically - roughly like reflection in other languages such as Java.
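A tiny sketch of that call/apply difference (introduce and ctx are made-up names):

// call takes arguments one by one, apply takes them as a single array.
function introduce(greeting, punctuation) {
  return greeting + ', I am ' + this.name + punctuation;
}
var ctx = { name: 'Koa' };

introduce.call(ctx, 'Hello', '!');      // 'Hello, I am Koa!'
introduce.apply(ctx, ['Hello', '!']);   // same result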

using .call vs. passing "this" in JavaScript

I know that if you have some JavaScript function and you want to call it so that this inside it refers not to the object on which it was directly called but to another object, you can use func.call(thatObject,param,and,more,params...).
But suppose you are the writer of func and the only usage for func is via func.call,
Why would you not define it to begin with as:
function func(that,param,and,more,params...) {
//and in here use *that* and not *this*
}
Yep, it looks less "cool" because it's not a method of an object,
but hey, if the only usage for func is via func.call, it all seems like just extra code and overhead.
Am I missing something here? Or is the source code in which I have seen this pattern just "over-OOed"?
There appears to be a large performance difference. Using
function func() {
    // code here, this.something
}
func.call(thatObject);
according to the first couple of tests is about 8 times slower than using
function func(that) {
    // code here, that.something
}
func(thatObject);
Test it yourself, JSPerf here
Ultimately though, speed alone is rarely the most important factor in which code we use. Code is designed for people as much as it is for computers, we need to communicate our intentions clearly to both. Whichever makes the code cleanest is best, and we should follow conventions whenever it is feasible. I personally prefer the second option here, but I think the general convention is the first. So I think you use call in most situations, except for when you need the fastest code possible or if the convention changes.
Your pattern will work in this scenario, but in general it has some problems:
Constructor functions -> We create objects using the new keyword, where the 'this' object is automatically created, linked to the function's prototype, and returned by default.
When calling the function, someone might forget to pass the 'that' object and your code will fail. In the case of call, if you pass null, this falls back to the global object (in non-strict mode, which is common).
func.call is intended to be used when an alternative (or any) context needs to be provided for the function.
Your suggestion is odd, since, when you define the function using this pattern:
function func(that,param,and,more,params...) {
//and in here use *that* and not *this*
}
it's either:
- the function doesn't belong to any object, in which case passing an object to act as this doesn't make any sense, since there should be no this.someThing calls to begin with,
- or it belongs to a different object (or is meant to be plugged into one), in which case it does exactly what func.call would do, while contaminating the function's definition with a redundant parameter.
UPDATE:
Think of the second example in this way - imagine that some set of objects (what you'd call objects of the same "class") allow the injection of arbitrary functions, say for iterating over all properties of the object and doing some summary or manipulation or what have you.
In Java, the common pattern is to create an Iterator and pass it around, and its main purpose is to serve as a sort of placeholder so that next and hasNext methods can be called - since Java doesn't really have object-less functions. (Granted, there is some additional logic there, but let's leave it alone for the sake of this discussion.)
JavaScript does not need this! All these call and apply methods mean we do not need some additional Iterator object to hold them. They can be defined "detached" from any object, while still being written with the intention of being used in the context of one (hence the use of this in their code) and injected into code that knows how to accept such functions.
The "host" code then only needs to call or apply them, knowing that this will refer to itself - this very "host" object.
This leads to a more concise, reusable and portable code, IMO.
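A sketch of that injection idea (sumValues and host are invented names):

// A detached function, written with this in mind but owned by no object.
function sumValues() {
  var total = 0;
  for (var key in this.values) {
    total += this.values[key];
  }
  return total;
}

// The "host" object accepts such functions and runs them against itself.
var host = {
  values: { a: 1, b: 2, c: 3 },
  run: function (fn) {
    return fn.call(this);   // inside fn, this now refers to host
  }
};

host.run(sumValues); // 6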
UPDATE 2:
See more here.

Use closure to make methods private in prototype

I am using Prototype 1.4 in our project, and I used to create classes in this manner:
1) Manner1
var Person = Class.create();
Person.prototype = {
    initialize: function(name, age) {
        this._check(name, age);
        this.name = name;
        this.age = age;
    },
    say: function() {
        console.log(this.name + ',' + this.age);
    },
    _check: function(name, age) {
        if (typeof(name) != 'string' || typeof(age) != 'number')
            throw new Error("Bad format...");
    }
};
However, in the above code, the Person method "_check" can be called from outside, which is not what I expected.
In a former post of mine, thanks to 'T.J. Crowder', I was shown one solution to make the method totally private:
2) Manner2
var Person = (function() {
    var person_real = Class.create();
    person_real.prototype = {
        initialize: function(name, age) {
            _check(name, age);
            this.name = name;
            this.age = age;
        },
        say: function() {
            console.log(this.name + ',' + this.age);
        }
    };
    // some private method
    function _check(name, age) {
        //....the check code here
    }
    return person_real;
})();
Now,the "_check" can not be exposed outside.
But what I am now confusing is that does this manner will cause performance problem or is it the best pratice?
Since one of the reasons we create class(the blueprint) is to reducing the repeat codes which can be reused many times anywhere.
Now look at the "Manner1":
We create a class "Person", then we put all the instance methods on the prototype object of class Person.
Then each time we call the
var obj=new Person('xx',21);
the object "obj" will own the methods from the Person.prototype. "obj" itself does not hold any codes here.
However in "Manner2":
Each time we call:
var obj=new Person('xx',21);
A new blueprint will be created, and the private methods such as "_check" will be created each time as well.
Is it a waste of memory?
Note: maybe I am wrong, but I am really confused. Can anyone give me an explanation?
You're getting caught by a couple of common problems that people have with JavaScript.
Second question first, as it's easier to answer. With prototypal inheritance you're only describing the differences between two related objects. If you create a function on the prototype of an object and then make 10000 clones of it, there's still only one function. Each individual object will call that method, and due to how "this" works in JavaScript, the "this" in those function calls will point to the individual objects, despite the function living in the single prototype.
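A quick sketch of that sharing (proto, p1 and p2 are illustrative names):

// One function object on the prototype, shared by every clone.
var proto = {
  describe: function () { return 'I am ' + this.name; }
};
var p1 = Object.create(proto); p1.name = 'first';
var p2 = Object.create(proto); p2.name = 'second';

p1.describe === p2.describe; // true - a single shared function
p1.describe();               // 'I am first'  - this is p1
p2.describe();               // 'I am second' - this is p2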
Of course, when you have unique per-item properties (unique values for each object), then yes, you can run into performance issues (I hesitate to use the term "instance" because it doesn't mean the same thing in prototypal inheritance as it does in class-based inheritance). Since you're no longer benefiting from one single shared function/dataset, you lose that benefit. Knowing how to minimize these inefficiencies is one key part of smart, efficient programming in JavaScript (and any language based on prototypal inheritance). There are a lot of non-obvious tricks for getting functionality and data shared with minimal duplication.
The first question is a common source of confusion in JavaScript due to the near omnipotence of Function. Functions in JavaScript single-handedly do the jobs of many different constructs in other languages. Similarly, objects in JavaScript fill the role of a lot of data constructs in other languages. This concentrated responsibility brings a lot of powerful options to the table, but it makes JavaScript a kind of boogieman: it looks really simple, and it is really simple, but it's simple the way poker is simple. There's a small set of moving pieces, but the way they interact begets a much deeper metagame that rapidly becomes bewildering.
It's better to understand the objects/functions/data in a JavaScript application/script/whatever as a constantly evolving system, as this better captures prototypal inheritance. Perhaps like an American football game, where the current state of the game can't really be captured without knowing how many downs there are, how many timeouts remain, what quarter you're in, etc. You have to move with the code instead of looking at it statically, like in most other languages, even dynamic ones like PHP.
In your first example, all you're essentially doing is using Class.create to get the initialize() function called automatically and to handle grandparent constructor stuff; otherwise it's similar to just doing:
var Person = function() { this.initialize.apply(this, arguments); };
Person.prototype = { ... };
Without the grandparent stuff. Any properties set directly on an object are going to be public (unless you're using ES5 defineProperty stuff to specifically control that), so adding _ to the beginning of the name won't do anything beyond convention. There is no such thing as a private or protected member in JavaScript. There is ONLY public, so if you define a property on an object, it is public. (Again, discounting ES5, which does add some control here, but it isn't really designed to fill the same role as private variables in other languages and it shouldn't be used or relied on in the same way.)
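For instance, a quick sketch using the Manner1 Person above to show that the underscore is only a convention:

var someone = new Person('xx', 21);
typeof someone._check; // 'function' - still reachable from outside via the prototype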
Instead of private members, we have to use scope to create privacy. It's important to remember that JavaScript only has function scoping. If you want to have private members in an object, then you need to wrap the whole object in a bigger scope (a function) and selectively make public what you want. That's an important basic rule that, for some reason, I rarely see succinctly explained. (Other schemes exist, like providing a public interface for registering/adding functions into a private context, but they all end with everything that's sharing having access to a private context built by a function, and they are usually far more complicated than needed.)
Think of it this way: you want to share these members inside the object, so all the object members need to be within the same scope. But you don't want them shared on the outside, so they must be defined in their own scope. You can't put them as members ON the public object, since JavaScript has no concept of anything besides public properties (ES5 aside, which can still be tampered with), so you're left with function scope as the only way to implement privacy.
Fortunately this isn't painful, because you have a number of effective, even smooth, ways to do it. This is where JavaScript's power as a functional language comes in. By returning a function from a function (or an object containing functions/data/references), you can easily control what gets let back out into the world. And since JavaScript has lexical scoping, those newly unleashed public interfaces still have a pipe back into the private scope they were originally created in, where their hidden brethren still live.
The immediately executed function serves the role of gatekeeper here. The objects and scope aren't created until the function is run, and often the only purpose of the container function is to provide scope, so an anonymous, immediately executing function fits the bill (neither anonymity nor immediate execution is required, though). It's important to recognize the flexibility of JavaScript when it comes to constructing an "Object" or whatever. In most languages you carefully construct your class structure and what members each thing has, and on and on. In JavaScript, "if you dream it you can make it" is essentially the name of the game. Specifically, I can create an anonymous function with 20 different Objects/Classes/whatever built inside it, then ad hoc cherry-pick a method or two from each and return those as a single public Object, and then that IS the object/interface, even though it didn't exist until two seconds before I did return { ...20 functions from 20 different objects... }.
So in your second example you're seeing this tactic being used. In this case a prototype is created, and then a separate function is created in the same scope. As usual, only one copy of the prototype will exist no matter how many children come from it. And only one copy of that function exists, because the containing function (the scope) is only called once. The children created from the prototype do not have direct access to call the function, but it will seem like they do because they get backdoor access through functions that live on the prototype. If a child goes and implements its own initialize in that example, it will no longer be able to make use of _check, because it never had direct access in the first place.
Instead of looking at it as the child doing stuff, look at it as the parent/prototype tending to the set of functions that live on it like a central phone operator that everything routes through, with children as thin clients. The children don't run the functions, the children ping the prototype to run them with this pointing at the caller child.
This is why stuff like Prototype's Class becomes useful. If you're implementing multilevel inheritance you start running into holes where prototypes up the chain miss out on some functionality. This is because, while properties/functions in the prototype chain automagically show up on children, the constructor functions do not cascade in the same way; the constructor property inherits the same as anything else, but they're all named constructor, so you only get the direct prototype's constructor for free. Also, constructors need to be executed to get their special sauce, whereas a prototype is simply a property. Constructors usually hold that key bit of functionality that sets up an object's own private data, which is often required for the rest of the functionality to work. Inheriting a buttload of functions from a prototype chain doesn't do any good if you don't manually run through the constructor chain as well (or use something like Class) to get set up.
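A sketch of manually running the constructor chain (Parent and Child are invented names):

function Parent(name) {
  this.name = name;            // per-object setup lives in the constructor
}
Parent.prototype.greet = function () { return 'Hi ' + this.name; };

function Child(name, age) {
  Parent.call(this, name);     // run the parent constructor against this new object
  this.age = age;
}
Child.prototype = Object.create(Parent.prototype);
Child.prototype.constructor = Child;

new Child('Ann', 30).greet();  // 'Hi Ann' - works because Parent.call ran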
This is also part of why you don't usually see deep inheritance trees in JavaScript, and why people usually complain about stuff like GWT with its deep Java-like inheritance patterns. They're not really a good fit for JavaScript and prototypal inheritance. You want shallow and wide, a few layers deep or less. And it's important to get a good grasp on where in the flow an object lives and which properties are where. JavaScript does a good job of creating a facade that makes it look like X and Y functionality is implemented on an object, but when you peer behind it you realize that (ideally) most objects are mostly empty shells that a) hold a few bits of unique data and b) are packaged with pointers to the appropriate prototype objects and their functionality to make use of that data. If you're doing a lot of copying of functions and redundant data, then you're doing it wrong (though don't feel bad, because it's more common than not).
Herein lies the difference between a shallow copy and a deep copy. jQuery's $.extend defaults to shallow (I would assume most/all do). A shallow copy is basically just passing around references without duplicating functions. This allows you to build objects like Legos and gives you a bit more direct control when merging parts of multiple objects. A shallow copy, similar to using 10000 objects built from a prototype, should not hurt you on performance or memory. This is also a good reason to be very careful about haphazardly doing a deep copy, and to appreciate that there is a big difference between shallow and deep (especially when it comes to the DOM). It is also one of the places where JavaScript libraries carry some heavy lifting: there are browser bugs that make browsers fail at handling situations that should be cheap, because they should just be passing references around and garbage collecting efficiently, and when that happens it tends to require creative or annoying workarounds to ensure you're not leaking copies.
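A small sketch of what "shallow" means here (the extend helper below is a simplified stand-in, not jQuery's actual implementation):

// Shallow copy: functions and nested objects are shared by reference.
function extend(target, source) {
  for (var key in source) {
    if (source.hasOwnProperty(key)) {
      target[key] = source[key];       // copies the reference, not a clone of the value
    }
  }
  return target;
}

var mixin = { settings: { retries: 3 }, log: function () {} };
var copy = extend({}, mixin);

copy.log === mixin.log;           // true - same function object, no duplication
copy.settings === mixin.settings; // true - mutating one is visible through the other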
A new blueprint will be created,the private methods such as "_check" will be created each time also. Is it a waste of memory?
You are wrong. In the second way, you execute the surrounding function only one time and then assign person_real to Person. The code is exactly the same as in the first way (apart from _check, of course). Consider this variation of the first way:
var Person = Class.create();
Person.prototype = {
    initialize: function(name, age) {
        _check(name, age);
        this.name = name;
        this.age = age;
    },
    say: function() {
        console.log(this.name + ',' + this.age);
    }
};
function _check(name, age) {
    //....the check code here
}
Would you still say that _check is created every time you call new Person? Probably not. The difference here is that _check is global and can be accessed from any other code. By putting this piece in a function and calling the function immediately, we make _check local to that function.
The methods initialize and say have access to _check, because they are closures.
Maybe it makes more sense to you when we replace the immediate function with a normal function call:
function getPersonClass() {
    var person_real = Class.create();
    person_real.prototype = {
        initialize: function(name, age) {
            _check(name, age);
            this.name = name;
            this.age = age;
        },
        say: function() {
            console.log(this.name + ',' + this.age);
        }
    };
    // some private method
    function _check(name, age) {
        //....the check code here
    }
    return person_real;
}
var Person = getPersonClass();
getPersonClass is only called once. As _check is created inside this function, it means that it is also only created once.
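To see the effect, a short sketch using the Person built above:

var p = new Person('xx', 21);  // works - initialize reaches _check through the closure
p.say();                       // logs 'xx,21'
typeof p._check;               // 'undefined' - nothing named _check is exposed on the object
// Calling _check(...) out here would throw a ReferenceError: it only exists
// inside getPersonClass's scope.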
