While creating a module I ended up designing a pattern where I attach methods to a function, and I'm not sure if it is correct. It is a closure that returns a function with some methods attached, and those methods in turn call the function itself.
I don't know if this is bad practice or if it is considered to be OK. My objective is to provide ways to call the function with certain presets or in different ways, while retaining the ability to just call the function in its simplest form. Would this lead to memory leaks or anything like that?
I'm not making use of "this" at any point, so there is no danger of losing context.
Below you can find a code snippet with a simplified version.
function factory(general) {
    var pusher = setTimeout(function () { console.log('$', general); }, 1000);
    var counter = 0;

    function reporter(specific) {
        counter++;
        console.log(counter, general, specific);
    }

    reporter.middleware = function (something) {
        clearTimeout(pusher);
        return factory(general + something);
    };

    return reporter;
}
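To illustrate the intended usage, here is a minimal sketch based on the simplified factory above (the argument values are made up):
var report = factory('base');                  // plain call path
report('first');                               // logs: 1 'base' 'first'
report('second');                              // logs: 2 'base' 'second'

var extended = report.middleware(' + extra');  // preset path: builds a new reporter
extended('third');                             // logs: 1 'base + extra' 'third'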
Thanks in advance.
Would this lead to memory leaks or anything like that?
No more than anything else. :-)
Since functions are proper objects in JavaScript, you can add properties to them, and those properties can refer to other functions. At a technical level, it's not a problem at all. For instance, jQuery does it with its $ function, which not only is callable ($()), but also has various other functions on it ($.ajax, $.Deferred, $.noConflict, etc.).
Whether it's good style or design is a matter, largely, of opinion, so a bit off-topic for SO. It may well be fine. Or it may be that you'd be better off returning a non-function object with your various functions on it as properties.
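If it helps to compare, here is a rough sketch of that non-function alternative, reusing the names from the question's example (the exact shape of the returned object is just an assumption):
function factory(general) {
    var counter = 0;

    function report(specific) {
        counter++;
        console.log(counter, general, specific);
    }

    // A plain object instead of a callable function: the simple call becomes api.report(...)
    return {
        report: report,
        middleware: function (something) {
            return factory(general + something);
        }
    };
}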
Related
I wish I could think of a better question for my situation.
Let me give the context in code.
I have this general function intended for binding with different variations (only about 3 or 4 cases).
GeneralFunction = function (helper, paramA, paramB) {
    if (paramA == "hello") {
        return helper(paramA);
    }
    return paramB;
};
Then I have this function to return a particular variation of general function.
function getFlavorX() {
    return GeneralFunction.bind(undefined, helperX);
}
My concern is that getFlavorX() could be called many, many times (thousands), and according to the documentation of bind, it seems each call to bind creates a new function, even for the exact same helperX?
So I guess I am kind of leaking function objects?
I guess the concern is valid, but whether it amounts to a memory leak depends on the rest of the code. Since in my case the variations of binding are very limited, I manually maintain the list of bound function objects (I return an already-bound function object instead of letting user code invoke bind on each call).
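A rough sketch of that caching approach, assuming a single helperX variation (the cache variable name is made up):
var flavorXCache = null; // hypothetical cache for the helperX binding

function getFlavorX() {
    if (!flavorXCache) {
        // bind is only ever called once for this variation
        flavorXCache = GeneralFunction.bind(undefined, helperX);
    }
    return flavorXCache;
}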
I recently joined a large software developing project which uses mainly JavaScript and a particular question has been on my mind since day one. I know this issue has been here on SO before, but I have never seen the core question being properly answered. So here I ask it again:
Are there any benefits in JavaScript in using function expressions rather than function declarations?
In other words, is this:
var myFunction = function () {
    // Nice code.
};
in any way better than this:
function myFunction() {
    // Nice code.
}
As I see it, function expressions only introduce negative aspects on several levels to the code base. Here I list a few.
A function expression like the one above suddenly forces you to be careful with forward references, because the anonymous function object that the myFunction variable refers to does not exist until the assignment actually executes. This is never a problem if you use function declarations, which are hoisted (see the short sketch after the loop example below).
Apart from creating an extra variable binding on top of the function object itself, this usage introduces a very bad programming habit, which is that developers tend to declare their functions only when they feel they need them. The result is code that mixes object declarations, function expressions and logic in a way that obscures the core logic of a piece of code.
As a side effect of 2), code becomes much harder to read. If you use proper function declarations, and var declarations only for things that will actually vary in the code, it becomes far easier to scan the indentation of a code segment and quickly tell the data apart from the functions. When everything is declared with "var", you are suddenly forced to read much more carefully to find this information.
As yet another nasty side effect of 2), as users get into the bad habit of only declaring their functions when they feel they need them, function expressions start showing up inside event handlers and loops, effectively creating a new copy of the function object either each time an event handler is called or for each turn in the loop. Needless to say, this is bad! Here is an example to show what I mean:
var myList = ['A', 'B', 'C'];
myList.forEach(function (element) {
    // This is the problem I see.
    var myInnerFunction = function () {
        // Code that does something with element.
    };
});
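And a minimal sketch of the forward-reference issue from point 1 (the names are made up):
declared();                      // works: function declarations are hoisted
// expressed();                  // would throw a TypeError: expressed is still undefined here

function declared() { console.log('declared'); }
var expressed = function () { console.log('expressed'); };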
So to sum up, at least in my view, the only situation in which it is fair to use something like:
var myFunction = function () {
    // Nice code.
};
is when your logic intends to change the myFunction reference to point at different functions during execution. In that situation myFunction (the variable) is something that is variable in the code, hence you are properly informing a different programmer of your intents, rather than confusing him/her.
With this background in mind, I ask my question again. Have I missed something central about function expressions (in the context described above) in which they provide any benefit over function declarations?
Style 1: Objects with constructor/prototype
function DB(url) {
    this.url = url;
}

DB.prototype.info = function (callback) {
    http.get(this.url + '/info', callback);
};
Style 2: Closures
function DB(url) {
    return { info: async.apply(http.get, url + '/info') };
}
This is just an example and assume that there are more prototype methods and private methods involved.
I have read in posts One and Two that the closure style is preferred over the other in nodejs. Please help me clarify why using the this.something syntax is considered bad in nodejs.
You can give your opinion about which is better, but I mostly need to know about what are the advantages and disadvantages of each style when used in nodejs.
It's not about a style. These two functions do two completely different things.
A closure provides access to local variables. This way you can create private variables that aren't accessible from the outside (like url in your example). But it has a performance impact, since the closure is created each time your object is created.
A prototype function is faster, but it is created before the object and doesn't know anything about the object itself.
Sometimes it even makes sense to use both of them at the same time. :)
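For instance, a rough sketch of mixing the two, reusing the DB example from the question (the secret/auth details are made up purely for illustration):
var http = require('http');

function DB(url) {
    var secret = Math.random().toString(36).slice(2); // private, only reachable via the closure

    this.url = url; // public, so prototype methods can use it

    // per-instance closure method: the only code that can read secret
    this.authQuery = function () {
        return '?auth=' + secret;
    };
}

// shared prototype method: created once, uses only public state
DB.prototype.info = function (callback) {
    http.get(this.url + '/info' + this.authQuery(), callback);
};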
PS: coding style is described here: https://npmjs.org/doc/coding-style.html. It doesn't explain your particular question, but I feel I have to balance the two links in the previous answer with something more sensible. :)
Closures, when done correctly, allow you to encapsulate data through the use of the scope chain that cannot be modified by any other caller.
The prototype chain does not provide any protection in that same sense. The main drawback to the use of Objects in the fashion you describe, especially in a server or library scenario, is that the "this" keyword can be modified by the caller. You have no control over that and your code will break in wildly unpredictable ways if it occurs.
var mongo = new DB('localhost');
mongo.info.call(this); // broken
Now it may not happen as explicitly as that but if you are passing around objects or object properties as event handlers, callbacks, etc into other functions, you have no way of knowing - or protecting against - that type of usage. So the bottom line is that the 'this' keyword is not something you can bank on. While you can completely control your immediate scope with the use of closures.
In a similar vein, you also have no guarantee that your object's prototype chain has not been altered. Unless, of course, you are creating a closure over the object and returning a wrapper.
Lastly, the closure structure more closely follows the Law of Demeter, since with prototypes your object would, theoretically, be "reaching through" the prototype chain. Using a closure to encapsulate other calls allows you to expose a single method which can in turn call other service methods. This provides greater maintainability and flexibility, since you now control the methods you expose directly without relying on the prototype chain. Of course, the LoD is just one way of doing things, so that may or may not be important to you.
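To illustrate that last point, a minimal sketch of a closure-based wrapper that exposes a single method and does not depend on "this" at all (the wrapper's shape is just an assumption):
var http = require('http');

function createDB(url) {
    // url is captured by the closure; callers cannot rebind it the way "this" can be rebound
    return {
        info: function (callback) {
            http.get(url + '/info', callback);
        }
    };
}

var mongo = createDB('http://localhost');
mongo.info.call({}, function () {});  // still works: nothing here relies on "this"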
Node follows JavaScript standards, so any JavaScript coding style is a proper coding style for node.js. But the following links may give you an overview of node.js coding style.
http://nodeguide.com/style.html
http://innofied.com/javascript-coding-standards-follow/
I use sjsClass: https://www.npmjs.org/package/sjsclass
Code example:
Class.extend('DB', {
    'protected url': null,
    __constructor: function (url) {
        this.url = url;
    },
    info: function (callback) {
        http.get(this.url + '/info', callback);
    }
});
There are benefits to both styles, and I think it depends on what your module/file is trying to expose. I heavily use the closure style for most modules in my code (like db abstraction, cache abstraction, mail, etc.), and I use constructors/prototype for objects I create a lot of (like a node in a doubly-linked list).
=== objects with attributes defined inside a closure
If you create an object (let's call it self), attach a bunch of methods to it inside that scope (self.x = ...), and at the end export self, then outside code only has access to what you added to self and cannot reach the local variables inside the function where you created self.
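A minimal sketch of that pattern (the mailer-ish names are made up):
function createMailer(apiKey) {
    var self = {};
    var key = apiKey; // local: not reachable from outside this function

    self.send = function (to, body) {
        // only code defined in this scope can see key
        console.log('sending with', key, 'to', to, ':', body);
    };

    return self; // everything exported hangs off self
}

module.exports = createMailer;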
=== constructors and prototype
On the other hand, if you create constructors and add methods/fields to them through the prototype, every function that attaches itself to your instance has access to its internal variables and state.
==
There are some things that are easier with prototypes, like EventEmitter and Streams, but it is not very hard to attach those to plain objects as well.
JavaScript is both an object-oriented language and a functional language, and it is missing the heavy-lifting tools on both sides:
proper inheritance, for example (ever seen this.super().super().someMethod()? I haven't, and you need it if both superclasses have the same method name),
or monads or simple generators on the functional-programming side.
So for me it makes sense to use both, and to pick the one that is most suited to your problem.
EDIT
There is one big benefit for objects which I totally forgot about.
In your second example you use a flow-control library (async in this case, but any deferred library will do), and it makes your code so much cleaner. However,
for your example to work, the get method of http.get has to be bound to http, which in many cases it is not, so your code ends up looking like http.get.bind(http).
If http were an object with get defined in its own scope, it would always work and you could pass it around to other code (like async).
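A small sketch of that difference (the closureHttp wrapper is made up for illustration):
var http = require('http');
var async = require('async');

// Prototype-style method: get may lose its receiver when detached,
// so it gets bound before being handed to async.
var detachedGet = http.get.bind(http);
var getInfo1 = async.apply(detachedGet, 'http://localhost/info');

// Closure-style object: get closes over what it needs and can be
// passed around freely without any binding.
var closureHttp = {
    get: function (url, callback) {
        return http.get(url, callback);
    }
};
var getInfo2 = async.apply(closureHttp.get, 'http://localhost/info');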
IMHO this discussion is larger than node ... it's about the JavaScript language.
So I suggest reading this:
http://addyosmani.com/resources/essentialjsdesignpatterns/book/
and googling a little about JavaScript design patterns!
A constructor can be used like this:
var db = new DB();
...
if (db instanceof DB) {
    ...
}
Closures can create private variables, like this:
function DB(url) {
    var urlParam = '&a=b';
    return {
        info: async.apply(http.get, url + '/info' + urlParam)
    };
}
Here urlParam is a private variable that cannot be read or set from outside.
If you only want a static or simple class, use closures.
This is subjective (opinion based), but only to a degree, so please don't rush to vote to close. It is causing some arguments at work, as everyone has a different opinion and people are trying to enforce a single way of doing it.
Simple context: when you have the option to save a reference in your closure to the instance or to use a polyfilled Function.prototype.bind, what possible disadvantages do you see to either approach?
To illustrate possible use cases, I just made up some class methods.
Pattern one, saved ref:
obj.prototype.addEvents = function () {
    var self = this;
    // reference can be local also - for unbinding.
    this.onElementClick = function () {
        self.emit('clicked');
        self.element.off('click', self.onElementClick);
    };
    this.element.on('click', this.onElementClick);
};
Pattern two, a simple fn.bind:
obj.prototype.addEvents = function () {
    // saved reference needs to be bound to this to be unbound
    // once again, this can be a local var also.
    this.onElementClick = function () {
        this.emit('clicked');
        this.element.off('click', this.onElementClick);
    }.bind(this);
    this.element.on('click', this.onElementClick);
};
Pattern two and a half, proto method to event:
obj.prototype.addEvents = function () {
    // delegate the event to a prototype method defined elsewhere
    this.element.on('click', this.onElementClick.bind(this));
};

obj.prototype.onElementClick = function () {
    this.emit('clicked');
    this.element.off('click', this.onElementClick); // won't match: on() received a freshly bound copy
};
Personally, I am of the opinion that there isn't a single correct way of doing this and that it should be judged on a per-case basis. I quite like the saved-reference pattern where possible, but I am being told off for it.
Question recap:
Are there any GC issues to be considered / be mindful of?
Are there any other obvious downsides or pitfalls you can think of on either method?
Polyfill performance, or even native .bind, vs a saved ref?
My personal preference is to use the saved reference method. Reasoning about the value of this can be very hard sometimes because of how JavaScript treats this.
The bind is nice but if you miss the .bind(this) it looks like a bug.
The latter exposes too much; every time you need a callback you'd need to expose another helper in your API.
There are many ways to use prototyping. I think the most important thing is to pick one and stick to it.
Are there any GC issues to be considered / be mindful of?
Older engines don't infer what variables are still used from the closure and do persist the whole scope. Using bind does make it easy because the context is explicitly passed and the un-collected scope does not contain additional variables.
However, this doesn't make a difference if you're using a function expression anyway (as in patterns #1 and #2).
Are there any other obvious downsides or pitfalls you can think of on either method?
Saving reference:
needs an additional line for declaring the variable, sometimes even a whole new scope (IEFE)
Code can't be easily moved because you need to rename your variable
Using bind:
Easily overlooked on the end of a function expression (just like the invocation of an IEFE), it's not clear what this refers to when reading from top to bottom
Easily forgotten
I personally tend to use bind because of its conciseness, but only with functions (methods) declared elsewhere.
Polyfill performance, or even native .bind, vs a saved ref?
You don't care.
In your example, you actually don't need that reference to the bound function and the off method. jQuery can take care of that itself: you can use the one method for binding fire-once listeners. Then your code can be shortened to
obj.prototype.addEvents = function () {
    this.element.one('click', this.emit.bind(this, 'clicked'));
};
I've been working with the SpiderMonkey C API and would like to implement a closure in C using their API. The one I would like to implement is fairly complex, but I can't even figure out how to do a simple one such as:
function x() {
    var i = 0;
    return function () { i++; print(i); };
}
var y = x();
y(); //1
y(); //2
y(); //3
I was wondering if anyone knows how I might do this. I found the JS_NewFunction method, but I don't actually know if that is a step in the right direction. Any help will be appreciated, thanks!
I don't know if there's a pure C way of doing closures or not. I would recommend, though, if you can, just implementing the functionality you need in JavaScript and simply evaluating that JavaScript text via JSAPI. From there, use JSAPI to grab whatever handles/variables you need to implement your host functionality. It's really onerous to do javascripty things using JSAPI; avoid it if you can.
Narrated as if you're probably still interested, a year later.
Knitting my brows furiously at the documentation for JS_GetParent, I see
For some functions, it is used to implement lexical scoping (but this is an implementation detail).
and then, along with a list of API functions that create functions,
Some of these functions allow the application to specify a parent object. If the JSAPI function creating the object has a parent parameter, and the application passes a non-null value to it, then that object becomes the new object's parent. Otherwise, if the context is running any scripts or functions, a default parent object is selected based on those.
I might experiment with this later, but it seems you might be able to do this simply by creating the function through the API during the execution of the function whose scope you want it to capture.
Otherwise, you might be able to set the lexical scope of a function to some object manually using JS_SetParent, but the documentation keeps ominously calling that use of parents 'internal'.
</necro>