Most efficient alternative to making functions within a loop - javascript

The function I want to create requires access to myObject.
Is it better to create the helper function inside the main function, so it has access to myObject through the scope?
Or should I define the helper function outside of myFunction and pass myObject as a parameter?
EDIT: Could I run into memory-leak problems with either method?
//Method 1
var myFunction = function(myObject){
    var helper = function(i){
        return console.log(i, myObject[i]);
    };
    for(var i in myObject){
        var callback = helper(i);
    }
};
//Method 2
var myFunction = function(myObject){
    for(var i in myObject){
        var callback = helper(i, myObject);
    }
};
var helper = function(i, myObject){
    return console.log(i, myObject[i]);
};

I assume you're asking about performance.
Here is a jsperf to test it for you (yup, this exists and is very neat).
Running the tests a bunch of times, I note that neither method clearly wins. It's very close and almost certainly doesn't matter. I also note that on Windows 8.1, IE 11 blows Chrome out of the water; it's not even close, like by a factor of 20. That's a neat little result, but it isn't really indicative of real-world code performance; likely we're just hitting an optimized case. Within IE, the in-function version is a tiny, almost irrelevant bit faster.
You can play with the inputs somewhat (for example, I just passed in a small object; maybe a big one has different results), but I'm ready to call a preliminary conclusion: don't worry about performance here. Use whichever one makes more sense for your specific case in terms of code clarity.
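If you'd rather get a rough local number without jsperf, here is a minimal benchmark sketch using console.time. The logging from the original methods is dropped so it doesn't dominate the timing, and the iteration count and sample object are arbitrary assumptions; treat the results as ballpark only.
//Rough local benchmark sketch; numbers are noisy, jsperf-style tools are more reliable.
var sampleObject = { a: 1, b: 2, c: 3 };

function method1(myObject) {
    var helper = function (i) {
        return myObject[i];
    };
    for (var i in myObject) {
        helper(i);
    }
}

function helper2(i, myObject) {
    return myObject[i];
}
function method2(myObject) {
    for (var i in myObject) {
        helper2(i, myObject);
    }
}

console.time('method1');
for (var n = 0; n < 100000; n++) { method1(sampleObject); }
console.timeEnd('method1');

console.time('method2');
for (var n = 0; n < 100000; n++) { method2(sampleObject); }
console.timeEnd('method2');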

Apart from the performance considerations mentioned by George Mauer, I think it is a good idea to keep it inside, to keep the global context clean.
Different functions can have their own helpers defined inside them with the same name, rather than having function1Helper, function2Helper... in the global context.
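As a small illustration of that point, here is a minimal sketch (the sortContacts/renderContacts names and the contact shape are made up): each function owns a private helper, and nothing named helper leaks into the global scope.
// Each function keeps its own 'helper' private; no helper name pollutes the global context.
var sortContacts = function (contacts) {
    var helper = function (a, b) { return a.name < b.name ? -1 : 1; };
    return contacts.slice().sort(helper);
};

var renderContacts = function (contacts) {
    var helper = function (c) { return c.name + ' <' + c.email + '>'; };
    return contacts.map(helper).join('\n');
};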

Related

Javascript closures performance

I have been working in JavaScript for a while, and I usually do something like this just to cache the values of properties that are declared inside a deep structure or "namespace":
//global scope
(function ($, lib) {
    //function scope 1
    var format = lib.format, // instead of calling lib.format all the time, just call format
        touch = lib.pointer.touch, // instead of calling lib.pointer.touch each time, just touch
        $doc = $(document),
        log = logger.log; //not console.log...
    $doc.on('app:ready', function () {
        //function scope 2
        $doc.on('some:event', function (e) {
            //function scope 3
            //use the cached variables
            log(format('{0} was triggered on the $doc object', e.type));
        });
        $doc.on(touch, function (e) {
            //function scope 3
            log(format('this should be {1} and is... {0} ', e.type, touch));
        });
    });
}(jQuery, lib));
I was doing that because:
as lazy as I am, writing touch seems more appealing than writing lib.pointer.touch; even when powerful IDEs with fancy autocompletion could help with this, touch is shorter.
a minifier could convert that single private variable to a single-letter variable, so it also made sense to me (I know, I know, never optimize too soon, but this seems safe, I guess).
All the code written that way seems to perform decently on mobile devices and desktop browsers, so it seems to be a safe practice (in "the practice", pun intended). But since this relies on closures, and inner functions have to create a closure to save the context in which they were declared, I was wondering...
If a function does not use variables from the outside context (free variables)... is the closure context still saved? (Or if a tree falls in the woods and nobody is there to hear it, does it still make the crash sound? hehe) I'm aware that this could vary between JavaScript engines, because the ECMAScript spec says nothing about whether the context must be saved when variables from the outside are not accessed.
If the above is true... will this block of code be more efficient?
//global scope
(function ($, lib) {
    //function scope 1
    var format = lib.format,
        touch = lib.pointer.touch,
        $doc = $(document),
        log = console.log;
    $doc.on('app:ready', function () {
        (function ($doc, touch, lib, format) {
            // since all the variables are provided as arguments to this function,
            // there is no need to save them to the [[scope]] of this function,
            // because they are local to this self-invoking function now
            $doc.on('some:event', function (e) {
                //function scope 3
                //use the cached variables
                log(format('{0} was triggered on the $doc object', e.type));
            });
            $doc.on(touch, function (e) {
                //function scope 3
                log(format('this should be {1} and is... {0} ', e.type, touch));
            });
        }($doc, touch, lib, format));
    });
}(jQuery, lib));
Is it more efficient because it passes those variables to a self-invoking function? Will the cost of creating that new function have any impact on the code (negative or positive)?
How can I properly measure the memory consumption of my JavaScript library in a reliable way? I have 100+ little JavaScript modules, all inside immediately invoked functions, mostly to avoid variables leaking into the global context. So they are all wrapped in modules very similar to the block of code above.
Will it be more effective to cache the variables closer to where they are used, even when that means repeating the declarations?
I have the feeling that when looking for a variable that is not in the current local context, the engine will first look into the parent scope and iterate over all the variables at that level... the more variables per level, the worse the lookup performance will probably be.
Trying to resolve an undefined variable from the inner closures will be the most expensive, because by definition the variable will first be searched for in each parent scope, and not finding it will force the engine to finally reach the global scope. Is that true? Do engines optimize this kind of lookup?
In the end... I know that I will not want to implement my code as in the second example, mostly because it will make the code harder to read, and I'm kinda comfortable with the minified size of the final output using the first approach. My question is motivated by curiosity and by trying to understand a bit better this really nice feature of JavaScript.
According to this test...
http://jsperf.com/closures-js
it seems the second approach is faster. But it is only evident when iterating an insane number of times... Right now my code does not do that number of iterations... but it is probably consuming more memory because of my way of coding...
Update: it has been pointed out to me that this question is too large. I'm sorry; I will try to break it into smaller parts. This question was motivated mostly by curiosity, as I said; performance seems negligible even on mobile devices. Thank you for all your feedback.
I think this is premature optimization. You already know performance is not a problem in most cases. Even in tight loops, performance does not degrade that badly. Let the JavaScript engine optimize this on its own, as Chrome has started doing by removing unneeded variables from closures.
One important thing: don't make your code harder to read with unnecessary optimization. Your example takes quite a bit more code, hampering development. In some cases we are forced to make the code harder to read because we know a particular piece of the app is more memory/performance intensive, but only at that point should we do that.
If you add a breakpoint to the following code (in Chrome), you'll see that the world variable has been optimized out of the closure. Look at the 'Closure' node under Scope Variables: http://jsfiddle.net/4J6JP/1/
(function(){
    var hello = "hello", world = "world";
    setTimeout(function(){
        debugger;
        console.log(hello);
    });
})();
Note that if you add an eval in that inner function, then all bets are off and the closure can't be optimized.
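For comparison, here is a minimal sketch of the eval case (a variation of the fiddle above, not from the original answer): pause at the debugger statement in Chrome and both hello and world should remain visible under the Closure node, since the presence of eval forces the engine to keep the entire scope alive.
(function(){
    var hello = "hello", world = "world";
    setTimeout(function(){
        debugger;
        // eval could reference any outer variable by name,
        // so the engine cannot drop 'world' from the closure.
        eval('console.log(hello)');
    });
})();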

Is it better to exit from a function to cut down on Activation Objects than to call functions recursively or in a nested fashion?

In JavaScript and other languages, I've heard about Activation Objects being created as you invoke a method/function. In order to optimize and maintain good performance, it sounds like a developer should limit how many functions are being called.
Now if there's no way around it and you must call multiple methods, is it better to call one method after another, like this:
myFunc1();
myFunc2();
myFunc3();
// or...
var myFuncs = [myFunc1, myFunc2, myFunc3];
for(var a=0, aLen=myFuncs.length; a<aLen; a++) {
    myFuncs[a]();
}
OR, to nest them like this:
function myFunc1() {
    // Do something...
    myFunc2();
}
function myFunc2() {
    // Do something else...
    myFunc3();
}
function myFunc3() {
    // Do one last thing.
}
//Start the execution of all 3 methods:
myFunc1();
I'm assuming it makes more sense to go with the 1st technique, since it comes back to the previous scope and releases the last Activation Object... but if someone could confirm this,
I would really like to know!
Thanks
In order to optimize and maintain a good performance, it sounds like a developer should limit how many functions are being called.
Yes and no. Functions (or more generally, subroutines) are there to be called, and not doing so makes no sense. If you can make your code more DRY by introducing another function, do so.
The only place where not using them is reasonable is in high-performance loops which run thousands of times doing little work, where function calls would add noticeable overhead. Do not try to prematurely optimize!
Also, there are some languages which do not handle recursion well and where you will need to translate recursive function calls into loops to prevent stack overflow exceptions. However, this is a rare case as well.
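To make that translation concrete, here is a minimal sketch (a simple countdown, unrelated to the myFunc1/2/3 example) showing the same work done recursively and as a loop.
// Recursive version: each call adds a stack frame, which can overflow for very large n.
function countDownRecursive(n) {
    if (n <= 0) { return; }
    countDownRecursive(n - 1);
}

// Iterative version: a single stack frame regardless of n.
function countDownIterative(n) {
    while (n > 0) {
        n--;
    }
}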
is it better to call one method after another, or to nest them?
That depends, since the two techniques do different things. With #1, there are just 3 independent functions which are called after each other. In contrast, #2 defines functions that always call each other - you can't get myFunc2 without myFunc3. Is that intended?
If it is, there's nothing wrong with this nesting. The two additional stack layers will not harm your performance.
For information concerning Activation Objects, please refer to http://dmitrysoshnikov.com/ecmascript/chapter-2-variable-object/#more-546
This is not an optimization-level concern, however; the concern you listed is an example of EXTREME premature optimization, and your time is not worth that type of investment. And actually, in the example you listed above, there is little to no savings when you are looking at Activation Objects alone.
As for proper use however, I try to encapsulate as much as I can. If a function doesn't have to go in the global scope, and can live within the scope of another function, then that's where it should be declared.
For example, for better scoping:
var f2 = function() {
};
var f1 = function() {
    f2();
};
// is not as nice as:
var f1 = function() {
    var f2 = function() {
    };
    f2();
};
// or even better..
var f1 = function() {
    (function() {
        // ...
    })(); // execute immediately
};
Separation of responsibility:
private function myFunc1(): void
{
}
private function myFunc2(): void
{
}
private function myFunc3(): void
{
}
private function doAllFunc(): void
{
    myFunc1();
    myFunc2();
    myFunc3();
}

What is the difference between these two styles of writing functions in Javascript

I am struggling to find out which is better between the following two versions of JavaScript functions:
var FirstName = function(){
    var value = 0;
    this.getValue = function(){
        return value;
    };
};
and
var FirstName = function(){
    var value = 0;
    return {
        getValue: function(){
            return value;
        }
    };
};
I do understand that the latter forms a closure, but I do not understand, from a usage perspective, what advantage the closure in the second style provides over the first one.
EDIT: Based on a comment from Felix, both functions form closures. So semantically there is no difference between these two functions (as far as I understand them). So which is the preferred way? Is there any guideline?
The second way seems a little redundant in this case, but I'm sure there are cases where it may be more beneficial. I personally go with the first one just because it usually makes for cleaner and less confusing code. It's just a preference thing: whichever you like and whichever is consistent with the rest of the code you are working on. Consistency is the most important thing.
The first method does not return anything but just creates the function as a global variable (when called without new), whereas the second returns the inner function as a member of an object.
By returning the inner function, you can set and maintain different states of the function throughout your code, whereas with the first one, it will overwrite the method every time you run it. It seems like the latter is more flexible...
You can mess around with this demo I made.
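To make the practical difference concrete, here is a minimal usage sketch. It assumes the first style is invoked with new; the second is renamed FirstNameFactory here purely to avoid a name clash in the example.
// Style 1: constructor - getValue is attached to the new instance via 'this'.
var FirstName = function () {
    var value = 0;
    this.getValue = function () {
        return value;
    };
};
var a = new FirstName();     // works as intended
// var b = FirstName();      // without 'new', 'this' is the global object (or undefined in strict mode)
console.log(a.getValue());   // 0

// Style 2: factory - an object literal is returned, so 'new' is unnecessary.
var FirstNameFactory = function () {
    var value = 0;
    return {
        getValue: function () {
            return value;
        }
    };
};
var c = FirstNameFactory();  // works with or without 'new'
console.log(c.getValue());   // 0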

Executing dynamically passed function call with jQuery

I have this function call passed as a string:
var dcall = "tumbsNav(1)";
Is it possible to execute this dynamically like exec in SQL??
exec(dcall)
eval is the equivalent, but you should NOT use it.
Instead, pass the function like this:
var dcall = function() {tumbsNav(1);};
Then call it with:
dcall();
To do this, you want eval(dcall).
eval can open terrible security holes and performance issues in your program. If you need to use it, that usually means you have designed your program poorly.
Instead, you might keep a reference to the function you want to call and hold an array of arguments and use apply, e.g. tumbsNav.apply(null, [1]);. I don't know your code, so that's the most general solution I can offer.
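If the call really does arrive as data you don't control (markup, config, a server response), one common pattern along those lines is a lookup table of allowed functions. A minimal sketch, assuming tumbsNav is already defined; the allowedFns table and the name/args storage format are illustrative assumptions, not from the question:
// Whitelist of functions that may be called dynamically.
var allowedFns = {
    tumbsNav: tumbsNav
};

// Store the call as data rather than as code.
var dcall = { name: 'tumbsNav', args: [1] };

// Later: look the function up and apply its arguments - no eval needed.
var fn = allowedFns[dcall.name];
if (fn) {
    fn.apply(null, dcall.args);
}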
Wherever you're storing
var dcall = "tumbsNav(1)";
You can instead store
var dcall = function() {
return tumbsNav(1);
};
Wherever you were calling it, instead of calling
eval(dcall);
You can instead call
dcall();
The only case where this wouldn't work is if tumbsNav wasn't defined at the time var dcall = ... is called. Then you would have to store the string. If the string is completely under your control, then there's no security hole, but be aware of all the problems mentioned by @Porco.
As Kolink mentioned, my example would not cause a problem if tumbsNav was not defined at assignment time, since dcall is assigned a wrapped anonymous function that only calls tumbsNav when invoked. The comment above would only make sense if the example had been the following:
var dcall = tumbsNav, darg = 1;
// later in the code, you can call
dcall(darg);
Use eval(dcall).
As others have mentioned, eval is considered bad practice. The main reasons for this are:
1) Improper use can leave your code vulnerable to injection attacks.
2) Maintaining code becomes more difficult (no line numbers, can't use debugging tools)
3) Executes more slowly (browsers can't compile)
4) Scope becomes impossible to predict.
However, if you understand all these then eval can be very helpful.

Object Orientated Javascript / Variable declarations / Performance

So I have a rather large object-oriented JavaScript class, with about 120 functions (a lot of getters and setters). Some of these functions use variables that are basically constants.
What I'm wondering is: should I declare these variables in the outer scope of the object, so that the function doesn't have to re-declare them every time it runs?
An example function is below. this.displayContacts is run several times (and will always run within the object), so in this case, there's no point declaring the 'codes' object inside the function?
function orderObject() {
    this.displayContacts = function() {
        var codes = {'02':'02','03':'03','07':'07','08':'08'};
        // do something with codes
    };
}
So, would this be better, performance wise?
function orderObject() {
    var codes = {'02':'02','03':'03','07':'07','08':'08'};
    this.displayContacts = function() {
        // do something with codes.
    };
}
My other concern is that if I end up with a lot of global variables/objects inside the main orderObject, will that be MORE of a performance hit than simply re-declaring the variables each time?
absolutely.
function MyClass() {
    this.somevar = ''; // instance scoped variable
};
MyClass.CODES = {'02':'02'...}; // 'Class' scoped variable; one instance for all objects
MyClass.prototype.instanceMethod = function(){
    // 'this' refers to the object *instance*
    // it can still use MyClass.CODES, but can also account for variables local to the class
};
A constant like CODES is 'static', so to speak, in Java-talk. If your codes are global to the class (and the rest of your application), you will save a lot of overhead this way; the object is only defined once. Note that you can have 'static' class-level methods as well, for those cases where the function doesn't need to operate on variables specific to an instance of the class.
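For instance, here is a minimal sketch of such a class-level ('static') method alongside CODES; the isValidCode and setCode names are made up for illustration.
// Class-level ('static') method: it only needs MyClass.CODES, not any instance state.
MyClass.isValidCode = function (code) {
    return MyClass.CODES.hasOwnProperty(code);
};

// Instance method: uses per-instance state via 'this'.
MyClass.prototype.setCode = function (code) {
    if (MyClass.isValidCode(code)) {
        this.somevar = code;
    }
};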
Unless your app is really beefy, performance optimization probably won't make it noticeably faster. But that doesn't mean OO design is not worthwhile: if you are going to use JavaScript in an object-oriented way, it's not too hard, and never a bad idea, to use good OO principles.
I would say that if you have something that you are using in multiple places that it should become a property of your object so that it doesn't have to be redeclared each time. It would also help make the maintenance of the object easier if that constant has to change. Then you are changing it only in one place and not having to hunt down all the locations where you used it.
Don't repeat yourself.
Garbage collection in JavaScript depends on the browser, and most modern browsers handle it pretty well. If you go ahead and make these global, you might see a slight performance increase simply because it's not executing that line of code every time. However, I can't imagine any significant increase in performance by making these static properties on the class, but if they don't change, then it would make more sense.
