While writing JavaScript, you can define a function in three different ways.
1] A function in the global namespace
function doSomething() {}
2] A function that is a member of a (constructor) function
function Clazz() {}
Clazz.doSomething = function(){};
3] A function that is a member of the function's prototype, i.e. shared by its instances
function Clazz() {}
Clazz.prototype.doSomething = function(){};
Depending on how the code is organized, one might choose one of the above methods over the others.
But purely from a performance standpoint, which is the most efficient one? (especially between 1 and 2)
Will your answer be different if doSomething has arguments?
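For reference, here is a minimal sketch of how each form would be called; Clazz and doSomething are just the placeholder names from above:

// 1) Global function
function doSomething() { return 1; }
doSomething();

// 2) Function attached to the constructor function itself
function Clazz() {}
Clazz.doSomething = function () { return 2; };
Clazz.doSomething();

// 3) Function on the prototype, called through an instance
Clazz.prototype.doSomething = function () { return 3; };
new Clazz().doSomething();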
From a pure performance POV, 1 should be the fastest. The reason is that it requires less work to set up the scope chain and execution context. Also, if you access any global variables from within the function, the resolution will be fastest with 1, again simply because of the shallower scope chain.
As a general rule, the fewer levels of the scope chain that have to be traversed to resolve a name, the faster the lookup; for the same reason, accessing property a.b will be faster than accessing a.b.c.
The performance gain might not be much for a single call, but it can mount up if, say, you call the function in a loop.
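If you want to check this on your own engine, here is a rough sketch; the function names and iteration count are made up, and the absolute numbers will vary a lot between engines:

function globalFn(n) { return n + 1; }

function Clazz() {}
Clazz.staticFn = function (n) { return n + 1; };
Clazz.prototype.protoFn = function (n) { return n + 1; };

const instance = new Clazz();
const ITERATIONS = 1e7; // arbitrary

console.time('global');
for (let i = 0; i < ITERATIONS; i++) globalFn(i);
console.timeEnd('global');

console.time('static');
for (let i = 0; i < ITERATIONS; i++) Clazz.staticFn(i);
console.timeEnd('static');

console.time('prototype');
for (let i = 0; i < ITERATIONS; i++) instance.protoFn(i);
console.timeEnd('prototype');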
None of those declarations does the same thing, and they aren't interchangeable, so what kind of comparison do you expect? It's like asking whether it's faster to instantiate 10 variables or an array with 10 items: one may be faster, but the result is not the same.
You cannot compare performance between these function declarations.
For example, add(a, b) functions declared in all three places give the same performance. Performance depends on how you write your code, not on where you declare your functions...
You are missing the most optimized one:
var x = function(){}
When JavaScript sees the definition:
function x(){}
it then converts it into the former form. If you do it that way in the first place there is a negligible speed-up, but for the sake of answering your question, this is the most optimal.
I came across this excerpt while reading Chapter 2 of "You Don't Know JS Yet".
But beware, it's more complicated than you'll assume. For example, how might you determine if two function references are "structurally equivalent"? Even stringifying to compare their source code text wouldn't take into account things like closure.
I just want to make sure I understand correctly what the author meant by "closure". I'm thinking of this example:
function x() {
  console.log('Hello');
}

const foo = x;

function y() {
  const bar = x;
  if (foo.toString() === bar.toString()) { // returns true but the closure of foo and bar is different
    // do something
  }
}
Also, under what circumstances would we need to compare two functions? Thanks.
Here is an example of two functions that look the same yet will behave differently because of their closure:
const what = "great",
      fun1 = () => console.log(`This is ${what}!`);

{ // this block will have its own closure
  const what = "stupid",
        fun2 = () => console.log(`This is ${what}!`);

  console.log(fun1.toString() == fun2.toString()); // true: identical source text
  fun1(); // "This is great!"
  fun2(); // "This is stupid!"
}
The author means that closure is the data that a function carries with it, including the variables from its surrounding scope that are used in its body. In the example you gave, even though the two functions foo and bar have the same source code (as indicated by the toString() comparison), they are not structurally equivalent because they have different closure values.
As for when to compare two functions, you might need to compare two functions in certain scenarios, such as when you want to determine if two functions have the same behavior, if they have the same closure, or if they are bound to the same execution context.
As far as I can tell, the author is using "closure" as a reference to scope. I think you are correct.
As for the comparison of two functions, the most obvious case that comes to mind is comparing the time and space complexity of two functions that perform the same overall task. However, with regard to the author's "structurally equivalent" notion, I don't know.
Are there any limits to what types of values can be set using const in JavaScript, and in particular, functions? Is this valid? Granted it does work, but is it considered bad practice for any reason?
const doSomething = () => {
...
}
Should all functions be defined this way in ES6? It does not seem like this has caught on, if so.
There's no problem with what you've done, but you must remember the difference between function declarations and function expressions.
A function declaration, that is:
function doSomething () {}
Is hoisted entirely to the top of the scope (and like let and const they are block scoped as well).
This means that the following will work:
doSomething() // works!
function doSomething() {}
A function expression, that is:
[const | let | var] doSomething = function () {} (or () => {})
Is the creation of an anonymous function (function () {}) and the creation of a variable, and then the assignment of that anonymous function to that variable.
So the usual rules around variable hoisting within a scope apply -- block-scoped variables (let and const) do not hoist as undefined to the top of their block scope.
This means:
if (true) {
  doSomething() // will fail
  const doSomething = function () {}
}
Will fail since doSomething is not defined. (It will throw a ReferenceError)
If you switch to using var you get your hoisting of the variable, but it will be initialized to undefined so that block of code above will still not work. (This will throw a TypeError since doSomething is not a function at the time you call it)
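For example, a minimal sketch of that var case:

if (true) {
  doSomething() // TypeError: doSomething is not a function
  var doSomething = function () {} // hoisted as `undefined`, assigned only here
}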
As far as standard practices go, you should always use the proper tool for the job.
Axel Rauschmayer has a great post on scope and hoisting including es6 semantics: Variables and Scoping in ES6
Although using const to define functions seems like a hack, it comes with some great advantages that make it superior (in my opinion):
It makes the function immutable, so you don't have to worry about that function being changed by some other piece of code.
You can use fat arrow syntax, which is shorter & cleaner.
Using arrow functions takes care of this binding for you.
Example with a function declaration:
// define a function
function add(x, y) { return x + y; }
// use it
console.log(add(1, 2)); // 3
// oops, someone mutated your function
add = function (x, y) { return x - y; };
// now this is not what you expected
console.log(add(1, 2)); // -1
The same example with const:
// define a function (wow! that is 8 chars shorter)
const add = (x, y) => x + y;
// use it
console.log(add(1, 2)); // 3
// someone tries to mutate the function
add = (x, y) => x - y; // Uncaught TypeError: Assignment to constant variable.
// the intruder fails and your function remains unchanged
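The first two points are illustrated above; here is a minimal sketch of the third one (the counter object and timer callback are made-up examples), showing that an arrow function keeps the this of its enclosing scope while a regular function gets its own:

const counter = {
  count: 0,
  startArrow() {
    // the arrow keeps the enclosing `this`, so it still points at `counter`
    setTimeout(() => console.log(this === counter), 0); // true
  },
  startRegular() {
    // a regular function gets its own `this` (the timer/global context), not `counter`
    setTimeout(function () { console.log(this === counter); }, 0); // false
  },
};

counter.startArrow();
counter.startRegular();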
It has been three years since this question was asked, but I am just now coming across it. Since this answer is so far down the stack, please allow me to repeat the question:
Q: I am interested if there are any limits to what types of values can be
set using const in JavaScript—in particular functions. Is this valid?
Granted it does work, but is it considered bad practice for any
reason?
I was motivated to do some research after observing one prolific JavaScript coder who always uses the const statement for functions, even when there is no apparent reason/benefit.
In answer to "is it considered bad practice for any reason?" let me say: IMO, yes it is, or at least, there are advantages to using the function statement.
It seems to me that this is largely a matter of preference and style. There are some good arguments presented above, but none so clear as is done in this article:
Constant confusion: why I still use JavaScript function statements, by Bill Sourour (medium.freecodecamp.org), JavaScript guru, consultant, and teacher.
I urge everyone to read that article, even if you have already made a decision.
Here are the main points:
Function statements have two clear advantages over [const] function
expressions:
Advantage #1: Clarity of intent
When scanning through thousands of lines of code a day, it’s useful to be able to figure out the programmer’s intent as quickly and easily as possible.
Advantage #2: Order of declaration == order of execution
Ideally, I want to declare my code more or less in the order that I
expect it will get executed.
This is the showstopper for me: any value declared using the const
keyword is inaccessible until execution reaches it.
What I’ve just described above forces us to write code that looks
upside down. We have to start with the lowest level function and work
our way up.
My brain doesn’t work that way. I want the context before the details.
Most code is written by humans. So it makes sense that most people’s
order of understanding roughly follows most code’s order of execution.
There are some very important benefits to the use of const and some would say it should be used wherever possible because of how deliberate and indicative it is.
It is, as far as I can tell, the most indicative and predictable declaration of variables in JavaScript, and one of the most useful, BECAUSE of how constrained it is. Why? Because it eliminates some possibilities available to var and let declarations.
What can you infer when you read a const? You know all of the following just by reading the const declaration statement, AND without scanning for other references to that variable:
the binding can't be reassigned to a new value (although the underlying object is not deeply immutable)
it can’t be accessed outside of its immediately containing block
the binding is never accessed before declaration, because of Temporal Dead Zone (TDZ) rules.
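A minimal sketch of those three constraints (the variable name is arbitrary):

{
  // console.log(limit); // ReferenceError: in the temporal dead zone until declared
  const limit = 10;      // must be assigned at declaration
  // limit = 20;         // TypeError: Assignment to constant variable.
  console.log(limit);    // 10
}
// console.log(limit);   // ReferenceError: not visible outside its block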
The following quote is from an article arguing the benefits of let and const. It also more directly answers your question about the keyword's constraints/limits:
Constraints such as those offered by let and const are a powerful way of making code easier to understand. Try to accrue as many of these constraints as possible in the code you write. The more declarative constraints that limit what a piece of code could mean, the easier and faster it is for humans to read, parse, and understand a piece of code in the future.
Granted, there’s more rules to a const declaration than to a var declaration: block-scoped, TDZ, assign at declaration, no reassignment. Whereas var statements only signal function scoping. Rule-counting, however, doesn’t offer a lot of insight. It is better to weigh these rules in terms of complexity: does the rule add or subtract complexity? In the case of const, block scoping means a narrower scope than function scoping, TDZ means that we don’t need to scan the scope backwards from the declaration in order to spot usage before declaration, and assignment rules mean that the binding will always preserve the same reference.
The more constrained statements are, the simpler a piece of code becomes. As we add constraints to what a statement might mean, code becomes less unpredictable. This is one of the biggest reasons why statically typed programs are generally easier to read than dynamically typed ones. Static typing places a big constraint on the program writer, but it also places a big constraint on how the program can be interpreted, making its code easier to understand.
With these arguments in mind, it is recommended that you use const where possible, as it’s the statement that gives us the least possibilities to think about.
Source: https://ponyfoo.com/articles/var-let-const
There are special cases where arrow functions just won't do the trick:
If we're changing a method of an external API, and need the object's reference.
If we need to use special bindings that are exclusive to regular function expressions: arguments, yield, a bindable this, etc.
For more information:
Arrow function expression limitations
Example:
I assigned this function as an event handler in the Highcharts API.
It's fired by the library, so the this keyword should match a specific object.
export const handleCrosshairHover = function (proceed, e) {
  const axis = this; // axis object
  proceed.apply(axis, Array.prototype.slice.call(arguments, 1)); // method arguments
};
With an arrow function, this would match the declaration scope, and we won't have access to the API obj:
export const handleCrosshairHover = (proceed, e) => {
  const axis = this; // this = undefined
  proceed.apply(axis, Array.prototype.slice.call(arguments, 1)); // compilation error
};
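Similarly, arguments only exists inside a regular function; a minimal sketch (rest parameters are the usual workaround for arrows):

const regular = function () {
  return arguments.length; // `arguments` is bound by the regular function
};

const arrow = (...args) => {
  // there is no own `arguments` binding in an arrow; rest parameters replace it
  return args.length;
};

console.log(regular(1, 2, 3)); // 3
console.log(arrow(1, 2, 3));   // 3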
The function I want to create requires access to myObject.
Is it better to create the helper function inside the main function, so I have access to myObject through its scope?
Or should I make the helper function outside of myFunction and pass myObject as a parameter?
EDIT: Could I get memory-leak problems with these methods?
// Method 1
var myFunction = function(myObject) {
  var helper = function(i) {
    return console.log(i, myObject[i]);
  };
  for (var i in myObject) {
    var callback = helper(i);
  }
};

// Method 2
var myFunction = function(myObject) {
  for (var i in myObject) {
    var callback = helper(i, myObject);
  }
};
var helper = function(i, myObject) {
  return console.log(i, myObject[i]);
};
I assume you're asking for performance.
Here is a jsperf to test it for you (Yup, this exists and is very neat).
So in running the tests a bunch of times I note that neither method clearly wins. It's very close and almost certainly doesn't matter. I also note that on Windows 8.1, IE 11 blows Chrome out of the water, it's not even close - like by a factor of 20. That's a neat little result but not really indicative of real world code performance and likely we're just hitting an optimized case. Within IE the in-function version is a tiny-almost-irrelevant bit faster.
You can play with the inputs somewhat (for example I just passed in a small object, maybe a big one has different results), but I'm ready to call a preliminary conclusion: Don't worry about performance here. Use whichever one makes more sense for your specific case in terms of code clarity.
Apart from the performance point mentioned by George Mauer, I think it is a good idea to keep it inside, to keep the global context clean.
Different functions can have their own helpers defined inside them with the same name, rather than having function1Helper, function2Helper... in the global context.
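A minimal sketch of what that looks like (the function and data names are made up):

// Each function keeps its own `helper`; nothing is added to the global scope
function formatUsers(users) {
  const helper = (u) => u.name.toUpperCase();
  return users.map(helper);
}

function formatOrders(orders) {
  const helper = (o) => `#${o.id}`; // same name, completely independent
  return orders.map(helper);
}

console.log(formatUsers([{ name: 'ada' }])); // ['ADA']
console.log(formatOrders([{ id: 42 }]));     // ['#42']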
So I have a rather large object-oriented JavaScript class, with about 120 functions (a lot of getters and setters). Some of these functions use variables that are basically constants.
What I'm wondering is: should I declare these variables in the enclosing scope of the object, so the variable doesn't have to be re-declared every time the function is run?
An example function is below. this.displayContacts is run several times (and will always run within the object), so in this case, is there any point in declaring the 'codes' object inside the function?
function orderObject() {
  this.displayContacts = function() {
    var codes = {'02': '02', '03': '03', '07': '07', '08': '08'};
    // do something with codes
  };
}
So, would this be better, performance wise?
function orderObject() {
  var codes = {'02': '02', '03': '03', '07': '07', '08': '08'};
  this.displayContacts = function() {
    // do something with codes.
  };
}
My other concern is that if I end up with a lot of global variables/objects inside the main orderObject, will that be MORE of a performance hit than simply re-declaring the variables each time?
Absolutely.
function MyClass() {
  this.somevar = ''; // instance-scoped variable
}

MyClass.CODES = {'02':'02'...}; // 'class'-scoped variable; one instance for all objects

MyClass.prototype.instanceMethod = function() {
  // 'this' refers to the object *instance*
  // it can still use MyClass.CODES, but can also account for variables local to the class
}
CODES here is 'static', so to speak, in Java-talk. If your codes are global to the class (and the rest of your application), you will save a lot of overhead this way -- you only define the object once. Note that you can have 'static' class-level methods as well, for those cases where the function doesn't need to operate on variables specific to an instance of the class.
Unless your app is really beefy, performance optimization probably won't make it noticeably faster. But that doesn't mean that OO design is not worthwhile -- if you are going to use JavaScript in an object-oriented way, it's not too hard, and never a bad idea, to use good OO principles.
I would say that if you have something that you are using in multiple places that it should become a property of your object so that it doesn't have to be redeclared each time. It would also help make the maintenance of the object easier if that constant has to change. Then you are changing it only in one place and not having to hunt down all the locations where you used it.
Don't repeat yourself.
Garbage collection in JavaScript depends on the browser, and most modern browsers handle it pretty well. If you go ahead and make these global, you might see a slight performance increase simply because it's not executing that line of code every time. However, I can't imagine any significant increase in performance by making these static properties on the class, but if they don't change, then it would make more sense.
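A minimal sketch of the "property of your object" approach described above (the names mirror the question's example, but the body is made up):

function orderObject() {
  // created once per instance, reused by every method, and changed in one place only
  this.codes = { '02': '02', '03': '03', '07': '07', '08': '08' };

  this.displayContacts = function () {
    // do something with this.codes
    console.log(Object.keys(this.codes));
  };
}

var order = new orderObject();
order.displayContacts(); // ['02', '03', '07', '08']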
Let's say that I have the following scenario:
var namespace = {};
(function($) {
  $.extend(namespace, {
    test1: function(someArray, someObj) {
      for (var i = 0, ii = someArray.length; i < ii; i++) {
        var text = someObj[someArray[i]];
        // do something with text
      }
    },
    test2: function(someArray, someObj, i, ii, text)
    /*
      note that i, ii and text are unused parameters
      that will be used instead of variables
    */
    {
      for (i = 0, ii = someArray.length; i < ii; i++) {
        text = someObj[someArray[i]];
        // do something with text
      }
    }
  });
})(jQuery);
Now, the results of test1 and test2 are the same... but what about the performance and memory usage?
Is there any difference between declaring the i, ii and text variables in the two ways presented above?
I think that test2, for example, is probably more efficient, because the variables are in the local function scope, so after the function exits the execution context is destroyed, releasing the resources used for the arguments... and the variables will not be assigned to the global object 'window'.
So what method is performing best? and why?
[Edit]
Thanks all for your answers !
There is no problem if the code has readability issues... I'm only interested now in the performance/memory usage.
If you do not declare your variables with var, they become implicitly global.
Always declare your variables. If you did any benchmarking on this, you would find that declared local variables are actually faster than implied globals. Also, you don't leak into the global state that way.
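A minimal sketch of the difference (sloppy mode; in strict mode the undeclared assignment would throw a ReferenceError instead):

function leaky() {
  for (i = 0; i < 3; i++) {} // no `var`: `i` is assigned onto the global object
}

function contained() {
  for (var i = 0; i < 3; i++) {} // `var`: `i` stays local to the function
}

leaky();
console.log(typeof i); // 'number' -- the leaked global is visible here
contained();
console.log(i);        // still 3, untouched by contained()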
Benchmark!
As you can see the performance is identical.
In terms of memory usage, local variables (test1) are probably better, as the compiler doesn't have to remember that the function has 5 parameters.
But that's a nano-optimisation. If you care about performance differences of this caliber, write assembly instead. Go for readable and maintainable code.
[Edit]
Didn't notice the "local" variables in the method parameters. That is a readability killer! Don't do that. You will find that test1 is probably still more efficient.
Why don't you profile your code?
The variables are also local in test1. You are declaring them with var.
There is no difference between these methods.
The variables in "test1" are all declared with var, so they're not global. Those two should be essentially the same.
test1 is faster, because every time JavaScript looks for symbols (such as a variable name) it starts by looking in the local scope. So by using global scope it has to look in more places to find the symbols. The same applies for parameters, but they are better than globals.
It may be minuscule, but declaring a new variable (text) in every iteration would require a new memory allocation, I believe. Though I'm not sure how JavaScript handles that. I usually declare variables beforehand and then assign values afterwards for that reason, but that is only because someone said "hey, you should do it that way" and presented the same argument.