In terms of memory consumption, are these equivalent or do we get a new function instance for every object in the latter?
var f = function () { alert(this.animal); };
var items = [];
for (var i = 0; i < 10; ++i)
{
    var item = { "animal": "monkey" };
    item.alertAnimal = f;
    items.push(item);
}
and
var items = [];
for (var i = 0; i < 10; ++i)
{
    var item = { "animal": "monkey" };
    item.alertAnimal = function () { alert(this.animal); };
    items.push(item);
}
EDIT
I'm thinking that in order for closure to work correctly, the second instance would indeed create a new function each pass. Is this correct?
You should prefer the first method, since the second one creates a new function every time the interpreter passes that line.
Regarding your edit: we are in the same scope the whole time, since JavaScript has function scope rather than block scope, so this might be optimizable, but I have not encountered an implementation that doesn't create the function every time. I would recommend not relying on this (probably possible) optimization, since implementations that lack it could easily exceed memory limits if you use this technique extensively (which is bad, since you don't know which implementation will run your code, right?).
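If you want to observe this yourself, here is a minimal sketch reusing the code from the question; the console.log comparisons are just an assumed way to check how many function objects were created:
var f = function () { alert(this.animal); };
var itemsShared = [], itemsPerObject = [];
for (var i = 0; i < 10; ++i) {
    // first variant: every object points to the same function instance
    itemsShared.push({ "animal": "monkey", alertAnimal: f });
    // second variant: a new function object is created on every pass
    itemsPerObject.push({ "animal": "monkey", alertAnimal: function () { alert(this.animal); } });
}
console.log(itemsShared[0].alertAnimal === itemsShared[1].alertAnimal);       // true
console.log(itemsPerObject[0].alertAnimal === itemsPerObject[1].alertAnimal); // false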
I am not an expert, but it seems to me that different JavaScript engines could handle this in different ways.
For example, V8 has something called hidden classes, which could affect memory consumption when accessing the same property. Maybe somebody can confirm or deny this.
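As a hedged illustration of the hidden-class idea (the constructor and property names below are made up, not from the question): V8 can share one hidden class between objects whose properties are added with the same names in the same order, while ad-hoc additions in a different order force extra hidden classes.
// Objects built this way share the same layout, so V8 can reuse one hidden class.
function Item(animal) {
    this.animal = animal;
}
var a = new Item('monkey');
var b = new Item('lemur');
// Adding properties ad hoc and in a different order creates additional hidden classes.
var c = {};
c.sound = 'hoot';
c.animal = 'gibbon';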
Related
I just want to make sure I understand this basic idea correctly, and no matter how I phrase it, I'm not coming up with completely relevant search results.
This is faster:
function () {
    $('#someID').someMethod( $('#someOtherID').data('some-data') );
}
than this:
function () {
    var variableOne = $('#someID'),
        variableTwo = $('#someOtherID').data('some-data');
    variableOne.someMethod( variableTwo );
}
is that correct?
I think the question may be "Is declaring variables slower than not declaring variables?" :facepalm:
The particular case where I questioned this is running a similarly constructed function after an AJAX call where some events must be delegated on to the newly loaded elements.
The answer you will benefit from the most is: it does not make a difference.
Declaring variables and storing data in them is something you do so that you do not have to query that data more than once. Besides this obvious point, using variables rather than chaining everything into one big expression improves readability and makes your code more manageable.
When it comes to performance, I can't think of any scenario where you should debate declaring a variable or not. If there is even an absolute difference in speed, it would be so small that you should not see it as a reason to skip the variable and let your code turn into spaghetti.
If you want to use the element $('#someID') again and again,
then declaring a variable would be useful, and caching $('#someID') is recommended:
var x = $('#someID');
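As a rough sketch of how that caching combines with the AJAX/delegation case mentioned in the question (the selectors, the '.item' class and someMethod are placeholders from the question, not real APIs):
// Cache the container once; delegated handlers keep working for elements
// loaded into it later via AJAX.
var $container = $('#someID');
$container.on('click', '.item', function () {
    $(this).someMethod($('#someOtherID').data('some-data'));
});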
I'm kind of curious about what the best practice is when referencing the 'global' namespace in JavaScript, which is merely a shortcut to the window object (or vice versa, depending on how you look at it).
I want to know if:
var answer = Math.floor(value);
is better or worse than:
var answer = window.Math.floor(value);
Is one better or worse, even slightly, for performance, resource usage, or compatibility?
Does one have a slighter higher cost? (Something like an extra pointer or something)
Edit note: While I am a readability over performance nazi in most situations, in this case I am ignoring the differences in readability to focus solely on performance.
First of all, never compare things like these for performance reasons. Math.round is obviously easier on the eyes than window.Math.round, and you wouldn't see a noticeable performance increase by using one or the other. So don't obfuscate your code for very slight performance increases.
However, if you're just curious about which one is faster... I'm not sure how the global scope is looked up "under the hood", but I would guess that accessing window is just the same as accessing Math (window and Math live on the same level, as evidenced by window.window.window.Math.round working). Thus, accessing window.Math would be slower.
Also, the way variables are looked up, you would see a performance increase by doing var round = Math.round; and calling round(1.23), since all names are first looked up in the current local scope, then the scope above the current one, and so on, all the way up to the global scope. Every scope level adds a very slight overhead.
But again, don't do these optimizations unless you're sure they will make a noticeable difference. Readable, understandable code is important for it to work the way it should, now and in the future.
Here's a full profiling using Firebug:
<!DOCTYPE html>
<html>
<head>
    <title>Benchmark scope lookup</title>
</head>
<body>
    <script>
        function bench_window_Math_round() {
            for (var i = 0; i < 100000; i++) {
                window.Math.round(1.23);
            }
        }
        function bench_Math_round() {
            for (var i = 0; i < 100000; i++) {
                Math.round(1.23);
            }
        }
        function bench_round() {
            for (var i = 0, round = Math.round; i < 100000; i++) {
                round(1.23);
            }
        }
        console.log('Profiling will begin in 3 seconds...');
        setTimeout(function () {
            console.profile();
            for (var i = 0; i < 10; i++) {
                bench_window_Math_round();
                bench_Math_round();
                bench_round();
            }
            console.profileEnd();
        }, 3000);
    </script>
</body>
</html>
My results:
Time shows total for 100,000 * 10 calls, Avg/Min/Max show time for 100,000 calls.
Function                  Calls  Percent  Own Time   Time       Avg        Min        Max
bench_window_Math_round   10     86.36%   1114.73ms  1114.73ms  111.473ms  110.827ms  114.018ms
bench_Math_round          10     8.21%    106.04ms   106.04ms   10.604ms   10.252ms   13.446ms
bench_round               10     5.43%    70.08ms    70.08ms    7.008ms    6.884ms    7.092ms
As you can see, window.Math is a really bad idea. I guess accessing the global window object adds additional overhead. However, the difference between accessing the Math object from the global scope, and just accessing a local variable with a reference to the Math.round function isn't very great... Keep in mind that this is 100,000 calls, and the difference is only 3.6ms. Even with one million calls you'd only see a 36ms difference.
Things to think about with the above profiling code:
The functions are actually looked up from another scope, which adds overhead (barely noticeable, though; I tried importing the functions into the anonymous function).
The actual Math.round function adds overhead (I'm guessing about 6ms in 100,000 calls).
This can be an interesting question if you want to know how the scope chain and the identifier resolution process work.
The scope chain is a list of objects that are searched when evaluating an identifier; those objects are not accessible by code, only their properties (identifiers) can be accessed.
At first, in global code, the scope chain is created and initialised to contain only the global object.
Subsequent objects in the chain are created when you enter a function execution context; the with statement and the catch clause also introduce objects into the chain.
For example:
// global code
var var1 = 1, var2 = 2;
(function () { // one
    var var3 = 3;
    (function () { // two
        var var4 = 4;
        with ({var5: 5}) { // three
            alert(var1);
        }
    })();
})();
In the above code, the scope chain will contain different objects at different levels. For example, at the lowest level, within the with statement, if you use the var1 or var2 variables, the scope chain will contain 4 objects that need to be inspected in order to resolve that identifier: the one introduced by the with statement, the two functions, and finally the global object.
You also need to know that window is just a property that exists on the global object and points to the global object itself. window is introduced by browsers, and it often isn't available in other environments.
In conclusion, when you use window, since it is just an identifier (not a reserved word or anything like that), it has to go through the whole resolution process in order to reach the global object, and window.Math then needs an additional step, made by the dot (.) property accessor.
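One common way to shorten that lookup, sketched here under the usual module-pattern assumptions, is to pass the globals you use into an immediately invoked function so they become local identifiers, or to alias a single function into a local variable:
(function (window, Math) {
    // 'window' and 'Math' are now locals of this function, found in the
    // first object of the scope chain instead of the global object.
    var floor = Math.floor;   // alias a single function for an even shorter lookup
    var answer = floor(4.7);
})(window, Math);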
JS performance differs widely from browser to browser.
My advice: benchmark it. Just put it in a for loop, let it run a few million times, and time it.... see what you get. Be sure to share your results!
(As you've said) Math.floor will probably just be a shortcut for window.Math.floor (as window is the JavaScript global object) in most JavaScript implementations, such as V8.
SpiderMonkey and V8 are so heavily optimised for common usage that it shouldn't be a concern.
For readability my preference would be to use Math.floor; the difference in speed will be so insignificant it's not worth worrying about, ever. If you're doing 100,000 floors, it's probably time to move that logic out of the client.
You may want to have a nose around the V8 source; there are some interesting comments there about shaving nanoseconds off functions such as this parseInt one.
// Some people use parseInt instead of Math.floor. This
// optimization makes parseInt on a Smi 12 times faster (60ns
// vs 800ns). The following optimization makes parseInt on a
// non-Smi number 9 times faster (230ns vs 2070ns). Together
// they make parseInt on a string 1.4% slower (274ns vs 270ns).
As far as I understand JavaScript's lookup logic, every identifier you reference is ultimately searched for in the global scope. In browser implementations, the window object is the global object. Hence, when you ask for window.Math you first have to resolve what window means, then get its properties and find Math there. If you simply ask for Math, the first place where it is sought is the global object.
So, yes: calling Math.something will be faster than window.Math.something.
D. Crockford talks about it in his lecture http://video.yahoo.com/watch/111593/1710507; as far as I recall, it's in the 3rd part of the video.
If Math.round() is being called in a local/function scope the interpreter is going to have to check first for a local var then in the global/window space. So in local scope my guess would be that window.Math.round() would be very slightly faster. This isn't assembly, or C or C++, so I wouldn't worry about which is faster for performance reasons, but if out of curiosity, sure, benchmark it.
I'm just curious. Maybe someone knows what JavaScript engines can optimize in 2013 and what they can't? Any predictions for the near future? I was looking for some good articles, but there still is no "bible" on the internet.
Ok, let's focus on a single question:
Suppose I have a function which is called every 10ms or in a tight loop:
function bottleneck () {
    var str = 'Some string',
        arr = [1, 2, 3, 4],
        job = function () {
            // do something;
        };
    // Do something;
    // console.log(Date.getTime());
}
I do not need to calculate the initial values for the variables every time, as you can see. But if I move them to an upper scope, I will lose on variable lookup. So is there a way to tell the JavaScript engine to do such an obvious thing - precalculate the variables' initial values?
I've created a jsperf to clarify my question. I'm experimenting with different types. I'm especially interested in functions and primitives.
If you need to call a function every 10ms, and it's a bottleneck, the first thought you should have is "I shouldn't call this function every 10ms"; something went wrong in the architecting you did. That said, see 1b in http://jsperf.com/variables-caching/2, which is about four times faster than your "cached" version. The main reason is that for every variable in your code, you're either moving up a scope or redeclaring. In 1b, we go up the scope chain once to get "initials", then set up local aliases for its contents from that local reference. Much time is saved.
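For reference, a minimal sketch of that "initials" approach (the names are placeholders; the exact jsperf case may differ):
// Built once, in the outer scope.
var initials = {
    str: 'Some string',
    arr: [1, 2, 3, 4],
    job: function () { /* do something */ }
};
function bottleneck() {
    // One walk up the scope chain to reach 'initials', then cheap local aliases.
    var str = initials.str,
        arr = initials.arr,
        job = initials.job;
    // Do something with str, arr and job;
}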
(Concerns V8)
Well, the array data itself is not recreated, but a unique array object needs to be created every time. The backing store for the values 1,2,3,4 is shared by these objects.
The string is interned, and it is actually fastest to copy-paste the same string everywhere as a literal rather than referencing some common variable. But for maintenance reasons you don't really want to do that.
Don't create any new function inside a hot function: if your job function references any variables from the bottleneck function, then first of all those variables will become context-allocated and slow to access anywhere, even in the outer function, and it will prevent inlining of the bottleneck function as of now. Inlining is a big optimization you don't want to miss when it is otherwise possible.
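A hedged sketch of what that advice looks like in practice: keep the helper outside the hot function and hand it what it needs as arguments, so nothing in bottleneck() has to be context-allocated for a closure.
function job(str, arr) {
    // works only with its arguments, closes over nothing from bottleneck()
}
function bottleneck() {
    var str = 'Some string',
        arr = [1, 2, 3, 4];
    job(str, arr);   // no closure over bottleneck's variables
}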
I'm using John Resig's recipe for JavaScript 'classes' and inheritance. I've stripped my code back to something like this for this question:
MyClass = Class.extend({
    // create an <h3>Hello world!</h3> in the HTML document
    init : function (divId) {
        this._divId = divId;
        this._textDiv = document.createElement("h3");
        this._textDiv.innerHTML = "Hello world!";
        document.getElementById(divId).appendChild(this._textDiv);
    },
    // remove the <h3> and delete this object
    remove : function () {
        var container = document.getElementById(this._divId);
        container.parentNode.removeChild(container);
        // can I put some code here to release this object?
    }
});
All works well:
var widget = new MyClass("theDivId");
...
widget.remove();
I'm going to have hundreds of these things on a page (obviously with some sensible functionality) and I'd like a simple way to release the memory for each object. I understand I can use widget = null; and trust the GC releases the object when required (?), but can I do something explicit in the remove() method? I know that placing this = null; at the end of remove() doesn't work ;)
There is no way to destroy objects manually; the only way is to free all references to your object and trust the removal to the GC.
Actually, in your code you should also set this._textDiv = null and container = null in the remove method, because they can be a problem for the GC in some browsers.
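A sketch of the remove method with those extra assignments, assuming the same Class.extend recipe from the question:
remove : function () {
    var container = document.getElementById(this._divId);
    container.parentNode.removeChild(container);
    this._textDiv = null;   // drop the reference to the detached <h3>
    container = null;       // local reference; it would fall out of scope anyway
}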
No. You don't have any way of accessing the garbage collector directly. As you say, the best you can do is make sure the object is no longer referenced.
IMO, it's better that way. The garbage collector is much smarter than you (and me) because years of research have gone into writing the thing, and even when you try to make optimisations, you're likely still not doing a better job than it would.
Of course, if you're interfacing with a JS engine directly you will be able to control execution and force garbage collection (among much more), although I very much doubt you're in that position. If you're interested, download and compile SpiderMonkey (or V8, or whatever engine tickles your fancy); in the REPL I think it's gc() for both.
That brings me to another point: since the standard doesn't define the internals of garbage collection, even if you manage to determine that invoking the GC at some point in your code is helpful, it's likely that it will not reap the same benefits across all platforms.
this is a keyword, to which you cannot assign any value. The only way to remove objects from a scope is to manually assign null to every variable that references them.
This method doesn't always work, however: in some implementations of XMLHttpRequest, one has to reset the onreadystatechange handler (and similar callback functions) before the XMLHttpRequest object is freed from memory.
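A hedged sketch of that kind of cleanup (the URL is a placeholder; whether this is needed at all depends on the implementation):
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some/url', true);
xhr.onreadystatechange = function () { /* handle the response */ };
xhr.send();
// Later, when you are done with the request:
xhr.onreadystatechange = null;   // detach the handler so nothing keeps the object alive
xhr = null;                      // drop the last reference and leave the rest to the GC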
Consider the following JavaScript function (1):
function setData(domElement) {
    domElement.myDataProperty = {
        'suppose': 'this',
        'object': 'is',
        'static': 'and',
        'pretty': 'big'
    };
};
Now what I don't like about this function is that the exact same object is created every time the function is called. Since the object does not change I would rather create it just once. So we could make the following adjustments (2):
var dataObject = {
    'suppose': 'this',
    'object': 'is',
    'static': 'and',
    'pretty': 'big'
};
function setData(domElement) {
    domElement.myDataProperty = dataObject;
};
Now the object is created once when the script is loaded and stored in dataObject. But let's assume that setData is called only occasionally -- most of the times that the script is loaded the function is not used. What I don't like about this function in that case is that the object is always created and held in memory, including many occasions in which it will never be used. I figured you could do something like this to strike the ideal balance (3):
var dataObject;
function setData(domElement) {
    if (!dataObject) {
        dataObject = {
            'suppose': 'this',
            'object': 'is',
            'static': 'and',
            'pretty': 'big'
        };
    }
    domElement.myDataProperty = dataObject;
};
Would that make sense? I figure it depends on when the interpreter decides to create the object. Does it really wait until it passes the !dataObject condition, or does it enter the function, try to be smart, and decide to construct it in advance? Perhaps different JavaScript engines have different policies in this regard?
Then of course there is the question of whether these optimizations will ever matter in practice. Obviously this depends on factors like the size of the object, the speed of the engine, the amount of resources available, etc. But in general, which one would you say is the more significant optimization: from (1) to (2), or from (2) to (3)?
The answer is, you're not supposed to know. The examples you showed have very little difference between them. The only way you'd ever reasonably worry about this is if you had actual evidence that one way or the other was noticeably harming performance or memory usage on a particular interpreter. Until then, it's the interpreter's job to worry about that stuff for you.
That said, if you really want to know... try it and find out. Call the different versions 1,000,000 times and see what difference it makes.
Make a giant version of the object and see if that makes a dent. Watch task manager. Try different browsers. Report back your results. It's a much better way to find out than just asking a bunch of jerks on the internet what they guess might be the case.
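A rough sketch of such a measurement, assuming the three setData variants from the question are already defined (new Date().getTime() is used here for broad browser support):
var el = document.createElement('div');
function timeIt(label, fn) {
    var start = new Date().getTime();
    for (var i = 0; i < 1000000; i++) {
        fn(el);
    }
    console.log(label + ': ' + (new Date().getTime() - start) + 'ms');
}
timeIt('version 1', setData);   // repeat after swapping in versions (2) and (3)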
Just keep in mind that the object has to be in memory anyway, regardless ... as source text.
A new object must be created -- it cannot not be, partially because the spec requires it, but mostly because the alternative behaviour would be counterintuitive. Take:
function f() {
    return {a: "b", c: "d"};
}
o = f();
alert([o.c, o.e]); // Alerts "d,"
delete o.c;
o.e = "f";
o = f();
alert([o.c, o.e]); // If the object were only created once, this would produce ",f"
Do you really expect a new object expression to not actually produce the object you're asking for? Because that's what you seem to want.
Conceivably you just want to do:
var myFunction = (function () {
    var object = {a: "b", c: "d"};
    return function () { return object; };
})();
Which would get the effect you want, although you would have to realise that the object you're returning is a completely mutable object that can be changed, and everyone would be sharing that same mutating instance.
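A quick sketch of that caveat, using the myFunction defined above:
var a = myFunction();
var b = myFunction();
console.log(a === b);   // true - both callers get the very same object
a.c = 'changed';
console.log(b.c);       // 'changed' - the shared instance was mutated for everyone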
First, I'd implement it in situation #2 and load it once immediately after the page is loaded.
If there was a problem with page speed, I would measure the time taken for specific tasks within the page.
If it was very expensive to create the object (relatively speaking), then I would move to situation #3.
There's no point in adding the 'if' statement if it really doesn't buy you anything... and in this case, creating a simple/big object is no sweat off your CPU's back. Without measurements, you're not optimizing - you're just shooting blind.
It's actually a fairly common method of initializing things that I've personally used in C++ and Java.
First, this optimization will never matter in practice.
Second, the last function is exactly as good as the first function. Well, almost. In the first I suppose you're at the mercy of the garbage collector, which should destroy the old object when you reassign domElement.myDataProperty. Still, without knowing exactly how the garbage collector works on your target platform (and it can be very different across browsers), you can't be sure you're saving any work at all really.
Try all three of them in a couple of browsers and find out which is faster.