Trusting the JavaScript garbage collector

I have several instances where my Javascript code appears to be leaking memory but I'm not sure what I should be expecting from the garbage collector.
For example, a new Object() created in an interval timer function running in Firefox seems to leak over time. There are some simple solutions, but I'm curious whether I should expect the garbage collector to handle everything or whether I'm responsible for helping it.
If I need to help the garbage collector what are the rules?

Most (I believe all) JavaScript (ECMAScript) engines work by a method called "reference counting." I'll leave you to Google that term.
In short, an object is freed for release when nothing is pointing to it, i.e. nothing is using it anymore.
Two things may be throwing off your sense of how much memory is being used.
1) ECMAScript does not release the object the instant that the system is done with it. Garbage collection is run "as needed." This can vary widely.
2) Closures can hold onto a reference longer than you think; an accidental closure can keep an object alive long after you expected it to be released.
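For example, here is a rough sketch (hypothetical code, not from the question) of how a timer callback can accidentally keep a large value reachable:
function startPolling() {
    var bigBuffer = new Array(1000000).join('x'); // large value we only needed briefly
    setInterval(function () {
        // The callback never uses bigBuffer, but it closes over the scope that
        // contains it; depending on the engine, that can keep bigBuffer alive
        // for as long as the interval is registered.
        console.log('tick');
    }, 1000);
    // Setting bigBuffer = null here (or creating it in a separate function)
    // makes it collectible regardless of the timer.
}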

Had to make this an answer rather than a comment for length:
OK -- first a clarification of terms:
If it is running on a timer, then it is not recursive. That is a common misunderstanding, however.
In recursive code, a function calls itself; the original function invocation remains on the stack until the whole thing finally unwinds and a value is returned to the original caller. When using a timeout, each iteration of the function runs in a separate execution context.
Recursive function example
function factorial(n) {
    if (n == 1) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}
This is /not/ recursive:
function annoy() {
    window.setTimeout(annoy, 1000);
    window.alert("This will annoy every second!");
}
Each iteration of 'annoy' is completely independent and standalone. It just sets up the timer for another instance to be called. There is no pile of 'annoy' functions on the stack, and nothing is returned to a caller.
Secondly: in the example that I gave you, the variable a does not go out of scope, but the old objects that a referred to have no active references, so they are free for release. What a variable points to can change over time.
var a, b;
a = {};
b = a; // This object now has TWO references using it.
b = null; // The object now has one reference
a = null; // Object has no references and is free for release.
At this point, the best thing I can do is point you here:
http://www.ibm.com/developerworks/web/library/wa-sieve/

Related

Where and for how long does my referenceless javascript object exist?

var SomeObj = function() {
    this.i = 0;
};
setTimeout(function() {
    new SomeObj; // I mean this object
}, 0);
At what point is the SomeObj object garbage collected?
It is eligible for garbage collection as soon as it is no longer used.
That means immediately after the constructor call in your case.
How timely this actually happens is an implementation detail. If you run into GC issues, you need to dig into your specific Javascript engine.
An object that is not referenced from anywhere doesn't "exist" at all from the view of your program. How long it still resides somewhere in memory depends on the garbage collection characteristics of your interpreter, and when/whether it feels the need to collect it.
In your specific case, the object does become eligible for garbage collection right after it has been created and the reference that the expression yields is not used (e.g. in an assignment). In fact, the object might not get created at all in the first place, an optimising compiler could easily remove the whole function altogether - it has no side effects and no return value.
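As a rough illustration (reusing SomeObj from the question, plus a hypothetical kept variable), the difference is simply whether anything can still reach the object:
var kept;
setTimeout(function () {
    new SomeObj();        // no reference is stored: eligible for collection immediately
    kept = new SomeObj(); // reachable through `kept`: stays alive until `kept` is reassigned or nulled
}, 0);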

Does adding window as a prefix to the global objects speed up accessing global objects? [duplicate]

I'm kind of curious about what the best practice is when referencing the 'global' namespace in JavaScript, which is merely a shortcut to the window object (or vice versa, depending on how you look at it).
I want to know if:
var answer = Math.floor(value);
is better or worse than:
var answer = window.Math.floor(value);
Is one better or worse, even slightly, for performance, resource usage, or compatibility?
Does one have a slighter higher cost? (Something like an extra pointer or something)
Edit note: While I am usually a stickler for readability over performance, in this case I am ignoring the differences in readability to focus solely on performance.
First of all, never compare things like these for performance reasons. Math.round is obviously easier on the eyes than window.Math.round, and you wouldn't see a noticeable performance increase by using one or the other. So don't obfuscate your code for very slight performance increases.
However, if you're just curious about which one is faster... I'm not sure how the global scope is looked up "under the hood", but I would guess that accessing window is just the same as accessing Math (window and Math live on the same level, as evidenced by window.window.window.Math.round working). Thus, accessing window.Math would be slower.
Also, the way variables are looked up, you would see a performance increase by doing var round = Math.round; and calling round(1.23), since all names are first looked up in the current local scope, then the scope above the current one, and so on, all the way up to the global scope. Every scope level adds a very slight overhead.
But again, don't do these optimizations unless you're sure they will make a noticeable difference. Readable, understandable code is important for it to work the way it should, now and in the future.
Here's a full profiling using Firebug:
<!DOCTYPE html>
<html>
<head>
    <title>Benchmark scope lookup</title>
</head>
<body>
    <script>
        function bench_window_Math_round() {
            for (var i = 0; i < 100000; i++) {
                window.Math.round(1.23);
            }
        }
        function bench_Math_round() {
            for (var i = 0; i < 100000; i++) {
                Math.round(1.23);
            }
        }
        function bench_round() {
            for (var i = 0, round = Math.round; i < 100000; i++) {
                round(1.23);
            }
        }
        console.log('Profiling will begin in 3 seconds...');
        setTimeout(function () {
            console.profile();
            for (var i = 0; i < 10; i++) {
                bench_window_Math_round();
                bench_Math_round();
                bench_round();
            }
            console.profileEnd();
        }, 3000);
    </script>
</body>
</html>
My results:
Time shows the total for 100,000 * 10 calls; Avg/Min/Max show the time for 100,000 calls.

                          Calls   Percent   Own Time    Time        Avg         Min         Max
bench_window_Math_round   10      86.36%    1114.73ms   1114.73ms   111.473ms   110.827ms   114.018ms
bench_Math_round          10      8.21%     106.04ms    106.04ms    10.604ms    10.252ms    13.446ms
bench_round               10      5.43%     70.08ms     70.08ms     7.008ms     6.884ms     7.092ms
As you can see, window.Math is a really bad idea. I guess accessing the global window object adds additional overhead. However, the difference between accessing the Math object from the global scope, and just accessing a local variable with a reference to the Math.round function isn't very great... Keep in mind that this is 100,000 calls, and the difference is only 3.6ms. Even with one million calls you'd only see a 36ms difference.
Things to think about with the above profiling code:
The functions are actually looked up from another scope, which adds overhead (barely noticeable though; I tried importing the functions into the anonymous function).
The actual Math.round function adds overhead (I'm guessing about 6ms in 100,000 calls).
This can be an interesting question if you want to know how the scope chain and the identifier resolution process work.
The scope chain is a list of objects that are searched when evaluating an identifier, those objects are not accessible by code, only its properties (identifiers) can be accessed.
At first, in global code, the scope chain is created and initialised to contain only the global object.
The subsequent objects in the chain are created when you enter a function execution context, and by the with statement and the catch clause, both of which also introduce objects into the chain.
For example:
// global code
var var1 = 1, var2 = 2;
(function () { // one
    var var3 = 3;
    (function () { // two
        var var4 = 4;
        with ({var5: 5}) { // three
            alert(var1);
        }
    })();
})();
In the above code, the scope chain will contain different objects at different levels. For example, at the lowest level, within the with statement, if you use the var1 or var2 variables, four objects need to be inspected in order to resolve that identifier: the one introduced by the with statement, the two function scopes, and finally the global object.
You also need to know that window is just a property of the global object that points to the global object itself. window is introduced by browsers, and in other environments it often isn't available.
In conclusion: when you use window, since it is just an identifier (not a reserved word or anything like that), it has to go through the whole resolution process in order to reach the global object, and window.Math then needs an additional step, made by the dot (.) property accessor.
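A small sketch of that point: window is resolved through the scope chain like any other identifier, which is why it can even be shadowed (illustrative code, not from the answer):
(function () {
    var window = { Math: { floor: function () { return 'shadowed'; } } };
    // Inside this function, window.Math.floor resolves through the local `window`,
    // while the bare Math.floor still resolves to the real global Math.
    console.log(window.Math.floor(1.9)); // "shadowed"
    console.log(Math.floor(1.9));        // 1
}());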
JS performance differs widely from browser to browser.
My advice: benchmark it. Just put it in a for loop, let it run a few million times, and time it.... see what you get. Be sure to share your results!
(As you've said) Math.floor will probably just resolve to window.Math.floor (as window is the JavaScript global object) in most JavaScript implementations such as V8.
Spidermonkey and V8 will be so heavily optimised for common usage that it shouldn't be a concern.
For readability my preference would be to use Math.floor; the difference in speed is so insignificant it's never worth worrying about. If you're doing 100,000 floors it's probably time to move that logic out of the client.
You may want to have a nose around the V8 source; there are some interesting comments there about shaving nanoseconds off functions, such as this one for parseInt.
// Some people use parseInt instead of Math.floor. This
// optimization makes parseInt on a Smi 12 times faster (60ns
// vs 800ns). The following optimization makes parseInt on a
// non-Smi number 9 times faster (230ns vs 2070ns). Together
// they make parseInt on a string 1.4% slower (274ns vs 270ns).
As far as I understand JavaScript's lookup logic, everything you refer to by name is ultimately searched for in the global variable scope. In browser implementations, the window object is the global object. Hence, when you ask for window.Math you actually have to de-reference what window means, then get its properties and find Math there. If you simply ask for Math, the first place it is sought is the global object.
So, yes- calling Math.something will be faster than window.Math.something.
D. Crockford talks about it in his lecture http://video.yahoo.com/watch/111593/1710507; as far as I recall, it's in the 3rd part of the video.
If Math.round() is being called in a local/function scope, the interpreter is going to have to check first for a local variable and then in the global/window space. So in local scope my guess would be that window.Math.round() would be very slightly faster. This isn't assembly, or C or C++, so I wouldn't worry about which is faster for performance reasons, but if it's out of curiosity, sure, benchmark it.

Which types of objects should I store as variables in JavaScript?

Two jsperfs relating to the question:
Cache-ing 'this'
Cache-ing booleans
I'm on Mac 10.9. In Safari 7, Chrome 32, and Firefox 26, storing 'this' inside a variable seems to run slightly slower than not storing it. For example:
function O() {
    var THIS = this;
    THIS.foo = 'foo';
    THIS.bar = 'bar';
    THIS.baz = 'baz';
}
was a bit slower than:
function O() {
    this.foo = 'foo';
    this.bar = 'bar';
    this.baz = 'baz';
}
Why is this? Is it because 'this' references the original object every time?
In Chrome and Firefox, storing a Boolean object and then referencing the value of that variable later seemed to run a bit faster than writing 'true' or 'false' every time (in theory, creating a new Boolean object every time.) BUT, in Safari, the opposite appeared to be true. For example:
function lt() {
    if (arguments[0] < arguments[1]) return true;
    return false;
}
was a bit faster (in Firefox and Chrome) than:
var TRUE = true,
    FALSE = false;
function lt() {
    if (arguments[0] < arguments[1]) return TRUE;
    return FALSE;
}
With the exception of Safari, is this because a new Boolean object is being created every time when not storing it inside a variable? What could be the explanation as to why there was the opposite effect in Safari?
I'm inclined to think that in small bits of code, the difference would be negligible, but I'm curious if it could make a difference with which someone should be concerned when the code gets much lengthier. I also read a question asking about performance data vs. perceived performance, where perceived performance is generally the thing to look at in these cases.
An issue with the above statistics in the jsperfs is the lack of a large data sample. The reason I ask this question is because I'm writing a small JS library. In that context, what are best practices as far as 'caching' certain objects?
It's hard to answer your question as to why Safari behaves differently from Firefox and Chrome in these examples, because the reason is very implementation-dependent and I'm not familiar with the source code of these browsers, nor have I spent a long time trying to reverse-engineer them. But I can give you a rough sketch of how variable caching influences performance just using the ECMAScript spec:
First, it's important to understand that the this keyword is initialized during invocation and is always local to your execution context, i.e. the look-up for the this keyword terminates on the activation record of the current function invocation. So in your first example you're creating an additional local variable (double the work), leaving the execution context with the following activation record (other system-defined properties, e.g. arguments, omitted):
{ this: caller, THIS: caller };
// this: system-created property (?during function object creation or invocation?)
// caller: system-initialized value, during invocation
// THIS: user-created-and-initialized property, slows down execution
So in general, the best reason to cache a variable is that the look-up operation is more expensive than creating a local property. Creating a local property obviously only benefits performance if it is referenced more than once. The look-up in question can happen on the scope chain, or on the prototype chain via the dotted property access. As long as property creation is less expensive than the actual look-up, you can cache the property.
The best reason to cache a value, is to avoid creating the same values over and over for every invocation. The best way to cache a value is through an (anonymous) closure. E.g.:
var obj0, obj1, funExt, myValue;
obj0 = {};
obj1 = {};
myValue = 'some value'; // placeholder so the example actually runs
funExt = (function () {
    var cachedId, cachedObj;
    // After evaluation of funExt is done, these values persist in the closure,
    // avoiding value creation during each execution of the returned function.
    cachedId = "extension";
    cachedObj = { prop0: myValue, prop1: myValue, prop2: myValue };
    return function (o) {
        if (o) {
            o[cachedId] = cachedObj;
        } else if (this !== window) {
            o = (this[cachedId] = cachedObj);
        }
        return o;
    };
}());
funExt.call(obj0); // called with obj0 as `this`, so obj0.extension is set
funExt(obj1);      // obj1 is passed in directly
Concerning your second example I can only say that in Chrome and Firefox boolean creation is less expensive than a one-level scope look-up. Given the complexity of the example function both operations should be rather cheap.
It's also good to keep in mind that developers design javascript engines to do these in-line optimizations on the fly. So there is no guarantee that these optimizations will yield significant performance boosts. I prefer to stick to simplicity and clarity of structure when building libraries. So I use closure to either communicate that values are being shared, or that certain values remain constant. And I use variable caching if the given variable is too long or lacks clarity.
In general, the more deeply an object sits in your object graph, the more reason there is to cache it. So, for example: there is no reason to cache this in your example, but if you have an object foo.bar.something.else.x and want to use it multiple times, chances are you're better off caching it in a local variable. It will be faster, and also more readable.
Another good reason to cache an object is if it's multiple levels up in your scope chain. For example, when you have 4 levels of nested functions and in the innermost you want to use a variable from the global scope multiple times, it's reasonable to cache it in a local variable.
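A rough sketch of both cases, using hypothetical names and a made-up nesting:
var foo = { bar: { baz: { qux: { x: 42 } } } }; // deep object graph
function level1() {
    return function level2() {
        return function level3() {
            // Cache the deep property once instead of walking foo.bar.baz.qux.x per iteration,
            // and cache the global `foo` itself, which sits several scope levels away from here.
            var x = foo.bar.baz.qux.x,
                cachedFoo = foo,
                sum = 0;
            for (var i = 0; i < 1000; i++) {
                sum += x + cachedFoo.bar.baz.qux.x;
            }
            return sum;
        };
    };
}
console.log(level1()()()); // 84000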

Do javascript objects inside of an array erase from the memory when I clear the array?

I never really gave much thought to garbage collection and I don't know whether or not it is necessary to take into account when making small javascript games/applications. Any advice is appreciated, but I will ask my specific question at the end.
A lot of the time I write code of this form:
var foos=new Array();
generateFoos();
function generateFoos()
{
foos=[];
for (fooIndex=0;fooIndex<numberOfFoos;fooIndex++)
{
foos[fooIndex]=new foo(Math.random(),Math.random());
}
}
function foo(bar,bas)
{
this.bar=bar;
this.bas=bas;
}
So my question is, when I say foos=[] (line 5), does this delete the objects in that array from memory, or do they float around somewhere, making the program larger and slower? What should I do if I want to call generateFoos() a loooot of times, like every time the user presses a key?
Thanks!
For a specific answer, since the accepted one doesn't actually answer the question directly: yes, foos = [] does de-reference any values previously held in the array.
As Ales says, "An object becomes eligible for garbage collection when it becomes unreachable." Indeed, this is when the browser will clear such things from memory.
An important point, delete DOES NOT GARBAGE COLLECT.
You see this over and over, and even in the comments on this question. The delete keyword removes a property from an object and has nothing to do with garbage collection.
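A short sketch of that distinction (hypothetical names):
var holder = { data: { big: 'payload' } };
var alias = holder.data;
delete holder.data; // removes the `data` property from `holder`; nothing is collected,
                    // because `alias` still references the same object
alias = null;       // now the object is unreachable and becomes eligible for collection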
I also wanted to offer some advice on your code itself.
1) Use literals, not new, for basic data types
2) Don't call functions before you declare them. Yes, it works, but it's messy and harder to read later. Remember, you spend much more time reading your code than writing it. Make it easy to follow later.
3) Remember your function scope. Any variable declared without var goes global. With var, it remains within the scope of the function that contains it. Another way variables are scoped within a function is when they are passed in as named parameters.
4) Use var on your functions when creating them. In your code, your functions are globals.
5) Use spacing. Density of text is not next to godliness. You might be 20-something now with great eyesight, but you'll appreciate white space in just a very few short years.
6) Declare counters in for loops with var. Unless you want them to be global. And you almost never will.
Let's re-work your code now:
var numberOfFoos = 10,
    foos = [];

var generateFoos = function(){
        foos = [];
        for( var fooIndex = 0; fooIndex < numberOfFoos; fooIndex++ ){
            foos[ fooIndex ] = new foo( Math.random(), Math.random() );
        }
    },
    foo = function( bar, bas ){
        this.bar = bar;
        this.bas = bas;
    };
generateFoos();
console.log( foos );
To answer your question : An object becomes eligible for garbage collection when it becomes unreachable. If your program is not holding any other references to the objects in that array, they will be garbage collected.
The timing of the actual garbage collection depends on many aspects and on the chosen garbage collection algorithm, but you should not worry about it when writing your code.
The best advice when considering Garbage collection is to leave the Garbage collector to do its job. Do not try to instruct it or help it (by e.g. nulling references manually). Focus on the problem and functional aspect of the code.

What makes my.class.js so fast? [closed]

I've been looking at the source code of my.class.js to find out what makes it so fast on Firefox. Here's the snippet of code used to create a class:
my.Class = function () {
    var len = arguments.length;
    var body = arguments[len - 1];
    var SuperClass = len > 1 ? arguments[0] : null;
    var hasImplementClasses = len > 2;
    var Class, SuperClassEmpty;
    if (body.constructor === Object) {
        Class = function () {};
    } else {
        Class = body.constructor;
        delete body.constructor;
    }
    if (SuperClass) {
        SuperClassEmpty = function() {};
        SuperClassEmpty.prototype = SuperClass.prototype;
        Class.prototype = new SuperClassEmpty();
        Class.prototype.constructor = Class;
        Class.Super = SuperClass;
        extend(Class, SuperClass, false);
    }
    if (hasImplementClasses)
        for (var i = 1; i < len - 1; i++)
            extend(Class.prototype, arguments[i].prototype, false);
    extendClass(Class, body);
    return Class;
};
The extend function is simply used to copy the properties of the second object onto the first (optionally overriding existing properties):
var extend = function (obj, extension, override) {
    var prop;
    if (override === false) {
        for (prop in extension)
            if (!(prop in obj))
                obj[prop] = extension[prop];
    } else {
        for (prop in extension)
            obj[prop] = extension[prop];
        if (extension.toString !== Object.prototype.toString)
            obj.toString = extension.toString;
    }
};
The extendClass function copies all the static properties onto the class, as well as all the public properties onto the prototype of the class:
var extendClass = my.extendClass = function (Class, extension, override) {
    if (extension.STATIC) {
        extend(Class, extension.STATIC, override);
        delete extension.STATIC;
    }
    extend(Class.prototype, extension, override);
};
This is all pretty straightforward. When you create a class, it simply returns the constructor function you provide it.
What beats my understanding however is how does creating an instance of this constructor execute faster than creating an instance of the same constructor written in Vapor.js.
This is what I'm trying to understand:
How do constructors of libraries like my.class.js create so many instances so quickly on Firefox? The constructors of the libraries are all very similar. Shouldn't the execution time also be similar?
Why does the way the class is created affect the execution speed of instantiation? Aren't definition and instantiation separate processes?
Where is my.class.js gaining this speed boost from? I don't see any part of the constructor code which should make it execute any faster. In fact traversing a long prototype chain like MyFrenchGuy.Super.prototype.setAddress.call should slow it down significantly.
Is the constructor function being JIT compiled? If so then why aren't the constructor functions of other libraries also being JIT compiled?
I don't mean to offend anyone, but this sort of thing really isn't worth the attention, IMHO. Almost any speed-difference between browsers is down to the JS engine. The V8 engine is very good at memory management, for example; especially when you compare it to IE's JScript engines of old.
Consider the following:
var closure = (function()
{
    var closureVar = 'foo',
        someVar = 'bar',
        returnObject = {publicProp: 'foobar'};
    returnObject.getClosureVar = function()
    {
        return closureVar;
    };
    return returnObject;
}());
Last time I checked, chrome actually GC'ed someVar, because it wasn't being referenced by the return value of the IIFE (referenced by closure), whereas both FF and Opera kept the entire function scope in memory.
In this snippet, it doesn't really matter, but for libs that are written using the module-pattern (AFAIK, that's pretty much all of them) that consist of thousands of lines of code, it can make a difference.
Anyway, modern JS-engines are more than just "dumb" parse-and-execute things. As you said: there's JIT compilation going on, but there's also a lot of trickery involved to optimize your code as much as possible. It could very well be that the snippet you posted is written in a way that FF's engine just loves.
It's also quite important to remember that there is some sort of speed-battle going on between Chrome and FF about who has the fastest engine. Last time I checked Mozilla's Rhino engine was said to outperform Google's V8, if that still holds true today, I can't say... Since then, both Google and Mozilla have been working on their engines...
Bottom line: speed differences between various browsers exist - nobody can deny that, but a single point of difference is insignificant: you'll never write a script that does just one thing over and over again. It's the overall performance that matters.
You have to keep in mind that JS is a tricky bugger to benchmark, too: just open your console, write some recursive function, and run it 100 times, in FF and Chrome. Compare the time it takes for each recursion, and the overall run. Then wait a couple of hours and try again... sometimes FF might come out on top, whereas other times Chrome might be faster still. I've tried it with this function:
var bench = (function()
{
    var mark = {start: [new Date()],
                end: [undefined]},
        i = 0,
        rec = function(n)
        {
            return +(n === 1) || rec(n%2 ? n*3+1 : n/2);
            //^^ Unmaintainable, but fun code ^^\\
        };
    while(i++ < 100)
    {//new date at start, call recursive function, new date at end of recursion
        mark.start[i] = new Date();
        rec(1000);
        mark.end[i] = new Date();
    }
    mark.end[0] = new Date();//after 100 rec calls, first element of start array vs first of end array
    return mark;
}());
But now, to get back to your initial question(s):
First off: the snippet you provided doesn't quite compare to, say, jQuery's $.extend method: there's no real cloning going on, let alone deep cloning. It doesn't check for circular references at all, which most other libs I've looked into do. Checking for circular references does slow the entire process down, but it can come in handy from time to time (example 1 below). Part of the performance difference can be explained by the fact that this code simply does less, so it needs less time.
Secondly: declaring a constructor (classes don't exist in JS) and creating an instance are, indeed, two different things (though declaring a constructor is in itself creating an instance of an object, a Function instance to be exact). The way you write your constructor can make a huge difference, as shown in example 2 below. Again, this is a generalization and might not apply to certain use-cases on certain engines: V8, for example, tends to create a single function object for all instances, even if that function is part of the constructor, or so I'm told.
Thirdly: traversing a long prototype chain, as you mention, is not as unusual as you might think; far from it, actually. You're constantly traversing chains of two or three prototypes, as shown in example 3. This shouldn't slow you down, as it's just inherent to the way JS resolves function calls and expressions.
Lastly: it's probably being JIT-compiled, but saying that other libs aren't JIT-compiled just doesn't stack up. They might be, then again, they might not. As I said before: different engines perform better at some tasks than others... it might be the case that FF JIT-compiles this code and other engines don't.
The main reasons I can see why other libs wouldn't be JIT-compiled are: checking for circular references, deep-cloning capabilities, and dependencies (i.e. the extend method is used all over the place, for various reasons).
example 1:
var shallowCloneCircular = function(obj)
{//clone object, check for circular references
    function F(){};
    var clone, prop;
    F.prototype = obj;
    clone = new F();
    for (prop in obj)
    {//only copy properties, inherent to instance, rely on prototype-chain for all others
        if (obj.hasOwnProperty(prop))
        {//the ternary deals with circular references
            clone[prop] = obj[prop] === obj ? clone : obj[prop];//if property is reference to self, make clone reference clone, not the original object!
        }
    }
    return clone;
};
This function clones an object's first level; all objects that are referenced by a property of the original object will still be shared. A simple fix would be to call the function above recursively, but then you'll have to deal with the nasty business of circular references at all levels:
var circulars = {foo: 'bar'};
circulars.circ1 = circulars;//simple circular reference, we can deal with this
circulars.mess = {gotcha: circulars};//circulars.mess.gotcha ==> circular reference, too
circulars.messier = {messiest: circulars.mess};//oh dear, this is hell
Of course, this isn't the most common of situations, but if you want to write your code defensively, you have to acknowledge the fact that many people write mad code all the time...
Example 2:
function CleanConstructor()
{};
CleanConstructor.prototype.method1 = function()
{
    //do stuff...
};
var foo = new CleanConstructor(),
    bar = new CleanConstructor();
console.log(foo === bar);//false, we have two separate instances
console.log(foo.method1 === bar.method1);//true: the function object referenced by method1 has only been created once.
//as opposed to:
function MessyConstructor()
{
    this.method1 = function()
    {//do stuff
    };
}
var foo = new MessyConstructor(),
    bar = new MessyConstructor();
console.log(foo === bar);//false, as before
console.log(foo.method1 === bar.method1);//false! for each instance, a new function object is constructed, too: bad performance!
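As a quick aside before weighing the two (a hedged sketch using hypothetical variables a and b, assuming non-strict mode as in the examples above), here is what forgetting the new keyword does in each case:
var a = CleanConstructor();  // without new: the empty body runs and undefined is returned
console.log(a);              // undefined
var b = MessyConstructor();  // without new: `this` is the global object in sloppy mode
console.log(b);              // undefined
console.log(typeof method1); // "function": method1 has leaked onto the global object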
In theory, declaring the first constructor is slower than the messy way: the function object referenced by method1 is created before a single instance has been created. The second example doesn't create method1 until the constructor is called. But the downsides are huge: forget the new keyword in the first example, and all you get is a return value of undefined. The second constructor creates a global function object when you omit the new keyword, and of course creates new function objects for each call. You have a constructor (and a prototype) that is, in fact, idling... Which brings us to example 3.
example 3:
var foo = [];//create an array - empty
console.log(foo[123]);//logs undefined.
OK, so what happens behind the scenes: foo references an object, an instance of Array, which in turn inherits from the Object prototype (just try Object.getPrototypeOf(Array.prototype)). It stands to reason, therefore, that an Array instance works in pretty much the same way as any object, so:
foo[123] ===> JS checks the instance for property "123" (the number is coerced to a string, BTW)
   ||     --> property not found on the instance, check its prototype (Array.prototype)
   ===========> Array.prototype["123"] could not be found, check its prototype
       ||
       ==========> Object.prototype["123"]: not found, check its prototype?
           ||
           =======> prototype is null, return undefined
In other words, a chain like you describe isn't too far-fetched or uncommon. It's how JS works, so expecting that to slow things down is like expecting your brain to fry because you're thinking: yes, you can get worn out by thinking too much, but just know when to take a break. Just like in the case of prototype chains: they're great, just know that they are a tad slower, yes...
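You can watch that same chain being walked with Object.getPrototypeOf (a small sketch):
var foo = [];
console.log(Object.getPrototypeOf(foo) === Array.prototype);              // true
console.log(Object.getPrototypeOf(Array.prototype) === Object.prototype); // true
console.log(Object.getPrototypeOf(Object.prototype));                     // null: end of the chain
console.log(123 in foo);                                                  // false, so foo[123] yields undefined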
I'm not entirely sure, but I do know that when programming, it is good practice to make the code as small as possible without sacrificing functionality. I like to call it minimalist code.
This can be a good reason to obfuscate code. Obfuscation shrinks the file by using shorter method and variable names, which makes it harder to reverse-engineer and faster to download, and it can bring a small performance boost. Google's JavaScript code is intensely obfuscated, and that contributes to its speed.
So in JavaScript, bigger isn't always better. When I find a way I can shrink my code, I implement it immediately, because I know it will benefit performance, even if by the smallest amount.
For example, using the var keyword in a function where the variable isn't needed outside the function helps garbage collection, which provides a very small speed boost versus keeping the variable in memory.
With a library like this that produces "millions of operations per second" (Blaise's words), small performance boosts can add up to a noticeable/measurable difference.
So it is possible that my.class.js is "minimalist coded" or optimized in some manner. It could even be the var keywords.
I hope this helped somewhat. If it didn't help, then I wish you luck in getting a good answer.
