JavaScript: Constructor function vs object initializer speed - javascript

Is there any difference between the run speeds of a constructor function when compared to an equivalent object initializer?
For example
function blueprint(var1, var2) {
    this.property1 = var1;
    this.property2 = var2;
}
var object1 = new blueprint(value1, value2);
vs
object1 = {property1:value1, property2:value2};
If there is, is it relevant enough to be of concern when optimizing code or would file size take priority?

If there is, is it relevant enough to be of concern when optimizing code or would file size take priority?
Neither.
It's extremely rare for decisions like this to have any (positive) effect on the system performance. Even if current browsers (or whatever your execution environment) show an observable advantage one way or another, that difference is not terribly likely to persist over new releases.
"It's much easier to optimize correct code than to correct optimized code."
Write readable, maintainable code and when it is all correct, check to see whether it is objectionably slow or the files are unreasonably large and make the optimizations.

Ran in console:
function blueprint(var1, var2) {
    this.property1 = var1;
    this.property2 = var2;
}
var start = new Date();
var stop;
var object1;
for (var i = 0; i < 1000000; i++) {
    object1 = new blueprint(1, 1);
}
stop = new Date();
console.log(stop - start);
Results...
Google Chrome: 2832 milliseconds
Firefox 3.6.17: 3441 milliseconds
Ran in console:
var start = new Date();
var stop;
var object1;
for (var i = 0; i < 1000000; i++) {
    object1 = {
        'property1': 1,
        'property2': 1
    };
}
stop = new Date();
console.log(stop - start);
Results...
Google Chrome: 2302 milliseconds
Firefox 3.6.17: 2285 milliseconds
Offhand, it's pretty obvious which one is faster. However, unless you are going through a significant amount of operations I think you should use whatever is more readable and not worry about it.

I think the object initializer will be faster than using a constructor, because the constructor involves a function call and has to maintain its own instance too.
As a side note, use a constructor if you want to create multiple instances of similar objects; otherwise go for an object initializer if only a single object is required.

Using a constructor to create a trivial object with just value properties is counter-productive. Just creating a simple object literal from scratch each time is faster. You can always define a function if it is to be called from lots of different places. Hey you just created a basic constructor function :lol:
If your object becomes non-trivial, for example including getters, setters, or full-blown methods, then a constructor (with the javascript in a prototype to be shared) is orders of magnitude faster than creating an object from scratch. Of course you are talking about a few micro-seconds (on a typical desktop) for creating an object with a small amount of embedded javascript vs less than a microsecond for calling a constructor, so in most cases it isn't important. Creating an object with only value properties is another order of magnitude faster.
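To make the contrast concrete, here is a minimal sketch (the names withGetter and Box are ours, not from the original post) of a per-object accessor versus one shared via a prototype:
// Per-object accessor: the getter is re-created for every literal.
var withGetter = {
    _v: 1,
    get value() { return this._v; }
};
// Shared accessor: defined once on Box.prototype, reused by every instance.
function Box(v) { this._v = v; }
Object.defineProperty(Box.prototype, 'value', {
    get: function () { return this._v; }
});
console.log(withGetter.value, new Box(1).value); // 1 1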
Remember also that the initial creation of the constructor is an expensive operation, which may be more important if it is only to be used a few times. In some cases the constructor can be pre-compiled, for example if it is defined in a javascript code module in a Firefox addon, and then it is a win-win.
There are also more formal methods for creating objects such as the Object.create() function. However this is complicated and cumbersome to use and doesn't appear to be well optimised in any current browser. In all the tests I've run it is desperately slow compared to other methods, but might be useful when you need advanced capabilities and aren't going to be calling it hundreds of times.
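For reference, a minimal sketch of the Object.create() route (property descriptors make it considerably more verbose than a literal or a constructor):
var object1 = Object.create(Object.prototype, {
    property1: {value: value1, writable: true, enumerable: true, configurable: true},
    property2: {value: value2, writable: true, enumerable: true, configurable: true}
});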

The constructor function is used for creating multiple entries under the same "object".
The object initializer should only be used for a limited number of entries, for example 3.
The constructor function is faster for multiple entries, while the object initializer is faster for a few entries, at least in theory; I have not tested the speeds because I doubt the difference is catastrophic.

I wouldn't worry about it. The overhead of the constructor is an additional function call and a few extra properties to set (like the prototype). With modern JIT engines, it should hardly matter.

Related

Is a deep inheritance chain slowing down method lookup in V8 JavaScript engine?

I'm writing a base class for a game in TypeScript. It has functionality like sending messages, resource management, etc.
Inspired by Mixins, I wrote the following code (compiled to JavaScript):
function Messenger(Base) {
    return class Messenger extends Base {
        $dispatch(e) {
            // TODO
        }
    };
}
function ResourceManager(Base) {
    return class ResourceManager extends Base {
        $loadRes(key) {
            // TODO
            return Promise.resolve({});
        }
    };
}
class Component {
}
class GameBase extends Component {
    start() {
        console.log('start');
    }
    init() {
        console.log('init');
    }
}
const Klass = ResourceManager(Messenger(GameBase));
var gg = new Klass();
gg.start();
As far as I know, when I try to call gg.start, the JavaScript engine looks up the prototype chain, which is a little longer in this case and becomes even longer as the mixins grow:
Is this slowing down the method lookup?
Does V8 optimize this lookup process, and can I just ignore the lookup overhead?
V8 developer here. This is a complex issue; the short answer is "it depends".
It is trivially true that having to walk a longer prototype chain when doing a lookup takes more time. However, if that's done only once or twice, then that time is typically too short to matter.
So the next question is: how often will such lookups be performed? V8 tries to cache lookup results whenever it can (search for the term "inline caches" if you want to know more); the effectiveness of such caching, as all caching, critically depends on the number of different cases seen.
So if your code is mostly "monomorphic" (i.e. at any given foo.bar lookup, foo will always have the same type/shape, including same prototype chain), or low-degree polymorphic (up to four different types of foo), then the full prototype chain walk only needs to be done once (or up to four times, respectively), and after that the cached results will be used, so if you execute such code thousands of times, you won't see a performance difference between prototype chains that are one step or hundreds of steps long.
On the other hand, if you have property loads or stores that see many different types (as tends to happen in certain frameworks, where every single lookup goes through some central getProperty(object, property) { /* do some framework stuff, and then: */ return object[property]; } function), then caching becomes useless, and V8 has to perform the full lookup every time. This is particularly slow with long prototype chains, but that said it is always much slower than cacheable cases (even with short prototype chains).
In conclusion, if you're somewhat careful about your overall program design and avoid having many different types at the same code locations, then you can easily afford very long prototype chains. In fact, keeping as much of your code monomorphically typed as possible tends to have significantly more impact than keeping prototype chain lengths short. On the other hand, shorter prototype chain lengths do make the engine's life easier, and personally I'd argue that they can (if you don't overdo it) also improve readability, so all else being equal, I'd suggest to keep your object model as simple as you can.
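As a hypothetical sketch of what "many different types at the same code location" means (the function and shapes below are ours, for illustration only):
function getX(p) { return p.x; }   // a single property-load site
getX({x: 1});                      // shape 1: {x} -- site is monomorphic
getX({x: 1, y: 2});                // shape 2: {x, y} -- now polymorphic
getX({x: 1, z: 3});                // shape 3
getX({x: 1, w: 4});                // shape 4 -- still cacheable
getX({x: 1, q: 5});                // shape 5 -- site likely goes megamorphic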
I wrote a little benchmark to see how much the lookup along the prototype chain would cost (be careful, it will block your browser when clicking on 'Run code snippet'; rather, execute it in Node locally):
function generateObjectWithPrototype(prototype) {
    const f = function() {};
    f.prototype = prototype;
    return new f();
}
const originalObject = new (function() {
    this.doSomething = function() {};
})();
let currentObject = originalObject;
for (let i = 0; i < 60001; i++) {
    currentObject = generateObjectWithPrototype(currentObject);
    const start = +new Date();
    currentObject.doSomething();
    const end = +new Date();
    if (i % 10000 === 0) {
        console.log(`Iteration ${i}: Took ${end - start}ms`);
    }
}
The result:
Iteration 0: Took 0ms
Iteration 10000: Took 0ms
Iteration 20000: Took 1ms
Iteration 30000: Took 1ms
Iteration 40000: Took 2ms
Iteration 50000: Took 3ms
Iteration 60000: Took 4ms
So in this case, for a prototype depth of 60,000, the additional time it takes to find the doSomething() method is roughly 4ms. I would say that's negligible.

How does Bluebird's util.toFastProperties function make an object's properties "fast"?

In Bluebird's util.js file, it has the following function:
function toFastProperties(obj) {
    /*jshint -W027*/
    function f() {}
    f.prototype = obj;
    ASSERT("%HasFastProperties", true, obj);
    return f;
    eval(obj);
}
For some reason, there's a statement after the return statement, and I'm not sure why it's there.
As well, it seems that it is deliberate, as the author had silenced the JSHint warning about this:
Unreachable 'eval' after 'return'. (W027)
What exactly does this function do? Does util.toFastProperties really make an object's properties "faster"?
I've searched through Bluebird's GitHub repository for any comments in the source code or an explanation in their list of issues, but I couldn't find any.
2017 update: First, for readers coming today - here is a version that works with Node 7 (4+):
function enforceFastProperties(o) {
    function Sub() {}
    Sub.prototype = o;
    var receiver = new Sub(); // create an instance
    function ic() { return typeof receiver.foo; } // perform access
    ic();
    ic();
    return o;
    eval("o" + o); // ensure no dead code elimination
}
Sans one or two small optimizations - all the below is still valid.
Let's first discuss what it does and why that's faster and then why it works.
What it does
The V8 engine uses two object representations:
Dictionary mode - in which objects are stored as key-value pairs in a hash map.
Fast mode - in which objects are stored like structs, with no computation involved in property access.
Here is a simple demo that demonstrates the speed difference. Here we use the delete statement to force the objects into slow dictionary mode.
The engine tries to use fast mode whenever possible and generally whenever a lot of property access is performed - however sometimes it gets thrown into dictionary mode. Being in dictionary mode has a big performance penalty so generally it is desirable to put objects in fast mode.
This hack is intended to force the object into fast mode from dictionary mode.
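The original demo link isn't reproduced here, but a rough sketch of the idea looks like this (run with node --allow-natives-syntax; the exact behaviour varies across V8 versions, as the 2021 notes further down show):
var fast = {x: 1, y: 2};
var slow = {x: 1, y: 2};
delete slow.x; // deleting a property can push the object into dictionary mode
console.log(%HasFastProperties(fast)); // true
console.log(%HasFastProperties(slow)); // false, at least on V8 versions of that era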
Bluebird's Petka himself talks about it here.
These slides (wayback machine) by Vyacheslav Egorov also mention it.
The question https://stackoverflow.com/questions/23455678/pros-and-cons-of-dictionary-mode and its accepted answer are also related.
This slightly outdated article is still a fairly good read that can give you a good idea on how objects are stored in v8.
Why it's faster
In JavaScript prototypes typically store functions shared among many instances and rarely change a lot dynamically. For this reason it is very desirable to have them in fast mode to avoid the extra penalty every time a function is called.
For this - v8 will gladly put objects that are the .prototype property of functions in fast mode since they will be shared by every object created by invoking that function as a constructor. This is generally a clever and desirable optimization.
How it works
Let's first go through the code and figure what each line does:
function toFastProperties(obj) {
    /*jshint -W027*/ // suppress the "unreachable code" warning
    function f() {} // declare a new function
    f.prototype = obj; // assign obj as its prototype to trigger the optimization
    // assert the optimization passes, to prevent the code from silently breaking
    // in the future in case this optimization stops working:
    ASSERT("%HasFastProperties", true, obj); // requires the "native syntax" flag
    return f; // return it
    eval(obj); // prevent the function from being optimized away through dead code
               // elimination or further optimizations. This code is never reached,
               // but even using eval in unreachable code causes v8 to not optimize
               // the function.
}
We don't have to find the code ourselves to assert that v8 does this optimization, we can instead read the v8 unit tests:
// Adding this many properties makes it slow.
assertFalse(%HasFastProperties(proto));
DoProtoMagic(proto, set__proto__);
// Making it a prototype makes it fast again.
assertTrue(%HasFastProperties(proto));
Reading and running this test shows us that this optimization indeed works in v8. However - it would be nice to see how.
If we check objects.cc we can find the following function (L9925):
void JSObject::OptimizeAsPrototype(Handle<JSObject> object) {
    if (object->IsGlobalObject()) return;
    // Make sure prototypes are fast objects and their maps have the bit set
    // so they remain fast.
    if (!object->HasFastProperties()) {
        MigrateSlowToFast(object, 0);
    }
}
Now, JSObject::MigrateSlowToFast just explicitly takes the Dictionary and converts it into a fast V8 object. It's a worthwhile read and an interesting insight into v8 object internals - but it's not the subject here. I still warmly recommend that you read it here as it's a good way to learn about v8 objects.
If we check out SetPrototype in objects.cc, we can see that it is called in line 12231:
if (value->IsJSObject()) {
    JSObject::OptimizeAsPrototype(Handle<JSObject>::cast(value));
}
Which in turn is called by FunctionSetPrototype, which is what we get with .prototype =.
Doing __proto__ = or Object.setPrototypeOf would also have worked, but those are ES6 functions, and Bluebird runs on all browsers since Netscape 7, so they were out of the question for simplifying the code here. For example, if we check .setPrototypeOf we can see:
// ES6 section 19.1.2.19.
function ObjectSetPrototypeOf(obj, proto) {
    CHECK_OBJECT_COERCIBLE(obj, "Object.setPrototypeOf");
    if (proto !== null && !IS_SPEC_OBJECT(proto)) {
        throw MakeTypeError("proto_object_or_null", [proto]);
    }
    if (IS_SPEC_OBJECT(obj)) {
        %SetPrototype(obj, proto); // MAKE IT FAST
    }
    return obj;
}
Which is installed directly on Object:
InstallFunctions($Object, DONT_ENUM, $Array(
...
"setPrototypeOf", ObjectSetPrototypeOf,
...
));
So - we have walked the path from the code Petka wrote to the bare metal. This was nice.
Disclaimer:
Remember this is all implementation detail. People like Petka are optimization freaks. Always remember that premature optimization is the root of all evil 97% of the time. Bluebird does something very basic very often so it gains a lot from these performance hacks - being as fast as callbacks isn't easy. You rarely have to do something like this in code that doesn't power a library.
V8 developer here. The accepted answer is a great explanation, I just wanted to highlight one thing: the so-called "fast" and "slow" property modes are unfortunate misnomers, they each have their pros and cons. Here is a (slightly simplified) overview of the performance of various operations:
Operation                              | struct-like properties | dictionary properties
---------------------------------------|------------------------|----------------------
adding a property to an object         | --                     | +
deleting a property                    | ---                    | +
reading/writing a property, first time | -                      | +
reading/writing, cached, monomorphic   | +++                    | +
reading/writing, cached, few shapes    | ++                     | +
reading/writing, cached, many shapes   | --                     | +
colloquial name                        | "fast"                 | "slow"
So as you can see, dictionary properties are actually faster for most of the lines in this table, because they don't care what you do, they just handle everything with solid (though not record-breaking) performance. Struct-like properties are blazing fast for one particular situation (reading/writing the values of existing properties, where every individual place in the code only sees very few distinct object shapes), but the price they pay for that is that all other operations, in particular those that add or remove properties, become much slower.
It just so happens that the special case where struct-like properties have their big advantage (+++) is particularly frequent and really important for many apps' performance, which is why they acquired the "fast" moniker. But it's important to realize that when you delete properties and V8 switches the affected objects to dictionary mode, then it isn't being dumb or trying to be annoying: rather it attempts to give you the best possible performance for what you're doing. We have landed patches in the past that have achieved significant performance improvements by making more objects go to dictionary ("slow") mode sooner when appropriate.
Now, it can happen that your objects would generally benefit from struct-like properties, but something your code does causes V8 to transition them to dictionary properties, and you'd like to undo that; Bluebird had such a case. Still, the name toFastProperties is a bit misleading in its simplicity; a more accurate (though unwieldy) name would be spendTimeOptimizingThisObjectAssumingItsPropertiesWontChange, which would indicate that the operation itself is costly, and it only makes sense in certain limited cases. If someone took away the conclusion "oh, this is great, so I can happily delete properties now, and just call toFastProperties afterwards every time", then that would be a major misunderstanding and cause pretty bad performance degradation.
If you stick with a few simple rules of thumb, you'll never have a reason to even try to force any internal object representation changes:
Use constructors, and initialize all properties in the constructor. (This helps not only your engine, but also understandability and maintainability of your code. Consider that TypeScript doesn't quite force this but strongly encourages it, because it helps engineering productivity.)
Use classes or prototypes to install methods, don't just slap them onto each object instance. (Again, this is a common best practice for many reasons, one of them being that it's faster.)
Avoid delete. When properties come and go, prefer using a Map over the ES5-era "object-as-map" pattern. When an object can toggle into and out of a certain state, prefer boolean (or equivalent) properties (e.g. o.has_state = true; o.has_state = false;) over adding and deleting an indicator property (see the sketch after this list).
When it comes to performance, measure, measure, measure. Before you start sinking time into performance improvements, profile your app to see where the hotspots are. When you implement a change that you hope will make things faster, verify with your real app (or something extremely close to it; not just a 10-line microbenchmark!) that it actually helps.
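To illustrate the third rule, a minimal sketch (names are ours):
// Properties that come and go: use a Map instead of delete-ing object keys.
const cache = new Map();
cache.set('user:1', {name: 'Ada'});
cache.delete('user:1');        // cheap, and no object shape is changed
// Toggling state: flip a boolean instead of adding/removing a property.
const order = {id: 1, is_shipped: false};
order.is_shipped = true;       // rather than order.shipped = {...}; delete order.shipped;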
Lastly, if your team lead tells you "I've heard that there are 'fast' and 'slow' properties, please make sure that all of ours are 'fast'", then point them at this post :-)
Reality from 2021 (Node.js version 12+).
It seems like a huge optimization has been done; objects with deleted fields and sparse arrays don't become slow. Or am I missing something?
// run in Node with the flag enabled:
// node --allow-natives-syntax script.js
function Point(x, y) {
    this.x = x;
    this.y = y;
}
var obj1 = new Point(1, 2);
var obj2 = new Point(3, 4);
delete obj2.y;
var arr = [1, 2, 3];
arr[100] = 100;
console.log('obj1 has fast properties:', %HasFastProperties(obj1));
console.log('obj2 has fast properties:', %HasFastProperties(obj2));
console.log('arr has fast properties:', %HasFastProperties(arr));
All three show true:
obj1 has fast properties: true
obj2 has fast properties: true
arr has fast properties: true
// run in Node with the flag enabled:
// node --allow-natives-syntax script.js
function Point(x, y) {
    this.x = x;
    this.y = y;
}
var obj2 = new Point(3, 4);
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); // true
delete obj2.y;
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); // true
var obj = {x: 1, y: 2};
console.log('obj has fast properties:', %HasFastProperties(obj)); // true
delete obj.x;
console.log('obj has fast properties:', %HasFastProperties(obj)); // false
Constructor-created objects and object literals behave differently here.

How big are JavaScript function objects?

I was just wondering how the overhead is on a function object.
In an OOP design model, you can spawn a lot of objects, each with their own private functions, but in a case where you have 10,000+ of them, these private function objects, I assume, can add up to a lot of overhead.
I'm wondering if there are cases where it would be advantageous enough to move these functions to a utility class or external manager to save the memory taken up by these function objects.
This is how Chrome handles functions, and other engines may do different things.
Let's look at this code:
var funcs = [];
for (var i = 0; i < 1000; i++) {
    funcs.push(function f() {
        return 1;
    });
}
for (var i = 0; i < 1000; i++) {
    funcs[0]();
}
http://jsfiddle.net/7LS6B/4/
Now, the engine creates 1000 functions.
The individual function itself takes up almost no memory at all (36 bytes in this case), since it merely holds a pointer to a so-called SharedFunctionInfo object, which is basically a reference to the function definition in your source code*. This is called lazy parsing.
Only when you run it frequently does the JIT kick in and create a compiled version of the function, which requires more memory. So, funcs[0] takes up 256 bytes in the end:
*) This is not exactly true. It also holds scope information and the function's name and other metadata, which is why it has a size of 592 bytes in this case.
First of all, it's common to place methods in the object constructor prototype, so they'll be shared among all instances of a given object:
function MyObject() {
    ....
}
MyObject.prototype.do_this = function() {
    ...
}
MyObject.prototype.do_that = function() {
    ...
}
Also note that a "function object" is a constant code-only block or a closure; in both cases the size is not related to the code:
x = [];
for (var i = 0; i < 1000; i++) {
    x.push(function(){ ... });
}
The size of each element of the array is not going to depend on the code size, because the code itself will be shared between all of the function object instances. Some memory will be required for each of the 1000 instances, but it would be roughly the same amount required by other objects like strings or arrays and not related to how much code is present inside the function.
Things would be different if you create functions using JavaScript's eval: In that case I'd expect each function to take quite a bit and proportional to code size unless some super-smart caching and sharing is done also at this level.
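To make the sharing point concrete, a small sketch: a thousand closures built from one function expression share a single compiled code object, while each keeps its own captured scope.
function makeCounter() {
    var n = 0;                           // per-closure state
    return function () { return ++n; };  // code is compiled once and shared
}
var counters = [];
for (var i = 0; i < 1000; i++) {
    counters.push(makeCounter());
}
console.log(counters[0](), counters[0](), counters[1]()); // 1 2 1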
Function objects do in fact take up a lot of space. Objects themselves may not take up much room as shown below but Function objects seem to take up considerably more. In order to test this, I used Function("return 2;") in order to create an array of anonymous functions.
The result was as implied by the OP: these do in fact take up space.
Created
Creating 100,000 of these Function()'s caused 75.4 MB to be used, starting from 0. I ran this test in a more controlled environment. That works out to roughly 754 bytes per function object. And these are empty; larger function objects may surpass 1 KB, which becomes significant very quickly. Spinning up the 75 MB was non-trivial on the client and caused a nearly 4-second lock of the UI.
Here is the script I used to create the function objects:
fs = [];
for (var i = 0; i < 100000; i++) {
    fs.push(Function("return 2;"));
}
Calling these functions also affects memory levels. Calling the functions added an additional 34MB of memory use.
Called
This is what I used to call them:
for (var i = 0; i < fs.length; i++) {
    for (var a = 0; a < 1000; a++) {
        fs[i]();
    }
}
Using jsFiddle in edit mode makes it hard to get accurate results; I would suggest embedding it.
Embedded jsFiddle Demo
These statements are incorrect, I left them to allow the comments to retain context.
Function objects don't take very much space at all. The operating system and memory available are going to be what decides in the end how this memory is managed. This is not going to really impact anything on a scale which you should be worried about.
When loaded on my computer, a relatively blank jsfiddle consumed 5.4MB of memory. After creating 100,000 function objects it jumped to 7.5MB. This seems to be an insignificant amount of memory per function object (the implication being 21 bytes per function object: 7.5M-5.4M / 100k).
jsFiddle Demo

What makes my.class.js so fast? [closed]

I've been looking at the source code of my.class.js to find out what makes it so fast on Firefox. Here's the snippet of code used to create a class:
my.Class = function () {
    var len = arguments.length;
    var body = arguments[len - 1];
    var SuperClass = len > 1 ? arguments[0] : null;
    var hasImplementClasses = len > 2;
    var Class, SuperClassEmpty;
    if (body.constructor === Object) {
        Class = function () {};
    } else {
        Class = body.constructor;
        delete body.constructor;
    }
    if (SuperClass) {
        SuperClassEmpty = function () {};
        SuperClassEmpty.prototype = SuperClass.prototype;
        Class.prototype = new SuperClassEmpty();
        Class.prototype.constructor = Class;
        Class.Super = SuperClass;
        extend(Class, SuperClass, false);
    }
    if (hasImplementClasses)
        for (var i = 1; i < len - 1; i++)
            extend(Class.prototype, arguments[i].prototype, false);
    extendClass(Class, body);
    return Class;
};
The extend function is simply used to copy the properties of the second object onto the first (optionally overriding existing properties):
var extend = function (obj, extension, override) {
    var prop;
    if (override === false) {
        for (prop in extension)
            if (!(prop in obj))
                obj[prop] = extension[prop];
    } else {
        for (prop in extension)
            obj[prop] = extension[prop];
        if (extension.toString !== Object.prototype.toString)
            obj.toString = extension.toString;
    }
};
The extendClass function copies all the static properties onto the class, as well as all the public properties onto the prototype of the class:
var extendClass = my.extendClass = function (Class, extension, override) {
    if (extension.STATIC) {
        extend(Class, extension.STATIC, override);
        delete extension.STATIC;
    }
    extend(Class.prototype, extension, override);
};
This is all pretty straightforward. When you create a class, it simply returns the constructor function you provide it.
What beats my understanding however is how does creating an instance of this constructor execute faster than creating an instance of the same constructor written in Vapor.js.
This is what I'm trying to understand:
How do constructors of libraries like my.class.js create so many instances so quickly on Firefox? The constructors of the libraries are all very similar. Shouldn't the execution time also be similar?
Why does the way the class is created affect the execution speed of instantiation? Aren't definition and instantiation separate processes?
Where is my.class.js gaining this speed boost from? I don't see any part of the constructor code which should make it execute any faster. In fact traversing a long prototype chain like MyFrenchGuy.Super.prototype.setAddress.call should slow it down significantly.
Is the constructor function being JIT compiled? If so then why aren't the constructor functions of other libraries also being JIT compiled?
I don't mean to offend anyone, but this sort of thing really isn't worth the attention, IMHO. Almost any speed-difference between browsers is down to the JS engine. The V8 engine is very good at memory management, for example; especially when you compare it to IE's JScript engines of old.
Consider the following:
var closure = (function()
{
    var closureVar = 'foo',
        someVar = 'bar',
        returnObject = {publicProp: 'foobar'};
    returnObject.getClosureVar = function()
    {
        return closureVar;
    };
    return returnObject;
}());
Last time I checked, chrome actually GC'ed someVar, because it wasn't being referenced by the return value of the IIFE (referenced by closure), whereas both FF and Opera kept the entire function scope in memory.
In this snippet, it doesn't really matter, but for libs that are written using the module-pattern (AFAIK, that's pretty much all of them) that consist of thousands of lines of code, it can make a difference.
Anyway, modern JS-engines are more than just "dumb" parse-and-execute things. As you said: there's JIT compilation going on, but there's also a lot of trickery involved to optimize your code as much as possible. It could very well be that the snippet you posted is written in a way that FF's engine just loves.
It's also quite important to remember that there is some sort of speed-battle going on between Chrome and FF about who has the fastest engine. Last time I checked Mozilla's Rhino engine was said to outperform Google's V8, if that still holds true today, I can't say... Since then, both Google and Mozilla have been working on their engines...
Bottom line: speed differences between various browsers exist - nobody can deny that, but a single point of difference is insignificant: you'll never write a script that does just one thing over and over again. It's the overall performance that matters.
You have to keep in mind that JS is a tricky bugger to benchmark, too: just open your console, write some recursive function, and run it 100 times, in FF and Chrome. Compare the time it takes for each recursion, and the overall run. Then wait a couple of hours and try again... sometimes FF might come out on top, whereas other times Chrome might be faster, still. I've tried it with this function:
var bench = (function()
{
    var mark = {start: [new Date()],
                end: [undefined]},
        i = 0,
        rec = function(n)
        {
            return +(n === 1) || rec(n%2 ? n*3+1 : n/2);
            //^^ Unmaintainable, but fun code ^^\\
        };
    while(i++ < 100)
    {//new Date at start, call recursive function, new Date at end of recursion
        mark.start[i] = new Date();
        rec(1000);
        mark.end[i] = new Date();
    }
    mark.end[0] = new Date();//after 100 rec calls: first element of start array vs first of end array
    return mark;
}());
But now, to get back to your initial question(s):
First off: the snippet you provided doesn't quite compare to, say, jQuery's $.extend method: there's no real cloning going on, let alone deep cloning. It doesn't check for circular references at all, which most other libs I've looked into do. Checking for circular references slows the entire process down, but it can come in handy from time to time (example 1 below). Part of the performance difference could be explained by the fact that this code simply does less, so it needs less time.
Secondly: Declaring a constructor (classes don't exist in JS) and creating an instance are, indeed, two different things (though declaring a constructor is in itself creating an instance of an object, a Function instance to be exact). The way you write your constructor can make a huge difference, as shown in example 2 below. Again, this is a generalization, and might not apply to certain use-cases on certain engines: V8, for example, tends to create a single function object for all instances, even if that function is part of the constructor - or so I'm told.
Thirdly: Traversing a long prototype chain, as you mention, is not as unusual as you might think; far from it, actually. You're constantly traversing chains of two or three prototypes, as shown in example 3. This shouldn't slow you down, as it's just inherent to the way JS resolves function calls and expressions.
Lastly: It's probably being JIT-compiled, but saying that other libs aren't JIT-compiled just doesn't stack up. They might, then again, they might not. As I said before: different engines perform better at some tasks then other... it might be the case that FF JIT-compiles this code, and other engines don't.
The main reasons I can see why other libs wouldn't be JIT-compiled are: checking for circular references, deep-cloning capabilities, and dependencies (i.e. the extend method is used all over the place, for various reasons).
Example 1:
var shallowCloneCircular = function(obj)
{//clone object, check for circular references
    function F(){};
    var clone, prop;
    F.prototype = obj;
    clone = new F();
    for (prop in obj)
    {//only copy properties inherent to the instance, rely on the prototype chain for all others
        if (obj.hasOwnProperty(prop))
        {//the ternary deals with circular references
            clone[prop] = obj[prop] === obj ? clone : obj[prop];//if a property references the object itself, make the clone reference the clone, not the original object!
        }
    }
    return clone;
};
This function clones an object's first level; all objects that are referenced by a property of the original object will still be shared. A simple fix would be to call the function above recursively, but then you'll have to deal with the nasty business of circular references at all levels:
var circulars = {foo: bar};
circulars.circ1 = circulars;//simple circular reference, we can deal with this
circulars.mess = {gotcha: circulars};//circulars.mess.gotcha ==> circular reference, too
circulars.messier = {messiest: circulars.mess};//oh dear, this is hell
Of course, this isn't the most common of situations, but if you want to write your code defensively, you have to acknowledge the fact that many people write mad code all the time...
Example 2:
function CleanConstructor()
{};
CleanConstructor.prototype.method1 = function()
{
    //do stuff...
};
var foo = new CleanConstructor(),
    bar = new CleanConstructor();
console.log(foo === bar);//false, we have two separate instances
console.log(foo.method1 === bar.method1);//true: the function object referenced by method1 has only been created once
//as opposed to:
function MessyConstructor()
{
    this.method1 = function()
    {//do stuff
    };
}
var foo = new MessyConstructor(),
    bar = new MessyConstructor();
console.log(foo === bar);//false, as before
console.log(foo.method1 === bar.method1);//false! for each instance, a new function object is constructed, too: bad performance!
In theory, declaring the first constructor is slower than the messy way: the function object referenced by method1 is created before a single instance has been created. The second example doesn't create method1 except when the constructor is called. But the downsides are huge: forget the new keyword in the first example, and all you get is a return value of undefined. The second constructor creates a global function object when you omit the new keyword, and of course creates new function objects for each call. You have a constructor (and a prototype) that is, in fact, idling... Which brings us to example 3.
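(A quick aside before example 3: here is the "forgot new" pitfall just described, as a minimal sketch; this assumes sloppy mode, where this in a plain function call is the global object.)
var oops = CleanConstructor();   // no `new`: the constructor returns nothing
console.log(oops);               // undefined
var messy = MessyConstructor();  // no `new`: `this` is the global object here
console.log(typeof method1);     // "function" -- method1 leaked onto the global scope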
Example 3:
var foo = [];//create an array - empty
console.log(foo[123]);//logs undefined.
Ok, so what happens behind the scenes: foo references an object, an instance of Array, which in turn inherits from the Object prototype (just try Object.getPrototypeOf(Array.prototype)). It stands to reason, therefore, that an Array instance works in pretty much the same way as any object, so:
foo[123] ===> JS checks the instance for property "123" (which is coerced to a string, BTW)
         ---> property not found on the instance, check its prototype (Array.prototype)
         ===> Array.prototype[123] could not be found, check its prototype
         ===> Object.prototype[123]: not found, check its prototype
         ===> prototype is null, return undefined
In other words, a chain like you describe isn't too far-fetched or uncommon. It's how JS works, so expecting that to slow things down is like expecting your brain to fry because you're thinking: yes, you can get worn out by thinking too much, but just know when to take a break. Just like in the case of prototype chains: they're great, just know that they are a tad slower, yes...
I'm not entirely sure, but I do know that when programming, it is good practice to make the code as small as possible without sacrificing functionality. I like to call it minimalist code.
This can be a good reason to obfuscate code. Obfuscation shrinks the file size by using shorter method and variable names, which makes it harder to reverse-engineer and faster to download, as well as providing a potential performance boost. Google's JavaScript code is intensely obfuscated, and that contributes to its speed.
So in JavaScript, bigger isn't always better. When I find a way I can shrink my code, I implement it immediately, because I know it will benefit performance, even if by the smallest amount.
For example, using the var keyword in a function where the variable isn't needed outside the function helps garbage collection, which provides a very small speed boost versus keeping the variable in memory.
With a library like this that performs "millions of operations per second" (Blaise's words), small performance boosts can add up to a noticeable/measurable difference.
So it is possible that my.class.js is "minimalist coded" or optimized in some manner. It could even be the var keywords.
I hope this helped somewhat. If it didn't help, then I wish you luck in getting a good answer.

JavaScript's Statement Performance Questions

Can you guys help me determine the performance difference of each of these
statements? Which one would you use?
1. Making a new array: var new_list = new Array(); or var new_list = [];
2. Appending an element: push('a') or new_list[i] (if I know the length)
3. Ternary operator or if () {} else {}
4. Making an isodd function, which is faster: (!(is_even)) or (x % 2 != 0)
5. forEach() or normal iteration
6. One more: a = b = 3; or b = 3; a = b;
[edit: I'm making a Math Library, so discussions of any performance hacks are also welcome :)]
Thanks for your help.
I've always assumed that since (x&1) is a bitwise operation, it would be the fastest way to check for even/odd numbers, rather than checking for the remainder of the number.
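For reference, the bitwise check mentioned above looks like this (a micro-optimization at best; measure before relying on it):
function isOddBitwise(x) { return (x & 1) === 1; }
function isOddModulo(x) { return x % 2 !== 0; }
console.log(isOddBitwise(7), isOddModulo(7)); // true true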
Performance characteristics across browsers (especially at the level of individual library functions) can vary dramatically, so it's difficult to give really meaningful answers to these questions.
Anyhoo, just looking at the fast js engines (so Nitro, TraceMonkey, and V8)
[ ] will be faster than new Array -- new Array turns into the following logic:
1. cons = look up the property "Array"; if it can't be found, throw an exception.
2. Check whether cons can be used as a constructor; if not, throw an exception.
3. thisVal = the runtime creates a new object directly.
4. res = the result of calling cons, passing thisVal as the value for this -- which requires logic to distinguish JS functions from standard runtime functions (assuming standard runtime functions aren't implemented in JS, which is the normal case). In this case Array is a native constructor which will create and return a new runtime array object.
5. If res is undefined or null, the final result is thisVal; otherwise the final result is res. In the case of calling Array, a new array object will be returned and thisVal will be thrown away.
[ ] just tells the JS engine to directly create a new runtime array object immediately, with no additional logic. This means new Array has a large amount of additional (not very cheap) logic, and performs an extra, unnecessary object allocation.
newlist[newlist.length] = ... is faster (esp. if newlist is not a sparse array), but push is sufficiently common that I expect engine developers to put quite a bit of effort into improving its performance, so this could change in time.
If you have a tight enough loop there may be a very slight win for the ternary operator, but arguably that's an engine flaw in the trivial case of a = b ? c : d vs if (b) a = c; else a = d
Just the function call overhead alone will dwarf the cost of more or less any JS operator, at least in the sane cases (eg. you're performing arithmetic on numbers rather than objects)
The foreach syntax isn't yet standardised, but its final performance will depend on a large number of details. JS semantics often result in efficient-looking statements being less efficient -- e.g. for (var i in array) ... is vastly slower than for (var i = 0; i < array.length; i++) ..., as the JS semantics require the in enumeration to build up a list of all properties on the object (including the prototype chain) and then to check that each property is still on the object before sending it through the loop. Oh, and the properties need to be converted from integers (in the array case anyway) into strings, which costs time and memory.
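A quick sketch contrasting the two loop styles from the previous paragraph (note that for-in yields string keys and also walks inherited enumerable properties):
var array = [10, 20, 30];
for (var i = 0; i < array.length; i++) {
    console.log(typeof i, array[i]); // "number" 10, then 20, then 30
}
for (var k in array) {
    console.log(typeof k, array[k]); // "string" -- keys are "0", "1", "2"
}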
I'd suggest you code a simple script like:
for (var i = 0; i < 1000; i++) {
    // Test your code here.
}
You can benchmark whatever you want that way, possibly adding timing functions before and after the for statement to be more accurate.
Of course you'll need to tweak the upper limit (1000 in this example) depending on the nature of your operations - some will require more iterations, others less.
1. Both are native constructors; probably no difference.
2. push is faster; it maps directly to a native method, whereas [] is evaluative.
3. Probably not much of a difference, but technically they don't do the same thing, so it's not apples to apples.
4. x % 2 skips the function call, which is relatively slow.
5. I've heard, though I can't find the link at the moment, that iteration is faster than forEach, which was surprising to me.
Edit: On #5, I believe the reason is related to this: forEach iterates in forward order, which requires the incrementor to count forward, whereas for loops are ever so infinitesimally faster when run backward:
for (var i = a.length; i > -1; i--) {
    // do whatever
}
the above is slightly faster than:
for (var i = 0; i < a.length; i++) {
    // do whatever
}
As other posters suggest, I think doing some rough benchmarking is your best bet... however, I'd also note that you'll probably get very different results from different browsers, since I'm sure most of the questions you're asking come down to specific internal implementation of the language constructs rather than the language itself.
This page says push is slower.
http://dev.opera.com/articles/view/efficient-javascript/?page=2
