Can a JavaScript object's child reference itself? - javascript

I have a JavaScript object Team and a Score which represent points and some other functions. I want to know if it's safe to store the team in the score at the same time as storing the score in the team.
var Score = function(team){
this.team = team;
this.points = 0;
...
}
var team = {
name : 'Team 1',
}
team.score = new Score(team);
The result of this is that team.score.team.score.team.score.team.score.points logs 0. This is perfect for what I am programming; however, does it represent a dangerous setup that may crash older browsers or cause any other issues? It looks exactly like an infinite loop, yet Chrome seems to be handling it fine.
Are there any reasons why I shouldn't do this?

Good question by the way.
This is called circular referencing: you are creating a reference that, when followed, leads back to the same object.
Garbage collection in browsers: the garbage collector's job is to free memory occupied by objects that are no longer in use. But consider what circular references mean for it:
An object is said to reference another object if the former has an
access to the latter (either implicitly or explicitly). For instance,
a JavaScript object has a reference to its prototype (implicit
reference) and to its properties values (explicit reference)
(Source MDN)
Under a naive reference-counting collector, two objects that reference each other always have a non-zero reference count, so they are never freed, which is a memory leak.
As MDN explains, the mark-and-sweep algorithm used by modern engines handles this case: an object is collected once it is no longer reachable from the roots, even if it participates in a cycle.
Circular referencing was a real problem in IE < 8, where it made the browser leak memory (particularly for cycles between DOM nodes and JavaScript objects, which the old reference-counting collector could not reclaim). IBM's article on JavaScript memory leak patterns sheds light on circular-reference leaks, with examples and clarity on the subject.
Final verdict: it is better to avoid circularly referenced objects and use them only when genuinely needed, at the programmer's discretion. Modern browsers handle them efficiently, but it is still good practice not to write code that invites unwanted memory consumption and leaks.
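If you do keep the back-reference, the thing most likely to bite you in practice is serialization rather than garbage collection; a minimal sketch, reusing the team/score shape from the question:

// Sketch: the cycle itself is fine for the garbage collector, but
// JSON.stringify cannot represent it and throws.
var team = { name: 'Team 1' };
team.score = { team: team, points: 0 };

try {
  JSON.stringify(team);
} catch (e) {
  console.log(e.message); // "Converting circular structure to JSON" (wording varies by engine)
}

// Omitting the back-reference with a replacer (or breaking the cycle) fixes it:
console.log(JSON.stringify(team, function (key, value) {
  return key === 'team' ? undefined : value; // drop the back-reference
}));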

Diagrammatic Representation of Circular Referencing
Consider the code snippet below:
const obj = {
id: 1
};
obj.cirRef = obj;
console.log(obj.cirRef === obj); // true
console.log(obj.cirRef.cirRef === obj); // true
console.log(obj.cirRef.cirRef.cirRef.cirRef.id); // 1
Here's the picture the diagram conveys: obj is a box whose cirRef property is an arrow pointing back at the same box.
Follow that arrow as many times as you like and you always land back on obj, so an expression like obj.cirRef.cirRef.id evaluates to 1.

var Score = function (team, points) {
  this.team = team;
  this.points = points || 0;
  ...
}
var team = {
  name : 'Team 1'
}
team.score = new Score(team, 0);
Try this, maybe it can help you.

Related

WeakSet: garbage collection doesn't work? [duplicate]

The WeakSet is supposed to store elements by weak reference. That is, if an object is not referenced by anything else, it should be cleaned from the WeakSet.
I have written the following test:
var weakset = new WeakSet(),
numbers = [1, 2, 3];
weakset.add(numbers);
weakset.add({name: "Charlie"});
console.log(weakset);
numbers = undefined;
console.log(weakset);
Even though my [1, 2, 3] array is not referenced by anything, it's not being removed from the WeakSet. The console prints:
WeakSet {[1, 2, 3], Object {name: "Charlie"}}
WeakSet {[1, 2, 3], Object {name: "Charlie"}}
Why is that?
Plus, I have one more question. What is the point of adding objects to WeakSets directly, like this:
weakset.add({name: "Charlie"});
Are those Traceur's glitches or am I missing something?
And finally, what is the practical use of WeakSet if we cannot even iterate through it nor get the current size?
it's not being removed from the WeakSet. Why is that?
Most likely because the garbage collector has not yet run. However, you say you are using Traceur, so it just might be that they're not properly supported. I wonder how the console can show the contents of a WeakSet anyway.
What is the point of adding objects to WeakSets directly?
There is absolutely no point in adding object literals to WeakSets.
What is the practical use of WeakSet if we cannot even iterate through it nor get the current size?
All you can get is one bit of information: Is the object (or generically, value) contained in the set?
This can be useful in situations where you want to "tag" objects without actually mutating them (setting a property on them). Lots of algorithms contain some sort of "if x was already seen" condition (a JSON.stringify cycle detection might be a good example), and when you work with user-provided values the use of a Set/WeakSet would be advisable. The advantage of a WeakSet here is that its contents can be garbage-collected while your algorithm is still running, so it helps to reduce memory consumption (or even prevents leaks) when you are dealing with lots of data that is lazily (possibly even asynchronously) produced.
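For instance, a cycle detector along the lines described above might look like this (a sketch with assumed names, not how JSON.stringify is actually specified):

// Sketch: detect cycles in a nested structure using a WeakSet as the
// "already seen on this path" tag, without mutating the user's objects.
function hasCycle(value, ancestors = new WeakSet()) {
  if (value === null || typeof value !== 'object') return false;
  if (ancestors.has(value)) return true;      // back on an object we're still inside: cycle
  ancestors.add(value);
  const found = Object.values(value).some(v => hasCycle(v, ancestors));
  ancestors.delete(value);                    // done with this branch
  return found;
}

const a = { name: 'a' };
a.self = a;
console.log(hasCycle(a));               // true
console.log(hasCycle({ x: { y: 1 } })); // false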
This is a really hard question. To be completely honest I had no idea in the context of JavaScript so I asked in esdiscuss and got a convincing answer from Domenic.
WeakSets are useful for security and validation reasons, for example if you want to be able to isolate a piece of JavaScript. They allow you to tag an object to indicate it belongs to a special set of objects.
Let's say I have a class ApiRequest:
class ApiRequest {
constructor() {
// bring object to a consistent state, use platform code you have no direct access to
}
makeRequest() {
// do work
}
}
Now, I'm writing a JavaScript platform - my platform allows you to run JavaScript to make calls - to make those calls you need an ApiRequest - I only want you to make ApiRequests with the objects I give you so you can't bypass any constraints I have in place.
However, at the moment nothing is stopping you from doing:
ApiRequest.prototype.makeRequest.call(null, args); // make request as function
Object.create(ApiRequest.prototype).makeRequest(); // no initialization
function Foo(){}; Foo.prototype = ApiRequest.prototype; new Foo().makeRequest(); // no super
And so on. Note that you can't keep a normal list or array of ApiRequest objects, since that would prevent them from being garbage collected. Other than what's hidden in a closure, anything can be reached with public methods like Object.getOwnPropertyNames or Object.getOwnPropertySymbols. So you one-up me and do:
const requests = new WeakSet();
class ApiRequest {
constructor() {
requests.add(this);
}
makeRequest() {
if(!requests.has(this)) throw new Error("Invalid access");
// do work
}
}
Now, no matter what I do - I must hold a valid ApiRequest object to call the makeRequest method on it. This is impossible without a WeakMap/WeakSet.
So in short - WeakMaps are useful for writing platforms in JavaScript. Normally this sort of validation is done on the C++ side but adding these features will enable moving and making things in JavaScript.
(Of course, everything a WeakSet does a WeakMap that maps values to true can also do, but that's true for any map/set construct)
(Like Bergi's answer suggests, there is never a reason to add an object literal directly to a WeakMap or a WeakSet)
By definition, WeakSet has only three key functionalities
Weakly link an object into the set
Remove a link to an object from the set
Check if an object has already been linked to the set
Sound familiar?
In some applications, developers need a quick way to iterate through a stream of data that contains a lot of redundancy, picking out only the items that have not been processed before (i.e. are unique). A WeakSet can help. See the example below:
var processedBag = new WeakSet();
var nextObject = getNext();
while (nextObject !== null){
// Check if already processed this similar object?
if (!processedBag.has(nextObject)){
// If not, process it and memorize
process(nextObject);
processedBag.add(nextObject);
}
nextObject = getNext();
}
One of the best data structures for the application above is a Bloom filter, which copes very well with massive data sizes. However, you can use a WeakSet for this purpose as well.
A "weak" set or map is useful when you need to keep an arbitrary collection of things but you don't want their presence in the collection from preventing those things from being garbage-collected if memory gets tight. (If garbage collection does occur, the "reaped" objects will silently disappear from the collection, so you can actually tell if they're gone.)
They are excellent, for example, for use as a look-aside cache: "have I already retrieved this record, recently?" Each time you retrieve something, put it into the map, knowing that the JavaScript garbage collector will be the one responsible for "trimming the list" for you, and that it will automatically do so in response to prevailing memory conditions (which you can't reasonably anticipate).
The only drawback is that these types are not "enumerable." You can't iterate over a list of entries – probably because this would likely "touch" those entries and so defeat the purpose. But, that's a small price to pay (and you could, if need be, "code around it").
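A minimal sketch of that look-aside cache idea, keyed by the record objects themselves (expensiveDerivation is a made-up stand-in for real work):

// Sketch: cache derived data per record object. The cache entry lives and
// dies with the record itself; the garbage collector trims the map for us.
const derivedCache = new WeakMap();

function expensiveDerivation(record) {        // stand-in for real work
  return { summary: JSON.stringify(record).length };
}

function getDerived(record) {
  if (!derivedCache.has(record)) {
    derivedCache.set(record, expensiveDerivation(record));
  }
  return derivedCache.get(record);
}

const rec = { id: 42, payload: 'lots of data' };
console.log(getDerived(rec).summary);         // computed once, then served from the WeakMap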
WeakSet is a simplification of WeakMap for cases where the value is always going to be boolean true. It allows you to tag JavaScript objects so that you only do something with them once, or to maintain their state with respect to a certain process. In theory, as it doesn't need to hold a value, it should use a little less memory and perform slightly faster than WeakMap.
var [touch, untouch] = (() => {
var seen = new WeakSet();
return [
value => seen.has(value) || (seen.add(value), !1),
value => !seen.has(value) || (seen.delete(value), !1)
];
})();
function convert(object) {
if(touch(object)) return;
extend(object, yunoprototype); // Made up.
};
function unconvert(object) {
if(untouch(object)) return;
del_props(object, Object.keys(yunoprototype)); // Never do this IRL.
};
Your console was probably showing the contents because garbage collection had not taken place yet; since the objects had not been collected, they still appeared in the WeakSet.
If you really want to see whether a WeakSet still holds a reference to a certain object, use the WeakSet.prototype.has() method. As the name implies, it returns a boolean indicating whether the object is in the WeakSet.
Example:
var weakset = new WeakSet(),
numbers = [1, 2, 3];
weakset.add(numbers);
weakset.add({name: "Charlie"});
console.log(weakset.has(numbers)); // true
numbers = undefined;
console.log(weakset.has(numbers)); // false - but only because has(undefined) is always false, not because the array has been collected
Let me answer the first part, and try to avoid confusing you further.
The garbage collection of dereferenced objects is not observable! It would be a paradox, because you need an object reference to check if it exists in a map. But don't trust me on this, trust Kyle Simpson:
https://github.com/getify/You-Dont-Know-JS/blob/1st-ed/es6%20%26%20beyond/ch5.md#weakmaps
The problem with a lot of explanations I see here, is that they re-reference a variable to another object, or assign it a primitive value, and then check if the WeakMap contains that object or value as a key. Of course it doesn't! It never had that object/value as a key!
So the final piece to this puzzle: why does inspecting the WeakMap in a console still show all those objects there, even after you've removed all of your references to those objects? Because the console itself keeps persistent references to those Objects, for the purpose of being able to list all the keys in the WeakMap, because that is something that the WeakMap itself cannot do.
While searching for use cases of WeakSet I found these points:
"The WeakSet is weak, meaning references to objects in a WeakSet are held weakly. If no other references to an object stored in the WeakSet exist, those objects can be garbage collected."
They are black boxes: we only get any data out of a WeakSet if we have both the WeakSet and a value.
Use cases:
1 - to avoid bugs
2 - it can be very useful in general to avoid any object being visited/set up twice
Reference: https://esdiscuss.org/topic/actual-weakset-use-cases
3 - The contents of a WeakSet can be garbage collected.
4 - Possibility of lowering memory utilization.
Reference: https://www.geeksforgeeks.org/what-is-the-use-of-a-weakset-object-in-javascript/
Example on WeakSet: https://exploringjs.com/impatient-js/ch_weaksets.html
I advise you to learn more about the weak concept in JS: https://blog.logrocket.com/weakmap-weakset-understanding-javascript-weak-references/

How to store a weak reference to an object? [duplicate]

Is there any way in JavaScript to create a "weak reference" to another object? Here is the wiki page describing what a weak reference is. Here is another article that describes them in Java. Can anyone think of a way to implement this behavior in JavaScript?
Update: Since July 2020 some implementations (Chrome, Edge, Firefox and Node.js) have had support for WeakRefs as defined in the WeakRefs proposal, which is a "Stage 3 Draft" as of December 16, 2020.
There is no language support for weakrefs in JavaScript. You can roll your own using manual reference counting, but not especially smoothly. You can't make a proxy wrapper object, because in JavaScript objects never know when they're about to be garbage-collected.
So your 'weak reference' becomes a key (e.g. an integer) in a simple lookup, with an add-reference and a remove-reference method, and when there are no manually-tracked references any more the entry can be deleted, leaving future lookups on that key to return null.
This is not really a weakref, but it can solve some of the same problems. It's typically done in complex web applications to prevent memory leakage from browsers (typically IE, especially older versions) when there is a reference loop between a DOM Node or event handler, and an object associated with it such as a closure. In these cases a full reference-counting scheme may not even be necessary.
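A rough sketch of that manual reference-counting lookup (the WeakRegistry name and its API are invented for illustration; nothing is automatic, callers must pair addRef/release correctly):

// Sketch of a manually reference-counted "weak reference" emulation.
var WeakRegistry = (function () {
  var entries = {}, counts = {}, nextKey = 1;
  return {
    put: function (obj) {                 // store object, return an integer key
      var key = nextKey++;
      entries[key] = obj;
      counts[key] = 1;
      return key;
    },
    addRef: function (key) { if (counts[key]) counts[key]++; },
    release: function (key) {
      if (counts[key] && --counts[key] === 0) {
        delete entries[key];              // "collected": future gets return null
        delete counts[key];
      }
    },
    get: function (key) { return entries[key] || null; }
  };
})();

var key = WeakRegistry.put({ big: 'object' });
console.log(WeakRegistry.get(key)); // { big: 'object' }
WeakRegistry.release(key);
console.log(WeakRegistry.get(key)); // null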
When running JS on NodeJS, you may consider https://github.com/TooTallNate/node-weak.
Update: September 2019
It is not possible to use weak references yet, but most likely soon it will be possible, as WeakRefs in JavaScript are Work In Progress. Details below.
Proposal
The proposal is now in Stage 3, which means that it has a complete specification and that further refinement will require feedback from implementations and users.
The WeakRef proposal encompasses two major new pieces of functionality:
Creating weak references to objects with the WeakRef class
Running user-defined finalizers after objects are garbage-collected, with the FinalizationGroup class
Use cases
A primary use for weak references is to implement caches or mappings holding large objects, where it’s desired that a large object is not kept alive solely because it appears in a cache or mapping.
Finalization is the execution of code to clean up after an object that has become unreachable to program execution. User-defined finalizers enable several new use cases, and can help prevent memory leaks when managing resources that the garbage collector doesn't know about.
Source and further reading
https://github.com/tc39/proposal-weakrefs
https://v8.dev/features/weak-references
2021 Update
WeakRef is now implemented in Chrome, Edge, and Firefox. Still waiting on Safari and some other holdouts.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakRef
May 2021 Update
It's now available on Safari thus all major browsers. See above.
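Now that the proposal has shipped (with FinalizationGroup renamed to FinalizationRegistry), basic usage looks roughly like this; note that neither deref() returning undefined nor the finalizer running is guaranteed to happen promptly:

// Minimal usage of the shipped API (modern browsers, recent Node versions).
let target = { name: 'big object' };

const ref = new WeakRef(target);
const registry = new FinalizationRegistry(function (heldValue) {
  console.log('finalized:', heldValue);   // runs some time after the target is collected
});
registry.register(target, 'big object');

console.log(ref.deref().name);            // 'big object' while target is still alive

target = null;                            // drop the only strong reference
// Some time after a future garbage collection, ref.deref() may return
// undefined and the finalizer may fire - neither is prompt or guaranteed.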
Just for reference; JavaScript doesn't have it, but ActionScript 3 (which is also ECMAScript) does. Check out the constructor parameter for Dictionary.
Finally they are here. Not yet implemented in browsers, but soon to be.
https://v8.dev/features/weak-references
True weak references, no, not yet (but browser makers are looking at the subject). But here is an idea on how to simulate weak references.
You could build a cache which you drive your objects through. When an object is stored, the cache keeps a prediction of how much memory the object will take up. For some items, like stored images, this is straightforward to work out. For others it would be more difficult.
When you need an object, you then ask the cache for it. If the cache has the object, it is returned. If it is not there, then the item is generated, stored, and then returned.
The weak references are simulated by the cache removing items, when the total amount of predicted memory reaches a certain level. It will predict which items are least used based on how often they are retrieved, weighted by how long ago they were taken out. A 'calculation' cost could also be added, if the code that creates the item is passed into the cache as a closure. This would allow the cache to keep items which are very expensive to build or generate.
The deletion algorithm is key, because if you get this wrong then you could end up removing the most popular items. This would cause terrible performance.
As long as the cache is the only object with permanent references to the objects stored, then the above system should work pretty well as an alternative to true weak references.
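A toy sketch of that idea, with a crude size prediction and least-recently-used eviction standing in for the fancier weighting described above (all names invented):

// Toy cache that emulates weak references by evicting entries once the
// predicted total size exceeds a budget.
function createCache(maxBytes) {
  const entries = new Map();                              // key -> { value, size, lastUsed }
  let total = 0;
  const sizeOf = value => JSON.stringify(value).length;   // crude size prediction

  function evictIfNeeded() {
    while (total > maxBytes && entries.size > 1) {
      let oldestKey = null, oldestTime = Infinity;
      for (const [key, e] of entries) {
        if (e.lastUsed < oldestTime) { oldestTime = e.lastUsed; oldestKey = key; }
      }
      total -= entries.get(oldestKey).size;
      entries.delete(oldestKey);                          // the "weakly held" entry disappears
    }
  }

  return {
    get(key, generate) {
      let entry = entries.get(key);
      if (!entry) {                                       // evicted or never stored: rebuild
        const value = generate();
        entry = { value, size: sizeOf(value), lastUsed: 0 };
        entries.set(key, entry);
        total += entry.size;
      }
      entry.lastUsed = Date.now();
      evictIfNeeded();
      return entry.value;
    }
  };
}

const cache = createCache(10000);
const img = cache.get('avatar:42', () => ({ url: '/img/42.png', pixels: '...' }));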
Using a caching mechanism to emulate a weak reference, as JL235 suggested above, is reasonable. If weak references would exist natively, you would observe a behavior like this:
this.val = {};
this.ref = new WeakReference(this.val);
...
this.ref.get(); // always returns val
...
this.val = null; // no more references
...
this.ref.get(); // may still return val, depending on already gc'd or not
Whereas with a cache you would observe:
this.val = {};
this.key = cache.put(this.val);
...
cache.get(this.key); // returns val, until evicted by other cache puts
...
this.val = null; // no more references
...
cache.get(this.key); // returns val, until evicted by other cache puts
As the holder of a weak reference, you should not make any assumptions about when it still refers to a value; in that respect this is no different from using a cache.
the proposal and some details https://github.com/tc39/proposal-weakrefs
Typescript copy/paste version
export class IterableWeakMap<T extends Object, V> {
weakMap = new WeakMap();
refSet = new Set<WeakRef<T>>();
finalizationGroup = new FinalizationRegistry(IterableWeakMap.cleanup);
// Once a key has been garbage-collected, drop its stale WeakRef from the iteration set.
static cleanup({ set, ref }: { set: Set<WeakRef<Object>>; ref: WeakRef<Object> }) {
set.delete(ref);
}
constructor(iterable?: Iterable<[T, V]>) {
if (!iterable) return;
for (const [key, value] of iterable) {
this.set(key, value);
}
}
set(key: T, value: V) {
const ref = new WeakRef<T>(key);
this.weakMap.set(key, { value, ref });
this.refSet.add(ref);
this.finalizationGroup.register(key, { set: this.refSet, ref }, ref);
}
get(key: T) {
const entry = this.weakMap.get(key);
return entry && entry.value;
}
delete(key: T) {
const entry = this.weakMap.get(key);
if (!entry) {
return false;
}
this.weakMap.delete(key);
this.refSet.delete(entry.ref);
this.finalizationGroup.unregister(entry.ref);
return true;
}
*[Symbol.iterator]() {
for (const ref of this.refSet) {
const key = ref.deref();
if (!key) continue;
const { value } = this.weakMap.get(key);
yield [key, value];
}
}
entries() {
return this[Symbol.iterator]();
}
*keys() {
for (const [key] of this) {
yield key;
}
}
*values() {
for (const [, value] of this) {
yield value;
}
}
}
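A quick usage sketch in plain JavaScript (types stripped):

// Usage sketch: iteration only yields entries whose keys are still alive.
const sessions = new IterableWeakMap();
let user = { id: 1 };

sessions.set(user, 'session-abc');
for (const [key, value] of sessions) {
  console.log(key.id, value);   // 1 'session-abc'
}

user = null; // once GC runs and the cleanup callback fires, the entry
             // silently disappears from future iteration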
ECMAScript 6 (ES Harmony) has a WeakMap object. Browser support amongst modern browsers is pretty good (the last 3 versions of Firefox, Chrome and even an upcoming IE version support it).
http://www.jibbering.com/faq/faq_notes/closures.html
ECMAScript uses automatic garbage collection. The specification does not define the details, leaving that to the implementers to sort out, and some implementations are known to give a very low priority to their garbage collection operations. But the general idea is that if an object becomes un-referable (by having no remaining references to it left accessible to executing code) it becomes available for garbage collection and will at some future point be destroyed and any resources it is consuming freed and returned to the system for re-use.
This would normally be the case upon exiting an execution context. The scope chain structure, the Activation/Variable object and any objects created within the execution context, including function objects, would no longer be accessible and so would become available for garbage collection.
In other words, there are no weak references as such, only references that eventually stop being reachable and are then collected.

Timing issues considerations when using WeakMap from EcmaScript

What is the proper usage of WeakMap in JavaScript? What kind of timing issues may occur when I use it? In particular, I am wondering what would happen in the following situation:
var wm1 = new WeakMap()
var o1 = {},
o2 = function(){},
o3 = window;
// in other method:
wm1.set(o1, 37);
wm1.set(o2, "azerty");
if (wm1.has(o2)) {
//Garbage collection happen here, objects from wm1 may no longer exists
console.log(wm1.get(o2)) // what will happen here? just undefined? null?
}
how GC will affect WeakMaps?
Update: my bad, I missed the fact that you can't have strings as keys in a WeakMap; my question does not make sense once I take that fact into account.
WeakMaps are explicitly designed to not exhibit any observable garbage collection behaviour. There will be absolutely zero issues.
In your specific situation, as long as you hold a reference to the object or to the function (through the live variables o1 and o2 that are still on the stack), you will be able to find them in the WeakMap or WeakSet. As soon as you don't hold a reference to them any more, and nobody else does, they become eligible for garbage collection (just as usual), and given that, nobody will be able to try to look them up in the collection.
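To make that concrete, here is a small sketch of the guarantee, reusing the shapes from the question:

// While o2 is reachable through a live variable, the entry is guaranteed to
// be observable - the GC is never allowed to drop entries for reachable keys.
var wm1 = new WeakMap();
var o1 = {},
    o2 = function () {};

wm1.set(o1, 37);
wm1.set(o2, "azerty");

if (wm1.has(o2)) {
  console.log(wm1.get(o2)); // always "azerty" here, never undefined or null
}

o1 = o2 = null; // only now can the entries be collected - and since we no
                // longer hold the keys, we cannot ask the WeakMap about them anyway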

How does Bluebird's util.toFastProperties function make an object's properties "fast"?

In Bluebird's util.js file, it has the following function:
function toFastProperties(obj) {
/*jshint -W027*/
function f() {}
f.prototype = obj;
ASSERT("%HasFastProperties", true, obj);
return f;
eval(obj);
}
For some reason, there's a statement after the return statement, and I'm not sure why it's there.
As well, it seems that it is deliberate, as the author had silenced the JSHint warning about this:
Unreachable 'eval' after 'return'. (W027)
What exactly does this function do? Does util.toFastProperties really make an object's properties "faster"?
I've searched through Bluebird's GitHub repository for any comments in the source code or an explanation in their list of issues, but I couldn't find any.
2017 update: First, for readers coming today - here is a version that works with Node 7 (4+):
function enforceFastProperties(o) {
function Sub() {}
Sub.prototype = o;
var receiver = new Sub(); // create an instance
function ic() { return typeof receiver.foo; } // perform access
ic();
ic();
return o;
eval("o" + o); // ensure no dead code elimination
}
Sans one or two small optimizations - all the below is still valid.
Let's first discuss what it does and why that's faster and then why it works.
What it does
The V8 engine uses two object representations:
Dictionary mode - in which objects are stored as key-value hash maps.
Fast mode - in which objects are stored like structs, so there is no computation involved in property access.
Here is a simple demo that demonstrates the speed difference. Here we use the delete statement to force the objects into slow dictionary mode.
The engine tries to use fast mode whenever possible and generally whenever a lot of property access is performed - however sometimes it gets thrown into dictionary mode. Being in dictionary mode has a big performance penalty so generally it is desirable to put objects in fast mode.
This hack is intended to force the object into fast mode from dictionary mode.
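For readers who want to see the mode switch themselves, here is a rough sketch of such a demo (run with node --allow-natives-syntax; exact behaviour varies between V8 versions, and on recent versions a single delete no longer always forces dictionary mode):

// Rough sketch (run with: node --allow-natives-syntax demo.js).
function toFast(obj) {
  function f() {}
  f.prototype = obj;                   // making obj a prototype nudges V8 back to fast mode
  return obj;
}

var o = { a: 1, b: 2 };
console.log(%HasFastProperties(o));    // true
delete o.a;                            // the classic way into dictionary ("slow") mode
console.log(%HasFastProperties(o));    // false on the V8 versions this answer targets
toFast(o);
console.log(%HasFastProperties(o));    // true again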
Bluebird's Petka himself talks about it here.
These slides (wayback machine) by Vyacheslav Egorov also mentions it.
The question Pros and cons of dictionary mode (https://stackoverflow.com/questions/23455678/pros-and-cons-of-dictionary-mode) and its accepted answer are also related.
This slightly outdated article is still a fairly good read that can give you a good idea on how objects are stored in v8.
Why it's faster
In JavaScript prototypes typically store functions shared among many instances and rarely change a lot dynamically. For this reason it is very desirable to have them in fast mode to avoid the extra penalty every time a function is called.
For this - v8 will gladly put objects that are the .prototype property of functions in fast mode since they will be shared by every object created by invoking that function as a constructor. This is generally a clever and desirable optimization.
How it works
Let's first go through the code and figure what each line does:
function toFastProperties(obj) {
/*jshint -W027*/ // suppress the "unreachable code" error
function f() {} // declare a new function
f.prototype = obj; // assign obj as its prototype to trigger the optimization
// assert the optimization passes to prevent the code from breaking in the
// future in case this optimization breaks:
ASSERT("%HasFastProperties", true, obj); // requires the "native syntax" flag
return f; // return it
eval(obj); // prevent the function from being optimized through dead code
// elimination or further optimizations. This code is never
// reached but even using eval in unreachable code causes v8
// to not optimize functions.
}
We don't have to find the code ourselves to assert that v8 does this optimization, we can instead read the v8 unit tests:
// Adding this many properties makes it slow.
assertFalse(%HasFastProperties(proto));
DoProtoMagic(proto, set__proto__);
// Making it a prototype makes it fast again.
assertTrue(%HasFastProperties(proto));
Reading and running this test shows us that this optimization indeed works in v8. However - it would be nice to see how.
If we check objects.cc we can find the following function (L9925):
void JSObject::OptimizeAsPrototype(Handle<JSObject> object) {
if (object->IsGlobalObject()) return;
// Make sure prototypes are fast objects and their maps have the bit set
// so they remain fast.
if (!object->HasFastProperties()) {
MigrateSlowToFast(object, 0);
}
}
Now, JSObject::MigrateSlowToFast just explicitly takes the Dictionary and converts it into a fast V8 object. It's a worthwhile read and an interesting insight into v8 object internals - but it's not the subject here. I still warmly recommend that you read it here as it's a good way to learn about v8 objects.
If we check out SetPrototype in objects.cc, we can see that it is called in line 12231:
if (value->IsJSObject()) {
JSObject::OptimizeAsPrototype(Handle<JSObject>::cast(value));
}
Which in turn is called by FunctionSetPrototype, which is what we get with .prototype =.
Doing __proto__ = or .setPrototypeOf would have also worked, but those are ES6 features and Bluebird runs on all browsers since Netscape 7, so they're out of the question here. For example, if we check .setPrototypeOf we can see:
// ES6 section 19.1.2.19.
function ObjectSetPrototypeOf(obj, proto) {
CHECK_OBJECT_COERCIBLE(obj, "Object.setPrototypeOf");
if (proto !== null && !IS_SPEC_OBJECT(proto)) {
throw MakeTypeError("proto_object_or_null", [proto]);
}
if (IS_SPEC_OBJECT(obj)) {
%SetPrototype(obj, proto); // MAKE IT FAST
}
return obj;
}
Which directly is on Object:
InstallFunctions($Object, DONT_ENUM, $Array(
...
"setPrototypeOf", ObjectSetPrototypeOf,
...
));
So - we have walked the path from the code Petka wrote to the bare metal. This was nice.
Disclaimer:
Remember this is all implementation detail. People like Petka are optimization freaks. Always remember that premature optimization is the root of all evil 97% of the time. Bluebird does something very basic very often so it gains a lot from these performance hacks - being as fast as callbacks isn't easy. You rarely have to do something like this in code that doesn't power a library.
V8 developer here. The accepted answer is a great explanation, I just wanted to highlight one thing: the so-called "fast" and "slow" property modes are unfortunate misnomers, they each have their pros and cons. Here is a (slightly simplified) overview of the performance of various operations:
operation                                  struct-like properties   dictionary properties
adding a property to an object             --                       +
deleting a property                        ---                      +
reading/writing a property, first time     -                        +
reading/writing, cached, monomorphic       +++                      +
reading/writing, cached, few shapes        ++                       +
reading/writing, cached, many shapes       --                       +
colloquial name                            "fast"                   "slow"
So as you can see, dictionary properties are actually faster for most of the lines in this table, because they don't care what you do, they just handle everything with solid (though not record-breaking) performance. Struct-like properties are blazing fast for one particular situation (reading/writing the values of existing properties, where every individual place in the code only sees very few distinct object shapes), but the price they pay for that is that all other operations, in particular those that add or remove properties, become much slower.
It just so happens that the special case where struct-like properties have their big advantage (+++) is particularly frequent and really important for many apps' performance, which is why they acquired the "fast" moniker. But it's important to realize that when you delete properties and V8 switches the affected objects to dictionary mode, then it isn't being dumb or trying to be annoying: rather it attempts to give you the best possible performance for what you're doing. We have landed patches in the past that have achieved significant performance improvements by making more objects go to dictionary ("slow") mode sooner when appropriate.
Now, it can happen that your objects would generally benefit from struct-like properties, but something your code does causes V8 to transition them to dictionary properties, and you'd like to undo that; Bluebird had such a case. Still, the name toFastProperties is a bit misleading in its simplicity; a more accurate (though unwieldy) name would be spendTimeOptimizingThisObjectAssumingItsPropertiesWontChange, which would indicate that the operation itself is costly, and it only makes sense in certain limited cases. If someone took away the conclusion "oh, this is great, so I can happily delete properties now, and just call toFastProperties afterwards every time", then that would be a major misunderstanding and cause pretty bad performance degradation.
If you stick with a few simple rules of thumb, you'll never have a reason to even try to force any internal object representation changes:
Use constructors, and initialize all properties in the constructor. (This helps not only your engine, but also understandability and maintainability of your code. Consider that TypeScript doesn't quite force this but strongly encourages it, because it helps engineering productivity.)
Use classes or prototypes to install methods, don't just slap them onto each object instance. (Again, this is a common best practice for many reasons, one of them being that it's faster.)
Avoid delete. When properties come and go, prefer using a Map over the ES5-era "object-as-map" pattern. When an object can toggle into and out of a certain state, prefer boolean (or equivalent) properties (e.g. o.has_state = true; o.has_state = false;) over adding and deleting an indicator property. (See the sketch after this list.)
When it comes to performance, measure, measure, measure. Before you start sinking time into performance improvements, profile your app to see where the hotspots are. When you implement a change that you hope will make things faster, verify with your real app (or something extremely close to it; not just a 10-line microbenchmark!) that it actually helps.
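A small sketch of what following those rules looks like in practice (all names invented for illustration):

// Rules 1 & 2: initialize every property in the constructor, put methods on the prototype.
class Player {
  constructor(name) {
    this.name = name;
    this.score = 0;
    this.connected = false;     // boolean toggle instead of adding/deleting a property later
  }
  addPoints(n) { this.score += n; }
}

// Rule 3: when keys genuinely come and go, use a Map instead of delete on an object.
const sessions = new Map();
const p = new Player('Ada');
sessions.set('session-1', p);
sessions.delete('session-1');   // cheap, and doesn't degrade any object's representation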
Lastly, if your team lead tells you "I've heard that there are 'fast' and 'slow' properties, please make sure that all of ours are 'fast'", then point them at this post :-)
Reality from 2021 (NodeJS version 12+).
It seems a huge optimization has been made: objects with deleted fields and sparse arrays don't become slow. Or am I missing something?
// run in Node with enabled flag
// node --allow-natives-syntax script.js
function Point(x, y) {
this.x = x;
this.y = y;
}
var obj1 = new Point(1, 2);
var obj2 = new Point(3, 4);
delete obj2.y;
var arr = [1,2,3]
arr[100] = 100
console.log('obj1 has fast properties:', %HasFastProperties(obj1));
console.log('obj2 has fast properties:', %HasFastProperties(obj2));
console.log('arr has fast properties:', %HasFastProperties(arr));
all three show true:
obj1 has fast properties: true
obj2 has fast properties: true
arr has fast properties: true
// run in Node with enabled flag
// node --allow-natives-syntax script.js
function Point(x, y) {
this.x = x;
this.y = y;
}
var obj2 = new Point(3, 4);
console.log('obj has fast properties:', %HasFastProperties(obj2)) // true
delete obj2.y;
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); //true
var obj = {x : 1, y : 2};
console.log('obj has fast properties:', %HasFastProperties(obj)) //true
delete obj.x;
console.log('obj has fast properties:', %HasFastProperties(obj)); // false
So objects created by a constructor function and plain object literals behave differently here.

What makes my.class.js so fast? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
I've been looking at the source code of my.class.js to find out what makes it so fast on Firefox. Here's the snippet of code used to create a class:
my.Class = function () {
var len = arguments.length;
var body = arguments[len - 1];
var SuperClass = len > 1 ? arguments[0] : null;
var hasImplementClasses = len > 2;
var Class, SuperClassEmpty;
if (body.constructor === Object) {
Class = function () {};
} else {
Class = body.constructor;
delete body.constructor;
}
if (SuperClass) {
SuperClassEmpty = function() {};
SuperClassEmpty.prototype = SuperClass.prototype;
Class.prototype = new SuperClassEmpty();
Class.prototype.constructor = Class;
Class.Super = SuperClass;
extend(Class, SuperClass, false);
}
if (hasImplementClasses)
for (var i = 1; i < len - 1; i++)
extend(Class.prototype, arguments[i].prototype, false);
extendClass(Class, body);
return Class;
};
The extend function is simply used to copy the properties of the second object onto the first (optionally overriding existing properties):
var extend = function (obj, extension, override) {
var prop;
if (override === false) {
for (prop in extension)
if (!(prop in obj))
obj[prop] = extension[prop];
} else {
for (prop in extension)
obj[prop] = extension[prop];
if (extension.toString !== Object.prototype.toString)
obj.toString = extension.toString;
}
};
The extendClass function copies all the static properties onto the class, as well as all the public properties onto the prototype of the class:
var extendClass = my.extendClass = function (Class, extension, override) {
if (extension.STATIC) {
extend(Class, extension.STATIC, override);
delete extension.STATIC;
}
extend(Class.prototype, extension, override);
};
This is all pretty straightforward. When you create a class, it simply returns the constructor function you provide it.
What beats my understanding, however, is how creating an instance of this constructor executes faster than creating an instance of the same constructor written in Vapor.js.
This is what I'm trying to understand:
How do constructors of libraries like my.class.js create so many instances so quickly on Firefox? The constructors of the libraries are all very similar. Shouldn't the execution time also be similar?
Why does the way the class is created affect the execution speed of instantiation? Aren't definition and instantiation separate processes?
Where is my.class.js gaining this speed boost from? I don't see any part of the constructor code which should make it execute any faster. In fact traversing a long prototype chain like MyFrenchGuy.Super.prototype.setAddress.call should slow it down significantly.
Is the constructor function being JIT compiled? If so then why aren't the constructor functions of other libraries also being JIT compiled?
I don't mean to offend anyone, but this sort of thing really isn't worth the attention, IMHO. Almost any speed-difference between browsers is down to the JS engine. The V8 engine is very good at memory management, for example; especially when you compare it to IE's JScript engines of old.
Consider the following:
var closure = (function()
{
var closureVar = 'foo',
someVar = 'bar',
returnObject = {publicProp: 'foobar'};
returnObject.getClosureVar = function()
{
return closureVar;
};
return returnObject;
}());
Last time I checked, Chrome actually GC'ed someVar, because it wasn't being referenced by the return value of the IIFE (referenced by closure), whereas both FF and Opera kept the entire function scope in memory.
In this snippet, it doesn't really matter, but for libs that are written using the module-pattern (AFAIK, that's pretty much all of them) that consist of thousands of lines of code, it can make a difference.
Anyway, modern JS-engines are more than just "dumb" parse-and-execute things. As you said: there's JIT compilation going on, but there's also a lot of trickery involved to optimize your code as much as possible. It could very well be that the snippet you posted is written in a way that FF's engine just loves.
It's also quite important to remember that there is some sort of speed battle going on between Chrome and FF about who has the fastest engine. Last time I checked, Mozilla's SpiderMonkey engine was said to outperform Google's V8 on some benchmarks; whether that still holds true today, I can't say... Since then, both Google and Mozilla have been working on their engines...
Bottom line: speed differences between various browsers exist - nobody can deny that, but a single point of difference is insignificant: you'll never write a script that does just one thing over and over again. It's the overall performance that matters.
You have to keep in mind that JS is a tricky bugger to benchmark, too: just open your console, write some recursive function, and run it 100 times in FF and Chrome. Compare the time each recursion takes, and the overall run. Then wait a couple of hours and try again... sometimes FF might come out on top, whereas other times Chrome might be faster still. I've tried it with this function:
var bench = (function()
{
var mark = {start: [new Date()],
end: [undefined]},
i = 0,
rec = function(n)
{
return +(n === 1) || rec(n%2 ? n*3+1 : n/2);
//^^ Unmaintainable, but fun code ^^\\
};
while(i++ < 100)
{//new date at start, call recursive function, new date at end of recursion
mark.start[i] = new Date();
rec(1000);
mark.end[i] = new Date();
}
mark.end[0] = new Date();//after 100 rec calls, first element of start array vs first of end array
return mark;
}());
But now, to get back to your initial question(s):
First off: the snippet you provided doesn't quite compare to, say, jQuery's $.extend method: there's no real cloning going on, let alone deep cloning. It doesn't check for circular references at all, which most other libs I've looked into do. Checking for circular references does slow the entire process down, but it can come in handy from time to time (example 1 below). Part of the performance difference could be explained by the fact that this code simply does less, so it needs less time.
Secondly: Declaring a constructor (classes don't exist in JS) and creating an instance are, indeed, two different things (though declaring a constructor is itself creating an instance of an object, a Function instance to be exact). The way you write your constructor can make a huge difference, as shown in example 2 below. Again, this is a generalization and might not apply to certain use cases on certain engines: V8, for example, tends to create a single function object for all instances, even if that function is part of the constructor - or so I'm told.
Thirdly: Traversing a long prototype chain, as you mention, is not as unusual as you might think; far from it, actually. You're constantly traversing chains of two or three prototypes, as shown in example 3. This shouldn't slow you down, as it's just inherent to the way JS resolves function calls and expressions.
Lastly: It's probably being JIT-compiled, but saying that other libs aren't JIT-compiled just doesn't stack up. They might, then again, they might not. As I said before: different engines perform better at some tasks then other... it might be the case that FF JIT-compiles this code, and other engines don't.
The main reason I can see why other libs wouldn't be JIT-compiled are: checking for circular references, deep cloning capabilities, dependencies (ie extend method is used all over the place, for various reasons).
example 1:
var shallowCloneCircular = function(obj)
{//clone object, check for circular references
function F(){};
var clone, prop;
F.prototype = obj;
clone = new F();
for (prop in obj)
{//only copy properties, inherent to instance, rely on prototype-chain for all others
if (obj.hasOwnProperty(prop))
{//the ternary deals with circular references
clone[prop] = obj[prop] === obj ? clone : obj[prop];//if property is reference to self, make clone reference clone, not the original object!
}
}
return clone;
};
This function clones an object's first level, all objects that are being referenced by a property of the original object, will still be shared. A simple fix would be to simply call the function above recursively, but then you'll have to deal with the nasty business of circular references at all levels:
var circulars = {foo: bar};
circulars.circ1 = circulars;//simple circular reference, we can deal with this
circulars.mess = {gotcha: circulars};//circulars.mess.gotcha ==> circular reference, too
circulars.messier = {messiest: circulars.mess};//oh dear, this is hell
Of course, this isn't the most common of situations, but if you want to write your code defensively, you have to acknowledge the fact that many people write mad code all the time...
Example 2:
function CleanConstructor()
{};
CleanConstructor.prototype.method1 = function()
{
//do stuff...
};
var foo = new CleanConstructor(),
bar = new CleanConstructor();
console.log(foo === bar);//false, we have two separate instances
console.log(foo.method1 === bar.method1);//true: the function-object, referenced by method1 has only been created once.
//as opposed to:
function MessyConstructor()
{
this.method1 = function()
{//do stuff
};
}
var foo = new MessyConstructor(),
bar = new MessyConstructor();
console.log(foo === bar);//false, as before
console.log(foo.method1 === bar.method1);//false! for each instance, a new function object is constructed, too: bad performance!
In theory, declaring the first constructor is slower than the messy way: the function object, referenced by method1 is created before a single instance has been created. The second example doesn't create a method1, except for when the constructor is called. But the downsides are huge: forget the new keyword in the first example, and all you get is a return value of undefined. The second constructor creates a global function object when you omit the new keyword, and of course creates new function objects for each call. You have a constructor (and a prototype) that is, in fact, idling... Which brings us to example 3
example 3:
var foo = [];//create an array - empty
console.log(foo[123]);//logs undefined.
Ok, so what happens behind the scenes: foo references an object, an instance of Array, which in turn inherits from the Object prototype (just try Object.getPrototypeOf(Array.prototype)). It stands to reason, therefore, that an Array instance works in pretty much the same way as any object, so:
foo[123] ===> JS checks the instance for property "123" (the index is coerced to a string)
  || --> property not found on the instance, check its prototype (Array.prototype)
  ===========> Array.prototype["123"] could not be found, check its prototype
  ||
  ==========> Object.prototype["123"]: not found, check its prototype
  ||
  =======> prototype is null, return undefined
In other words, a chain like you describe isn't too far-fetched or uncommon. It's how JS works, so expecting it to slow things down is like expecting your brain to fry because you're thinking: yes, you can get worn out by thinking too much, but just know when to take a break. The same goes for prototype chains: they're great, just know that they are a tad slower, yes...
I'm not entirely sure, but I do know that when programming, it is good practice to make the code as small as possible without sacrificing functionality. I like to call it minimalist code.
This can be a good reason to obfuscate code. Obfuscation shrinks the file by using shorter method and variable names, which makes it harder to reverse-engineer, faster to download, and can even yield a small performance boost. Google's JavaScript code is heavily obfuscated, and that contributes to its speed.
So in JavaScript, bigger isn't always better. When I find a way I can shrink my code, I implement it immediately, because I know it will benefit performance, even if by the smallest amount.
For example, using the var keyword in a function where the variable isn't needed outside the function helps garbage collection, which provides a very small speed boost versus keeping the variable in memory.
With a library like this, which produces "millions of operations per second" (Blaise's words), small performance boosts can add up to a noticeable/measurable difference.
So it is possible that my.class.js is "minimalist coded" or optimized in some manner. It could even be the var keywords.
I hope this helped somewhat. If it didn't help, then I wish you luck in getting a good answer.
