I'm trying to understand the example given in the WeakSet documentation here:
// Execute a callback on everything stored inside an object
function execRecursively(fn, subject, _refs = null) {
  if (!_refs)
    _refs = new WeakSet();
  // Avoid infinite recursion
  if (_refs.has(subject))
    return;
  fn(subject);
  if ("object" === typeof subject) {
    _refs.add(subject);
    for (let key in subject)
      execRecursively(fn, subject[key], _refs);
  }
}

const foo = {
  foo: "Foo",
  bar: {
    bar: "Bar"
  }
};

foo.bar.baz = foo; // Circular reference!
execRecursively(obj => console.log(obj), foo);
In the doc it says:
The WeakSet is weak, meaning references to objects in a WeakSet are held weakly. If no other references to an object stored in the WeakSet exist, those objects can be garbage collected.
Object foo is defined outside of the execRecursively function, while the WeakSet is created inside of it. So there is a reference to the object held in the WeakSet from outside the scope of the function.
The doc continues with:
The number of objects or their traversal order is immaterial, so a WeakSet is more suitable (and performant) than a Set for tracking object references, especially if a very large number of objects is involved.
Now, my question is: how can this code be more performant than when a Set is used? Because even in the current example there is a reference to foo, which prevents the garbage collector from removing the object.
How can this code be more performant than when a Set is used?
Like the docs say, a WeakSet doesn't keep track of the number of objects or the order in which they were put in the collection, so there's a tiny bit less overhead.
In the current example there is a reference to foo, which prevents the garbage collector from removing the object.
Yes, however that is specific to your example. The weakness only gets interesting (and useful) when the objects are lazily generated while traversing the structure. See the following example:
function generate(n) {
  if (n <= 0) return foo;
  else return {
    value: "x".repeat(n),
    get next() { return generate(n - 1); },
  };
}

const foo = generate(100000);
let sum = 0;
// The callback is invoked on the string values as well as the objects,
// so guard against strings (which have no .value).
execRecursively(obj => { if (typeof obj === "object") sum += obj.value.length; }, foo);
console.log(sum);
If execRecursively used a Set, it would need to keep a hundred thousand objects containing very long strings in memory for the duration of the execution. With a WeakSet, the objects can already be garbage-collected while the traversal is still running.
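For contrast, here is a sketch (not from the docs) of what a Set-based variant looks like. The Set holds strong references, so every visited object stays pinned in memory until the whole traversal ends and the Set itself becomes unreachable:

function execRecursivelyWithSet(fn, subject, _refs = new Set()) {
  if (_refs.has(subject)) return;
  fn(subject);
  if (typeof subject === "object" && subject !== null) {
    _refs.add(subject); // strong reference: pins every visited object
    for (const key in subject)
      execRecursivelyWithSet(fn, subject[key], _refs);
  }
}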
I am sorry if something similar has been asked; the search didn't find it.
Let's say I have a function that creates a big memoized value in JS (in the example, an array of a million random numbers), and I call that function twice, so I have two big arrays.
const memoizedValue = () => {
  const val = [...Array(1000000)].map((_, i) => i + Math.random());
  const valAtIndex = i => val[i];
  return valAtIndex;
};
let valAtIndex1 = memoizedValue();
console.log(valAtIndex1(1000)) // 1000.9215271184725
console.log(valAtIndex1(1001)) // 1001.123987792151
let valAtIndex2 = memoizedValue();
console.log(valAtIndex2(1000)) // 1000.8830808003901
console.log(valAtIndex2(1001)) // 1001.3989636036797
Do I remove the array if I clear a reference to the instance? Does the garbage collector clear the memory for the array if I do this (assuming I don't have any other instance):
valAtIndex1 = (i) => i;
console.log(valAtIndex1(1000)) // 1000
console.log(valAtIndex1(1001)) // 1001
What if I assign the value null; does this clear the memory for the array?
valAtIndex1 = null;
If you release all references to the function you've put in valAtIndex1 (for instance, by doing valAtIndex1 = x where x can be literally anything as long as it's not the value valAtIndex1 already has in it — null, undefined, 42, ...), that releases that function's reference to the context where it was created, which in turn releases that context's reference to the big array. The garbage collector is free to reclaim that memory.
For example:
let valAtIndex = memoizedValue();
console.log(valAtIndex(0)); // 1.3753751200626736 or whatever
valAtIndex = null; // Releases the reference
In the comments you seemed particularly concerned about overwriting the variable's value with the result of another call to memoizedValue. That's absolutely fine:
let valAtIndex = memoizedValue(); // Creates array A and function A accessing it
console.log(valAtIndex(0)); // 1.3753751200626736 or whatever
valAtIndex = memoizedValue(); // Creates array B and function B accessing it,
                              // releasing the reference to function A
console.log(valAtIndex(0)); // 1.6239617464592875 or whatever
Side note: Your code to create the massive array will create an unnecessary temporary massive array (the one you're creating with the array literal), and possibly two of them (the one you're creating with Array(1000000) may well reserve memory for all one million array elements; it does in V8, for instance, even though the array is empty). Instead:
const val = Array.from({length: 1000000}, (_, i) => i + Math.random());
Assigning the variable to null, undefined, or "" will allow the array to be cleared by the garbage collector. JavaScript does not allow you to directly manage the garbage collector in this manner. Just assign one of the above values once you're done using the array, and it will be cleared automatically for you.
Consider a function that acts on one element of a complex object I'm passing around. I can write it as:
function foo(object) {
  bar(object.item);
}
or
function foo(item) {
  bar(item);
}
If I have the option, is there any performance benefit to passing just the single element of the object to a function vs passing the whole object and pulling out the pieces I need?
I.e., is it more efficient to call:
foo(object);
and pass along the entire object and let foo deal with it, or
foo(object.item);
which only passes the single item?
Update: It looks like the terminology I could not find until the comments arrived is whether JavaScript is pass-by-reference or pass-by-value.
Since object references are passed by value in JavaScript (which is what people usually mean when they loosely say objects are "passed by reference", with some important caveats), it should make no difference how big the object I'm passing is.
Interesting reading:
https://medium.freecodecamp.org/understanding-by-reference-vs-by-value-d49139beb1c4
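A small sketch of what those caveats amount to in practice (often called call-by-sharing): the reference itself is copied, so mutations through it are visible to the caller, but reassigning the parameter is not:

function mutate(o) { o.item = "changed"; }    // affects the caller's object
function reassign(o) { o = { item: "new" }; } // only rebinds the local copy

const obj = { item: "original" };
mutate(obj);
console.log(obj.item); // "changed"
reassign(obj);
console.log(obj.item); // still "changed"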
Object property access is a little more expensive than plain variable access. (Warning: the following snippet will block your browser for a bit, depending on your computer's specs.)
(() => {
  const obj = { foo: 'bar' };
  const t0 = performance.now();
  for (let i = 0; i < 1e9; i++) {
    obj.foo;
  }
  const t1 = performance.now();
  const { foo } = obj;
  for (let i = 0; i < 1e9; i++) {
    foo;
  }
  const t2 = performance.now();
  console.log('Object property access: ' + (t1 - t0));
  console.log('Variable access: ' + (t2 - t1));
})();
The difference is tiny, but it's there. When you have obj.prop, the interpreter first has to look up what obj refers to, and then it has to look up the prop property on obj to get to the value. So, it makes sense that it's a bit easier when the value you're looking for is already in a standalone variable - all that's necessary to get to it is for the interpreter to look up that variable.
But, for the example you mention, no matter what, you'll be doing one object lookup, and one plain value lookup:
foo(object); // 1 variable lookup

function foo(object) {
  bar(object.item); // 1 object property lookup
}

// vs

foo(object.item); // 1 object property lookup

function foo(item) {
  bar(item); // 1 variable lookup
}
It would be different if foo used the .item property more than once. For example, if foo had:
foo(object); // 1 variable lookup

function foo(object) {
  bar(object.item); // 1 object property lookup
  baz(object.item); // 1 object property lookup
}
that would require two object property lookups, which means it would be (very slightly) more efficient to pass the item alone.
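If foo must receive the whole object, a common middle ground (a sketch; bar and baz are the placeholder functions from the example above) is to cache the property once in a local variable:

function foo(object) {
  const item = object.item; // 1 object property lookup
  bar(item);                // 1 variable lookup
  baz(item);                // 1 variable lookup
}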
All this said, the difference really is minuscule. As you can see with the snippet, it requires one billion iterations (at least on my machine) to reliably see a difference, and even then, the improvement is only ~5-10% or so, at least for me on Chrome. It's not something worth worrying about in 99% of situations. Code readability is more important.
How shall I delete objects in Javascript to properly destroy them (call destructors if there are any?) and prevent memory leaks? I've seen two ways:
delete obj;
and
obj = null;
But even after reading this I have not understood what the difference really is and which one I should use.
Also, I guess there are not real destructors in Javascript, but in case of complex relationship of multiple objects is it really enough to rely on garbage collector to prevent memory leaks?
One major difference between the two is what is left behind after the operation. Note that delete only really makes sense on properties; deleting a declared variable is not allowed, and reading an implicit global after deleting it throws a ReferenceError rather than yielding undefined. Consider
const obj = { x: 42 };
obj.x = null;
console.log(obj.x);      // prints: null
console.log('x' in obj); // prints: true (the property still exists)
delete obj.x;
console.log(obj.x);      // prints: undefined
console.log('x' in obj); // prints: false (the property is gone)
Assigning null gives a new value to an existing property; the property still exists after the operation. When you delete a property, it is removed entirely.
Google has something to say about delete:
Prefer this.foo = null
Foo.prototype.dispose = function() {
  this.property_ = null;
};
Instead of:
Foo.prototype.dispose = function() {
  delete this.property_;
};
In modern JavaScript engines, changing the number of properties on an object is much slower than reassigning the values. The delete keyword should be avoided except when it is necessary to remove a property from an object's iterated list of keys, or to change the result of if (key in obj).
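A quick illustration of the observable difference the guide is pointing at (a minimal sketch, not from the style guide itself):

const obj = { prop: 1 };
obj.prop = null;
console.log('prop' in obj);    // true: still in the key list
console.log(Object.keys(obj)); // ["prop"]
delete obj.prop;
console.log('prop' in obj);    // false: removed entirely
console.log(Object.keys(obj)); // []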
Edit
Some performance test: http://jsperf.com/delete-vs-nullify
The article you have linked says that delete only deletes a reference, so performance-wise there is really no difference between the two. The only theoretical difference is that the garbage collector can be more efficient because it has been explicitly told what to do, but unless you have incredibly large, complex objects with many references in hard-to-reach locations it shouldn't be an issue.
The other answer on the question explains an actual use case that highlights a difference, i.e. removing a property from an object. Accessing it would then give you undefined rather than null.
Here's an example of the latter: http://jsfiddle.net/YPSqM/
function cat() { this.name = 'mittens'; };
var myCat = new cat();
alert(myCat.name); // 'mittens'
myCat.name = null;
alert(myCat.name === null); // 'true' as it is null but not undefined
delete myCat.name;
alert(myCat.name === undefined); // 'true' as it is undefined (but not null)
I'm making a class that will be recreated many times, and in order to save memory I need to thoroughly delete it. Basically I need to access its containing variable if possible.
Here's the example:
function example() {
  this.id = 0;
  this.action = function() { alert('tost'); };
  this.close = function() { delete this; };
}

var foo = new example();
My question is:
How can I get access to the foo variable from within the example function so I can remove it?
window.foo will access that global variable.
this.close=function(){ delete window.foo; }
However, I remember there is something fishy with global variables, delete and window, so you might want to do otherwise, and simply use window.foo = null; for example.
If you want to access a variable defined in another function, you'll want to read the answers to this SO question.
Since what you want is to allow the garbage collector to release that object, you need to ensure that there are no references left to the object. This can be quite tricky (i.e. impossible) because the code manipulating the object can make multiple references to it, through global and local variables, and attributes.
You could prevent direct references to the object by creating a proxy to access it. Unfortunately, JavaScript didn't support dynamic getters and setters (also called catch-alls) very well at the time (on some browsers you might achieve it, though; see this SO question), so you can't easily redirect all field and method (which are just fields anyway) accesses to the underlying object, especially if the underlying object has many fields added to it and removed from it dynamically (i.e. this.anewfield = anewvalue).
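(Worth noting: engines released after this answer do support exactly this via the ES2015 Proxy object. A minimal sketch, separate from the answer's own approach below:)

const target = { anewfield: "anewvalue" };
const catchAll = new Proxy(target, {
  get(obj, prop) { console.log("get", prop); return obj[prop]; },
  set(obj, prop, value) { console.log("set", prop); obj[prop] = value; return true; }
});
catchAll.x = 1;          // logs: set x
console.log(catchAll.x); // logs: get x, then prints 1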
Here is a simple proxy (code on jsfiddle.net):
function heavyobject(destroyself, param1, param2) {
  this.id = 0;
  this.action = function() { alert('tost ' + param1 + "," + param2); };
  this.close = function() { destroyself(); };
}

function proxy(param1, param2) {
  // Note the `var`s: without them these would be implicit globals,
  // shared by every proxy, defeating the whole point.
  var object = null;
  // destroyer overwrites `object`, the only reference to
  // the heavyobject, with a null value.
  var destroyer = function() { object = null; };
  object = new heavyobject(destroyer, param1, param2);
  return function(fieldname, setvalue) {
    if (object != null) {
      if (arguments.length == 1)
        return object[fieldname];
      else
        object[fieldname] = setvalue;
    }
  };
}
var foo = proxy('a', 'b');
alert(foo("action")); // get field action
foo("afield", "avalue"); // set field afield to value avalue.
foo("action")(); // call field action
foo("close")(); // call field close
alert(foo("action")); // get field action (should be 'undefined').
It works by returning a function that, when called with a single argument, gets a field on the wrapped object, and when called with two arguments sets a field. The key is making sure that the only reference to the heavyobject is the object local variable inside the proxy function.
The code in heavyobject must never leak this (never return it, never return a function holding a reference to var that = this, never store it into a field of another variable), otherwise some external references may be created that would point to the heavyobject, preventing its deletion.
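For instance (hypothetical code, not part of the proxy above), each of these lines in a constructor would create an escaping reference that keeps the object alive:

function leakyheavyobject(destroyself) {
  var that = this;
  this.getSelf = function() { return that; }; // callers can now keep `this` alive
  window.lastCreated = this;                  // stored globally: escapes immediately
}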
If heavyobject's constructor calls destroyself() from within the constructor (or from a function called by the constructor), it won't have any effect.
Another simpler proxy, that will give you an empty object on which you can add fields, read fields, and call methods. I'm pretty sure that with this one, no external reference can escape.
Code (also on jsfiddle.net):
function uniquelyReferencedObject() {
  // Again, the `var`s are essential to keep these references local.
  var object = {};
  var f = function(field, value) {
    if (object != null) {
      if (arguments.length == 0)
        object = null;
      else if (arguments.length == 1)
        return object[field];
      else
        object[field] = value;
    }
  };
  f.destroy = function() { f(); };
  f.getField = function(field) { return f(field); };
  f.setField = function(field, value) { f(field, value); };
  return f;
}
// Using function calls
o = uniquelyReferencedObject();
o("afield", "avalue");
alert(o("afield")); // "avalue"
o(); // destroy
alert(o("afield")); // undefined
// Using destroy, getField, setField
other = uniquelyReferencedObject();
other.setField("afield", "avalue");
alert(other.getField("afield")); // "avalue"
other.destroy();
alert(other.getField("afield")); // undefined
The truth is that you cannot delete objects in JavaScript.
When you use the delete operator, it accepts only a property of some object.
So, when you use delete, in general you must pass it something like obj.p. If you pass just a variable name, that actually means "a property of the global object", and delete p is the same as delete window.p. I am not sure what happens internally on delete this, but as a result the browser just skips it.
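A short illustration of that (non-strict mode; in strict mode an unqualified delete p is a SyntaxError):

var p = 1;             // var-declared: a non-configurable binding
q = 2;                 // implicit global: a configurable property of window
console.log(delete p); // false: cannot be deleted
console.log(delete q); // true: same as `delete window.q`
console.log(typeof p); // "number"
console.log(typeof q); // "undefined"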
Now, what do we actually delete with delete? We delete a reference to an object. That means the object itself is still somewhere in memory. To eliminate it, you must delete all references to that concrete object, everywhere: from other objects, from closures, from event handlers, from linked data, all of them. But the object itself doesn't have information about all the references to it, so there is no way to delete an object from within the object itself.
Look at this code:
var obj = <our object>;

var someAnother = {
  ...
  myObjRef: obj
  ...
};

var someAnotherAnother = {
  ...
  secondRef: obj
  ...
};
To eliminate obj from memory you must delete someAnother.myObjRef and someAnotherAnother.secondRef. You can do that only from a part of the program which knows about all of them.
And how do we delete something at all if it can have any number of references everywhere? There are some ways to solve this problem:
Have only one point in the program from which the object is referenced; in fact, there will be only one reference in the whole program, and when we delete it, the object will be killed by the garbage collector. This is the "proxy" way described above. It has its disadvantages: there is no support from the language itself (yet), and you have to change the nice obj.x = 1 into obj.val('x', 1). Also, and this is less obvious, you are really changing all references to obj into references to the proxy, and the proxy will always remain in memory instead of the object. Depending on the object's size, the number of objects, and the implementation, this can be a win or not, or can even make things worse; for example, if your object is about the size of the proxy itself, you gain nothing.
Add, at every place where you use the object, code that deletes the reference to it. This is clearer and simpler to use, because if you call obj.close() at some place, you already know everything you need to delete it. Just kill the reference instead of (or inside) obj.close(); in general, point the reference at something else:
var x = new obj(); // now our object is created and referenced
x = null; // now our object **obj** is still in memory,
          // but there are no references to it,
          // and shortly afterwards it is killed by the GC...

// you can also delete properties
delete x.y; // where x is an object and x.y === obj
But with this approach you must remember that references can hide in very hard-to-spot places. For example:
function func() {
  var x = new obj(); // our heavy object
  ...
  return function result() {
    ...some cool stuff..
  };
}
Here the reference is stored in the closure of the result function, and obj will remain in memory as long as you have a reference to result somewhere.
It is hard to imagine an object that is heavy by itself; in the most realistic scenario, the weight is some data inside it. In that case you can add a cleanup function to the object which clears that data. Say you have a giant buffer (an array of numbers, for example) as a property of the object; if you want to free memory, you can just clear the buffer while keeping the object itself in memory as a couple of dozen bytes. And remember to put your functions on the prototype to keep instances small.
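A sketch of that cleanup pattern (all names here are illustrative, not from the answer):

function Holder() {
  this.buffer = new Array(1000000).fill(0); // the heavy part
}
// Methods live on the prototype so each instance stays small.
Holder.prototype.getValue = function(i) {
  return this.buffer ? this.buffer[i] : undefined;
};
Holder.prototype.cleanup = function() {
  this.buffer = null; // drop the only reference to the big array; GC can reclaim it
};

var h = new Holder();
h.cleanup(); // the instance survives as a tiny shell; the buffer becomes collectable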
Here is a link to some very detailed information on the JavaScript delete operator.
http://perfectionkills.com/understanding-delete/
What's the fastest alternative to
JSON.parse(JSON.stringify(x))
There must be a nicer/built-in way to perform a deep clone on objects/arrays, but I haven't found it yet.
Any ideas?
No, there is no built-in way to deep clone objects.
And deep cloning is a difficult and edge-case-ridden thing to deal with.
Let's assume that a method deepClone(a) should return a "deep clone" of a.
Now a "deep clone" is an object with the same [[Prototype]] and having all the own properties cloned over.
For each property that is cloned over, if that property has own properties that can be cloned over, do so recursively.
Of course we're keeping the metadata attached to properties, like [[Writable]] and [[Enumerable]], intact. And we will just return the thing if it's not an object.
var deepClone = function (obj) {
  try {
    var names = Object.getOwnPropertyNames(obj);
  } catch (e) {
    if (e.message.indexOf("not an object") > -1) {
      // is not an object
      return obj;
    }
  }
  var proto = Object.getPrototypeOf(obj);
  var clone = Object.create(proto);
  names.forEach(function (name) {
    var pd = Object.getOwnPropertyDescriptor(obj, name);
    if (pd.value) {
      pd.value = deepClone(pd.value);
    }
    Object.defineProperty(clone, name, pd);
  });
  return clone;
};
This will fail for a lot of edge cases.
Live Example
As you can see, you can't deep clone objects generally without breaking their special properties (like .length on an array). To fix that you have to treat Array separately, and then treat every special object separately (a sketch of the Array case follows below).
What do you expect to happen when you do deepClone(document.getElementById("foobar")) ?
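A sketch of the Array special case (assuming the deepClone above; nested arrays inside plain objects would still hit the generic path, so a full fix would thread this wrapper through the recursion):

var deepCloneWithArrays = function (obj) {
  if (Array.isArray(obj)) {
    // map preserves .length and the index properties naturally
    return obj.map(deepCloneWithArrays);
  }
  return deepClone(obj); // fall back to the generic version
};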
As an aside, shallow clones are easy.
Object.getOwnPropertyDescriptors = function (obj) {
  var ret = {};
  Object.getOwnPropertyNames(obj).forEach(function (name) {
    ret[name] = Object.getOwnPropertyDescriptor(obj, name);
  });
  return ret;
};

var shallowClone = function (obj) {
  return Object.create(
    Object.getPrototypeOf(obj),
    Object.getOwnPropertyDescriptors(obj)
  );
};
I was actually comparing it against angular.copy
You can run the JSperf test here:
https://jsperf.com/angular-copy-vs-json-parse-string
I'm comparing:
myCopy = angular.copy(MyObject);
vs
myCopy = JSON.parse(JSON.stringify(MyObject));
This is the fastest of all the tests I could run on all my computers.
The 2022 solution for this is to use structuredClone
See : https://developer.mozilla.org/en-US/docs/Web/API/structuredClone
structuredClone(x)
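Unlike the JSON round-trip, structuredClone handles cyclic references, Date, Map, Set, typed arrays, and more (functions and DOM nodes still throw). For example:

const original = { name: "foo", created: new Date() };
original.self = original; // cyclic reference
const copy = structuredClone(original);
console.log(copy.self === copy);           // true: the cycle is preserved
console.log(copy.created instanceof Date); // true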
Cyclic references are not really an issue. I mean, they are, but that's just a matter of proper record keeping. Anyway, a quick answer for this one. Check this:
https://github.com/greatfoundry/json-fu
In my mad scientist lab of crazy JavaScript hackery, I've been putting the basic implementation to use in serializing the entirety of the JavaScript context, including the entire DOM, from Chromium, sending it over a websocket to Node, and reserializing it successfully. The only problematic cyclic issue is navigator.mimeTypes and navigator.plugins referencing one another to infinity, but that's easily solved.
(function(mimeTypes, plugins){
  delete navigator.mimeTypes;
  delete navigator.plugins;
  var theENTIREwindowANDdom = jsonfu.serialize(window);
  WebsocketForStealingEverything.send(theENTIREwindowANDdom);
  navigator.mimeTypes = mimeTypes;
  navigator.plugins = plugins;
})(navigator.mimeTypes, navigator.plugins);
JSONFu uses the tactic of creating Sigils which represent more complex data types. Like a MoreSigil which say that the item is abbreviated and there's X levels deeper which can be requested. It's important to understand that if you're serializing EVERYTHING then it's obviously more complicated to revive it back to its original state. I've been experimenting with various things to see what's possible, what's reasonable, and ultimately what's ideal. For me the goal is a bit more auspicious than most needs in that I'm trying to get as close to merging two disparate and simultaneous javascript contexts into a reasonable approximation of a single context. Or to determine what the best compromise is in terms of exposing the desired capabilities while not causing performance issues. When you start looking to have revivers for functions then you cross the land from data serialization into remote procedure calling.
A neat hacky function I cooked up along the way classifies all the properties on an object you pass to it into specific categories. The purpose for creating it was to be able to pass a window object in Chrome and have it spit out the properties organized by what's required to serialize and then revive them in a remote context. Also to accomplish this without any sort of preset cheatsheet lists, like a completely dumb checker that makes the determinations by prodding the passed value with a stick. This was only designed and ever checked in Chrome and is very much not production code, but it's a cool specimen.
// categorizeEverything takes any object and will sort its properties into high-level categories
// based on its profile in terms of what it can do in JavaScript land. It accomplishes this task with a bafflingly
// small amount of actual code by being extraordinarily uncareful, forcing errors, and generally just
// throwing caution to the wind. But it does a really good job (in the one browser I made it for, Chrome,
// and mostly works in WebKit, and could work in Firefox with a modicum of effort)
//
// This will work on any object but it's primarily useful for sorting the shitstorm that
// is the WebKit global context into something sane.
function categorizeEverything(container){
  var types = {
    // DOMPrototypes are functions that get angry when you dare call them because IDL is dumb.
    // There's a few DOM protos that actually have useful constructors and there currently is no check.
    // They all end up under Class which isn't a bad place for them depending on your goals.
    // [Audio, Image, Option] are the only actual HTML DOM prototypes that sneak by.
    DOMPrototypes: {},
    // Plain object isn't callable, Object is its [[proto]]
    PlainObjects: {},
    // Classes have a constructor
    Classes: {},
    // Methods don't have a "prototype" property and their [[proto]] is named "Empty"
    Methods: {},
    // Natives also have "Empty" as their [[proto]]. This list has the big boys:
    // the various Error constructors, Object, Array, Function, Date, Number, String, etc.
    Natives: {},
    // Primitives are instances of String, Number, and Boolean plus bonus friends null, undefined, NaN, Infinity
    Primitives: {}
  };
  var str = ({}).toString;
  function __class__(obj){ return str.call(obj).slice(8, -1); }

  Object.getOwnPropertyNames(container).forEach(function(prop){
    var XX = container[prop],
        xClass = __class__(XX);
    // dumping the various references to window up front, and also undefineds for laziness
    if(xClass == "Undefined" || xClass == "global") return;

    // Easy way to rustle out primitives right off the bat,
    // forcing errors for fun and profit.
    try {
      Object.keys(XX);
    } catch(e) {
      if(e.type == "obj_ctor_property_non_object")
        return types.Primitives[prop] = XX;
    }

    // I'm making a LOT of flagrant assumptions here, but process of elimination is key.
    var isCtor = "prototype" in XX;
    var proto = Object.getPrototypeOf(XX);

    // All Natives also fit the Class category, but they have a special place in our heart.
    if(isCtor && proto.name == "Empty" ||
       XX.name == "ArrayBuffer" ||
       XX.name == "DataView" ||
       "BYTES_PER_ELEMENT" in XX) {
      return types.Natives[prop] = XX;
    }

    if(xClass == "Function"){
      try {
        // Calling every single function in the global context without a care in the world?
        // There's no way this can end badly.
        // TODO: do this nonsense in an iframe or something
        XX();
      } catch(e){
        // Magical functions which you can never call. That's useful.
        if(e.message == "Illegal constructor"){
          return types.DOMPrototypes[prop] = XX;
        }
      }
      // By process of elimination, only regular functions can still be hanging out
      if(!isCtor) {
        return types.Methods[prop] = XX;
      }
    }

    // Only left with full-fledged objects now. Invokability (constructor) splits this group in half.
    // JSON, Math, document, and other such stuff gets classified as plain objects,
    // but they all seem correct going by their actual profiles and functionality.
    return (isCtor ? types.Classes : types.PlainObjects)[prop] = XX;
  });

  return types;
}