var SomeObj = function() {
    this.i = 0;
};

setTimeout(function() {
    new SomeObj; // I mean this object
}, 0);
At what point is the SomeObj object garbage collected?
It is eligible for garbage collection as soon as it is no longer used.
That means immediately after the constructor call in your case.
How timely this actually happens is an implementation detail. If you run into GC issues, you need to dig into your specific JavaScript engine.
An object that is not referenced from anywhere doesn't "exist" at all from the view of your program. How long it still resides somewhere in memory depends on the garbage collection characteristics of your interpreter, and when/whether it feels the need to collect it.
In your specific case, the object does become eligible for garbage collection right after it has been created, since the reference that the expression yields is not used (e.g. in an assignment). In fact, the object might not get created at all in the first place: an optimising compiler could easily remove the whole call altogether, since it has no side effects and its result is never used.
Related
The WeakSet is supposed to store elements by weak reference. That is, if an object is not referenced by anything else, it should be cleaned from the WeakSet.
I have written the following test:
var weakset = new WeakSet(),
numbers = [1, 2, 3];
weakset.add(numbers);
weakset.add({name: "Charlie"});
console.log(weakset);
numbers = undefined;
console.log(weakset);
Even though my [1, 2, 3] array is not referenced by anything, it's not being removed from the WeakSet. The console prints:
WeakSet {[1, 2, 3], Object {name: "Charlie"}}
WeakSet {[1, 2, 3], Object {name: "Charlie"}}
Why is that?
Plus, I have one more question. What is the point of adding objects to WeakSets directly, like this:
weakset.add({name: "Charlie"});
Are these Traceur glitches, or am I missing something?
And finally, what is the practical use of WeakSet if we cannot even iterate through it or get its current size?
it's not being removed from the WeakSet. Why is that?
Most likely because the garbage collector has not yet run. However, you say you are using Traceur, so it just might be that they're not properly supported. I wonder how the console can show the contents of a WeakSet anyway.
What is the point of adding objects to WeakSets directly?
There is absolutely no point in adding object literals to WeakSets.
What is the practical use of WeakSet if we cannot even iterate through it nor get the current size?
All you can get is one bit of information: Is the object (or generically, value) contained in the set?
This can be useful in situations where you want to "tag" objects without actually mutating them (setting a property on them). Lots of algorithms contain some sort of "if x was already seen" condition (a JSON.stringify cycle detection might be a good example), and when you work with user-provided values the use of a Set/WeakSet would be advisable. The advantage of a WeakSet here is that its contents can be garbage-collected while your algorithm is still running, so it helps to reduce memory consumption (or even prevents leaks) when you are dealing with lots of data that is lazily (possibly even asynchronously) produced.
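For example, here is a minimal sketch of that "was x already seen" pattern; the countObjects function and the traversal are made up for illustration, the point being that the visited objects remain collectible while the WeakSet is still alive:

function countObjects(root) {
    const seen = new WeakSet();
    let count = 0;

    (function walk(value) {
        if (typeof value !== "object" || value === null) return;
        if (seen.has(value)) return; // a cycle or a shared reference: skip it
        seen.add(value);
        count++;
        for (const key of Object.keys(value)) {
            walk(value[key]);
        }
    })(root);

    return count;
}

var a = { name: "a" };
a.self = a;                   // a cycle that would otherwise recurse forever
console.log(countObjects(a)); // 1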
This is a really hard question. To be completely honest I had no idea in the context of JavaScript so I asked in esdiscuss and got a convincing answer from Domenic.
WeakSets are useful for security and validation, for example when you want to be able to isolate a piece of JavaScript. They allow you to tag an object to indicate that it belongs to a special set of objects.
Let's say I have a class ApiRequest:
class ApiRequest {
    constructor() {
        // bring object to a consistent state, use platform code you have no direct access to
    }
    makeRequest() {
        // do work
    }
}
Now, I'm writing a JavaScript platform. My platform allows you to run JavaScript to make calls, and to make those calls you need an ApiRequest. I only want you to make ApiRequests with the objects I give you, so you can't bypass any constraints I have in place.
However, at the moment nothing is stopping you from doing:
ApiRequest.prototype.makeRequest.call(null, args); // make request as function
Object.create(ApiRequest.prototype).makeRequest(); // no initialization
function Foo(){}; Foo.prototype = ApiRequest.prototype; new Foo().makeRequest(); // no super
And so on. Note that you can't keep a normal list or array of ApiRequest objects, since that would prevent them from being garbage collected. And short of a closure, anything you put on the object can be reached with public methods like Object.getOwnPropertyNames or Object.getOwnPropertySymbols. So you one-up me and do:
const requests = new WeakSet();

class ApiRequest {
    constructor() {
        requests.add(this);
    }
    makeRequest() {
        if (!requests.has(this)) throw new Error("Invalid access");
        // do work
    }
}
Now, no matter what I do - I must hold a valid ApiRequest object to call the makeRequest method on it. This is impossible without a WeakMap/WeakSet.
So in short - WeakMaps are useful for writing platforms in JavaScript. Normally this sort of validation is done on the C++ side, but adding these features makes it possible to move that kind of code into JavaScript itself.
(Of course, everything a WeakSet does a WeakMap that maps values to true can also do, but that's true for any map/set construct)
(Like Bergi's answer suggests, there is never a reason to add an object literal directly to a WeakMap or a WeakSet)
By definition, WeakSet has only three key functionalities:
Weakly link an object into the set
Remove a link to an object from the set
Check if an object has already been linked to the set
Sound familiar?
In some applications, a developer may need a quick way to iterate through a series of data that contains lots and lots of redundancy, picking out only the items that have not been processed before (unique ones). A WeakSet could help you. See the example below:
var processedBag = new WeakSet();

var nextObject = getNext();
while (nextObject !== null) {
    // Have we already processed a similar object?
    if (!processedBag.has(nextObject)) {
        // If not, process it and remember it
        process(nextObject);
        processedBag.add(nextObject);
    }
    nextObject = getNext();
}
One of the best data structures for the application above is a Bloom filter, which works very well for massive data sizes. However, you can use a WeakSet for this purpose as well.
A "weak" set or map is useful when you need to keep an arbitrary collection of things but you don't want their presence in the collection from preventing those things from being garbage-collected if memory gets tight. (If garbage collection does occur, the "reaped" objects will silently disappear from the collection, so you can actually tell if they're gone.)
They are excellent, for example, for use as a look-aside cache: "have I already retrieved this record, recently?" Each time you retrieve something, put it into the map, knowing that the JavaScript garbage collector will be the one responsible for "trimming the list" for you, and that it will automatically do so in response to prevailing memory conditions (which you can't reasonably anticipate).
The only drawback is that these types are not "enumerable." You can't iterate over a list of entries – probably because this would likely "touch" those entries and so defeat the purpose. But, that's a small price to pay (and you could, if need be, "code around it").
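As a rough sketch of that look-aside cache idea, assuming a made-up getDetails/expensiveLookup pair (note the cache has to be keyed by an object, since WeakMap keys cannot be primitives):

const cache = new WeakMap();

function expensiveLookup(record) {
    // stand-in for a slow retrieval (network call, heavy computation, ...)
    return { id: record.id, fetchedAt: Date.now() };
}

function getDetails(record) {
    if (cache.has(record)) {
        return cache.get(record); // already retrieved recently
    }
    const details = expensiveLookup(record);
    cache.set(record, details);  // the GC may drop this entry once `record`
                                 // is unreachable everywhere else
    return details;
}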
WeakSet is a simplification of WeakMap for the case where your value is always going to be boolean true. It allows you to tag JavaScript objects so that you only do something with them once, or to maintain their state with respect to a certain process. In theory, since it doesn't need to hold a value, it should use a little less memory and perform slightly faster than WeakMap.
var [touch, untouch] = (() => {
    var seen = new WeakSet();
    return [
        value => seen.has(value) || (seen.add(value), !1),
        value => !seen.has(value) || (seen.delete(value), !1)
    ];
})();

function convert(object) {
    if (touch(object)) return;
    extend(object, yunoprototype); // Made up.
}

function unconvert(object) {
    if (untouch(object)) return;
    del_props(object, Object.keys(yunoprototype)); // Never do this IRL.
}
Your console was probably showing those contents because garbage collection had not taken place yet. Since the object hadn't been garbage collected, it was still shown as being in the WeakSet.
If you really want to see whether a WeakSet still has a reference to a certain object, then use the WeakSet.prototype.has() method. This method, as the name implies, returns a boolean indicating whether the object exists in the WeakSet.
Example:
var weakset = new WeakSet(),
numbers = [1, 2, 3];
weakset.add(numbers);
weakset.add({name: "Charlie"});
console.log(weakset.has(numbers));
numbers = undefined;
console.log(weakset.has(numbers));
Let me answer the first part, and try to avoid confusing you further.
The garbage collection of dereferenced objects is not observable! It would be a paradox, because you need an object reference to check if it exists in a map. But don't trust me on this, trust Kyle Simpson:
https://github.com/getify/You-Dont-Know-JS/blob/1st-ed/es6%20%26%20beyond/ch5.md#weakmaps
The problem with a lot of explanations I see here is that they re-reference a variable to another object, or assign it a primitive value, and then check if the WeakMap contains that object or value as a key. Of course it doesn't! It never had that object/value as a key!
So the final piece to this puzzle: why does inspecting the WeakMap in a console still show all those objects there, even after you've removed all of your references to those objects? Because the console itself keeps persistent references to those Objects, for the purpose of being able to list all the keys in the WeakMap, because that is something that the WeakMap itself cannot do.
While searching for use cases of WeakSet, I found these points:

"The WeakSet is weak, meaning references to objects in a WeakSet are held weakly. If no other references to an object stored in the WeakSet exist, those objects can be garbage collected."

They are black boxes: we only get any data out of a WeakSet if we have both the WeakSet and a value.

Use cases:

1 - To avoid bugs
2 - It can be very useful in general to avoid any object being visited/set up twice
Reference: https://esdiscuss.org/topic/actual-weakset-use-cases
3 - The contents of a WeakSet can be garbage collected.
4 - Possibility of lowering memory utilization.
Reference: https://www.geeksforgeeks.org/what-is-the-use-of-a-weakset-object-in-javascript/

Example on WeakSet: https://exploringjs.com/impatient-js/ch_weaksets.html
I advise you to learn more about the weak reference concept in JS: https://blog.logrocket.com/weakmap-weakset-understanding-javascript-weak-references/
In Kyle Simpson's new title, You don't know JS: ES6 and beyond, I find the following snippet:
WARNING Assigning an object or array as a constant means that value will not be able to be garbage collected until that constant’s lexical scope goes away, as the reference to the value can never be unset. That may be desirable, but be careful if it’s not your intent!
(Excerpt From: Simpson, Kyle. “You Don’t Know JS: ES6 & Beyond.” O'Reilly Media, Inc., 2015-06-02. iBooks.
This material may be protected by copyright.)
As far as I can see, he doesn't expand on this, and 10 minutes on Google turns up nothing. Is this true, and if so, what does "the reference to the value can never be unset" mean exactly? I have got into the habit of declaring variables that won't be changed as const; is this a bad habit in real, concrete performance/memory terms?
WARNING Assigning an object or array as a constant means that value
will not be able to be garbage collected until that constant’s lexical
scope goes away, as the reference to the value can never be unset.
That may be desirable, but be careful if it’s not your intent!
That note sounds a bit more of a warning than is necessary (perhaps even a bit silly) and tries to make some sort of special case out of this situation.
With a const variable declaration, you can't assign the variable something small like "" or null to clear its contents. That's really the only difference in regard to memory management. Automatic garbage collection is not affected at all by whether a variable is declared const or not.
So, if you would like to be able to change the contents of the variable in the future for any reason (including to manually remove a reference to something to allow something to be garbage collected sooner), then don't use const. This is the same as any other reason for using or not using const. If you want to be able to change what the variable contains at any time in the future (for any reason), then don't use const. This should be completely obvious to anyone who understand what const is for.
Calling out garbage collection as a special case for when not to use const just seems silly to me. If you want to be able to clear the contents of a variable, then that means you want to modify the variable, so duh, don't use const. Yes, manually making a large data structure (one that might be caught in a lasting scope/closure) eligible for garbage collection is one reason you might want to change the variable in the future. But it's just one of millions of reasons. So, I repeat one more time: if you ever want to change the contents of the variable for any reason in the future, then don't declare it as const.
The garbage collector itself doesn't treat a const variable or the contents it points to any different than a var or let variable. When it goes out of scope and is no longer reachable, its contents will be eligible for garbage collection.
const has a number of advantages. It allows the developer to state some intent that the contents this variable points to are not to be changed by code and may allow the runtime to make some optimizations because it knows the contents of the variable cannot be changed. And, it prevents rogue or accidental code from ever changing the contents of that variable. These are all good things when used in an appropriate case. In general, you SHOULD use const as much as practical.
I should add that even some const data can still be reduced in size, making the majority of its contents available for garbage collection. For example, if you had a really large 100,000 element array of objects (that you perhaps received from some external HTTP call) in a const array:
const bigData = [really large number of objects from some API call];
You can still massively reduce the size of that data by simply clearing the array, which potentially makes the large number of objects that were in the array eligible for garbage collection if nothing else has a reference to them:
bigData.length = 0;
Remember, that const prevents assignment to that variable name, but does not prevent mutating the contents that the variable points to.
You could do the same thing with other built-in collection types such as map.clear() or set.clear() or even any custom object/class that has methods for reducing its memory usage.
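A small sketch of the same point with a Map held in a const (the lookup name and its contents are made up): the binding can't be reassigned, but its contents can still be released.

const lookup = new Map();
lookup.set("key", { big: new Array(100000).fill(0) });

// lookup = new Map(); // TypeError: Assignment to constant variable.
lookup.clear();        // fine: the old entries are now eligible for GC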
That note in my book was referring to cases like this, where you'd like to be able to manually make a value GC'able earlier than the end of life of its parent scope:
var cool = (function(){
    var someCoolNumbers = [2,4,6,8,....1E7]; // a big array

    function printCoolNumber(idx) {
        console.log( someCoolNumbers[idx] );
    }

    function allDone() {
        someCoolNumbers = null;
    }

    return {
        printCoolNumber: printCoolNumber,
        allDone: allDone
    };
})();
cool.printCoolNumber( 10 ); // 22
cool.allDone();
The purpose of the allDone() function in this silly example is to point out that there are times when you can decide you are done with a large data structure (array, object), even though the surrounding scope/behavior may live on (via closure) indefinitely in the app. To allow the GC to pick up that array and reclaim its memory, you unset the reference with someCoolNumbers = null.
If you had declared const someCoolNumbers = [...]; then you would be unable to do so, so that memory would remain used until the parent scope (via the closure that the methods on cool have) goes away when cool is unset or itself GCd.
Update
To make absolutely clear, because there's a lot of confusion/argument in some comment threads here, this is my point:
const absolutely, positively, undeniably has an effect on GC -- specifically, the ability of a value to be GCd manually at an earlier time. If the value is referenced via a const declaration, you cannot unset that reference, which means you cannot get the value GCd earlier. The value will only be able to be GCd when the scope is torn down.
If you'd like to be able to manually make a value eligible for GC earlier, while the parent scope is still surviving, you'll have to be able to unset your reference to that value, and you cannot do that if you used a const.
Some seem to have believed that my claim was const prevents any GC ever. That was never my claim. Only that it prevented earlier manual GC.
No, there are no performance implications. This note refers to the practice (rarely needed) of helping the garbage collector by "unsetting" the variable:
{
    let x = makeHeavyObject();

    window.onclick = function() {
        // this *might* close over `x` even when it doesn't need it
    };

    x = null; // so we better clear it
}
This is obviously not possible to do if you had declared x as a const.
The lifetime of the variable (when it goes out of scope) is not affected by this. But if the closure does keep the scope alive, a constant will always hold the value it was initialised with, and prevent that from being garbage-collected as well, while a normal variable might no longer hold it.
The way garbage collectors (GC) work is that when something is referenced by nothing ("cannot be reached"), the GC can safely say that it isn't used anymore and reclaim the memory it occupies.
Being able to replace the value of a variable allows one to remove a reference to that value. However, unlike var, a const cannot be reassigned, so one can't stop the constant from referencing the value.
A constant, like a variable, can be reclaimed when the constant goes "out of scope", like when a function exits, and nothing inside it forms a closure.
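A trivial sketch of that last point: once the function below returns and nothing keeps its scope alive, the array held by the const becomes unreachable and collectible.

function compute() {
    const temp = new Array(100000).fill(0); // only reachable inside compute()
    return temp.length;
}

compute(); // once the call returns, nothing references that array,
           // so it is eligible for garbage collection despite the const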
Since I've dealt in the past with javascript's funky "object model", I assume there is no such thing as a destructor. My searches were mildly unsuccessful, so you guys are my last hope. How do you execute stuff upon instance destruction?
MDN is a nice resource for JS.
No, there is nothing like calling a function when an object ceases to exist.
FinalizationRegistry might be what you need. It is not a destructor, but it executes a function once the object is garbage collected. In any case, this is what I wish I had found when I first came here :)
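A minimal sketch of how that can look (note the callback runs at the engine's discretion some time after collection, and is not guaranteed to run at all before the page unloads):

const registry = new FinalizationRegistry(function (heldValue) {
    // Called some time after a registered object has been garbage collected.
    console.log(heldValue + " was collected");
});

let resource = { data: new Array(100000).fill(0) };
registry.register(resource, "resource");

resource = null; // once the GC actually reclaims the object, the callback may fire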
More recently, the MDN article on memory management is of more use in answering this question. The most important part:
As of 2012, all modern browsers ship a mark-and-sweep garbage-collector.

...

Cycles are no longer a problem

In the first example above, after the function call returns, the two objects are no longer referenced by any resource that is reachable from the global object. Consequently, they will be found unreachable by the garbage collector and have their allocated memory reclaimed.

Limitation: Releasing memory manually

There are times when it would be convenient to manually decide when and what memory is released. In order to release the memory of an object, it needs to be made explicitly unreachable.
So as far as cyclic references go, de[con]structors aren't really needed.
One cool trick I have thought of though, if you have cyclic references and you want easy manual control over deconstruction...
class Container {
    constructor() {
        this.thisRef = [ this ];
        this.containee = new Containee({ containerRef: this.thisRef });
    }
    // Note: deconstructor is not an actual JS thing/keyword.
    deconstructor() {
        // Have to delete `this.thisRef[0]` and not `this.thisRef`, in
        // order to ensure Containee's reference to Container is removed.
        delete this.thisRef[0];
    }
    doSomething() {
    }
}

class Containee {
    constructor({ containerRef }) {
        // Assumption here is, if the Container is destroyed, so will the Containee be
        // destroyed. No need to delete containerRef, no need for a
        // deconstructor function!
        this.containerRef = containerRef;
    }
    someFunc() {
        this.containerRef[0].doSomething();
    }
}
let c = new Container();
...
//No cyclic references!
c.deconstructor();
So here, instead of the Containee class storing a direct reference to the Container instance, it stores a reference to a size-1 array containing the Container reference, an array that the Container instance can then remove itself from. The array, with the reference, is managed by Container.
But again, this isn't really needed, since garbage collection in all modern browsers is mark-and-sweep and can handle cyclic references.
In other languages the destructor is handy for implementing the memento pattern. That's actually what led me to this topic. For example, in a click event it'd be nice to have a generic function that I can pass the event target to, which disables the target and then re-enables it when it falls out of scope. Consider a submit button that does something like this:
async function saveMyStuff(e) {
    const sleeper = new nap(e)
    let data = await fetch(...)
    // a bunch more code.
}

class nap {
    constructor(e) {
        this.button = e.currentTarget
        this.button.disabled = true
    }
    destructor() { this.button.disabled = false }
}
This kind of construct would give me a oneliner that handles enabling/disabling all of my buttons when I'm talking to the backend or doing any other processing. I don't have to worry about cleaning up if I return somewhere in the middle or anything like that.
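Lacking destructors, the closest thing today is probably an explicit try/finally; here is a sketch of the same idea without the nap helper (the fetch URL is made up):

async function saveMyStuff(e) {
    const button = e.currentTarget;
    button.disabled = true;
    try {
        const data = await fetch("/api/save", { method: "POST" }); // made-up endpoint
        // a bunch more code.
    } finally {
        button.disabled = false; // runs even on an early return or a thrown error
    }
}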
I have several instances where my Javascript code appears to be leaking memory but I'm not sure what I should be expecting from the garbage collector.
For example var = new Object() in an interval timer function running in Firefox seems to leak over time. There are some simple solutions, but I'm curious whether I should expect the garbage collector to handle everything or whether I'm responsible for helping it.
If I need to help the garbage collector what are the rules?
Most JavaScript (ECMAScript) engines reclaim memory based on whether an object is still reachable; older descriptions call this "reference counting", though as noted above modern browsers use mark-and-sweep. I'll leave you to google those terms.
In short, an object is freed for release when nothing is pointing to it, i.e. nothing is using it.
Two things may be throwing off your sense of how much memory is being used.
1) ECMAScript does not release the object the instant that the system is done with it. Garbage collection is run "as needed." This can vary widely.
2) Closures can hold onto a reference longer than you think. Accidental closures can hold things longer than you might expect.
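For example (a sketch; the element id and the array size are made up), an event handler that never touches a big local value may still keep it alive in some engines, because the handler closes over the enclosing scope:

function setup() {
    const bigData = new Array(1000000).fill(0); // large, but never used by the handler

    document.getElementById("btn").addEventListener("click", function () {
        // Depending on the engine, this closure may keep the whole scope of
        // setup() (including bigData) alive for as long as it stays registered.
        console.log("clicked");
    });
}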
Had to make this an answer rather than a comment for length:
OK -- first a clarification of terms:
If it is running on a timer, then it is not recursive. It is a common misunderstanding, however.
In recursive code, a function calls itself: the original function invocation remains on the stack until the whole thing finally unwinds and a value is returned to the original caller. When using a timeout, each iteration of the function runs in a separate execution context.
Recursive function example
function factorial(n) {
    if (n == 1) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}
This is /not/ recursive:
function annoy() {
    window.setTimeout(annoy, 1000);
    window.alert("This will annoy every second!");
}
Each iteration of 'annoy' is completely independent and stand alone. It just sets up the timer for another instance to be called. There is no pile of 'annoy' functions on the stack and you can't return anything to the caller.
Secondly: in the example that I gave you, the variable a does not go out of scope, but the old objects that a referred to have no active references, so they are free for release. What a variable points to can vary.
var a, b;
a = {};
b = a; // This object now has TWO references using it.
b = null; // The object now has one reference
a = null; // Object has no references and is free for release.
At this point, the best thing I can do is point you here:
http://www.ibm.com/developerworks/web/library/wa-sieve/
Do I have to destroy instances myself? ...if I don't assign them to a variable...will they automatically go away?
new ImageUploadView();
vs
var Iu = ImageUploadView();
If there is no reference to an object in javascript, the garbage collector will clean it up.
The way the garbage collector works is it looks for javascript objects for which nobody has a reference to. If nobody has a reference to it, then it can't be used again, so it can be deleted and the memory it occupied reclaimed. On the other hand, if any javascript entity still has a reference to the object, then it is still "in use" and cannot be deleted.
In your first code example:
new ImageUploadView();
unless the constructor of the object stores away the this pointer into some other variable or object, or creates some closure that causes references to the object to be held, there will be no reference to this new object and it will be cleaned up by the garbage collector.
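For example (a hypothetical sketch with a made-up allViews array), a constructor that stashes this somewhere reachable keeps the seemingly unreferenced instance alive:

const allViews = [];

function ImageUploadView() {
    allViews.push(this); // `this` is now reachable via allViews
}

new ImageUploadView(); // still referenced by allViews, so not eligible for GC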
In your second code example:
var Iu = ImageUploadView();
as long as the Iu variable exists and stays in scope, it will contain whatever the ImageUploadView() function returns. Note that the second example is just executing a function and storing its value; it is not necessarily creating anything. If ImageUploadView() just returns true, then that's all the Iu variable will contain.
The first method is fine. Assuming that the instance of ImageUploadView is appropriately cleaning up after itself, it will be collected by the garbage collector.
With large objects, it's not necessarily a good practice to assume that the browser's built-in garbage collector will clean up the moment something goes out of scope. You're better off clearing it yourself using "delete". For example:
delete MyImageUploadView;
Edit: delete only removes object properties, so if the reference isn't being held as a property, it may be preferable to set it to null instead.