Alternative to destructor? [duplicate] - javascript

I know ECMAScript 6 has constructors but is there such a thing as destructors for ECMAScript 6?
For example if I register some of my object's methods as event listeners in the constructor, I want to remove them when my object is deleted.
One solution is to have a convention of creating a destructor method for every class that needs this kind of behaviour and manually call it. This will remove the references to the event handlers, hence my object will truly be ready for garbage collection. Otherwise it'll stay in memory because of those methods.
But I was hoping ECMAScript 6 had something native that would be called right before the object is garbage collected.
If there is no such mechanism, what is a pattern/convention for such problems?

Is there such a thing as destructors for ECMAScript 6?
No. ECMAScript 6 does not specify any garbage collection semantics at all[1], so there is nothing like a "destructor" either.
If I register some of my object's methods as event listeners in the constructor, I want to remove them when my object is deleted
A destructor wouldn't even help you here. It's the event listeners themselves that still reference your object, so it would not be able to get garbage-collected before they are unregistered.
What you are actually looking for is a method of registering listeners without marking them as live root objects. (Ask your local eventsource manufacturer for such a feature).
1): Well, there is a beginning with the specification of WeakMap and WeakSet objects. However, true weak references are still in the pipeline [1][2].

I just came across this question in a search about destructors, and there was an unanswered part of your question in your comments, so I thought I would address that.
thank you guys. But what would be a good convention if ECMAScript
doesn't have destructors? Should I create a method called destructor
and call it manually when I'm done with the object? Any other idea?
If you want to tell your object that you are now done with it and it should specifically release any event listeners it has, then you can just create an ordinary method for doing that. You can call the method something like release() or deregister() or unhook() or anything of that ilk. The idea is that you're telling the object to disconnect itself from anything else it is hooked up to (deregister event listeners, clear external object references, etc...). You will have to call it manually at the appropriate time.
If, at the same time, you also make sure there are no other references to that object, then your object will become eligible for garbage collection at that point.
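For instance, here is a minimal sketch of that convention (the class, element, and event names are just for illustration, and it assumes a button element exists on the page):

class Widget {
  constructor(button) {
    this.button = button;
    // Keep a reference to the handler so it can be removed later.
    this.onClick = () => console.log('clicked');
    this.button.addEventListener('click', this.onClick);
  }

  // The manual "destructor": disconnect from everything we hooked into.
  release() {
    this.button.removeEventListener('click', this.onClick);
    this.button = null;
    this.onClick = null;
  }
}

const w = new Widget(document.querySelector('button'));
// ... later, when done with the object:
w.release();
// Once no other references to it remain, it can be garbage collected.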
ES6 does have WeakMap and WeakSet, which are ways of keeping track of a set of objects that are still alive without affecting when they can be garbage collected, but it does not provide any sort of notification when they are garbage collected. They just disappear from the WeakMap or WeakSet at some point (when they are GCed).
FYI, the issue with the type of destructor you ask for (and probably why there isn't much call for it) is that an item is not eligible for garbage collection while it has an open event handler registered on a live object. So even if there were such a destructor, it would never get called in your situation until you actually removed the event listeners. And, once you've removed the event listeners, there's no need for the destructor for this purpose.
I suppose there's a possible weakListener() that would not prevent garbage collection, but such a thing does not exist either.
FYI, here's another relevant question Why is the object destructor paradigm in garbage collected languages pervasively absent?. This discussion covers finalizer, destructor and disposer design patterns. I found it useful to see the distinction between the three.
Edit in 2020 - proposal for object finalizer
There is a Stage 3 ECMAScript proposal to add a user-defined finalizer function that runs after an object is garbage collected.
A canonical example of something that would benefit from a feature like this is an object that contains a handle to an open file. If the object is garbage collected (because no other code still has a reference to it), then this finalizer scheme allows one to at least put a message to the console that an external resource has just been leaked and code elsewhere should be fixed to prevent this leak.
If you read the proposal thoroughly, you will see that it's nothing like a full-blown destructor in a language like C++. This finalizer is called after the object has already been destroyed and you have to predetermine what part of the instance data needs to be passed to the finalizer for it to do its work. Further, this feature is not meant to be relied upon for normal operation, but rather as a debugging aid and as a backstop against certain types of bugs. You can read the full explanation for these limitations in the proposal.
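As a rough illustration, here is a hedged sketch of that debugging-aid usage with the FinalizationRegistry API the proposal defines (the file-handle wrapper is made up; the registry callback receives the held value passed at registration):

const registry = new FinalizationRegistry((handleId) => {
  // Called some time after a registered object was garbage collected.
  console.warn(`File handle ${handleId} leaked: close() was never called`);
});

class FileWrapper {
  constructor(handleId) {
    this.handleId = handleId; // stands in for a real OS file handle
    // Register this instance; pass itself as the unregister token.
    registry.register(this, handleId, this);
  }
  close() {
    // Resource released properly, so cancel the leak warning.
    registry.unregister(this);
  }
}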

You have to manually "destruct" objects in JS. Creating a destroy function is common in JS. In other languages this might be called free, release, dispose, close, etc. In my experience though it tends to be destroy, which will unhook internal references and events, and possibly propagate destroy calls to child objects as well.
WeakMaps are largely useless as they cannot be iterated, and that probably won't be available until ECMA 7, if at all. All a WeakMap lets you do is attach invisible properties to an object that are detached from the object itself, can only be looked up via the object reference, and don't disturb GC. This can be useful for caching, extending, and dealing with plurality, but it doesn't really help with memory management for observables and observers. WeakSet is a subset of WeakMap (like a WeakMap with a default value of boolean true).
There are various arguments on whether to use various implementations of weak references for this or destructors. Both have potential problems and destructors are more limited.
Destructors are actually potentially useless for observers/listeners as well, because typically the listener will hold references to the observer, either directly or indirectly. A destructor only really works in a proxy fashion without weak references. If your Observer is really just a proxy, taking something else's listeners and putting them on an observable, then it can do something there, but this sort of thing is rarely useful. Destructors are more for IO-related things or for doing things outside of the scope of containment (i.e., linking up two instances that it created).
The specific case that got me looking into this is: I have a class A instance that takes a class B instance in its constructor, then creates a class C instance which listens to B. I always keep the B instance around somewhere high above. A I sometimes throw away, create new ones, create many, etc. In this situation a destructor would actually work for me, but with a nasty side effect: if, in the parent, I passed the C instance around but removed all A references, then the C and B binding would be broken (C has the ground removed from beneath it).
In JS having no automatic solution is painful but I don't think it's easily solvable. Consider these classes (pseudo):
const { EventEmitter } = require('events');

function Filter(stream) {
  stream.on('data', (data) => {
    // Pretend chunks/multibyte are not a problem.
    this.emit('data', data.toString().replace('somenoise', ''));
  });
}
Object.setPrototypeOf(Filter.prototype, EventEmitter.prototype);

function View(df, stream) {
  df.on('data', (data) => {
    stream.write(data.toUpperCase()); // Shout.
  });
}
On a side note, it's hard to make things work without anonymous/unique functions which will be covered later.
In a normal case, instantiation would look like this (pseudo):
var df = new Filter(process.stdin),
    v1 = new View(df, process.stdout),
    v2 = new View(df, process.stderr);
To GC these normally you would set them to null, but that won't work because they've created a tree with stdin at the root. This is basically what event systems do. You give a parent to a child, the child adds itself to the parent, and then may or may not maintain a reference to the parent. A tree is a simple example, but in reality you may also find yourself with complex graphs, albeit rarely.
In this case, Filter adds a reference to itself to stdin in the form of an anonymous function which indirectly references Filter by scope. Scope references are something to be aware of, and they can be quite complex. A powerful GC can do some interesting things to carve away at items in scope variables, but that's another topic. What is critical to understand is that when you create an anonymous function and add it to an observable as a listener, the observable will maintain a reference to the function, and anything the function references in the scopes above it (that it was defined in) will also be maintained. The views do the same, but after the execution of their constructors the children do not maintain a reference to their parents.
If I set any or all of the vars declared above to null it isn't going to make a difference to anything (similarly when that "main" scope finishes). They will still be active and pipe data from stdin to stdout and stderr.
If I set them all to null it would be impossible to have them removed or GCed without clearing out the events on stdin or setting stdin to null (assuming it can be freed like this). You basically have a memory leak that way, with effectively orphaned objects, if the rest of the code needs stdin and has other important events on it that prohibit you from doing the aforementioned.
To get rid of df, v1 and v2 I need to call a destroy method on each of them. In terms of implementation this means that both the Filter and View constructors need to keep a reference to the anonymous listener function they create, as well as to the observable, and pass them to removeListener.
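Concretely, a sketch of that bookkeeping for the Filter above (reusing the EventEmitter setup from the earlier snippet):

function Filter(stream) {
  this.stream = stream;
  // Keep the listener reference so destroy() can detach it later.
  this.onData = (data) => {
    this.emit('data', data.toString().replace('somenoise', ''));
  };
  stream.on('data', this.onData);
}
Object.setPrototypeOf(Filter.prototype, EventEmitter.prototype);

Filter.prototype.destroy = function() {
  this.stream.removeListener('data', this.onData);
  this.stream = null;
  this.onData = null;
};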
On a side note, alternatively you can have an observable that returns an index to keep track of listeners, so that you can add prototyped functions, which at least to my understanding should be much better for performance and memory. You still have to keep track of the returned identifier though, and pass your object to ensure that the listener is bound to it when called.
A destroy function adds several pains. First is that I would have to call it and free the reference:
df.destroy();
v1.destroy();
v2.destroy();
df = v1 = v2 = null;
This is a minor annoyance, as it's a bit more code, but that is not the real problem. The real problem comes when I hand these references around to many objects: when exactly do you call destroy? You cannot simply hand these off to other objects. You'll end up with chains of destroys and manual implementation of tracking, either through program flow or some other means. You can't fire and forget.
An example of this kind of problem is if I decide that View will also call destroy on df when it is destroyed. If v2 is still around, destroying df will break it, so destroy cannot simply be relayed to df. Instead, when v1 takes df to use it, it would need to tell df it is being used, which would raise some counter or similar on df. df's destroy function would decrease that counter and only actually destroy when it reaches 0. This sort of thing adds a lot of complexity and a lot that can go wrong, the most obvious of which are destroying something while there is still a reference around somewhere that will be used, and circular references (at which point it's no longer a case of managing a counter but a map of referencing objects). When you find yourself implementing your own reference counters, memory management and so on in JS, something is probably deficient.
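A minimal sketch of that counter idea (all names illustrative), mostly to show how much manual machinery it drags in:

class SharedResource {
  constructor() {
    this.refs = 0;
  }
  retain() {
    this.refs++;
    return this;
  }
  destroy() {
    if (--this.refs > 0) return; // still in use somewhere else
    this.dispose(); // only the last destroy() actually unhooks everything
  }
  dispose() {
    // detach listeners, null out references, etc.
  }
}

// Every consumer must remember to retain() on acquire and destroy() on release:
const df = new SharedResource().retain(); // acquired by v1
df.retain();                              // acquired by v2
df.destroy();                             // v1 done: count drops to 1
df.destroy();                             // v2 done: count hits 0, dispose() runs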
If WeakSets were iterable, this could be used:
function Observable() {
  this.events = { open: new WeakSet(), close: new WeakSet() };
}
Observable.prototype.on = function(type, f) {
  this.events[type].add(f);
};
Observable.prototype.emit = function(type, ...args) {
  this.events[type].forEach(f => f(...args));
};
Observable.prototype.off = function(type, f) {
  this.events[type].delete(f);
};
In this case the owning class must also keep a token reference to f otherwise it will go poof.
If Observable were used instead of EventEmitter then memory management would be automatic in regards to the event listeners.
Instead of calling destroy on each object this would be enough to fully remove them:
df = v1 = v2 = null;
If you didn't set df to null it would still exist but v1 and v2 would automatically be unhooked.
There are two problems with this approach however.
Problem one is that it adds a new complexity. Sometimes people do not actually want this behaviour. I could create a very large chain of objects linked to each other by events rather than containment (references in constructor scopes or object properties), eventually forming a tree, and I would only have to pass around the root and worry about that. Freeing the root would conveniently free the entire thing. Both behaviours are useful depending on coding style, etc., and when creating reusable objects it's going to be hard to know what people want, what they have done, and what you have done, and a pain to work around what has been done. If I use Observable instead of EventEmitter then either df will need to reference v1 and v2, or I'll have to pass them all around if I want to transfer ownership of the reference to something else out of scope. A weak-reference-like thing would mitigate the problem a little by transferring control from the Observable to an observer, but would not solve it entirely (and it needs a check on every emit or on every event on itself). This problem could be fixed, I suppose, if the behaviour only applied to isolated graphs, but that would complicate the GC severely and would not apply to cases where there are references outside the graph that are in practice no-ops (they only consume CPU cycles and make no changes).
Problem two is that either it is unpredictable in certain cases, or it forces the JS engine to traverse the GC graph for those objects on demand, which can have a horrific performance impact (although if it is clever it can avoid doing it per member by doing it per WeakMap loop instead). The GC may never run if memory usage does not reach a certain threshold, and the object with its events won't be removed. If I set v1 to null it may still relay to stdout forever. Even if it does get GCed this will be arbitrary; it may continue to relay to stdout for any amount of time (1 line, 10 lines, 2.5 lines, etc.).
The reason WeakMap gets away with not caring about the GC while non-iterable is that, to access an entry, you have to have a reference to the key object anyway, so either it hasn't been GCed or it was never added to the map.
I am not sure what I think about this kind of thing. You're sort of breaking memory management to fix it with the iterable WeakMap approach. Problem two can exist for destructors as well.
All of this invokes several levels of hell so I would suggest to try to work around it with good program design, good practices, avoiding certain things, etc. It can be frustrating in JS however because of how flexible it is in certain aspects and because it is more naturally asynchronous and event based with heavy inversion of control.
There is one other solution that is fairly elegant but again still has some potentially serious hangups. If you have a class that extends an observable class, you can override the event functions: add your events to other observables only when events are added to yourself, and when all events are removed from you, remove your events from children. You can also make a class that extends your observable class to do this for you. Such a class could provide hooks for empty and non-empty, so in a sense you would be observing yourself. This approach isn't bad but also has hangups. There is a complexity increase as well as a performance decrease, and you'll have to keep a reference to the objects you observe. Critically, it also will not work for leaves, but at least the intermediates will self-destruct if you destroy the leaf. It's like chaining destroy but hidden behind calls that you already have to chain. A large performance problem with this, however, is that you may have to reinitialise internal data from the Observable every time your class becomes active. If this process takes a very long time then you might be in trouble.
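Here is a rough sketch of that self-observing idea using Node's EventEmitter (the newListener/removeListener meta-events are real EventEmitter features; the class itself is illustrative):

const { EventEmitter } = require('events');

// Stays hooked to its upstream source only while it has 'data' listeners.
class LazyFilter extends EventEmitter {
  constructor(source) {
    super();
    this.source = source;
    this.onData = (data) => this.emit('data', data);

    // 'newListener' fires before a listener is added...
    this.on('newListener', (event) => {
      if (event === 'data' && this.listenerCount('data') === 0) {
        this.source.on('data', this.onData); // first subscriber: hook up
      }
    });
    // ...and 'removeListener' fires after one is removed.
    this.on('removeListener', (event) => {
      if (event === 'data' && this.listenerCount('data') === 0) {
        this.source.removeListener('data', this.onData); // last one gone: unhook
      }
    });
  }
}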
If you could iterate WeakMap then you could perhaps combine things (switch to Weak when no events, Strong when events) but all that is really doing is putting the performance problem on someone else.
There are also immediate annoyances with an iterable WeakMap when it comes to behaviour. I mentioned briefly before that functions have scope references and that engines can carve away at them. If I instantiate a child whose constructor hooks the listener 'console.log(param)' to the parent and fails to persist the parent, then when I remove all references to the child it could be freed entirely, as the anonymous function added to the parent references nothing from within the child. This leaves the question of what to do about parent.weakMap.set(child, (param) => console.log(param)). To my knowledge the key is weak but not the value, so weakMap.set(object, object) is persistent. This is something I need to re-evaluate though. To me that looks like a memory leak if I dispose of all other object references, but I suspect in reality it is managed by treating it as a circular reference. Either the anonymous function maintains an implicit reference to objects from parent scopes for consistency, wasting a lot of memory, or you have behaviour that varies based on circumstances, which is hard to predict or manage. I think the former is actually impossible. In the latter case, if I have a method on a class that simply takes an object and adds console.log, it would be freed when I clear the references to the class, even if I returned the function and maintained a reference to it. To be fair, this particular scenario is rarely needed legitimately, but eventually someone will find an angle and will be asking for a HalfWeakMap which is iterable (freed when either key or value refs are released), but that is unpredictable as well (obj = null magically ending IO, f = null magically ending IO, both doable at incredible distances).

If there is no such mechanism, what is a pattern/convention for such problems?
The term 'cleanup' might be more appropriate, but I will use 'destructor' to match the OP.
Suppose you write some javascript entirely with 'function's and 'var's.
Then you can use the pattern of writing all the function's code within the framework of a try/catch/finally lattice. Within the finally block, perform the destruction code.
Instead of the C++ style of writing object classes with unspecified lifetimes, and then specifying the lifetime by arbitrary scopes and the implicit call to ~() at scope end (~() is the destructor in C++), in this JavaScript pattern the object is the function, the scope is exactly the function scope, and the destructor is the finally block.
If you are now thinking this pattern is inherently flawed because try/catch/finally doesn't encompass asynchronous execution, which is essential to JavaScript, then you are correct. Fortunately, since 2018 the asynchronous programming helper object Promise has had a finally function added to its prototype, alongside the existing then and catch prototype functions. That means that asynchronous scopes requiring destructors can be written with a Promise object, using finally as the destructor. Furthermore, you can use try/catch/finally in an async function calling Promises with or without await, but you must be aware that Promises called without await will execute asynchronously outside the scope, so you must handle the destructor code in a final then.
In the following code PromiseA and PromiseB are some legacy API level promises which don't have finally function arguments specified. PromiseC DOES have a finally argument defined.
async function afunc(a, b) {
  try {
    function resolveB(r) { ... }
    function catchB(e) { ... }
    function cleanupB() { ... }
    function resolveC(r) { ... }
    function catchC(e) { ... }
    function cleanupC() { ... }
    ...
    // PromiseA preceded by await, so it will finish before the finally block.
    // If no rush then it is safe to handle PromiseA cleanup in the finally block.
    var x = await PromiseA(a);
    // PromiseB, PromiseC not preceded by await - they will execute asynchronously
    // and so might finish after the finally block, so we must provide
    // explicit cleanup (if necessary)
    PromiseB(b).then(resolveB, catchB).then(cleanupB, cleanupB);
    PromiseC(c).then(resolveC, catchC, cleanupC);
  }
  catch (e) { ... }
  finally { /* scope destructor/cleanup code here */ }
}
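For comparison, here is a hedged sketch of the same shape using the standard Promise.prototype.finally (available since ES2018); afunc2 is just an illustrative name, and the resolve/catch/cleanup helpers are the stand-ins from above:

async function afunc2(a, b, c) {
  try {
    var x = await PromiseA(a);
    // .finally() runs its callback whether the promise resolves or rejects,
    // replacing both the double then(cleanupB, cleanupB) and the hypothetical
    // three-argument then used for PromiseC above.
    PromiseB(b).then(resolveB, catchB).finally(cleanupB);
    PromiseC(c).then(resolveC, catchC).finally(cleanupC);
  }
  catch (e) { /* ... */ }
  finally { /* scope destructor/cleanup code here */ }
}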
I am not advocating that every object in JavaScript be written as a function. Instead, consider the case where you have identified a scope which really 'wants' a destructor to be called at its end of life. Formulate that scope as a function object, using the pattern's finally block (or finally function in the case of an asynchronous scope) as the destructor. It is quite likely that formulating that function object obviates the need for a non-function class which would otherwise have been written - no extra code was required, and aligning scope and class might even be cleaner.
Note: As others have written, we should not confuse destructors and garbage collection. As it happens C++ destructors are often or mainly concerned with manual garbage collection, but not exclusively so. Javascript has no need for manual garbage collection, but asynchronous scope end-of-life is often a place for (de)registering event listeners, etc..

Here you go. The Subscribe/Publish object will unsubscribe a callback function automatically if it goes out of scope and gets garbage collected.
const createWeakPublisher = () => {
  const weakSet = new WeakSet();
  const subscriptions = new Set();
  return {
    subscribe(callback) {
      if (!weakSet.has(callback)) {
        weakSet.add(callback);
        subscriptions.add(new WeakRef(callback));
      }
      return callback;
    },
    publish() {
      for (const weakRef of subscriptions) {
        const callback = weakRef.deref();
        console.log(callback?.toString());
        if (callback) callback();
        else subscriptions.delete(weakRef);
      }
    },
  };
};
Although it might not happen immediately after the callback function goes out of scope, or it might not happen at all. See the WeakRef documentation for more details. But it works like a charm for my use case.
You might also want to check out the FinalizationRegistry API for a different approach.

"A destructor wouldn't even help you here. It's the event listeners
themselves that still reference your object, so it would not be able
to get garbage-collected before they are unregistered."
Not so. The purpose of a destructor is to allow the item that registered the listeners to unregister them. Once an object has no other references to it, it will be garbage collected.
For instance, in AngularJS, when a controller is destroyed, it can listen for a destroy event and respond to it. This isn't the same as having a destructor automatically called, but it's close, and gives us the opportunity to remove listeners that were set when the controller was initialized.
// Set event listeners, hanging onto the returned listener removal functions
function initialize() {
  $scope.listenerCleanup = [];
  $scope.listenerCleanup.push($scope.$on(EVENTS.DESTROY, instance.onDestroy));
  $scope.listenerCleanup.push($scope.$on(AUTH_SERVICE_RESPONSES.CREATE_USER.SUCCESS, instance.onCreateUserResponse));
  $scope.listenerCleanup.push($scope.$on(AUTH_SERVICE_RESPONSES.CREATE_USER.FAILURE, instance.onCreateUserResponse));
}

// Remove event listeners when the controller is destroyed
function onDestroy() {
  $scope.listenerCleanup.forEach(remove => remove());
}

JavaScript does not have destructors the way C++ does. Instead, alternative design patterns should be used to manage resources. Here are a couple of examples:
You can restrict users to using the instance for the duration of a callback, after which it'll automatically be cleaned up. (This pattern is similar to the beloved "with" statement in Python)
connectToDatabase(async db => {
  const resource = await db.doSomeRequest()
  await useResource(resource)
}) // The db connection is closed once the callback ends
When the above example is too restrictive, another alternative is to just create explicit cleanup functions.
const db = makeDatabaseConnection()
const resource = await db.doSomeRequest()
updatePageWithResource(resource)
pageChangeEvent.addListener(() => {
  db.destroy()
})

The other answers already explained in detail that there is no destructor. But your actual goal seems to be event related. You have an object which is connected to some event and you want this connection to go away automatically when the object is garbage collected. But this won't happen because the event subscription itself references the listener function. Well, UNLESS you use this nifty new WeakRef stuff.
Here is an example:
<!DOCTYPE html>
<html>
<body>
  <button onclick="subscribe()">Subscribe</button>
  <button id="emitter">Emit</button>
  <button onclick="free()">Free</button>
  <script>
    const emitter = document.getElementById("emitter");
    let listener = null;

    function addWeakEventListener(element, event, callback) {
      // Functions are objects, so the callback can be stored in a WeakRef
      // directly without keeping it alive.
      const weakRef = new WeakRef(callback);
      const listener = () => {
        const cb = weakRef.deref();
        if (cb == null) {
          console.log("Removing garbage collected event listener");
          element.removeEventListener(event, listener);
        } else {
          cb();
        }
      };
      element.addEventListener(event, listener);
    }

    function subscribe() {
      listener = () => console.log("Event fired!");
      addWeakEventListener(emitter, "click", listener);
      console.log("Listener created and subscribed to emitter");
    }

    function free() {
      listener = null;
      console.log("Reference cleared. Now force garbage collection in dev console or wait some time before clicking Emit again.");
    }
  </script>
</body>
</html>
(JSFiddle)
Clicking the Subscribe button creates a new listener function and registers it for the click event of the Emit button. So clicking the Emit button after that prints a message to the console. Now click the Free button, which simply sets the listener variable to null so the garbage collector can remove the listener. Wait some time or force garbage collection in the developer console, and then click the Emit button again. The wrapper listener function now sees that the actual listener (held through a WeakRef) is no longer there, and unsubscribes itself from the button.
WeakRefs are quite powerful but note that there is no guarantee if and when your stuff is garbage collected.

The answer to the question as-stated in the title is FinalizationRegistry, available since Firefox 79 (June 2020), Chrome 84 and derivatives (July 2020), Safari 14.1 (April 2021), and Node 14.6.0 (July 2020)… however, a native JS destructor is probably not the right solution for your use-case.
function create_eval_worker(f) {
  let src_worker_blob = new Blob([f.toString()], { type: 'application/javascript' });
  let src_worker_url = URL.createObjectURL(src_worker_blob);

  async function g() {
    let w = new Worker(src_worker_url);
    …
  }

  // Run URL.revokeObjectURL(src_worker_url) as a destructor of g
  let registry = new FinalizationRegistry(u => URL.revokeObjectURL(u));
  registry.register(g, src_worker_url);
  return g;
}
Caveat:
Avoid where possible
Correct use of FinalizationRegistry takes careful thought, and it's best avoided if possible. When, how, and whether garbage collection occurs is down to the implementation of any given JavaScript engine. Any behavior you observe in one engine may be different in another engine, in another version of the same engine, or even in a slightly different situation with the same version of the same engine.
…
Developers shouldn't rely on cleanup callbacks for essential program logic. Cleanup callbacks may be useful for reducing memory usage across the course of a program, but are unlikely to be useful otherwise.
A conforming JavaScript implementation, even one that does garbage collection, is not required to call cleanup callbacks. When and whether it does so is entirely down to the implementation of the JavaScript engine. When a registered object is reclaimed, any cleanup callbacks for it may be called then, or some time later, or not at all.
–Mozilla Developer Network

Related

Does ES6 const affect garbage collection?

In Kyle Simpson's new title, You don't know JS: ES6 and beyond, I find the following snippet:
WARNING Assigning an object or array as a constant means that value will not be able to be garbage collected until that constant’s lexical scope goes away, as the reference to the value can never be unset. That may be desirable, but be careful if it’s not your intent!
(Excerpt From: Simpson, Kyle. “You Don’t Know JS: ES6 & Beyond.” O'Reilly Media, Inc., 2015-06-02. iBooks.
This material may be protected by copyright.)
As far as I can see, he doesn't expand on this, and 10 minutes on Google turns up nothing. Is this true, and if so, what does "the reference to the value can never be unset" mean exactly? I have got into the habit of declaring variables that won't be changed as const, is this a bad habit in real concrete performance/memory terms?
WARNING Assigning an object or array as a constant means that value
will not be able to be garbage collected until that constant’s lexical
scope goes away, as the reference to the value can never be unset.
That may be desirable, but be careful if it’s not your intent!
That note sounds a bit more of a warning than is necessary (perhaps even a bit silly) and tries to make some sort of special case out of this situation.
With a const variable declaration, you can't assign something else to the variable (like "" or null) to clear its contents. That's really the only difference in regard to memory management. Automatic garbage collection is not affected at all by whether a variable is declared const or not.
So, if you would like to be able to change the contents of the variable in the future for any reason (including to manually remove a reference to something to allow it to be garbage collected sooner), then don't use const. This is the same as any other reason for using or not using const. If you want to be able to change what the variable contains at any time in the future (for any reason), then don't use const. This should be completely obvious to anyone who understands what const is for.
Calling out garbage collection as a special case for when not to use const just seems silly to me. If you want to be able to clear the contents of a variable, then that means you want to modify the variable so duh, don't use const. Yes, manually enabling garbage collection on a large data structure that might be caught in a lasting scope/closure is one reason that you might want to change the variable in the future. But, it's just one of millions of reasons. So, I repeat one more time. If you ever want to change the contents of the variable for any reason in the future, then don't declare it as const.
The garbage collector itself doesn't treat a const variable or the contents it points to any different than a var or let variable. When it goes out of scope and is no longer reachable, its contents will be eligible for garbage collection.
const has a number of advantages. It allows the developer to state some intent that the contents this variable points to are not to be changed by code and may allow the runtime to make some optimizations because it knows the contents of the variable cannot be changed. And, it prevents rogue or accidental code from ever changing the contents of that variable. These are all good things when used in an appropriate case. In general, you SHOULD use const as much as practical.
I should add that even some const data can still be reduced in size, making the majority of its contents available for garbage collection. For example, if you had a really large 100,000 element array of objects (that you perhaps received from some external http call) in a const array:
const bigData = [really large number of objects from some API call];
You can still massively reduce the size of that data by simply clearing the array which potentially makes the large number of objects that was in the array eligible for garbage collection if nothing else had a reference to them:
bigData.length = 0;
Remember, that const prevents assignment to that variable name, but does not prevent mutating the contents that the variable points to.
You could do the same thing with other built-in collection types such as map.clear() or set.clear() or even any custom object/class that has methods for reducing its memory usage.
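A quick illustration of that distinction:

const cache = new Map([['key', 'value']]);
cache.clear();        // fine: const does not prevent mutating the contents
// cache = new Map(); // TypeError: Assignment to constant variable.

const list = [1, 2, 3];
list.length = 0;      // fine: empties the array, its contents become collectable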
That note in my book was referring to cases like this, where you'd like to be able to manually make a value GC'able earlier than the end of life of its parent scope:
var cool = (function(){
  var someCoolNumbers = [2,4,6,8,....1E7]; // a big array
  function printCoolNumber(idx) {
    console.log( someCoolNumbers[idx] );
  }
  function allDone() {
    someCoolNumbers = null;
  }
  return {
    printCoolNumber: printCoolNumber,
    allDone: allDone
  };
})();
cool.printCoolNumber( 10 ); // 22
cool.allDone();
The purpose of the allDone() function in this silly example is to point out that there are times when you can decide you are done with a large data structure (array, object), even though the surrounding scope/behavior may live on (via closure) indefinitely in the app. To allow the GC to pick up that array and reclaim its memory, you unset the reference with someCoolNumbers = null.
If you had declared const someCoolNumbers = [...]; then you would be unable to do so, so that memory would remain used until the parent scope (via the closure that the methods on cool have) goes away when cool is unset or itself GCd.
Update
To make absolutely clear, because there's a lot of confusion/argument in some comment threads here, this is my point:
const absolutely, positively, undeniably has an effect on GC -- specifically, the ability of a value to be GCd manually at an earlier time. If the value is referenced via a const declaration, you cannot unset that reference, which means you cannot get the value GCd earlier. The value will only be able to be GCd when the scope is torn down.
If you'd like to be able to manually make a value eligible for GC earlier, while the parent scope is still surviving, you'll have to be able to unset your reference to that value, and you cannot do that if you used a const.
Some seem to have believed that my claim was const prevents any GC ever. That was never my claim. Only that it prevented earlier manual GC.
No, there are no performance implications. This note refers to the practice of helping the garbage collector (which is rarely needed) by "unsetting" the variable:
{
  let x = makeHeavyObject();
  window.onclick = function() {
    // this *might* close over `x` even when it doesn't need it
  };
  x = null; // so we better clear it
}
This is obviously not possible to do if you had declared x as a const.
The lifetime of the variable (when it goes out of scope) is not affected by this. But if the garbage collector screws up, a constant will always hold the value it was initialised with, preventing that value from being garbage-collected as well, while a normal variable might no longer hold it.
The way garbage collectors (GC) work is when something is referenced by nothing ("cannot be reached"), the GC can safely say that something isn't used anymore and reclaim the memory used by that something.
Being able to replace the value of a variable allows one to remove a reference to the value. However, unlike var, const cannot be reassigned a value. Thus, one can't remove that constant from referencing the value.
A constant, like a variable, can be reclaimed when the constant goes "out of scope", like when a function exits, and nothing inside it forms a closure.

AngularJS - Best practice: model properties on view or function calls?

For quite some time I've been wondering about this question: when working with AngularJS, should I use the model object's properties directly in the view, or can I use a function to get that property value?
I've been doing some minor home projects in Angular, and (especially when working with read-only directives or controllers) I tend to create scope functions to access and display scope objects and their property values in the views. But performance-wise, is this a good way to go?
This way seems easier for maintaining the view code, since, if for some reason the object is changed (due to a server implementation or any other particular reason), I only have to change the directive's JS code instead of the HTML.
Here's an example:
// this goes inside directive's link function
scope.getPropertyX = function() {
  return scope.object.subobject.propX;
}
in the view I could simply do
<span>{{ getPropertyX() }}</span>
instead of
<span>{{ object.subobject.propX }}</span>
which is harder to maintain, amidst the HTML clutter that sometimes it's involved.
Another case is using scope functions to test properties values for evaluations on a ng-if, instead of using directly that test expression:
scope.testCondition = function() {
  return scope.obj.subobj.propX === 1 && scope.obj.subobj.propY === 2 && ...;
}
So, are there any pros/cons to this approach? Could you provide me with some insight on this issue? It's been bothering me lately, wondering how a heavy app might behave when, for example, a directive gets really complex, and on top of that is used inside an ng-repeat that could generate hundreds or thousands of instances.
Thank you
I don't think creating functions for all of your properties is a good idea. Not only will there be more function calls made every digest cycle to see if the function's return value has changed, but it really seems less readable and maintainable to me. It could add a lot of unnecessary code to your controllers and is sort of turning your controller into a view model. Your second case seems perfectly fine; complex operations are exactly what you would want your controller to handle.
As for performance, it does make a difference according to a test I wrote (fiddle; I tried to use jsperf but couldn't get a different setup per test). The results are almost twice as fast, i.e. 223,000 digests/sec using properties versus 120,000 digests/sec using getter functions. Watches are created for bindings that use Angular's $parse.
One thing to think about is inheritance. If you uncomment the ng-repeat list in the fiddle and inspect the scope of one of the elements, you can see what I'm talking about. Each child scope that is created inherits the parent scope's properties. For objects it inherits a reference, so if you have 50 properties on your object it only copies the object reference to the child scope. If you have 50 manually created functions, it will copy each of those functions to each child scope that inherits them. The timings are slower for both methods: 126,000 digests/sec for properties and 80,000 digests/sec with getter functions.
I really don't see how it would be any easier for maintaining your code, and it seems more difficult to me. If you don't want to have to touch your HTML when the server object changes, it would probably be better to do that in a JavaScript object instead of putting getter functions directly on your scope, i.e.:
$scope.obj = new MyObject(obj); // MyObject class
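For example, a hedged sketch of such a wrapper (MyObject and the property names are hypothetical, reusing the shape from the question):

// Adapts the server's nested shape once, so views bind to flat properties
// and only this class changes if the server object changes.
function MyObject(obj) {
  this.propX = obj.subobject.propX;
  this.propY = obj.subobject.propY;
}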
In addition, Angular 2.0 will be using Object.observe() which should increase performance even more, but would not improve the performance using getter functions on your scope.
It looks like this code is all executed for each function call. It calls contextGetter(), fnGetter(), and ensureSafeFunction(), as well as ensureSafeObject() for each argument, for the scope itself, and for the return value.
return function $parseFunctionCall(scope, locals) {
  var context = contextGetter ? contextGetter(scope, locals) : scope;
  var fn = fnGetter(scope, locals, context) || noop;
  if (args) {
    var i = argsFn.length;
    while (i--) {
      args[i] = ensureSafeObject(argsFn[i](scope, locals), expressionText);
    }
  }
  ensureSafeObject(context, expressionText);
  ensureSafeFunction(fn, expressionText);
  // IE stupidity! (IE doesn't have apply for some native functions)
  var v = fn.apply
    ? fn.apply(context, args)
    : fn(args[0], args[1], args[2], args[3], args[4]);
  return ensureSafeObject(v, expressionText);
};
By contrast, simple properties are compiled down to something like this:
(function(s, l /**/) {
  if (s == null) return undefined;
  s = ((l && l.hasOwnProperty("obj")) ? l : s).obj;
  if (s == null) return undefined;
  s = s.subobj;
  if (s == null) return undefined;
  s = s.A;
  return s;
})
Performance wise - it is likely to matter little
Jason Goemaat did a great job providing a benchmarking fiddle, where you can change the last line from:
setTimeout(function() { benchmark(1); }, 500);
to
setTimeout(function() { benchmark(0); }, 500);
to see the difference.
But he also frames the answer as properties being twice as fast as function calls. In fact, on my mid-2014 MacBook Pro, properties are three times faster.
But equally, the difference between calling a function or accessing the property directly is 0.00001 seconds - or 10 microseconds.
This means that if you have 100 getters, they will be slower by 1ms compared to having 100 properties accessed.
Just to put things in context, the time it takes sensory input (a photon hitting the retina) to reach our consciousness is 300ms (yes, conscious reality is 300ms delayed). So you'd need 30,000 getters on a single view to get the same delay.
Code quality wise - it could matter a great deal
In the days of assembler, software was viewed like this:
A collection of executable lines of code.
But nowadays, specifically for software that has even the slightest level of complexity, the view is:
A social interaction between communicating objects.
The latter is concerned much more with how behaviour is established via communicating objects than with the actual low-level implementation. In turn, the communication is granted by an interface, which is typically achieved using the query or command principle. What matters is the interface of (or the contract between) collaborating objects, not the low-level implementation.
By inspecting properties directly, you hit the internals of an object, bypassing its interface, and thus couple the caller to the callee.
Obviously, with getters, you may rightly ask "what's the point". Well, consider these changes to the implementation that won't affect the interface:
Checking for edge cases, such as whether the property is defined at all.
Changing getName() to return first + last names rather than just the name property.
Deciding to store the property in a mutable construct.
So even a seemingly simple implementation may change in a way that, with getters, requires only a single change, whereas without them it would require many changes, as the example below shows.
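For instance, a small illustration of that insulation (the names are illustrative):

class Person {
  constructor(first, last) {
    this.first = first;
    this.last = last;
  }
  // Callers depend on getName(); switching storage from a single `name`
  // property to first/last only required changing this one method.
  getName() {
    return this.first + ' ' + this.last;
  }
}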
I vote getters
So I argue that unless you have a profiled case for optimisation, you should use getters.
Whatever you put in the {{}} is going to be evaluated A LOT. It has to be evaluated each digest cycle in order to know if the value changed or not. Thus, one very important rule of Angular is to make sure you don't have expensive operations in any $watches, including those registered through {{}}.
Now the difference between referencing the property directly or having a function do nothing else but return it, to me, seems negligible. (Please correct me if I'm wrong)
So, as long as your functions aren't performing expensive operations, I think it's really a matter of personal preference.

Javascript's equivalent of destruct in object model

Since I've dealt in the past with javascript's funky "object model", I assume there is no such thing as a destructor. My searches were mildly unsuccessful, so you guys are my last hope. How do you execute stuff upon instance destruction?
MDN is a nice resource for JS.
No, there is nothing like calling a function when an object ceases to exist.
FinalizationRegistry might be what you need. It is not a destructor, but it executes a function once the object is garbage collected. In any case, this is what I wish I had found when I first came on here :)
More recently, this link is of more use in answering this question. The most important part:
As of 2012, all modern browsers ship a mark-and-sweep
garbage-collector.
...
Cycles are no longer a problem
In the first example above, after the function call returns, the two
objects are no longer referenced by any resource that is reachable
from the global object. Consequently, they will be found unreachable
by the garbage collector and have their allocated memory reclaimed.
Limitation: Releasing memory manually
There are times when it would be convenient to manually decide when
and what memory is released. In order to release the memory of an
object, it needs to be made explicitly unreachable.
So as far as cyclic references goes, de[con]structors aren't really needed.
One cool trick I have thought of though, if you have cyclic references and you want easy manual control over deconstruction...
class Container {
  constructor() {
    this.thisRef = [ this ];
    this.containee = new Containee({ containerRef: this.thisRef });
  }

  // Note: deconstructor is not an actual JS thing/keyword.
  deconstructor() {
    // Have to delete `this.thisRef[0]` and not `this.thisRef`, in
    // order to ensure Containee's reference to Container is removed.
    delete this.thisRef[0];
  }

  doSomething() {
  }
}

class Containee {
  constructor({ containerRef }) {
    // Assumption here is, if the Container is destroyed, so will the
    // Containee be destroyed. No need to delete containerRef, no need
    // for a deconstructor function!
    this.containerRef = containerRef;
  }

  someFunc() {
    this.containerRef[0].doSomething();
  }
}
let c = new Container();
...
//No cyclic references!
c.deconstructor();
So here, instead of the Containee class storing a direct reference to the Container instance, it stores a reference to a size-1 array containing the Container reference, which the Container instance can then delete itself from. The array, with the reference, is managed by Container.
But again, this isn't really needed, since garbage collection in all modern browsers is mark-and-sweep and can handle cyclic references.
In other languages the destructor is handy for implementing the memento pattern. That's actually what led me to this topic. For example, in a click event it'd be nice to have a generic function that I can pass the event target to, which disables the target and then re-enables it when it falls out of scope. Consider a submit button that does something like this:
async function saveMyStuff(e) {
  const sleeper = new nap(e)
  let data = await fetch(...)
  // a bunch more code.
}

class nap {
  constructor(e) {
    this.button = e.currentTarget
    this.button.disabled = true
  }
  // Hypothetical: JavaScript never calls this automatically.
  destructor() { this.button.disabled = false }
}
This kind of construct would give me a one-liner that handles enabling/disabling all of my buttons when I'm talking to the backend or doing any other processing. I wouldn't have to worry about cleaning up if I return somewhere in the middle or anything like that.

Creating a prototype function to destroy itself?

I'm currently working with socket.io in Node.js. I have a class called Room; its functions are self-explanatory. Its basic model looks like:
Room(owner)
  this.owner = owner
  occupants = []

Room.prototype = {
  sendMessage()
  getUsers()
  leaveParty()
}
But I want to make one for destroying itself. I tried doing
Room.prototype.destroy = function() {
  this = undefined
}
and then doing
var roomVariable = new Room('blah');
roomVariable.destroy.call(roomVariable);
But that does not work. I'm running out of ideas on how to make this destroy itself. Basically, after there are no more users left in occupants, I want it erased from memory and all. Thanks!
Quick answer is: you can't.
Long answer is: destruction of a JS object and its subsequent garbage collection are only possible once it is out of scope. You will need to hunt down and delete/unset all references to the item. This sucks, I know, but that's how it is.
Much like in PHP, this in a prototype method is actually not the object itself, merely an interface to it. You can't unset it, you can't re-define it (it'd lead to chaos and confusion otherwise).
The proper form for this is to let something else, an object manager, know that you intend to delete this. At which point, this object does the actual GC.
Your Room's destroy()/dispose() method should release any resources that must be explicitly released (like an open transaction that doesn't auto-commit) and signal any known "watchers" or "subscribers" that it's closing (so they can remove their references to it). This can include your RoomContainer, or you can make your RoomContainer responsible for removing references to expired rooms. In either case, once there are no more references to the room, the garbage collector is free to remove it from memory.
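A sketch of that shape (the 'destroyed' event name and the RoomContainer API are illustrative, not part of the original Room class):

const { EventEmitter } = require('events');

class Room extends EventEmitter {
  constructor(owner) {
    super();
    this.owner = owner;
    this.occupants = [];
  }
  destroy() {
    // Signal subscribers so they can drop their references to this room.
    this.emit('destroyed', this);
    this.removeAllListeners();
  }
}

class RoomContainer {
  constructor() {
    this.rooms = new Set();
  }
  add(room) {
    this.rooms.add(room);
    // Drop our reference when the room announces its own destruction.
    room.on('destroyed', (r) => this.rooms.delete(r));
  }
}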

How do I explicitly release a JavaScript object from within the object?

I'm using John Resig's recipe for JavaScript 'classes' and inheritance. I've stripped my code back to something like this for this question:
MyClass = Class.extend({
  // create an <h3>Hello world!</h3> in the HTML document
  init: function (divId) {
    this._divId = divId;
    this._textDiv = document.createElement("h3");
    this._textDiv.innerHTML = "Hello world!";
    document.getElementById(divId).appendChild(this._textDiv);
  },

  // remove the <h3> and delete this object
  remove: function () {
    var container = document.getElementById(this._divId);
    container.parentNode.removeChild(container);
    // can I put some code here to release this object?
  }
});
All works well:
var widget = new MyClass("theDivId");
...
widget.remove();
I'm going to have hundreds of these things on a page (obviously with some sensible functionality) and I'd like a simple way to release the memory for each object. I understand I can use widget = null; and trust the GC releases the object when required (?), but can I do something explicit in the remove() method? I know that placing this = null; at the end of remove() doesn't work ;)
There is no way to destroy objects manually; the only way is to free all links to your object and trust removal to the GC.
Actually, in your code you should also set this._textDiv = null and container = null in the remove method, because they can be a problem for GC in some browsers.
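In other words, a hedged rewrite of the remove method along those lines:

remove: function () {
  var container = document.getElementById(this._divId);
  container.parentNode.removeChild(container);
  // Drop our references so the detached DOM nodes can be collected.
  this._textDiv = null;
  container = null;
}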
No. You don't have any way of accessing the garbage collector directly. As you say, the best you can do is make sure the object is no longer referenced.
IMO, it's better that way. The garbage collector is much smarter than you (and me) because years of research has gone into writing the thing, and even when you try and make optimisations, you're likely still not doing a better job than it would.
Of course if you're interfacing with a JS engine you will be able to control the execution and force garbage collection (among much more), although I very much doubt you're in that position. If you're interested, download and compile SpiderMonkey (or V8, or whatever engine tickles your fancy), and in the REPL I think it's gc() for both.
That brings me to another point, since the standard doesn't define the internals of garbage collection, even if you manage to determine that invoking the gc at some point in your code is helpful, it's likely that that will not reap the same benefits across all platforms.
this is a keyword, to which you cannot assign any value. The only way to remove objects from a scope is to manually assign null to every variable.
This method doesn't always work, however: in some implementations of XMLHttpRequest, one has to reset the onreadystatechange handler and the open function before the XMLHttpRequest object is freed from memory.
