MDN says this is one way to remove all children from a node. But since only the first child node is referenced in code, do the others become memory orphans? Is anything known about whether this is the case in any or all browsers? Is there something in the DOM standard that calls for garbage collection when doing this?
I guess you are referring to this example
// This is one way to remove all children from a node
// box is an object reference to an element with children
while (box.firstChild) {
//The list is LIVE so it will re-index each call
box.removeChild(box.firstChild);
}
No, it does not cause a memory leak.
What happens is that after the 1st child is removed, the 2nd one takes its place as the 1st child, and so on until there are no children left.
Also, garbage collection cannot usually be requested on demand; the virtual machine runs it when it thinks it can, and that does differ between browsers.
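To make the quoted loop concrete, here is a minimal sketch wrapping it in a helper (the removeAllChildren name is my own, not a DOM API):

```javascript
// Remove every child of a node.
// The child list is live, so firstChild re-points to the next
// remaining child after each removal.
function removeAllChildren(node) {
  while (node.firstChild) {
    node.removeChild(node.firstChild);
  }
}

// Modern browsers also offer a built-in one-liner with the same effect:
//   node.replaceChildren();
```

Once no script variable references the removed children anymore, the garbage collector is free to reclaim them whenever it runs.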
Related
Given this sample code:
function someMethod(elements) {
    var observer = new MutationObserver(function(events) {
        SomeLib.each(events, function(event, k, i) {
            if (event.removedNodes) {
                SomeLib.each(event.removedNodes, function(removedElement, k, i) {
                    console.log(222, removedElement);
                });
            }
        });
    });
    SomeLib.each(elements, function(element, k, i) {
        console.log(111, element);
        observer.observe(element, {
            childList: true,
            subtree: false
        });
    });
}
I've noticed that if I call someMethod(parentElement) and then later call someMethod(parentElement.querySelector('someChildElement')), only the first call triggers events; the second call appears to trigger none.
This is unfortunate, as I am mostly interested in an event when the actual node is removed. Nothing else. Child nodes are not really of interest either, but the childList (or characterData) option has to be true, so I am forced to use one I guess.
I cannot organize my code around keeping track of whose parent is already tracked or not, and therefore I would have found it much easier to simply listen for remove events on any particular node, whatever way it is eventually deleted.
Considering this dilemma, I am thinking of registering a MutationObserver on the document element instead, and relying on my own handler to detect the elements I wish to observe.
But is this really my best option?
Performance is obviously a concern, since every mutation will fire this document-level listener, but perhaps having just one MutationObserver is reasonably efficient, since I will only trigger my own function when I detect an element of interest.
It requires iterating over removedNodes (and potentially addedNodes), however, so it has a real cost on every mutation rather than just on the node I am observing.
This begs the question: is there not already a global mutation observer registered somewhere?
Do I really have to manually observe the document myself?
What if other libraries also start to observe things similarly either on body or child elements?
Won't I destroy their implementation? (Not that I have any such dependency just now.) But it is worrying how horrible this implementation really seems to be, though not surprising considering how everything has been horrible with the web since the dawn of time. Nothing is ever correctly implemented. Thank you, w3c.
Is MutationObserver really the way to go here? Perhaps there is some node.addEventListener('someDeleteEvent') I can listen to instead?
Why are we being steered away from DOMNodeRemoved-like events, and can we really make the replacement? Especially since the performance penalty of MutationObserver seems real, I wonder why "smart people" everywhere recommend against DOMNodeRemoved.
They are not the same. What was the idea behind deprecating those anyway, since the replacement seems kind of useless and potentially problematic to use?
For now, I have already implemented this global document listener that allows me to detect nodes I am interested in only, and fire the functions I desire when found. However, performance might be hit. I am not sure.
I am considering scrapping the implementation and instead relying on the "deprecated" DOMNodeRemoved regardless, unless someone can chip in with some thoughts.
My implementation simply registers on document and then basically looks at each removed element to see whether it has the custom event key on it, firing the handler if it does. Quite efficient, but it requires similar iteration on each mutation observed across the entire document.
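As a sketch of what such a global listener could look like (the __onDetached key and both function names are my own invention, not a standard API), with the observer wiring guarded so the walking logic can be read on its own:

```javascript
// Hypothetical property under which a node stores its removal handler.
var REMOVAL_KEY = '__onDetached';

// Walk a removed subtree and fire any handler stored under REMOVAL_KEY.
function fireRemovalHandlers(node) {
  if (typeof node[REMOVAL_KEY] === 'function') {
    node[REMOVAL_KEY](node);
  }
  var children = node.childNodes || [];
  for (var i = 0; i < children.length; i++) {
    fireRemovalHandlers(children[i]);
  }
}

// Browser wiring: one observer on the whole document.
if (typeof document !== 'undefined') {
  new MutationObserver(function (mutations) {
    mutations.forEach(function (m) {
      for (var i = 0; i < m.removedNodes.length; i++) {
        fireRemovalHandlers(m.removedNodes[i]);
      }
    });
  }).observe(document, { childList: true, subtree: true });
}
```

Note that a node that is merely moved (removed and then re-inserted) will also fire its handler, which may or may not be what you want.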
I would like to know if detaching a div on load and appending that same div on a click will increase memory? When you detach a div (filled with images) is the div and image information kept in memory?
For example:
<div id="divContain-one">
    <img src="test.png">
</div>
<div id="divContain-two"></div>
var divContainingImage = $("#divContain-one").detach();
$("button").click(function() {
    $("#divContain-two").append(divContainingImage);
});
It shouldn't increase memory usage significantly. The variable simply contains a reference to same memory that held the DOM element, and any related jQuery data, before it was detached. All that has happened is that the DOM itself no longer has a reference to that memory.
When you append it, you don't increase the memory, either. It simply adds a reference to the same object to the new location in the DOM, it doesn't make a copy.
The information has to be kept in memory -- how else could it know what to add to the DOM when you append?
One caveat: the act of calling $("#divContain-one") creates a jQuery object that wraps the DOM element, and the variable contains a reference to this object, not just the DOM element. This does increase memory usage, independent of what .detach() does.
According to the jQuery documentation, the detach method keeps all data associated with the removed element.
So yes: especially given that you declared a variable and assigned the return value of detach to it, the data is definitely stored in memory. Anyway, keep in mind that such memory usage is not significant unless we are talking about hundreds or thousands of elements.
You also need to consider if there are event listeners attached to the DOM object.
If you remove an object from the page then the memory can eventually be cleaned once there are no more references to it.
If you remove a node and that node has a click listener attached, then it can continue to sit in memory indefinitely. Remove the listeners first, then remove the DOM object. That way the DOM object can eventually be removed from memory.
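One way to enforce that ordering is to track the listeners you attach so they can all be detached in one call; a sketch (both helper names are mine):

```javascript
// Register a listener and remember it on the node for later cleanup.
function addTrackedListener(node, type, handler) {
  (node._listeners = node._listeners || []).push({ type: type, handler: handler });
  node.addEventListener(type, handler);
}

// Detach all tracked listeners, then remove the node from its parent.
function safeRemove(node) {
  (node._listeners || []).forEach(function (l) {
    node.removeEventListener(l.type, l.handler);
  });
  node._listeners = [];
  if (node.parentNode) {
    node.parentNode.removeChild(node);
  }
}
```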
In order to maintain a correct and highly responsive GUI overlay on each website, I need to register and analyze every relevant DOM element as soon as possible. Currently I am using a MutationObserver, which does this work for me; simplified, it looks like this:
var observer = new MutationObserver(
    function(mutations) {
        mutations.forEach(
            function(mutation) {
                if (mutation.type == 'childList') {
                    var nodes = mutation.addedNodes;
                    var n = nodes.length;
                    for (var i = 0; i < n; i++) {
                        if (nodes[i].nodeType == 1) { // ELEMENT_NODE
                            // Save it in a specific Array or something like this
                            AnalyzeNode(nodes[i]);
                        }
                    }
                }
            }
        );
    }
);
var config = {subtree: true, childList: true};
observer.observe(document, config);
But I've come to realize that the MutationObserver isn't calling AnalyzeNode for every node contained in the DOM. When an already complete (sub)tree is created outside of the DOM (e.g. by an external JS script executed on page load) and its root is appended to the DOM, mutation.addedNodes will only contain the subtree's root; all of its children go unnoticed (because no further mutations take place there), being part of the DOM without ever having been analyzed.
I had the idea of checking whether an appended node already has childNodes, to identify it as the root of an appended subtree, but unfortunately it seems that any added node may already have children at the moment the MutationObserver's callback runs. So no distinction is possible this way.
I really don't want to double-check every child of an added node at the moment the parent is processed by the MutationObserver. Most of the time a child will be processed by the MutationObserver anyway, when it is itself part of addedNodes in another mutation, so the overhead seems unnecessarily high.
Furthermore, I thought about keeping a Set of nodes whose children still have to be analyzed outside of a MutationObserver call. If an added node has children at the moment it is appended to the DOM, the node is added to the Set. When another mutation takes place and one of its children is part of addedNodes, the child removes its parent from the Set via mutation.target, which is the parent node (mutation.type has to be childList). The problem with this approach is the timing of when to check the children of the nodes in the Set (and the fact that I could query document.getElementsByTagName for every relevant element type instead of maintaining a Set, but the timing problem remains). Keep in mind that it should happen as soon as possible, to keep the overlay responsive and fitting the website. A combination of document's onreadystatechange and the appending of new script nodes to the DOM (as an indicator of when external JS code is executed) might work even for websites recreating parts of their content (I am looking at you, duckduckgo search result page). But it seems like a workaround that won't solve the problem in 100% of the cases.
So, is there another, more efficient way? Or does any of these approaches may be sufficient if slightly changed? Thanks a lot!
(Please try to avoid JQuery where possible as example code, thank you. And by the way, I am using CEF, so the best case would be a solution working with Webkit/Blink)
EDIT1: Website rendering is done internally by CEF and GUI rendering is done by C++/OpenGL with information obtained by the mentioned Javascript code.
It seems your actual goal is to detect changes in the rendered layout, not (potentially invisible) DOM changes.
On gecko based browsers you could use MozAfterPaint to get notified of the bounding boxes of changed areas, which is fairly precise but has a few gaps, such as video playback (which changes displayed content but not the layout) or asynchronous scrolling.
Layout can also be changed via the CSSOM, e.g. by manipulating a <style>.sheet.cssRules. CSS animations, already mentioned in the comments, are another thing that can also affect layout without mutations. And possibly SMIL animations.
So using mutation observers alone may be insufficient anyway.
If your overlay has some exploitable geometric properties then another possibility might be sampling the parts of the viewport that are important to you via document.elementFromPoint and calculating bounding boxes of the found elements and their children until you have whatever you need. Scheduling it via requestAnimationFrame() means you should be able to sample the state of the current layout on every frame unless it's changed by other rAF callbacks, running after yours.
In the end most available methods seem to have some gaps or need to be carefully tweaked to not hog too much CPU time.
Or does any of these approaches may be sufficient if slightly changed?
Combining tree-walking of observed mutations with a WeakSet of already-processed nodes may work with some more careful filtering:
having already visited a node does not automatically mean you can skip its children
but having visited a child without it being a mutation target itself should mean you can skip it
removal events mean you must remove the entire subtree, node by node, from the set (or just clear the set), since the nodes might be moved to another point in the tree
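A sketch of that bookkeeping (both function names are mine; plain objects with a childNodes array stand in for DOM nodes, and for simplicity this version always descends, so it implements the first and third rules but omits the second rule's skip optimization):

```javascript
// Drop a whole subtree from the visited set: removed nodes may be
// re-inserted at another point in the tree and must be re-analyzed.
function forget(node, seen) {
  seen.delete(node);
  var kids = node.childNodes || [];
  for (var i = 0; i < kids.length; i++) {
    forget(kids[i], seen);
  }
}

// Collect every node in the added subtrees that has not been seen yet.
function collectNewNodes(mutations, seen) {
  var fresh = [];
  function walk(node) {
    if (!seen.has(node)) {
      seen.add(node);
      fresh.push(node);
    }
    // Descend even into visited nodes: they may contain new children.
    var kids = node.childNodes || [];
    for (var i = 0; i < kids.length; i++) {
      walk(kids[i]);
    }
  }
  mutations.forEach(function (m) {
    var added = m.addedNodes || [];
    for (var i = 0; i < added.length; i++) walk(added[i]);
    var removed = m.removedNodes || [];
    for (var j = 0; j < removed.length; j++) forget(removed[j], seen);
  });
  return fresh;
}
```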
MutationRecords seem to be listed in the order in which the changes happened (can be easily verified).
Before you run your AnalyzeNode(nodes[i]) algorithm, you can run an AnalyzeChanges(mutations) step that can determine the over all change that happened.
For example, if you see that addedNodes contains the same node 10 times, but you see the same node only 9 times in removedNodes, then you know that the net result is that the node was ultimately added to the DOM.
Of course it may be more complicated than that, you will have to detect added sub trees, and nodes that may have then been removed and added from those sub trees, etc.
Then finally, once you know what the net change was, you can run AnalyzeNode(nodes[i]).
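The counting idea can be sketched as follows (netChanges is my name for the hypothetical AnalyzeChanges step; it only handles per-node counts, not the added-subtree complications mentioned above):

```javascript
// Compute the net effect of an ordered list of mutation-record-like
// objects: a node is "added" if it was added more often than removed.
function netChanges(records) {
  var counts = new Map();
  records.forEach(function (r) {
    (r.addedNodes || []).forEach(function (n) {
      counts.set(n, (counts.get(n) || 0) + 1);
    });
    (r.removedNodes || []).forEach(function (n) {
      counts.set(n, (counts.get(n) || 0) - 1);
    });
  });
  var added = [], removed = [];
  counts.forEach(function (c, n) {
    if (c > 0) added.push(n);
    else if (c < 0) removed.push(n);
    // c === 0: removed and re-added equally often, a net no-op
  });
  return { added: added, removed: removed };
}
```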
I'm thinking about doing this to observe an entire <svg> tree and to render it (and re-render it when changes happen) in WebGL.
It may be tricky, because imagine the following happens synchronously by some user (you don't know what he/she will do) who is manipulating the DOM:
a subtree is added (queues a record for the root node in addedNodes)
a subtree of the subtree is removed (queues a record)
then appended somewhere else outside of the first subtree (queues another record, oh boy.)
an element is removed from the other subtree (queues a record)
and added back to the original subtree (queues a record)
etc
etc
Finally, you receive a list of MutationRecords that details all those steps.
We could loop through all the records and basically recreate a play-by-play of what happened, then we can figure out the final net changes that happened.
After we have those net changes, it'll be like having a list of records, but they will be simpler, (for example removing then adding a node simply cancels it out, so we don't really care if a node was removed then immediately added because the end result is basically nothing for that node).
People have tried to tackle the problem of detecting changes between trees.
Many of those solutions are associated with the terms "virtual dom" and "dom diffing" as far as web goes, which yield results on Google.
So, instead of doing all that mutation analysis (which sounds like a nightmare, though if you do it, please please please publish it as open source so people can benefit), we can possibly use a diffing tool to find the difference between the DOM before the mutation and the DOM at the time the MutationObserver callback is fired.
For example, virtual-dom has a way to get a diff. I haven't tried it yet. I'm also not sure how to create VTrees from DOM trees to pass them into the diff() function.
Aha! It may be easier to get a diff with diffDOM.
Getting a diff might present the simplest changeset needed to transition from one tree to another, which might be much easier than analyzing a mutation record list. It's worth trying out. I might post back what I find when I do it with the <svg>s...
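As a toy illustration of what a tree diff produces (plain {name, children} objects stand in for DOM nodes; real libraries like diffDOM do far smarter matching):

```javascript
// Naive recursive diff: children are matched by position only.
// Returns a list of patches describing how to turn tree `a` into tree `b`.
function diffTrees(a, b, path, out) {
  path = path || [];
  out = out || [];
  if (!a && b) { out.push({ type: 'add', path: path, node: b }); return out; }
  if (a && !b) { out.push({ type: 'remove', path: path, node: a }); return out; }
  if (a.name !== b.name) { out.push({ type: 'replace', path: path, node: b }); return out; }
  var n = Math.max(a.children.length, b.children.length);
  for (var i = 0; i < n; i++) {
    diffTrees(a.children[i], b.children[i], path.concat(i), out);
  }
  return out;
}
```

Positional matching means an insertion at the front of a child list reports every following sibling as replaced; that is exactly the kind of case the real diffing libraries handle better.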
I am trying to get rid of detached DOM Elements and having a hard time finding the cause of the leak.
Can somebody help me understand what "the native link from the DOM wrapper stored in the detached window property" is? What does "native link" mean?
https://developer.chrome.com/devtools/docs/heap-profiling-dom-leaks
By tracing paths to window objects, it can be observed, that the
detached DOM tree is referenced as the native link from the DOM
wrapper stored in the detached window property. To confirm this, do
the following...
Any help will be appreciated!
In the example you have linked, there is a variable called 'detached' that is being created as a global on the window object.
window.detached
They then go on to generate an entire DOM tree with lots of children and extra data, and they store that reference in the window.detached variable. It is not, however, actually mounted into the DOM.
The block you have quoted is just pointing out that if you have any DOM nodes you've generated that still have an active reference pointing to them (in this case the reference is window.detached), then they will not be garbage collected.
They go to the trouble of pointing this out because some people may expect that as soon as you unmount a tree of nodes from the DOM that they are candidates for GC. They're pointing out that what really matters is if there is still a reachable reference to the item. If not, it will be GC'ed. Otherwise it will hang around.
<div id="target">
...
</div>
$('#target').html('') will remove the content, but how can I check whether the listeners, or anything else that holds memory, are removed at the same time?
Standard JavaScript defines no means to instrument the interpreter's garbage collector, so I don't think this is possible.
However, since removing nodes is not an uncommon operation, I would not worry about browsers leaking memory in this case. Indeed as Piskvor said, the memory is probably not released immediately, but when the garbage collector eventually runs.
I am not sure how you can detect a leak from within JavaScript (using JavaScript), but there are tools available to detect leaks in JavaScript:
sIEve
IEJSLeaksDetector2.0.1.1
I'm no expert on this, but since you're using jQuery you should use $('#target').empty(). This detaches all event handlers before removing the child elements. When these are collected is up to the browser, but this ensures they will get collected when the time comes. You can also use .remove() to get rid of the selected element and all its children.
http://api.jquery.com/empty