Right now, if you want to reuse a certain DOM element, you clone it and then append the clone. If you just append the element itself, it is removed from its previous position, i.e. only one instance of it can exist in the document; therefore you clone it first. That is great in scenarios where that's the desired behaviour, but not in all situations: for example, if the element contains an iframe, cloning it causes the iframe to be refetched each time.
But if you could refer to the existing DOM node by reference, you would only have one copy of it and wouldn't need to clone.
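A minimal sketch of the two behaviours, assuming hypothetical #source and #target containers:

var original = document.getElementById("source");
var target = document.getElementById("target");

// Cloning: the original stays in place and a second, independent copy is created.
target.appendChild(original.cloneNode(true));

// Appending the node itself: it is moved, i.e. removed from its previous parent first,
// so only one instance of it ever exists in the document.
target.appendChild(original);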
Related
In Vaadin, re-adding a component that was previously removed creates a new element in the DOM.
Let's look at it in detail:
Button button = new Button("test");
button.getElement().executeJs("""
    this.addEventListener("click", event => {
        alert("hello");
    });
""");
add(button);
Now, after some event on the server, we decide to remove the component from the view, so the corresponding element in the DOM gets removed.
Then, after another event, we add the button component again, so Vaadin creates a new element on the client and adds it to the DOM (the new element is missing the event listener).
What I would expect to happen is that Vaadin reuses the same element that existed before, but it does not. Normally this would not really matter, but in our case we added an event listener with JS. (Yes, we could add event listeners on the Java side, but let's suppose that we really need to do it in JS because we want to execute some code on the client.)
Why is Vaadin doing this, and is there an option to make Vaadin always reuse the same element?
In pure JS I could easily just create a lookup table with the elements that I removed, and then later use the elements in the lookup table to add them back to the DOM. Doing this would keep all the event listeners for each element.
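A minimal sketch of that lookup-table approach in plain JS (the stash name and keys are made up for illustration):

var stash = new Map();

function removeAndRemember(key, el) {
    stash.set(key, el);   // keep a reference so the node (and its listeners) survive
    el.remove();          // take it out of the DOM
}

function restore(key, parent) {
    var el = stash.get(key);
    if (el) {
        parent.appendChild(el);   // the very same node goes back in, listeners still attached
        stash.delete(key);
    }
}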
What really perplexes me is that even though the element in the DOM is different every time, the Element I get with component.getElement() is always the same. Isn't this element supposed to represent the element on the client side?
Of course we could just run the same JS on the element every time we add it to the view, but that is quite cumbersome.
Is Vaadin doing this for performance reasons? What are your explanations for this behaviour?
This is indeed a mechanism to avoid leaking memory. A mechanism based on server-side reference tracking would be significantly more complex, work with a delay (because the reference is cleared only when GC runs), and make it more difficult for the developer to control what happens. The current design makes it easy for the developer to choose what should happen: hide to preserve it in the browser, detach to let it be garbage collected.
I could also clarify that the same DOM element is reused in cases when the component is detached and then attached back again during the same server visit.
Whether via JavaScript or jQuery, are there any detriments to setting values on elements that don't exist?
In some of my generic functions that address dynamically built DOMs, some elements (selected by class) are assigned values and attributes, but they might not always exist.
You can create elements that aren't connected to the DOM and perform any operations on them that you'd perform on a normal DOM element. This is often a better approach because changes to a disconnected DOM element won't cause the browser to redraw. Then, after you've applied all your changes, you can attach that element to the DOM and cause the browser to redraw only once.
Likewise, you can assign as many classes as you wish; whether there are styles associated with them or not doesn't matter.
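For example (a minimal sketch, assuming a hypothetical #list container already in the page):

var list = document.createElement("ul");            // not connected to the DOM yet

for (var i = 0; i < 100; i++) {
    var item = document.createElement("li");
    item.className = "row highlight";               // classes can be assigned whether styled or not
    item.textContent = "Item " + i;
    list.appendChild(item);                         // no redraw: the whole tree is still detached
}

document.getElementById("list").appendChild(list);  // one insertion, one redraw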
I would like to know if detaching a div on load and appending that same div on a click will increase memory? When you detach a div (filled with images) is the div and image information kept in memory?
For example:
<div class="divContain-one">
<img src="test.png">
</div>
<div class="divContain-two"></div>
var divContainingImage = $("#divContain-one").detach();
$("button").click(function(){
    $("#divContain-two").append(divContainingImage);
});
It shouldn't increase memory usage significantly. The variable simply contains a reference to the same memory that held the DOM element, and any related jQuery data, before it was detached. All that has happened is that the DOM itself no longer has a reference to that memory.
When you append it, you don't increase the memory, either. It simply adds a reference to the same object to the new location in the DOM, it doesn't make a copy.
The information has to be kept in memory -- how else could it know what to add to the DOM when you append?
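You can verify that yourself: the re-appended node is the very same object, not a copy (a minimal check, reusing the ids from the example above):

var node = divContainingImage[0];                                 // the underlying DOM element
$("#divContain-two").append(divContainingImage);
console.log(document.getElementById("divContain-one") === node);  // true: same node, no copy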
One caveat: the act of calling $("#divContain-one") creates a jQuery object that wraps around the DOM element, and the variable contains a reference to this object, not just the DOM element. This does increase memory usage, independent of what .detach() does.
According to the jQuery documentation, the detach method keeps all data associated with the removed element.
So, yes, especially given that you declared a variable and assigned the return value of detach to it, the data is definitely stored in memory. Keep in mind, though, that such memory usage is not significant unless we are talking about hundreds or thousands of elements.
You also need to consider whether there are event listeners attached to the DOM object.
If you remove an object from the page, the memory can eventually be reclaimed once there are no more references to it.
If you remove an element and that element still has a click listener attached, it can continue to sit in memory indefinitely. Remove the listeners first, then remove the DOM object. That way the DOM object can eventually be removed from memory.
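A minimal sketch of that order of operations (the #panel element and handler are hypothetical):

var panel = document.getElementById("panel");
function onClick(event) { /* ... */ }
panel.addEventListener("click", onClick);

// Later, when the element is no longer needed:
panel.removeEventListener("click", onClick);   // drop the listener first
panel.remove();                                // then take the element out of the DOM
panel = null;                                  // and drop your own reference so it can be collected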
In order to maintain a correct and highly responsive GUI overlay on each website, I need to register and analyze every relevant DOM element as soon as possible. Currently I am using a MutationObserver, which does this work for me; simplified, it looks like this:
var observer = new MutationObserver(
    function(mutations){
        mutations.forEach(
            function(mutation){
                if(mutation.type == 'childList')
                {
                    var nodes = mutation.addedNodes;
                    var n = nodes.length;
                    for(var i = 0; i < n; i++)
                    {
                        if(nodes[i].nodeType == 1) // ELEMENT_NODE
                        {
                            // Save it in specific Array or something like this
                            AnalyzeNode(nodes[i]);
                        }
                    }
                }
            }
        );
    }
);
var config = {subtree: true, childList: true};
observer.observe(document, config);
But I've come to the realization that the MutationObserver isn't calling AnalyzeNode for every node contained in the DOM. When an already complete (sub)tree is created outside of the DOM (e.g. by executing an external JS script on page load) and its root is appended to the DOM, mutation.addedNodes will only contain the subtree's root, and all of its children will go unnoticed (because no further mutations take place there), being part of the DOM but never having been analyzed.
I had the idea of checking whether the appended node already has childNodes, to identify it as the root of an appended subtree, but unfortunately it seems that every added node may have children at the moment the MutationObserver's callback is called. So no distinction is possible this way.
I really don't want to double-check every child of an added node at the moment that parent node is processed by the MutationObserver. Most of the time the child will be processed by the MutationObserver anyway, when it is itself part of addedNodes in another mutation, and the overhead seems unnecessarily high.
Furthermore, I thought about a Set of nodes whose children have to be analyzed outside of a MutationObserver call. If an added node has children at the time it is appended to the DOM, the node is added to the Set. When another mutation takes place and one of its children is part of addedNodes, that child removes its parent from the Set via mutation.target, which is the parent node (mutation.type has to be childList). The problem with this approach is the timing of when to check the children of the nodes in the Set (and the fact that I could query document.getElementsByTagName for every relevant element type instead of maintaining a Set, but the timing problem remains). Keep in mind that it should happen as soon as possible, to keep the overlay responsive and matching the website. A combination of document's onreadystatechange and the appending of new script nodes to the DOM (as an indicator that external JS code is executed) might work even for websites that recreate parts of their content (I am looking at you, duckduckgo search result page). But it seems like a workaround that won't solve the problem in 100% of the cases.
So, is there another, more efficient way? Or does any of these approaches may be sufficient if slightly changed? Thanks a lot!
(Please try to avoid jQuery in example code where possible, thank you. And by the way, I am using CEF, so ideally the solution would work with WebKit/Blink.)
EDIT1: Website rendering is done internally by CEF, and GUI rendering is done by C++/OpenGL with information obtained by the mentioned JavaScript code.
It seems your actual goal is to detect changes in the rendered layout, not (potentially invisible) DOM changes.
On gecko based browsers you could use MozAfterPaint to get notified of the bounding boxes of changed areas, which is fairly precise but has a few gaps, such as video playback (which changes displayed content but not the layout) or asynchronous scrolling.
Layout can also be changed via the CSSOM, e.g. by manipulating a <style> element's .sheet.cssRules. CSS animations, already mentioned in the comments, are another thing that can affect layout without mutations. And possibly SMIL animations.
So using mutation observers alone may be insufficient anyway.
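For example, a minimal sketch of a layout change that a childList/subtree observer never sees (only the initial appendChild of the <style> element is a mutation):

var style = document.createElement("style");
document.head.appendChild(style);                          // the only mutation the observer sees

// Later rule changes go straight through the CSSOM and produce no mutation records:
style.sheet.insertRule(".sidebar { width: 400px; }", 0);
style.sheet.cssRules[0].style.width = "200px";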
If your overlay has some exploitable geometric properties, then another possibility might be sampling the parts of the viewport that are important to you via document.elementFromPoint and calculating bounding boxes of the found elements and their children until you have whatever you need. Scheduling it via requestAnimationFrame() means you should be able to sample the state of the current layout on every frame, unless it's changed by other rAF callbacks running after yours.
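A minimal sketch of that sampling loop (the sample points are placeholders for whatever geometry matters to your overlay):

var samplePoints = [[100, 100], [300, 100], [100, 300]];

function sample() {
    samplePoints.forEach(function(p) {
        var el = document.elementFromPoint(p[0], p[1]);
        if (el) {
            var box = el.getBoundingClientRect();
            // compare box against the previously recorded geometry and update the overlay if it moved
        }
    });
    requestAnimationFrame(sample);   // re-sample on every frame
}

requestAnimationFrame(sample);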
In the end most available methods seem to have some gaps or need to be carefully tweaked to not hog too much CPU time.
Or does any of these approaches may be sufficient if slightly changed?
Combining tree-walking of observed mutations with a WeakSet to avoid processing already-visited nodes may work with some more careful filtering (see the sketch after this list):
having already visited a node does not automatically mean you can skip its children
but having visited a child without it being a mutation target itself should mean you can skip it
removal events mean you must remove the entire subtree, node by node, from the set (or just clear the set), since the nodes might be moved to another point in the tree
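A minimal sketch of the tree-walking part, reusing AnalyzeNode from the question (the removal/move filtering described above is left out for brevity):

var visited = new WeakSet();

function analyzeTree(root) {
    if (root.nodeType !== 1) return;                 // elements only
    var walker = document.createTreeWalker(root, NodeFilter.SHOW_ELEMENT);
    var node = root;                                 // include the root itself
    do {
        if (!visited.has(node)) {
            visited.add(node);
            AnalyzeNode(node);
        }
    } while ((node = walker.nextNode()));
}

new MutationObserver(function(mutations) {
    mutations.forEach(function(mutation) {
        for (var i = 0; i < mutation.addedNodes.length; i++) {
            analyzeTree(mutation.addedNodes[i]);     // walk the whole added subtree, not just its root
        }
    });
}).observe(document, {subtree: true, childList: true});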
MutationRecords seem to be listed in the order in which the changes happened (can be easily verified).
Before you run your AnalyzeNode(nodes[i]) algorithm, you can run an AnalyzeChanges(mutations) step that can determine the over all change that happened.
For example, if you see that addedNodes contains the same node 10 times, but you see the same node only 9 times in removedNodes, then you know that the net result is that the node was ultimately added to the DOM.
Of course it may be more complicated than that, you will have to detect added sub trees, and nodes that may have then been removed and added from those sub trees, etc.
Then finally, once you know what the net change was, you can run AnalyzeNode(nodes[i]).
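One way to approximate that net result, relying on the records being delivered in order and on Node.isConnected for the final arbitration (a sketch, not a full play-by-play reconstruction):

function netAddedElements(mutations) {
    var added = new Set();
    mutations.forEach(function(m) {
        m.addedNodes.forEach(function(n)   { if (n.nodeType === 1) added.add(n); });
        m.removedNodes.forEach(function(n) { if (n.nodeType === 1) added.delete(n); });
    });
    // Whatever survived the whole record list and is still attached was ultimately added.
    return Array.from(added).filter(function(n) { return n.isConnected; });
}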
I'm thinking about doing this to observe an entire <svg> tree and to render it (and re-render it when changes happen) in WebGL.
It may be tricky, because imagine the following happens synchronously by some user (you don't know what he/she will do) who is manipulating the DOM:
a subtree is added (queues a record for the root node in addedNodes)
a subtree of the subtree is removed (queues a record)
then appended somewhere else outside of the first subtree (queues another record, oh boy.)
an element is removed from the other subtree (queues a record)
and added back to the original subtree (queues a record)
etc., etc.
Finally, you receive a list of MutationRecords that details all those steps.
We could loop through all the records and basically recreate a play-by-play of what happened, then we can figure out the final net changes that happened.
After we have those net changes, it'll be like having a list of records, but simpler (for example, removing then adding a node simply cancels out, so we don't really care if a node was removed then immediately added, because the end result is basically nothing for that node).
People have tried to tackle the problem of detecting changes between trees.
Many of those solutions are associated with the terms "virtual dom" and "dom diffing" as far as web goes, which yield results on Google.
So, instead of doing all that mutation analysis (which sounds like a nightmare, though if you do it, please publish it as open source so people can benefit), we can possibly use a diffing tool to find the difference between the DOM before the mutation and the DOM at the time the MutationObserver callback is fired.
For example, virtual-dom has a way to get a diff. I haven't tried it yet. I'm also not sure how to create VTrees from DOM trees to pass them into the diff() function.
Aha! It may be easier to get a diff with diffDOM.
Getting a diff might present the simplest changeset needed to transition from one tree to another, which might be much easier than analyzing a mutation record list. It's worth trying out. I might post back what I find when I do it with the <svg>s...
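From what I recall of the diffDOM README, usage looks roughly like this; the import form, method names, and observer options here are assumptions and untested, so double-check them against the library's documentation:

import { DiffDOM } from "diff-dom";

var dd = new DiffDOM();
var svg = document.querySelector("svg");
var before = svg.cloneNode(true);                     // snapshot of the previous state

new MutationObserver(function() {
    var diff = dd.diff(before, svg);                  // elementary changes between the two trees
    // ...feed `diff` to the WebGL renderer...
    before = svg.cloneNode(true);                     // new baseline for the next callback
}).observe(svg, {subtree: true, childList: true, attributes: true, characterData: true});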
I am wondering whether it is necessary to manually delete an element that has been detached with jQuery's detach() function (and all references to it have been nulled).
Here is the JavaScript that I have tried.
For example:
elem = $(".test").detach();
elem = null;
Is the element completely gone, like with $(".test").remove(); or is something like elem.remove() needed?
Edit: Including my comment to the question:
I am detaching multiple elements. Some of them get reused (reinjected in DOM), but others need to be removed permanently after detaching.
Is it necessary to manually delete an element that has been detached with jQuery's detach() function (and all references to it have been nulled)?
You cannot "delete" an element. The garbage collector will automatically collect it when there are no references left. If you have detached it, it will be wiped from memory without problems.
However, this is not the difference between detach and remove. When simply detaching it, some data that jQuery stores not on the element but in its internal cache will be leaked. You would need to explicitly call the [internal!] cleanData method on the elements to fix that, but you should simply call .remove() instead.
You shouldn't have to worry about that. If there isn't a reference around to the element that has been detached, the Garbage Collector will clean it up.
Although, you're probably better off calling remove(). detach() is meant specifically for elements that you want to keep around (it maintains all the jQuery-specific data associated with the element). Both end up calling elem.parentNode.removeChild on the actual DOM element. As far as I know there isn't a way to manually delete or destroy it, but that's the job of the garbage collector anyway.
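A minimal sketch of splitting the two cases from the question (the selectors are hypothetical):

// Elements that will be reinserted later: detach() keeps their jQuery data and handlers.
var reusable = $(".panel-reusable").detach();

// Elements that are gone for good: remove() also clears jQuery's internal data for them.
$(".panel-obsolete").remove();

// Later, put the reusable ones back:
$("#container").append(reusable);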
Straight from jQuery documentation:
The .detach() method is the same as .remove(), except that .detach() keeps all jQuery data associated with the removed elements. This method is useful when removed elements are to be reinserted into the DOM at a later time.
and the .remove() method takes elements out of the DOM.
So you should be covered.