Minimizing browser reflow/re-rendering - javascript

I'm currently working on some code for my master's thesis. I've a few questions regarding effective DOM manipulation.
1) Consider you had to perform a bunch of DOM manipulation on a number of nodes that are close to each other. Would it make sense to make a deep copy of the topmost parentNode of all of those nodes (and keep it outside the DOM), perform the manipulations on that subtree and then swap it with its counterpart in the DOM? Would this minimize browser reflow/re-rendering?
2) Is changing the innerHTML of a node more/less performant than manipulating its subtree?
3) Is there any more good advice you can give me on efficient DOM manipulation in vanilla JavaScript (without any frameworks/libraries)?
Thank you in advance!

The most important thing to do in order to prevent excessive browser rendering is to make sure you group your reads and writes.
If you need to do something to several nodes and need to read something from them, you should read from all the nodes first, and then write to all of them.
The way the DOM works is that whenever you read a layout-dependent property, the browser checks whether anything has changed since the last layout; if it has, it must recalculate the layout (a reflow) before it can answer.
Therefore, first select all the elements and cache the information you need to read, then perform the writes on all of them.
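For example, a minimal sketch of that read-then-write pattern (the .box selector and the doubled heights are just placeholders). Interleaving offsetHeight reads with style writes would force a reflow on every loop iteration; batching them keeps it to a single reflow:
// Read phase: collect every measurement first.
var boxes = document.querySelectorAll('.box');
var heights = [];
for (var i = 0; i < boxes.length; i++) {
    heights.push(boxes[i].offsetHeight);
}
// Write phase: apply all the style changes afterwards.
for (var j = 0; j < boxes.length; j++) {
    boxes[j].style.height = (heights[j] * 2) + 'px';
}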

1) Consider you had to perform a bunch of DOM manipulation on a number
of nodes that are close to each other. Would it make sense to make a
deep copy of the topmost parentNode of all of those nodes (and keep it
outside the DOM), perform the manipulations on that subtree and then
swap it with its counterpart in the DOM? Would this minimize browser
reflow/re-rendering?
Yes. Do the changes on the detached counterpart and then swap it back in; the browser only has to reflow once, when the subtree is re-inserted.
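For example (a sketch, with #widget and the class name as placeholders): clone the subtree, do all the work on the detached copy, then swap it in with replaceChild so the live tree is only touched once.
var original = document.getElementById('widget');
var copy = original.cloneNode(true); // deep copy, not attached to the document
// ...perform all the expensive manipulation on the detached copy...
var paragraphs = copy.querySelectorAll('p');
for (var i = 0; i < paragraphs.length; i++) {
    paragraphs[i].className = 'highlight';
}
// One swap, one reflow.
original.parentNode.replaceChild(copy, original);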
2) Is changing the innerHTML of a node more/less performant than
manipulating its subtree?
More performant, because you do the string manipulation outside the DOM and the browser only has to parse and render the result once when you assign it.
3) Is there any more good advice you can give me on efficient DOM
manipulation in vanilla JavaScript (without any frameworks/libraries)?
document.createDocumentFragment() is the best fully controllable virtual DOM ever.
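A sketch of that pattern (the list id and the item texts are placeholders): build everything inside a fragment off-DOM, then append it in a single operation.
var fragment = document.createDocumentFragment();
// Build 100 list items off-DOM; nothing is rendered yet.
for (var i = 0; i < 100; i++) {
    var li = document.createElement('li');
    li.textContent = 'Item ' + i;
    fragment.appendChild(li);
}
// A single insertion into the live DOM; the fragment's children move in one go.
document.getElementById('list').appendChild(fragment);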

Related

Do I understand the point of React's virtual DOM correctly?

My way of thinking:
If we want to perform something on a DOM element, we can do it by:
document.getElementById("someId").DoSomething();
document.getElementById("someId").DoSomethingElse();
In that situation the browser needs to search the entire DOM for the someId element. Then it forgets the element and searches again to perform DoSomethingElse().
To solve the "forgetting and searching again" problem, we can save our element as a JavaScript object.
var someElement = document.getElementById("someId");
someElement.DoSomething();
someElement.DoSomethingElse();
Going further, we can save an entire group of elements or entire subtrees to achieve better performance. One more step and we have the whole DOM saved as a JavaScript object: the virtual DOM.
Is that the correct way to understand the purpose of the virtual DOM?
Sorry for the noob questions, I'm not a front-end developer, I'm just curious :)
The main point of the virtual DOM is that, effectively, you're working on a copy of the real DOM. But working with that copy is much faster than working with the actual DOM, because it only contains what React actually needs, leaving browser-specific issues aside.
The main problem with working with the actual DOM is that it's slow. It's faster to work on that kind of copy, do your work there, and once the changes are done, update the actual DOM.
Yes, it sounds a bit crazy, but it is faster to compute the differences between state changes and then apply everything in "just one step" than to make those changes directly on the actual DOM.
Additionally, you've used just a single DOM node in your example, but when you're working on changes to whole DOM subtrees, things are not that easy.
For an explanation with more detail you can take a look to this article: http://reactkungfu.com/2015/10/the-difference-between-virtual-dom-and-dom/

What kind of performance optimization is done when assigning a new value to the innerHTML attribute

I have a DOM element (let's call it #mywriting) which contains a bigger HTML subtree (a long sequence of paragraph elements). I have to update the content of #mywriting regularly (but only small things will change, the majority of the content remains unchanged).
I wonder what is the smartest way to do this. I see two options:
In my application code I find out which child elements of #mywriting have been changed and I only update the changed child elements.
I just update the innerHTML attribute of #mywriting with the new content.
Is it worth developing the logic of approach one to find the changed child nodes, or will the browser perform this kind of optimization itself when I apply approach two?
No, the browser doesn't do any such optimisation. When you reassign innerHTML, it will throw away the old contents, parse the HTML, and place the new elements in the DOM.
Doing a diff to only replace (or rather, update) the parts that need an update can be worth a lot, and is done with great success in rendering libraries that employ a so-called virtual DOM.
However, they're doing that diff on an element data structure, not an HTML string. Parsing that to find out which elements changed is going to be horribly inefficient. Don't use HTML strings. (If you're already sold on them, you might as well just use innerHTML).
Without considering the overhead of calculating which child elements have to be updated, option 1 seems to be much faster (at least in Chrome), according to this simple benchmark:
https://jsbench.github.io/#6d174b84a69b037c059b6a234bb5bcd0
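For illustration, a minimal sketch of the two approaches (the helper names are made up; #mywriting is taken from the question). Approach one touches only the paragraph that actually changed, while approach two re-parses the whole subtree:
var container = document.getElementById('mywriting');
// Approach one: update only the changed child.
function updateParagraph(index, newText) {
    var p = container.children[index];
    if (p.textContent !== newText) { // skip the write entirely if nothing changed
        p.textContent = newText;
    }
}
// Approach two: throw the old content away and re-parse everything.
function replaceAll(newHtml) {
    container.innerHTML = newHtml;
}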

Removing an element - optimized way

What is the most efficient way of removing an element from the DOM? (JS or jQuery)
removeChild()
This is what I have always used. But I recently came across this:
The removed child node still exists in memory, but is no longer part
of the DOM. With the first syntax-form shown, you may reuse the
removed node later in your code, via the oldChild object reference.
So, if I don't want to preserve the removed element in memory (for better performance), what is the best method?
Or, like in Java, if the reference is null, is it automatically garbage collected so there is no need to worry about performance? I am asking this specifically because I am dealing with SVG and many append/remove calls are made.
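For reference, a minimal sketch of the removal pattern in question (the element id is a placeholder): removeChild only keeps the node alive if you keep the returned reference; drop every reference and the node becomes eligible for garbage collection, just like any other JavaScript object.
var node = document.getElementById('circle1'); // placeholder id
// removeChild returns the removed node, so you can keep it for later re-use:
var detached = node.parentNode.removeChild(node);
// If you do not need it again, simply drop every reference to it;
// an unreachable node is garbage collected like any other object.
detached = null;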

Is manipulating an "imaginary" element faster than an element currently in the DOM?

Say I'm using jQuery to loop through and perform some manipulation on existing web page elements. There are multiple changes to be made, the number of elements in the set is at least somewhat large, and the element structure is somewhat complex.
Assuming I get all the elements into the same jQuery object, would it be faster to use jQuery's .clone (or .detach) method to create an "imaginary" copy to work on, remove the current elements, then re-insert the changed copy into the DOM?
...or does that not make a difference, since live DOM elements can be manipulated just as fast as non-DOM ones?
Yes, actually, though your mileage may vary.
When an element is visible, manipulations will cause the browser to redraw the page. Many redraws can add up to a significant performance hit.
When an element is invisible, no redraws will be triggered.
Mass element clones are probably pretty costly, so I'd avoid doing that if possible.
You can clone that object or create a new document fragment. Make changes on that new object and replace it in the DOM:
https://developer.mozilla.org/en-US/docs/Web/API/Node.cloneNode
https://developer.mozilla.org/en-US/docs/Web/API/document.createDocumentFragment
Working on an object that is not part of the DOM will not trigger any paint/reflows.
If performance is an issue, do not use jQuery; use plain old JavaScript, as both cloneNode and createDocumentFragment are well supported.

Javascript DOM tree duplicate for manipulation

Since the DOM tree of a page is active and always reflected in the browser, what is the best way to modify this DOM tree for some purpose without affecting the actual rendered tree? Let's say my purpose is to swap certain child nodes and see how similar the DOM tree still remains.
Is creating a duplicate tree the only solution? If it is, is there a function to do this, or do I need to write my own function to create a duplicate copy of the tree? I won't need all the attributes of the element object, so I can create a simpler object with a few attributes that point to the siblings and children.
You can use document.cloneNode(true), or the same method on another node. cloneNode clones any node, and the true means it should be recursive (deep). Obviously, this could have a significant performance cost on a large page.
If you are willing to use jQuery:
var clone = $("selectorForSomeElement(s)").clone();
clone now is a copy of the element structure.
You can then work off of clone to do whatever experimenting you like.
Maybe consider one of the many great JavaScript libraries out there, e.g. jQuery. These allow you to easily copy parts of, or even the whole DOM of, a document and store that apart from the DOM.
If you need to roll your own solution, a good point to start is Resig's post on document fragments: http://ejohn.org/blog/dom-documentfragments/.
Good luck.
