Quoting this article (https://news.ycombinator.com/item?id=9155564):
The short answer is that the DOM is not slow. Adding & removing a DOM
node is a few pointer swaps, not much more than setting a property on
the JS object.
Are the DOM bottlenecks only those things that cause a redraw? If so, shouldn't one render from React's virtual DOM amortize to the same performance as redrawing an entire component (in one browser API call, of course)? I would think that the algorithms executed by the browser only try to redraw the diff from one state to another (like git, maybe?), implying that the browser maintains a virtual DOM by itself. So then what is the point of having a virtual DOM?
Also, shouldn't adding an element that has the display style property set to none avoid affecting performance badly? I would profile this myself, but I don't know where exactly to turn, as I only recently started programming in JavaScript.
This question may be somewhat broad for SO, but as a general answer, some other quotes from the same article are also very relevant:
However, layout is slow...
[...]
Worse, layout is triggered synchronously by accessing certain properties...
[...]
Because of this, a lot of Angular and JQuery code is stupidly slow
[...]
React doesn't help speed up layout...
What React's virtual DOM does is calculate the differences between one state of the DOM and the next, and minimize the DOM updates needed in a very smart way.
So:
DOM itself is not slow
but layout is slow
and almost all DOM updates require layout updates
so fewer DOM updates means faster pages
And the React engine does just that (as do several other tools/libraries with a virtual DOM).
More info on what virtual DOM is and its advantages e.g. here.
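To make the "fewer DOM updates" point concrete, here is a minimal sketch (the element and data shapes are made up for illustration) contrasting a wholesale rebuild with updating only what changed; the latter is essentially what a virtual DOM diff computes for you:

    // Naive approach: rebuild the whole list, invalidating every existing node.
    function renderAll(listEl, items) {
      listEl.innerHTML = '';
      for (const item of items) {
        const li = document.createElement('li');
        li.textContent = item.label;
        listEl.appendChild(li);
      }
    }

    // Minimal-update approach: only touch the nodes whose data actually changed.
    function renderDiff(listEl, prevItems, nextItems) {
      nextItems.forEach((item, i) => {
        const li = listEl.children[i];
        if (li && prevItems[i] && prevItems[i].label === item.label) return; // unchanged
        if (li) {
          li.textContent = item.label;       // update in place
        } else {
          const fresh = document.createElement('li');
          fresh.textContent = item.label;    // append only the new entries
          listEl.appendChild(fresh);
        }
      });
      // (removal of surplus trailing nodes omitted for brevity)
    }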
Q: "Are the DOM bottlenecks only those things that cause a redraw?"
A:
The redraw is GPU dependent and has nothing to do with the speed of DOM updates. DOM updates are almost instant.
Everything depends on whether a change affects the document flow. If a certain DOM or DHTML change affects the document flow, then the closer the affected element is to the root of the document, the greater the impact on document reflow.
You don't need to change the DOM content in order to cause a document reflow. A simple style property change may push elements of the flow into new positions and therefore force a document reflow.
Therefore no: DOM changes to fixed-size elements will not cause a document reflow, and the display update is practically instant. It will be applied only to the locally affected area, most of the time a frame of less than 300 x 200 pixels; an area that size can be redrawn at over 120 fps even on a really slow GPU. That's still five times smoother than watching Avengers in the cinema.
(Any spatially non-equivalent change in flow-aligned content will cause a reflow. So we have to watch for changes that affect the size and position of our floating elements, changes to inline elements inside a long stream of other inline elements, etc.)
Q: "should adding an element that has the display style property set to none not be affecting performance badly?"
A:
That's correct. Adding an element with style.display: "none" to the DOM will cause no change to the existing rendering of the document and therefore will not trigger a document reflow; it will naturally have no impact at all, i.e. it will be as fast as adding a new property to a JavaScript object.
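A quick way to see this for yourself (a hedged sketch; exact timings vary by browser):

    // Appending a hidden element generates no boxes, so no reflow is needed.
    const hidden = document.createElement('div');
    hidden.style.display = 'none';
    document.body.appendChild(hidden);

    // Appending a visible element dirties the layout; the read below is what
    // actually forces the browser to do the synchronous layout work.
    const visible = document.createElement('div');
    visible.textContent = 'hello';
    document.body.appendChild(visible);
    const h = document.body.offsetHeight; // forces layout, now including the new element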
Related
I have recently started learning the React framework and read that React creates a virtual DOM that keeps track of changes. When React updates the original DOM, it only updates the objects that have changed on the virtual DOM. Does this mean that when I program in plain JavaScript and append a new object, for example a new list item, the entire DOM is updated even though I only added one new item?
In short, it depends. There are a few types of changes you can make to the DOM. From this article on rendering performance by Google:
If you change a “layout” property, so that’s one that changes an element’s geometry, like its width, height, or its position with left or top, the browser will have to check all the other elements and “reflow” the page.
Changes like these may require large parts of the page to be reflowed and repainted. However:
If you changed a “paint only” property, like a background image, text color, or shadows, in other words one that does not affect the layout of the page, then the browser skips layout, but it will still do paint.
So, adjusting, say, the color property of some text would just require that element to repaint, without needing to repaint other parts of the DOM. There are also changes you can make that go straight to compositing and do not require any kind of repaint.
The browser does its best to do as little work as necessary.
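As a rough illustration of the three classes of change (the element id is hypothetical, and which pipeline stages actually run is browser-dependent):

    const el = document.getElementById('box');      // 'box' is a made-up id

    el.style.width = '200px';       // layout property: geometry changes, so reflow + paint
    el.style.color = 'red';         // paint-only property: layout is skipped, paint still runs
    el.style.transform = 'translateX(50px)'; // often compositor-only: neither layout nor paint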
When you update the DOM, reflow and repaint happen.
Every time the DOM changes, the browser needs to recalculate the CSS, do a layout and repaint the web page.
React doesn’t really do anything new. It’s just a strategic move.
What it does is store a replica of the real DOM in memory. When you modify the DOM, it first applies these changes to the in-memory copy. Then, using its diffing algorithm, it figures out what has really changed.
Finally, it batches the changes and applies them to the real DOM in one go.
Thus, it minimizes reflow and repaint.
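You can approximate that batching by hand: collect mutations and flush them once per frame with requestAnimationFrame. This is a sketch of the idea, not React's actual implementation (el1/el2 are placeholder elements):

    const pendingWrites = [];
    let flushScheduled = false;

    function queueWrite(write) {
      pendingWrites.push(write);
      if (!flushScheduled) {
        flushScheduled = true;
        requestAnimationFrame(() => {
          pendingWrites.forEach(w => w()); // apply all DOM writes in one go
          pendingWrites.length = 0;
          flushScheduled = false;          // at most one reflow/repaint per batch
        });
      }
    }

    // usage: many logical updates, a single DOM flush
    queueWrite(() => { el1.textContent = 'score: 10'; });
    queueWrite(() => { el2.style.width = '120px'; });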
I have several charts that I redraw every time I zoom/pan using d3 brushes.
But when I have tons of rendered elements, redrawing starts to get a little slow.
Instead of redrawing all elements every time I move my brush, I was wondering whether it's feasible to transform (translate) the already-drawn elements, and only redraw when I need to update my data.
I think it would improve my visualization's performance a lot when panning left/right, wouldn't it?
Any insights?
In general, the less you touch the DOM the better your performance will be. The details are browser and platform specific, but in general this is the pecking order of performance at a very high level (ordered from most expensive to least):
Creating and removing DOM elements.
Modifying properties of existing DOM elements.
In memory JavaScript (that is, not involving DOM at all... e.g. Array iteration).
So if you can get the result you want by simply modifying a targeted subset of existing elements with a transform attribute, I would guess you will be much better off.
Of course, it's impossible to say anything with certainty without seeing the actual code and use case.
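For example, a sketch of the transform approach (assuming your marks live under a single <g> group; the selector and function names here are made up):

    // Panning: translate the already-rendered group instead of redrawing it.
    // The browser moves existing elements, which is typically far cheaper
    // than recreating them.
    function onBrushPan(dx) {
      d3.select('g.chart-content')
          .attr('transform', 'translate(' + dx + ',0)');
    }

    // Do a real redraw only when the visible data actually changes:
    function onDataUpdate(newData) {
      redrawChart(newData); // your existing (expensive) render path
    }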
I am using GWT to build an HTML application whose performance is acceptable in general.
Sometimes it can load many objects in the DOM and the application becomes slow. I used the Chrome Developer Tools profiler to see where that time was spent (in Chrome, once the app is compiled, i.e. no GWT overhead), and it is clear that the methods getAbsoluteLeft()/getBoundingClientRect() consume the major part of this time.
Here is the implementation used under Chrome (com.google.gwt.dom.client.DOMImplStandardBase):
private static native ClientRect getBoundingClientRect(Element element) /*-{
    return element.getBoundingClientRect && element.getBoundingClientRect();
}-*/;

@Override
public int getAbsoluteLeft(Element elem) {
    ClientRect rect = getBoundingClientRect(elem);
    return rect != null ? rect.getLeft()
            + elem.getOwnerDocument().getBody().getScrollLeft()
            : getAbsoluteLeftUsingOffsets(elem);
}
This makes sense to me: the more elements in the DOM, the more time it may take to calculate absolute positions. But it is frustrating, because sometimes you know only a subpart of your application has changed, whereas those methods still take time to calculate absolute positioning, probably because they unnecessarily recheck a whole bunch of DOM elements. My question is not necessarily GWT-oriented, as this is a browser/JavaScript problem:
Is there any known solution to improve the GWT getAbsoluteLeft / JavaScript getBoundingClientRect problem for applications with a large DOM?
I did not find any clues on the internet, but I thought about solutions like:
reducing the number of calls to those methods :-) ...
isolating part of the DOM in an iframe, in order to reduce the number of elements the browser has to evaluate to get an absolute position (although it would make it difficult for components to communicate...)
in the same vein, there might be some CSS property (overflow, position?) or some HTML element (like iframe) that tells the browser to skip a whole part of the DOM, or simply helps the browser get absolute positions faster
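(Editorial note on that last idea: modern browsers later added a CSS property that does more or less exactly this, contain, which declares that a subtree's layout cannot affect the rest of the page. A sketch, with a hypothetical element id:

    // Hint that this widget's internals cannot affect outside layout,
    // so the engine can skip it when computing positions elsewhere.
    // Supported in modern browsers; '#heavy-widget' is a made-up id.
    const widget = document.getElementById('heavy-widget');
    widget.style.contain = 'layout';

It did not exist when this question was asked.)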
EDIT:
Using the Chrome Timeline debugger, and performing a specific action while there are a lot of elements in the DOM, I measured on average:
Recalculate Style: nearly zero
Paint: nearly 1 ms
Layout: nearly 900 ms
Layout takes 900 ms, triggered through the getBoundingClientRect method. This page lists all the methods that trigger layout in WebKit, including getBoundingClientRect...
As I have many elements in the DOM that are not affected by my action, I assume layout recalculates the whole DOM, whereas paint is able, through CSS properties and the DOM tree, to narrow its scope (I can see this through the MozAfterPaintEvent in Firebug, for example).
Apart from grouping calls and invoking the layout-triggering methods less often, any clues on how to reduce the time spent in layout?
Some related articles :
Minimizing browser reflow
I finally solved my problem: getBoundingClientRect was triggering a full layout event in the application, which was taking a lot of time due to heavy CSS rules.
In fact, layout time is not directly proportional to the number of elements in the DOM. You could draw hundreds of thousands of them with light styling and layout would take only 2 ms.
In my case, I had two CSS selectors and a background image which matched hundreds of thousands of DOM elements, and that was consuming a huge amount of time during layout. By simply removing those CSS rules, I reduced the layout time from 900 ms to 2 ms.
The most basic answer to your question is to use lazy evaluation, also called delayed evaluation. The principle is that you only evaluate a new position when something it depends on has changed. It generally requires a fair amount of code to set up, but is much cleaner to use once that's done. You'd make one assignment to something (such as a window size) and then all the new values propagate automatically, and only the values that need to propagate do so.
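A minimal sketch of that idea applied to getBoundingClientRect (the names are illustrative): cache the measurement and invalidate it only on the events that can actually change it.

    let cachedRect = null;

    function getRect(el) {
      if (cachedRect === null) {
        cachedRect = el.getBoundingClientRect(); // pay for layout only on a cache miss
      }
      return cachedRect;
    }

    // Invalidate only on events that can actually move the element:
    window.addEventListener('resize', () => { cachedRect = null; });
    window.addEventListener('scroll', () => { cachedRect = null; });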
I'm creating a new game engine for the web by the name of Engine1. I've currently produced a couple of prototypes. So far I've been able to:
Map the transparent pixels of sprites using canvas.
Bind events to the opaque pixels of sprites.
Develop a game runtime with a set fps.
Animate sprites at variable frame timing.
Animate element movement, both frame by frame and with frame-based motion tweening.
I'm happy with my progress but I seem to be uncomfortable with advancing further without consulting an expert in DOM performance.
Currently, when an element is created, it's appended to a DOM fragment I call the "Shadow DOM". Every frame, this "Shadow DOM"'s HTML is copied and inserted into the body of the page (or the current viewport).
I've set it up this way because I can add everything to the page in one re-flow of the browser.
My concern is that the performance gained will be offset by the need to reflow the contents of the browser, even if only parts of the page have changed.
Also, event binding gets much more complicated.
Any thoughts?
Should I use a "Shadow DOM"?
Is there a better way to render a large number of elements?
Is there a way to only copy differences from the "Shadow DOM" to the browser body?
Replacing large chunks of the DOM may be expensive. In general the DOM is where bottlenecks occur. It would be better to keep track of what parts of the DOM you are modifying and updating these. You can either do that in a separate data structure that you transform into DOM when updating, or use a shadow DOM like you said. If the changes are individually large then it may be a good idea to use a shadow DOM. If they are small (such as just updating text values) then it would make more sense to use a separate type of data structure.
In either case you need a third object keeping track of changes.
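A sketch of the "small changes" case (the structure and names are made up): track dirty values in a plain map and write only those to the DOM each frame, instead of copying the whole shadow tree.

    const dirtyText = new Map(); // element id -> new text value

    function setValue(id, value) {
      dirtyText.set(id, value);  // cheap in-memory write; the DOM is not touched yet
    }

    function flush() {
      for (const [id, value] of dirtyText) {
        document.getElementById(id).textContent = value; // touch only the changed nodes
      }
      dirtyText.clear();
    }

    // call flush() once per frame from the game loop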
I wrote Cactus Templates a long time ago. You use it to bind a DOM structure together with a domain object letting updates propagate from either side to the other. It automatically attaches events to locations specified (key value paths in the domain and html class names in the DOM). It may or may not be exactly what you're looking for but perhaps you can get some ideas from it.
What kinds of activities will trigger reflow of a web page with the DOM?
It seems there are different points of view. According to http://www.nczonline.net/blog/2009/02/03/speed-up-your-javascript-part-4/, it happens:
When you add or remove a DOM node.
When you apply a style dynamically (such as element.style.width="10px").
When you retrieve a measurement that must be calculated, such as accessing offsetWidth, clientHeight, or any computed CSS value (via getComputedStyle() in DOM-compliant browsers or currentStyle in IE).
However, according to http://dev.opera.com/articles/view/efficient-javascript/?page=3, taking a measurement triggers reflow only when there is already a reflow action queued.
Does anybody have any more ideas?
Both articles are correct.
One can safely assume that whenever you do something that could reasonably require the dimensions of elements in the DOM to be calculated, you will trigger reflow.
In addition, as far as I can tell, both articles say the same thing.
The first article says reflow happens when:
When you retrieve a measurement that must be calculated, such as accessing offsetWidth, clientHeight, or any computed CSS value (via getComputedStyle() in DOM-compliant browsers or currentStyle in IE), while DOM changes are queued up to be made.
The second article states:
As stated earlier, the browser may cache several changes for you, and reflow only once when those changes have all been made. However, note that taking measurements of the element will force it to reflow, so that the measurements will be correct. The changes may or may not be visibly repainted, but the reflow itself still has to happen behind the scenes.
This effect is created when measurements are taken using properties like offsetWidth, or using methods like getComputedStyle. Even if the numbers are not used, simply using either of these while the browser is still caching changes, will be enough to trigger the hidden reflow. If these measurements are taken repeatedly, you should consider taking them just once, and storing the result, which can then be used later.
I take this to mean the same thing they said earlier. Opera will try its hardest to cache values and avoid reflow for you, but you shouldn't rely on its ability to do so.
For all intents and purposes just believe what they both say when they say that all three types of interactions can cause reflow.
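The practical upshot: don't interleave reads and writes. A classic sketch of the failure mode and its fix (here, elements is an assumed array of DOM nodes):

    // Bad: each offsetWidth read flushes the style write queued just before it,
    // forcing one synchronous layout per iteration ("layout thrashing").
    elements.forEach(el => {
      el.style.width = (el.offsetWidth / 2) + 'px';
    });

    // Better: batch all reads, then all writes; at most one layout in total.
    const widths = elements.map(el => el.offsetWidth);
    elements.forEach((el, i) => {
      el.style.width = (widths[i] / 2) + 'px';
    });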
Look at the "Rendering triggered by Property Read Access" section of Understanding Internet Explorer Rendering Behaviour, where the following code in IE will cause rendering activity.
function askforHeight() {
    $("#lower").height(); // merely reading the height forces IE to run layout
}

document.body.style.display = 'none';  // detach the whole body from rendering
document.body.style.display = 'block'; // reattach, forcing a clean re-render
This often solves those incomprehensible layout bugs.