Reflow/Layout performance for large application - javascript

I am using GWT to build an HTML application where performance is acceptable in general.
Sometimes, it loads many objects into the DOM and the application becomes slow. I used the Chrome Developer Tools profiler to see where that time was spent (under Chrome, once the app is compiled, i.e. with no GWT overhead), and it is clear that the methods getAbsoluteLeft()/getBoundingClientRect() consume the major part of this time.
Here is the implementation used under Chrome (com.google.gwt.dom.client.DOMImplStandardBase):
private static native ClientRect getBoundingClientRect(Element element) /*-{
    return element.getBoundingClientRect && element.getBoundingClientRect();
}-*/;

@Override
public int getAbsoluteLeft(Element elem) {
    ClientRect rect = getBoundingClientRect(elem);
    return rect != null ? rect.getLeft()
            + elem.getOwnerDocument().getBody().getScrollLeft()
            : getAbsoluteLeftUsingOffsets(elem);
}
This makes sense to me: the more elements in the DOM, the more time it may take to calculate absolute positions. But it is frustrating, because sometimes you know that only a subpart of your application has changed, whereas those methods still take time to calculate absolute positioning, probably because the browser unnecessarily rechecks a whole bunch of DOM elements. My question is not necessarily GWT-oriented, as this is a browser/javascript problem:
Is there any known solution to the GWT getAbsoluteLeft/javascript getBoundingClientRect problem for applications with a large DOM?
I did not find any clues on the internet, but I thought about solutions like:
- reducing the number of calls to those methods :-) ...
- isolating parts of the DOM in an iframe, in order to reduce the number of elements the browser has to evaluate to get an absolute position (although it would make it difficult for components to communicate ...)
- in the same vein, there might be some CSS property (overflow, position?) or some HTML element (like iframe) that tells the browser to skip a whole part of the DOM, or simply helps the browser get an absolute position faster
EDIT:
Using the Chrome Timeline debugger, and performing a specific action while there are a lot of elements in the DOM, I get the following average timings:
- Recalculate style: nearly zero
- Paint: nearly 1 ms
- Layout: nearly 900 ms
Layout takes nearly 900 ms, spent in the getBoundingClientRect method. This page lists all the methods triggering layout in WebKit, including getBoundingClientRect ...
As I have many elements in the DOM that are not impacted by my action, I assume layout is recalculating the whole DOM, whereas paint is able (via CSS properties and the DOM tree) to narrow its scope (I can see this through the MozAfterPaintEvent in Firebug, for example).
Apart from grouping and reducing the calls to the methods that trigger layout, any clues on how to reduce the time spent in layout?
Some related articles :
Minimizing browser reflow

I finally solved my problem: getBoundingClientRect was triggering a full layout pass in the application, which was taking a long time because of heavy CSS rules.
In fact, layout time is not directly proportional to the number of elements in the DOM. You could draw hundreds of thousands of them with light styling and layout would take only 2 ms.
In my case, I had two CSS selectors and a background image which matched hundreds of thousands of DOM elements, and that was consuming a huge amount of time during layout. By simply removing those CSS rules, I reduced the layout time from 900 ms to 2 ms.

The most basic answer to your question is to use lazy evaluation, also called delayed evaluation. The principle is that you only evaluate a new position when something it depends upon has changed. It generally requires a fair amount of code to set up but is much cleaner to use once that's done. You'd make one assignment to something (such as a window size) and then all the new values propagate automatically, and only the values that need to propagate.
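As a minimal sketch of the idea (all the names here are illustrative, and 'my_element' is a hypothetical id): a cached value is recomputed only when something it depends on has invalidated it.

// Lazily evaluated layout value: recomputed only on demand, after invalidation.
function lazyValue(compute) {
    var cached = null;
    var dirty = true;
    return {
        invalidate: function () { dirty = true; }, // call when a dependency changes
        get: function () {
            if (dirty) { // recompute only when stale
                cached = compute();
                dirty = false;
            }
            return cached;
        }
    };
}

// Usage: the element's left edge is re-measured only after a resize invalidates it.
var el = document.getElementById('my_element');
var left = lazyValue(function () { return el.getBoundingClientRect().left; });
window.addEventListener('resize', left.invalidate);
console.log(left.get()); // triggers layout at most once until the next invalidation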

Related

Speed difference between inserting html and changing display style property

Assuming you have a relatively small piece of HTML (let's say under 100 tags and <4KB in size) and you want to occasionally display and hide it from your user (think menu, modal, etc.).
Is the fastest approach to hide and show it using CSS, such as:
// Hide:
document.getElementById('my_element').style.display = 'none';
// Show:
document.getElementById('my_element').style.display = 'block';
Or to insert and remove it:
// Hide:
document.getElementById('my_element_container').innerHTML = '';
// Show:
const my_element_html = {contents of the element};
document.getElementById('my_element_container').innerHTML = my_element_html;
// Note: insertAdjacentHTML is obviously faster when the container has other
// elements, but I'm showcasing this using innerHTML to get my point across,
// not necessarily because it's always the most efficient of the two.
Obviously, this can be benchmarked on a case-by-case basis, but with so many browser versions and devices out there, any benchmarks that I'd be able to run in a reasonable amount of time aren't that meaningful.
I've been unable to find any benchmarks related to this subject.
Are there any up-to-date benchmarks comparing the two approaches? Is there any consensus among browser developers as to which should, generally speaking, be preferred when it comes to speed?
In principle, DOM manipulation is slower than toggling the display property of existing nodes. And I could stop my answer here, as this is technically correct.
However, repaint and reflow of the page are typically way slower, and both of your methods trigger them, so you could be looking at:
- display toggle: 1 unit
- DOM nodes toggle: 2 units
- repaint + reflow of the page: 100 units
Which leaves you comparing 101 units with 102 units, instead of comparing 3 with 4 (or 6 with 7). I'm not saying that's the order of magnitude, it really depends on the actual DOM tree of your real page, but chances are it's close to the figures above.
If you use methods like visibility:hidden or opacity:0 instead, it will be way faster; not to mention that opacity is animatable, which, in modern UIs, is often preferred.
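For illustration, a rough sketch of the options, from cheapest to most expensive (the element id is hypothetical):

// Element id is hypothetical.
var el = document.getElementById('my_element');

// Cheap: the element keeps its box in the layout, so no reflow is needed.
el.style.visibility = 'hidden'; // show again with 'visible'
el.style.opacity = '0';         // animatable; show again with '1'

// More expensive: removes the box from the flow, so layout must run again.
el.style.display = 'none';      // show again with 'block'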
A few resources:
Taming huge collections of DOMs
Render-tree Construction, Layout, and Paint
How Browsers Work: Behind the scenes of modern web browsers
An introduction to Web Performance and the Critical Rendering Path
Understanding the critical rendering path, rendering pages in 1 second
Web performance, much like web development, is not a "press this button" process. You need to try, fail, learn, try again, fail again...
If your elements are always the same, you might find out (upon testing) that caching them inside a variable is much faster than recreating them when your show method is called.
Testing is quite simple:
- place each of the methods inside a separate function;
- log the starting time (using performance.now());
- use each method n times, where n is: 100, 1e3, 1e4, ... 1e7;
- log the finishing time for each test (or the difference from its starting time).
Compare. You will notice the conclusions drawn from the 100-run test are quite different from the ones drawn from the 1e7 test.
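A minimal harness along those lines, reusing the hypothetical element id from the question:

// Runs fn() n times and logs the elapsed time.
function benchmark(label, fn, n) {
    var start = performance.now(); // starting time
    for (var i = 0; i < n; i++) {
        fn();
    }
    var elapsed = performance.now() - start; // difference from the starting time
    console.log(label + ' x' + n + ': ' + elapsed.toFixed(2) + 'ms');
}

var el = document.getElementById('my_element');
[100, 1e3, 1e4, 1e5, 1e6, 1e7].forEach(function (n) {
    benchmark('display toggle', function () {
        el.style.display = (el.style.display === 'none') ? 'block' : 'none';
    }, n);
});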
To go even deeper, you can test different methods for showing and different methods for hiding. You could test rendering elements hidden and toggling their display afterwards. Get creative. Try anything you can think of, even if it seems silly or doesn't make much sense.
That's how you learn.

How React.js speeds up rendering with a virtual DOM

Quoting this article (https://news.ycombinator.com/item?id=9155564):
The short answer is that the DOM is not slow. Adding & removing a DOM node is a few pointer swaps, not much more than setting a property on the JS object.
Are the DOM bottlenecks only those things that cause a redraw? If so, shouldn't one render from React's virtual DOM amortize to the same performance as redrawing an entire component (in one browser API call, of course)? I would think that the algorithms executed by the browser only try to redraw the diff from one state to another (like git, maybe?). That would imply that the browser maintains a virtual DOM by itself. So then, what is the point of having a virtual DOM?
Also, shouldn't adding an element that has its display style property set to none avoid affecting performance badly? I would profile this myself, but I do not know where exactly to turn, as I only started javascript programming recently.
This question may be somewhat broad for SO, but as a general answer, some other quotes from the same article are also very relevant:
However, layout is slow...
[...]
Worse, layout is triggered synchronously by accessing certain properties...
[...]
Because of this, a lot of Angular and JQuery code is stupidly slow
[...]
React doesn't help speed up layout...
What React's virtual DOM does is calculate the differences between one state of the DOM and the next state, and minimize DOM updates in a very smart way.
So:
- the DOM itself is not slow
- but layout is slow
- and almost all DOM updates require layout updates
- so fewer DOM updates means faster
And the React engine does just that (the same as several other tools/libraries with a virtual DOM).
More info on what virtual DOM is and its advantages e.g. here.
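The same principle (fewer DOM updates) can also be applied by hand, without a virtual DOM. A minimal sketch, assuming a hypothetical list container:

// Build off-document, attach once: a single DOM update, one layout pass.
var container = document.getElementById('list'); // hypothetical container
var fragment = document.createDocumentFragment();
for (var i = 0; i < 1000; i++) {
    var item = document.createElement('li');
    item.textContent = 'Item ' + i;
    fragment.appendChild(item); // cheap: the fragment is not in the document
}
container.appendChild(fragment); // the only update the live DOM sees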
Q: "Are the DOM bottlenecks only those things that cause a redraw?"
A:
Redraw is GPU-dependent and has nothing to do with the speed of DOM updates. DOM updates are almost instant.
Everything depends on whether a change affects the document flow. The closer the affected element is to the root of the document, the greater the impact on document reflow.
You don't need to change the DOM content in order to cause a document reflow. A simple style property change may push elements of the flow to change position and therefore force a document reflow.
Therefore no, DOM changes to fixed-size elements will not cause a document reflow, and the display update is practically instant. It will be applied only to the locally affected area, most of the time a frame of less than 300 x 200 pixels; an area of that size can be redrawn at over 120fps even on a really slow GPU. But that's 5 times smoother than watching Avengers in the cinema.
(Any spatially nonequivalent change in flow-aligned content will cause a reflow. So we have to watch for changes that affect the size and position of our floating elements, changes to inline elements inside a long stream of other inline elements, etc., etc.)
Q: "should adding an element that has the display style property set to none not be affecting performance badly?"
A:
That's correct. Adding an element with style.display: "none" to the DOM will cause no change to the existing rendering of the document, and will therefore not trigger a document reflow; it will, naturally, have no impact at all, i.e. it will be about as fast as adding a new property to a JavaScript object.
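A quick sketch of how that can be exploited (the content here is hypothetical): build and insert a subtree while it is display:none, then reveal it with a single style change.

var panel = document.createElement('div'); // hypothetical widget
panel.style.display = 'none';
panel.innerHTML = '<p>Lots of content ...</p>';
document.body.appendChild(panel); // cheap: nothing is laid out yet
// Later: a single reflow, when the panel actually becomes visible.
panel.style.display = 'block';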
Regards.

Understanding the report of Google Speed Tracer

I am trying to profile my YUI3 application using the Google Speed Tracer.
Here is the first snapshot:
So far so good: ST indicates a place taking 195ms. So, I zoom in on it:
Even better, right? Here ST takes me to the offending line:
But what's next? I mean, here is the line:
return ('scrollTop' in node) ? node.scrollTop : Y.DOM.docScrollY(node);
And since the stack trace ends here, I assume that node.scrollTop is returned, which is just a JS property access.
So what is the logic behind the claim that a style recalculation took place at this point, accounting for 36ms of execution time?
Can anyone explain it to me?
What is most likely occurring here is that you have accumulated changes to the DOM and/or stylesheets that require a style recalculation. But most rendering engines (definitely WebKit) defer style recalculation (as well as layout and rendering) as long as possible. In the best possible case, style recalculation, layout, and rendering all run in sequence once the current event handler returns control to native code.
But there are a number of things that can force an early recalculation or layout. The most common is that you access a property (e.g., scrollTop) on a DOM element that has to be computed. Other properties such as offset[Left Top Width Height] also commonly force a style recalculation and layout. The browser can't do this "in the background" because the rendering engine is (for the most part) single-threaded with the Javascript VM, so it typically has to wait for you to call into native code.
It's a bit hard to tell from your screenshots, but from what I can see it looks like you have a pretty big chunk of HTML being parsed right before this event (18ms worth), which could imply a significant amount of style recalculation and layout (the latter of which takes 26ms just afterwards). I also see TableView._defRenderBodyFr() in your stack trace, which leads me to suspect that just before this getter was called, you added/mutated a fair number of table rows. The TableView code most likely built up a big HTML string, and you paid for the HTML parsing (and DOM construction) when it was set via innerHTML; but as soon as the code tried to access a property (in this case scrollTop), you paid for the style recalculation and layout.
You should be able to break these costs into smaller chunks (thus giving the UI thread a chance to breathe and generally feel more responsive) by reducing the number of rows affected by each mutation; a sketch of the idea follows. I'm not a YUI expert, so I can't tell you how you would do it in their TableView, though.
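For illustration, one generic way to split row mutations into batches (buildRowHtml() and all the names here are hypothetical, not YUI API):

// Append table rows in small batches, yielding to the UI thread between
// batches, so style recalculation/layout happens in smaller chunks.
function appendRowsInChunks(tbody, totalRows, chunkSize) {
    var next = 0;
    function step() {
        var html = '';
        for (var end = Math.min(next + chunkSize, totalRows); next < end; next++) {
            html += buildRowHtml(next); // hypothetical row-building function
        }
        tbody.insertAdjacentHTML('beforeend', html); // one parse/insert per batch
        if (next < totalRows) {
            setTimeout(step, 0); // let the UI thread breathe before the next batch
        }
    }
    step();
}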

Should I use multiple canvases (HTML 5) or use divs to display HUD data?

I am in the process of making a game where the health bar (animated) and some other info is represented visually, like icons showing the number of bombs the player has, etc. Now, this can be done in canvas (by making another canvas for the info that sits over the main canvas), or it can be done using many divs and spans with absolute positioning. This is my first time making a browser-based game, so if any experienced people see this, tell me what you recommend. I would like to know which method would be faster.
The game will also be running on mobile devices. Thanks!
There is no straightforward answer, and I suggest you do FPS testing with different browsers to see how it plays out for your use case. If you do not wish to go that in-depth, I suggest you simply draw the elements inside the canvas, and if you need to hide them, leave the drawHUD() call out of your rendering loop (see the sketch after the list below).
For an HTML HUD overlaid on <canvas>, the following factors should be considered:
- Can the web browser compositor still hardware-accelerate the <canvas> properly if there are DOM elements on top of it?
- HTML/DOM manipulation will always be slower than <canvas> operations, due to the inherent complexity of dealing with DOM elements.
- <canvas> pixel space stays inside the <canvas>, and it might be difficult to get pixel-perfect alignment if you try to position elements outside the canvas relative to what is drawn inside it.
- HTML offers many more text-formatting options than canvas fillText(); is HTML formatting necessary?
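A minimal sketch of the "leave out drawHUD()" suggestion above (drawGame() and drawHUD() are hypothetical):

var hudVisible = true;

function renderLoop() {
    drawGame(); // hypothetical: draws the main game canvas
    if (hudVisible) {
        drawHUD(); // skipped entirely while the HUD is hidden
    }
    requestAnimationFrame(renderLoop);
}
requestAnimationFrame(renderLoop);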
Use the canvas. Use two canvases if you want, one overlaid over the other, but use the canvas.
Touching the DOM at all is slow. Making the document redo its layout because the sizes of DOM elements changed is very slow. Dealing with the canceling (or not) of even more events because there are DOM items physically on top of the canvas can be a pain, so why bother dealing with that?
If your HUD does not update very often, then the fastest thing to do is draw it to an in-memory canvas whenever it changes, and then always draw that canvas to the main canvas when you update the frame. That way, your drawHUD method will look exactly like this:
function drawHUD() {
    // This is what gets called every frame.
    // One call to drawImage = simple and fast.
    ctx.drawImage(inMemoryCanvas, 0, 0);
}
and of course, updating the HUD information would look like this:
function updateHUD() {
    // This is only called if information in the HUD changes.
    inMemCtx.clearRect(0, 0, width, height);
    inMemCtx.fillRect(blah);
    inMemCtx.drawImage(SomeHudImage, x, y);
    var textToDraw = "Actually text is really slow and if there's " +
        "often repeated lines of text in your game you should be " +
        "caching them to images instead";
    inMemCtx.fillText(textToDraw, x, y);
}
Since HUDs often contain text, I really do urge caching it if you're using any. More on text performance here.
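A rough sketch of caching text to an image, in the spirit of the comment in the code above (the line height and the score text are assumptions; ctx, x and y come from the surrounding code):

// Render a line of text once into an offscreen canvas, then blit it each frame.
function makeTextSprite(text, font) {
    var c = document.createElement('canvas');
    var cctx = c.getContext('2d');
    cctx.font = font;
    c.width = Math.ceil(cctx.measureText(text).width);
    c.height = 24;          // assumed line height for this sketch
    cctx.font = font;       // resizing the canvas resets the context state
    cctx.textBaseline = 'top';
    cctx.fillText(text, 0, 0);
    return c;
}

var scoreSprite = makeTextSprite('Score: 9000', '16px sans-serif');
// In the render loop: one cheap drawImage instead of a slow fillText.
ctx.drawImage(scoreSprite, x, y);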
As others have said, there is no universally best approach, as it depends on the specifics of what you need to render, how often, and possibly what messaging needs to happen between graphical components.
While it is true that DOM reflows are expensive, this blanket warning is not always applicable. For instance, using position:fixed; elements avoids triggering reflows for the page (though not necessarily within the element, if it has non-fixed children). Repaint is expensive because it is pixel pushing (correct me if this is wrong), and so it is not intrinsically slower than pushing the same number of pixels to a canvas. It can be faster for some things. What's more, each approach has certain operations that hold performance advantages over the other.
Here are some points to consider:
It's increasingly possible to use WebGL-accelerated canvas elements on many A-grade browsers. This works fine for 2D, with the advantage that drawing operations are sent to the GPU, which is MUCH faster than the 2D context. However this may not be available on some target platforms (e.g., at the time of this writing, it is available in iOS Safari but not in the iOS UIWebView used if you target hybrid mobile applications.) Using a library to wrap canvas can abstract this and use WebGL if its available. Take a look at pixi.js.
Conversely, the DOM has CSS3 animations/transitions which are typically hardware-accelerated by the GPU automatically (with no reliance on WebGL). Depending on the type of animation, you can often get much faster results this way than with canvas, and often with simpler code.
Ultimately, as a rule in software performance, understanding the algorithms used is critical. That is, regardless of which approach used, how are you scheduling animation frames? Have you looked in a profiler to see what things take the most time? This practice is excellent for understanding what is impacting performance.
I've been working on an app with multiple animations, and have implemented each component both as DOM and as canvas. I was initially surprised that the DOM version was more performant than the canvas (wrapped with KineticJS) version, though I now see that this was because all the animated elements were position:fixed and animated using CSS (under the hood via jQuery UI), thereby getting GPU performance. However, the code to manage these elements felt clunky (in my case, ymmv). Using a canvas approach allows more pixel-perfect rendering, but then it loses the ability to style with CSS (which technically allows pixel-perfect rendering as well, but may be more or less complex to achieve).
I achieved a big speed-up by throttling the most complex animation to a lower framerate, which in my case is indistinguishable from the 60fps version but runs smooth as butter on an older iPad 2. Throttling required using requestAnimationFrame and clamping calls to be no more frequent than the desired framerate (see the sketch below). This would be hard to do with CSS animations on the DOM (though again, those are intrinsically faster for many things). The next thing I'm looking at is syncing multiple canvas-based components to the same requestAnimationFrame loop (possibly independently throttled, or a round-robin approach where each component gets a set fraction of the framerate, which may work okay for 2-3 elements). (Incidentally, I have some GUI controls, like sliders, that are not locked to any framerate, as they should be as close to 60fps as possible and are small/simple enough that I haven't seen performance issues with them.)
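A minimal sketch of that kind of throttling (the 30fps target and drawComplexAnimation() are assumptions for illustration):

var targetInterval = 1000 / 30; // clamp to ~30fps
var lastFrame = 0;

function loop(timestamp) {
    requestAnimationFrame(loop);
    if (timestamp - lastFrame < targetInterval) {
        return; // too early: skip this frame
    }
    lastFrame = timestamp;
    drawComplexAnimation(); // hypothetical: the throttled component's draw call
}
requestAnimationFrame(loop);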
I also achieved a huge speed boost by profiling and seeing that one class in my code that had nothing to do with the GUI was having a specific method called very often to calculate a property. The class in question was immutable, so I changed the method to memoize the value and saw the CPU usage drop in half. Thanks Chrome DevTools and the flame chart! Always profile.
Most of the time, the number of pixels being updated will tend to be the biggest bottleneck, though if you can do the work on the GPU, you have effectively regained all of the CPU for your code. DOM reflows should be avoided, but this does not mean avoid the DOM. Some elements are far simpler to render using the DOM (e.g. text!) and may be optimized by the browser's (or OS's) native code better than canvas. Finally, if you can get acceptable performance for a given component using either approach (DOM or canvas), use the one that makes the code simplest for managing that type of component.
The best advice is to try isolated portions with the different approaches, run with a profiler, use techniques to over-draw or otherwise push the limits to see which approach can run fastest, and do NOT optimize before you have to. The caveat to this rule is the question you are asking: how do I know in advance which technical approach will allow the best performance? If you pick one by assuming the answer, you are basically prematurely optimizing and will live with the arbitrary pain this causes. If instead you pick by rapid prototyping or (even better) by controlled experiments that focus on the needs of your application, you are doing R&D :)
Browserquest displays its HUD using HTML elements, which has the benefit that you don't have to worry about redrawing, etc. (and the performance will be pretty good, given that the entire browser engine is optimized to render the DOM pretty fast).
They (Browserquest) also use several layered canvas elements for different game elements. I don't know the exact structure, but I guess that which canvas an element is displayed on depends on how often it needs to be redrawn.

Optimize JS/jQuery performance (getBoundingClientRect) and eliminating layout redraw

So I have a project where I'm trying to optimize a fairly complex JavaScript function to the max, partly because it's supposed to run on smartphones (WebKit) and every little bit counts.
I've been using various debugging and timing techniques to go through my code and rewrite everything that might be slow, like parts of the jQuery-based code where native might do better, and so on. What the function does is basically take a string of HTML text and cut it up to fit exactly into 3 DIVs that do not have a fixed position or size (a client templating mechanism).
At the moment the entire function takes around 100ms to execute in the iPad's browser (and in the production environment I ideally need to execute it 200 times), and the problem is that out of those 100ms, at least 20ms are due to this single line of code (in 3 loops):
var maxTop = $(cur).offset().top + $(cur).outerHeight();
"cur" is just a reference to a container DIV element and the line above is calculating its bottom position (so where my text should break). From looking at the offset jQuery code I understand it uses getBoundingClientRect and even eliminating jQuery offset/sizing and calling it directly does nothing to speed it up - so its getBoundingClientRect fault (at least in Webkit). I did a bit of research on it and I understand it causes layout redraw.
But still - can't believe that I do multiple DOM clears/clones/appends and all of those are much faster than a simple element position lookup? Any ideas out there? Maybe something webkit specific? Or something that doesn't cause redraw?
Would much appreciate it!
Did you try:
var maxTop = cur.offsetTop + cur.offsetHeight;
?
The point is, offsetTop and offsetHeight are native DOM properties, so access should be faster than going through a function.
I also ran into a similar problem: I had a loop in which I was fixing a series (sometimes 1000+) of DOM elements (from float to absolute). I immediately applied the fixed styling to the elements, which was a big mistake: every time something is written to the DOM, the style has to be recalculated when your script asks for the position of an element. Hence, do all your reading first, and then all your writing, even if that means two separate loops (you can safely stash intermediate values on the dataset property of your DOM elements); see the sketch below.
See also: http://gent.ilcore.com/2011/03/how-not-to-trigger-layout-in-webkit.html
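A minimal sketch of that read-then-write pattern (the '.float-item' selector is hypothetical):

var items = document.querySelectorAll('.float-item');

// Pass 1: all the reads. Layout is computed once, then served from cache.
for (var i = 0; i < items.length; i++) {
    var rect = items[i].getBoundingClientRect();
    items[i].dataset.left = rect.left;
    items[i].dataset.top = rect.top;
}

// Pass 2: all the writes. No read in between, so no forced synchronous layout.
for (var j = 0; j < items.length; j++) {
    items[j].style.position = 'absolute';
    items[j].style.left = items[j].dataset.left + 'px';
    items[j].style.top = items[j].dataset.top + 'px';
}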
