Optimize JS/jQuery performance (getBoundingClientRect) and eliminate layout redraw - javascript

So I have a project where I'm trying to optimize a fairly complex JavaScript function to the max, partly because it's supposed to run on smartphones (WebKit) and every little bit counts.
I've been using various debugging and timing techniques to go through my code and rewrite everything that might be slow, like jQuery-based parts where native code might do better, and so on. What the function does is basically take a string of HTML text and cut it up so it fits exactly into 3 DIVs that do not have a fixed position or size (a client templating mechanism).
At the moment the entire function takes around 100ms to execute in the iPad's browser (but in the production environment I ideally need to execute it 200 times), and the problem is that out of those 100ms at least 20ms are due to this single line of code (in 3 loops):
var maxTop = $(cur).offset().top + $(cur).outerHeight();
"cur" is just a reference to a container DIV element and the line above is calculating its bottom position (so where my text should break). From looking at the offset jQuery code I understand it uses getBoundingClientRect and even eliminating jQuery offset/sizing and calling it directly does nothing to speed it up - so its getBoundingClientRect fault (at least in Webkit). I did a bit of research on it and I understand it causes layout redraw.
But still, I can't believe that I do multiple DOM clears/clones/appends and all of those are much faster than a simple element position lookup. Any ideas out there? Maybe something WebKit-specific? Or something that doesn't cause a reflow?
Would much appreciate it!

Did you try:
var maxTop = cur.offsetTop + cur.offsetHeight;
The point is that offsetTop and offsetHeight are native DOM properties, so accessing them should be faster than going through a function call.

I ran into a similar problem: I had a loop in which I was switching a series (sometimes 1000+) of DOM elements from float to absolute positioning. I immediately applied the new styling to each element, which was a big mistake: every time something is written to the DOM, the style has to be recalculated the next time your script asks for the position of an element. Hence, do all your reading, and then all your writing, even if that means two separate loops (you can safely write to the dataset property of your DOM elements during the read phase).
See also: http://gent.ilcore.com/2011/03/how-not-to-trigger-layout-in-webkit.html
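For illustration, here is a minimal sketch of that read-then-write split; the selector, class name and the idea of copying the measured coordinates are made up, and it assumes the elements' offset parent is the page itself:
var items = document.querySelectorAll('.float-item');
var rects = [];

// Pass 1: read only. The first read may trigger one layout,
// but nothing is written yet, so no further layouts are forced.
for (var i = 0; i < items.length; i++) {
    rects.push(items[i].getBoundingClientRect());
}

// Pass 2: write only. Styles are invalidated here, but the browser
// can recalculate layout once, after the script yields.
for (var j = 0; j < items.length; j++) {
    items[j].style.position = 'absolute';
    items[j].style.left = (rects[j].left + window.pageXOffset) + 'px';
    items[j].style.top = (rects[j].top + window.pageYOffset) + 'px';
}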

Related

Speed difference between inserting html and changing display style property

Assuming you have a relatively small piece of HTML (let's say under 100 tags and <4KB in size) and you want to occasionally display and hide it from your user (think menu, modal, etc.).
Is the fastest approach to hide and show it using css, such as:
//Hide:
document.getElementById('my_element').style.display = 'none';
//Show:
document.getElementById('my_element').style.display = 'block';
Or to insert and remove it:
//Hide
document.getElementById('my_element_container').innerHTML = '';
//Show:
const my_element_html = '...'; // contents of the element
document.getElementById('my_element_container').innerHTML = my_element_html;
// Note: insertAdjacentHTML is obviously faster when the container has other elements, but I'm showcasing this using innerHTML to get my point across, not necessarily because it's always the most efficient of the two.
Obviously, this can be benchmarked on a case-by-case basis, but, with so many browser versions and devices out there, any benchmarks that I'd be able to run in a reasonable amount of time aren't that meaningful.
I've been unable to find any benchmarks related to this subject.
Are there any up-to-date benchmarks comparing the two approaches? Is there any consensus from browser developers as to which should, generally speaking, be preferred when it comes to speed?
In principle, DOM manipulation is slower than toggling the display property of existing nodes. And I could stop my answer here, as this is technically correct.
However, repaint and reflow of the page are typically way slower, and both of your methods trigger them, so you could be looking at:
display toggle: 1 unit
DOM nodes toggle: 2 units
repaint + reflow of page: 100 units
Which leaves you comparing 101 units with 102 units, instead of comparing 3 with 4 (or 6 with 7). I'm not saying that's the order of magnitude, it really depends on the actual DOM tree of your real page, but chances are it's close to the figures above.
If you use methods like visibility:hidden or opacity:0 instead, toggling will be way faster, since the element keeps its box and no reflow is needed; not to mention opacity is animatable, which, in modern UIs, is often preferred.
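As a rough sketch of the difference (the element id is hypothetical, and the exact cost depends on the browser and the page):
var el = document.getElementById('my_element');

// display toggle: the box is removed from / added to the layout,
// so it triggers reflow and repaint.
el.style.display = 'none';
el.style.display = 'block';

// visibility toggle: the box keeps its space, so no reflow, only repaint.
el.style.visibility = 'hidden';
el.style.visibility = 'visible';

// opacity toggle: on a composited layer this can skip layout and paint
// entirely, and it can be animated with a CSS transition.
el.style.opacity = '0';
el.style.opacity = '1';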
A few resources:
Taming huge collections of DOMs
Render-tree Construction, Layout, and Paint
How Browsers Work: Behind the scenes of modern web browsers
An introduction to Web Performance and the Critical Rendering Path
Understanding the critical rendering path, rendering pages in 1 second
Web performance, much like web development, is not a "press this button" process. You need to try, fail, learn, try again, fail again...
If your elements are always the same, you might find out (upon testing) that caching them inside a variable is much faster than recreating them each time your show method is called.
Testing is quite simple:
place each of the methods inside a separate function;
log the starting time (using performance.now());
use each method n times, where n is: 100, 1e3, 1e4,... 1e7
log finishing time for each test (or difference from its starting time)
Compare. You will notice that the conclusions drawn from the 100-iteration test are quite different from those drawn from the 1e7 test.
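A rough sketch of such a harness, assuming showWithDisplay() and showWithInnerHTML() are the two (hypothetical) methods under test:
function benchmark(label, fn, n) {
    var start = performance.now();
    for (var i = 0; i < n; i++) {
        fn();
    }
    console.log(label + ' x' + n + ': ' + (performance.now() - start).toFixed(1) + ' ms');
}

[100, 1e3, 1e4, 1e5, 1e6, 1e7].forEach(function (n) {
    benchmark('display toggle', showWithDisplay, n);
    benchmark('innerHTML swap', showWithInnerHTML, n);
});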
To go even deeper, you can test differences for different methods when showing and for different methods when hiding. You could test rendering elements hidden and toggle their display afterwards. Get creative. Try anything you can think of, even if it seems silly or doesn't make much sense.
That's how you learn.

React-like programming without React

I grew up using jQuery and have been following a programming pattern which one could say is "React-like", but without using React. I would like to understand why my graphics performance is nonetheless so good.
As an example, in my front-end, I have a table that displays some "state" (in React terms). However, this "state" for me is just kept in global variables. I have an update_table() function which is the central place where updates to the table happen. It takes the "state" and renders the table with it. The first thing it does is call $("#table").empty() to get a clean start and then fills in the rows with the "state" information.
I have some data (the "state") that changes every 2-3 seconds on the server side; I poll it using Ajax and, once I get the new data/"state", I just call update_table().
This is the perfect problem for solving with React, I know. However, after implementing this simple solution with JQuery, I see that it works just fine (I'm not populating a huge table here; I have a max of 20 rows and 5 columns).
I expected to see flickering because of the $("#table").empty() call followed by adding rows one by one inside the update_table() function. However, the browser (Chrome/Safari) somehow seems to do a very good job of updating only the elements that have actually changed (almost as if the browser had an implementation of virtual DOM diffing, like React!).
I guess your question is why you can have such good graphics performance without React.
What you see as "good graphics performance" really is a matter of definition or, worse, opinion.
The classic Netscape processing cycle (which all modern browsers inherit) has basically four main stages. Here is the full-blown Gecko engine description.
As long as you manipulate the DOM, you're in the "DOM update" stage and no rendering is performed AT ALL. Only when your code yields does the next stage start. Because of the DOM changes, the sizes or positions of some elements may have changed too, so this stage recomputes the layout. After this stage the next one is rendering, where the pixels are redrawn.
This means that if your code changes a very large number of elements in the DOM, they are all still rendered together, and not in an incremental fashion. So the empty() call does not render anything if you repopulate the table immediately after.
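As an illustration of that batching, here is one way an update_table() like the one described could rebuild its rows off-DOM and attach them in a single operation (the markup, the shape of "state" and the names are assumptions, not the asker's actual code):
function update_table(state) {
    var tbody = document.querySelector('#table tbody'); // assumed markup
    var fragment = document.createDocumentFragment();

    state.forEach(function (row) {            // state assumed to be an array of rows
        var tr = document.createElement('tr');
        row.cells.forEach(function (value) {
            var td = document.createElement('td');
            td.textContent = value;
            tr.appendChild(td);
        });
        fragment.appendChild(tr);
    });

    tbody.innerHTML = '';        // clear, like $("#table").empty()
    tbody.appendChild(fragment); // one append; layout and paint happen once afterwards
}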
Now, when you see the pixels of an element (a cell value like "13872", say), the rendering stage may render them at the exact same position with the exact same colors. There is no change in pixel color, and thus no flickering you could see.
That said, your graphics performance is excellent -- yes. But how did you measure it? You just looked at it and decided that it's perfect. Now, visually it really may be very, very good, because all you need is to keep the layout stage from sizing/positioning something differently.
But actual performance is not measured with the lazy eyes of us humans (there are many usability studies in that field, let's say that one frame at 60 Hz takes 16.6 ms, so it is enough to render in less than that). It is measured with an actual metric (updates per second or whatever). Consider that on older machines with older browsers and slower graphics cards your "excellent" performance may look shameful. How do you know it is still good on an old Toshiba tablet with 64 MB graphics memory?
And what about scaling? If you have 100x the elements you have now, are you sure it will scale well? What if some data takes more (or less) space and changes the whole layout? All of these edge conditions may not be covered by your simple approach.
A library like React takes into account those cases you may not have encountered yet, and offers a uniform pattern to approach them.
So if you are happy with your solution you don't need React. I often avoid jQuery because ES5/ES6 is already pretty good these days and I can just jot down 3-4 lines of code using document.getElementById() and such. But I realize that on larger projects or complex cases jQuery is the perfect tool.
Look at React like that: a tool that is useful when you realize you need it, and cumbersome when you think you can do without. It's all up to you :)
When you have something like this:
$("#table").empty()
.html("... new content of the table ... ");
then the following happens:
.empty() removes content and marks rendering tree / layout as invalid.
.html() adds new content and marks rendering tree / layout as invalid.
"Mark as invalid", among other things, calls InvalidateRect() (on Windows), which causes the window to receive a WM_PAINT event at some point in the future.
While handling WM_PAINT, the browser will calculate the layout and render the result.
Therefore multiple change requests are collapsed into a single window painting operation.
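Note that this collapsing only holds as long as nothing reads a layout-dependent value between the writes; here is a sketch of the difference (newContent is a placeholder):
// Batched: two writes, one layout/paint after the script yields.
$("#table").empty();
$("#table").html(newContent);

// Broken batching: reading a layout property between the writes forces
// the browser to recalculate layout synchronously, right here.
$("#table").empty();
var h = $("#table").height(); // forced synchronous layout
$("#table").html(newContent);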

Reflow/Layout performance for large application

I am using GWT to build an HTML application where performance is acceptable in general.
Sometimes it can load many objects into the DOM and the application becomes slow. I used the Chrome Developer Tools profiler to see where that time was spent (under Chrome, once the app is compiled, i.e. no GWT overhead), and it is clear that the methods getAbsoluteLeft()/getBoundingClientRect() consume the major part of that time.
Here is the implementation used under Chrome (com.google.gwt.dom.client.DOMImplStandardBase):
private static native ClientRect getBoundingClientRect(Element element) /*-{
    return element.getBoundingClientRect && element.getBoundingClientRect();
}-*/;

@Override
public int getAbsoluteLeft(Element elem) {
    ClientRect rect = getBoundingClientRect(elem);
    return rect != null ? rect.getLeft()
            + elem.getOwnerDocument().getBody().getScrollLeft()
            : getAbsoluteLeftUsingOffsets(elem);
}
This makes sense to me, as the more elements there are in the DOM, the more time it may take to calculate absolute positions. But it is frustrating, because sometimes you know that just a subpart of your application has changed, whereas those methods still take time to calculate absolute positioning, probably because they unnecessarily recheck a whole bunch of DOM elements. My question is not so much GWT-oriented as it is a browser/JavaScript problem:
Is there any known solution to improve the GWT getAbsoluteLeft / JavaScript getBoundingClientRect problem for applications with a large DOM?
I did not find any clues on the internet, but I thought about solutions like:
reducing the number of calls to those methods :-) ...
isolating parts of the DOM in an iframe, in order to reduce the number of elements the browser has to evaluate to get an absolute position (although it would make it difficult for components to communicate ...)
along the same lines, there might be some CSS property (overflow, position?) or some HTML element (like iframe) that tells the browser to skip a whole part of the DOM, or simply helps the browser compute absolute positions faster
EDIT:
Using the Chrome Timeline debugger, and performing a specific action while there are a lot of elements in the DOM, I get the following average timings:
Recalculate Style: nearly zero
Paint: nearly 1 ms
Layout: nearly 900 ms
Layout takes 900 ms, triggered by the getBoundingClientRect method. This page lists all the methods that trigger layout in WebKit, including getBoundingClientRect ...
As I have many elements in the DOM that are not impacted by my action, I assume layout recalculates the whole DOM, whereas paint is able to narrow its scope via CSS properties and the DOM tree (I can see that through MozAfterPaintEvent in Firebug, for example).
Apart from grouping calls and calling the layout-triggering methods less often, any clues on how to reduce layout time?
Some related articles :
Minimizing browser reflow
I finally solved my problem: getBoundingClientRect was triggering a full layout pass in the application, which was taking a long time because of heavy CSS rules.
In fact, layout time is not directly proportional to the number of elements in the DOM. You could draw hundreds of thousands of them with light styling and layout would take only 2 ms.
In my case, I had two CSS selectors and a background image which were matching hundreds of thousands of DOM elements, and that was consuming a huge amount of time during layout. By simply removing those CSS rules, I reduced the layout time from 900 ms to 2 ms.
The most basic answer to your question is to use lazy evaluation, also called delayed evaluation. The principle is that you only evaluate a new position when something it depends upon has changed. It generally requires a fair amount of code to set up but is much cleaner to use once that's done. You'd make one assignment to something (such as a window size) and then all the new values propagate automatically, and only the values that need to propagate.
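A minimal sketch of that lazy-evaluation idea, caching a measured rectangle and invalidating it only when something it depends on changes (the chosen invalidation events are just an example):
var cachedRect = null;

function getRect(el) {
    if (cachedRect === null) {
        cachedRect = el.getBoundingClientRect(); // measured only when stale
    }
    return cachedRect;
}

function invalidate() {
    cachedRect = null; // the next getRect() call re-measures
}

window.addEventListener('resize', invalidate);
window.addEventListener('scroll', invalidate, true);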

Executing JavaScript "in the background"

Do you have any experience with the following problem: JavaScript has to run hundreds of performance-intensive function calls which cannot be skipped, causing the browser to feel frozen for a few seconds (e.g. no scrolling and clicking)? Example: imagine 500 calls for getting an element's height and then doing hundreds of DOM modifications, e.g. setting classes etc.
Unfortunately there is no way to avoid the performance-intensive tasks. Web workers might be an approach, but they are not very well supported (IE...). I'm thinking of a timeout- or callback-based step-by-step rendering that gives the browser time to do something in between. Do you have any experience you can share on this?
Best regards
Take a look at this topic; it is related to your question:
How to improve the performance of your JavaScript in your page?
If you're doing that much DOM manipulation, you should probably clone the elements in question or the DOM subtree itself, do the changes on that cached version, and then replace the whole thing in one go or in larger sections, not one element at a time.
What takes time isn't so much the calculations and function calls etc. but the DOM manipulation itself, and doing that only once, or a couple of times in sections, will greatly improve the speed of what you're doing.
As far as I know web workers can't touch the DOM at all, and I don't think there would be much of an advantage in using them, as the problem probably is that you are changing a huge number of elements one by one instead of replacing them all in the DOM in one batch.
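A small sketch of that clone-modify-replace idea (the container id and the modification are made up):
var container = document.getElementById('container');
var clone = container.cloneNode(true); // detached copy, not in the document

// All changes happen on the detached clone: no reflow or repaint yet.
var cells = clone.querySelectorAll('.cell');
for (var i = 0; i < cells.length; i++) {
    cells[i].className = 'cell updated';
}

// One swap in the live DOM, so the browser reflows/paints once.
container.parentNode.replaceChild(clone, container);
Keep in mind that cloneNode does not copy event listeners added with addEventListener, so this fits content that is purely presentational or gets re-wired after the swap.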
Here is what I can recommend in this case:
Check the code again. Try to apply some standard optimisations as suggested, e.g. reducing lookups and making DOM modifications offline (e.g. with document.createDocumentFragment()...). Working with DOM fragments only helps in a limited way, though: retrieving element heights and doing complex formatting won't work well on detached content.
If 1. does not solve the problem, create a rendering solution that runs on demand, e.g. triggered by a scroll event. Or: render step by step with timeouts to give the browser time to do something in between, e.g. handle a click on a button or scrolling.
Short example of the step-by-step rendering from 2.:
var elt = $(...); // first element to render (placeholder selector)
function timeConsumingRendering() {
    // some rendering here related to the element "elt"
    elt = elt.next();
    if (elt.length) {
        // yield to the browser before rendering the next element,
        // so clicks and scrolling stay responsive in between
        window.setTimeout(timeConsumingRendering, 0);
    }
}
// start
timeConsumingRendering();

Performance of setting img src to unchanged value?

If I have an img tag like
<img src="example.png" />
and I set it via
myImg.src = "example.png";
to the same value again, will this be a no-op, or will browsers unnecessarily redraw the image? (I'm mainly interested in the behaviour of IE6-8, FF3.x, Safari 4-5 and Chrome.)
I need to change many (hundreds of) images at once, and manually comparing the src attribute might be a little bit superfluous - as I assume that the browser already does this for me?
Don't assume the browser will do it for you. I am working on a project of similar scale which requires hundreds of (dynamic-loading) images, with speed as the top priority.
Caching the 'src' property of every element is highly recommended. It is expensive to read and set hundreds of DOM element properties, and even setting src to the same value may cause reflow or paint events.
[Edit] The majority of sluggishness in the interface was due to all my loops and processes. Once those were optimized, the UI was very snappy, even when continuously loading hundreds of images.
[Edit 2] In light of your additional information (the images are all small status icons), perhaps you should consider simply declaring a class for each status in your CSS. Also, you might want to look into using cloneNode and replaceNode for a very quick and efficient swap.
[Edit 3] Try absolutely-positioning your image elements. It will limit the amount of reflow that needs to happen, since absolutely-positioned elements are outside of the flow.
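A minimal sketch of that caching idea; the comparison is done against a value stored by the script itself, since reading back img.src returns the absolute URL:
function setSrcIfChanged(img, newSrc) {
    if (img._lastSrc !== newSrc) { // expando property used as the cache
        img.src = newSrc;
        img._lastSrc = newSrc;
    }
}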
When you change a bunch of elements at once, you're usually blocking the UI thread anyway, so only one redraw happens after the JavaScript completes, meaning the per-image redraw really isn't a factor.
I wouldn't double-check anything here; let the browser take care of it. The newer ones are smart enough to do this efficiently (and it has never really been that much of a problem anyway).
The expensive case you'll see here is new images loading and reflowing the page as they load; re-setting existing images is very minor compared to that cost.
I recommend using the CSS sprite technique. More info at: http://www.alistapart.com/articles/
You can use a single image that contains all the icons. Then, instead of changing the src attribute, you update the background position.
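A small sketch of that sprite approach, assuming a vertical sprite of 16px-tall icons already set as the element's CSS background-image (the sizes and status names are made up):
var ICON_HEIGHT = 16;
var STATUS_ROW = { ok: 0, warning: 1, error: 2 };

function setStatusIcon(el, status) {
    // move the background instead of swapping an img src
    el.style.backgroundPosition = '0 -' + (STATUS_ROW[status] * ICON_HEIGHT) + 'px';
}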
