I was trying to understand better what happens when DOM manipulations occur in a browser. Something I could not find many resources about was when exactly repaints and reflows are executed by the browser (there is a lot of literature on what a repaint or reflow is, but almost nothing about when it happens).
Very specifically, I was wondering about code like this:
function clickHandler() {
    domElement1.textContent = 'Hello';
    domElement2.textContent = 'World';
}
Assume that there are two DOM elements stored in the variables domElement1 and domElement2. The clickHandler function is attached to a button, and should update the content of both DOM elements when it is clicked.
Now the question: Is it possible that this causes two repaints? I.e. that the browser updates only domElement1, shows that change on the screen causing repaints/reflows/etc., and afterwards updates domElement2 causing another repaint/reflow/etc.? Or is there a guarantee that the event handler of the button will be completely executed before anything is changed in the actual DOM?
My guess would have been that the browser tries to keep 60 frames per second, and that theoretically these two changes might end up in two different frames. Is that correct?
Browser reflow and browser repaint are two different, but related, ideas.
Reflow is essentially the computation of the offsets and dimensions of the given elements, to work out what goes where, how much space it takes up, and so on.
Repaint is the next stage after reflow. It is when the computation made in the reflow stage is actually used to paint pixels on the screen. This process is sometimes also known as rasterization.
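To make this concrete, here is a rough, hedged illustration (the box element is just a hypothetical node that is already in the page): a purely cosmetic change normally only causes a repaint, while a geometric change causes a reflow followed by a repaint.

const box = document.getElementById('box'); // hypothetical element

box.style.color = 'red';    // appearance only: typically repaint, no reflow
box.style.width = '300px';  // geometry changes: reflow, then repaint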
First things first: repaint only happens when the main thread is free, i.e. when no event listener, no callback, simply nothing at all is currently in line for execution. And, as far as I can tell, repaint only happens when a reflow was triggered previously.
In contrast to this, a reflow might be triggered with every single DOM mutation, depending on the underlying code.
As you can reason, the most naive approach for any browser would be to trigger a reflow after every single DOM mutation (e.g. the domElement1.textContent = 'Hello'; statement you shared above). But, as you might agree, this would be extremely inefficient.
What if there is a whole bunch of such statements in a row? Clearly, in that case, it would be much better for the browser to reflow just once at the very end of the whole block of statements and thus keep itself from wasting computing resources.
Now here's one thing to keep in mind. As you may know, browser engines, and especially their JavaScript engines, have become extremely sophisticated over the past few years. These days, they can make very good guesses about what a piece of code is actually asking for, and then apply numerous kinds of optimizations to run it at top speed.
These optimizations include holding off on executing the reflow algorithm when there is no need for it. Exactly how each browser does this is out of the scope of this answer and is obviously implementation-dependent: Chrome might rely on one method, Firefox on another, Safari on yet another, and so on.
There isn't really any point in digging into each one's code base to find the exact way it works.
However, one thing all browsers have in common these days is that they try to do the least amount of unnecessary work. That's one of the reasons for the success of modern-day JavaScript: the robust engines right at its back.
So, to end this answer, I would say that no, there is almost no possibility that the following code would trigger reflow twice:
function clickHandler() {
    domElement1.textContent = 'Hello';
    domElement2.textContent = 'World';
    // Reflow should happen at the end of this function.
}
However, the following might trigger it twice because the second statement wants to know about something (i.e. the distance of domElement1 from the top edge of the viewport) that can only be obtained once the reflow algorithm has been run by the browser:
function clickHandler() {
    domElement1.textContent = 'Hello';
    console.log(domElement1.getBoundingClientRect().top); // Should trigger a reflow right here.
    domElement2.textContent = 'World';
    // Reflow should happen again at the end of this function.
}
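If you do need layout information in the middle of such a handler, a common way to avoid the extra forced reflow (a sketch, not part of the original question) is to do all layout reads before any writes, so the browser can keep batching the mutations:

function clickHandler() {
    // Read first, while the current layout is still valid...
    const top = domElement1.getBoundingClientRect().top;

    // ...then do all the writes together; a single reflow should happen at the end.
    domElement1.textContent = 'Hello';
    domElement2.textContent = 'World';

    console.log(top); // note: this is the position measured before the mutations
}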
Note the use of the words 'almost' and 'might' in the sentences above. I've used them to emphasize the fact that I am not 100% sure. It is quite likely that what I describe is what actually happens in every browser, but the probability is still not 100%. As I said before, engines work differently, and I can't tell you something blindly which I've never inspected myself.
Related
Assuming you have a relatively small piece of HTML (let's say under 100 tags and <4KB in size) and you want to occasionally display and hide it from your user (think menu, modal... etc).
Is the fastest approach to hide and show it using css, such as:
//Hide:
document.getElementById('my_element').style.display = 'none';
//Show:
document.getElementById('my_element').style.display = 'block';
Or to insert and remove it:
//Hide
document.getElementById('my_element_container').innerHTML = '';
//Show:
const my_element_html = {contents of the element};
document.getElementById('my_element_container').innerHTML = my_element_html;
// Note: insertAdjacentHTML is obviously faster when the container has other elements, but I'm showcasing this using innerHTML to get my point across, not necessarily because it's always the most efficient of the two.
Obviously, this can be benchmarked on a case-by-case basis, but, with so many browser versions and devices out there, any benchmarks that I'd be able to run in a reasonable amount of time aren't that meaningful.
I've been unable to find any benchmarks related to this subject.
Are there any up-to-date benchmarks comparing the two approaches? Is there any consensus from browser developers as to which should, generally speaking, be preferred when it comes to speed?
In principle, DOM manipulation is slower than toggling the display property of existing nodes. And I could stop my answer here, as this is technically correct.
However, repaint and reflow of the page is typically way slower and both your methods trigger it so you could be looking at:
display toggle: 1 unit
DOM nodes toggle: 2 units
repaint + reflow of page: 100 units
Which leaves you comparing 101 units with 102 units, instead of comparing 3 with 4 (or 6 with 7). I'm not saying that's the order of magnitude, it really depends on the actual DOM tree of your real page, but chances are it's close to the figures above.
If you use alternatives like visibility: hidden or opacity: 0 instead, it will be way faster; not to mention that opacity is animatable, which is preferred in modern UIs.
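A minimal sketch of that alternative, reusing the my_element id from the question (which properties you toggle is up to you):

const el = document.getElementById('my_element');

// Hide: the element keeps its box in the layout, so surrounding content is not reflowed
el.style.visibility = 'hidden';
// or, if you want to be able to animate it:
el.style.opacity = '0';

// Show:
el.style.visibility = 'visible';
el.style.opacity = '1';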
A few resources:
Taming huge collections of DOMs
Render-tree Construction, Layout, and Paint
How Browsers Work: Behind the scenes of modern web browsers
An introduction to Web Performance and the Critical Rendering Path
Understanding the critical rendering path, rendering pages in 1 second
Web performance, much like web development, is not a "press this button" process. You need to try, fail, learn, try again, fail again...
If your elements are always the same, you might find out (upon testing) that caching them inside a variable is much faster than recreating them when your show method is called.
Testing is quite simple:
place each of the methods inside a separate function;
log the starting time (using performance.now());
use each method n times, where n is: 100, 1e3, 1e4,... 1e7
log finishing time for each test (or difference from its starting time)
Compare. You will notice that conclusions drawn from the 100-iteration test are quite different from the ones drawn from the 1e7 test.
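A minimal sketch of such a harness, assuming showWithDisplay and showWithInnerHTML are the two functions you want to compare (both names are placeholders):

function benchmark(label, fn, iterations) {
    const start = performance.now();
    for (let i = 0; i < iterations; i++) {
        fn();
    }
    console.log(label + ' x ' + iterations + ': ' + (performance.now() - start).toFixed(1) + ' ms');
}

// Run each candidate with increasing iteration counts and compare.
[100, 1e3, 1e4, 1e5, 1e6, 1e7].forEach(function (n) {
    benchmark('display toggle', showWithDisplay, n);
    benchmark('innerHTML toggle', showWithInnerHTML, n);
});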
To go even deeper, you can test differences for different methods when showing and for different methods when hiding. You could test rendering elements hidden and toggle their display afterwards. Get creative. Try anything you can think of, even if it seems silly or doesn't make much sense.
That's how you learn.
Update: Similar question with a very good answer that shows how to use requestAnimationFrame with scroll in a useful way:
scroll events: requestAnimationFrame VS requestIdleCallback VS passive event listeners
So let's say I want to add some expensive action on my site triggered by scrolling. For example, I'm using parallax effects in my jsfiddle.
Now I keep reading it must not be bound to the event directly, sometimes followed by snippets that are meant to be better. Just some examples:
Attaching JavaScript Handlers to Scroll Events = BAD!
How to develop high performance onScroll event?
How to make faster scroll effects?
60FPS onscroll event listener
What they say is basically don't do this:
// Bad guy 1
$(window).scroll( function() {
    animate(ex1);
});
or this
// Bad guy 2
window.addEventListener('scroll', onScroll, false);
function onScroll() {
    animate(ex2);
}
But use timeouts, intervals, requestAnimationFrame and whatnot, for example:
// Good guy
$(window).scroll( function() {
    scrolling1 = true;
});

setInterval( function() {
    if (scrolling1) {
        scrolling1 = false;
        animate(ex3);
    }
}, 50 );
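And a hedged sketch of the requestAnimationFrame flavour those articles suggest, built on the same idea of doing the work at most once per frame (ex4 is just another hypothetical element):

// Good guy 2: requestAnimationFrame-based throttling
var ticking = false;

$(window).scroll( function() {
    if (!ticking) {
        ticking = true;
        window.requestAnimationFrame( function() {
            animate(ex4);
            ticking = false;
        });
    }
});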
So, I went and added the options I found in the links above to a jsfiddle that tries to compare them by adding a counter to every approach, like so:
// Test
$(window).scroll( function() {
    counter = counter + 1;
    // output result of counter
    animate(ex1);
});
Best to check the complete jsfiddle
Outcome: Everything that works smoothly needs about the same number of calculations. If I can live with choppy effects, maybe I can save some resources. And, against everything I read, this seems logical to me!
First question:
Am I missing something or is this a valid test? If it's invalid, how could I test correctly?
Edit: To clarify, I want to test whether any of the above methods save performance at all.
Second question:
If it is valid, why is everyone nervous about onscroll? If fluid animations require 5000 calculations over the complete site, there's no way to change it anyway?
(Well, sometimes I use checks to determine whether an object is in the viewport or not, but honestly I don't even know if those checks aren't as expensive as the code they prevent, especially if they involve five different values such as offset, windowHeight, scrollTop, getBoundingClientRect and outerHeight...)
So, #SirPeople already answered your first question correctly: it is indeed a good test to see how often the animate function gets called, but it's a bad test for comparing the performance of the different snippets.
This is a performance recording of the execution:
The function animate isn't expensive at all. I took a performance recording (next picture), which shows that it takes between 0.64ms and 1.29ms in the one iteration I looked at (points 1-5). And once the function is done, the repaint takes no time at all (point 6), which might be because the page has almost no content. When we look at the timing, we can see that all five animation functions and the repaint happen in less than 10ms, which, under normal circumstances, means that we can get a fluid 60fps animation (point 7).
Also, if we want to compare onscroll event listeners, we need to test each on its own and compare the results. If one of the listeners were really blocking, it would have an influence on the whole page, and without performance debugging you wouldn't know which one it was.
I made two jsfiddles window.scroll and RAF. And, to my surprise, there does not seem to be any difference.
Why are people concerned about this?
As you can see in the jsfiddles linked above, if the event handlers get too large, the entire page is going to lag.
Now what?
I'm no performance guru myself, but:
Perhaps one of the other solutions is correct
We can mark your event listeners as passive, although in my test that didn't really improve anything
https://developers.google.com/web/updates/2016/06/passive-event-listeners
We can optimize the event listener by removing parallax effects
There's also this new thing called Intersection Observer, which is supposed to be much faster; I didn't test it
https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API
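A hedged sketch of how that could look for the parallax case (the .parallax selector and the animate class are made up for the example):

// Only animate elements while they are actually on screen.
var observer = new IntersectionObserver(function (entries) {
    entries.forEach(function (entry) {
        // isIntersecting is maintained by the browser, so we never have to
        // read layout properties inside a scroll handler ourselves.
        entry.target.classList.toggle('animate', entry.isIntersecting);
    });
});

document.querySelectorAll('.parallax').forEach(function (el) {
    observer.observe(el);
});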
I am not totally sure if I understood your questions and all your statements correctly, but I will try to give you an answer:
Am I missing something or is this a valid test? If it's invalid, how could I test correctly?
It is a valid test if you are measuring the number of times the function has been called. This will of course depend on the browser, the OS, whether rendering is GPU-accelerated, and the other benchmark parameters that have already been discussed on your question.
If we consider that measurement correct, then it can be said that using timeouts or requestAnimationFrame can save time because we are basically following the principles of debouncing or throttling: we do not want to request or call a function more times than is needed. In the case of the timer, we queue fewer function calls; in the case of requestAnimationFrame, the calls are enqueued before repainting and executed sequentially. With plain timeouts it could happen that calculations overlap if they are very heavy.
I found a better answer on why to use requestAnimationFrame, explaining the main problems with animations in the browser, like shearing, flickering or frame skipping. It also includes a good demo.
I think your testing method is correct, but you also need to interpret it correctly: the call counts may be close to the same because of your hardware and your engine, but, as said, debouncing and throttling are a performance relief.
Here is also one more article from Twitter supporting not attaching handlers directly to window scroll. (Disclaimer: this article is from 2011, and browsers have since dealt with scroll optimizations in different ways.)
why is everyone nervous about onscroll? If fluid animations require 5000 calculations over the complete site, there's no way to change it anyway?
I do not think the nervousness is about the performance hit as such; rather, the user experience will be worse because of the animation problems mentioned above that overcalling scroll can cause, and even if your timer gets out of sync you could still see the same 'performance' problems. People simply recommend saving calls to scroll because:
Persistence of vision doesn't require a super high frame rate, so it is useless to try to show images more often than that.
For more complex calculations or heavy animations, browsers are already working on optimizations. As you have checked yourself, some browsers have optimized these things compared to when the articles you mention were written, 2, 3 or 6 years ago.
I want to know whether using if-else statements inside jQuery's .scroll() to compare positions will cause performance issues. The comparisons use functions like:
offset = $(window).scrollTop()
and:
nameVar = $("#divID").offset()
These comparisons also add CSS styles. Inside the .scroll() handler I have about four if-else statements, and I am trying to find out whether I will get performance issues from comparing the DIVs' positions.
In my first tests the page loads okay, but if I leave the page open I notice that it gets pretty laggy.
So:
Is there a better way to use if-else inside scroll()?
Am I using if-else wrong?
Is there another way to do it right?
The issue is that .scroll() events can be called very rapidly during a scroll operation. You either need to respond to a given event very quickly OR you need to defer handling of the scroll events (usually with a timer) until the user pauses with the scroll bar and stops moving it. Either one of those behaviors will prevent the laggy behavior.
A couple of if statements take almost no time. But doing heavy-duty selector queries, or particularly making page changes that cause relayout and repainting, can take significant CPU.
There is no right or wrong description for what exactly you can and can't do in a scroll handler and not see laggy behavior because it depends upon exactly what you're doing, what computer you're running it on, how much repaint and relayout operations you might cause by your actions, etc... The best you can do is either just decide to take the deferred route so you don't make your changes live until the user pauses the scroll or you have to test your code on all relevant platforms (particularly lower horsepower platforms) to see if your scroll handler is responsive enough.
You can see this post: More efficient way to handle $(window).scroll functions in jquery? for a couple methods of deferring the scroll processing until the user pauses including a jQuery plugin that makes this automatic.
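A minimal sketch of that deferred approach (the 150ms pause threshold and handleScrollSettled are placeholders):

var scrollTimer = null;

$(window).scroll(function () {
    // Restart the timer on every scroll event; the expensive work only runs
    // once the user has stopped scrolling for ~150ms.
    if (scrollTimer) {
        clearTimeout(scrollTimer);
    }
    scrollTimer = setTimeout(function () {
        scrollTimer = null;
        handleScrollSettled(); // your position checks and CSS changes go here
    }, 150);
});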
Generally speaking, finding the divs on the page will be much more computationally expensive than a couple of if statements. So long as you cache lookups like so (outside of the scroll handler):
var $myDivOfAwesome = $('#awesomeDiv');
$(document).scroll(function() {
    if ($myDivOfAwesome.position().top > 1337) {
        // do stuff...
    }
});
You should be fine.
If you still want to compare different snippets of code to determine what's faster, check out JSperf
Do you have any experience with the following problem: JavaScript has to run hundreds of performance-intensive function calls which cannot be skipped, causing the browser to feel frozen for a few seconds (e.g. no scrolling and clicking)? Example: imagine 500 calls for getting an element's height and then hundreds of DOM modifications, e.g. setting classes etc.
Unfortunately there is no way to avoid the performance-intensive tasks. Web workers might be an approach, but they are not very well supported (IE...). I'm thinking of a timeout- or callback-based step-by-step rendering that gives the browser time to do something in between. Do you have any experience you can share on this?
Best regards
Take a look at this topic; it is related to your question:
How to improve the performance of your java script in your page?
If you're doing that much DOM manipulation, you should probably clone the elements in question (or the relevant part of the DOM), do the changes on the cached version, and then replace the whole thing in one go or in larger sections, rather than one element at a time.
What takes time isn't so much the calculations and function calls, but the DOM manipulation itself; doing that only once, or a couple of times in sections, will greatly improve the speed of what you're doing.
As far as I know, web workers aren't really meant for DOM manipulation, and I don't think there will be much of an advantage in using them, as the problem is probably that you are changing a huge number of elements one by one instead of replacing them all in the DOM in one batch.
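A hedged sketch of that batching idea (the container id, the .item selector and the class being added are all hypothetical):

var container = document.getElementById('list');
var clone = container.cloneNode(true); // detached copy, not rendered yet

// Do all the heavy modifications on the clone; none of this touches the live DOM.
clone.querySelectorAll('.item').forEach(function (item) {
    item.classList.add('updated');
});

// One single swap means one reflow/repaint instead of hundreds.
container.parentNode.replaceChild(clone, container);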
Here is what I can recommend in this case:
1. Check the code again. Try to apply some standard optimisations as suggested, e.g. reducing lookups and making DOM modifications offline (e.g. with document.createDocumentFragment()...). Working with DOM fragments only works to a limited extent, though: retrieving element heights and doing complex formatting won't work well that way. (A minimal fragment sketch follows after the example below.)
2. If 1. does not solve the problem, create a rendering solution that runs on demand, e.g. triggered by a scroll event. Or: render step by step with timeouts to give the browser time to do something in between, e.g. handle a button click or a scroll.
Short example for step by step rendering in 2.:
var elt = $(...);

function timeConsumingRendering() {
    // some rendering here related to the element "elt"

    elt = elt.next();
    if (elt.length) {
        // Yield back to the browser before rendering the next element,
        // so clicks and scrolling can be handled in between.
        window.setTimeout(timeConsumingRendering, 0);
    }
}

// start
timeConsumingRendering();
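For point 1, a minimal sketch of building DOM offline with document.createDocumentFragment (the list id and the number of items are made up):

var fragment = document.createDocumentFragment();

for (var i = 0; i < 500; i++) {
    var item = document.createElement('li');
    item.textContent = 'Item ' + i;
    fragment.appendChild(item); // no reflow: the fragment is not part of the document
}

// One single insertion, so the browser only has to reflow/repaint once.
document.getElementById('results').appendChild(fragment);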
I'm starting on a JavaScript MMORPG that will actually work smoothly. Currently, I've created a demo to prove that I can move characters around and have them chat with each other, as well as see each other move around live.
http://set.rentfox.net/
Now, JavaScript timers are something I have not used extensively, but from what I know (correct me if I'm wrong), having multiple setIntervals running at the same time doesn't really work well because it's all on a single thread.
Let's say I wanted to have 10 different people nuking fireballs at a monster by using sprite background positioning with setInterval: that animation would require 10 setIntervals doing repaints of the DOM for sprite background-position shifts. Wouldn't that be really buggy?
I was wondering if there was a way around all this, perhaps using Canvas, so that animations can all happen concurrently without creating an event queue and I don't have to worry about timers.
Hope that makes sense, and please let me know if I need to clarify further.
The issue with multiple setIntervals is twofold. The first is, as you indicate, that since all JavaScript in browsers is (currently) single-threaded, one timer's execution may hold up the next timer's execution. (Worker threads are coming, though; Firefox already has them, as does Safari 4 [and maybe others].) The second is that the timer fires at a set interval, but if your handler is still running when that interval expires, the second interval is completely skipped. I.e., the timer can interfere with itself.
That last part needs more explanation: say you have a setInterval at 10ms (which is about the fastest you can reasonably expect any implementation to do it; many are clamped so that they don't go faster than that). If your handler takes 13ms, the interval tick that should have happened 10ms after it began will be completely skipped.
I usually use setTimeout for this kind of thing. When my handler is triggered, I do my work and then schedule the next event at the end of the handler. Then (within the bounds of what you can be certain of), I know the next event will happen at that interval.
For what you're doing, it seems like a single "pulse" timer would be best, working through whatever it needs to do on the pulse. Whether that pulse timer uses setInterval or setTimeout is a judgment call based on what you're seeing with your actual code.
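A rough sketch of that self-scheduling pulse (the 30ms interval and updateGame are placeholders; tune them to your game):

var PULSE_MS = 30;

function pulse() {
    updateGame(); // move sprites, shift background positions, etc.

    // Schedule the next tick only after this one has finished, so a slow
    // frame delays the next pulse instead of being skipped entirely.
    setTimeout(pulse, PULSE_MS);
}

pulse();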
+1 to T. J. Crowder, the answer was perfect. I strongly recommend learning to use Canvas over DOM nodes for game animation; the latter is slow and buggy, and will hang the browser in any non-trivial situation. OTOH, Canvas is much faster and can be hardware accelerated, and even has a 3D context if you need it.