I am throttling a scroll event like so:
window.addEventListener('scroll', throttle(() => {
    console.log('scroll event triggered with throttle');
}, 150));
Because I do not want the scroll handler to fire hundreds of times per second while somebody scrolls, I have throttled it with lodash's throttle function.
I have read numerous articles that recommend doing this for obvious performance reasons, but none of them discuss what value to use for the throttle's milliseconds.
Of course this might depend on the use case and what code actually gets executed within the throttle. In my case I am doing a viewport check to see if something is still within the viewport.
How would you go about finding a suitable number of milliseconds? Of course I would love to go as low as possible, because it will fire AJAX requests sooner in an infinite scroll, but I also do not want to cause performance issues.
It is hard to test this on a high-end desktop because I will probably never run into performance issues there.
This is a pretty broad question, but I would love to know whether this can be profiled for worst-case scenarios (preferably in Chrome).
In your case, I suggest using IntersectionObserver.
There still will be performance considerations, but if you only want to check that content is visible, it is a relatively fast operation - the observer will give you this information on IntersectionObserverEntry.
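A minimal sketch of that approach, assuming a sentinel element at the bottom of the scrolled content and a hypothetical loadNextPage() that performs the AJAX request:

// Fires whenever the sentinel enters or leaves the viewport.
var observer = new IntersectionObserver(function (entries) {
    entries.forEach(function (entry) {
        if (entry.isIntersecting) {
            loadNextPage(); // hypothetical: fetch the next batch of content
        }
    });
});

observer.observe(document.querySelector('#sentinel'));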
When doing animations, try to target a frame rate of 60fps (one frame per ~16ms). If you want to handle orientation change or resize, I'd suggest adding a delay of 450ms (the empirically measured duration of an orientation change on an iPad, which in our experiments was the largest value of all devices tested).
As for testing, Chromium-based browsers allow you to throttle the CPU in the DevTools Performance panel.
I'm not sure if it is available in other browsers.
In any case it might be a good idea to postpone code execution with requestAnimationFrame (for animations) or requestIdleCallback (for computations). This helps avoid bottlenecks to some degree.
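A sketch of the requestAnimationFrame pattern for scroll (checkViewport is a stand-in for your own visibility logic):

var ticking = false;

window.addEventListener('scroll', function () {
    if (!ticking) {
        ticking = true;
        // Defer the work to the next frame: at most one check per repaint.
        requestAnimationFrame(function () {
            checkViewport(); // hypothetical: your in-viewport test
            ticking = false;
        });
    }
}, { passive: true });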
Related
Update: A similar question with a very good answer that shows how to use requestAnimationFrame with scroll in a useful way:
scroll events: requestAnimationFrame VS requestIdleCallback VS passive event listeners
So let's say I want to add some expensive action on my site triggered by scrolling. For example, I'm using parallax effects in my jsfiddle.
Now I keep reading that the work must not be bound to the event directly, sometimes followed by snippets that are supposedly better. Just some examples:
Attaching JavaScript Handlers to Scroll Events = BAD!
How to develop high performance onScroll event?
How to make faster scroll effects?
60FPS onscroll event listener
What they say is basically don't do this:
// Bad guy 1
$(window).scroll( function() {
    animate(ex1);
});
or this
// Bad guy 2
window.addEventListener('scroll', onScroll, false);
function onScroll() {
    animate(ex2);
}
But use timeouts, intervals, requestAnimationFrame and whatnot, for example:
// Good guy
$(window).scroll( function() {
    scrolling1 = true;
});
setInterval( function() {
    if (scrolling1) {
        scrolling1 = false;
        animate(ex3);
    }
}, 50 );
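A requestAnimationFrame variant of the same flag idea might look like this (a sketch, mirroring the snippet above; the work runs at most once per frame instead of every 50ms):

// Good guy, rAF flavour
var scrolling2 = false;

$(window).scroll(function () {
    scrolling2 = true;
});

function frame() {
    if (scrolling2) {
        scrolling2 = false;
        animate(ex3);
    }
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);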
So, I went and added the options I found in the links above to a jsfiddle that tries to compare them by adding a counter to every approach, like so:
// Test
$(window).scroll( function() {
    counter = counter + 1;
    // output result of counter
    animate(ex1);
});
It's best to check the complete jsfiddle.
Outcome: everything that runs smoothly ends up doing about the same number of calculations. If I can live with choppy effects, maybe I can save some resources. And contrary to everything I have read, this seems logical to me!
First question:
Am I missing something or is this a valid test? If it's invalid, how could I test correctly?
Edit: To clarify, I want to test whether any of the above methods save performance at all.
Second question:
If it is valid, why is everyone nervous about onscroll? If fluid animations require 5,000 calculations over the complete site, there's no way around that anyway, is there?
(Well, sometimes I use checks to determine whether an object is in the viewport or not, but honestly I don't even know whether those checks are any cheaper than the code they prevent, especially when they involve five different values such as offset, windowHeight, scrollTop, getBoundingClientRect and outerHeight...)
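For reference, the kind of viewport check mentioned above can be written compactly with getBoundingClientRect. A sketch (note that getBoundingClientRect can force a synchronous layout, so it is not free either):

// Returns true if any part of el is vertically inside the viewport.
function inViewport(el) {
    var rect = el.getBoundingClientRect();
    var windowHeight = window.innerHeight || document.documentElement.clientHeight;
    return rect.bottom > 0 && rect.top < windowHeight;
}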
So, @SirPeople already answered your first question correctly: it is indeed a good test to see how often the animate function gets called, but it's a bad test for comparing the performance of the different snippets.
This is a performance recording of the execution:
The function animate isn't expensive at all. I took a performance recording (next picture), which shows that it takes between 0.64ms and 1.29ms in the one iteration I looked at (points 1-5). And once the function is done, the repaint takes no time at all (point 6), which might be because the page has almost no content. When we look at the timing, we can see that all five animation functions and the repaint happen in less than 10ms, which, under normal circumstances, means we can get a fluid 60fps animation (point 7).
Also, if we want to compare onscroll event listeners, we need to test each one on its own and compare the results. If one of the listeners were really blocking, it would affect the whole page, and without performance debugging you wouldn't know which one it was.
I made two jsfiddles window.scroll and RAF. And, to my surprise, there does not seem to be any difference.
Why are people concerned about this?
As you can see in the jsfiddles linked above, if the event handlers get too large, the entire page is going to lag.
Now what?
I'm no performance guru myself, but:
Perhaps one of the other solutions is correct
We can mark the event listeners as passive, although in my test it didn't really improve anything (see the sketch after this list)
https://developers.google.com/web/updates/2016/06/passive-event-listeners
We can optimize the event listener by removing parallax effects
There's also this new thing called Intersection Observer, which is supposed to be much faster; I didn't test it
https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API
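For the passive-listener option above, a minimal sketch (reusing the onScroll handler from the snippets earlier; the flag promises the browser the handler will never call preventDefault, so scrolling does not have to wait for it):

window.addEventListener('scroll', onScroll, { passive: true });

function onScroll() {
    // Must not call event.preventDefault() in here;
    // the passive flag promises the browser we won't.
    animate(ex2);
}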
I am not totally sure I understood your questions and all your statements correctly, but I will try to give you an answer:
Am I missing something or is this a valid test? If it's invalid, how could I test correctly?
It is a valid test if you are measuring the number of times a function has been called; the result will of course depend on the browser, the OS, whether rendering is GPU-accelerated, and the other benchmark parameters already discussed in the comments on your question.
If we consider that measurement correct, then it can be said that using timeouts or requestAnimationFrame could save time, because we are basically following the principles of debouncing and throttling: we do not want to call a function more often than is needed. With a timer we queue fewer function calls, and requestAnimationFrame enqueues calls before the repaint and executes them sequentially. With plain timeouts it can happen that calculations overlap if they are very heavy.
I found a better explanation of why to use requestAnimationFrame; it describes the main problems with animations in the browser, such as shearing, flickering and frame skipping, and includes a good demo.
I think your testing method is correct, but you should also interpret it correctly: the call counts may be close to the same because of your hardware and engine, but, as said, debouncing and throttling are a performance relief.
Here is also one more article from Twitter supporting not attaching handlers directly to window scroll. (Disclaimer: the article is from 2011, and browsers have since optimized scroll handling in various ways.)
why is everyone nervous about onscroll? If fluid animations require 5000 calculations over the complete site, there's no way to change it anyway?
I do not think the nervousness is about the performance hit as such; rather, the user experience gets worse because of the animation problems mentioned above, which over-calling scroll can cause, and even if your timer merely desynchronizes you can still get the same 'performance' problems. People recommend saving scroll calls because:
Human persistence of vision doesn't require a super-high frame rate, so it is useless to try to show images more often.
For more complex calculations or heavy animations, browsers keep working on optimizations; as you have seen, some browsers have optimized these things compared with 2, 3 or 6 years ago, when the articles you cite were written.
There's an oft-cited blog post written by John Resig in January 2011 that advises against attaching handlers to the window scroll event.
Instead, the common wisdom says to throttle your handler, for example:
$(window).scroll(_.throttle(myScrollHandler, 250));
In my recent testing, the UI response is much smoother when the handler is attached directly to the scroll event. Throttling the handler causes a visible lag.
Have modern browsers solved this problem? Is there any testing or browser compatibility data available?
The throttle reduces the number of events fired to at most four per second. Without throttling, a significantly larger number of events can fire per second. Four updates per second is easily detectable by the human eye (depending upon what you are doing).
As for whether you still need to throttle, it really depends on your clients. If you are dealing with a lot of older computers with bad graphics cards using IE6, high speed event firing will probably cause a lot of noticeable problems. It also depends on what your scroll event is actually doing (how long it takes to respond, whether it consumes memory and how fast it releases it, etc.)
When adapting a web page for mobile devices I always rely on css media queries.
Recently I no longer worry only about the screen size, but also about the JavaScript engine of many mobile devices. Some common JavaScript effects that rely on window scrolls or a quick sequence of DOM transformations perform really badly on slow devices.
Is there any way to guess the device performance so I can enable/disable elements that look bad on slow devices?
So far I can only think of bad solutions:
Screen size: a narrow screen "might" mean a slow device.
User agent information: I could look at the device, browser or CPU, but that does not seem like a stable long-term solution given the number of devices to consider.
UPDATE:
Fixed my question to focus on one problem. In the comments there is a good solution for the touch interface problem.
It certainly seems as though there is no particularly good solution for this issue (which makes sense, since this is exactly the kind of detail that's normally supposed to be hidden away). I think either way you're best off starting with UA detection to take care of the platforms that are known to fall into one category or the other. Then you'd have two options to adapt flexibly to unknown or uncertain platforms:
Progressive enhancement: start with a stripped-down site, load a small performance test or tests to gauge the device's performance, and then load the files for the appropriate enhancements. Use a test such as the one already provided, or the one at: Skip some code if the computer is slow
Graceful degradation: wrap the features that are candidates for causing an unfavorable UX on slower devices in a higher-order function that replaces them if they take too long on first execution. In this case I'd probably add it to Function.prototype and allow an acceptable-delay argument to be chained onto the function definition. After the first invocation, store the elapsed time; on the second invocation, if the elapsed time exceeds the delay, swap the function out for a fallback. If the elapsed time is acceptable, remove the profiling code by swapping the standard function back in. I'd need to sit down and work out sample code (maybe this weekend; a rough sketch follows the next paragraph). This could also be adjusted with extra arguments, such as profiling several times before swapping.
The first option would likely be the friendlier one, but the second may be less intrusive to existing code. Cookies, or collecting further UA data, would also help avoid re-profiling once the information has been retrieved.
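A rough sketch of that graceful-degradation wrapper (as a plain function rather than on Function.prototype; the names and the 16ms threshold in the usage line are assumptions):

// Wraps fn so that if its first run takes longer than maxMs,
// all later calls are routed to the cheaper fallback instead.
function withFallback(fn, fallback, maxMs) {
    var profiled = false;
    var slow = false;
    return function () {
        if (slow) return fallback.apply(this, arguments);
        if (profiled) return fn.apply(this, arguments);
        var start = Date.now();
        var result = fn.apply(this, arguments);
        slow = (Date.now() - start) > maxMs;
        profiled = true;
        return result;
    };
}

// Usage (hypothetical): degrade a parallax effect to a static background.
// var render = withFallback(fancyParallax, staticBackground, 16);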
The only way I could think of would be to run some kind of speed test in JS in the background, either before or while using the effects. This should catch devices that are slow because of their processor speed, and conversely identify devices that are fast and timing-accurate. However, if a device offloads graphical effects to a separate processor, this approach will not work.
var speedtest = function(){
    /// record when we start
    var target = new Date().getTime(), slow = 0;
    /// trigger a set interval to keep a constant eye on things
    var iid = setInterval(function(){
        /// get where we actually are in time
        var actual = new Date().getTime();
        /// calculate where we expect time to be
        target += 100;
        /// the threshold of 100 here would need to be tested to find the best sensitivity
        if ( (actual - target) > 100 ) {
            /// make sure we aren't firing on a one-off slowdown; wait until this
            /// has happened a few times in a row. 5 could be too much / too little.
            if ( (++slow) > 5 ) {
                /// finally, if we are consistently slow, stop the interval
                clearInterval(iid);
                /// and disable our fancy resource-hogging things
                turnOffFancyAnimations(); /// assumed to be defined elsewhere
            }
        }
        else if ( slow > 0 ){
            /// decrease slow each time we pass a speed test
            slow--;
        }
        /// update target to take into account browsers not being exactly accurate
        target = actual;
    /// use an interval of 100; any lower than this might be unreliable
    },100);
};
Of course by running this you'll affect the speed of the device as well, so it's not the best solution really. As a rule I tend to disable animations and other superfluous things simply when the screen is small.
One other downside to this method, which I've experienced before, is that in certain multi-tabbed browsers, setInterval is automatically throttled to a certain rate when the tab is not being viewed. This would mean that tabbing away would automatically downgrade the experience in those browsers, unless the imposed speed limit could be detected in some way.
You could make your own mini benchmark of sorts: do some demanding calculations and time the result. If it's slower than what you'd expect from the slowest device you intend to support, drop to a less intensive version of your site.
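A crude sketch of such a benchmark (the amount of busywork and the 200ms cut-off are arbitrary and would need calibrating against the slowest device you support):

// Time a fixed chunk of busywork; slower devices take longer.
function benchmark() {
    var start = Date.now();
    var x = 0;
    for (var i = 0; i < 1000000; i++) {
        x += Math.sqrt(i) * Math.sin(i);
    }
    return Date.now() - start; // elapsed milliseconds
}

var useBasicSite = benchmark() > 200; // assumed threshold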
You could also just make an easily accessible link to switch to the more basic site if the user is experiencing performance issues.
Going off screen size is not a good idea. Plenty of modern phones have small screens with fast processors.
When writing a JavaScript animation, you of course make a loop using setInterval (or repeated setTimeout). But what is the best delay to use in the setInterval/setTimeout call(s)?
In the jQuery API page for the .animate() function, the user "fbogner" says:
Just if anyone is interested: Animations are "rendered" using a setInterval with a time out of 13ms. This is quite fast! Chrome's fastest possible interval is about 10ms. All other browsers "sample" at about 20-30ms.
Any idea how jQuery determined to use this specific number?
Started bounty. I'm hoping someone with knowledge of the source code behind Chromium or Firefox can provide some hard facts that might back up the decision of a specific framerate. Or perhaps a list of animations (or frameworks) and their delays. I believe this makes for an interesting opportunity to do a bit of research.
Interesting - I just took the time to look at Google's Pac-Man source to see what they did. They set up an array of possible FPSes (90, 45, 30), start at the first one, and then each frame they check the "slowness" of the frame (amount the frame exceeded its allotted time). If the slowness exceeds 50ms 20 times, the framerate is notched down to the next in the list (90 -> 45, 45 -> 30). It appears that the framerate is never raised back up, presumably because the game is so short-lived that it wouldn't be worth the trouble to code that.
Oh, and the setInterval delay is of course set to 1000 / framerate. They do, in fact, use setInterval and not repeated setTimeouts.
I think this dynamic framerate feature is pretty neat!
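A sketch of that notching-down logic (the numbers mirror the description above; drawFrame is a hypothetical render function):

// Framerates to fall back through, as in the Pac-Man source.
var fpsList = [90, 45, 30];
var fpsIndex = 0;
var slowCount = 0;
var timerId;

function start() {
    var interval = 1000 / fpsList[fpsIndex];
    timerId = setInterval(function () {
        var frameStart = Date.now();
        drawFrame(); // hypothetical: render one frame
        // "Slowness": how far the frame overran its allotted time.
        var slowness = (Date.now() - frameStart) - interval;
        if (slowness > 50 && ++slowCount >= 20 && fpsIndex < fpsList.length - 1) {
            clearInterval(timerId);
            fpsIndex++; // notch down to the next framerate
            slowCount = 0;
            start();
        }
    }, interval);
}
start();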
I would venture to say that a substantial fraction of web users are using monitors that refresh at 60Hz, which translates to one frame every 16.66ms. So to make the monitor the bottleneck, you need to produce frames faster than 16.66ms.
There are two reasons you would pick a value like 13ms. First, the browser needs a little bit of time to repaint the screen (in my experience, never less than 1ms). That puts you at, say, updating every 15ms, which happens to be a very interesting number: the standard timer resolution on Windows is 15ms (see John Resig's blog post). I suspect that a well-written 15ms animation looks very nearly the same on a wide variety of browsers/operating systems.
FWIW, fbogner is plain wrong about non-Chrome browsers firing setInterval every 20-30ms. I wrote a test to measure the speed of setInterval firing, and got these numbers:
Chrome - 4ms
Firefox 3.5 - 15ms
IE6 - 15ms
IE8 - 15ms
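That measurement is easy to reproduce with a sketch like this (100 samples of a zero-delay interval):

// Measure the average gap between setInterval firings.
var last = Date.now();
var count = 0;
var total = 0;

var id = setInterval(function () {
    var now = Date.now();
    total += now - last;
    last = now;
    if (++count === 100) {
        clearInterval(id);
        console.log('average interval: ' + (total / count) + 'ms');
    }
}, 0);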
The pseudo-code for this is as follows:
FPS_WANTED = 25
(just a number, it can be changed while executing, or it can be constant)

TIME_OF_DRAWING = 1000 / FPS_WANTED
(this is in milliseconds, I believe it is accurate enough)
(should be updated when FPS_WANTED changes)

UntilTheUserLeavesTheDrawingApplication()
{
    time1 = getTime();
    doAnimation();
    time2 = getTime();
    animationTime = time2 - time1;

    if (animationTime > TIME_OF_DRAWING)
    {
        [the FPS_WANTED cannot be reached]
        You can:
        1. Decrease the number of FPS to see if a lower framerate can be achieved
        2. Do nothing because you want to get all you can from the CPU
    }
    else
    {
        [the FPS can be reached - you can decide to]
        1. wait(TIME_OF_DRAWING - animationTime) - to keep a constant framerate of FPS_WANTED
        2. increase framerate if you want
        3. Do nothing because you want to get all you can from the CPU
    }
}
Of course there can be variations of this but this is the basic algorithm that is valid in any case of animation.
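Translated into JavaScript, that loop might look roughly like this (a sketch; doAnimation is assumed to draw one frame):

var fpsWanted = 25; // can be changed while executing

function loop() {
    var timeOfDrawing = 1000 / fpsWanted; // ms budget per frame
    var time1 = Date.now();
    doAnimation(); // hypothetical: draw one frame
    var animationTime = Date.now() - time1;

    if (animationTime > timeOfDrawing) {
        // Budget exceeded: run again as soon as possible
        // (or lower fpsWanted here).
        setTimeout(loop, 0);
    } else {
        // Wait out the rest of the budget to hold a constant framerate.
        setTimeout(loop, timeOfDrawing - animationTime);
    }
}
loop();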
When writing loops for animations, it's best to find a balance between the speed of the loop and how much work needs to be done.
For example, if you want to slide a div across the page within a second so the effect is nice and timely, you would skip coordinates and use a reasonably fast loop time, so the effect is noticeable but not jumpy.
So it's a trial-and-error thing: you have to balance the work, the time, and the browser's capability so it doesn't only look good in one browser.
The numbers quoted by fbogner have been tested.
Browsers throttle JS activity to a certain degree so that they remain usable at all times.
If your JavaScript were allowed to run every 5ms, the browser runtime would have much less CPU time to refresh the rendering or react to user input (clicks), because JavaScript execution blocks the browser.
I think the Chrome devs allow you to run your JavaScript at much shorter intervals than the other browsers because their V8 JavaScript engine compiles the JavaScript; it therefore runs faster, and the browser is not blocked for as long as it is with interpreted JS code.
But the engine being faster is not the only reason shorter intervals are allowed: the devs have certainly tested what the shortest interval is that still avoids blocking the browser for too long.
I don't know the reasoning behind jQuery's interval time, as 13ms translates to about 77fps, which is very fast. The "standard" fps used in movies and such is 25fps, which is fast enough that the human eye won't notice any jittering. 25fps translates to 40ms, so to answer your question: anything below 40ms is enough for an animation.
I'm playing some notes at regular intervals. Each one is delayed by a random number of milliseconds, creating a jarring irregular effect. How do I fix it?
Note: I'm OK with some latency, just as long as it's consistent.
Answers of the type "implement your own small SoundManager2 replacement, optimized for timing-sensitive playback" are OK, if you know how to do that :) but I'm trying to avoid rewriting my whole app in Flash for now.
For an example of app with zero audible latency see the flash-based ToneMatrix.
Testcase
(see it here live or get it in a zip):
<html>
<head>
    <title></title>
    <script type="text/javascript"
            src="http://www.schillmania.com/projects/soundmanager2/script/soundmanager2.js">
    </script>
    <script type="text/javascript">
        soundManager.url = '.';
        soundManager.flashVersion = 9;
        soundManager.useHighPerformance = true;
        soundManager.useFastPolling = true;
        soundManager.autoLoad = true;

        function recur(func, delay) {
            window.setTimeout(function() { recur(func, delay); func(); }, delay);
        }

        soundManager.onload = function() {
            var sound = soundManager.createSound("test", "test.mp3");
            recur(function() { sound.play(); }, 300);
        };
    </script>
</head>
<body>
</body>
</html>
I know this isn't the answer you want to hear, but there is no way to stop this, regardless of whether you wrote your own flash library to play sound or not.
For everyone who said "it works fine for me!" try resizing or moving your browser window as the poster's demo plays out. You'll hear more than just a subtle amount of delay. This is most noticeable in Firefox and IE, but even Chrome will experience it.
What's worse, if you click and hold the mouse down on the close box for the browser window, the sound completely stops until you release your mouse (you can release it outside of the close box and not actually close the window, FYI).
What is going on here?
It turns out that when you start resizing or moving around the browser window, the browser tries to multi-task the act of changing the window properties with the act of keeping up with the javascript going on in the window. It short-changes the window when it needs to.
When you hold down the mouse over the close box in the browser window, time stops completely. This is what is happening in smaller increments when you are re-sizing or moving the window: time is standing still in the javascript world in small, sporadic chunks (or large chunks, depending on how slow your machine is).
Now, you might say "sure, resizing the browser or holding down the close button makes the browser pause, but normally this wouldn't happen". Unfortunately you would be wrong.
It happens all the time, actually. I've run tests, and it turns out that even if you leave the browser window completely still, don't touch the mouse, and don't touch the keyboard, background processes on the computer can still cause "hiccups", meaning that for brief periods (perhaps as small as a few milliseconds) time "stands still" in the browser, at completely random intervals outside of your control.
What do I mean by "standing still"? Let's say you have a setInterval() call (this applies to setTimeout also) running every 33 milliseconds (about 30 frames per second). Now, you would expect that after every 33 "real world" milliseconds your function would get called. And most of the time, this is true.
But when "hiccups" start happening, your setInterval call might happen in 43 milliseconds. What happened during the 10 ms? Nothing. Time stood still. Nothing on the browser was being updated. If you had sound playing, it will continue playing, but no NEW sound calls would start playing, because no javascript is being executed at all. If you had 5 setInterval() functions running, they would have all been paused for 10ms at some point.
The only way to tell that "time stood still" is to poll real-world time in your setInterval callbacks. You'll be able to see that the browser tries to keep up most of the time, but that when you start resizing the window or doing something stressful, the intervals will be longer than usual, yet all of your code will remain synced up (I'm making games using this technique, so you will see that all your game updates happen in sync, just slightly stuttered).
Usually, I should point out, these stutters are completely unnoticeable, and unless you write a function to log real-world time during setInterval callbacks (as I have done in my own testing) you wouldn't even know about it. But it becomes a problem if you try to create some type of repetitive sound (like the beeping in the background of Asteroids) using repetitive play() calls.
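A minimal sketch of that real-time polling (the 33ms interval and 10ms tolerance are just illustrative numbers):

// Log any interval firing that drifted noticeably from real time.
var INTERVAL = 33; // roughly 30 fps
var expected = Date.now() + INTERVAL;

setInterval(function () {
    var drift = Date.now() - expected;
    if (drift > 10) {
        // A "hiccup": the browser stood still for roughly `drift` ms.
        console.log('hiccup of ~' + drift + 'ms');
    }
    expected = Date.now() + INTERVAL;
}, INTERVAL);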
My suggestion? If you have a sound that you know will loop, give it a long duration, maybe 10 seconds, and you'll be less likely to notice the hiccups (now, the graphics on the screen could still hiccup, but you're screwed there).
If you are writing a game and having the main character fire off a machine gun, don't do 10 rapid-succession calls to playSound('singleShot'), do one call to playSound('machineGunFire10Rounds'), or something along those lines.
You'll have to do some trickery to get around it, but in most cases you'll be alright.
It seems that Flash applets run in a process that is somehow less affected by this whole "time freezing" thing going on in the regular browser/javascript environment, but I can still get it to happen, even on your link to the ToneMatrix example, by resizing or moving the browser window.
But Flash still seems much better than javascript. When I leave the browser alone I'd be willing to bet that Flash is not freezing for any amount of time and that intervals are always running on time.
tl;dr:
you're screwed in what you're hoping to achieve
try to deal with it using some workarounds
re-write your project in pure flash (no javascript)
wait for browsers to get better (Firefox 4 is getting a new javascript engine called JaegerMonkey which will be interesting to watch)
how do I know all this? I've done a lot of testing & logging with javascript, setInterval, and soundManager/html5 audio calls
In my comment to your question I mentioned that I don't hear the irregularity when I play your sample. That means I'm either "rhythm deaf", or that there may be something in your setup that interferes with good realtime performance. You don't mention any details of your environment, but you may have other processes running on your computer that are sucking up CPU cycles, or an older version of Flash that may not do a good job of handling sound latencies. I myself am using a recent version of Flash (10.something), whereas your parameters call for Flash 9. But maybe I should assume that if you're smart enough to be using SoundManager2 and StackOverflow that you would have eliminated these problems.
So here are some troubleshooting possibilities and comments that come to mind:
1) the SoundManager site has a number of demos, including JS-DOM "painting" + Sound, V2. Are you hearing irregular latencies and delays there? If not, maybe you can compare what they're doing there against what you're doing. If you are, then maybe look at your machine environment. When I run that demo, it is very responsive. (EDIT: Looking at it more closely, however, you can watch how the size of the brush stamps varies during a stroke. Since it varies with the time interval between mouse events (assuming you are keeping a constant mouse speed), you can visually see any irregularities in the pattern of mouse events. I can see occasional variation in stamp sizes, which does indicate that mouse events are not coming in at regular times. Which brings us to Javascript events.)
2) Javascript setTimeout() and setInterval() are not very reliable when it comes to timing. Mostly they will come back in some ballpark of the interval you have requested, but there can be large variations, usually delays, that make them unreliable. I've found that the same is true when using ActionScript inside Flash. You might want to print out the times that your sound.play() call is being made to see whether the irregularities are due to the irregularities in setTimeout/setInterval(). If that's the case, you could try shortening the interval, and then polling the system time to get much closer to the 300ms interval that you want. You can poll system time using new Date().getTime(), and this seems to have ms accuracy. Polling is of course a hideous hack that sucks up cycles when they could be used for something else, and I don't recommend it in general, but you may try it to see whether it helps. EDIT: Here's a writeup by John Resig on the handling of input and timer events in js.
3) When flash plays sounds, there is usually a latency involved, just so that the player can "build up a head of steam" and make sure there's enough stuff in the buffer to be played before the next buffer request is filled. There's a trade off between this latency and the reliability of uninterrupted playback. This might be a limitation you can't do anything about, short of "implement[ing] your own small SoundManager2 replacement", which I know you don't want to do.
Hope this helps. I recently wrote an AS3 sound experiment which exposed me to some of the basics, and will be watching this space to see what other suggestions people come up with.
You are using a JavaScript interval, which cannot be guaranteed to fire at an exact time. I am sure that Flash's internal timing is far more reliable.
But this might help: fire recur after you have triggered the sound, so the next beat is only scheduled once play() has been called:
window.setTimeout(function() { func(); recur(func, delay); }, delay);
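If the drift still bothers you, here is a sketch of a self-correcting recur that schedules each beat against an absolute timeline, so setTimeout's error does not accumulate from beat to beat (the 300ms period matches the testcase):

function recur(func, delay) {
    var start = Date.now();
    var tick = 0;

    function next() {
        func();
        tick++;
        // Aim at the absolute time of the next beat, not a relative delay.
        var target = start + tick * delay;
        window.setTimeout(next, Math.max(0, target - Date.now()));
    }
    window.setTimeout(next, delay);
}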
As explained in another answer, there's no way you can avoid this. But...
I've done some experiments to mitigate this issues, and in the end I resorted to using:
lowLag: responsive html5 audio, which uses SoundManager2 for some cases where it's the fastest option available.
GSAP JS – Professional-Grade JavaScript Animation, in order to do the animation of properties and syncing of the audio (you probably don't care about this :P)
Take a peek at the source on the prototype of the demo, and (if possible) give lowLag a shot. It worked nicely for me.
Good luck!