How can I recognize slow devices in my website? - javascript

When adapting a web page for mobile devices I always rely on CSS media queries.
Recently I no longer worry only about the screen size, but also about the JavaScript engine of many mobile devices. Some common JavaScript effects that rely on window scrolls or a quick sequence of DOM transformations perform really badly on slow devices.
Is there any way to estimate the device's performance so I can enable/disable elements that look bad on slow devices?
So far I can only think of bad solutions:
screen size: a narrow screen "might" mean a slow device
user agent information: I could look at the device, browser or CPU, but that does not seem like a stable long-term solution because of the sheer number of devices to consider
UPDATE:
Fixed my question to focus on one problem. In the comments there is a good solution for the touch interface problem.

It certainly seems as though there is no particularly good solution for this issue (which makes sense, since this kind of detail is normally meant to be hidden away from page scripts). Either way, I think you're best off starting with UA detection to take care of those platforms that are known to fall into one category or another. Then you'd have two options to flexibly adapt to unknown/uncertain platforms:
Progressive Enhancement: Start with a stripped-down page and load a small performance test (or several) to gauge the device performance, then load the files for the appropriate enhancements. Tests such as the one already provided, or at: Skip some code if the computer is slow
Graceful Degradation: Wrap those features that are candidates for causing unfavorable UX on slower devices in a higher-order function that replaces them if they take too long on first execution. In this case I'd probably add it to Function.prototype and then allow an acceptable delay argument to be chained on to the function definition. After the first invocation store the elapsed time, and then on the second invocation, if the elapsed time is over the delay, swap out the function for a fallback. If the elapsed time is acceptable, remove the profiling code by swapping in the standard function. I'd need to sit down and work out sample code (maybe this weekend); a rough sketch follows below. This could also be adjusted with additional arguments, such as profiling multiple times before swapping.
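Since the sample code was never worked out, here is a rough sketch of the idea. The method name, the threshold, and the usage names are all illustrative assumptions, not a tested implementation:

// Profile a function's first run; if it takes longer than maxDelay ms,
// swap in a fallback for all later calls, otherwise drop the profiling
// and keep the original, as described above.
Function.prototype.withFallback = function (maxDelay, fallback) {
    var original = this, chosen = null;
    return function () {
        if (chosen) return chosen.apply(this, arguments);
        var start = Date.now();
        var result = original.apply(this, arguments);
        // Commit to one implementation after the first timed invocation.
        chosen = (Date.now() - start > maxDelay) ? fallback : original;
        return result;
    };
};

// Hypothetical usage: a fancy scroll effect degrading to a simple one
// if its first run takes more than 30ms.
var onScroll = fancyParallaxScroll.withFallback(30, simpleScrollFallback);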
The first option would likely be the friendlier one, but the second may be less intrusive to existing code. Cookies, or collecting further UA data, would also help you avoid continuing to profile once the information has been retrieved.

The only way I can think of would be to run some kind of speed test in JS in the background, either before or while using the effects. This should catch devices that are slow due to their processor speed, and conversely identify devices whose timing is accurate/fast. However, if a device uses a different processor to calculate graphical effects, this will not work.
var speedtest = function(){
    /// record when we start
    var target = new Date().getTime(), slow = 0;
    /// trigger a set interval to keep a constant eye on things
    var iid = setInterval(function(){
        /// get where we actually are in time
        var actual = new Date().getTime();
        /// calculate where we expect time to be
        target += 100;
        /// the 100 value here would need to be tested to find the best sensitivity
        if ( (actual - target) > 100 ) {
            /// make sure we aren't firing on a one-off slowdown; wait until this
            /// has happened a few times in a row. 5 could be too much / too little.
            if ( (++slow) > 5 ) {
                /// finally, if we are consistently slow, stop the interval
                clearInterval(iid);
                /// and disable our fancy resource-hogging things
                turnOffFancyAnimations();
            }
        }
        else if ( slow > 0 ){
            /// decrease slow each time we pass a speed test
            slow--;
        }
        /// update target to take into account browsers not being exactly accurate
        target = actual;
    /// use an interval of 100; any lower than this might be unreliable
    }, 100);
};
Of course by running this you'll affect the speed of the device as well, so it's not the best solution really. As a rule I tend to disable animations and other superfluous things simply when the screen is small.
One other downside to this method - one that I've experienced before - is that on certain browsers that implement multi-tabbed environments, setInterval is automatically limited to a certain rate when the tab is not being viewed. This would mean that on those browsers, tabbing away would automatically downgrade the experience - unless this imposed speed limit could be detected somehow.

You could make your own mini benchmark of sorts: do some demanding calculations and time the result. If it's slower than on the slowest device you consider supported, drop to a less intensive version of your site. A sketch of this follows below.
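A minimal sketch of such a micro-benchmark; the loop size and the 50ms threshold are made-up numbers you would calibrate against the slowest hardware you intend to support:

function isSlowDevice() {
    // Time a fixed chunk of arithmetic; slower devices take measurably longer.
    var start = Date.now(), x = 0;
    for (var i = 0; i < 1e6; i++) {
        x += Math.sqrt(i) * Math.sin(i);
    }
    // Calibrate this threshold on real hardware before trusting it.
    return (Date.now() - start) > 50;
}

if (isSlowDevice()) {
    // load/enable the less intensive version of the site
}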
You could also just make an easily accessible link to switch to the more basic site if the user is experiencing performance issues.
Going off screen size is not a good idea. Plenty of modern phones have small screens with fast processors.

Related

Scroll throttle: how to choose a good amount of milliseconds?

I am throttling a scroll event like so:
window.addEventListener('scroll', throttle(() => {
    console.log('scroll event triggered with throttle');
}, 150));
Because I do not want the scroll event to trigger hundreds of times per second when somebody scrolls, I have throttled it with a lodash throttle function.
I have read numerous articles that talk about doing this for obvious performance reasons, but none of them discusses what value to choose for the throttle's milliseconds.
Of course this might depend on the use case and what code actually gets executed within the throttle. In my case I am doing a viewport check to see if something is still within the viewport.
How would you go about finding a suitable amount of milliseconds? Of course I would love to go as low as possible, because it will fire the AJAX request sooner when it comes to infinite scroll, but I also do not want to cause performance issues.
It is hard to test this on a high-end desktop because I will probably never run into performance issues there.
A pretty broad question, but I would love to know (preferably in Chrome) whether this can be profiled for worst-case scenarios.
In your case, I suggest using IntersectionObserver.
There will still be performance considerations, but if you only want to check that content is visible, it is a relatively fast operation - the observer gives you this information on the IntersectionObserverEntry. A sketch follows below.
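A minimal sketch of the IntersectionObserver approach for the infinite-scroll case; the #sentinel element and loadMoreItems() are hypothetical names for illustration:

// Watch a sentinel element near the bottom of the list; the callback fires
// when it enters or leaves the viewport, with no scroll handler needed.
const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
        if (entry.isIntersecting) {
            loadMoreItems(); // e.g. fire the AJAX request for the next page
        }
    }
}, { rootMargin: '200px' }); // start loading shortly before it scrolls into view

observer.observe(document.querySelector('#sentinel'));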
When doing animations, try to target a frame rate of 60fps (one frame per ~16ms). If you want to handle orientation change or resize, I'd suggest adding a delay of 450ms (an empirically measured duration of the orientation change on an iPad, which in our experiments was the largest value of all devices tested).
As for testing, Chromium-based browsers allow you to throttle the CPU in the DevTools Performance panel.
I'm not sure whether this is available in other browsers.
In any case it might be a good idea to postpone code execution with requestAnimationFrame (for animations) or requestIdleCallback (for computations). It will help to avoid bottlenecks to some degree; a sketch follows below.
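A hedged sketch of that deferral, assuming only a cheap check runs synchronously and the heavier work is pushed to idle time; processQueue() is a hypothetical stand-in for your deferred work, and the fallback covers browsers without requestIdleCallback:

// Defer non-urgent computation to idle time, falling back to setTimeout
// where requestIdleCallback is unavailable (e.g. Safari).
const scheduleIdle = window.requestIdleCallback
    ? (fn) => window.requestIdleCallback(fn, { timeout: 500 })
    : (fn) => setTimeout(fn, 0);

window.addEventListener('scroll', () => {
    // Keep the synchronous part cheap...
    const nearBottom =
        window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
    // ...and move anything heavier off the critical path.
    if (nearBottom) scheduleIdle(() => processQueue());
}, { passive: true });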

asynchronous / variable framerate in javascript game

This may be a stupid/previously answered question, but it is something that has been stumping me and my friends for a little while, and I have been unable to find a good answer.
Right now, I make all my JS canvas games run in ticks. For example:
function tick(){
    //calculate character position
    //clear canvas
    //draw sprites to canvas
    if(gameOn == true)
        t = setTimeout(tick(), timeout)
}
This works fine for CPU-cheap games on high-end systems, but when I try to draw a little more every tick, it starts to run in slow motion. So my question is: how can I keep the x,y position and hit-detection calculations going at full speed while allowing a variable framerate?
Side Note: I have tried to use the requestAnimationFrame API, but to be honest it was a little confusing (not all that many good tutorials on it) and, while it might speed up your processing, it doesn't entirely fix the problem.
Thanks guys -- any help is appreciated.
requestAnimationFrame makes a big difference. It's probably the solution to your problem. There are two more things you could do: set up a second tick system which handles the model side of it, e.g. hit detection. A good example of this is how PhysiJS does it. It goes one step further and uses a feature of some newer browsers called a web worker, which lets you utilise a second CPU core. John Resig has a good tutorial. But be warned: it's complicated and not very well supported (and hence buggy; it tends to crash a lot).
Really, requestAnimationFrame is very simple: it's just a couple of lines which, once you've set them up, you can forget about. It shouldn't change any of your existing code. It's a bit of a challenge to understand what the code does, but you can pretty much cut-and-replace your setTimeout code with the examples out there. If you ask me, setTimeout is just as complicated! They do virtually the same thing, except setTimeout has a delay time, whereas requestAnimationFrame doesn't - it just calls your function when the browser is ready, rather than after a set period of time. A minimal loop is sketched below.
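For concreteness, a minimal requestAnimationFrame loop that keeps movement speed independent of frame rate by scaling it with the elapsed time; the update/render split and the function names are illustrative, not from the question's code:

let last = performance.now();

function frame(now) {
    const dt = (now - last) / 1000; // seconds since the previous frame
    last = now;

    update(dt);  // hypothetical: move characters by speed * dt, run hit detection
    render();    // hypothetical: clear the canvas and draw the sprites

    if (gameOn) requestAnimationFrame(frame);
}

requestAnimationFrame(frame);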
You're not actually using the ticks. What's happening is that you are repeatedly calling tick() over and over again. You need to remove the () and just leave setTimeout(tick, timeout); Personally I like to use arguments.callee to explicitly state that a function calls itself (and thereby remove the dependency on knowing the function name), though note that arguments.callee is disallowed in strict mode.
With that being said, what I like to do when implementing a variable frame rate is to simplify the underlying engine as much as possible. For instance, to make a ball bounce against a wall, I check if the line from the ball's previous position to the next one hits the wall and, if so, when.
That being said, you need to be careful, because some browsers halt all JavaScript execution when a context menu (or any other menu) is opened, so you could end up with a gap of several seconds or even minutes between two "frames". Personally I think frame-based timing is the way to go in most cases.
As Kolink mentioned, the setTimeout call looks like a bug. Assuming it's only a typo and not actually a bug, I'd say it's unlikely that it's the animation itself (that is to say, the DOM updates) that's really slowing down your code.
How much is a little more? I've animated hundreds of elements on screen at once with good results on IE7 in VMWare on a 1.2GHz Atom netbook (slowest browser I have on the slowest machine I have, the VMWare is because I use Linux).
In my experience, hit detection, if not done properly, causes the most slowdown as the number of elements you're animating increases. That's because a naive implementation is essentially quadratic (it will try to do n² comparisons). The way around this is to filter out comparisons to avoid unnecessary ones.
One of the most common ways of doing this in game engines (regardless of language) is to segment your world map into a larger set of grids. Then you only do hit detection of items in the same grid (and adjacent grids if you want to be more accurate). This greatly reduces the number of comparisons you need to make, especially if you have lots of characters. A sketch follows below.
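A rough sketch of that grid-based filtering, assuming entities with x/y pixel coordinates; the cell size and all names are illustrative:

const CELL = 64; // grid cell size in pixels; tune to typical entity size

// Bucket entities by the grid cell containing their position.
function buildGrid(entities) {
    const grid = new Map();
    for (const e of entities) {
        const key = Math.floor(e.x / CELL) + ',' + Math.floor(e.y / CELL);
        if (!grid.has(key)) grid.set(key, []);
        grid.get(key).push(e);
    }
    return grid;
}

// Compare an entity only against others in its own and adjacent cells.
function nearbyCandidates(grid, e) {
    const cx = Math.floor(e.x / CELL), cy = Math.floor(e.y / CELL);
    const out = [];
    for (let dx = -1; dx <= 1; dx++) {
        for (let dy = -1; dy <= 1; dy++) {
            const bucket = grid.get((cx + dx) + ',' + (cy + dy));
            if (bucket) out.push(...bucket);
        }
    }
    return out;
}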

Control and measure precisely how long an image is displayed

For a reaction time study (see also this question if you're interested) we want to control and measure the display time of images. We'd like to account for the time needed to repaint on different users' machines.
Edit: Originally, I used only inline execution for timing, but I thought I couldn't trust it to accurately measure how long the picture was visible on the user's screen, because painting takes some time.
Later, I found the "MozAfterPaint" event. It needs a configuration change to run on users' computers, and the corresponding WebkitAfterPaint never shipped. This means I can't use it on users' computers, but I used it for my own testing. I pasted the relevant code snippets and the results from my tests below.
I also manually checked results with SpeedTracer in Chrome.
// from the loop pre-rendering images for faster display
var imgdiv = $('<div class="trial_images" id="trial_images_'+i+'" style="display:none">' +
    '<img class="top" src="' + toppath + '"><br>' +
    '<img class="bottom" src="' + botpath + '"></div>');
Session.imgs[i] = imgdiv.append(botimg);
$('#trial').append(Session.imgs);

// in Trial.showImages
$(window).one('MozAfterPaint', function () {
    Trial.FixationHidden = performance.now();
});
$('#trial_images_'+Trial.current).show(); // this would cause reflows, but I've since changed it to use the visibility property and absolutely positioned images, to minimise reflows
Trial.ImagesShown = performance.now();

Session.waitForNextStep = setTimeout(Trial.showProbe, 500); // 500ms

// in Trial.showProbe
$(window).one('MozAfterPaint', function () {
    Trial.ImagesHidden = performance.now();
});
$('#trial_images_'+Trial.current).hide();
Trial.ProbeShown = performance.now();
// show Probe etc...
Results from comparing the durations measured using MozAfterPaint and inline execution.
This doesn't make me too happy. First, the median display duration is about 30ms shorter than I'd like. Second, the variance using MozAfterPaint is pretty large (and bigger than for inline execution), so I can't simply adjust it by increasing the setTimeout by 30ms. Third, this is on my fairly fast computer, results for other computers might be worse.
Results from SpeedTracer
These were better. The time an image was visible was usually within 4 (sometimes 10) ms of the intended duration. It also looked like Chrome accounted for the time needed to repaint in the setTimeout call (so there was a 504ms difference between the calls if the image needed 4ms to repaint).
Unfortunately, I wasn't able to analyse and plot results for many trials in SpeedTracer, because it only logs to console. I'm not sure whether the discrepancy between SpeedTracer and MozAfterPaint reflects differences in the two browsers or something that is lacking in my usage of MozAfterPaint (I'm fairly sure I interpreted the SpeedTracer output correctly).
Questions
I'd like to know
How can I measure the time it was actually visible on the user's machine or at least get comparable numbers for a set of different browsers on different testing computers (Chrome, Firefox, Safari)?
Can I offset the rendering & painting time to arrive at 500ms of actual visibility? If I have to rely on a universal offset, that would be worse, but still better than showing the images for such a short duration that the users don't see them consciously on somewhat slow computers.
We use setTimeout. I know about requestAnimationFrame but it doesn't seem like we could obtain any benefits from using it:
The study is supposed to be in focus for the entire duration of the study and it's more important that we get a +/-500ms display than a certain number of fps. Is my understanding correct?
Obviously, JavaScript is not ideal for this, but it's the least bad option for our purposes (the study has to run online on users' own computers, asking them to install something would scare some off, and Java isn't bundled with Mac OS X browsers anymore).
We're allowing only current versions of Safari, Chrome, Firefox and maybe MSIE (feature detection for performance.now and fullscreen API, I haven't checked how MSIE does yet) at the moment.
Because I haven't got any more answers yet, but learnt a lot while editing this question, I'm posting my progress so far as an answer. As you'll see, it's still not optimal, and I'll gladly award the bounty to anyone who improves on it.
Statistics
In the leftmost panel you can see the distribution that led me to doubt the time estimates I was getting.
The middle panel shows what I achieved after caching selectors, re-ordering some calls, using some more chaining, minimising reflows by using visibility and absolute positioning instead of display.
The rightmost panel shows what I got after using an adapted function by Joe Lambert using requestAnimationFrame. I did that after reading a blogpost about rAF now having sub-millisecond precision too. I thought it would only help me to smooth animations, but apparently it helps with getting better actual display durations as well.
Results
In the final panel the mean for the "paint-to-paint" timing is ~500ms, the mean for inline execution timing scatters realistically (makes sense, because I use the same timestamp to terminate the inner loop below) and correlates with "paint-to-paint" timing.
There is still a good bit of variance in the durations and I'd love to reduce it further, but it's definitely progress. I'll have to test it on some slower and some Windows computers to see if I'm really happy with it, originally I'd hoped to get all deviations below 10ms.
I could also collect way more data if I made a test suite that does not require user interaction, but I wanted to do it in our actual application to get realistic estimates.
window.requestTimeout using window.requestAnimationFrame
window.requestTimeout = function(fn, delay) {
    var start = performance.now(),
        handle = {};

    function loop() {
        var current = performance.now(),
            delta = current - start;

        delta >= delay ? fn.call() : handle.value = window.requestAnimationFrame(loop);
    }

    handle.value = window.requestAnimationFrame(loop);
    return handle;
};
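For illustration, a hypothetical drop-in usage for the earlier setTimeout call:

// Instead of: Session.waitForNextStep = setTimeout(Trial.showProbe, 500);
Session.waitForNextStep = window.requestTimeout(Trial.showProbe, 500);
// To cancel: window.cancelAnimationFrame(Session.waitForNextStep.value);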
Edit:
An answer to another question of mine links to a good new article.
Did you try getting the initial milliseconds and, once the event is fired, calculating the difference, instead of using setTimeout? Something like:
var startDate = new Date();
var startMilliseconds = startDate.getTime();

// when the event is fired:
(...), function() {
    console.log(new Date().getTime() - startMilliseconds);
});
Try avoiding the use of jQuery if possible. Plain JS will give you better response times and better overall performance.

How to implement renderer with respect to CPU capabilities

I was wondering what is the best way to implement a renderer in JavaScript. It's not the content part of the rendering that's really important here - I would rather like to hear when and how to effectively run the renderer code.
Currently, I have window.setInterval(renderFunc, 1000 / 20), which will just render a frame every 50ms (i.e. fps = 20).
The problem is that faster computers won't render more frames; moreover, slower computers will not be able to keep up with 20fps, so the function is called more often than the computer can handle.
I was thinking of a while(true) loop, but this uses 100% CPU and will freeze the browser itself - so my game (the renderer is for my 3D game) won't be playable at all anymore, since you cannot click on buttons anymore.
What is the most efficient option in this scenario, or is there a better method that has not come to my mind?
Thanks in advance.
There's a feature made specifically for animation, called window.requestAnimationFrame. It means the browser chooses when to call your animation function instead of you hardcoding intervals. Lots of benefits from using it:
Faster computers will get higher frame rates
Windows/tabs that aren't visible are updated much less often
Varying frame rate depending on CPU usage
This article explains how to use it in a cross browser fashion:
http://paulirish.com/2011/requestanimationframe-for-smart-animating/
The article also links to http://jsfiddle.net/paul/XQpzU/2/
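A minimal sketch in the spirit of that article's shim, falling back to setTimeout at roughly 60fps where requestAnimationFrame is missing (vendor-prefix handling omitted; renderFunc is the renderer from the question):

// Use the native API when present; otherwise approximate 60fps with setTimeout.
var raf = window.requestAnimationFrame ||
    function (callback) {
        return window.setTimeout(function () {
            callback(Date.now());
        }, 1000 / 60);
    };

(function loop() {
    renderFunc(); // draw one frame
    raf(loop);    // schedule the next one
})();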
You could time how long a frame render takes, and adjust frame spacing to achieve an acceptable load. E.g., if your current frame took 5ms to render, and you want 50% load, wait 5ms before the next frame. You'll want to put some sanity checks on it, and also probably use times from the last several frames.
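A rough sketch of that load-targeting idea, assuming a 50% target, with an illustrative minimum wait as the sanity check:

function frame() {
    var start = performance.now();
    renderFunc(); // render one frame
    var renderTime = performance.now() - start;
    // For ~50% CPU load, idle for about as long as the frame took to render,
    // with a small floor so a trivially fast frame can't create a tight loop.
    setTimeout(frame, Math.max(renderTime, 4));
}
frame();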
Look into trying the while(true) loop inside of a web worker thread. This should prevent the browser from locking up. Note that you can't directly manipulate a <canvas> or any other part of the DOM from within the worker thread.
https://developer.mozilla.org/En/Using_web_workers
"setInterval() will pass the number of milliseconds late the callback was called"
-- https://developer.mozilla.org/en/window.setInterval
You could use this to dynamically adjust your interval time.
N.B. The MDC docs are for JavaScript as implemented by Mozilla, not ECMAScript or other implementations. You should check whether this works in other browsers.
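A hedged sketch of that dynamic adjustment, relying on the Mozilla-specific lateness argument quoted above (other browsers may not pass it; the backoff numbers are illustrative):

var delay = 50; // target ~20fps
var id = setInterval(step, delay);

function step(lateness) {
    renderFunc();
    // In Mozilla's implementation the callback may receive how many ms late
    // it fired; back off when we are running behind.
    if (typeof lateness === 'number' && lateness > delay) {
        clearInterval(id);
        delay = Math.min(delay * 2, 200); // illustrative cap
        id = setInterval(step, delay);
    }
}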

How to determine the best "framerate" (setInterval delay) to use in a JavaScript animation loop?

When writing a JavaScript animation, you of course make a loop using setInterval (or repeated setTimeout). But what is the best delay to use in the setInterval/setTimeout call(s)?
In the jQuery API page for the .animate() function, the user "fbogner" says:
Just if anyone is interested: Animations are "rendered" using a setInterval with a time out of 13ms. This is quite fast! Chrome's fastest possible interval is about 10ms. All other browsers "sample" at about 20-30ms.
Any idea how jQuery determined to use this specific number?
Started bounty. I'm hoping someone with knowledge of the source code behind Chromium or Firefox can provide some hard facts that might back up the decision of a specific framerate. Or perhaps a list of animations (or frameworks) and their delays. I believe this makes for an interesting opportunity to do a bit of research.
Interesting - I just took the time to look at Google's Pac-Man source to see what they did. They set up an array of possible FPSes (90, 45, 30), start at the first one, and then each frame they check the "slowness" of the frame (amount the frame exceeded its allotted time). If the slowness exceeds 50ms 20 times, the framerate is notched down to the next in the list (90 -> 45, 45 -> 30). It appears that the framerate is never raised back up, presumably because the game is so short-lived that it wouldn't be worth the trouble to code that.
Oh, and the setInterval delay is of course set to 1000 / framerate. They do, in fact, use setInterval and not repeated setTimeouts.
I think this dynamic framerate feature is pretty neat!
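A sketch of that notch-down scheme, reconstructed from the description above; the FPS list and thresholds mirror the ones quoted, everything else is illustrative:

var fpsOptions = [90, 45, 30];
var fpsIndex = 0, slowCount = 0, intervalId;

function startLoop() {
    var frameTime = 1000 / fpsOptions[fpsIndex];
    var expected = Date.now() + frameTime;
    intervalId = setInterval(function () {
        drawFrame(); // hypothetical render call
        // "Slowness": how far the frame exceeded its allotted time.
        var slowness = Date.now() - expected;
        expected = Date.now() + frameTime;
        if (slowness > 50 && ++slowCount >= 20 &&
                fpsIndex < fpsOptions.length - 1) {
            // Notch down to the next framerate in the list and start over.
            clearInterval(intervalId);
            fpsIndex++;
            slowCount = 0;
            startLoop();
        }
    }, frameTime);
}

startLoop();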
I would venture to say that a substantial fraction of web users are using monitors that refresh at 60Hz, which translates to one frame every 16.66ms. So to make the monitor the bottleneck, you need to produce frames faster than 16.66ms.
There are two reasons you would pick a value like 13ms. First, the browser needs a little bit of time to repaint the screen (in my experience, never less than 1ms), which puts you at, say, updating every 15ms - and that happens to be a very interesting number: the standard timer resolution on Windows is 15ms (see John Resig's blog post). I suspect that a well-written 15ms animation looks very close to the same on a wide variety of browsers/operating systems.
FWIW, fbogner is plain wrong about non-Chrome browsers firing setInterval every 20-30ms. I wrote a test to measure the speed of setInterval firing, and got these numbers:
Chrome - 4ms
Firefox 3.5 - 15ms
IE6 - 15ms
IE8 - 15ms
The pseudo-code for this is this one:
FPS_WANTED = 25
(just a number; it can be changed while executing, or it can be constant)

TIME_OF_DRAWING = 1000 / FPS_WANTED
(this is in milliseconds; I believe it is accurate enough)
(should be updated when FPS_WANTED changes)

UntilTheUserLeavesTheDrawingApplication()
{
    time1 = getTime();
    doAnimation();
    time2 = getTime();
    animationTime = time2 - time1;

    if (animationTime > TIME_OF_DRAWING)
    {
        [the FPS_WANTED cannot be reached]
        You can:
        1. Decrease the number of FPS to see if a lower framerate can be achieved
        2. Do nothing because you want to get all you can from the CPU
    }
    else
    {
        [the FPS can be reached - you can decide to]
        1. wait(TIME_OF_DRAWING - animationTime) - to keep a constant framerate of FPS_WANTED
        2. increase the framerate if you want
        3. Do nothing because you want to get all you can from the CPU
    }
}
Of course there can be variations of this, but this is the basic algorithm, valid for any kind of animation. A JavaScript rendering is sketched below.
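For concreteness, a minimal JavaScript rendering of that algorithm using setTimeout (taking option 1 in both branches; doAnimation is a hypothetical stand-in for one frame of work):

var FPS_WANTED = 25;
var TIME_OF_DRAWING = 1000 / FPS_WANTED;

function loop() {
    var time1 = Date.now();
    doAnimation(); // one frame of work
    var animationTime = Date.now() - time1;

    // If the frame overran its budget there is nothing left to wait out;
    // otherwise sleep off the remainder to hold a steady framerate.
    setTimeout(loop, Math.max(TIME_OF_DRAWING - animationTime, 0));
}

loop();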
When doing loops for animations, it's best that you find a balance between the speed of the loop, and how much work needs to be done.
For example, if you want to slide a div across the page within a second so the effect is nice and timely, you would skip coordinates and use a reasonably fast loop time, so the effect is noticeable but not jumpy.
So it's a matter of trial and error (taking the work, the timing, and browser capability into account), so that it looks nice in more than one browser.
The numbers quoted by fbogner have been tested.
Browsers throttle JS activity to a certain degree so they remain usable at all times.
If your JavaScript could run every 5ms, the browser runtime would have much less CPU time to refresh the rendering or react to user input (clicks), because JavaScript execution blocks the browser.
I think the Chrome devs allow you to run your JavaScript at much shorter intervals than the other browsers because their V8 JavaScript engine compiles the JavaScript, so it runs faster and the browser is not blocked for as long as with interpreted JS code.
But the engine being faster is not the only reason for allowing shorter intervals; the devs have certainly tested what the shortest interval is that doesn't block the browser for too long.
Don't know the reasoning behind jQuery's interval time, as 13ms translates to roughly 77fps, which is very fast. The "standard" fps used in movies and such is 25fps, and it is fast enough that the human eye won't notice any jittering. 25fps translates to 40ms per frame, so to answer your question: anything below 40ms is enough for an animation.
