Having just discovered requestAnimationFrame, I've dug into all the information I could find about it. To name just a few of the resources I came across, in case others are looking for more info about it:
http://creativejs.com/resources/requestanimationframe/ - explains the basics about it.
http://www.html5rocks.com/en/tutorials/speed/animations/ - explains how to use it.
Anyway, all of these resources tell me something about how requestAnimationFrame works or how it could/should be used, but none of them tell me when it is right to use it.
Should I use it for animations (repeated changes to the style of an element, much like CSS animations)?
Should I use it when an automated event wants to change the css/classes of one or multiple elements?
Should I use it when an automated event wants to change the text value of one or multiple elements? (e.g. updating the value of a clock once every second)
Should I use it when an automated event wants to modify the DOM?
Should I use it when an automated event needs values like .offsetTop, .offsetLeft and then wants to change styles such as top and left a few lines further?
Should I use it when a user generated event causes any of the above changes?
TL;DR: When is it right to use requestAnimationFrame?
You shouldn't yet. Not really, at least. This is still experimental and may or may not reach full recommendation (it is still a working draft at this point). That said, if you don't care about older browsers, or are willing to work with the available polyfill, the best time to use it is when you are looking to draw things to the screen that will require the browser to repaint (most animations).
For many simple modifications of the DOM, this method is overkill. It only becomes useful for animations where you are drawing or moving items quickly and need to make sure the browser's repainting keeps up well enough to stay smooth. It lets you ensure that every frame you calculate will actually be drawn to the screen. It also gives your animations more accurate time measurements: the first argument passed to your callback is the time at which the paint will occur, so you can make sure you are where you should be at that moment.
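For example (just a rough sketch, not taken from anywhere in particular: the "box" element id and the 200px-over-one-second movement are made-up values), a minimal animation loop using that timestamp could look like this:

var box = document.getElementById('box'); // assumed element with position: relative/absolute
var start = null;
var duration = 1000; // ms, arbitrary example

function step(timestamp) {
    // timestamp is the time the browser will paint this frame
    if (start === null) start = timestamp;
    var progress = Math.min((timestamp - start) / duration, 1);
    box.style.left = (progress * 200) + 'px'; // move 200px over one second
    if (progress < 1) {
        requestAnimationFrame(step);
    }
}

requestAnimationFrame(step);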
You should not use it when you are doing many simple modifications to the DOM, or for things that don't need to be smoothly transitioned. It will be more expensive on your users' computers, so you want to limit it to making transitions, movements and animations smoother. Forcing a frame redraw is not needed every time you make a change on the page; the response will be fast enough most of the time, so you don't need to worry about that extra couple of milliseconds between draws.
As the previous answer says, you should not use it for discontinuous animations, because those don't need to be smoothly transitioned. In most cases it's used for properties that vary continuously with time.
Related
I'm currently maintaining an old legacy project, where I just noticed a message from my browser about how scroll-linked effects reduce rendering performance.
The application renders a large form; at the bottom there is a slider with lots of data, which is loaded asynchronously (and only if the user scrolls down, so it comes into view). Afterwards it creates a lot of DOM elements (typically 100-300 figures, each with an image, some text and a few attributes) that are added to the slider.
So I was reading about scroll-linked effects on MDN to address this performance issue, but I don't know what the best practice is. I can use neither the "sticky positioning" nor the "scroll snapping" examples, and I do not want to customize scrolling in any way.
The goal of this behavior is to delay loading the big bunch of data as long as possible (since the user only really needs it once they scroll the page down).
Can you please help me figure out how to take advantage of this, to optimize the scrolling performance of the application without losing the lazy-loading feature?
You can use a combination of a check for whether the element is visible in the viewport and requestAnimationFrame:
https://stackoverflow.com/a/125106/800765
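Something along these lines, as a rough sketch: the "#slider" selector and the loadSliderData() function are placeholders for whatever your project actually uses.

var slider = document.querySelector('#slider'); // placeholder selector
var loaded = false;
var ticking = false;

function isInViewport(el) {
    var rect = el.getBoundingClientRect();
    return rect.top < window.innerHeight && rect.bottom > 0;
}

function onScroll() {
    if (loaded || ticking) return;
    ticking = true;
    // do the measurement and the work in the next animation frame
    // instead of directly inside the scroll handler
    requestAnimationFrame(function () {
        ticking = false;
        if (isInViewport(slider)) {
            loaded = true;
            loadSliderData(); // placeholder for your async loading of the 100-300 figures
            window.removeEventListener('scroll', onScroll);
        }
    });
}

window.addEventListener('scroll', onScroll);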
I have a very long page with multiple canvases. They don't overlap each other, and are mostly used separately for PIXI.js to play spritesheets.
I use requestAnimationFrame to render each canvas.
I have a few questions since I'm unsure how to optimize.
1) When a canvas is offscreen, do I need to cancelAnimationFrame? Or does it not matter because it is offscreen and therefore won't be painted?
2) Should I have all my render functions within the same requestAnimationFrame? Will that improve performance?
Any other suggestions?
The comments provide most of the information, but let me still answer all the points:
requestAnimationFrame is a window method, not a canvas method, so it is not aware of canvas visibility. There is an "element" parameter in WebKit, but it is not in the current spec (http://www.w3.org/TR/animation-timing/#requestAnimationFrame). Thus there is no benefit in having multiple handlers; it will improve performance to have just one requestAnimationFrame callback taking care of the entire flow.
Browsers (tested on Safari/FF/Chrome) will not fire requestAnimationFrame callbacks if the tab is not visible at all, so there's not much benefit in cancelling the animation frame request manually.
PIXI does not check for canvas visibility in renderer.render(stage). You can use getBoundingClientRect to check canvas visibility before rendering, which may increase performance quite a bit. See: How to tell if a DOM element is visible in the current viewport?
The best way to profile this particular scenario is probably the Chrome Timeline view (https://developer.chrome.com/devtools/docs/timeline). Skipping the animation may reduce browser composite/paint times. Checking only JavaScript execution time might be a little misleading, especially since WebGL calls may execute asynchronously.
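To combine points 1 and 3, a rough sketch could look like the following; the views array (one entry per canvas with its PIXI renderer and stage) is an assumption about how your setup might be structured:

// one entry per canvas: built wherever you set up PIXI
var views = [ /* { renderer: ..., stage: ..., canvas: ... } */ ];

function isCanvasVisible(canvas) {
    var rect = canvas.getBoundingClientRect();
    return rect.bottom > 0 && rect.top < window.innerHeight;
}

function renderAll() {
    requestAnimationFrame(renderAll); // single handler for the entire flow
    views.forEach(function (view) {
        // skip canvases that are currently scrolled out of the viewport
        if (isCanvasVisible(view.canvas)) {
            view.renderer.render(view.stage);
        }
    });
}

requestAnimationFrame(renderAll);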
This may be a stupid/previously answered question, but it is something that has been stumping me and my friends for a little while, and I have been unable to find a good answer.
Right now, I make all my JS Canvas games run in ticks. For example:
function tick(){
    //calculate character position
    //clear canvas
    //draw sprites to canvas
    if(gameOn == true)
        t = setTimeout(tick(), timeout)
}
This works fine for CPU-cheap games on high-end systems, but when I try to draw a little more every tick, it starts to run in slow motion. So my question is: how can I keep the x,y position and hit-detection calculations going at full speed while allowing a variable framerate?
Side Note: I have tried to use the requestAnimationFrame API, but to be honest it was a little confusing (not all that many good tutorials on it) and, while it might speed up your processing, it doesn't entirely fix the problem.
Thanks guys -- any help is appreciated.
RequestAnimationFrame makes a big difference; it's probably the solution to your problem. There are two more things you could do: set up a second tick system which handles the model side of it, e.g. hit detection. A good example of this is how PhysiJS does it. It goes one step further and uses a feature of some newer browsers called a web worker, which allows you to utilise a second CPU core. John Resig has a good tutorial. But be warned: it's complicated and not very well supported (and hence buggy; it tends to crash a lot).
Really, requestAnimationFrame is very simple: it's just a couple of lines which, once you've set them up, you can forget about. It shouldn't change any of your existing code. It is a bit of a challenge to understand what the code does, but you can pretty much cut-and-replace your setTimeout code with the examples out there. If you ask me, setTimeout is just as complicated! They do virtually the same thing, except setTimeout takes a delay time whereas requestAnimationFrame doesn't: it just calls your function when the browser is ready to draw, rather than after a set period of time.
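For instance, a straight cut-and-replace of the tick() loop from the question could look like this sketch (keeping the gameOn flag and comments as they are):

function tick() {
    //calculate character position
    //clear canvas
    //draw sprites to canvas
    if (gameOn == true) {
        requestAnimationFrame(tick); // pass the function itself: no () and no delay
    }
}

requestAnimationFrame(tick);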
You're not actually using the ticks. What's happening is that you are repeatedly calling tick() over and over again. You need to remove the () and just leave setTimeout(tick, timeout);. Personally, I like to use arguments.callee to explicitly state that a function calls itself (and thereby remove the dependency on knowing the function name).
With that being said, what I like to do when implementing a variable frame rate is to simplify the underlying engine as much as possible. For instance, to make a ball bounce against a wall, I check if the line from the ball's previous position to the next one hits the wall and, if so, when.
That being said, you need to be careful, because some browsers halt all JavaScript execution when a context menu (or any other menu) is opened, so you could end up with a gap of several seconds or even minutes between two "frames". Personally, I think frame-based timing is the way to go in most cases.
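If you do go with time-based movement instead, a common way to guard against such gaps is to clamp the measured delta; a small sketch, where update() and render() stand in for your own functions:

var lastTime = null;

function frame(timestamp) {
    requestAnimationFrame(frame);
    if (lastTime === null) lastTime = timestamp;
    // cap the elapsed time so a long pause (open menu, background tab)
    // doesn't produce one huge jump in the simulation
    var dt = Math.min(timestamp - lastTime, 50); // ms; the cap is an arbitrary choice
    lastTime = timestamp;
    update(dt);  // placeholder: move objects by speed * dt
    render();    // placeholder: draw the current state
}

requestAnimationFrame(frame);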
As Kolink mentioned, the setTimeout call looks like a bug. Assuming it's only a typo and not actually a bug, I'd say it is unlikely that the animation itself (that is to say, the DOM updates) is really what's slowing down your code.
How much is a little more? I've animated hundreds of elements on screen at once with good results on IE7 in VMWare on a 1.2GHz Atom netbook (the slowest browser I have on the slowest machine I have; the VMWare is because I use Linux).
In my experience, hit detection, if not done properly, causes the most slowdown as the number of elements you're animating increases. That's because a naive implementation is essentially quadratic: it tries to compare every element against every other, roughly n^2 comparisons. The way around this is to filter the comparisons so you avoid the unnecessary ones.
One of the most common ways of doing this in game engines (regardless of language) is to segment your world map into a grid of cells. Then you only do hit detection between items in the same cell (and adjacent cells, if you want to be more accurate). This greatly reduces the number of comparisons you need to make, especially if you have lots of characters.
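A very small sketch of that grid idea (the cell size is arbitrary, and hitTest() stands in for whatever per-pair check your game uses); it only compares items within the same cell, so extend it to neighbouring cells if you need more accuracy:

var CELL_SIZE = 64; // arbitrary cell size in pixels

function buildGrid(entities) {
    // bucket entities by the grid cell their x/y position falls into
    var grid = {};
    entities.forEach(function (e) {
        var key = Math.floor(e.x / CELL_SIZE) + ',' + Math.floor(e.y / CELL_SIZE);
        (grid[key] = grid[key] || []).push(e);
    });
    return grid;
}

function checkCollisions(entities, hitTest) {
    var grid = buildGrid(entities);
    Object.keys(grid).forEach(function (key) {
        var bucket = grid[key];
        // only compare pairs that share a cell
        for (var i = 0; i < bucket.length; i++) {
            for (var j = i + 1; j < bucket.length; j++) {
                if (hitTest(bucket[i], bucket[j])) {
                    // handle the hit here
                }
            }
        }
    });
}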
I am in the process of making a game where the health bar (animated) and some other info, like icons showing the number of bombs the player has, are represented visually. Now, this can be done either in canvas (by making another canvas for the info that sits over the main canvas) or using many divs and spans with absolute positioning. This is my first time making a browser-based game, so if any experienced people see this, tell me what you recommend. I would like to know which method would be faster.
The game will also be running on mobile devices. Thanks!
There is no straightforward answer, and I suggest you do FPS testing with different browsers to see how it plays out for your use case. If you do not wish to go that in-depth, I suggest you simply draw the elements inside the canvas and, if you need to hide them, leave the drawHUD() call out of your rendering loop.
For an HTML HUD overlay on <canvas>, the following factors should be considered:
Can the web browser's compositor still hardware-accelerate the <canvas> properly if there are DOM elements on top of the canvas?
HTML/DOM manipulation will always be slower than <canvas> operations due to the inherent complexity of dealing with DOM elements.
<canvas> pixel space stays inside the <canvas>, and it might be difficult to get pixel-perfect alignment if you try to position elements outside the canvas against what you draw on the canvas itself.
HTML offers many more formatting options for text than canvas fillText() does; is the HTML formatting necessary?
Use the canvas. Use two canvases if you want, one overlaid over the other, but use the canvas.
Touching the DOM at all is slow. Making the document redo its layout because DOM elements moved or changed size is very slow. Dealing with the canceling (or not) of even more events because there are DOM items physically on top of the canvas can be a pain, so why bother dealing with that?
If your HUD does not update very often, then the fastest thing to do would be to draw it to an in-memory canvas when it changes, and then always draw that canvas to the main canvas when you update the frame. That way your drawHUD method will look exactly like this:
function drawHUD() {
    // This is what gets called every frame
    // one call to drawImage = simple and fast
    ctx.drawImage(inMemoryCanvas, 0, 0);
}
and of course updating the HUD information would be like:
function updateHUD() {
    // This is only called if information in the HUD changes
    inMemCtx.clearRect(0, 0, width, height);
    inMemCtx.fillRect(blah);
    inMemCtx.drawImage(SomeHudImage, x, y);
    var textToDraw = "Actually text is really slow and if there's " +
        "often repeated lines of text in your game you should be " +
        "caching them to images instead";
    inMemCtx.fillText(textToDraw, x, y);
}
Since HUDs often contain text I really do urge caching it if you're using any. More on text performance here.
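As a rough sketch of that caching (the 32px line height and the font/colour handling are simplifications you'd adjust to your game):

var textCache = {};

function getCachedText(text, font, color) {
    var key = text + '|' + font + '|' + color;
    if (!textCache[key]) {
        var c = document.createElement('canvas');
        var cctx = c.getContext('2d');
        cctx.font = font;
        c.width = Math.ceil(cctx.measureText(text).width) || 1;
        c.height = 32; // rough line height, adjust to your font
        // resizing the canvas resets the context state, so set it again
        cctx.font = font;
        cctx.fillStyle = color;
        cctx.textBaseline = 'top';
        cctx.fillText(text, 0, 0);
        textCache[key] = c;
    }
    return textCache[key];
}

// inside updateHUD(), instead of fillText:
// inMemCtx.drawImage(getCachedText('Bombs: 3', '16px sans-serif', '#fff'), x, y);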
As others have said, there is no universally best approach, as it depends on the specifics of what you need to render, how often, and possibly what messaging needs to happen between graphical components.
While it is true that DOM reflows are expensive, this blanket warning is not always applicable. For instance, using position:fixed; elements avoids triggering reflows for the page (though not necessarily within the element, if there are non-fixed children). Repaint is (correct me if this is wrong) expensive because it is pixel pushing, and so is not intrinsically slower than pushing the same number of pixels to a canvas. It can be faster for some things. What's more, each has certain operations that have performance advantages over the other.
Here are some points to consider:
It's increasingly possible to use WebGL-accelerated canvas elements on many A-grade browsers. This works fine for 2D, with the advantage that drawing operations are sent to the GPU, which is MUCH faster than the 2D context. However, this may not be available on some target platforms (e.g., at the time of this writing, it is available in iOS Safari but not in the iOS UIWebView used if you target hybrid mobile applications). Using a library to wrap the canvas can abstract this and use WebGL if it's available. Take a look at pixi.js.
Conversely, the DOM has CSS3 animations/transitions which are typically hardware-accelerated by the GPU automatically (with no reliance on WebGL). Depending on the type of animation, you can often get much faster results this way than with canvas, and often with simpler code.
Ultimately, as a rule in software performance, understanding the algorithms used is critical. That is, regardless of which approach used, how are you scheduling animation frames? Have you looked in a profiler to see what things take the most time? This practice is excellent for understanding what is impacting performance.
I've been working on an app with multiple animations, and have implemented each component both as DOM and as canvas. I was initially surprised that the DOM version performed better than the canvas (wrapped with KineticJS) version, though I now see that this was because all the animated elements were position:fixed and animated using CSS (under the hood via jQuery UI), thereby getting GPU performance. However, the code to manage these elements felt clunky (in my case, ymmv). Using a canvas approach allows more pixel-perfect rendering, but then it loses the ability to style with CSS (which technically allows pixel-perfect rendering as well, but may be more or less complex to achieve).
I achieved a big speed-up by throttling the most complex animation to a lower framerate, which for my case is indistinguishable from the 60fps version but runs smooth as butter on an older iPad 2. Throttling required using requestAnimationFrame and clamping calls to be no more frequent than the desired framerate. This would be hard to do with CSS animations on the DOM (though again, these are intrinsically faster for many things). The next thing I'm looking at is syncing multiple canvas-based components to the same requestAnimationFrame loop (possibly independently throttled, or with a round-robin approach where each component gets a set fraction of the framerate, which may work okay for 2-3 elements). Incidentally, I have some GUI controls like sliders that are not locked to any framerate, as they should be as close to 60fps as possible and are small/simple enough that I haven't seen performance issues with them.
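The clamping mentioned above boils down to something like this sketch (30fps is just an example target, and drawComplexAnimation() is a stand-in for the expensive work):

var TARGET_FPS = 30; // example target, half of 60fps
var frameInterval = 1000 / TARGET_FPS;
var lastDraw = 0;

function loop(timestamp) {
    requestAnimationFrame(loop);
    if (timestamp - lastDraw < frameInterval) {
        return; // too early for the target rate, skip this frame
    }
    lastDraw = timestamp;
    drawComplexAnimation(); // placeholder for the expensive drawing
}

requestAnimationFrame(loop);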
I also achieved a huge speed boost by profiling and seeing that one class in my code that had nothing to do with the GUI was having a specific method called very often to calculate a property. The class in question was immutable, so I changed the method to memoize the value and saw the CPU usage drop in half. Thanks Chrome DevTools and the flame chart! Always profile.
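The memoization itself is nothing exotic; roughly this pattern, with made-up names (it only works because the object never changes after construction):

function Path(points) {
    this.points = points;  // treated as immutable after construction
    this._length = null;   // cache slot for the expensive computation
}

Path.prototype.length = function () {
    if (this._length === null) {
        var total = 0;
        for (var i = 1; i < this.points.length; i++) {
            var dx = this.points[i].x - this.points[i - 1].x;
            var dy = this.points[i].y - this.points[i - 1].y;
            total += Math.sqrt(dx * dx + dy * dy);
        }
        this._length = total; // computed once, returned from cache afterwards
    }
    return this._length;
};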
Most of the time, the number of pixels being updated will tend to be the biggest bottleneck, though if you can do it on the GPU you have effectively regained all the CPU for your code. DOM reflows should be avoided, but this does not mean avoid the DOM. Some elements are far simpler to render using the DOM (e.g. text!) and may be optimized by the browser's (or OS's) native code more than canvas. Finally, if you can get acceptable performance for a given component using either approach (DOM or canvas), use the one that makes the code simplest for managing that type of component.
Best advice is to try isolated portions in the different approaches, run with a profiler, use techniques to over-draw or otherwise push the limits to see which approach can run fastest, and do NOT optimize before you have to. The caveat to this rule is the question you are asking: how do I know in advance which technical approach is going to allow the best performance? If you pick one based on assuming the answer, you are basically prematurely optimizing and will live with the arbitrary pain this causes. If instead you are picking by rapid prototyping or (even better) controlled experiments that focus on the needs of your application, you are doing R&D :)
Browserquest displays its HUD using HTML elements, which has the benefit that you don't have to worry about redrawing etc. (and the performance will be pretty good, given that the entire browser engine is optimized to render the DOM pretty fast).
They (Browserquest) also use several layered canvas elements for different game elements. I don't know the exact structure, but I guess which canvas an element is displayed on depends on how often it needs to be redrawn.
Do you have any experience with the following problem: JavaScript has to run hundreds of performance-intensive function calls which cannot be skipped, causing the browser to feel frozen for a few seconds (e.g. no scrolling or clicking)? Example: imagine 500 calls for getting an element's height and then doing hundreds of DOM modifications, e.g. setting classes etc.
Unfortunately there is no way to avoid the performance-intensive tasks. Web workers might be an approach, but they are not very well supported (IE...). I'm thinking of a timeout- or callback-based step-by-step rendering that gives the browser time to do something in between. Do you have any experience you can share on this?
Best regards
Take a look at this topic; it is related to your question:
How to improve the performance of your java script in your page?
If you're doing that much DOM manipulation, you should probably clone the elements in question (or the relevant part of the DOM), do the changes on the cached version, and then replace the whole thing in one go or in larger sections, not one element at a time.
What takes time isn't so much the calculations and functions etc. but the DOM manipulation itself, and doing that only once, or a couple of times in sections, will greatly improve the speed of what you're doing.
As far as I know, web workers aren't really for DOM manipulation, and I don't think there will be much of an advantage in using them, as the problem probably is that you are changing a huge number of elements one by one instead of replacing them all in the DOM in one batch.
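A sketch of that clone-and-replace idea (the container id, the ".row" selector and the class name are all placeholders; note that event listeners attached to the old elements are not cloned):

var container = document.getElementById('container'); // placeholder target element

// work on a detached clone so the hundreds of class changes don't
// each trigger layout/paint work in the live document
var clone = container.cloneNode(true);
var rows = clone.querySelectorAll('.row');             // placeholder selector

for (var i = 0; i < rows.length; i++) {
    rows[i].classList.add('highlight');                // placeholder modification
}

// swap the modified clone back in with a single DOM operation
container.parentNode.replaceChild(clone, container);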
Here is what I can recommend in this case:
1. Check the code again. Try to apply some standard optimisations as suggested, e.g. reducing lookups and making DOM modifications offline (e.g. with document.createDocumentFragment()...). Working with DOM fragments only helps in a limited way, though: retrieving element heights and doing complex formatting won't work sufficiently well that way.
2. If 1. does not solve the problem, create a rendering solution that runs on demand, e.g. triggered by a scroll event. Or: render step by step with timeouts to give the browser time to do something in between, e.g. handle a button click or scrolling.
A short example of the step-by-step rendering from point 2:
var elt = $(...);

function timeConsumingRendering() {
    // some rendering here related to the element "elt"

    elt = elt.next();
    if (elt.length) {
        // yield back to the browser before the next step, so it can
        // handle clicks and scrolling in between
        window.setTimeout(timeConsumingRendering, 0);
    }
}

// start
timeConsumingRendering();