I've studied the requestAnimationFrame documentation and read many posts about how to use it, but I still haven't gotten a clear answer to one particular question.
I understand that requestAnimationFrame schedules a task to be executed right at the beginning of the next frame, so code that does DOM manipulation has a better chance of finishing before the next paint cycle (unlike setInterval or setTimeout, which often execute much later, causing the well-known 'running out of time before the frame gets painted' problem and dropped frames).
1. The recursive way
The simplest way to use requestAnimationFrame is the following:
function animate() {
  requestAnimationFrame(animate);
  // drawing code comes here
}
requestAnimationFrame(animate);
This gives you a smooth animation if you have something that needs to be updated frequently, with the benefit of not dropping frames during your animations. It usually results in 60fps animations, but if your browser and screen support 144Hz, you can easily end up with 144fps animations (a ~6.9ms cycle).
2. FPS-limited animations
Other examples introduce ways to limit the fps to a certain number. The following snippet shows how to limit your animation to 30fps:
const fpsInterval = 1000 / 30;
let previousTime = 0;

function animate(time) {
  requestAnimationFrame(animate);

  const deltaTime = time - previousTime;
  if (deltaTime > fpsInterval) {
    // Get ready for the next frame by setting previousTime = time, but also
    // adjust for the specified fpsInterval not being a multiple of rAF's interval (16.7ms)
    previousTime = time - (deltaTime % fpsInterval);

    // drawing code comes here, inside the if, so it only runs at the limited rate
  }
}
requestAnimationFrame(animate);
3. One-off animations
I've been wondering about a third case, where you just want your animation to be scheduled precisely, even if you only have one or a few updates per second.
The best example is when you have a WebSocket connection and each update triggers a DOM manipulation, but the update rate is far too low to justify a recursive loop.
// setting up the websocket connection
ws.onmessage = (event) => {
  // changing application state
  myApplicationState = JSON.parse(event.data);
  requestAnimationFrame(animate);
};

function animate() {
  // drawing code comes here
}
Now here is my question for you all:
Does it make sense to call requestAnimationFrame directly from the callback of a WebSocket onmessage handler, or should I be using the recursive way?
So far I haven't tested it (work in progress), but I have a feeling it will still give you the benefit of well-timed animations that can be executed without dropping a frame.
My real-life example is similar: I only have 5 messages per second, and I'd like to call requestAnimationFrame ONLY 5 times per second.
My thoughts on doing this vs. the recursive way:
Using requestAnimationFrame recursively dramatically increases the script execution time measured in the Chrome profiling tools.
Only calling requestAnimationFrame when a WebSocket message arrives should still give you the benefit of the feature, while not polluting the call stack and reducing execution time.
My initial measurements were the following. I spun up Chrome's profiler, ran it for 10 seconds, and measured the script execution times (I'm not measuring render or paint since they are basically identical):
Script execution times:
recursive way: 4500ms
fps limited way: 4300ms
one-off animated way: 1700ms
While the recursive requestAnimationFrame solution gives you a super smooth and good user experience, it's also very costly in CPU and execution time.
If you have multiple components doing animations with recursive requestanimationframe, you're going to hit a CPU bottleneck pretty soon.
Oddly, this last case causes some fps drops, which I don't understand. My understanding is that you can call requestAnimationFrame whenever you want and it will only execute at the beginning of the next frame. But it seems there is something I don't know about.
Here is a picture of what is happening. I still don't understand it: the requestAnimationFrame callback was called before the end of the frame, but because it was part of a bigger function call, it's marked as 'dropped' in Chrome. I wonder if that's just a bug in the Chrome profiler or whether the frame was really dropped.
I wonder what you all think about this topic. I'll update this post with some Chrome performance metrics soon.
There seems to be some misconception that requestAnimationFrame (rAF) magically prevents dropped frames by ensuring that whatever is executed in it will somehow run fast enough. It doesn't.
requestAnimationFrame is just a timer for "right before the next paint"*.
Its main goal is to limit the number of callbacks to just what is needed, avoiding wasted drawing operations that won't even be rendered on screen.
It does actually allow frames to be dropped smartly: if one execution took the time of three frames to render, it won't stupidly try to execute the three missing callbacks as soon as possible; instead it will nicely discard them and let your script recover from the hiccup.
So using it for updating something that doesn't match the screen refresh rate is not very useful.
One should remember that calling requestAnimationFrame is not free: it marks the document as animated and forces the event loop to enter the update-the-rendering steps, which in itself has a performance cost. So if what you are doing in there is not going to update the rendering of the page, it can actually be detrimental to wrap your code in a rAF callback.
Still, there are cases where it can make sense. For instance, in complex pages it may be good to have a method that batches all the changes to the DOM in a rAF callback, and to have all the scripts that need to read CSSOM boxes do so before these changes take effect, thus avoiding useless and costly reflows.
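For instance, a minimal sketch of such a batching helper could look like this (writeQueue and scheduleWrite are made-up names for illustration, not any particular library's API):

// A minimal sketch of batching DOM writes in a single rAF callback.
const writeQueue = [];
let flushScheduled = false;

function scheduleWrite(mutate) {
  writeQueue.push(mutate);
  if (!flushScheduled) {
    flushScheduled = true;
    requestAnimationFrame(() => {
      flushScheduled = false;
      // All queued DOM mutations run together, right before paint, so layout
      // reads done earlier in the frame don't trigger extra reflows.
      for (const job of writeQueue.splice(0)) job();
    });
  }
}

// Callers queue mutations instead of touching the DOM immediately:
scheduleWrite(() => { document.body.style.setProperty('--sidebar-width', '200px'); });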
Another case is avoiding script execution when the page is in the background: rAF is heavily throttled in background tabs, so if you have a script that doesn't need to run when the page is hidden (e.g. a clock or similar), it may make sense to wrap a timer in a rAF callback to take advantage of this heavy throttling.
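A minimal sketch of that idea, assuming a clock that only needs to tick while the page is visible (startClock is a made-up helper):

// A once-per-second clock gated behind rAF. Because animation frame callbacks
// are heavily throttled (or paused) in background tabs, the per-second work
// effectively stops while the page is hidden.
function startClock(render) {
  function tick() {
    render(new Date());
    // Re-arm the timer only from inside a rAF callback.
    setTimeout(() => requestAnimationFrame(tick), 1000);
  }
  requestAnimationFrame(tick);
}

// Illustrative usage: show the current time in the document title.
startClock((now) => { document.title = now.toLocaleTimeString(); });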
*Actually, both Chrome and Firefox have broken this conception. In these browsers, if you call requestAnimationFrame(cb) from a non-animated document (a document where no animation frame callback is scheduled and no mouse event occurred in the last frame), they will force the update-the-rendering steps to fire immediately, making that frame not synced with the monitor (and, in Chrome, sometimes not even rendered on the screen at all).
Related
I have a javascript client that runs on a web page, drawing with requestAnimationFrame to the canvas and communicating via websockets to my NodeJS backend server (using the 'ws' module on the server side).
Profiling with Chrome DevTools, it seems that the combined time for scripting, rendering, and drawing each frame is at most a few milliseconds. Yet there's still jank: long frames of 20-40ms.
The timeline shows that in almost all of these cases there is a "response" that exceeds the length of the frame and/or a "Composite Layers" that occurs towards the end too.
This is essentially how I'm using requestAnimationFrame:
function drawGame() {
// Drawing to gameCanvas from cacheCanvas
// cacheCanvas is updated whenever an update is received from the server
ctx.drawImage(cacheCanvas,
// source rectangle
0, 0,
gameCanvas.width*2, gameCanvas.height*2,
// destination
100, 100,
gameCanvas.width*2, gameCanvas.height*2
);
requestAnimationFrame(drawGame);
}
requestAnimationFrame(drawGame);
The server sends updates using setInterval() at 60hz. When a message is received from the server, the client immediately draws it. I suspect that this timing may be incorrect in conjunction with requestAnimationFrame, and is leading to the composite layers at the end of the frame.
Even so, I'm confused as to why there is so much idle time in-between scripting and "composite layers" for each frame.
So...
Is there a way to control when "composite layers" is called?
Should I be saving the data from each update message and only drawing it at the beginning of the next animation frame?
What is the "response" referring to?
Thanks!
The version of Chrome, rendering options, and video drivers may all affect this. Post that information with your question. Also try searching on the Chromium bug list.
You can also try the latest dev build of Firefox which is supposed to have better performance by using multiple processes.
To determine whether server responses etc. have anything to do with performance, remove them and use fake data from the client only as a test.
I think you hit on some of the problems there.
Solutioning:
Let's talk about potential solutions as a TLDR, and then explain how I get there.
1. Cache your messages to a buffer (e.g. push them into an array) when the socket sends data; draw the buffered messages in the next animation frame; clear the buffer (or at least the messages that have been drawn) to await the next batch (there's a short sketch of this right after this list).
2. Don't do heavy processing (drawing is one of the heaviest possible) on the main thread during I/O event handling.
  a. If this is still not good enough, move the WebSocket (and data parsing, etc.) into a WebWorker, and get the data handling off of the main thread.
  b. If 2a is still not good enough, also make your canvas an OffscreenCanvas which animates in the worker and draws to a "bitmaprenderer" context (not "2d") on the main thread... or just have a "2d" canvas (or whatever you are using) on the front end and use .transferControlToOffscreen() to move the draw calls into the WebWorker.
  c. Regardless of the solution in 2b, continue to draw based on the animation frame, not whenever a WebSocket hands you data, if animation is at all important (if you are just updating a bar chart with new data every few seconds, none of this, including Chrome's complaints, matters).
3. You have a weird thing going on where you are only drawing portions of your canvas images, and you don't explain why... but if ctx belongs to gameCanvas and you are drawing to 100, 100, canvas.width * 2, canvas.height * 2, then something is off, because you are drawing at 2x the size of the canvas and showing only the top-left quadrant of the drawing, with a padding-top and padding-left of 100px... and that seems like a lot of waste (though it's not actually going to make you pay for all of the missed draw calls, checking the bounds is something you should be doing yourself). Of course, if ctx isn't owned by gameCanvas and ctx.canvas.width is actually 100px + 2 * gameCanvas.width, then feel free to disregard all of #3.
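A rough sketch of points 1 and 2, assuming ws, ctx and cacheCanvas are the ones from the question, and applyUpdates is a hypothetical function that repaints cacheCanvas from the buffered state:

// Buffer messages in the I/O handler; draw once per animation frame.
const pending = [];        // messages received since the last drawn frame
let frameRequested = false;

ws.onmessage = (event) => {
  // Bare minimum in the event handler: parse and buffer.
  pending.push(JSON.parse(event.data));
  if (!frameRequested) {
    frameRequested = true;
    requestAnimationFrame(drawGame);
  }
};

function drawGame() {
  frameRequested = false;
  // Consume everything that arrived since the last frame, then draw once.
  applyUpdates(pending.splice(0));
  ctx.drawImage(cacheCanvas, 0, 0);
}

If you keep a continuous rAF loop instead (point 2c), the loop can simply consume pending on every frame rather than scheduling a frame per batch.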
This isn't guaranteed to solve all of your problems, but I do think these go a long way to smoothing out performance, by decoupling WebSocket and data parsing from your actual drawing performance... and preventing duplicate drawing actions (where one is potentially delayed by the other).
Justification:
Ultimately, I think these problems come down to the following:
frame-pacing
browser animation-frame scheduling
timing of network handling
time spent on main thread, during event callbacks
First, it sounds like your frame-pacing is off, and that will show up in Chrome's complaints. If you're comfortable with frame-pacing, skip the following paragraph.
If you aren't familiar with the concept of frame-pacing, imagine that you are running at a solid 30fps (~33.3ms/frame), but some frames take, say, 30ms, and some frames take 36ms... in that regard, while the average framerate might still be correctly described as 30fps, in human experience some of your frames are now 20% longer than others (30ms followed by 36ms), and your eye notices the judder; presuming your animation requests were aiming for 30fps (probably 60+), Chrome is going to highlight every frame that pushes past the 33.3ms budget (or ~16.6ms for 60fps).
The next thing to understand is that requestAnimationFrame tries as hard as it can to lock itself to your monitor's refresh rate (or clean fractions thereof); back to the frame-pacing. But here's the problem: in your case this canvas is on the main thread (and I presume your WebSocket... and the initial paint for the other canvas...), and all of these things threaten to push the timing of your animation callback off. Consider setTimeout(f, 100). It seems like f will run in exactly 100ms. But that's not true: it's guaranteed to run at some point at least 100ms from now. If, 99.8ms from now, a 10.2ms process starts running, then f won't run for 110ms, even though it was scheduled for 100ms.
In reality, we are talking about 60fps, or 120fps, or 144fps, or 165fps. This monitor is 144Hz, so I would expect 144fps or 72fps or 36fps updates, but even assuming a lax 30fps, the problem is that the timing is really fragile. A 4ms update, if it happens at the wrong time (i.e. right before an animation callback is scheduled to run), is going to mess up your pacing and show up on that Chrome timeline as a warning (that 4ms is a 10%+ delay at 30fps, 20%+ at 60fps, etc.). This is also why your idle times are going to be huge. The thread is sitting and waiting and doing nothing... and just before it's ready to run the next animation frame at the perfect time to fit in with your screen refresh, a WebSocket message comes in, and then you do a billion things in that event (drawing in a 2D canvas is a huge for loop, even if the API hides it), which delays the calling of the animation frame.
The last two I will sum up like this:
In JS, there is a saying... "Do not block the main thread". It's not really a saying; it's a state of being, a way of life. Do. Not. Block. The. Thread. Drawing pixels on a canvas (which is later going to have its pixels drawn onto another canvas), and doing that inside of an event callback, is the epitome of blocking the main thread. It would be like having a 3,000-line function run on window.onscroll or window.onmousemove. It doesn't matter how fast your PC is; your page performance is going to tank. In your handler, especially if it is an oft-fired handler, do the bare minimum to prep the data, store it for later, and either return if you are set up to poll for this data (like a game loop), or schedule something like setTimeout(f, 0), Promise.resolve().then(f), or requestIdleCallback (if it's a low-importance thing) to look at it later.
To sum it up, performance is critical, but performance isn't just the time it takes to run, it's also the precision of the time when it runs. Keep things off the main thread, so that this time can stay as accurate as possible.
I am developing a game using HTML5 Canvas and JavaScript. Initial fps is decent but as the game continues the fps decreases. The initial fps is around 45 fps but it reduces to 5 fps.
Following is my gameloop
var last_draw = Date.now(); // Tracks when GameDraw was last called
var fps;
function gameloop()
{
  var elapsed = Date.now() - last_draw;
  last_draw = Date.now();
  fps = 1000 / elapsed;
  context.clearRect(0, 0, canvas.width, canvas.height); // Clears the canvas (width first, then height).
  GameUpdate(); // Updates the properties of all game elements.
  GameDraw();   // Draws all visible game elements on the canvas.
  window.requestAnimationFrame(gameloop); // Requests the next frame
}
window.requestAnimationFrame(gameloop);
I have tested this in the following browsers:
Mozilla Firefox 32.0.3
Google Chrome 38.0.2125.101 m
My questions are:
Why is rAF calling it less frequently as the game continues?
Is it due to a memory leak?
Is it because the time taken by GameDraw and GameUpdate is very high?
Is the time to execute the GameDraw function different from the time actually taken to draw elements on the canvas? GameDraw calls the draw function of each game element.
You'll find a lot of online tutorials about optimizing canvas performance. It's not about using this-or-that function, it's about the amount of processing that happens between each two frames.
Since your question(s) can't have one solid answer, I'll briefly address each of the sub-questions:
Why is rAF calling it less frequently as the game continues?
Like you guessed in the next question, something is leaking: it could be anything from, say, adding more textures, event listeners, DOM objects, etc. in every cycle... to simply having too many JS objects piling up because they remain referenced, so the Garbage Collector can't get rid of them. But the bottom line is that you need to discover what is changing/increasing between every two frames.
Is it due to a memory leak?
Very probable, and yet so easy to test. In Chrome, Shift+Escape opens the task manager where you can see memory, cpu, etc. usage for each open tab. Monitor that.
Is it because the time taken by GameDraw and GameUpdate is very high?
Most definitely! This could also be causing memory leaks. Learn to do CPU and canvas profiling; it will help you a lot. I believe canvas profiling in Chrome is still an experimental feature, so you'd need to enable it first from the config flags. These two functions are where 99% of the lag comes from, so investigate what's going on there.
Is the time to execute the GameDraw function different from the time actually taken to draw elements on the canvas? GameDraw calls the draw function of each game element.
That shouldn't matter, because both of them are blocking code, meaning that one will only run after the other. The time to render a frame is roughly the sum of the two. Again, proper canvas rendering optimization can do wonders here.
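If you want a rough feel for where the frame budget goes before setting up full profiling, a sketch like this (reusing the loop from the question, with performance.now()) can log the frames that blow the 60fps budget:

// Quick, non-authoritative timing of the two functions from the question.
function gameloop()
{
  var t0 = performance.now();
  context.clearRect(0, 0, canvas.width, canvas.height);
  GameUpdate();
  var t1 = performance.now();
  GameDraw();
  var t2 = performance.now();
  if (t2 - t0 > 16) { // only log frames that exceed the ~16.7ms budget for 60fps
    console.log('update: ' + (t1 - t0).toFixed(1) + 'ms, draw: ' + (t2 - t1).toFixed(1) + 'ms');
  }
  window.requestAnimationFrame(gameloop);
}
window.requestAnimationFrame(gameloop);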
I have been programming a JavaScript demo/test to learn WebGL. I have a fairly efficient game loop structure that (according to Chrome DevTools) only takes 1-2 milliseconds to run. I am using requestAnimationFrame to schedule the loop (because this is apparently the "proper" way to do 60fps animation). When I look at the Timeline for a frame, the actual JavaScript is minimal, but the 'idle' part of the frame can push the frame well over the 30fps line. The FPS counter shows 20-40fps with lots of drops (almost like a sawtooth).
Is there anything else I can account for if my rendering loop is already 1-2ms when it has to fit into 16 ms to run 60fps?
If I convert the loop into a setTimeout loop it can hold 60fps easily. I can even render it in Retina Resolution without impacting the 60fps.
e.g.
// Timeout version
function gameLoop()
{
  setTimeout(gameLoop, 1000 / 60);
  // Process movement, AI, game logic
  renderLoop();
}

function renderLoop()
{
  // Drawing all of the 3d stuff
}
vs.
function gameLoop()
{
  requestAnimationFrame(gameLoop);
  // Process movement, AI, game logic
  renderLoop();
}

function renderLoop()
{
  // draw objects
}
I also at some point had the gameLoop running "separately" on a setTimeout while the renderLoop was being called by requestAnimationFrame. Since they are all on the same thread, this seems a bit dodgy, since they could step on each other's toes.
The requestAnimationFrame implementation varies between browsers, and it is up to each browser to maintain its underlying behaviour.
There is no guarantee that it will render at 60fps; the only guarantee is that your function will be executed as close as possible to the moment of rendering (just before swapping buffers to send image data to the screen).
If you use setTimeout you might get more frequent function calls, but that does not necessarily give you 60fps, as the screen might still refresh at 30fps or some other rate. In that case you are rendering too often, which is GPU- and energy-inefficient (especially on mobile).
Most people couple their update and render logic at a single frequency (in the same function). In any case you need to update your logic (speeds, etc.) with a delta-time modifier, as sketched below.
That way, even at 30fps, the speed at which things move will stay the same, regardless of fps.
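A minimal sketch of that delta-time modifier (player and render are placeholders, not anything from the question):

// Delta-time based update: movement speed stays constant regardless of fps.
let lastTime = performance.now();

function frame(time) {
  requestAnimationFrame(frame);
  const dt = (time - lastTime) / 1000;  // seconds elapsed since the previous frame
  lastTime = time;

  // Speeds are expressed in units per second, so movement is fps-independent.
  player.x += player.speedX * dt;

  render();
}
requestAnimationFrame(frame);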
My problem is that my javascript/canvas performs very slowly on lower end computers (Even though they can run even more challenging canvas scripts smoothly).
I'm trying to do a simple animation depending on user selection.
Drawing on the canvas directly proved to be too slow, so I drew on a hidden canvas and saved all frames (getImageData) into data, then called animate(1); to draw on my real canvas.
function animate(i){
if(i < 12){
ctx2.putImageData(data[i], 0, 0);
setTimeout(function(){animate(i+1)},1);
}
}
But even this is too slow. What do I do?
Do not use putImageData if you can help it. The performance on FF3.6 is abysmal:
[Performance chart comparing putImageData across browsers omitted; source: phrogz.net]
Use drawing commands on off-screen canvases and blit sprites to sub-regions using drawImage instead (there's a short sketch of this at the end of this answer).
As mentioned by #MartinJespersen, rewrite your frame drawing loop:
var animate = function(){
// ...
setTimeout(animate,30); //Max out around 30fps
};
animate();
If you're using a library that forces a clearRect every frame, but you don't need that, stop using that library. Clear and redraw only the portions you need.
Use a smaller canvas size. If you find it sufficient, you could even scale it up using CSS.
Accept that slow computers are slow, and that you are standing on the shoulders of a great many abstraction layers. If you want to eke out performance for low-end computers, write in C++ and OpenGL. Otherwise, set minimum system requirements.
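To make the off-screen canvas / drawImage suggestion concrete, here is a small sketch; ctx2 is assumed to be the visible canvas context from the question, and the sprite itself is just an example:

// Pre-render a sprite once to an off-screen canvas, then blit it with
// drawImage each frame (much cheaper than putImageData).
const sprite = document.createElement('canvas');
sprite.width = 32;
sprite.height = 32;
const sctx = sprite.getContext('2d');
sctx.fillStyle = 'tomato';
sctx.beginPath();
sctx.arc(16, 16, 15, 0, Math.PI * 2);
sctx.fill();

// Inside the animation loop, just copy the pre-rendered pixels:
ctx2.drawImage(sprite, 60, 40);   // destination x/y chosen arbitrarily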
The timeout you specified is 1 millisecond. No browser can update the canvas that fast. Change it to 1000, which is 1 second, i.e.:
setTimeout(function(){animate(i+1)}, 1000)
UPD. Another thing to try is to prepare as many canvases as there are frames in your animation, set all of them to display:none, then turn display:block on them sequentially. I doubt it's going to be faster than putImageData, but it's still worth trying.
As already mentioned timeouts with 1 millisecond interval are doomed to fail, so the first step is to stop that.
You are calling setTimeout recursively, which is not ideal for creating animations. Instead, initiate all the setTimeouts you need for the entire animation at the same time, with increasing delays in a loop, and let them run their course; or better yet use setInterval, which is the better way of doing animations and how, for instance, jQuery's animations work.
It looks like you are trying to redraw the entire canvas at each step of your animation. This is not optimal; try manipulating only the pixels that change. The "more challenging canvas scripts" you linked to are actually a lot simpler than what you are trying to do, since they are all vector-based math, which is what the canvas element is optimized for. It was never made to do a full re-render every x milliseconds, and it likely never will be.
If what you really need is to change the entire image for every frame of your animation, don't use canvas; use normal image tags with preloaded images, and it will run smoothly even in IE6 on a single-core Atom.
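A rough sketch of that approach; frameUrls and container are hypothetical placeholders for your pre-rendered frame images and the element hosting them, and the 12fps interval is just an example:

// Preload every frame as an <img>, then just toggle visibility per frame.
const frames = frameUrls.map((url) => {
  const img = new Image();
  img.src = url;                 // preload
  img.style.display = 'none';
  container.appendChild(img);
  return img;
});

let current = 0;
setInterval(() => {
  frames[current].style.display = 'none';
  current = (current + 1) % frames.length;
  frames[current].style.display = 'block';
}, 1000 / 12);                   // ~12 frames per second, chosen for illustration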
I've got an app that works kind of like Google maps - it lets you click and pan over a large image. I redraw my Canvas heavily, sampling and scaling from a big image each redraw.
Anyway, I happened to try a dual-canvas approach: drawing to a (larger) buffer canvas when needed, then doing canvas_display.drawImage(canvas_buffer) to output a region to the screen. Not only did I not see a performance gain, it got significantly slower on the iPhone. Just a data point...
OK, first things first. What else is happening while you're doing this animation? Any other JavaScript, any other timers, any other handlers? The answer, by the way, cannot be "nothing". Your browser is repainting the window, or at least the bits you're changing. And if other JavaScript is "running", remember that's not strictly true: JavaScript is single-threaded by design. Code can only queue for execution, so if some other JavaScript is hogging the thread, yours won't get a look in.
Secondly, learn about how timers work. http://ejohn.org/blog/how-javascript-timers-work/ is my personal favorite post on this. In particular, setTimeout is asking the browser to run something after at least the specified time, but only when the browser has an opening to do that.
Third, know what you're doing with function(){animate(i+1);}. That anonymous function can only exist within the scope of its parent. In other words, when you queue up a function like this, the parent scope is kept alive, as @MartinJespersen pointed out. And since that function queues up another, and another, and another... each is going to get progressively slower.
I've put everything discussed in a little fiddle:
http://jsfiddle.net/KzGRT/
(the first time I've ever used jsfiddle, so be kind). It's a simple 10-frame animation at (nominally) 100ms, using setTimeout for each. (I've done it this way instead of setInterval because, in theory, the one that takes longer to execute should start lagging behind the others. In theory - again, because javascript is single-threaded, if one slows down, it would delay the others as well).
The top method just has all ten images drawn on overlapping canvases, with only one showing at a time; animation is just hiding the previous frame and showing the next. The second performs putImageData into a canvas from a top-level function. The third uses an anonymous function as you tried. Watch for the red flash on frame zero and you'll see which one is executing quickest. For me it takes a while, but they eventually begin to drift (in Chrome, on a decent machine; it should be more obvious in FF on something lower-spec).
Try it on your low-end test machine and see what happens.
I did the setTimeout this way; hope it helps somebody with boosting their application:
var canDraw = true; // note: 'do' is a reserved word, so a different name is needed
var last = false;
window.onmousemove = function(evt){
    // E is the cursor-state object and cvs the canvas element (defined elsewhere)
    E.x = evt.pageX - cvs.offsetLeft;
    E.y = evt.pageY - cvs.offsetTop;
    if(canDraw){
        draw();
        canDraw = false;
        // in 23 ms drawing is enabled again
        setTimeout(function(){ canDraw = true; }, 23);
    }else{
        // the last scheduled draw must still run, to catch the final cursor position
        clearTimeout(last);
        last = setTimeout(function(){ draw(); }, 23);
    }
};
I'm starting on a JavaScript MMORPG that will actually work smoothly. Currently I've created a demo to prove that I can move characters around and have them chat with each other, as well as see each other move around live.
http://set.rentfox.net/
Now, JavaScript timers are something I have not used extensively, but from what I know (correct me if I'm wrong), having multiple setIntervals running at the same time doesn't really work well, because it's all on a single thread.
Let's say I wanted to have 10 different people nuking fireballs at a monster by using sprite background positioning with setInterval. That animation would require 10 setIntervals repainting the DOM for the background-position shifts. Wouldn't that be very buggy?
I was wondering if there was a way around all this, perhaps using Canvas, so that animations can all happen concurrently without creating an event queue and I don't have to worry about timers.
Hope that makes sense, and please let me know if I need to clarify further.
The issue with multiple setIntervals is twofold. The first is, as you indicate, that since all JavaScript in browsers is (currently) single-threaded, one timer's execution may hold up the next timer's execution. (Worker threads are coming, though; Firefox already has them, as does Safari 4 [and maybe others].) The second is that the timer fires at a set interval, but if your handler is still running when that interval expires, the second firing is completely skipped. That is, the timer can interfere with itself.
That last part needs more explanation: say you have a setInterval at 10ms (which is about the fastest you can reasonably expect any implementation to run it; many are clamped so that they don't go faster than that). If your handler takes 13ms, the interval that should have fired 10ms after it began will be completely skipped.
I usually use setTimeout for this kind of thing. When my handler is triggered, I do my work and then schedule the next event at the end of the handler. Then (within the bounds of what you can be certain of), I know the next event will happen at that interval.
For what you're doing, it seems like a single "pulse" timer would be best, working through whatever it needs to do on each pulse. Whether that pulse timer uses setInterval or setTimeout is a judgment call based on what you see with your actual code.
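A sketch of what that single pulse could look like, using the self-scheduling setTimeout pattern from the previous paragraph (the animations registry is purely illustrative):

// A single self-scheduling "pulse": every animation registers a step function,
// and one timer advances them all.
const animations = [];           // each entry advances one animation by a step

function pulse() {
  for (const step of animations) step();
  // Re-arm only after this pulse finishes, so the handler can never pile up
  // on itself the way an overrunning setInterval can.
  setTimeout(pulse, 30);
}
pulse();

// Each fireball, character, etc. registers its own step function:
animations.push(() => {
  // e.g. shift a sprite's background-position or move a character one step
});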
+1 to T. J. Crowder; the answer was perfect. I strongly recommend learning to use Canvas over DOM nodes for game animation; the latter is slow and buggy, and will hang the browser in any non-trivial situation. OTOH, Canvas is much faster and can be hardware accelerated, and it even has a 3D context if you need it.