I have a javascript client that runs on a web page, drawing with requestAnimationFrame to the canvas and communicating via websockets to my NodeJS backend server (using the 'ws' module on the server side).
Profiling with Chrome DevTools, it seems that the combined time for scripting, rendering, and drawing each frame is a few milliseconds at most. Yet there's still jank -- long frames of 20–40 ms.
The timeline shows that in almost all of these cases there is a "response" that exceeds the length of the frame and/or a "Composite Layers" that occurs towards the end too.
This is essentially how I'm using requestAnimationFrame:
function drawGame() {
    // Drawing to gameCanvas from cacheCanvas
    // cacheCanvas is updated whenever an update is received from the server
    ctx.drawImage(cacheCanvas,
        // source rectangle
        0, 0,
        gameCanvas.width*2, gameCanvas.height*2,
        // destination
        100, 100,
        gameCanvas.width*2, gameCanvas.height*2
    );

    requestAnimationFrame(drawGame);
}

requestAnimationFrame(drawGame);
The server sends updates using setInterval() at 60 Hz. When a message is received from the server, the client immediately draws it. I suspect that this timing may be incorrect in conjunction with requestAnimationFrame, and is leading to the composite layers at the end of the frame.
Even so, I'm confused as to why there is so much idle time in-between scripting and "composite layers" for each frame.
So...
Is there a way to control when "composite layers" is called?
Should I be saving the data from each update message and only drawing it at the beginning of the next animation frame?
What is the "response" referring to?
Thanks!
The version of Chrome, rendering options, and video drivers may all affect this. Post that information with your question. Also try searching on the Chromium bug list.
You can also try the latest dev build of Firefox which is supposed to have better performance by using multiple processes.
To determine whether server responses etc. have anything to do with performance, remove them and use fake data from the client only as a test.
I think you hit on some of the problems, there.
Solutioning:
Let's talk about potential solutions as a TLDR, and then explain how I get there.
1. Cache your messages to a buffer (eg: push them into an array) when the socket sends data; draw the buffered messages in the next animation frame; clear the buffer (or at least the ones that have been drawn) to await the next set of messages. Don't do heavy processing (drawing is one of the heaviest possible) on the main thread during I/O event handling. (See the sketch after this list.)
2a. If this is still not good enough, move the WebSocket (and data parsing, etc.) into a WebWorker, and get the data handling off of the main thread.
2b. If 2a is still not good enough, also make your canvas an OffscreenCanvas which animates in the worker, and draws to a "bitmaprenderer" context (not "2d") on the main thread... or just have a "2d" canvas (or whatever you are using) on the front end and use .transferControlToOffscreen() to move the draw calls into the WebWorker.
2c. Regardless of the solution in 2b, continue to draw based on the animation frame, not whenever a WebSocket hands you data, if animation is at all important (if you are just updating a bar chart with new data every few seconds, none of this, including Chrome's complaints, matters).
3. You have a weird thing going on where you are only drawing portions of your canvas images, and you don't explain why... but if ctx belongs to gameCanvas and you are drawing to 100, 100, canvas.width * 2, canvas.height * 2, then something is off, because you are drawing at 2x the size of the canvas and showing only the top-left quadrant of the drawing, with a padding-top and padding-left of 100px... and that seems like a lot of waste (though the browser isn't actually going to make you pay for all of the clipped draw calls, checking the bounds is something you should be doing yourself). Of course, if ctx isn't owned by gameCanvas and ctx.canvas.width is actually 100px + 2 * gameCanvas.width, then feel free to disregard all of #3.
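Here is a minimal sketch of points 1 and 2c, assuming a WebSocket named socket, and using applyUpdateToCacheCanvas as a placeholder for whatever currently draws onto cacheCanvas when a message arrives:

const pendingUpdates = [];

// The I/O handler does the bare minimum: parse and store.
socket.onmessage = (event) => {
    pendingUpdates.push(JSON.parse(event.data));
};

// All drawing happens in the animation frame, never in the handler.
function drawGame() {
    if (pendingUpdates.length > 0) {
        // Draw every buffered update, or only the latest one,
        // depending on whether intermediate states matter to you.
        for (const update of pendingUpdates) {
            applyUpdateToCacheCanvas(update); // placeholder for your cacheCanvas drawing
        }
        pendingUpdates.length = 0;
        ctx.drawImage(cacheCanvas, 0, 0);
    }
    requestAnimationFrame(drawGame);
}

requestAnimationFrame(drawGame);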
This isn't guaranteed to solve all of your problems, but I do think these go a long way to smoothing out performance, by decoupling WebSocket and data parsing from your actual drawing performance... and preventing duplicate drawing actions (where one is potentially delayed by the other).
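If you do end up needing 2b, the .transferControlToOffscreen() variant looks roughly like this. This is only a sketch: the worker file name, the WebSocket URL, and renderFrame are all placeholders, and it assumes a browser with OffscreenCanvas support (where requestAnimationFrame is also available inside workers):

// main.js - hand the canvas to a worker and keep the main thread free
const gameCanvas = document.getElementById('gameCanvas');
const offscreen = gameCanvas.transferControlToOffscreen();
const worker = new Worker('render-worker.js');
worker.postMessage({ canvas: offscreen }, [offscreen]);

// render-worker.js - both the WebSocket and the drawing live here
let ctx = null;
const pendingUpdates = [];

onmessage = (event) => {
    ctx = event.data.canvas.getContext('2d');
    requestAnimationFrame(drawLoop);
};

const socket = new WebSocket('wss://example.invalid/game'); // placeholder URL
socket.onmessage = (event) => pendingUpdates.push(JSON.parse(event.data));

function drawLoop() {
    if (ctx && pendingUpdates.length > 0) {
        renderFrame(ctx, pendingUpdates.splice(0)); // placeholder for your drawImage logic
    }
    requestAnimationFrame(drawLoop);
}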
Justification:
Ultimately, I think these problems come down to the following:
frame-pacing
browser animation-frame scheduling
timing of network handling
time spent on main thread, during event callbacks
First, it sounds like your frame-pacing is off, and that will show up in Chrome's complaints. If you're comfortable with frame-pacing, skip the following paragraph.
If you aren't familiar with the concept of frame-pacing, imagine that you are running at a solid 30fps (~33.3ms/frame), but some frames take, say, 30ms, and some frames take 36ms... in that regard, while the average framerate might still be correctly described as 30fps, in human experience some of your frames are now 20% longer than other frames (30ms followed by 36ms), and your eye notices the judder; presuming your animation requests were aiming for 30fps (probably 60+), Chrome is going to highlight every frame that pushes longer than the 33.3ms budget (or ~16.6ms for 60fps).
The next thing to understand is that requestAnimationFrame tries as hard as it can to lock itself to your monitor's refresh rate (or clean fractions thereof), which brings us back to frame-pacing. Here's the problem: because in your case this canvas is on the main thread (and I presume your websocket... and the initial paint for the other canvas...), all of these things are threatening to push the timing of your animation callback off. Consider setTimeout(f, 100). It seems like f will run in exactly 100ms. But that's not true. It's only guaranteed to run at some point at least 100ms from now. If, 99.8ms from now, a 10.2ms process starts running, then f won't run for 110ms, even though it was scheduled for 100ms.
In reality, we are talking about 60fps, or 120fps, or 144fps, or 165fps. This monitor is 144Hz, so I would expect 144fps or 72fps or 36fps updates, but even assuming the lax 30fps, the problem is that the timing is really fragile. A 4ms update, if it happens at the wrong time (ie: right before an animation callback is scheduled to run) is going to mess up your pacing, and show up on that Chrome timeline as a warning (that 4ms is a 10%+ delay for 30fps, it's 20%+ for 60fps, etc). This is also why your idle times are going to be huge. It's sitting and waiting and doing nothing... and just before it's ready to run the next animation frame at the perfect time to fit in with your screen refresh... a WebSocket message comes in, and then you do a billion things (like drawing in a 2D canvas is a huge for loop, even if it's hidden by the API) in that event, which delays the calling of the animation frame.
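If you want to see this pacing jitter for yourself, a few throwaway lines in the console will show it; this simply logs any frame whose delta runs well past the ~16.7ms budget of a 60Hz screen (the 5ms of slack is arbitrary):

let lastFrame = performance.now();

function watchPacing(now) {
    const delta = now - lastFrame;
    lastFrame = now;
    if (delta > 16.7 + 5) {
        console.log('long frame: ' + delta.toFixed(1) + 'ms');
    }
    requestAnimationFrame(watchPacing);
}

requestAnimationFrame(watchPacing);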
The last two I will sum up like this:
In JS, there is a saying... "Do not block the main thread". It's not really a saying. It's a state of being; a way of life. Do. Not. Block. The. Thread. Drawing pixels on a canvas (which is later going to have its pixels drawn on another canvas), and doing that inside of an event callback, is the epitome of blocking the main thread. It would be like having a 3,000 line long function run on window.onscroll or window.onmousemove. It doesn't matter how fast your PC is, your page performance is going to tank. In your handler, especially if it is an oft-fired handler, do the bare minimum to prep the data, store the data for later, and either return if you are set up to poll for this data (like a game loop), or schedule something (setTimeout(f, 0), Promise.resolve().then(f), or requestIdleCallback if it's a low-importance thing, etc.) to look at it later.
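As a rough illustration of "bare minimum in the handler, heavy work later" for any hot event (the element, pending queue, and processItem names here are just for illustration):

const pending = [];
let scheduled = false;

function handleLater() {
    scheduled = false;
    // Heavy work goes here: parsing large payloads, stats, anything not needed for the next paint.
    while (pending.length > 0) {
        processItem(pending.shift()); // placeholder
    }
}

element.onmousemove = (evt) => {
    pending.push({ x: evt.pageX, y: evt.pageY }); // bare minimum: record and return
    if (scheduled) return;
    scheduled = true;
    if ('requestIdleCallback' in window) {
        requestIdleCallback(handleLater);        // low-importance work waits for idle time
    } else {
        Promise.resolve().then(handleLater);     // otherwise a microtask, right after the handler returns
    }
};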
To sum it up, performance is critical, but performance isn't just the time it takes to run, it's also the precision of the time when it runs. Keep things off the main thread, so that this time can stay as accurate as possible.
Related
I've studied the requestAnimationFrame documentation and looked at many posts about its usage, but I still haven't got a clear answer to one of my thoughts.
I understand that requestAnimationFrame schedules a task to be executed right at the beginning of the next frame, so code that does DOM manipulation has a better chance of being finished and painted before the paint cycle (unlike setInterval or setTimeout, which usually execute a lot later, causing the well known 'running out of time before the frame gets painted' => dropped frames).
1. The recursive way
The simplest example using requestAnimationFrame is the following:
function animate() {
    requestAnimationFrame(animate);
    // drawing code comes here
}
requestAnimationFrame(animate);
This will give you a smooth animation if you have something that needs to be updated frequently, and it also gives you the benefit of not dropping any frames during your animations. This usually gives you 60fps animations, but if your browser and screen support 144Hz, then you can easily end up with 144fps animations (6.95 ms per cycle).
2. Fps limited animations
Other examples also introduce ways to limit the fps to a certain number. The following code snippet shows how to limit your animation to 30 fps:
const fpsInterval = 1000 / 30;
let previousTime = 0;

function animate(time) {
    requestAnimationFrame(animate);
    const deltaTime = time - previousTime;
    if (deltaTime > fpsInterval) {
        // Get ready for the next frame by setting previousTime, but also adjust for your
        // specified fpsInterval not being a multiple of rAF's interval (16.7ms)
        previousTime = time - (deltaTime % fpsInterval);

        // drawing code comes here
    }
}
requestAnimationFrame(animate);
3. One-off animations
I've been wondering about a third case, when you just want your animation to be scheduled precisely, even if you only have one or a few updates per second.
The best example is when you have a websocket connection and each update introduces a DOM manipulation, but the update rate is far too low to justify the recursive way.
// setting up websocket connection
ws.onmessage = (event) => {
    // changing application state
    myApplicationState = JSON.parse(event.data);
    requestAnimationFrame(animate);
};

function animate() {
    // drawing code comes here
}
Now here is my question for you all:
Does it make sense to call requestAnimationFrame right from the callback of a websocket onmessage function, or should I be using the recursive way?
So far I haven't tested it (work in progress), but I have a feeling it will still give you the benefit of well-timed animations that can be executed without dropping a frame.
My real-life example is similar: I only get 5 messages per second, and I'd like to call requestAnimationFrame ONLY 5 times per second.
My thoughts on doing this vs the recursive way:
Using requestAnimationFrame in a recursive way greatly increases the script execution time when measured in the Chrome profiling tools.
Only calling requestAnimationFrame when a websocket message arrives should still give you the benefit of the feature, while not polluting the call stack and reducing execution time.
My initial measurements were the following. I spun up Chrome profiling, ran it for 10 seconds and measured the script execution times (we're not measuring render or paint since they are basically identical):
Script execution times:
recursive way: 4500ms
fps limited way: 4300ms
one-off animated way: 1700ms
While the recursive requestAnimationFrame solution gives you a super smooth and good user experience, it's also very costly for your CPU and execution times.
If you have multiple components doing animations with recursive requestanimationframe, you're going to hit a CPU bottleneck pretty soon.
Oddly, this last case is causing some fps drops, which I do not understand. My understanding is that you can call requestAnimationFrame whenever you want and it will only execute at the beginning of the next frame. But it seems there is something I don't know about.
Here is a picture of what is happening. I still don't understand it. The requestAnimationFrame callback was called before the end of the frame, but somehow, because it was part of a bigger function call, it's marked as 'dropped' in Chrome. I wonder if that's just a bug in the Chrome profiling or whether it was really dropped.
I wonder what you guys think about this topic. I'll update this post with some Chrome performance metrics soon.
There seems to be some misconception about requestAnimationFrame (rAF) magically preventing dropped frames by ensuring that whatever is executed in it will somehow run fast enough. It doesn't.
requestAnimationFrame is just a timer for "right before the next paint"*.
Its main goal is to limit the number of callbacks to just what is needed, avoiding wasted drawing operations that won't even be rendered on screen.
It does actually allow dropping frames smartly: if one execution took the time of 3 frames to render, it won't stupidly try to execute the 3 missed frames as soon as possible; instead it will nicely discard them and let your script recover from this hiccup.
So using it for updating something that doesn't match the screen refresh rate is not very useful.
One should remember that calling requestAnimationFrame is not free, this will mark the document as animated and force the event-loop to enter the update-the-rendering steps, which in itself has some performance costs. So if what you are doing in there is not going to update the rendering of the page, it can actually be detrimental to wrap your callback in a rAF callback.
Still, there are cases where it could make sense, for instance in complex pages it may be good to have a method that batches all the changes to the DOM in a rAF callback, and have all the scripts that need to read CSSOM boxes do so before these changes take effect, thus avoiding useless and costly reflows.
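A minimal sketch of that batching idea (queueWrite and someElement are just illustrative names):

const writeQueue = [];
let flushScheduled = false;

function queueWrite(fn) {
    writeQueue.push(fn);
    if (!flushScheduled) {
        flushScheduled = true;
        requestAnimationFrame(() => {
            // Do any reads (offsetWidth, getBoundingClientRect, ...) before this point;
            // the writes are then applied together, so layout is only invalidated once.
            writeQueue.forEach(write => write());
            writeQueue.length = 0;
            flushScheduled = false;
        });
    }
}

// usage
queueWrite(() => { someElement.style.width = '200px'; });
queueWrite(() => { someElement.classList.add('active'); });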
Another case is to avoid executing scripts when the page is in the background: rAF is heavily throttled when in the background, and if you have some script running that doesn't need to run when the page is hidden (e.g. a clock or similar), it may make sense to wrap a timer in a rAF callback to take advantage of this heavy throttling.
*Actually both Chrome and Firefox have broken this conception. In these browsers if you call requestAnimationFrame(cb) from a non-animated document (a document where no animation frame callback is scheduled, and no mouse-event occurred in the last frame), they will force the update the rendering steps to fire immediately, making this frame not synced with the monitor (and, in Chrome, sometimes not even rendered on the screen at all).
As seen in Stuck with SetInterval, SetTimeOut and Requestanimationframe or the like, requestAnimationFrame repeats "once the browser is ready". In other words, it keeps the browser busy.
I'm creating a "hover" effect using "mousemove" when plotting a chart with many data points. It's easy to do by replotting the whole chart/canvas using requestAnimationFrame repeatedly. The code is short in this case.
Instead of the whole canvas, I tried to replot only the data point under the mouse (the hover, <1% of the canvas) using requestAnimationFrame. For that, several arrays need to be added and the code is longer.
It can be different from case to case, but in general, is requestAnimationFrame a resource-intensive method? Redrawing the whole canvas for the sake of <1% of the area doesn't seem economically sound.
requestAnimationFrame is not resource intensive, its purpose is to adjust the CPU consumption to what the screen can display (in terms of framerate).
You can assume that requestAnimationFrame allows your code to be run once per frame of the screen. It's up to you to optimize the code of the callback so it doesn't recompute positions, shapes and colors of static things (only the point under the cursor).
Redrawing the whole canvas isn't the problem, the problem is redrawing the same image every frame.
Instead, redraw only when something has changed in your graphic.
You could start an infinite requestAnimationFrame (rAF) loop waiting for the state to change, but this will force the browser to stay in an animated mode, which forces it to enter some branches in the event-loop that it could otherwise ignore (specs). See this answer of mine for more details.
Given that mouse events are now throttled to the screen refresh rate in modern browsers, you wouldn't even win anything by throttling this event in rAF, except that not all browsers do that yet (looking at you, Safari...).
So to summarize,
Clear all / Redraw all. Painting only part of the canvas doesn't improve perfs that much, and this way you avoid a lot of trouble at coding.
Redraw only when your graphics changed. Avoid useless renderings.
Avoid keeping a requestAnimationFrame loop active for nothing. It just saves trees.
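Putting points 2 and 3 together, a sketch of the hover case might look like this; hitTest and drawChart are placeholders for your own hit-testing and full redraw:

let hoveredPoint = null;
let rafId = null;

canvas.addEventListener('mousemove', (event) => {
    hoveredPoint = hitTest(event.offsetX, event.offsetY); // placeholder hit-testing
    if (rafId === null) {                                  // at most one pending frame
        rafId = requestAnimationFrame(render);
    }
});

function render() {
    rafId = null;
    // Clear all / redraw all, but only because something actually changed.
    drawChart(ctx, hoveredPoint);                          // placeholder drawing routine
}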
I am developing a game using HTML5 Canvas and JavaScript. The initial fps is decent, but as the game continues the fps decreases: it starts at around 45 fps but drops to 5 fps.
Following is my gameloop
var last_draw = Date.now(); // To track when GameDraw was last called
var fps;

function gameloop()
{
    var elapsed = Date.now() - last_draw;
    last_draw = Date.now();
    fps = 1000 / elapsed;
    context.clearRect(0, 0, canvas.width, canvas.height); // This clears the canvas.
    GameUpdate(); // This function updates the properties of all game elements.
    GameDraw();   // This function draws all visible game elements on the canvas.
    window.requestAnimationFrame(gameloop); // Requests the next frame
}
window.requestAnimationFrame(gameloop);
I have tested this in the following browsers:
Mozilla Firefox 32.0.3
Google Chrome 38.0.2125.101 m
My questions are:
Why is rAF calling it less frequently as the game continues?
Is it due to a memory leak?
Is it because the time taken by GameDraw and GameUpdate is very high?
Is the time to execute the GameDraw function different from the time taken to actually draw elements on the canvas? GameDraw calls the draw function of each game element.
You'll find a lot of online tutorials about optimizing canvas performance. It's not about using this-or-that function, it's about the amount of processing that happens between each two frames.
Since your question(s) can't have one solid answer, I'll briefly address each of the sub-questions:
Why is rAF calling it less frequently as the game continues?
Like you guessed in the next question - something is leaking: it could be anything from, say, adding more textures, event listeners, DOM objects, etc. in every cycle... to simply having too many JS objects piling up because they remain referenced so the Garbage Collector can't get rid of them. But the bottom line is that you need to discover what is changing/increasing between each two frames.
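A common way this sneaks into a game loop is registering or allocating things on every frame; a purely illustrative sketch of what to look for (leakyGameUpdate, saferGameUpdate, onResize, particles and spawnParticles are made-up names):

// Leaky pattern: a new listener and a fresh array of objects every single frame
function leakyGameUpdate() {
    window.addEventListener('resize', onResize);     // piles up, never removed
    particles = particles.concat(spawnParticles());  // old objects stay referenced and can't be collected
}

// Safer pattern: register once, mutate and reuse what already exists
window.addEventListener('resize', onResize);          // once, at startup
function saferGameUpdate() {
    for (const p of particles) {
        p.update();                                   // update existing objects instead of recreating them
    }
}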
Is it due to a memory leak?
Very probable, and yet so easy to test. In Chrome, Shift+Escape opens the task manager where you can see memory, cpu, etc. usage for each open tab. Monitor that.
Is it because the time taken by GameDraw and GameUpdate is very high?
Most definitely! This could also be causing memory leaks. Learn to do CPU and canvas profiling, it will help you a lot. I believe canvas profiling in Chrome is still an experimental feature, so you'd need to enable it first from the config flags. These two functions are where 99% of the lag comes from, investigate what's going on there.
Is the time to execute the GameDraw function different from the time taken to actually draw elements on the canvas? GameDraw calls the draw function of each game element.
That shouldn't matter, because both of them are blocking code, meaning that one will only happen after the other. The time to render a frame is roughly the sum of the two. Again, proper canvas rendering optimization can do wonders here.
I have been playing around with requestAnimationFrame in Chrome, and wondered how it actually behaves.
When I load my canvas and draw, I get a steady 60 FPS. If I scroll around using an offset, like a click-and-drag around a map, the FPS will drop (as expected)... once I stop dragging around the map, the FPS creeps back up to its steady 60 fps, again as expected.
Here, however, is where I wonder if this is deliberate behaviour of requestAnimationFrame. If I drag the map around until the FPS drops below 30 for an extended period of time, then once I stop dragging, it climbs back up, but this time it hits 30 FPS and will not go higher. It appears as if the browser decided 30 FPS is perhaps the best option.
Is this deliberately done by the browser? I've been trying to find out if this is the case, because it will go back to 60 fps if I don't drop below 30 fps for too long.
Yes, it's something that the browsers are capable of doing.
"How it's supposed to work" isn't really something that anybody can answer, here.
The reason for that is simply that under the hood is 100% browser-specific.
But it's very safe to say that yes, the browser is capable of deciding when you should be locked into a 30Hz refresh, rather than a 60Hz refresh.
An illustration of why this is the case:
requestAnimationFrame() is also tied into the Page Visibility API if the vendors want (very true for Chrome).
Basically, if the page isn't visible, they can slow the requestAnimationFrame() updates down to a few times per second or pause them altogether.
Given that knowledge, it's entirely plausible to believe that one of two things is happening:
they're intentionally capping you at 30fps, because they feel your experience will be more stable there, based on averaged performance data
they're intentionally throttling you, but there's some bug in the system (or some less than lenient math) which is preventing you from going back up to 60 after the coast has cleared... and if they are using averaged performance data, then that might be part of the issue.
Either way, it is at very least mostly-intentional, with the only unanswered question being why it sticks to 30fps.
Did you leave it alone for 20 or 30 minutes after the fact, to see if it went back up at any time, afterwards?
You can run a Timeline analysis from Chrome DevTools to look for maverick JS that is slowing down your animation times.
https://developers.google.com/chrome-developer-tools/docs/timeline
rAF will find the best place to paint your changes, not the closest one. So, if the JS in the rAF callback is taking two frames' worth of time (around 16ms per frame on your 60Hz hardware), then your FPS will drop to 30.
From Paul Irish via Boris
Actually, “It’s currently capped at 1000/(16 + N) fps, where N is the number of ms it takes your callback to execute. If your callback takes 1000ms to execute, then it’s capped at under 1fps. If your callback takes 1ms to execute, you get about 60fps.” (thx, Boris)
http://www.paulirish.com/2011/requestanimationframe-for-smart-animating/
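To see which side of that formula you're on, you can time your own callback; a quick sketch:

function gameloop() {
    const start = performance.now();

    // ... your GameUpdate() / GameDraw() work here ...

    const callbackMs = performance.now() - start;
    // Per the quote above: the cap is roughly 1000 / (16 + N), where N is the callback's duration in ms
    const approxCap = 1000 / (16 + callbackMs);
    console.log('callback took ' + callbackMs.toFixed(1) + 'ms, rough fps cap ~' + approxCap.toFixed(0));

    requestAnimationFrame(gameloop);
}
requestAnimationFrame(gameloop);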
My problem is that my javascript/canvas performs very slowly on lower-end computers (even though they can run even more challenging canvas scripts smoothly).
I'm trying to do a simple animation depending on user selection.
When drawing on the canvas directly proved to be too slow, I drew on a hidden canvas, saved all frames (via getImageData) into data, and then called animate(1); to draw on my real canvas.
function animate(i){
    if(i < 12){
        ctx2.putImageData(data[i], 0, 0);
        setTimeout(function(){ animate(i+1); }, 1);
    }
}
But even this is too slow. What do I do?
Do not use putImageData if you can help it. The performance on FF3.6 is abysmal:
[benchmark chart omitted; source: phrogz.net]
Use drawing commands on off-screen canvases and blit sprites to sub-regions using drawImage instead.
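For comparison, a drawImage-based version of the 12-frame animation might look roughly like this; drawFrame stands in for whatever was being drawn to the hidden canvas:

// Pre-render each frame once to its own off-screen canvas
const frames = [];
for (let i = 0; i < 12; i++) {
    const buffer = document.createElement('canvas');
    buffer.width = ctx2.canvas.width;
    buffer.height = ctx2.canvas.height;
    drawFrame(buffer.getContext('2d'), i); // placeholder: draw frame i into the buffer
    frames.push(buffer);
}

// Playback is then just a cheap blit per frame
let current = 0;
function animate() {
    ctx2.drawImage(frames[current], 0, 0);
    current++;
    if (current < frames.length) {
        setTimeout(animate, 30); // roughly 30fps, as suggested below
    }
}
animate();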
As mentioned by #MartinJespersen, rewrite your frame drawing loop:
var animate = function(){
    // ...
    setTimeout(animate, 30); // Max out around 30fps
};
animate();
If you're using a library that forces a clearRect every frame, but you don't need that, stop using that library. Clear and redraw only the portions you need.
Use a smaller canvas size. If you find it sufficient, you could even scale it up using CSS.
Accept that slow computers are slow, and you are standing on the shoulders of a great many abstraction layers. If you want to eke out performance for low-end computers, write in C++ and OpenGL. Otherwise, set minimum system requirements.
The timeout you specified is 1 millisecond. No browser can update the canvas that fast. Change it to 1000 - that'll be 1 second, i.e:
setTimeout(function(){animate(i+1)}, 1000)
UPD. Another thing to try is to prepare as many canvases as there are frames in your animation, set all of them to display:none, then turn display:block on them sequentially. I doubt it's going to be faster than putImageData, but still worth trying.
As already mentioned timeouts with 1 millisecond interval are doomed to fail, so the first step is to stop that.
You are calling setTimeout recursively, which is not ideal for creating animations. Instead, initiate all the setTimeouts you need for the entire animation at the same time, with increasing delays in a loop, and let them run their course; or better yet, use setInterval, which is the better way of doing animations and how, for instance, jQuery's animations work.
It looks like you are trying to redraw the entire canvas at each step of your animation - this is not optimal, try only manipulating the pixels that change. The link you have given to "more challenging canvas scripts" is actually a lot simpler than what you are trying to do, since it's all vector based math - which is what the canvas element is optimized for - it was never made to do full re-rendering every x milliseconds, and it likely never will be.
If what you really need to do is change the entire image for every frame in your animation, don't use canvas but normal image tags with preloaded images; then it will run smoothly in IE6 on a single-core Atom.
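A sketch of that image-tag approach, assuming the frames already exist as frame0.png ... frame11.png and there is an <img id="animation"> element on the page (all hypothetical names):

// Preload every frame up front
const frameImages = [];
for (let i = 0; i < 12; i++) {
    const img = new Image();
    img.src = 'frame' + i + '.png'; // hypothetical filenames
    frameImages.push(img);
}

// Swap the src of a single <img> element per frame
const el = document.getElementById('animation');
let frame = 0;
const timer = setInterval(function () {
    el.src = frameImages[frame].src;
    frame++;
    if (frame >= frameImages.length) {
        clearInterval(timer);
    }
}, 30);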
I've got an app that works kind of like Google maps - it lets you click and pan over a large image. I redraw my Canvas heavily, sampling and scaling from a big image each redraw.
Anyway, I happened to try a dual canvas approach - drawing to a (larger) buffer one when needed, then doing a canvas_display.drawImage(canvas_buffer) to output a region to the screen. Not only did I not see a performance gain, but it got significantly slower with the iPhone. Just a datapoint...
OK, first things first. What else is happening while you're doing this animation? Any other javascript, any other timers, any other handlers? The answer, by the way, cannot be nothing. Your browser is repainting the window - the bits you're changing, at least. If other javascript is 'running', remember, that's not strictly true. Javascript is single-threaded by design. You can only queue for execution, so if some other javascript is hogging the thread, you won't get a look in.
Secondly, learn about how timers work. http://ejohn.org/blog/how-javascript-timers-work/ is my personal favorite post on this. In particular, setTimeout is asking the browser to run something after at least the specified time, but only when the browser has an opening to do that.
Third, know what you're doing with function(){animate(i+1);}. That anonymous function can only exist within the scope of its parent. In other words, when you queue up a function like this, the parent scope still exists on the callstack, as #MartinJespersen pointed out. And since that function queues up another, and another, and another... each is going to get progressively slower.
I've put everything discussed in a little fiddle:
http://jsfiddle.net/KzGRT/
(the first time I've ever used jsfiddle, so be kind). It's a simple 10-frame animation at (nominally) 100ms, using setTimeout for each. (I've done it this way instead of setInterval because, in theory, the one that takes longer to execute should start lagging behind the others. In theory - again, because javascript is single-threaded, if one slows down, it would delay the others as well).
The top method just has all ten images drawn on overlapping canvases, with only one showing at a time. Animation is just hiding the previous frame and showing the next. The second performs the putImageData into a canvas with a top-level function. The third uses an anonymous function as you tried. Watch for the red flash on frame zero, and you'll see who is executing the quickest - for me, it takes a while, but they eventually begin to drift (in Chrome, on a decent machine. It should be more obvious in FF on something lower-spec).
Try it on your low-end test machine and see what happens.
I did the setTimeout this way; hope it helps somebody boost their application:
var canDraw = true; // note: 'do' is a reserved word in JavaScript, so use a different flag name
var last = false;

window.onmousemove = function(evt){
    E.x = evt.pageX - cvs.offsetLeft;
    E.y = evt.pageY - cvs.offsetTop;
    if(canDraw){
        draw();
        canDraw = false;
        // in 23 ms drawing is enabled again
        setTimeout(function(){ canDraw = true; }, 23);
    }else{
        // the last operation must still run to catch the final cursor position
        clearTimeout(last);
        last = setTimeout(function(){ draw(); }, 23);
    }
};