I know that one of the most expensive operations in HTML5 gamedev is drawing on the canvas. But, what about drawing images outside of it? How expensive is that? What exactly happens when the canvas is 100 by 100 pixels and I try to draw an image at (1000, 1000)? Would checking sprite coordinates to make sure it is inside the canvas make rendering more efficient?
In these tests I used Google Chrome version 21.0.1180.57.
I've made a small fiddle that tests this situation... You can check it out here: http://jsfiddle.net/Yannbane/Tnahv/.
I ran the tests 1000000 times, and this is the data I got:
Rendering the image inside the canvas lasted 2399 milliseconds.
Rendering the image outside the canvas lasted 888 milliseconds.
This means that drawing outside the canvas still takes some time: roughly 37% of the time it would take to render it inside.
Conclusion: It's better to check if the image is inside the canvas before rendering it.
But, of course, I wanted to know how much better... So I did another test, this time with boundary checking implemented, and found that it took only 3 milliseconds to "render" the image outside the canvas 1000000 times. That's roughly 296 times (29600%) faster than simply issuing the off-canvas draw calls.
You can see those tests here: http://jsfiddle.net/Yannbane/PVZnz/3/.
You need to perform this check yourself and skip drawing when a figure is off-screen.
That being said, some browsers do optimize this in some conditions. I found out while writing an article on the IE9 performance profiler a while back that IE9 will optimize away drawing an image if it is out of bounds. The transformation matrix may have to be identity for this optimization to work, and either way you shouldn't rely on browsers doing it.
Always always check.
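A minimal sketch of such a check (the sprite and canvas variables here are illustrative, not taken from the fiddles above):

// Skip the draw call entirely when the sprite cannot intersect the canvas.
function drawSpriteIfVisible(ctx, img, x, y) {
    var w = img.width, h = img.height;
    var canvas = ctx.canvas;
    if (x + w <= 0 || y + h <= 0 || x >= canvas.width || y >= canvas.height) {
        return; // completely off-screen, nothing to do
    }
    ctx.drawImage(img, x, y);
}

Note that this simple test assumes the default (identity) transform; if you translate or scale the context, you would need to compare against the transformed bounds instead.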
edit: You can run this simple test to see: http://jsperf.com/on-screen-vs-off
It looks like Chrome and Safari certainly optimize it, at least in simple cases, while Firefox doesn't really.
I'm coding a JavaScript game. This game obviously needs to be constantly rendering a screen, which, using canvas, must be a Uint8Array(width * height * 4) (as used by Canvas's ImageData). In order to test the expected FPS, I've tried filling that array with white. Much to my surprise, the performance was mediocre. I could barely fill a 1000x1000 bitmap with white on a high-end computer at 150 fps. Considering this is the best performance, without any game logic running, the end result will probably be much lower. My question is: why is the performance so low and what can I do to improve it?
Figuring out how many times you can fill a 1000x1000 canvas using putImageData will not give you any kind of realistic results. The reason is that graphics are pipelined. In a normal app you'd only call putImageData once a frame. If you call it more than once a frame, at some point you'll fill the pipeline and stall it. In a real app, though, you'd be manipulating your data for most of the frame and only uploading it once, not stalling the pipeline.
If you want to see how much work you can do, make a 1000x1000 canvas, call getImageData on it to get an ImageData, manipulate that image data a certain amount, call putImageData, then call requestAnimationFrame and do it again. Slowly increase the amount of manipulation you do until it starts running slower than 60fps. That will tell you how much work you can do realistically.
Here's a fiddle that tries this out
http://jsfiddle.net/greggman/TVA34/
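A stripped-down version of the same idea, separate from the fiddle above (the workPerFrame knob and the simple random pixel tweak are placeholders for your real per-frame work):

var canvas = document.createElement('canvas');
canvas.width = 1000;
canvas.height = 1000;
document.body.appendChild(canvas);
var ctx = canvas.getContext('2d');

// Grab the pixel buffer once and keep reusing it.
var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
var data = imageData.data;

var workPerFrame = 100000; // raise this until the frame time exceeds ~16.7 ms
var lastTime = performance.now();

function frame(now) {
    // Do a fixed amount of pixel manipulation...
    for (var i = 0; i < workPerFrame; i++) {
        var offset = ((Math.random() * (data.length / 4)) | 0) * 4;
        data[offset]     = 255; // red channel
        data[offset + 3] = 255; // alpha
    }
    // ...then upload it to the canvas exactly once per frame.
    ctx.putImageData(imageData, 0, 0);

    console.log('frame time: ' + (now - lastTime).toFixed(1) + ' ms');
    lastTime = now;
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);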
Also, using putImageData has a few issues. One is that Canvas requires pre-multiplied alpha but putImageData supplies un-premultiplied alpha, which means some process has to convert your data to pre-multiplied alpha. Then, nowadays, most Canvas implementations are GPU accelerated. That's great for nearly every feature of the Canvas API, but it's terrible for putImageData since that data has to be transferred from the CPU to the GPU. It's even worse for getImageData, since copying data from the GPU back to the CPU generally stalls the GPU. The story is even worse on an HD-DPI machine: getImageData and putImageData have to convert between CSS pixels and the resolution that's actually being used.
If you use WebGL you can at least skip the pre-multiplied alpha conversion step.
Here's a WebGL version
http://jsfiddle.net/greggman/XLgs6/
Interestingly, on my 2012 Macbook Pro Retina I find the canvas version is faster on Chrome and Safari. I'm curious why, since having worked on them I would not have expected that.
/*
                 Canvas   WebGL     (numbers are operations per frame)
  Chrome 32    : 710k     650k
  Firefox 26   :  80k     190k
  Safari 7.0.1 : 150k     120k
*/
My test also might not be valid. Manipulating only 710k pixels (of 1000k pixels) seems pretty slow. Maybe one of the functions like Math.random or Math.floor is particularly slow, especially given they're using doubles.
See these 2 tests
Only doing loop and assign to the same address
http://jsperf.com/variable-assign
Good old array expansion (and also another popular for-loop trick that is supposed to be faster)
http://jsperf.com/fill-an-type-array-expand/3
The first test shows the address lookup takes about 3/4 of the time.
The second test shows the for loop takes more than 30% of the time.
I think what typed arrays really lack is some native code that does the kind of block copy that would actually be used in game development (rather than setting pixel by pixel as in the tests). WebGL might be worth considering.
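One mitigation worth trying (my suggestion, not something from the tests above) is to view the same buffer as 32-bit values so each store writes a whole pixel rather than one channel; the ctx and canvas variables are assumed to exist:

var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
// A Uint32Array view over the same underlying buffer: one write per pixel
// instead of four. The byte order is platform-endian, but white (all 0xFF)
// happens to be endian-safe.
var pixels = new Uint32Array(imageData.data.buffer);
var white = 0xFFFFFFFF;
for (var i = 0; i < pixels.length; i++) {
    pixels[i] = white;
}
// Where supported, pixels.fill(white) does the same without the explicit loop.
ctx.putImageData(imageData, 0, 0);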
I'm trying to build something in HTML5/Canvas to allow tracing over an image and alert if deviating from a predefined path.
I've figured out how to load an external image into the canvas, and allow mousedown/mousemovement events over it to draw over the image, but what I'm having trouble getting my head around is comparing the two.
The images are all simple black-on-white outlines, so from what I can tell a getPixel-style check can tell whether there is black underneath what has been drawn, or underneath where the mouse is.
I could do it with just the mouse position, but that would require defining the paths of every image outline (and there are a fair number, hence ideally wanting to do it by analyzing the underlying image).
I've been told that it's possible with Flash, but I would like to avoid that if possible so that compatibility with non-Flash platforms (namely the iPad) can be maintained, as they are the primary target for the page.
Any insight or assistance would be appreciated!
I think you already touched upon the most straight-forward approach to solving this.
Given a black and white image on a canvas, you can attach a mousemove event handler to the element to track where the cursor is. If the user is holding left-mouse down, you want to determine whether or not they are currently tracing the pre-defined path. To make things less annoying for the user, I would approach this part of the problem by sampling a small window of pixels. Something around 9x9 pixels would probably be a good size. Note that you want your window size to be odd in both dimensions so that you have a symmetric sampling in both directions.
Using the location of the cursor, call getImageData() on the canvas. Your function call would look something like this: getImageData(center_x - Math.floor(window_size / 2), center_y - Math.floor(window_size / 2), window_size, window_size) so that you get a sample window of pixels with the center right over the cursor. From there, you could do a simple check to see if any non-white pixels are within the window, or you could be more strict and require a certain number of non-white pixels to declare the user on the path.
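A rough sketch of that sampling check; the threshold, the darkness cutoff, the isDrawing flag, and the mousemove wiring are illustrative additions, not part of the original suggestion:

var window_size = 9; // odd, so the window is centered on the cursor
var threshold = 3;   // how many dark pixels count as "on the path"

function isOnPath(ctx, center_x, center_y) {
    var half = Math.floor(window_size / 2);
    var sample = ctx.getImageData(center_x - half, center_y - half, window_size, window_size);
    var data = sample.data;
    var darkCount = 0;
    for (var i = 0; i < data.length; i += 4) {
        // Treat anything clearly darker than white as part of the black outline.
        if (data[i] < 200 && data[i + 1] < 200 && data[i + 2] < 200) {
            darkCount++;
        }
    }
    return darkCount >= threshold;
}

canvas.addEventListener('mousemove', function (evt) {
    var rect = canvas.getBoundingClientRect();
    var x = evt.clientX - rect.left;
    var y = evt.clientY - rect.top;
    if (isDrawing && !isOnPath(ctx, x, y)) {
        // user has wandered off the outline
    }
});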
The key to making this work well, I think, is making sure the user doesn't receive negative feedback when they deviate the tiniest bit from the path (unless that's what you want). At that point you run the risk of making the user annoyed and frustrated.
Ultimately it comes down to one of two approaches. Either you load the actual vector path for the application to compare the user's cursor to (i.e. do point-in-path checks), or you sample pixel data from the image. If you don't require the perfect accuracy of point-in-path checking, I think pixel sampling should work fine.
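For completeness, if you did go the vector route, here is a hedged sketch using Path2D and isPointInStroke (both real 2D-context features, though browser support should be checked; the path data is made up):

var outline = new Path2D('M 20 20 L 200 20 L 200 150'); // illustrative path
ctx.lineWidth = 12; // acts as the tolerance band around the stroke for hit-testing

function isNearPath(x, y) {
    return ctx.isPointInStroke(outline, x, y);
}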
Edit: I just re-read your question and realized that, based on your reference to getPixel(), you might be using WebGL for this. The approach for WebGL would be the same, except you would of course be using different functions. I don't think you need to require WebGL, however, as a 2D context should give you enough flexibility (unless the app is more involved than it seems).
I seem to be experiencing varying performance using an HTML5 canvas based on the memory size of the page... perhaps the number of images (off-screen canvases) that are loaded. How do I best locate the source of the performance problem? Or does anyone know if in fact there is a performance issue when there's a lot of data loaded, even if it isn't all being used at once?
Here's an example of good performance. I have a relatively simple map. It's between 700 and 800 KB. You can drag to scroll around this map relatively smoothly.
There's another file (which you may not want to look at due to its large size).
It's about 16 MB and contains dozens, maybe on the order of a hundred images/canvases. It draws a smaller view so it should go faster. But it doesn't. Many of the maps lag quite severely compared to the simpler demo.
I could try to instrument the code to start timing things, but I have not done this in JavaScript before, and could use some help. If there are easier ways to locate the source of performance problems, I'd be interested.
In Google Chrome and Chromium, you can open the developer tools (tools->developer tools) and then click on "Profiles". Press the circle at the bottom, let the canvas redraw and then click on the circle again. This gives you a profile that shows how much time was spent where.
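If you also want to instrument the code by hand (which the question mentions), a minimal sketch looks like this; the function names are placeholders for whatever you want to time:

function drawMap() {
    console.time('drawMap');       // start a named timer
    // ... existing drawing code ...
    console.timeEnd('drawMap');    // logs something like "drawMap: 12.3ms"
}

// Or with performance.now() if you want the number in a variable:
var t0 = performance.now();
redrawVisibleTiles();              // placeholder for the code being measured
var elapsed = performance.now() - t0;
console.log('redraw took ' + elapsed.toFixed(1) + ' ms');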
I've been working on some complex canvas stuff where rendering performance mattered to me.
I wrote some test cases in jsperf and came to the conclusion that a rule of thumb is that a source offscreen canvas should never be more than 65536 pixels.
I haven't yet come to a conclusion about why this is, but likely a data structure or data type has to be changed when dealing with large source canvases.
putImageData showed similar results.
The destination canvas size didn't seem to matter.
Here are some tests I wrote that explore this performance limitation:
http://jsperf.com/magic-canvas/2
http://jsperf.com/pixel-count-matters/2
I've started a Breakout game in Canvas.
At the moment, I've only coded the display of the blocks and player.
When the game needs to update itself (every 10ms or so) it will need to call draw() which is currently going to repaint the entire canvas based on the current state of player, blocks and ball.
Its performance is starting to become an issue.
Is it never a good idea to repaint the entire canvas per frame? Should I be altering my code to only paint the sections which are changing?
First off: Yes, altering your code to only paint the sections that are changing may help a lot, but you should always be testing specific improvements with your own code, as the performance of any one optimization varies by app (sometimes greatly).
But it's not only drawing that can cause slowness. Make sure, for instance, that you aren't recomputing/reconstructing anything in your draw loop that never changes.
Also, if you have a lot of objects, don't set fillStyle unless you have to, which means there's an optimization to be had by setting the fill to one color, drawing all the objects of that color, and then setting the second fill color, etc.
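A small sketch of that batching idea (grouping blocks by color is an assumed data layout, and the coordinates and sizes are made up):

// Instead of setting fillStyle per block, group blocks by color
// and set it once per group.
var blocksByColor = {
    '#f00': [{x: 10, y: 10}, {x: 50, y: 10}],
    '#00f': [{x: 90, y: 10}, {x: 130, y: 10}]
};

for (var color in blocksByColor) {
    ctx.fillStyle = color;               // one state change per color
    var blocks = blocksByColor[color];
    for (var i = 0; i < blocks.length; i++) {
        ctx.fillRect(blocks[i].x, blocks[i].y, 30, 12);
    }
}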
Finally, I'd suggest writing your entire game (or most of it) and then going back and doing optimizations.
There are loads of optimizations to be had with Canvas. I've recently begun a guidebook on game-related performance enhancements, hopefully it'll be done by the year's end.
You will have to try to be sure, but I disagree with both answers here. In my simple tests, attempting to clear and redraw just particular regions of the canvas produces slightly worse performance, not better.
You can see my test here: http://phrogz.net/tmp/image_move_sprites_canvas.html
This depends on your needs, however. If you have hundreds of items that do not need to be updated and only a small portion of your canvas changes each frame, perhaps clearing and redrawing only that section will be good.
Is it never a good idea to repaint the entire canvas per frame?
No, sometimes it's a perfectly good idea.
Its performance is starting to become an issue
But not then.
That said, though, your code is not that complicated and there's nothing to cause obvious massive slowdowns. How are you measuring its performance?
First I'd take all this out of init():
canvas = document.createElement('canvas');
canvas.width = 400;
canvas.height = 300;
ctx = canvas.getContext('2d');
document.body.appendChild(canvas);
There's no point dealing with that every millisecond!
Secondly, look at this: http://paulirish.com/2011/requestanimationframe-for-smart-animating/
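The gist of that article, as a minimal sketch adapted to this game loop (update and draw are stubs for the question's own logic):

function loop() {
    update();                    // move ball, check collisions, etc.
    draw();                      // repaint whatever needs repainting
    requestAnimationFrame(loop); // the browser schedules the next frame
}
requestAnimationFrame(loop);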
My problem is that my JavaScript/canvas code performs very slowly on lower-end computers (even though they can run even more challenging canvas scripts smoothly).
I'm trying to do a simple animation depending on user selection.
When drawing on the canvas directly proved to be too slow, I drew on a hidden canvas, saved all frames (via getImageData) to data, and then called animate(1); to draw on my real canvas.
function animate(i){
    if(i < 12){
        ctx2.putImageData(data[i], 0, 0);
        setTimeout(function(){ animate(i+1); }, 1);
    }
}
But even this is too slow. What do I do?
Do not use putImageData if you can help it. The performance on FF3.6 is abysmal:
[putImageData benchmark chart omitted] (source: phrogz.net)
Use drawing commands on off-screen canvases and blit sprites to sub-regions using drawImage instead.
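A hedged sketch of that approach, reusing the question's ctx2 for the visible canvas (the sprite's shape and sizes are made up for illustration):

// Render a sprite once to an off-screen canvas...
var sprite = document.createElement('canvas');
sprite.width = 32;
sprite.height = 32;
var sctx = sprite.getContext('2d');
sctx.fillStyle = '#3a7';
sctx.beginPath();
sctx.arc(16, 16, 15, 0, Math.PI * 2);
sctx.fill();

// ...then blit it into the visible canvas each frame with drawImage,
// which avoids the CPU/GPU pixel round-trip that putImageData forces.
function drawFrame(x, y) {
    ctx2.clearRect(x - 1, y - 1, 34, 34); // clear only the sub-region that changed
    ctx2.drawImage(sprite, x, y);
}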
As mentioned by #MartinJespersen, rewrite your frame drawing loop:
var animate = function(){
    // ...
    setTimeout(animate, 30); // max out around 30fps
};
animate();
If you're using a library that forces a clearRect every frame, but you don't need that, stop using that library. Clear and redraw only the portions you need.
Use a smaller canvas size. If you find it sufficient, you could even scale it up using CSS.
Accept that slow computers are slow, and you are standing on the shoulders of a great many abstraction layers. If you want to eke out performance for low-end computers, write in C++ and OpenGL. Otherwise, set minimum system requirements.
The timeout you specified is 1 millisecond. No browser can update the canvas that fast. Change it to 1000, which is 1 second, i.e.:
setTimeout(function(){animate(i+1)}, 1000)
Update: Another thing to try is to prepare as many canvases as there are frames in your animation, set all of them to display:none, then turn display:block on them sequentially. I doubt it's going to be faster than putImageData, but it's still worth trying.
As already mentioned timeouts with 1 millisecond interval are doomed to fail, so the first step is to stop that.
You are calling setTimeout recursively, which is not ideal for creating animations. Instead, initiate all the setTimeouts you need for the entire animation at the same time, with increasing delays, in a loop and let them run their course; or better yet use setInterval, which is the better way of doing animations and how, for instance, jQuery's animations work.
It looks like you are trying to redraw the entire canvas at each step of your animation. This is not optimal; try only manipulating the pixels that change. The link you have given to "more challenging canvas scripts" is actually a lot simpler than what you are trying to do, since it's all vector-based math, which is what the canvas element is optimized for; it was never made to do a full re-render every x milliseconds, and it likely never will be.
If what you really need to do is change the entire image for every frame of your animation, don't use canvas; use normal image tags with preloaded images, and it will run smoothly even in IE6 on a single-core Atom.
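A minimal sketch of that image-tag approach (the frame file names, count, and frame rate are placeholders):

// Preload all frames up front...
var frames = [];
for (var i = 0; i < 12; i++) {
    var img = new Image();
    img.src = 'frames/frame' + i + '.png'; // placeholder paths
    img.style.display = 'none';
    document.body.appendChild(img);
    frames.push(img);
}

// ...then animate by toggling visibility instead of repainting a canvas.
var current = 0;
setInterval(function () {
    frames[current].style.display = 'none';
    current = (current + 1) % frames.length;
    frames[current].style.display = 'block';
}, 83); // roughly 12fps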
I've got an app that works kind of like Google maps - it lets you click and pan over a large image. I redraw my Canvas heavily, sampling and scaling from a big image each redraw.
Anyway, I happened to try a dual canvas approach - drawing to a (larger) buffer one when needed, then doing a canvas_display.drawImage(canvas_buffer) to output a region to the screen. Not only did I not see a performance gain, but it got significantly slower with the iPhone. Just a datapoint...
OK, first things first. What else is happening while you're doing this animation? Any other JavaScript, any other timers, any other handlers? The answer, by the way, cannot be nothing. Your browser is repainting the window (the bits you're changing, at least). If other JavaScript is 'running', remember that's not strictly true: JavaScript is single-threaded by design. You can only queue work for execution, so if some other JavaScript is hogging the thread, you won't get a look in.
Secondly, learn about how timers work. http://ejohn.org/blog/how-javascript-timers-work/ is my personal favorite post on this. In particular, setTimeout is asking the browser to run something after at least the specified time, but only when the browser has an opening to do that.
Third, know what you're doing with function(){animate(i+1);}. That anonymous function can only exist within the scope of its parent. In other words, when you queue up a function like this, the parent scope still exists on the callstack, as #MartinJespersen pointed out. And since that function queues up another, and another, and another... each is going to get progressively slower.
I've put everything discussed in a little fiddle:
http://jsfiddle.net/KzGRT/
(the first time I've ever used jsfiddle, so be kind). It's a simple 10-frame animation at (nominally) 100ms, using setTimeout for each. (I've done it this way instead of setInterval because, in theory, the one that takes longer to execute should start lagging behind the others. In theory, again, because JavaScript is single-threaded: if one slows down, it would delay the others as well.)
The top method just has all ten images drawn on overlapping canvases, with only one showing at a time. Animation is just hiding the previous frame and showing the next. The second performs the putImageData into a canvas with a top-level function. The third uses an anonymous function as you tried. Watch for the red flash on frame zero, and you'll see which one is executing the quickest; for me it takes a while, but they eventually begin to drift (in Chrome, on a decent machine; it should be more obvious in Firefox on something lower-spec).
Try it on your low-end test machine and see what happens.
I did the setTimeout this way; hope it helps somebody boost their application:
var canDraw = true; // note: "do" is a reserved word in JavaScript, so the flag is renamed here
var last = null;
window.onmousemove = function(evt){
    E.x = evt.pageX - cvs.offsetLeft;
    E.y = evt.pageY - cvs.offsetTop;
    if(canDraw){
        draw();
        canDraw = false;
        // drawing is enabled again in 23 ms
        setTimeout(function(){ canDraw = true; }, 23);
    }else{
        // the last scheduled draw must still run so the final cursor position is caught
        clearTimeout(last);
        last = setTimeout(function(){ draw(); }, 23);
    }
};