Why do images lose quality after the context has been rotated? - javascript

I'm making a top-down shooter game that relies on the avatar always being rotated pointing to the mouse cursor. I achieve rotation like this:
//Rendering.
context.save(); //Save the context state, we're about to change it a lot.
context.translate(position[0] + picture.width/2, position[1] + picture.height/2); //Translate the context to the center of the image.
context.rotate(phi); //Rotate the context by the object's phi.
context.drawImage(picture.image, -picture.width/2, -picture.height/2); //Draw the image at the appropriate position (center of the image = [0, 0]).
context.restore(); //Get the state back.
When phi is zero, the image is rendered at its normal quality, with sharp edges and clearly visible pixels. But when I set phi to a nonzero value (actually, whenever it isn't 0, Pi/2, Pi, Pi+Pi/2 or 2Pi), the image loses its sharpness and the individual pixels can't be seen anymore, because they are blurred out.
Here's a screenshot (sorry about the general bad quality of the screenshot, but I think that the difference is more than noticeable):
This is, well, a bit unacceptable. I can't have the images always blurred out! Why is this happening and can I solve it?

You could try
context.imageSmoothingEnabled = false;
See docs:
context.imageSmoothingEnabled [ = value ]
Returns whether pattern fills and the drawImage() method will attempt to smooth images if they have to rescale them (as opposed to just rendering the images with "big pixels").
Can be set, to change whether images are smoothed (true) or not (false).
If you want a true pixel-art retro style effect, you'd need to manually create rotated sprite images for several angles, look up the appropriate sprite for the current value of phi, and draw it without rotation. This obviously requires a fair amount of art work!
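If hand-drawing every rotation isn't practical, the pre-rendering could also be done in code. Here is a rough sketch; makeRotatedSprites and the sprite count are illustrative names, and nearest-neighbour rotation gives jagged rather than hand-tuned edges (imageSmoothingEnabled support also varies by browser):
function makeRotatedSprites(image, steps) {
  // Each sprite canvas must be large enough to hold the image at any angle.
  var size = Math.ceil(Math.sqrt(image.width * image.width + image.height * image.height));
  var sprites = [];
  for (var i = 0; i < steps; i++) {
    var canvas = document.createElement('canvas');
    canvas.width = canvas.height = size;
    var ctx = canvas.getContext('2d');
    ctx.imageSmoothingEnabled = false; // nearest-neighbour sampling, keeps hard pixel edges
    ctx.translate(size / 2, size / 2);
    ctx.rotate(i * 2 * Math.PI / steps);
    ctx.drawImage(image, -image.width / 2, -image.height / 2);
    sprites.push(canvas);
  }
  return sprites;
}

var sprites = makeRotatedSprites(picture.image, 36); // e.g. 36 pre-rendered angles

// At render time, look up the sprite closest to phi and draw it unrotated.
var index = ((Math.round(phi / (2 * Math.PI) * sprites.length) % sprites.length) + sprites.length) % sprites.length;
// The sprite canvas is larger than the source image, so offset to keep the same centre.
var offset = (sprites[index].width - picture.width) / 2;
context.drawImage(sprites[index], position[0] - offset, position[1] - offset);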

If you are rotating images around their center point, make sure the image itself has an even number of pixels. Once you end up on odd coordinates, the image data needs to be interpolated for the target canvas. Apple has some nice documentation on translating and rotating the canvas.
So for any image, as suggested above use rounding to snap to full pixels.
context.translate(Math.floor(img.width/2), Math.floor(img.height/2));
This way every source pixel of your image will always be drawn exactly into a pixel inside the canvas and blurring does not occur. This however is only true for multiples of 90 degrees.
It seems that all browsers do, to some extent, antialiasing when drawing images, so you will probably have to provide rotated images as sprites.
According to this Chromium bug report you might be lucky there if they haven't fixed it yet. Read through and you'll learn that Ian Hickson likely opposed making antialiased image drawing optional.

(picture.width/2, picture.height/2) point won't always work.
(Math.floor(picture.width/2) + 0.5, Math.floor(picture.height/2) + 0.5) should help.

Well, actually it is something you cannot get around
If you rotate an image by a multiple of 90 degrees, your library should be smart enough that no interpolation is applied.
But as soon as you rotate an image by an angle different from a multiple of 90 degrees, you need to interpolate. As a consequence, you get that smoothing. If you are interested in the theory, you may look for a book on computer graphics or image processing.
For the concrete case of image rotation you may have a look at this paper,
http://bigwww.epfl.ch/publications/unser9502.html

Related

How to perform this animation using canvas?

I am creating rotating planets by projecting a 200x100 bitmap onto a sphere. Since this projection is costly for an animation, I hacked together an array of value pairs ["pixel address in the sphere projection image", "which pixel of the original bitmap goes to that address"], so I can just transfer them quickly with no math.
I end up with 16008 values, which represent the 8004 pixels we need to draw a circle (representing a sphere), on
a 100x100 canvas - planets always have radius 50 and I scale them later.
Now, to rotate a planet, all I have to do is "shift" the second item of the pair by 1 pixel for a very slow rotation, or by higher values for the illusion of faster rotations. I end up with this bottleneck:
for (var i = 0; i < 16008; i += 2)
{
    // The planet object has a canvas holding its texture, and
    // the texturePixels array is the data of that texture after
    // a getImageData() of that texture canvas. finalPixels is
    // the pixel data for the final on-screen projected planet image.
    locationInTexture = valuePairs[i] + planet.angle; // angle increases in steps of 4 because each pixel is 4 RGBA values
    locationInProjection = valuePairs[i + 1];
    finalPixels[locationInProjection] = planet.texturePixels[locationInTexture];
    finalPixels[locationInProjection + 1] = planet.texturePixels[locationInTexture + 1];
    finalPixels[locationInProjection + 2] = planet.texturePixels[locationInTexture + 2];
    finalPixels[locationInProjection + 3] = 255; // alpha isn't relevant
}
I also made the variables global to speed things up, but it is still slow. My problem might be that I should minimize DOM access: I am accessing the pixel data of 2 canvases thousands of times, and my guess is that these don't behave quite like normal arrays, though I might be wrong in this case of simple element reads and writes. The alternative seems to be to do this:
1-at load, get texture pixel data in a normal array instead of the one inherited from the canvas (is this relevant?)
2-get final pixel arrangement to a new normal array, instead of to the final planet display canvas directly.
3-create an image from that last array data and drawImage on the planet display canvas, and done, assuming
the (create image+draw it on final canvas) would be faster.
Or maybe we can even create a canvas with the sphere directly from the normal array data? Or would using images be cheaper than canvases? How do I do this? Help please.
Thanks in advance :)
P.S. A screen can have tens of planets+moons when showing a solar system. I decided to ask before embarking on an "img instead of canvas" approach, only to find out later that that is not the problem.
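A rough sketch of the plain-array approach described in steps 1-3 above (textureCtx, planetCtx and the 200x100/100x100 sizes follow the question; the names themselves are illustrative):
// 1. At load time, copy the texture pixels into a plain typed array, detached from the canvas.
var texturePixels = new Uint8ClampedArray(textureCtx.getImageData(0, 0, 200, 100).data);

// 2. Reuse one ImageData object for the projected planet instead of writing to the canvas directly.
var output = planetCtx.createImageData(100, 100);
var finalPixels = output.data;

function renderPlanet(planet) {
  for (var i = 0; i < valuePairs.length; i += 2) {
    var src = valuePairs[i] + planet.angle;   // angle advances in steps of 4 (one RGBA pixel)
    var dst = valuePairs[i + 1];
    finalPixels[dst] = texturePixels[src];
    finalPixels[dst + 1] = texturePixels[src + 1];
    finalPixels[dst + 2] = texturePixels[src + 2];
    finalPixels[dst + 3] = 255;
  }
  // 3. One putImageData call per planet per frame.
  planetCtx.putImageData(output, 0, 0);
}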

How to rotate image - canvas pixel manipulation?

I use getImageData and putImageData to draw on canvas from a buffer canvas. I use these methods because I have a large number of particles and these proved to provide the best performance.
Now I'd like to add rotation of particles but I'm having problems with that.
Here is a jsfiddle which uses a transformation matrix for rotation. As you can see in the picture (or the fiddle), there are holes in the resulting image, which I kind of expected from using this matrix.
nx = ~~ (xx * Math.cos(angle) + yy * Math.sin(angle) + cx);
ny = ~~ (xx * Math.sin(angle) - yy * Math.cos(angle) + cy);
But I don't know how to make this better, especially since I'm looking for a performance-efficient solution.
jsfiddle demo
Image - square after rotation (square is used as a simple body):
Currently my backup is procedurally generated sprite animation which is prepared in advance with standard canvas states: save -> translate -> rotate -> restore.
Thank you very much for any directions you can give me.
The problem is that you are trying to map a single pixel to a single pixel. When you rotate an image, each pixel in the original can influence any of the surrounding pixels in the new image. You are effectively mapping the top left corner of each pixel to its location in the new image, but you need to map the center of each pixel to its location in the new image and then check the overlap of this rotated pixel with that location and the 8 surrounding pixels in the new image.
Here you can see the effect. The yellow dots are the pixel centers, which determine the "home" location for each pixel (i.e. where the majority of its influence will be placed). You then need to figure out what percentage of that cell (in the underlying blue/white grid) is covered by the original pixel (the black box surrounding the yellow dot). Once you figure out the home location's influence, you need to repeat that process for the 8 surrounding pixels with respect to the current pixel in the original image. In your current code, you are using the top left corner of each pixel to find the home pixel in the new image; you should use the center of the pixel.
Since multiple iterations might affect the same pixel, you'll need to calculate the transformation in a buffer before drawing it to the final image. For pixels in the transformation that are not fully covered by pixels in the original image, figure out the percentage of the pixel that is covered and use that to influence the alpha channel. You'll have to take care when applying the pixels to the final image that you account for the alpha portion and blend with what's already there.
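If the full area-coverage approach is too heavy, a cheaper way to at least avoid the holes is to iterate over the destination pixels and map each one back into the source (nearest-neighbour inverse mapping, not the area-coverage method described above). A sketch, assuming equally sized source and destination buffers taken from getImageData():
// Inverse (destination-to-source) mapping: every destination pixel receives exactly one
// sample, so no holes appear. src and dst are .data arrays from getImageData(),
// both w by h pixels, rotating about (cx, cy).
function rotateInverse(src, dst, w, h, angle, cx, cy) {
  var cos = Math.cos(angle), sin = Math.sin(angle);
  for (var y = 0; y < h; y++) {
    for (var x = 0; x < w; x++) {
      // Rotate the destination pixel back into source space (rotation by -angle).
      var dx = x - cx, dy = y - cy;
      var sx = ~~(dx * cos + dy * sin + cx);
      var sy = ~~(-dx * sin + dy * cos + cy);
      if (sx < 0 || sy < 0 || sx >= w || sy >= h) continue;
      var si = (sy * w + sx) * 4, di = (y * w + x) * 4;
      dst[di] = src[si];
      dst[di + 1] = src[si + 1];
      dst[di + 2] = src[si + 2];
      dst[di + 3] = src[si + 3];
    }
  }
}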

Three mouse detection techniques for HTML5 canvas, none adequate

I've built a canvas library for managing scenes of shapes for some work projects. Each shape is an object with a drawing method associated with it. During a refresh of the canvas, each shape on the stack is drawn. A shape may have typical mouse events bound which are all wrapped around the canvas' own DOM mouse events.
I found some techniques in the wild for detecting mouseover on individual shapes, each of which works but with some pretty serious caveats.
A cleared ghost canvas is used to draw an individual shape by itself. I then store a copy of the ghost canvas with getImageData(). As you can imagine, this takes up a LOT of memory when there are many points with mouse events bound (100 clickable shapes on a 960x800 canvas is ~300MB in memory).
To sidestep the memory issue, I began looping over the pixel data and storing only addresses to pixels with non-zero alpha. This worked well for reducing memory, but dramatically increased the CPU load. I only iterate on every 4th index (RGBA), and any pixel address with a non-zero alpha is stored as a hash key for fast lookups during mouse moves. It still overloads mobile browsers and Firefox on Linux for 10+ seconds.
I read about a technique where all shapes would be drawn to one ghost canvas using color to differentiate which shape owned each pixel. I was really happy with this idea, because it should theoretically be able to differentiate between millions of shapes.
Unfortunately, this is broken by anti-aliasing, which cannot be disabled on most canvas implementations. Each fuzzy edge creates dozens of colors which might be safely ignored except that they can blend with overlapping shape edges. The last thing I want to happen when someone crosses the mouse over a shape boundary is to fire semi-random mouseover events for unrelated shapes associated with colors that have emerged from the blending due to AA.
I know that this not a new problem for video game developers and there must be fast algorithms for this kind of thing. If anyone is aware of an algorithm that can resolve (realistically) hundreds of shapes without occupying the CPU for more than a few seconds or blowing up RAM consumption dramatically, I would be very grateful.
There are two other Stack Overflow topics on mouseover detection, both of which discuss this topic, but they go no further than the 3 methods I describe.
Detect mouseover of certain points within an HTML canvas? and
mouseover circle HTML5 canvas.
EDIT: 2011/10/21
I tested another method which is more dynamic and doesn't require storing anything, but it's crippled by a performance problem in Firefox. The method is basically to loop over the shapes and: 1) clear 1x1 pixel under mouse, 2) draw shape, 3) get 1x1 pixel under mouse. Surprisingly this works very well in Chrome and IE, but miserably under Firefox.
Apparently Chrome and IE are able to optimize if you only want a small pixel area, but Firefox doesn't appear to be optimizing at all based on the desired pixel area. Maybe internally it gets the entire canvas, then returns your pixel area.
Code and raw output here: http://pastebin.com/aW3xr2eB.
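For reference, that per-shape clear/draw/read method looks roughly like this (a sketch; shape.draw() is a stand-in for whatever renders a single shape):
// For each shape: clear the 1x1 area under the mouse, draw the shape,
// and read that single pixel back. Non-zero alpha means the mouse is over the shape.
function shapeUnderMouse(shapes, ctx, mx, my) {
  for (var i = shapes.length - 1; i >= 0; i--) {   // front-most shapes first
    ctx.clearRect(mx, my, 1, 1);
    shapes[i].draw(ctx);                           // hypothetical per-shape draw method
    if (ctx.getImageData(mx, my, 1, 1).data[3] !== 0) {
      return shapes[i];
    }
  }
  return null;
}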
If I understand the question correctly, you want to detect when the mouse enters/leaves a shape on the canvas, correct?
If so, then you can use simple geometric calculations, which are MUCH simpler and faster than looping over pixel data. Your rendering algorithm already has a list of all visible shapes, so you know the position, dimension and type of each shape.
Assuming you have some kind of list of shapes, similar to what #Benjammmin' is describing, you can loop over the visible shapes and do point-inside-polygon checks:
// Track which shape is currently under the mouse cursor, and raise
// mouse enter/leave events
function trackHoverShape(mousePos) {
    var shape;
    for (var i = 0, len = visibleShapes.length; i < len; i++) {
        shape = visibleShapes[i];
        switch (shape.type) {
            case 'arc':
                if (pointInCircle(mousePos, shape) &&
                    _currentHoverShape !== shape) {
                    raiseEvent(_currentHoverShape, 'mouseleave');
                    _currentHoverShape = shape;
                    raiseEvent(_currentHoverShape, 'mouseenter');
                    return;
                }
                break;
            case 'rect':
                if (pointInRect(mousePos, shape) &&
                    _currentHoverShape !== shape) {
                    raiseEvent(_currentHoverShape, 'mouseleave');
                    _currentHoverShape = shape;
                    raiseEvent(_currentHoverShape, 'mouseenter');
                }
                break;
        }
    }
}
function raiseEvent(shape, eventName) {
    if (!shape) return; // nothing was hovered before the first enter
    var handler = shape.events[eventName];
    if (handler)
        handler();
}
// Check whether the distance between the point and the circle's
// center is less than the circle's radius. (Pythagorean theorem)
function pointInCircle(point, shape) {
    var distX = Math.abs(point.x - shape.center.x),
        distY = Math.abs(point.y - shape.center.y),
        dist = Math.sqrt(distX * distX + distY * distY);
    return dist < shape.radius;
}
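The snippet also assumes a matching pointInRect helper; a minimal version might be (assuming each rect shape stores x, y, width and height):
// Axis-aligned containment test for rectangle shapes.
function pointInRect(point, shape) {
    return point.x >= shape.x && point.x <= shape.x + shape.width &&
           point.y >= shape.y && point.y <= shape.y + shape.height;
}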
So, just call the trackHoverShape inside your canvas mousemove event and it will keep track of the shape currently under the mouse.
I hope this helps.
From comment:
Personally I would just switch to using SVG. It's more what it was
made for. However it may be worth looking at EaselJS
source. There's a method Stage.getObjectUnderPoint(), and their demos
of this seem to work perfectly fine.
I ended up looking at the source, and the library utilises your first approach: a separate hidden canvas for each object.
One idea that came to mind was attempting to create some kind of a content-aware algorithm to detect anti-aliased pixels and with what shapes they belong. I quickly dismissed this idea.
I do have one more theory, however. There doesn't seem to be a way around using ghost canvases, but maybe there is a way to generate them only when they're needed.
Please note the following idea is all theoretical and untested. It is possible I may have overlooked something that would mean this method would not work.
Along with drawing an object, store the method by which you drew it. Then, using that drawing method, you can calculate a rough bounding box for the object. When the canvas is clicked, loop through all the objects on the canvas and extract the ones whose bounding boxes contain the point. For each of these extracted objects, draw it separately onto a ghost canvas using its stored drawing method, determine whether the mouse is positioned over a non-white pixel, clear the ghost canvas, and repeat.
As an example, consider I have drawn two objects. I will store the methods for drawing the rectangle and circle in a readable manner.
circ = ['beginPath', ['arc', 75, 75, 10], 'closePath', 'fill']
rect = ['beginPath', ['rect', 150, 5, 30, 40], 'closePath', 'fill']
(You may want to minify the data saved, or use another syntax, such as the SVG syntax)
As I draw these shapes for the first time, I will also keep note of the dimensional values and use them to determine a bounding box (note: you will need to compensate for stroke widths).
circ = {left: 65, top: 65, right: 85, bottom: 85}
rect = {left: 150, top: 5, right: 180, bottom: 45}
A click event has occurred on the canvas. The mouse point is {x: 70, y: 80}
Looping through the two objects, we find that the mouse coordinates fall within the circle bounds. So we mark the circle object as a possible candidate for collision.
Analysing the circle's drawing method, we can recreate it on a ghost canvas and then test whether the mouse coordinates fall on a non-white pixel.
After determining if it does or does not, we can clear the ghost canvas to prepare for any more objects to be drawn on it.
As you can see, this removes the need to store 960 x 800 x 100 pixels, requiring only 960 x 800 x 2 at most.
This idea would best be implemented as some kind of API for automatically handling the data storage (such as the method of drawing, dimensions...).
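A rough sketch of that replay idea (the stored-command format follows the circ/rect examples above; hitTest is an illustrative name):
// Replay a stored command list onto a single reusable ghost canvas, then test
// the alpha of the one pixel under the mouse. Note that a real 'arc' entry needs
// the full argument list (x, y, radius, startAngle, endAngle).
function hitTest(commands, ghostCtx, mouseX, mouseY) {
    ghostCtx.clearRect(0, 0, ghostCtx.canvas.width, ghostCtx.canvas.height);
    for (var i = 0; i < commands.length; i++) {
        var cmd = commands[i];
        if (typeof cmd === 'string') {
            ghostCtx[cmd]();                                  // e.g. 'beginPath', 'closePath', 'fill'
        } else {
            ghostCtx[cmd[0]].apply(ghostCtx, cmd.slice(1));   // e.g. ['rect', 150, 5, 30, 40]
        }
    }
    // Non-zero alpha under the cursor means the point is inside the drawn shape.
    return ghostCtx.getImageData(mouseX, mouseY, 1, 1).data[3] > 0;
}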

html canvas motion blur with transparent background

I just created a fancy canvas effect using cheap motion blur
ctx.fillStyle = "rgba(255,255,255,0.2)";
ctx.fillRect(0,0,canvas.width,canvas.height);
Now I want to do the same, but with a transparent background. Is there any way to do something like that? I'm playing with globalAlpha, but this is probably the wrong way.
PS: Google really doesn't like me today
Here's a more performance-friendly way of doing it; it requires an invisible buffer and a visible canvas.
buffer.save();
buffer.globalCompositeOperation = 'copy';
buffer.globalAlpha = 0.2;
buffer.drawImage(screen.canvas, 0, 0, screen.canvas.width, screen.canvas.height);
buffer.restore();
Basically you draw your objects to the buffer, which, being invisible, is very fast, then draw it to the screen. Instead of clearing the buffer, you copy the last frame onto it using globalAlpha and the globalCompositeOperation 'copy', which turns the buffer into a semi-transparent version of the previous frame.
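Put together, one frame of that loop might look roughly like this (buffer and screen are the hidden and visible 2D contexts from the snippet above; drawObjects stands in for your own rendering code):
function frame() {
    // Fade the previous frame: 'copy' replaces the buffer's contents with a
    // 20%-opaque version of the last visible frame, preserving real transparency.
    buffer.save();
    buffer.globalCompositeOperation = 'copy';
    buffer.globalAlpha = 0.2;
    buffer.drawImage(screen.canvas, 0, 0);
    buffer.restore();

    drawObjects(buffer);      // draw the current objects on top of the faded trail

    // Copy the buffer to the visible canvas.
    screen.clearRect(0, 0, screen.canvas.width, screen.canvas.height);
    screen.drawImage(buffer.canvas, 0, 0);

    requestAnimationFrame(frame);
}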
You can create an effect like this by using globalAlpha and two different canvas objects: one for the foreground, and one for the background. For example, with the following canvas elements:
<canvas id="bg" width="256" height="256"></canvas>
<canvas id="fg" width="256" height="256"></canvas>
You could draw both a background texture and a motion-blurred copy of the foreground like so:
bg.globalAlpha = 0.1;
bg.fillStyle = bgPattern;
bg.fillRect(0, 0, bgCanvas.width, bgCanvas.height);
bg.globalAlpha = 0.3;
bg.drawImage(fgCanvas, 0, 0);
Here is a jsFiddle example of this.
OP asked how to do this with an HTML background. Since you can't keep a copy of the background, you have to hold onto copies of previous frames, and draw all of them at various alphas each frame. Nostalgia: the old 3dfx Voodoo 5 video card had a hardware feature called a "t-buffer", which basically let you do this technique with hardware acceleration.
Here is a jsFiddle example of that style. This is nowhere near as performant as the previous method, though.
What you are doing in the example is partially clearing the screen with a semi-transparent color, but as it is, you are always going to "add" to the alpha channel until it reaches 1 (no transparency).
To have this work with a transparent canvas (so you can see what lies below), you would have to subtract from the alpha value instead of adding to it, but I don't know a way to do this with the available tools, except by running over all the pixels one by one and decreasing the alpha value, which would be really, really slow.
If you are keeping track of the entities on screen, you can do this by spawning new entities as the mouse moves and then tweening their alpha level down to zero. Once they reach zero alpha, remove the entity from memory.
This requires multiple draws and will slow down rendering if you crank it up too much. Obviously the two-canvas approach is the simplest and cheapest from a render-performance perspective, but it doesn't allow you to control other features, like making the "particles" move erratically or applying physics to them!
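A small sketch of that spawn-and-fade idea (canvas, particleImage and the fade step are illustrative):
// Spawn a trail particle on every mouse move, then fade and cull them each frame.
var trail = [];

canvas.addEventListener('mousemove', function (e) {
    trail.push({ x: e.offsetX, y: e.offsetY, alpha: 1 });
});

function drawTrail(ctx) {
    for (var i = trail.length - 1; i >= 0; i--) {
        var p = trail[i];
        ctx.globalAlpha = p.alpha;
        ctx.drawImage(particleImage, p.x, p.y);   // particleImage: whatever you draw per particle
        p.alpha -= 0.05;                          // tween the alpha toward zero
        if (p.alpha <= 0) trail.splice(i, 1);     // remove fully faded entities from memory
    }
    ctx.globalAlpha = 1;
}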

Isometric Screen to Map

I'm trying to figure out how I can get the correct "active" tile under the mouse when I have "ramp" and +1 height tiles (see picture below).
When my world is flat, everything works no problem. Once I add a tile with a height of say +1, along with a ramp going back to +0, my screen -> map routine is still looking as if everything is "flat".
In the picture above, the green "ramp" is the real tile I want to render and run mouse -> map against; however, the blue tile you see "below" it is the area that actually gets calculated. So if you move your mouse into any of the dark green areas, it thinks you're on another tile.
Here is my map render (very simple)
canvas.width = canvas.width; // cheap clear in Firefox 3.6, does not work in other browsers
for (var i = 0; i < map_y; i++) {
    for (var j = 0; j < map_x; j++) {
        var xpos = (i - j) * tile_h + current_x;
        var ypos = (i + j) * tile_h / 2 + current_y;
        context.beginPath();
        context.moveTo(xpos, ypos + (tile_h / 2));
        context.lineTo(xpos + (tile_w / 2), ypos);
        context.lineTo(xpos + (tile_w), ypos + (tile_h / 2));
        context.lineTo(xpos + (tile_w / 2), ypos + (tile_h));
        context.fill();
    }
}
And here is my mouse -> map routine:
ymouse = ((2 * (ev.pageY - canvas.offsetTop - current_y) - ev.pageX + canvas.offsetLeft + current_x) / 2);
xmouse = (ev.pageX + ymouse - current_x - (tile_w / 2) - canvas.offsetLeft);
ymouse = Math.round(ymouse / tile_h);
xmouse = Math.round(xmouse / (tile_w / 2));
current_tile = [xmouse, ymouse];
I have a feeling I'll have to start over and implement a world based map system rather than a simple screen -> map routine.
Thanks.
Your assumption is correct. In order to "pick" against world geometry, your routine needs to be aware of the world (and not just the base-level tile configuration). That is, without any concept of the height of the tiles near the one that is currently picked (by your current algorithm), there's no way to determine whether a neighboring tile (or one even further away, depending on the permitted height) should be intercepted by the picking ray.
You've got the final possible point of your picking ray, already. What remains is to define the remainder of the ray, in world-space, and to check that ray for intersections with world geometry.
If, like the picture, your view angle is always 45 degrees and always from the same direction, your mouse -> map routine could use an algorithm something like:
calculate i,j of tile as you're doing currently (your final value of xmouse, ymouse)
look up height and angle of tile at i,j
given the height and angle, does this tile intersect the picking ray? If so, set lasti, lastj = i, j
increment/decrement i,j one step diagonally toward viewer
have we fallen off the edge of the map? If so, return lasti, lastj. Otherwise go back to 2.
Depending on the maximum height of a tile, you might have to check only 2 tiles, rather than going all the way to the edge of the map.
Step 3 is the tricky part, and it depends on your world geometry. Draw some triangles and you should be able to figure it out. Or you might try looking at the function intersect_quadrilateral_ray() here.
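A sketch of the loop described in steps 1-5 (tileAt and intersectsPickingRay stand in for your world lookup and the step-3 geometry test):
// Walk from the flat-ground tile toward the viewer, remembering the last tile
// whose raised geometry still intersects the picking ray.
function pickTile(i0, j0) {
    var i = i0, j = j0;
    var lasti = i0, lastj = j0;
    while (i < map_y && j < map_x) {            // step 5: stop once we fall off the map
        var tile = tileAt(i, j);                // step 2: height and ramp angle at (i, j)
        if (intersectsPickingRay(tile, i, j)) { // step 3: does this tile's top/ramp cover the mouse point?
            lasti = i;
            lastj = j;
        }
        i++;                                    // step 4: one step diagonally toward the viewer
        j++;                                    // (flip the signs/bounds if your camera faces the other way)
    }
    return [lasti, lastj];
}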
