I use getImageData and putImageData to draw on canvas from a buffer canvas. I use these methods because I have a large number of particles and these proved to provide the best performance.
Now I'd like to add rotation of particles but I'm having problems with that.
Here is a jsfiddle which uses a transformation matrix for rotation. As you can see in the picture (or fiddle), there are holes in the resulting image, which I kinda expected from using this matrix.
nx = ~~ (xx * Math.cos(angle) + yy * Math.sin(angle) + cx);
ny = ~~ (xx * Math.sin(angle) - yy * Math.cos(angle) + cy);
But I don't know how to do this better, especially since I'm looking for a performance-efficient solution.
jsfiddle demo
Image - square after rotation (square is used as a simple body):
Currently my backup plan is a procedurally generated sprite animation, prepared in advance with the standard canvas state calls: save -> translate -> rotate -> restore.
Thank you very much for any directions you can give me.
The problem is that you are trying to map a single pixel to a single pixel. When you rotate an image, each pixel in the original can influence any of the surrounding pixels in the new image. You are effectively mapping the top-left corner of each pixel to its location in the new image, but you need to map the center of each pixel to its location in the new image and then check the overlap of this rotated pixel with that location and the 8 surrounding pixels in the new image.
Here you can see the effect. The yellow dots are the centers of the pixels, which find the "home" location for each pixel (i.e. where the majority of the influence will be placed). You then need to figure out what percentage of that cell in the new image (the underlying blue/white grid) is covered by the original pixel (the black box surrounding the yellow dot). Once you figure out the home location's influence, you need to repeat that process for the 8 surrounding pixels with respect to the current pixel in the original image. In your current code you are using the top-left corner of each pixel to find the home pixel in the new image; you should use the center of the pixel.
Since multiple iterations might affect the same pixel, you'll need to calculate the transformation in a buffer before drawing it to the final image. For pixels in the transformation that are not fully covered by pixels in the original image, figure out the percentage of the pixel that is covered and use that to influence the alpha channel. You'll have to take care when applying the pixels to the final image that you account for the alpha portion and blend with what's already there.
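If the full area-coverage approach is more work than you need, the usual way to avoid the holes entirely is to loop over destination pixels and sample the source through the inverse rotation (reverse mapping) instead of pushing source pixels forward. Here is a minimal nearest-neighbour sketch, not your exact code; it assumes the ImageData constructor is available (otherwise use context.createImageData):
// Reverse mapping: for every destination pixel, find where it came from in the
// source and copy that pixel (nearest neighbour). No holes appear because every
// destination pixel gets exactly one lookup.
function rotateImageData(src, angle) {
    var w = src.width, h = src.height;
    var dst = new ImageData(w, h);                       // assumption: ImageData constructor supported
    var cx = w / 2, cy = h / 2;
    var cos = Math.cos(-angle), sin = Math.sin(-angle);  // inverse rotation
    for (var y = 0; y < h; y++) {
        for (var x = 0; x < w; x++) {
            // centre of the destination pixel, relative to the rotation centre
            var dx = x + 0.5 - cx, dy = y + 0.5 - cy;
            // rotate back into source space and snap to the nearest source pixel
            var sx = Math.floor(dx * cos - dy * sin + cx);
            var sy = Math.floor(dx * sin + dy * cos + cy);
            if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                var si = (sy * w + sx) * 4, di = (y * w + x) * 4;
                dst.data[di]     = src.data[si];
                dst.data[di + 1] = src.data[si + 1];
                dst.data[di + 2] = src.data[si + 2];
                dst.data[di + 3] = src.data[si + 3];
            }
        }
    }
    return dst;
}
Nearest-neighbour sampling keeps hard pixel edges; for smoother results you would blend the four source pixels around the back-mapped point, which is essentially the per-pixel influence described above.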
I'm currently working on a mini map for a game which keeps track of different items of importance on and off the screen. When I first created the mini map through a secondary camera rendered onto a texture and displayed on screen in a miniature display, it was rectangular. I was able to ensure that when an item of importance left the view of the map, an arrow pointing to the target showed up and remained on the edge of the map. It was basically clamping the x & y positions of the arrow to half the camera view's width and height (with some suitable margin space).
Anyway. Now I am trying to make the mini map circular and while I have the proper render mask on to guarantee that shape of the mini map, I am having difficulties in clamping the arrows to the shape of the new mini-map. In the rectangular mini map, the arrows stayed in the corners while clamped, but obviously, circles don't have corners.
I am thinking that clamping the arrow's x & y positions has to do with the radius of the circle (half the height of the screen/minimap), but because I'm a little weak on the math side, I am kindly requesting some help. How would I clamp the arrows to the edge of the new circle shape?
The code I have now is as follows:
let {width: canvasWidth, height: canvasHeight} = cc.Canvas.instance.node, // 960, 640
targetScreenPoint = cc.Camera.main.getWorldToScreenPoint(this.targetNode.position)
// other code for rotation of arrow, etc...
// FIXME: clamp to the edge of the minimap mask, which is circular
// This is the old clamping code for a rectangle shape.
let arrowPoint = targetScreenPoint;
arrowPoint.x = utils.clamp(arrowPoint.x, (-canvasWidth / 2) + this.arrowMargin,
(canvasWidth / 2) - this.arrowMargin);
arrowPoint.y = utils.clamp(arrowPoint.y, (-canvasHeight / 2) + this.arrowMargin,
(canvasHeight /2) - this.arrowMargin);
this.node.position = cc.v2(arrowPoint.x, arrowPoint.y);
I should probably also note that all mini-map symbols and arrows technically are on screen but are only displayed on the secondary camera through a culling mask... you know, just in case it helps.
Just for anyone else looking to do the same, I basically normalized the direction to the target node that the arrow points at and multiplied it by the radius of the image mask (with appropriate margin space).
Since the player node and the centre of the mask are at the origin, I just got the difference from the player. The (640/2) is the radius (640 being the mask's diameter), which of course shouldn't be hardcoded, but meh for now. Thanks to those who commented and got me thinking in the right direction.
let direction = this.targetNode.position.sub(this.playerNode.position).normalize();
let arrowPos = direction.mul((640/2) - this.arrowMargin);
this.node.position = arrowPos;
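If you only want the arrow when the target is actually outside the circular minimap (and want it to follow the target otherwise), one option is to compare the offset's length to the radius first. A rough sketch, where minimapRadius is a placeholder value in the same units as the node positions:
// Hypothetical radius of the circular mask (640/2 in the example above).
let minimapRadius = 640 / 2;
let offset = this.targetNode.position.sub(this.playerNode.position);
if (offset.mag() > minimapRadius - this.arrowMargin) {
    // Target is off the minimap: pin the arrow to the circle's edge.
    this.node.position = offset.normalize().mul(minimapRadius - this.arrowMargin);
} else {
    // Target is inside the minimap: follow it directly (or hide the arrow here).
    this.node.position = offset;
}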
In the example in Leaflet (for a non-geographic image), they set "bounds". I am trying to understand how they computed the values
var bounds = [[-26.5,-25], [1021.5,1023]];
The origin is bottom-left and y increases upwards / x towards the right. How did negative numbers turn up here? Also, after experimentation, I see that the actual pixel coordinates change if you specify different coordinates for bounds. I have a custom png map which I would like to use but I am unable to proceed due to this.
Oh, you mean this image:
If you open the full file (available at https://github.com/Leaflet/Leaflet/blob/v1.4.0/docs/examples/crs-simple/uqm_map_full.png ) with an image editor, you'll see that it measures 2315x2315 pixels. Now, the pixel that represents the (0,0) coordinate is not at a corner of the image, but rather 56 pixels away from the lower-left corner of the image:
Similarly, the (1000, 1000) coordinate is about 48 pixels from the top-right corner of the image:
Therefore, if we measure pixel coordinates of the grid corners:
Game coordinate (0, 0) → Pixel coordinate (59, 56)
Game coordinate (1000, 1000) → Pixel coordinate (2264, 2267)
The problem here is finding the bounds (measured in game coordinates) of the image. Or, in other words:
Pixel coordinate (0, 0) → Game coordinate (?, ?)
Pixel coordinate (2315, 2315) → Game coordinate (?, ?)
We know that the pixel-to-game-coordinate ratio is constant, we know the image size and the distance to the coordinates grid, so we can infer stuff:
1000 horizontal game units = image width - left margin - right margin
or
1000 horizontal game units = 2315px - 59px - 51px = 2205px
therefore the pixel/game unit ratio is
2205px / 1000 game units = 2.205 px/unit
therefore the left margin of ~59px is...
~59px / (2.205 px/unit) ~= 26.76 game units
...therefore the left edge of the image is at ~ -26.76 game units. Idem for the right margin...
~51px / (2.205 px/unit) ~= 23.13 game units
...therefore the right edge of the image is at ~1023.13 game units
Repeating that for the top and bottom margins (2315px - 56px - 48px = 2211px, i.e. 2.211 px/unit vertically) we can fill in all the numbers:
Pixel coordinate (0, 0) → Game coordinate (-26.76, -25.33)
Pixel coordinate (2315, 2315) → Game coordinate (1023.13, 1021.71)
Why don't these numbers match the ones in the example exactly? Because I might have used a different pixel for measurement when I wrote that Leaflet tutorial. Still, the error is negligible.
Let me highlight a sentence from that tutorial:
One common mistake when using CRS.Simple is assuming that the map units equal image pixels. In this case, the map covers 1000x1000 units, but the image is 2315x2315 pixels big. Different cases will call for one pixel = one map unit, or 64 pixels = one map unit, or anything. Think in map units in a grid, and then add your layers (L.ImageOverlays, L.Markers and so on) accordingly.
If you have your own game map (or anything else), you should ask yourself: Where is the (0,0) coordinate? What are the coordinates of the image edges in the units I'm gonna use?
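Putting it together in code, here is a rough sketch (not the tutorial's exact code) of how you could turn your own measured margins into bounds for L.CRS.Simple. The margin and size values are the ones measured above; swap in your own measurements and image:
// Measured from the image in an image editor (values from the discussion above).
var imgSize = 2315,               // image is 2315x2315 px
    leftPx = 59, rightPx = 51,    // px from the left/right edges to the 0 and 1000 grid lines
    bottomPx = 56, topPx = 48;    // px from the bottom/top edges to the 0 and 1000 grid lines

// pixels per game unit, horizontally and vertically
var pxPerUnitX = (imgSize - leftPx - rightPx) / 1000;
var pxPerUnitY = (imgSize - bottomPx - topPx) / 1000;

// game-unit coordinates of the image corners
var xMin = -leftPx / pxPerUnitX,   xMax = 1000 + rightPx / pxPerUnitX,
    yMin = -bottomPx / pxPerUnitY, yMax = 1000 + topPx / pxPerUnitY;

// In CRS.Simple, Leaflet bounds are [[y, x], [y, x]] corner pairs.
var map = L.map('map', { crs: L.CRS.Simple, minZoom: -3 }); // minZoom lets you zoom out; adjust to taste
var bounds = [[yMin, xMin], [yMax, xMax]];
L.imageOverlay('uqm_map_full.png', bounds).addTo(map);
map.fitBounds(bounds);
This reproduces the numbers derived above, which differ slightly from the tutorial's bounds for the measurement reasons already mentioned.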
I am working on a school project that includes these conditions:
Make maze with using only JS, HTML5 and CSS.
Make a torch effect around the character. You cannot light through walls.
I started making this game with the use of canvas.
I have succeeded to make a torch effect around the character as shown here:
http://people.inf.elte.hu/tunyooo/web2/HTML5-Maze.html
However, I cannot make it NOT light through walls.
I am fairly sure I should do something like this:
Start a loop in all directions from the current position of the character, up until it reaches the view distance OR until context.getImageData() returns [0,0,0,255] (a wall). This way, I could get the character's distance from the northern, eastern, western and southern walls.
Then, I could light the maze around the character with a (viewDistance - distanceFromWall) rectangle.
Unfortunately though, after 15 hours of thinking about this I am running out of ideas how to make this work.
Any tips are appreciated.
A simpler way of doing this is (PS: I get a "forbidden" error on the link provided so I cannot see what you did):
Have a matte version of the maze, a transparent-and-white image where white represents the allowed drawing areas. This matte image should match the maze image in size and placement.
Create an off-screen canvas the size of the torch image
When you need to draw the torch first draw the matte image into the off-screen canvas. Draw it offset so the correct part of the matte is drawn. For example: if the torch will be drawn at position 100, 100 on the maze then draw the matte into the off-screen canvas at -100,-100 - or simply create the canvas the same size as the maze and draw in the matte at 0,0 and the torch at the relative position. More memory is used but simpler to maintain.
Change composite mode to source-in and then draw the torch. Change composite mode back to copy for the next draw.
Now your torch is clipped to fit within the wall. Now simply draw the off-screen canvas to your main canvas instead of the torch.
Note: it's important that the torch is made such that it cannot reach the other side of the wall (diameter size) or it will instead shine "under" the maze walls - this can be solved in other ways though, by using a matte for different zones chosen depending on player position (not shown here).
To move in the demo below just move the mouse over the canvas area.
Live demo
function mousemoved(e) {
var rect = canvas.getBoundingClientRect(), // adjust mouse pos.:
x = e.clientX - rect.left - iTorch.width * 0.5, // center of torch
y = e.clientY - rect.top - iTorch.height * 0.5;
octx.drawImage(iMatte, 0, 0); // draw matte to off-screen
octx.globalCompositeOperation = 'source-in'; // change comp mode
octx.drawImage(iTorch, x, y); // clip torch
octx.globalCompositeOperation = 'copy'; // change comp mode for next
ctx.drawImage(iMaze, 0, 0); // redraw maze
ctx.drawImage(ocanvas, 0, 0); // draw clipped torch on top
}
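For completeness, here is a rough sketch of the setup the handler above assumes. The variable names mirror the demo; the image paths are placeholders, and a real page would wait for all three images to load:
// Main canvas (the maze is drawn here) - assumed to exist in the page markup
var canvas = document.getElementById('maze'),
    ctx = canvas.getContext('2d');

// Off-screen canvas used to clip the torch against the matte
var ocanvas = document.createElement('canvas'),
    octx = ocanvas.getContext('2d');

// iMaze  = the visible maze image
// iMatte = transparent/white matte matching the maze (white = walkable)
// iTorch = the radial torch/glow image
var iMaze = new Image(), iMatte = new Image(), iTorch = new Image();
iMaze.src  = 'maze.png';        // placeholder paths
iMatte.src = 'maze-matte.png';
iTorch.src = 'torch.png';

iMaze.onload = function() {
    // simplest variant from the answer: off-screen canvas same size as the maze
    ocanvas.width  = canvas.width  = iMaze.width;
    ocanvas.height = canvas.height = iMaze.height;
    ctx.drawImage(iMaze, 0, 0);
    canvas.addEventListener('mousemove', mousemoved);
};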
In the demo the torch is of more or less random size, a bit too big in fact - something I made quick and dirty. But try to move within the maze path to see it being clipped. The off-screen canvas is added at the side of the main canvas to show what goes on.
An added bonus is that you could use the same matte for hit-testing.
Make your maze hallways into clipping paths.
Your torch effects will be contained within the clipping paths.
[ Addition to answer based on questioner's comments ]
To create a clipping path from your existing maze image:
Open up your maze image in a paint program. The mouse cursor's X/Y position is usually displayed as you move over the maze image.
Record the top-left and bottom-right of each maze hallway in an array.
var hallways=[];
hallways.push({left:100, top:50, right:150, bottom:65}); // for each hallway
Listen for mouse events and determine which hallway the mouse is in.
// hallwayIndex is the index of the hallway the mouse is inside
var hallwayIndex=-1;
// x=mouse's x coordinate, y=mouse's y coordinate
for(var i=0;i<hallways.length;i++){
var hall=hallways[i];
if(x>=hall.left &&
x<=hall.right &&
y>=hall.top &&
y<=hall.bottom)
{ hallwayIndex=i; }
}
Redraw the maze on the canvas
Create a clipping path for the current hallway:
var hall=hallways[hallwayIndex];
var width=hall.right-hall.left;
var height=hall.bottom-hall.top;
ctx.save();   // so the clip can be removed again afterwards
ctx.beginPath();
ctx.rect(hall.left,hall.top,width,height);
ctx.clip();
Draw the player+torch into the hallway (the torch will not glow thru the walls), then call ctx.restore() to drop the clip before the next redraw.
There is a brilliant article on this topic: http://www.redblobgames.com/articles/visibility/
Doing it accurately like that, however, is a lot of work. If you want to go with a quick and dirty solution, I would suggest the following. Build the world from large blocks (think retro pixels). This makes collision detection simpler too. Now you can consider all points within the torch radius. Walk in a straight line from the character to the point. If you reach the point without hitting a wall, make it bright.
(You could do the same with 1-pixel blocks too, but you might hit performance issues.)
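A sketch of that quick-and-dirty approach, under the assumption that the maze is stored as a 2D array where grid[row][col] is true for a wall block and cellSize is the block size in pixels (both are hypothetical names, not from the question):
// Returns true if the straight line from (x0, y0) to (x1, y1) (in pixels)
// reaches the target without crossing a wall cell.
function canSee(grid, cellSize, x0, y0, x1, y1) {
    var dx = x1 - x0, dy = y1 - y0;
    var steps = Math.ceil(Math.max(Math.abs(dx), Math.abs(dy)) / (cellSize / 2));
    for (var i = 1; i <= steps; i++) {
        var x = x0 + dx * i / steps,
            y = y0 + dy * i / steps;
        var col = Math.floor(x / cellSize),
            row = Math.floor(y / cellSize);
        if (grid[row] && grid[row][col]) return false; // hit a wall block
    }
    return true;
}

// Light every block within the torch radius that the character can "see".
function lightTorch(ctx, grid, cellSize, charX, charY, radius) {
    for (var y = charY - radius; y <= charY + radius; y += cellSize) {
        for (var x = charX - radius; x <= charX + radius; x += cellSize) {
            var d = Math.hypot(x - charX, y - charY);
            if (d <= radius && canSee(grid, cellSize, charX, charY, x, y)) {
                // simple falloff: brighter near the character
                ctx.fillStyle = 'rgba(255, 220, 120, ' + (1 - d / radius) * 0.5 + ')';
                ctx.fillRect(x, y, cellSize, cellSize);
            }
        }
    }
}
This is only a sketch of the idea, not tuned code; the larger the blocks, the fewer line walks per frame.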
I have an <img> within a <div> which can be moved around using four directional buttons, for example:
The image is obviously larger than its container, hence the directional buttons to move it in different directions.
There is also a zoom control where you can zoom in and out. I set up the scaling method with ease, by just applying a zoom factor as a percentage to the base width and height:
scale: function(zoom)
{
image.width = baseWidth * zoom;
image.height = baseHeight * zoom;
}
// Zoom in 50%.
Scene.scale(1.5);
This is fine, however the image scales from the top-left, meaning that the image looks like it's getting sucked out towards the top-left when zooming in and spat back out when zooming out.
I'm trying to have the zoom effect apply from the centre of the container, like this:
But I'm finding it hard to get my head around the mathematics required to move the image after scaling applies to give this effect.
The closest I've gotten is to move the image based on the difference between the current zoom and the new zoom level, but it's still slightly off and gives a 'curved' effect when zooming.
Is there a common formula used to reposition an image so that it scales around a different origin (i.e. not the top-left (0,0))?
This is what it looks like currently.
You have to take your original coordinates and calculate the center of your original image, that is x_center = x_orig + width_orig / 2. Then you can calculate the new x coordinate of your scaled image: x_new = x_center - width_new / 2. The same applies for y. Then move your scaled image to these new coordinates. If you do this each time you scale the image, it will look as though it is scaled around its center.
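In code, building on the scale method from the question, that could look roughly like this. Here image.x and image.y stand in for however you track the image's offset (CSS left/top, a transform, etc.), so treat them as placeholders:
scale: function(zoom)
{
    // Centre of the image before scaling (x/y are the current left/top offsets).
    var centerX = image.x + image.width / 2,
        centerY = image.y + image.height / 2;

    image.width  = baseWidth * zoom;
    image.height = baseHeight * zoom;

    // Re-position so the same point stays in the centre after scaling.
    image.x = centerX - image.width / 2;
    image.y = centerY - image.height / 2;
}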
I'm making a top-down shooter game that relies on the avatar always being rotated pointing to the mouse cursor. I achieve rotation like this:
//Rendering.
context.save(); //Save the context state, we're about to change it a lot.
context.translate(position[0] + picture.width/2, position[1] + picture.height/2); //Translate the context to the center of the image.
context.rotate(phi); //Rotate the context by the object's phi.
context.drawImage(picture.image, -picture.width/2, -picture.height/2); //Draw the image at the appropriate position (center of the image = [0, 0]).
context.restore(); //Get the state back.
When phi is zero, the image is rendered at its normal quality, with sharp edges and detectable pixels. But when I set phi to a nonzero value (actually, when it's not 0, Pi/2, Pi, Pi+Pi/2 or 2Pi), the image loses its sharpness and the individual pixels can't be seen anymore, because they are blurred out.
Here's a screenshot (sorry about the general bad quality of the screenshot, but I think that the difference is more than noticeable):
This is, well, a bit unacceptable. I can't have the images always blurred out! Why is this happening and can I solve it?
You could try
context.imageSmoothingEnabled = false;
See docs:
context.imageSmoothingEnabled [ = value ]
Returns whether pattern fills and the drawImage() method will attempt to smooth images if they have to rescale them (as opposed to just rendering the images with "big pixels").
Can be set, to change whether images are smoothed (true) or not (false).
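Note that older browser versions exposed this flag behind vendor prefixes, so you may want to set those as well (setting an unknown property is harmless where unsupported):
context.imageSmoothingEnabled = false;       // standard
context.mozImageSmoothingEnabled = false;    // older Firefox
context.webkitImageSmoothingEnabled = false; // older Chrome/Safari
context.msImageSmoothingEnabled = false;     // IE / old Edge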
If you want a true pixel-art retro style effect, you'd need to manually create rotated sprite images for several angles, look up the appropriate sprite for the current value of phi, and draw it without rotation. This obviously requires a fair amount of art work!
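If hand-drawing every angle is too much work, you can also prerender the rotated frames once at load time into an off-screen sprite sheet (similar to the "prepared in advance" sprite animation the question mentions as a backup plan), so each frame is then drawn as a plain axis-aligned blit. A rough sketch, assuming a fixed number of angle steps:
// Prerender `steps` rotated copies of `img` into one sprite-sheet canvas.
function buildRotatedFrames(img, steps) {
    var size = Math.ceil(Math.sqrt(img.width * img.width + img.height * img.height)), // fits any rotation
        sheet = document.createElement('canvas'),
        sctx = sheet.getContext('2d');
    sheet.width = size * steps;
    sheet.height = size;
    sctx.imageSmoothingEnabled = false; // optional: keep hard pixels while prerendering
    for (var i = 0; i < steps; i++) {
        sctx.save();
        sctx.translate(i * size + size / 2, size / 2);
        sctx.rotate(i * 2 * Math.PI / steps);
        sctx.drawImage(img, -img.width / 2, -img.height / 2);
        sctx.restore();
    }
    return { sheet: sheet, size: size, steps: steps };
}

// At render time: pick the nearest prerendered frame for phi and blit it unrotated.
function drawRotated(ctx, frames, phi, x, y) {
    var i = Math.round(phi / (2 * Math.PI) * frames.steps) % frames.steps;
    if (i < 0) i += frames.steps;
    ctx.drawImage(frames.sheet, i * frames.size, 0, frames.size, frames.size,
                  x - frames.size / 2, y - frames.size / 2, frames.size, frames.size);
}
// Example usage: var frames = buildRotatedFrames(picture.image, 36); // 36 steps of 10 degrees
You only pay the interpolation cost once per frame image, and every draw afterwards lands on whole pixels.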
If you are rotating images around their center point, make sure the image itself has an even number of pixels. Once you end up on odd coordinates, the image data needs to be interpolated for the target canvas. Apple has some nice documentation on translating and rotating the canvas.
So for any image, as suggested above use rounding to snap to full pixels.
context.translate(Math.floor(img.width/2), Math.floor(img.height/2));
This way every source pixel of your image will always be drawn exactly into a pixel inside the canvas and blurring does not occur. This however is only true for multiples of 90 degrees.
It seems that all browsers do, to some extent, antialiasing in image drawing, so you will probably have to provide rotated images as sprites.
According to this Chromium bug report you might be lucky there if they haven't fixed it yet. Read through and you'll learn that Ian Hickson likely opposed making antialiased image drawing optional.
(picture.width/2, picture.height/2) point won't always work.
(Math.floor(picture.width/2) + 0.5, Math.floor(picture.height/2) + 0.5) should help.
Well, actually it is something you cannot get around.
If you rotate an image by a multiple of 90 degrees, your library should be smart enough so that no interpolation is applied.
But as soon as you rotate an image by an angle different from a multiple of 90 degrees, you need to interpolate. As a consequence, you get that smoothing. If you are interested in the theory, you may look for a book on computer graphics or image processing.
For the concrete case of image rotation you may have a look at this paper:
http://bigwww.epfl.ch/publications/unser9502.html