Understanding HTML 5 canvas scale and translate order - javascript

I'm doing some graphing around a center X,Y of 0,0. When it's time to render, I reposition with translate, and then use scale to make the graph fill the canvas (i.e. scale everything by 50%, for example).
I notice that it matters whether you call scale and then translate, or translate and then scale, and I can't quite get my head around it. This is a problem because things don't always fit, and since my mental model isn't complete I'm having a hard time fixing it.
Can someone explain why the order of the scale and translate calls matter?

So let's draw a grid on a 300x300 canvas:
http://jsfiddle.net/simonsarris/4uaZy/
This will do. Nothing special. A red line denotes where the origin is located by running through (0,0) and extending very very far, so when we translate it we'll see something. The origin here is the top left corner, where the red lines meet at (0,0).
All of the translations below happen before we draw the grid, so we'll be moving the grid. This lets you see exactly what's happening to the origin.
So let's translate the canvas by 100,100, moving it down and to the right:
http://jsfiddle.net/simonsarris/4uaZy/1/
So we've translated the origin, which is where the red X is centered. The origin is now at 100,100.
Now we'll translate and then scale. Notice how the origin is in the same place as in the last image; everything is just twice as large.
http://jsfiddle.net/simonsarris/4uaZy/2/
Boom. The origin is still at 100,100. Everything is puffed up by 2, though. The origin moved first, then everything got puffed up in place.
Now let's look at them in reverse. This time we scale first, so everything is fat from the start:
http://jsfiddle.net/simonsarris/4uaZy/3/
Everything is puffed by 2. The origin is at 0,0, its starting point.
Now we do a scale and then a translate.
http://jsfiddle.net/simonsarris/4uaZy/4/
We're still translating by 100,100, but the origin has moved by 200,200 in real pixels. Compare this with the image two steps back.
This is because everything that happens after a scale must be scaled, including additional transforms. So translating by (100,100) on a 2x-scaled canvas leads to it moving by (200,200) in real pixels.
The takeaway here is that changing the transformation affects how things are drawn (or transformed!) from then on. If you scale by 2 and then translate, the translation itself will be scaled by 2.
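To see the two orders side by side in code, here is a minimal sketch; drawGrid is a hypothetical stand-in for the fiddle's grid-drawing code, and the canvas id is assumed:

var ctx = document.getElementById('canvas').getContext('2d');

// Translate then scale: the origin lands at (100, 100) in real pixels,
// and everything drawn afterwards is doubled in place.
ctx.save();
ctx.translate(100, 100);
ctx.scale(2, 2);
drawGrid(ctx); // hypothetical helper
ctx.restore();

// Scale then translate: the translation itself is doubled,
// so the origin lands at (200, 200) in real pixels.
ctx.save();
ctx.scale(2, 2);
ctx.translate(100, 100);
drawGrid(ctx);
ctx.restore();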
If you want to see, mathematically, what is happening at each step I encourage you to take a look at the code here:
https://github.com/simonsarris/Canvas-tutorials/blob/master/transform.js
This mimics the entire transformation process done by canvas and lets you see how previous transforms modify those that come afterwards.

Scaling and rotation are done with respect to the origin, so if your transform includes a translation, for example, then the order becomes important.
Here's a good read:
Why Transformation Order Is Significant
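As a quick illustration, here is a sketch using the 2D canvas API (assuming a context ctx); the same reasoning applies to any matrix-based transform:

// Translate then rotate: the origin moves to (100, 0) and the axes
// rotate there, so shapes spin in place around that point.
ctx.save();
ctx.translate(100, 0);
ctx.rotate(Math.PI / 2);
ctx.restore();

// Rotate then translate: the translation itself is rotated, so the
// origin ends up at (0, 100) instead.
ctx.save();
ctx.rotate(Math.PI / 2);
ctx.translate(100, 0);
ctx.restore();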

Related

Writing fragment shaders: cannot make sense of how the uniforms are defined

I'm trying to make custom filters with Phaser, but I don't get how the uniforms, and vTextureCoord in particular, are specified. Here's a JSFiddle (EDIT: ignore the image; the minimal case lies in the square gradient):
Why isn't the top-right corner white? I've set both the filter resolution and the sprite size to 256, yet vTextureCoord only goes from [0,0] to [.5,.5] (or so it seems).
Try dragging the sprite: it seems to be blocked by a wall at the top and left borders. It's only shader-related, though, as the game object itself is dragged correctly. How come?
I pulled my hair out over this one during the last Ludum Dare, trying to figure out the pixel position within the sprite (i.e. [0,0] at the bottom-left corner and [sprite.w, sprite.h] at the top-right one)... but I couldn't find any reliable way to compute that, whatever the sprite's position and size.
Thanks for your help!
EDIT: As emackey pointed out, it seems like either Phaser or Pixi (I'm not sure at which level it's handled) uses an intermediate texture. Because of this, the uSampler I get is not the original texture but a modified one that is, for example, shifted/cropped if the sprite extends beyond the top-left corner of the screen. The uSampler and vTextureCoord work well together, so as long as I'm making simple things like color tweaks all seems well, but for toying with texture coordinates it's simply not reliable.
Can a Phaser/Pixi guru explain why it works that way, and what I'm supposed to do to get clear coordinates and work with my actual source texture? I managed to hack a shader by "fixing vTextureCoord" and plugging my texture into iChannel0, but this feels a bit hacky.
Thanks.
I'm not too familiar with Phaser, but we can shed a little light on what that fragment shader is really doing. Load your jsFiddle and replace the GLSL main body with this:
void main() {
    gl_FragColor = vec4(vTextureCoord.x * 2., vTextureCoord.y * 2., 1., 1.);
    gl_FragColor *= texture2D(uSampler, vTextureCoord) * 0.6 + 0.4;
}
The above filter shader is a combination of the original texture (with some gray added) and your colors, so you can see both the texture and the UVs at the same time.
You're correct that vTextureCoord only goes to 0.5, hence the * 2. above, but that's not the whole story: Try dragging your sprite off the top-left. The texture slides but the texture coordinates don't move!
How is that even possible? My guess is that the original sprite texture is being rendered to an intermediate texture, using some of the sprite's location info for the transform. By the time your custom filter runs, your filter GLSL code is running on what's now the transformed intermediate texture, and the texture coordinates no longer have a known relation to the original sprite texture.
If you run the Chrome Canvas Inspector you can see that indeed there are multiple passes, including a render-to-texture pass. You can also see that the filter pass is using coordinates that appear to be the ratio of the filter area size to the game area size, which in this case is 0.5 on both dimensions.
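In other words (note: the 512px game size below is my inference from the observed 0.5, not something stated in the question):

// The filter pass appears to address only the sub-rectangle that the
// filter area occupies within the game area's render texture:
var filterSize = 256;               // set in the question
var gameSize = 512;                 // assumed game area size
var maxUV = filterSize / gameSize;  // 0.5 — the ceiling vTextureCoord hits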
I don't know Phaser well enough to know if there's a quick fix for any of this. Maybe you can add some uniforms to the filter that would give the shader the extra transform it needs, if you can figure out where that comes from exactly. Or perhaps there's a way to attach a shader directly to the sprite itself (there's a null field of the same name), so you could possibly run your GLSL code there instead of in the filter. I hope this answer has at least explained the "why" of your two questions above.

fixing height issues in isometric perlin noise

I've been using Perlin noise to generate tile-based, isometric landscapes. So far I've been using the noise value as a height map for the tiles themselves: Math.floor(noise * 10), basically. This generates perfectly nice-looking but linear maps. However, I found the "mountains" rather boring looking, so I applied an exponent: Math.floor(Math.pow((noise / 4), 2.3)). This pushes the higher values up, producing the image attached.
These height values are then stored in a 2D grid, giving me the x, y and z I need to draw the map to the screen.
The drawback is kind of obvious: there are gaps in my mountains that should be filled up. I'm just not sure where to start, since that is information I can no longer store in a 2D grid. I guess I could cheat using "longer" tiles, but that feels kind of lame. Any suggestions?
If you need more info I'm happy to explain. Maybe I'm barking up the wrong tree.
Before you draw the first tile far in the back, have a look at the two neighboring tiles to its left and right that are closer to the viewer. Get the lowest height of the two and check whether it is lower than your back tile's height minus one (because that would cause a gap). Now you can draw the pile in the back starting at this "low height" and stack tiles on it until you reach the height you want. Then you can draw the next tile that is closer to the viewer using the same algorithm.
Edit: But I am just wondering if it would maybe look a bit awkward with so many stacked tiles. Maybe it's better to just stretch the soil layer down to the "low height".
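A minimal sketch of that stacking idea, assuming a heights[row][col] grid drawn back to front; drawTile and the neighbor indexing are hypothetical stand-ins for your own renderer and layout:

// Draw back to front; for each tile, fill the column down to the lowest
// closer neighbor so no gap shows between height steps.
for (var row = 0; row < heights.length; row++) {
  for (var col = 0; col < heights[row].length; col++) {
    var h = heights[row][col];
    // Hypothetical: the two neighbors closer to the viewer.
    var left = (row + 1 < heights.length) ? heights[row + 1][col] : h;
    var right = (col + 1 < heights[row].length) ? heights[row][col + 1] : h;
    var base = Math.min(left, right, h);
    // Stack tiles from the lowest exposed height up to the tile's own height.
    for (var z = base; z <= h; z++) {
      drawTile(col, row, z); // hypothetical draw call
    }
  }
}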

Canvas transformation transforms drawImage

I am currently working on a game (Purely a hobby) found at http://game.mersholm.dk
I've got most things working out great (transformation, selection, movement, objects, etc.), but there's one nut that I just cannot crack.
I am trying to add an isometric building using drawImage (experimenting). Of course the image also undergoes a transformation due to the transformation matrix I've defined. This just makes the image twirl and rotate.
If I reset the matrix, draw the image, and set the matrix again, it breaks my screen-to-world coordinate calculations.
How would I go about adding isometric graphics to the world without the matrix twirling them?
Best regards,
Jonas
The right way to go when drawing an image with a transform is this:
1. Save the context.
2. Reset the context's transform.
3. Translate to the screen point where you will start drawing the image.
4. Apply the transform required for the image: rotate/scale/skew.
5. Draw the image at (0,0).
6. Restore the context.
If you are confident about the previous state of the context, you don't have to reset it. But then, if you don't reset the context (which is faster), just be sure to use world OR screen coordinates according to the current scale/transform.
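A minimal sketch of those steps, assuming a 2D context ctx and an already-loaded buildingImage; screenX, screenY and angle are placeholders:

ctx.save();
ctx.setTransform(1, 0, 0, 1, 0, 0);  // 2. reset to the identity matrix
ctx.translate(screenX, screenY);     // 3. move to the image's screen position
ctx.rotate(angle);                   // 4. any per-image transform (optional)
ctx.drawImage(buildingImage, 0, 0);  // 5. draw at the (translated) origin
ctx.restore();                       // 6. the world transform is back in force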

Rotating a Rectangle from any point

I'm trying to write a script (javascript) in an API of a Virtual Table Top program so I can manipulate some tokens (Car Wars :)).
I'm sort of finding the answer, but it feels like I'm struggling and reinventing the wheel, so I thought I'd ask for help. One reason I'm getting confused is that the program uses +y pointing down with degrees going clockwise, which is different from what all the trig formulas expect (counterclockwise, with +y up).
Here is what I have access to: the rectangle rotates around its centre, and I have the centre point (x, y), width, height, and rotation. I've got the code working for moving the rectangle in the direction of its rotation, side to side, up and down, etc. Now I need to be able to rotate it around any of the four corners; any point would be nice, but the four corners are all that's needed.
It won't let me include an image since I'm new, so I hope the description is good enough. I had an image all done up. :(
In the API I can't actually draw the rectangle; I can only set its rotation and centre value. So my thought was: if I can find the x,y of one corner currently, rotate it the desired degrees around the centre (I can do this easily by setting the rectangle's rotation), and find the new x,y of that same corner, then I will know the offset and can apply it to the centre (that's how the rectangle is moved as well).
So I need to be able to find the x,y of any corner of a rectangle at any given starting angle, then again at a new angle after rotating about its centre. That offset can then easily be applied to the centre x,y, and the rectangle will seem to have rotated around one of its corners.
Thanks for any help you can give. I'm hoping I will eventually figure it out, just writing this description out actually has helped me think it through. But I'm currently stuck!
Konrad
The trick to rotating around an arbitrary point in 2D (e.g. one of the four corners of the rectangle) is to first translate the vertices of the shape so that the point around which you want to rotate is at the origin (i.e. (0,0)).
To achieve this:
1. Translate your rectangle by (-x, -y).
2. Rotate your rectangle by the desired angle.
3. Translate your rectangle by (x, y) to place it back where it originally was.
where (x, y) are the coordinates of the point around which to rotate.
You can use negative angles to adjust for clockwise rotations.
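Here's a minimal sketch of those three steps as a helper that rotates a single point (px, py) around a pivot (cx, cy); all the names are mine, not from the API:

// Rotate (px, py) around (cx, cy) by angleDeg. With +y pointing down
// (as in most screen coordinate systems), a positive angle here comes
// out clockwise on screen; negate the angle for counterclockwise.
function rotateAround(px, py, cx, cy, angleDeg) {
  var a = angleDeg * Math.PI / 180;
  var dx = px - cx, dy = py - cy;                // 1. move pivot to origin
  var rx = dx * Math.cos(a) - dy * Math.sin(a);  // 2. rotate
  var ry = dx * Math.sin(a) + dy * Math.cos(a);
  return { x: rx + cx, y: ry + cy };             // 3. move back
}

// Usage for the corner trick described in the question: find where a
// corner moves when the rectangle spins about its centre, then shift
// the centre by the opposite offset so the corner appears to stay put.
// var after = rotateAround(corner.x, corner.y, centre.x, centre.y, deg);
// centre.x += corner.x - after.x;
// centre.y += corner.y - after.y;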
There is a lot of info about this on the net, for example:
http://www.siggraph.org/education/materials/HyperGraph/modeling/mod_tran/2drota.htm

Webkit 3D CSS. Rotate camera like in a First Person Shooter

What I want to achieve is a camera rotation like http://www.keithclark.co.uk/labs/3dcss/demo/ . It's not perfect and sometimes the camera breaks, but that's the idea.
I'd like the rotation to be similar to a human view, but I've only managed to obtain a rotation around a certain point. This is an example of what I obtained: http://jsfiddle.net/gaAXk/3/.
As I said before, I would like a human-like behaviour.
I also tried with -webkit-transform-origin but with no better result.
Any help/suggestion will be highly appreciated.
The problem here is the following:
To get a human-like behaviour, when the point of view moves you should calculate new x/y/z positions for the objects, not just a new rotation angle (in the case of a rotation, for instance).
CSS transforms work in the following way: we give a perspective, for example 800px, to a scene. Objects are then visible with a Z position of up to 800px; if the Z position is, say, 1000px, the element will be behind our point of view, so we won't be able to see it.
That said, after a rotation you should calculate the new positions for the items based on our new point of view.
To be clearer I've updated your example with much simpler code (it only supports rotation and there's just one image): http://jsfiddle.net/gaAXk/12/
The perspective in the example is 800px.
The image is initially placed at x=0px, y=0px, z=0px. So it will be visible in front of us at a "distance" of 800px.
When we rotate the point of view, the element should move along a circumference around the point of view, so the x, z positions and the rotation angle of the element need to be updated.
The element in the example moves along a circumference with an 800px radius (the calculatePos() function does the trick).
The same calculation has to be redone whenever we change position (the point of view gets closer to some objects and further from others).
This isn't so trivial. If anyone has better solutions (I'm not a 3D expert), I will be glad to hear some.
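For reference, here is a hedged reconstruction of the kind of math calculatePos() performs in the updated fiddle; the names and sign conventions are mine, and the actual fiddle may differ:

// Keep the element on a circle of radius R around the viewer's eye
// while the "camera" turns by angleDeg. With perspective: 800px, the
// eye sits 800px in front of the z = 0 plane.
var R = 800;

function calculatePos(angleDeg) {
  var a = angleDeg * Math.PI / 180;
  var x = R * Math.sin(a);      // swing sideways around the eye
  var z = R - R * Math.cos(a);  // and back toward the eye's plane
  // rotateY keeps the element facing the viewer; flip the signs to
  // turn the camera the other way.
  return 'translate3d(' + -x + 'px, 0, ' + z + 'px) rotateY(' + -angleDeg + 'deg)';
}

// Usage (element and angle are placeholders):
// element.style.webkitTransform = calculatePos(angle);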
