How to calculate FOV from VRFrameData? - javascript

There used to be field of view information in VREyeParameters, but that was deprecated. So now I am wondering: is it possible to calculate it using the view/projection matrices provided by VRFrameData?

The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport. It transforms from view space to clip space. Clip space coordinates are homogeneous coordinates; they are transformed to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates.
At perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport. The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
If you want to know the corners of the camera frustum in view space, then you have to transform the corners of normalized device space (-1, -1, -1), ..., (1, 1, 1) by the inverse projection matrix. To get Cartesian coordinates, the X, Y, and Z components of the result have to be divided by its W (4th) component.
glMatrix is a library which provides matrix operations and data types such as mat4 and vec4:
const projection  = mat4.clone( VRFrameData.leftProjectionMatrix );
const inverse_prj = mat4.create();
mat4.invert( inverse_prj, projection );

const pt_ndc  = [-1, -1, -1];
const v4_ndc  = vec4.fromValues( pt_ndc[0], pt_ndc[1], pt_ndc[2], 1 );
const v4_view = vec4.create();
vec4.transformMat4( v4_view, v4_ndc, inverse_prj );
const pt_view = [v4_view[0]/v4_view[3], v4_view[1]/v4_view[3], v4_view[2]/v4_view[3]];
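If you need all eight corners of the frustum, the same unprojection can be run in a small loop (a sketch, reusing inverse_prj from above):

const frustum_corners = [];
for (const z of [-1, 1])
  for (const y of [-1, 1])
    for (const x of [-1, 1]) {
      const v = vec4.transformMat4(vec4.create(), vec4.fromValues(x, y, z, 1), inverse_prj);
      frustum_corners.push([v[0] / v[3], v[1] / v[3], v[2] / v[3]]);
    }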
The transformation of view coordinates to world coordinates can be done with the inverse view matrix.
const view = mat4.clone( VRFrameData.leftViewMatrix );
const inverse_view = mat4.create();
mat4.invert( inverse_view, view );

const v3_view  = vec3.clone( pt_view );
const v3_world = vec3.create();
vec3.transformMat4( v3_world, v3_view, inverse_view );
Note that the left and right projection matrices are not symmetric. This means the line of sight is not in the center of the frustum, and the matrices differ for the left and the right eye.
Further note, a perspective projection matrix looks like this:
r = right, l = left, b = bottom, t = top, n = near, f = far

2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)   -(f+n)/(f-n)    -1
0              0             -2*f*n/(f-n)     0
where:

a  = w / h
ta = tan( fov_y / 2 )

2 * n / (r-l) = 1 / (ta * a)
2 * n / (t-b) = 1 / ta
If the projection is symmetric, where the line of sight is in the center of the viewport and the field of view is not displaced, then the matrix can be simplified:
1/(ta*a)   0       0               0
0          1/ta    0               0
0          0      -(f+n)/(f-n)    -1
0          0      -2*f*n/(f-n)     0
This means the field of view angle can be calculated by:
fov_y = Math.atan( 1/prjMat[5] ) * 2; // prjMat[5] is prjMat[1][1]
and the aspect ratio by:
aspect = prjMat[5] / prjMat[0];
The calculation of the field of view angle also works if the projection matrix is only symmetric along the horizontal axis, i.e. if -bottom is equal to top. For the projection matrices of the 2 eyes this should be the case.
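If you additionally need the four frustum half-angles of such an asymmetric matrix (similar to the up/down/left/right degrees the old VRFieldOfView provided), they can be recovered from the matrix fields too. A sketch, assuming the column-major glMatrix layout used above; the angles come out signed, so left and bottom are usually negative:

function frustumHalfAngles(prjMat) {
  // prjMat[0] = 2*n/(r-l), prjMat[8] = (r+l)/(r-l)
  // prjMat[5] = 2*n/(t-b), prjMat[9] = (t+b)/(t-b)
  return {
    left:   Math.atan((prjMat[8] - 1) / prjMat[0]),  // atan(l/n)
    right:  Math.atan((prjMat[8] + 1) / prjMat[0]),  // atan(r/n)
    bottom: Math.atan((prjMat[9] - 1) / prjMat[5]),  // atan(b/n)
    top:    Math.atan((prjMat[9] + 1) / prjMat[5]),  // atan(t/n)
  };
}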
Furthermore:
z_ndc = 2.0 * depth - 1.0;
z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));
by substituting the fields of the projection matrix this is:
A = prj_mat[2][2]
B = prj_mat[3][2]
z_eye = B / (A + z_ndc)
This means the distance to the near plane and to the far plane can be calculated by:
A = prj_mat[10]; // prj_mat[10] is prj_mat[2][2]
B = prj_mat[14]; // prj_mat[14] is prj_mat[3][2]
near = B / (A - 1);
far  = B / (A + 1);
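As a quick sanity check, with n = 1 and f = 100: A = -101/99 ≈ -1.0202 and B = -200/99 ≈ -2.0202, so B / (A - 1) = -2.0202 / -2.0202 = 1 = near, and B / (A + 1) = -2.0202 / -0.0202 ≈ 100 = far.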

SOHCAHTOA, pronounced "So", "cah", "toe-ah":
SOH -> Sine(angle) = Opposite over Hypotenuse
CAH -> Cosine(angle) = Adjacent over Hypotenuse
TOA -> Tangent(angle) = Opposite over Adjacent
This tells us the relationships of the various sides of a right triangle to the various trigonometry functions.
So looking at a frustum image, we can take the right triangle from the eye to the near plane to the top of the frustum to compute the tangent of half the field of view, and we can use the arctangent to turn a tangent back into an angle.
Since we know the result of the projection matrix takes our world space frustum and converts it to clip space and ultimately to normalized device space (-1, -1, -1) to (+1, +1, +1), we can get the positions we need by multiplying the corresponding points in NDC space by the inverse of the projection matrix:
eye = 0, 0, 0
centerAtNearPlane    = inverseProjectionMatrix * (0, 0, -1)
topCenterAtNearPlane = inverseProjectionMatrix * (0, 1, -1)
Then
opposite = topCenterAtNearPlane.y
adjacent = -centerAtNearPlane.z
halfFieldOfView = Math.atan2(opposite, adjacent)
fieldOfView = halfFieldOfView * 2
Let's test
const m4 = twgl.m4;
const fovValueElem = document.querySelector("#fovValue");
const resultElem = document.querySelector("#result");
let fov = degToRad(45);

function updateFOV() {
  fovValueElem.textContent = radToDeg(fov).toFixed(1);

  // get a projection matrix from somewhere (like VR)
  const projection = getProjectionMatrix();

  // now that we have a projection matrix, recompute the FOV from it
  const inverseProjection = m4.inverse(projection);
  const centerAtZNear = m4.transformPoint(inverseProjection, [0, 0, -1]);
  const topCenterAtZNear = m4.transformPoint(inverseProjection, [0, 1, -1]);

  const opposite = topCenterAtZNear[1];
  const adjacent = -centerAtZNear[2];
  const halfFieldOfView = Math.atan2(opposite, adjacent);
  const fieldOfView = halfFieldOfView * 2;

  resultElem.textContent = radToDeg(fieldOfView).toFixed(1);
}
updateFOV();

function getProjectionMatrix() {
  // doesn't matter. We just want a projection matrix as though
  // someone else made it for us.
  const aspect = 2 / 1;
  // choose some zNear and zFar
  const zNear = .5;
  const zFar = 100;
  return m4.perspective(fov, aspect, zNear, zFar);
}

function radToDeg(rad) {
  return rad / Math.PI * 180;
}

function degToRad(deg) {
  return deg / 180 * Math.PI;
}

document.querySelector("input").addEventListener('input', (e) => {
  fov = degToRad(parseInt(e.target.value));
  updateFOV();
});
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<input id="fov" type="range" min="1" max="179" value="45"><label>fov: <span id="fovValue"></span></label>
<div>computed fov: <span id="result"></span></div>
Note this assumes the center of the frustum is directly in front of the eye. If it's not, then you'd probably have to compute adjacent as the length of the vector from the eye to centerAtZNear:
const v3 = twgl.v3;
...
const adjacent = v3.length(centerAtZNear);

How to detect when rotated rectangles are colliding each other

After seeing this question many times and replying with old (and not usable) code, I decided to redo everything and post about it.
Rectangles are defined by:
center: x and y for its position (remember that 0;0 is top left, so Y goes down)
size: x and y for its size
angle: its rotation (in degrees; 0 deg follows the OX axis and turns clockwise)
The goal is to know if 2 rectangles are colliding or not.
I will use JavaScript in order to demo this (and also provide code), but it can be done in every language following the same process.
Links
Final Demo on Codepen
GitHub repository
Concept
In order to achieve this we'll use projections of the corners onto the other rectangle's 2 axes (X and Y).
The 2 rectangles are only colliding when all 4 projections hit the other rectangle:
Rect Blue corners on Rect Orange X axis
Rect Blue corners on Rect Orange Y axis
Rect Orange corners on Rect Blue X axis
Rect Orange corners on Rect Blue Y axis
Process
1- Find the rect's axes
Start by creating 2 vectors for the axes from 0;0 (center of rect) to X (OX) and Y (OY), then rotate both of them in order to align them with the rectangle's axes.
Wikipedia about rotate a 2D vector
const getAxis = (rect) => {
  const OX = new Vector({x: 1, y: 0});
  const OY = new Vector({x: 0, y: 1});
  // Do not forget to transform degrees into radians
  const RX = OX.Rotate(rect.angle * Math.PI / 180);
  const RY = OY.Rotate(rect.angle * Math.PI / 180);
  return [
    new Line({...rect.center, dx: RX.x, dy: RX.y}),
    new Line({...rect.center, dx: RY.x, dy: RY.y}),
  ];
}
Where Vector is a simple x,y object
class Vector {
  constructor({x = 0, y = 0} = {}) {
    this.x = x;
    this.y = y;
  }
  Rotate(theta) {
    return new Vector({
      x: this.x * Math.cos(theta) - this.y * Math.sin(theta),
      y: this.x * Math.sin(theta) + this.y * Math.cos(theta),
    });
  }
}
And Line represents a line (a ray) using 2 vectors:
origin: Vector for the start position
direction: Vector for the unit direction
class Line {
  constructor({x = 0, y = 0, dx = 0, dy = 0}) {
    this.origin = new Vector({x, y});
    this.direction = new Vector({x: dx, y: dy});
  }
}
Step Result
2- Use Rect Axis to get corners
First we want to extend our axes (they are unit-sized) to half of the width (for X) and half of the height (for Y), so that by adding them (and their inverses) to the center we can get all the corners.
const getCorners = (rect) => {
  const axis = getAxis(rect);
  const RX = axis[0].direction.Multiply(rect.w / 2);
  const RY = axis[1].direction.Multiply(rect.h / 2);
  return [
    rect.center.Add(RX).Add(RY),
    rect.center.Add(RX).Add(RY.Multiply(-1)),
    rect.center.Add(RX.Multiply(-1)).Add(RY.Multiply(-1)),
    rect.center.Add(RX.Multiply(-1)).Add(RY),
  ]
}
Using these 2 new methods on Vector:
// Add(5)
// Add(Vector)
// Add({x, y})
Add(factor) {
  const f = typeof factor === 'object'
    ? {x: 0, y: 0, ...factor}
    : {x: factor, y: factor};
  return new Vector({
    x: this.x + f.x,
    y: this.y + f.y,
  });
}

// Multiply(5)
// Multiply(Vector)
// Multiply({x, y})
Multiply(factor) {
  const f = typeof factor === 'object'
    ? {x: 0, y: 0, ...factor}
    : {x: factor, y: factor};
  return new Vector({
    x: this.x * f.x,
    y: this.y * f.y,
  });
}
Step Result
3- Get corners projections
For every corner of a rectangle, get the projected coordinates on both axes of the other rectangle.
Simply by adding this method to the Vector class:
// project this point onto a line (direction is assumed to be a unit vector)
Project(line) {
  const dotvalue = line.direction.x * (this.x - line.origin.x)
                 + line.direction.y * (this.y - line.origin.y);
  return new Vector({
    x: line.origin.x + line.direction.x * dotvalue,
    y: line.origin.y + line.direction.y * dotvalue,
  });
}
(Special thanks to Mbo for the solution to get the projection.)
Step Result
4- Select the external corners on the projections
In order to sort all the projected points along the rect's axis and take the min and max projected points, we can:
Create a vector to represent: rect center to projected corner
Get the distance using the Vector magnitude function.
get magnitude() {
  return Math.sqrt(this.x * this.x + this.y * this.y);
}
Then use the dot product to know whether the vector is facing the same direction as the axis or the inverse (where the signed distance is negative):
getSignedDistance = (rect, line, corner) => {
  const projected = corner.Project(line);
  const CP = projected.Minus(rect.center);  // Minus: the subtraction twin of Add
  // Sign: same direction as the axis -> true
  const sign = (CP.x * line.direction.x) + (CP.y * line.direction.y) > 0;
  return CP.magnitude * (sign ? 1 : -1);
}
Then, using a simple loop and a min/max test, we can find the 2 external corners. The segment between them is the projection of a rect on the other one's axis.
Step result
5- Final: Do all projections hit the rect?
Using a simple 1D test along the axis we can know if they hit or not:
const isProjectionHit = (minSignedDistance < 0 && maxSignedDistance > 0)
  || Math.abs(minSignedDistance) < rectHalfSize
  || Math.abs(maxSignedDistance) < rectHalfSize;
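Putting the steps together, here is a minimal sketch of the full test (assuming the helpers above, plus the Minus method on Vector, the subtraction twin of Add; halfSizes maps to the rect.w / rect.h used in getCorners):

const projectionsHitRect = (rect, other) => {
  const axes = getAxis(rect);
  const corners = getCorners(other);
  const halfSizes = [rect.w / 2, rect.h / 2];  // X axis <-> width, Y axis <-> height
  return axes.every((line, i) => {
    let min = Infinity, max = -Infinity;
    for (const corner of corners) {
      const d = getSignedDistance(rect, line, corner);
      min = Math.min(min, d);
      max = Math.max(max, d);
    }
    // simple 1D test along this axis
    return (min < 0 && max > 0)
      || Math.abs(min) < halfSizes[i]
      || Math.abs(max) < halfSizes[i];
  });
};

// colliding only when all 4 projections hit
const isColliding = (rectA, rectB) =>
  projectionsHitRect(rectA, rectB) && projectionsHitRect(rectB, rectA);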
Done
Testing all 4 projections will give you the final result. =] !!
Hope this answer will help as many people as possible. Any comments are appreciated.

Converting an equirectangular depth map into 3d point cloud

I have a 2D equirectangular depth map that is a 1024 x 512 array of floats, each ranging between 0 and 1. Here is an example (truncated to grayscale):
I want to convert it to a set of 3D points but I am having trouble finding the right formula to do so - it's sort of close - pseudocode here (using a vec3() library):
for (var y = 0; y < array_height; ++y) {
  var lat = (y / array_height) * 180.0 - 90.0;
  var rho = Math.cos(lat * Math.PI / 180.0);
  for (var x = 0; x < array_width; ++x) {
    var lng = (x / array_width) * 360.0 - 180.0;
    var pos = new vec3();
    pos.x = (r * Math.cos(lng * Math.PI / 180.0));
    pos.y = (Math.sin(lat * Math.PI / 180.0));
    pos.z = (r * Math.sin(lng * Math.PI / 180.0));
    pos.norm();
    var depth = parseFloat(depth[(y * array_width) + x] / 255);
    pos.multiply(depth);
    // at this point I can plot pos as an X, Y, Z point
  }
}
What I end up with isn't quite right, and I can't tell why not. I am certain the data is correct. Can anyone suggest what I am doing wrong?
Thank you.
Molly.
Well, it looks like the texture is a half-sphere in spherical coordinates:
x axis is longitude angle a <0, 180> [deg]
y axis is latitude angle b <-45, +45> [deg]
intensity is radius r <0, 1> [-]
So for each pixel simply:
1. linearly convert x,y to a,b
in degrees:
a = x * 180 / (width - 1)
b = -45 + (y * 90 / (height - 1))
or in radians:
a = x * M_PI / (width - 1)
b = -0.25 * M_PI + (0.5 * y * M_PI / (height - 1))
2. apply the spherical to Cartesian conversion
x = r * cos(a) * cos(b);
y = r * sin(a) * cos(b);
z = r * sin(b);
It looks like you have coded this conversion wrongly, as the latitude angle should appear in all of x, y, z, not just y! Also you should not normalize the resulting position; that would corrupt the shape!
3. store the point into the point cloud.
When I put it all together in VCL/C++ (sorry, I do not code in JavaScript):
List<double> pnt;   // 3D point list x0,y0,z0,x1,y1,z1,...
void compute()
    {
    int x,y,xs,ys;          // texture position and size
    double a,b,r,da,db;     // spherical position and angle steps
    double xx,yy,zz;        // 3D point
    DWORD *p;               // texture pixel access
    // load and prepare BMP texture
    Graphics::TBitmap *bmp=new Graphics::TBitmap;
    bmp->LoadFromFile("map.bmp");
    bmp->HandleType=bmDIB;
    bmp->PixelFormat=pf32bit;
    xs=bmp->Width;
    ys=bmp->Height;
/*
    // 360x180 deg
    da=2.0*M_PI/double(xs-1);
    db=1.0*M_PI/double(ys-1);
    b=-0.5*M_PI;
*/
    // 180x90 deg
    da=1.0*M_PI/double(xs-1);
    db=0.5*M_PI/double(ys-1);
    b=-0.25*M_PI;
    // process all its pixels
    pnt.num=0;
    for (y=0; y<ys; y++,b+=db)
     for (p=(DWORD*)bmp->ScanLine[y],a=0.0,x=0; x<xs; x++,a+=da)
        {
        // pixel access
        r=DWORD(p[x]&255);  // obtain intensity from texture <0..255>
        r/=255.0;           // normalize to <0..1>
        // convert to 3D
        xx=r*cos(a)*cos(b);
        yy=r*sin(a)*cos(b);
        zz=r*sin(b);
        // store to point cloud
        pnt.add(xx);
        pnt.add(yy);
        pnt.add(zz);
        }
    // clean up
    delete bmp;
    }
Here is a preview for 180x90 deg:
and a preview for 360x180 deg:
Not sure which one is correct (as I do not have any context for your map), but the first option looks more correct to me...
In case it's the second, just use the doubled numbers for the interpolation in bullet #1 (the commented-out 360x180 block in the code above).
Also, if you want to remove the background, just ignore r == 1 pixels, simply by testing the intensity against the max value (before normalization); in my case by adding this line:
if (r==255) continue;
after this one:
r=DWORD(p[x]&255);
In your case (you have <0..1> already) you should test r >= 0.9999 or something like that instead.
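Since the question is in JavaScript, here is a minimal sketch of the same loop ported to JS (assuming the 180x90 deg case and that depth is a flat array of width*height values already in <0..1>; the names are illustrative):

function depthMapToPointCloud(depth, width, height) {
  const points = [];  // list of [x, y, z] triples
  for (let y = 0; y < height; ++y) {
    const b = -0.25 * Math.PI + (y * 0.5 * Math.PI) / (height - 1);  // latitude
    for (let x = 0; x < width; ++x) {
      const a = (x * Math.PI) / (width - 1);                         // longitude
      const r = depth[y * width + x];                                // radius <0..1>
      if (r >= 0.9999) continue;  // ignore background
      // latitude goes into all three components; no normalization
      points.push([
        r * Math.cos(a) * Math.cos(b),
        r * Math.sin(a) * Math.cos(b),
        r * Math.sin(b),
      ]);
    }
  }
  return points;
}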

How Does a camera convert from clip space into screen space?

I want to render a bunch of 3D points onto a 2D canvas without WebGL.
I thought clip space and screen space were the same thing, and that a camera is used to convert from 3D world space to 2D screen space,
but apparently they are not.
So in WebGL, when setting gl_Position, it's in clip space;
later this position is converted to screen space by WebGL, and gl_FragCoord is set.
How is this calculation done, and where?
Also, the camera matrix and view projection matrices have nothing to do with converting clip space to screen space.
I could have a 3D world space that fits into clip space, and then I wouldn't need to use a camera, right?
If all my assumptions are true, I need to learn how to convert from clip space into screen space.
Here's my code:
const uMatrix = mvpMatrix(modelMatrix(transform));

// transform each vertex into 2d screen space
vertices = vertices.map(vertex => {
  let res = mat4.multiplyVector(uMatrix, [...vertex, 1.0]);
  // res is a vec4 element, in clip space,
  // how to transform this into screen space?
  return [res[0], res[1]];
});
// viewProjectionMatrix calculation
const mvpMatrix = modelMatrix => {
  const { pos: camPos, target, up } = camera;
  const { fov, aspect, near, far } = camera;

  let camMatrix = mat4.lookAt(camPos, target, up);
  let viewMatrix = mat4.inverse(camMatrix);
  let projectionMatrix = mat4.perspective(fov, aspect, near, far);
  let viewProjectionMatrix = mat4.multiply(projectionMatrix, viewMatrix);

  return mat4.multiply(viewProjectionMatrix, modelMatrix);
};
The camera mentioned in this article transforms clip space to screen space. If so, it shouldn't be named a camera, right?
First the geometry is clipped, according to the clip space coordinate (gl_Position). The clip space coordinate is a homogeneous coordinate. The condition for a homogeneous coordinate to be in clip space is:
-w <= x, y, z <= w.
The clip space coordinate is transformed to a Cartesian coordinate in normalized device space, by Perspective divide:
ndc_position = gl_Position.xyz / gl_Position.w
The normalized device space is a cube, with the left bottom front of (-1, -1, -1) and the right top back of (1, 1, 1).
The x and y component of the normalized device space coordinate is linear mapped to the viewport, which is set by gl.viewport (See WebGL Viewport). The viewport is a rectangle with an origin (x, y) and a width and a height:
xw = (ndc_position.x + 1) * (width / 2) + x
yw = (ndc_position.y + 1) * (height / 2 ) + y
xw and yw can be accessed by gl_FragCoord.xy in the fragment shader.
The z component of the normalized device space coordinate is linear mapped to the depth range, which is by default [0.0, 1.0], but can be set by gl.depthRange. See Viewport Depth Range. The depth range consists of a near value and a far value. far has to be greater than near and both values have to be in [0.0, 1.0]:
depth = (ndc_position.z + 1) * (far-near) / 2 + near
The depth can be accessed by gl_FragCoord.z in the fragment shader.
All these operations are done automatically in the rendering pipeline and are part of the Vertex Post-Processing.
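So, to answer the original question: to render without WebGL you replicate those steps yourself after the matrix multiplication. A minimal sketch for the canvas case (the viewport is assumed to cover the whole canvas; note the Y flip, since canvas Y grows downward while NDC Y grows upward):

function clipToScreen(res, width, height) {
  // res is the vec4 result of uMatrix * [x, y, z, 1] (clip space)
  const w = res[3];
  // a real renderer clips first; here we just reject points behind the eye
  if (w <= 0) return null;
  const ndcX = res[0] / w;  // perspective divide
  const ndcY = res[1] / w;
  return [
    (ndcX + 1) * (width / 2),   // viewport transform
    (1 - ndcY) * (height / 2),  // flip Y for canvas coordinates
  ];
}

// usage in the question's map:
// vertices = vertices.map(vertex =>
//   clipToScreen(mat4.multiplyVector(uMatrix, [...vertex, 1.0]), canvas.width, canvas.height));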

Calculate a 3D vector out of another vector and a rotation

So this is a kind of difficult question. I am using three.js and I need to calculate a specific vector.
I have one vector (let's call it C) and I have a rotation (r) in the form of an Euler. Now I want to calculate a second vector (X) such that X, minus itself with the rotation Euler applied, equals the first vector.
So in pseudo code:
X - X(r) = C
Convert the Euler into a 3x3 matrix. Note that three.js uses Tait-Bryan angles rather than proper Euler angles. Construct the required matrix R from the axis rotation matrices by multiplying in the order Rz * Ry * Rx.
Note that X = I * X where I is the 3x3 identity matrix.
The equation becomes (I - R) * X = C, which is simple to invert: X = inverse(I - R) * C. (Matrix3.getInverse)
EDIT: how to compute R:
Use standard rotation matrices:
For euler = (x, y, z), R = Rz(z) * Ry(y) * Rx(x):
var c_x = Math.cos(euler.x), s_x = Math.sin(euler.x);
var Rx = (new Matrix3()).set(
  1,   0,    0,
  0, c_x, -s_x,
  0, s_x,  c_x
);
// and similarly with Ry and Rz
Finally:
var R = Rz.multiply(Ry.multiply(Rx));  // multiply mutates in place; R aliases Rz
// Matrix3.set takes row-major arguments, while .elements is stored column-major,
// so index the elements accordingly when building I - R:
var ImR = (new Matrix3()).set(
  1.0 - R.elements[0],      -R.elements[3],      -R.elements[6],
       -R.elements[1], 1.0 - R.elements[4],      -R.elements[7],
       -R.elements[2],      -R.elements[5], 1.0 - R.elements[8]
);
// multiply ImR.getInverse() with C to get X
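To finish that last comment as code, a minimal sketch (assuming C is a THREE.Vector3 and an older three.js where Matrix3.getInverse(m) exists; newer releases use invert() instead):

var ImRinv = new THREE.Matrix3().getInverse(ImR);  // newer three.js: ImR.clone().invert()
var X = C.clone().applyMatrix3(ImRinv);            // X minus rotated X now equals C
// caveat: I - R is singular when the rotation is the identity, so there is no unique X there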

HTML5 Canvas Grid Tilt [duplicate]

I was trying to do a perspective grid on my canvas, and I've adapted a function from another website, with this result:
function keystoneAndDisplayImage(ctx, img, x, y, pixelHeight, scalingFactor) {
  var h = img.height,
      w = img.width,
      numSlices = Math.abs(pixelHeight),
      sliceHeight = h / numSlices,
      polarity = (pixelHeight > 0) ? 1 : -1,
      heightScale = Math.abs(pixelHeight) / h,
      widthScale = (1 - scalingFactor) / numSlices;

  for (var n = 0; n < numSlices; n++) {
    var sy = sliceHeight * n,
        sx = 0,
        sHeight = sliceHeight,
        sWidth = w;

    var dy = y + (sliceHeight * n * heightScale * polarity),
        dx = x + ((w * widthScale * n) / 2),
        dHeight = sliceHeight * heightScale,
        dWidth = w * (1 - (widthScale * n));

    ctx.drawImage(img, sx, sy, sWidth, sHeight,
                  dx, dy, dWidth, dHeight);
  }
}
It creates an almost-good perspective grid, but it isn't scaling the height, so every square has the same height. Here's a working jsFiddle and what it should look like, just below the canvas. I can't think of any math formula to distort the height in proportion to the "perspective distance" (top).
I hope you understand. Sorry for language errors. Any help would be greatly appreciated. Regards
There is sadly no proper way besides using a 3D approach. But luckily it is not so complicated.
The following will produce a grid that is rotatable around the X axis (as in your picture), so we only need to focus on that axis.
To understand what goes on: we define the grid in Cartesian coordinate space. That is a fancy way of saying we are defining our points as vectors and not absolute coordinates. That is to say one grid cell can go from 0,0 to 1,1 instead of, for example, 10,20 to 45,45, just to take some numbers.
At the projection stage we project these Cartesian coordinates into our screen coordinates.
The result will be like this:
ONLINE DEMO
OK, let's dive into it - first we set up some variables that we need for projection etc.:
fov = 512,          /// field of view, kind of the lens; smaller values = more spheric
viewDist = 22,      /// view distance, higher values = further away
w = ez.width / 2,   /// center of screen
h = ez.height / 2,
angle = -27,        /// grid angle
i, p1, p2,          /// counter and two points (corners)
grid = 10;          /// grid size in Cartesian
To adjust the grid we don't adjust the loops (see below) but alter fov and viewDist, as well as modify grid to increase or decrease the number of cells.
Let's say you want a more extreme view - by setting fov to 128 and viewDist to 5 you will get this result using the same grid and angle:
The "magic" function doing all the math is as follows:
function rotateX(x, y) {
  var rd, ca, sa, ry, rz, f;

  rd = angle * Math.PI / 180;  /// convert angle into radians
  ca = Math.cos(rd);
  sa = Math.sin(rd);

  ry = y * ca;                 /// convert y value as we are rotating
  rz = y * sa;                 /// only around x. Z will also change

  /// Project the new coords into screen coords
  f = fov / (viewDist + rz);
  x = x * f + w;
  y = ry * f + h;

  return [x, y];
}
And that's it. Worth mentioning is that it is the combination of the new Y and Z that makes the lines smaller at the top (at this angle).
Now we can create a grid in Cartesian space like this and rotate those points directly into screen coordinate space:
/// create vertical lines
for (i = -grid; i <= grid; i++) {
  p1 = rotateX(i, -grid);
  p2 = rotateX(i, grid);
  ez.strokeLine(p1[0], p1[1], p2[0], p2[1]); /// from easyCanvasJS, see demo
}

/// create horizontal lines
for (i = -grid; i <= grid; i++) {
  p1 = rotateX(-grid, i);
  p2 = rotateX(grid, i);
  ez.strokeLine(p1[0], p1[1], p2[0], p2[1]);
}
Also notice that position 0,0 is the center of the screen. This is why we use negative values to get out on the left side or upwards. You can see that the two center lines are straight lines.
And that's all there is to it. To color a cell you simply select the Cartesian coordinate and then convert it by calling rotateX(), and you will have the coordinates you need for the corners.
For example - a random cell number is picked (between -10 and 10 on both the X and Y axes):
c1 = rotateX(cx, cy);         /// upper left corner
c2 = rotateX(cx + 1, cy);     /// upper right corner
c3 = rotateX(cx + 1, cy + 1); /// bottom right corner
c4 = rotateX(cx, cy + 1);     /// bottom left corner

/// draw a polygon between the points
ctx.beginPath();
ctx.moveTo(c1[0], c1[1]);
ctx.lineTo(c2[0], c2[1]);
ctx.lineTo(c3[0], c3[1]);
ctx.lineTo(c4[0], c4[1]);
ctx.closePath();

/// fill the polygon
ctx.fillStyle = 'rgb(200,0,0)';
ctx.fill();
An animated version that can help see what goes on.
