calculate canvas context.setTransform matrix based on a 4x4 matrix - javascript

I am trying to simulate 3d rotation of a bitmap using JavaScript and HTML5 canvas (without WebGL). I think the way to go is to compute the terms for the 2D matrix of the canvas context.setTransform method, but I can't figure out how to obtain that matrix, ideally from a general 4x4 matrix representing the desired transformation, or from the desired final position of the bitmap in pixels (I can compute the desired final coordinates in pixels by projecting the corners with the 4x4 matrix and the projection-view matrix).
In this fiddle I have been playing with a couple of angles (representing the rotation of a 3d camera with two degrees of freedom) to calculate shear terms for the setTransform matrix, but I can't get a clear insight into the right procedure to follow: http://jsfiddle.net/cesarpachon/GQvp2/
context.save();
// alpha and beta drive the two shear terms; setTransform(a, b, c, d, e, f)
// applies the matrix [a c e / b d f], so b and c are the skew terms
var scalex = 1;
var scaley = 1;
var offx = 0;
var offy = 0;
var skewx = Math.sin(alpha);
var skewy = Math.sin(beta);
context.setTransform(scalex, skewx, skewy, scaley, offx, offy);
context.drawImage(image, 0, 0, width, height); // drawImage takes width, then height
context.restore();

A severe limitation when projecting 3d into 2d is that canvas 2d transforms are affine.
This means any 2d transform will always map the image's rectangle to a parallelogram.
3d perspective most often requires at least a trapezoidal result, which affine transforms cannot produce.
Bottom line: you can't consistently get 3d perspective from 2d transforms.
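If you only need the affine part (rotation, scale, shear, translation, with any perspective terms simply dropped), you can read the six setTransform values straight off the 4x4. A minimal sketch, assuming a column-major 4x4 in the glMatrix layout:
// take the x/y terms of a column-major 4x4 and feed them to setTransform;
// setTransform(a, b, c, d, e, f) applies the matrix [a c e / b d f]
function setTransformFrom4x4(context, m) {
  context.setTransform(
    m[0], m[1],   // first column's x/y
    m[4], m[5],   // second column's x/y
    m[12], m[13]  // translation x/y
  );
}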
One workaround is to use image slicing to fit an image into 3d coordinates
http://www.subshell.com/en/subshell/blog/image-manipulation-html5-canvas102.html
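The rough idea of the slicing approach (a sketch, not the linked article's exact code): draw the image in thin vertical strips, each scaled to an interpolated height, so the strips approximate a trapezoid:
// approximate a vertical perspective tilt by drawing the image in
// 1px-wide strips, each scaled to the height interpolated between the edges
function drawTrapezoid(context, image, x, y, width, leftHeight, rightHeight) {
  for (var i = 0; i < width; i++) {
    var t = i / width;
    var h = leftHeight + (rightHeight - leftHeight) * t;
    context.drawImage(image,
      i / width * image.width, 0, image.width / width, image.height, // source strip
      x + i, y + (leftHeight - h) / 2, 1, h);                        // destination strip
  }
}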
Another more mathematically intense workaround is to map an image onto perspective triangles:
http://mathworld.wolfram.com/PerspectiveTriangles.html
Good luck with your project! :-)

Related

How do I build a camera-facing matrix with a position using gl-matrix?

I'm building a simple particle system using javascript, webgl2, and gl-matrix. I have a circle mesh, some colors, some positions, a camera view matrix and a projection matrix. I can render un-rotated dots like this:
render(viewMatrix, projectionMatrix) {
  const matrix = mat4.create();
  for (let index = 0; index < this.particleCount; index++) {
    mat4.fromTranslation(matrix, this.positions[index]);
    mat4.multiply(matrix, viewMatrix, matrix);
    mat4.multiply(matrix, projectionMatrix, matrix);
    this.renderer.render(this.mesh, this.indices, this.colors[index], matrix, this.gl.TRIANGLE_FAN);
  }
}
This produces a render in which, obviously, the particles are not facing the camera.
I'm certain there's a way to derive a single matrix that combines the camera's facing and up vectors with the position of the center of the circle, but matrix math is voodoo witch magic to me. How do I build a matrix (either including the projection or not) that translates using the position of the particle and rotates using the matrix of the camera? Do I need to know the position of the camera?
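EDIT: From what I can tell, the usual trick is to cancel the camera rotation out of the model-view matrix: after multiplying by the view matrix, overwrite the upper-left 3x3 with the identity, so the quad stays axis-aligned in view space. No camera position needed. A sketch against my render loop above:
const matrix = mat4.create();
mat4.fromTranslation(matrix, this.positions[index]);
mat4.multiply(matrix, viewMatrix, matrix); // model-view, as before
// wipe the accumulated rotation so the quad faces the camera
matrix[0] = 1; matrix[1] = 0; matrix[2] = 0;
matrix[4] = 0; matrix[5] = 1; matrix[6] = 0;
matrix[8] = 0; matrix[9] = 0; matrix[10] = 1;
mat4.multiply(matrix, projectionMatrix, matrix);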

Project visible pixels in one view onto another

In WebGL, or in pure matrix math, I would like to match the pixels in one view to the pixels in another view. That is, imagine I take the pixel at x,y = 0,0. This pixel lies on the surface of a 3d object in my world. I then orbit around the object slightly. Where does the pixel that was at 0,0 now lie in my new view?
How would I calculate a correspondence between each pixel in the first view and each pixel in the second view?
The goal of all this is to run a genetic algorithm to generate camouflage patterns that disrupt a shape from multiple directions.
So I want to know what the effect of adding a texture over the object would be from multiple angles. I want the pixel correspondences because rendering all the time would be too slow.
To transform a point from world to screen coordinates, you multiply it by the view and projection matrices. So if you have a pixel on the screen, you can multiply its coordinates (in the range -1..1 for all three axes) by the inverse transforms to find the corresponding point in world space, then multiply that by the new view/projection matrices for the next frame.
The catch is that you need the correct depth (Z coordinate) if you want to find the movement of mesh points. You can either trace a ray through that pixel and find its intersection with your mesh the hard way, or simply read the contents of the Z-buffer by rendering it to a texture first.
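With gl-matrix, the reprojection looks roughly like this (a sketch; viewProjA, viewProjB and the NDC inputs are my own names):
// map an NDC point (x, y, depth, all in -1..1) from view A to view B
const invA = mat4.invert(mat4.create(), viewProjA); // inverse of projection * view
const world = vec4.fromValues(x, y, depth, 1);
vec4.transformMat4(world, world, invA);
vec4.scale(world, world, 1 / world[3]);             // divide out w to get a world point
const clipB = vec4.transformMat4(vec4.create(), world, viewProjB);
const ndcB = [clipB[0] / clipB[3], clipB[1] / clipB[3]];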
A similar technique is used for motion blur, where a velocity of each pixel is calculated in fragment shader. A detailed explanation can be found in GPU Gems 3 ch27.
I made a jsfiddle with this technique: http://jsfiddle.net/Rivvy/f9kpxeaw/126/
Here's the relevant fragment code:
// reconstruct normalized device coordinates
ivec2 coord = ivec2(gl_FragCoord.xy);
vec4 pos = vec4(v_Position, texelFetch(u_Depth, coord, 0).x * 2.0 - 1.0, 1.0);
// convert to previous frame
pos = u_ToPrevFrame * pos;
vec2 prevCoord = pos.xy / pos.w;
// calculate velocity
vec2 velocity = -(v_Position - prevCoord) / 8.0;

How to get 3D point coordinates given UV coordinates on a 3d plane object - Threejs

I'm trying to build some simple data visualisation, and my weapon of choice is Three.js. I have a series of PlaneGeometry meshes on which I apply a transparent texture, dynamically created with a series of red squares drawn at different opacity values. My plan is to use those squares to create other meshes (e.g. CylinderGeometry) and place them on top of the red squares, with a height value based on each red square's opacity value. So far I could manage to find the UV values for each square and store them in an array, but I'm stuck at converting those red square UV coordinates to the 3D world coordinate system. I've found several resources describing the same concept applied to a sphere, and surprisingly it is pretty straightforward, but no other resources about applying the same concept to other meshes.
How can I get the 3D coordinates of those red square inside the texture?
EDIT: I think this is it:
function texturePosToPlaneWorld(planeOb, texcoord)
{
  var pos = new THREE.Vector3();
  pos.x = (texcoord.x - 0.5) * PLANESIZE;
  pos.y = (texcoord.y - 0.5) * PLANESIZE;
  pos.applyMatrix4(planeOb.matrix);
  return pos;
}
Is used like this in the jsfiddle I made: http://jsfiddle.net/L0rdzbej/2/
var texcoord = new THREE.Vector2(0.8, 0.65);
var newpos = texturePosToPlaneWorld(planeOb, texcoord);
cubeOb.position.copy(newpos);
Planes are simple. The vector from vertex A to vertex B defines the direction your texture's 'x' runs in 3d space, and the vector from A to C similarly defines the direction onto which the texture's 'y' is mapped.
Let's assume your pivot point is in the middle, so it is known in world space. As UV goes from 0 to 1, a UV coordinate of (1.0, 0.5) would then lie half the full width of the plane away from the pivot, going from the middle all the way to the edge in the 'x' direction, and right in the middle in the other direction, since V (the normalized texture y coordinate) is 0.5.
To get the coordinates of the plane's vertices in world space, you just multiply them by the object's matrix.
Given you know the size of your plane, you actually don't need to look at the vertices, as the orientation of the plane is already in the object matrix. So you just need to shift your UV coordinates to the middle pivot (subtract 0.5) and multiply by the plane size to get the point in plane space. The matrix multiplication then converts that to world space, as in the sketch below.
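If your plane is nested under other objects, the same idea works with matrixWorld instead of the local matrix (a sketch along the lines of your function above, with planeSize passed in instead of the PLANESIZE global):
function texturePosToWorld(planeOb, texcoord, planeSize) {
  // same shift-to-pivot and scale as above, converted with the world matrix
  var pos = new THREE.Vector3(
    (texcoord.x - 0.5) * planeSize,
    (texcoord.y - 0.5) * planeSize,
    0
  );
  planeOb.updateMatrixWorld();
  return pos.applyMatrix4(planeOb.matrixWorld);
}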

WebGL Vertex Space Coordinates

I'm trying to draw a simple rectangle in WebGL (I use WebGL like a 2d API). The idea is to send attributes (the points) and transform them in the vertex shader to fit on the screen. But when I render with the vertex shader gl_Position = vec4(a_point, 0.0, 1.0); I don't see anything. I looked at WebGL Fundamentals for 2d WebGL, and it does not seem to work on my computer: there are rectangles, but I think they are not at the right coordinates!
Can you explain how to draw a rectangle in a special coordinate system:
-width/2 < x < width/2
-height/2 < y < height/2
and then transform it in the vertex shader to have the same position in each browser (Chrome, Firefox, Internet Explorer 11)? It seems very simple, but I have not reached my goal. I tried to apply a transformation to the vertices in the vertex shader too. Maybe I can use the viewport?
In WebGL, all coordinates are in the range from -1.0 to +1.0. They automatically cover whatever canvas width and height you have; by default, you don't use absolute pixel numbers in WebGL.
If you set a vertex (point) to x=0.0 and y=0.0, it will be in the center of your screen. If one of the coordinates goes below -1 or above +1, it will be outside of your rendered canvas, and those pixels of your triangle won't even be passed to the fragment shader (a fragment is a pixel of your framebuffer).
This guarantees that all of your objects keep the same relative position and size, no matter how many pixels your canvas has.
If you want an object to have a specific pixel size, you can pre-calculate it like this (clip space spans 2.0 units per axis, hence the factor of 2):
var objectWidth = OBJECT_WIDTH_IN_PIXEL / CANVAS_WIDTH * 2.0;
var objectHeight = OBJECT_HEIGHT_IN_PIXEL / CANVAS_HEIGHT * 2.0;
In some cases, as you can see below, it's better to know the half width and height in the floating point -1.0 to +1.0 universe. To position this object's center in the center of your canvas, set up your vertex data like:
GLfloat vVertices[] = {
  -(objectWidth/2.0), -(objectHeight/2.0), 0.0,  // Point 1, Triangle 1
  +(objectWidth/2.0), -(objectHeight/2.0), 0.0,  // Point 2, Triangle 1
  -(objectWidth/2.0), +(objectHeight/2.0), 0.0,  // Point 3, Triangle 1
  -(objectWidth/2.0), +(objectHeight/2.0), 0.0,  // Point 4, Triangle 2
  +(objectWidth/2.0), -(objectHeight/2.0), 0.0,  // Point 5, Triangle 2
  +(objectWidth/2.0), +(objectHeight/2.0), 0.0   // Point 6, Triangle 2
};
The above vertex data sets up two triangles to create a rectangle in the center of your screen. Many of these things can be found in tutorials like WebGL Fundamentals by Greggman.
Please have a look at this post:
http://games.greggman.com/game/webgl-fundamentals/
It shows how to do 2d drawing with WebGL.
I guess you can easily adapt it to suit your need for custom 2d space coordinates.
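For the exact coordinate system asked for (x from -width/2 to +width/2, y from -height/2 to +height/2), a minimal vertex shader sketch could just divide by half the canvas resolution; u_halfResolution is a made-up uniform name you would set from JavaScript:
var vertexShaderSource = [
  'attribute vec2 a_point;',
  'uniform vec2 u_halfResolution; // (canvas.width / 2, canvas.height / 2)',
  'void main() {',
  '  // dividing pixel-centered coordinates by the half resolution lands in -1..1',
  '  gl_Position = vec4(a_point / u_halfResolution, 0.0, 1.0);',
  '}'
].join('\n');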

How to get the 2d coordinates of webgl vertices?

In this tutorial: http://learningwebgl.com/blog/?p=28 we draw a triangle and a square in 3d space, and I want to get the vertices' x,y coordinates on the canvas.
So I want to get the 2d coordinates of these vertices: http://s11.postimage.org/ig6irk9lf/static.png
Sorry for my bad english, I hope you understand it.
You have to do the same calculation that WebGL does. It takes a 3d point [X,Y,Z] to a homogeneous point [x,y,z,w] via
[x,y,z,w] = pMatrix * mvMatrix * [X,Y,Z,1]
To get normalized device coordinates, divide through by w:
[x/w, y/w, z/w]
x/w and y/w are in the range [-1,1]. To convert them to viewport coordinates, scale them according to the canvas size.
[x/w,y/w] -> [(1 + x/w)*canvas.width/2, (1 - y/w)*canvas.height/2]
Note how the 'direction' of the y coordinate changes in the last transformation.
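Put together in JavaScript (a sketch using gl-matrix style vec4 calls; the tutorial keeps mvMatrix and pMatrix as globals, here they are passed in):
// project a model-space vertex [X, Y, Z] to canvas pixel coordinates
function vertexToCanvas(vertex, mvMatrix, pMatrix, canvas) {
  var p = vec4.fromValues(vertex[0], vertex[1], vertex[2], 1);
  vec4.transformMat4(p, p, mvMatrix); // model-view
  vec4.transformMat4(p, p, pMatrix);  // projection -> clip space
  var x = p[0] / p[3];                // perspective divide -> NDC
  var y = p[1] / p[3];
  return [(1 + x) * canvas.width / 2, (1 - y) * canvas.height / 2];
}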
For a little more information, you can Google "graphics pipeline". E.g. http://en.wikipedia.org/wiki/Graphics_pipeline
You have to do the math to compute them manually. WebGL only computes them for its own purposes, i.e. rendering.
Desktop GL has ways of getting those positions back (transform feedback), but WebGL does not.
