WebGL Vertex Space Coordinates - javascript

I'm trying to draw a simple rectangle in WebGL (I use WebGL like a 2D API). The idea is to send attributes (the points) and transform them in the vertex shader to fit the screen. But when I render with the vertex shader gl_Position = vec4( a_point, 0.0, 1.0 ); I don't see anything. I looked at WebGL Fundamentals for 2D WebGL, but it doesn't seem to work on my computer: rectangles appear, but I don't think they are at the right coordinates!
Can you explain how to draw a rectangle in a custom coordinate system:
-width/2 < x < width/2
-height/2 < y < height/2
and then transform it in the vertex shader so that it has the same position in every browser (Chrome, Firefox, Internet Explorer 11)? It seems like it should be very simple, but I haven't reached my goal. I also tried applying a transformation to the vertices in the vertex shader. Maybe I can use the viewport?

In WebGL, all coordinates are in the range -1.0 to +1.0 (as floats). They automatically map to whatever canvas width and height you have; by default, you don't use absolute pixel values in WebGL.
If you set a vertex (point) to x = 0.0 and y = 0.0, it will be in the center of your screen. If one of the coordinates goes below -1 or above +1, it will be outside of your rendered canvas, and those pixels of your triangle won't even be passed to the fragment shader (a fragment is a pixel of your framebuffer).
This guarantees that all of your objects keep the same relative position and size, no matter how many pixels your canvas has.
If you want an object to have a specific pixel size, you can pre-calculate it like this (note the factor of 2: clip space spans two units per axis, from -1 to +1):
var objectWidth = OBJECT_WIDTH_IN_PIXEL / CANVAS_WIDTH * 2.0;
var objectHeight = OBJECT_HEIGHT_IN_PIXEL / CANVAS_HEIGHT * 2.0;
In some cases, as you can see below, it's useful to know the half width and half height in the -1.00 to +1.00 coordinate space. To position this object's center at the center of your canvas, set up your vertex data like this:
GLfloat vVertices[] = {
    -(objectWidth / 2.0), -(objectHeight / 2.0), 0.0,  // Point 1, Triangle 1
    +(objectWidth / 2.0), -(objectHeight / 2.0), 0.0,  // Point 2, Triangle 1
    -(objectWidth / 2.0), +(objectHeight / 2.0), 0.0,  // Point 3, Triangle 1
    -(objectWidth / 2.0), +(objectHeight / 2.0), 0.0,  // Point 4, Triangle 2
    +(objectWidth / 2.0), -(objectHeight / 2.0), 0.0,  // Point 5, Triangle 2
    +(objectWidth / 2.0), +(objectHeight / 2.0), 0.0   // Point 6, Triangle 2
};
The above vertex data sets up two triangles to create a rectangle in the center of your screen. Many of these things can be found in tutorials like WebGL Fundamentals by Greggman.
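For comparison, here is a minimal JavaScript sketch of the same two triangles on the WebGL side. This is my own illustration, assuming a compiled and linked program whose vertex shader declares attribute vec2 a_point, and a WebGL context gl:

var halfW = objectWidth / 2.0,
    halfH = objectHeight / 2.0;
// two triangles, centered on the canvas (x, y pairs)
var vertices = new Float32Array([
    -halfW, -halfH,   halfW, -halfH,   -halfW,  halfH,  // triangle 1
    -halfW,  halfH,   halfW, -halfH,    halfW,  halfH   // triangle 2
]);
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
// wire the buffer to the shader attribute and draw
var loc = gl.getAttribLocation(program, 'a_point');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES, 0, 6);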

Please have a look at this post:
http://games.greggman.com/game/webgl-fundamentals/
It shows how to do 2d drawing with WebGL.
I guess you can easily adapt it to your need for a custom 2D coordinate space.
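As a sketch of what that adaptation could look like (the uniform name u_resolution is my assumption, in the spirit of the WebGL Fundamentals article): a vertex shader that accepts the asker's centered pixel coordinates and maps them to clip space.

// GLSL vertex shader kept in a JavaScript string
var vertexShaderSource = [
    'attribute vec2 a_point;',
    'uniform vec2 u_resolution; // canvas width/height in pixels',
    'void main() {',
    '    // a_point is in -width/2..+width/2, -height/2..+height/2;',
    '    // dividing by half the resolution maps it to -1..+1 clip space',
    '    vec2 clip = a_point / (u_resolution * 0.5);',
    '    gl_Position = vec4(clip, 0.0, 1.0);',
    '}'
].join('\n');
// after linking the program:
// gl.uniform2f(gl.getUniformLocation(program, 'u_resolution'), canvas.width, canvas.height);

Because clip space itself is resolution-independent, this gives the same relative layout in every browser; only the u_resolution uniform changes with the canvas size.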

Related

3D model in HTML/CSS; Calculate Euler rotation of triangle

TL;DR: Given a set of triangle vertices and a normal vector (all in unit space), how do I calculate the X, Y, Z Euler rotation angles of the triangle in world space?
I am attempting to display a 3D model in HTML - with actual HTML tags and CSS transforms. I've already loaded an OBJ file into a JavaScript class instance.
The model is triangulated. My first aim is just to display the triangles as planes (HTML elements are rectangular) - I'll be 'cutting out' the triangle shapes with CSS clip-path later on.
I am really struggling to understand and get the triangles of the model rotated correctly.
I thought a rotation matrix could help me out, but my only experience with those is where I already have the rotation vector and I need to convert and send that to WebGL. This time there is no WebGL (or tutorials) to make things easier.
The following excerpt shows the creation/'rendering' of the faces. I'm using the face normal as the rotation, but I know this is wrong.
for (const face of _obj.faces) {
    const vertices = face.vertices.map(_index => _obj.vertices[_index]);

    const center = [
        (vertices[0][0] + vertices[1][0] + vertices[2][0]) / 3,
        (vertices[0][1] + vertices[1][1] + vertices[2][1]) / 3,
        (vertices[0][2] + vertices[1][2] + vertices[2][2]) / 3
    ];

    // Each vertex has a normal, but I am just picking the first vertex's
    // normal to use as the 'face normal'.
    const normals = face.normals.map(_index => _obj.normals[_index]);
    const normal = normals[0];

    // HTML element creation code goes here; reference is 'element'.

    // Set face position (unit space)
    element.style.setProperty('--posX', center[0]);
    element.style.setProperty('--posY', center[1]);
    element.style.setProperty('--posZ', center[2]);

    // Set face rotation, converting to degrees also.
    const rotation = [
        normal[0] * toDeg,
        normal[1] * toDeg,
        normal[2] * toDeg
    ];
    element.style.setProperty('--rotX', rotation[0]);
    element.style.setProperty('--rotY', rotation[1]);
    element.style.setProperty('--rotZ', rotation[2]);
}
The CSS first translates the face on X,Y,Z, then rotates it on X,Y,Z in that order.
I think I need to 'decompose' my triangles' rotation into separate axis rotations - i.e. rotate on X, then on Y, then on Z to get the correct rotation as per the model face.
I realise that the normal vector gives me an orientation but not a rotation around itself - I need to calculate that. I think I have to determine a vector along one triangle side and cross it with the normal, but this is something I am not clear on.
I have spent hours looking at similar questions on SO but I'm not smart enough to understand or make them work for me.
Is it possible to describe what steps to take without LaTeX equations? I'm good with pseudo code, but my math skills are severely lacking.
The full code is here: https://whoshotdk.co.uk/cssfps/ (view HTML source)
The mesh building function is at line 422.
The OBJ file is here: https://whoshotdk.co.uk/cssfps/data/model/test.obj
The Blender file is here: https://whoshotdk.co.uk/cssfps/data/model/test.blend
The mesh is just a single plane at an angle, displayed in my example (wrongly) in pink.
The world is set up so that -X is left, -Y is up, -Z is into the screen.
Thank You!
If you have a plane and want to rotate it to be in the same direction as some normal, you need to figure out the angles between that plane's normal vector and the normal vector you want. The Euler angles between two 3D vectors can be complicated, but in this case the initial plane normal should always be the same, so I'll assume the plane normal starts pointing towards positive X to make the maths simpler.
You also probably want to rotate before you translate, so that everything is easier since you'll be rotating around the origin of the coordinate system.
By taking the general 3D rotation matrix (all three 3D rotation matrices multiplied together, you can find it on the Wikipedia page) and applying it to the vector (1,0,0) you can then get the equations for the three angles a, b, and c needed to rotate that initial vector to the vector (x,y,z). This results in:
x = cos(a)*cos(b)
y = sin(a)*cos(b)
z = -sin(b)
Then rearranging these equations to find a, b and c, which will be the three angles you need (the three values of the rotation array, respectively):
a = atan(y/x)
b = asin(-z)
c = 0
So in your code this would look like:
const rotation = [
    Math.atan2(normal[1], normal[0]) * toDeg,
    Math.asin(-normal[2]) * toDeg,
    0
];
It may be that you need to use a different rotation matrix (if the order of the rotations is not what you expected) or a different starting vector (although you can just use this method and then do an extra 90 degree rotation if each plane actually starts in the positive Y direction, for example).
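As a quick worked example of these formulas (my own illustration, not part of the original answer), a face normal pointing along +Z gives:

const toDeg = 180 / Math.PI;
const normal = [0, 0, 1];
const rotation = [
    Math.atan2(normal[1], normal[0]) * toDeg, // atan2(0, 0) = 0
    Math.asin(-normal[2]) * toDeg,            // asin(-1) * toDeg = -90
    0
];
// The plane, initially facing +X, is pitched -90 degrees so it faces +Z;
// roll stays 0.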

Project visible pixels in one view onto another

In WebGL, or in pure matrix math, I would like to match the pixels in one view to another view. That is, imagine I take the pixel at x,y = 0,0. This pixel lies on the surface of a 3D object in my world. I then orbit around the object slightly. Where does the pixel that was at 0,0 now lie in my new view?
How would I calculate a correspondence between each pixel in the first view with each pixel in the second view?
The goal of all this is to run a genetic algorithm to generate camouflage patterns that disrupt a shape from multiple directions.
So I want to know what the effect of adding a texture over the object would be from multiple angles. I want the pixel correspondences because re-rendering all the time would be too slow.
To transform a point from world to screen coordinates, you multiply it by the view and projection matrices. So if you have a pixel on the screen, you can multiply its coordinates (in the range -1..1 for all three axes) by the inverse transforms to find the corresponding point in world space, then multiply that by the new view/projection matrices for the next frame.
The catch is that you need the correct depth (Z coordinate) if you want to find the movement of mesh points. For that, you can either trace a ray through that pixel and find its intersection with your mesh the hard way, or you can simply read the contents of the Z-buffer by rendering it to a texture first.
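In pure matrix math, that round trip could be sketched like this, using the gl-matrix library (the names viewProjA/viewProjB are my own; they stand for each view's combined projection * view matrix):

const { mat4, vec4 } = glMatrix;

// Map a pixel's normalized device coordinates (x, y, z all in -1..1,
// z taken from the depth buffer) from view A into view B.
function reproject(ndc, viewProjA, viewProjB) {
    const invA = mat4.create();
    mat4.invert(invA, viewProjA); // clip space of A -> world space
    const world = vec4.transformMat4(vec4.create(), [ndc[0], ndc[1], ndc[2], 1], invA);
    vec4.scale(world, world, 1 / world[3]); // undo the perspective divide
    const clip = vec4.transformMat4(vec4.create(), world, viewProjB); // world -> clip space of B
    return [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]];
}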
A similar technique is used for motion blur, where the velocity of each pixel is calculated in the fragment shader. A detailed explanation can be found in GPU Gems 3, chapter 27.
I made a jsfiddle with this technique: http://jsfiddle.net/Rivvy/f9kpxeaw/126/
Here's the relevant fragment code:
// reconstruct normalized device coordinates
ivec2 coord = ivec2(gl_FragCoord.xy);
vec4 pos = vec4(v_Position, texelFetch(u_Depth, coord, 0).x * 2.0 - 1.0, 1.0);
// convert to previous frame
pos = u_ToPrevFrame * pos;
vec2 prevCoord = pos.xy / pos.w;
// calculate velocity
vec2 velocity = -(v_Position - prevCoord) / 8.0;
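For reference, u_ToPrevFrame is presumably the previous frame's view-projection matrix multiplied by the inverse of the current one; a sketch of building it on the JavaScript side (again with gl-matrix, matrix names assumed):

const { mat4 } = glMatrix;
const toPrevFrame = mat4.create();
mat4.invert(toPrevFrame, viewProj);                    // current clip -> world
mat4.multiply(toPrevFrame, prevViewProj, toPrevFrame); // world -> previous clip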

How to get 3D point coordinates given UV coordinates on a 3d plane object - Threejs

I'm trying to build a simple data visualisation, and my weapon of choice is Three.js. I have a series of PlaneGeometry meshes on which I apply a transparent texture, dynamically created with a series of red squares drawn on it at different opacity values. My plan is to use those points to create other meshes (e.g. CylinderGeometry) and place them on top of the red squares, with a height value based on each red square's opacity value. So far I've managed to find the UV values for each square and store them in an array, but I'm stuck at converting those red squares' UV coordinates to the 3D world coordinate system. I've found several resources describing the same concept applied to a sphere, where it is surprisingly straightforward, but no resources about applying the same concept to other meshes.
How can I get the 3D coordinates of those red squares inside the texture?
EDIT: I think this is it:
function texturePosToPlaneWorld(planeOb, texcoord)
{
    var pos = new THREE.Vector3();
    pos.x = (texcoord.x - 0.5) * PLANESIZE;
    pos.y = (texcoord.y - 0.5) * PLANESIZE;
    pos.applyMatrix4(planeOb.matrix);
    return pos;
}
It is used like this in the jsfiddle I made: http://jsfiddle.net/L0rdzbej/2/
var texcoord = new THREE.Vector2(0.8, 0.65);
var newpos = texturePosToPlaneWorld(planeOb, texcoord);
cubeOb.position.copy(newpos);
Planes are simple. The edge between vertices A and B - the vector A->B - defines the direction of 'x' in your texture, and A->C similarly defines the other direction in which the plane extends in 3D space, which is where the texture's 'y' is mapped onto the plane.
Let's assume your pivot point is in the middle, so that point is known in world space. UVs go from 0 to 1, so e.g. the UV coordinate (1.0, 0.5) lies at the full width of the plane, going from the middle all the way to the edge in the plane's 'x' direction, and in the vertical middle, where V = 0.5 (V being the normalized texture y coordinate).
To get the coordinates of the plane's vertices in world space, you just multiply them by the object's transform.
Given that you know the size of your plane, you don't actually need to look at the vertices, since the orientation of the plane is already in the object matrix. You just need to shift your UV coordinates to the pivot in the middle (subtract 0.5) and multiply by the plane size to get the point in plane space; the matrix multiplication then converts that to world space.
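A variant of the asker's helper along these lines, assuming the plane was created with the built-in THREE.PlaneGeometry (whose width and height live in geometry.parameters) and that its world matrix is up to date:

function uvToWorld(planeOb, uv) {
    const width = planeOb.geometry.parameters.width;
    const height = planeOb.geometry.parameters.height;
    // shift the UV origin to the pivot in the middle, scale to plane space
    const pos = new THREE.Vector3((uv.x - 0.5) * width, (uv.y - 0.5) * height, 0);
    // plane (object) space -> world space
    return planeOb.localToWorld(pos);
}

Unlike the PLANESIZE version above, this also handles non-square planes.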

Three.js - ExtrudeGeometry using depth and a direction vector

I want to extrude a shape and create an ExtrudeGeometry, but the shape has to be extruded in a certain direction, which I have as a Vector3.
The shape is drawn in the x, y plane, and normally z is the extrude direction (extrusion depth). So a direction vector of (0, 0, 1) would result in the default extrusion, while for example (0, 0, -1) would extrude the shape in the other direction.
I first tried to use an extrude path to achieve this, but when using a path the shape is allowed to "spin" freely and the initial orientation is arbitrary. This is not what I need, the shape must stay oriented as is. You can read details on this here in my previous question.
I already came up with the idea of applying a matrix to the second half of the vertices of the resulting ExtrudeGeometry, but I cannot seem to get the geometry I want. Maybe it is my clumsy use of matrices, but I think the face normals end up pointing inside out after this trick.
Note: the direction vector will never be orthogonal to the z axis, since that would give invalid shapes.
So the question:
How do I get a reliable solution to extrude my shape in the given direction? Here is an example: the shape is a square in the x, y plane (width and length 2000), the extrusion depth is also 2000, and there are three different vectors with a drawing of the expected result, seen in 2D (front view) and 3D.
Extrude your geometry in the usual way by specifying an extrusion depth, and then apply a shear matrix to your geometry.
Here is how to specify a shear matrix that will tilt a geometry.
var matrix = new THREE.Matrix4();

var dir = new THREE.Vector3( 0.25, 1, 0.25 ); // you set this; a unit-length vector is not required

var Syx = dir.x / dir.y,
    Syz = dir.z / dir.y;

matrix.set( 1, Syx, 0, 0,
            0,   1, 0, 0,
            0, Syz, 1, 0,
            0,   0, 0, 1 );
geometry.applyMatrix4( matrix );
(The three.js coordinate system has the y-axis up -- unlike in your illustration. You will have to accommodate.)
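As a quick sanity check of what the shear does (my own illustration): the matrix maps (x, y, z) to (x + Syx*y, y, z + Syz*y), so points are displaced sideways in proportion to their height, while every cross-section stays parallel to the base - exactly the 'tilt without spinning' behaviour the question asks for.

var p = new THREE.Vector3( 0, 1, 0 ).applyMatrix4( matrix );
console.log( p ); // (Syx, 1, Syz), i.e. dir scaled by 1 / dir.y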
three.js r.113

3D normal/look-at vector from Euler angles

I'm working on a JavaScript/Canvas 3D FPS-like engine and desperately need a normal vector (or look-at vector, if you will) for near- and far-plane clipping. I have the x and y axis rotation angles and am able to do it easily with only one of them at a time, but I just can't figure out how to combine the two...
The idea is to use this vector to calculate a point in front of the camera; the near and far clipping planes must also be definable by constants, so the vector has to be normalized. I hoped that with only the angles it would be possible to get this vector's length to 1 without normalizing, but that's not the problem.
I don't have any roll (rotation around the z axis), so it's that much easier.
My math looks like this:
zNear = 200; // near plane at an arbitrary 200 "points" away from the camera position

// normal calculated with only the y rotation angle (vertical axis)
normal = {
    x: Math.sin(rotation.y),
    y: 0,
    z: Math.cos(rotation.y)
};
Then clip a point in 3D space by testing the vector from the plane to it by means of a dot product.
nearPlane = {
    x: position.x + normal.x * zNear,
    y: position.y + normal.y * zNear,
    z: position.z + normal.z * zNear
};

// test a point at x, y, z against the near clipping plane
if (
    (nearPlane.x - x) * normal.x +
    (nearPlane.y - y) * normal.y +
    (nearPlane.z - z) * normal.z < 0
) {
    return;
}
// then project the 3D point to screen
When a point is behind the player, its projection coordinates are flipped (x = -x, y = -y), so nothing makes sense any more; that's why I'm trying to clip those points away.
I want that green arrow there, but in 3D.
After some intensive brain processing, I figured out that:
My original look-at vector was (0, 0, 1)
The z-rotation angle (roll) was always 0
There was no reason for the rotation matrix found on Wikipedia not to work
By applying the full rotation matrix to the (0, 0, 1) vector and taking into account that rz = 0, the solution I got was:
normal = {
    x: Math.cos(camera.rotation.x) * Math.sin(camera.rotation.y),
    y: -Math.sin(camera.rotation.x),
    z: Math.cos(camera.rotation.y) * Math.cos(camera.rotation.x)
};
And now everything works perfectly. The error was using only the x and y rotation matrices without taking into account that rz = 0 for all angles, which changes the combined matrix slightly.
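For completeness, the derivation in plain terms (assuming the y rotation is applied after the x rotation, which is what matches the formulas above):

Rx(rx) * (0, 0, 1) = (0, -sin(rx), cos(rx))
Ry(ry) * (0, -sin(rx), cos(rx)) = (cos(rx)*sin(ry), -sin(rx), cos(rx)*cos(ry))

which is exactly the normal computed in the code.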
