How to get the 2D coordinates of WebGL vertices? - javascript

In this tutorial, http://learningwebgl.com/blog/?p=28, we draw a triangle and a square in 3D space, and I want to get the vertices' x,y coordinates on the canvas.
So I want to get the 2D coordinates of these vertices: http://s11.postimage.org/ig6irk9lf/static.png
Sorry for my bad English, I hope you understand it.

You have to do the same calculation that WebGL does. It takes a 3D point [X,Y,Z] to a homogeneous point [x,y,z,w] via
[x,y,z,w] = pMatrix * mvMatrix * [X,Y,Z,1]
Dividing through by w gives normalized device coordinates:
[x/w, y/w, z/w]
x/w and y/w are in the range [-1,1]. To convert them to canvas (viewport) coordinates, scale them according to the canvas size:
[x/w,y/w] -> [(1 + x/w)*canvas.width/2, (1 - y/w)*canvas.height/2]
Note how the 'direction' of the y coordinate changes in the last transformation.
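As a minimal sketch, assuming the combined matrix pMatrix * mvMatrix is available as a 16-element column-major array (the layout gl-matrix uses in that tutorial), the whole calculation in JavaScript might look like this (the function name is illustrative):
// Project a model-space vertex [X, Y, Z] to canvas pixel coordinates.
// `m` is assumed to be pMatrix * mvMatrix, stored column-major.
function projectToCanvas(m, vertex, canvas) {
    const X = vertex[0], Y = vertex[1], Z = vertex[2];
    // Homogeneous point: [x, y, z, w] = m * [X, Y, Z, 1]
    const x = m[0] * X + m[4] * Y + m[8] * Z + m[12];
    const y = m[1] * X + m[5] * Y + m[9] * Z + m[13];
    const w = m[3] * X + m[7] * Y + m[11] * Z + m[15];
    // Perspective divide, then map [-1, 1] to pixels (note the flipped y).
    return [
        (1 + x / w) * canvas.width / 2,
        (1 - y / w) * canvas.height / 2
    ];
}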
For a little more information, you can Google "graphics pipeline". E.g. http://en.wikipedia.org/wiki/Graphics_pipeline

You have to do the math to compute them manually. WebGL only computes them for its own purposes, i.e. rendering.
Desktop GL has ways of getting those positions back (transform feedback), but WebGL 1.0 does not.

Related

Map (vrm) animated humanoid model based on skeleton coordinates in three.js

I'm really new to three.js and animation in general, and currently pretty confused by concepts like what rotation angles are, what exactly a VRM is and how it interacts with three.js, what humanoid animation is, etc., but I will try to be as explicit as I can about my question below.
I have a sequence of frames, where each frame has a set of coordinates (x, y, z; imagine x goes from left to right on your screen, y from top to bottom, and z comes out of the screen) for human joints (e.g. left foot, right foot, left shoulder, etc.). I would like to have a 3D animated model move based on the provided coordinates.
From what I have seen people do so far (e.g. a VRM motion capture demo using pixiv three-vrm), it seems like they modify the rotation (z) of the humanoid bone node (returned by getBoneNode) in order to map the human action onto the animated model.
My questions are:
The author of the demo above only needs to compute the rotation around the z-axis since the input is a 2D video, but in my case the input is 3D coordinates; how can I calculate the rotation values? From the documentation on Object3D in three.js, it looks like the rotation is given as Euler angles.
i. How can one calculate these Euler angles given, e.g., the coordinate of the left shoulder?
ii. Which angles of which humanoid body/bone parts do I need to do this calculation for? E.g. does it even make sense to talk about the rotation of LeftShoulder or the nose?
iii. This probably is silly, but just thinking out loud here: why can't I just supply the xyz coordinate values as the position attribute of these humanoid bone nodes? E.g. something like:
currentVrm.humanoid.getBoneNode(THREE.VRMSchema.HumanoidBoneName.Neck).position.set(10, -2.5, 1)
Would this not get the animated model moving the same way as the person in the frames with the provided coordinates?
What exactly does a humanoid bone node look like, and how is it represented? The three.js docs only say it's an Object3D; it cannot be just a vector, right? From my limited understanding of Euler angles, it doesn't make complete sense to have all three Euler angles for a vector (since it can't rotate about its own axis like a cylinder). The reason I'm asking is that I'm confused about which angles need to be calculated for each humanoid bone node and how. E.g. if I have leftShoulder = (3, 11.2, -8.72), do I just calculate its angle to each of the x, y, z axes and supply these angles to the rotation attributes of the bone node?
Can't tell much about three.js, but I can tell something about VRM.
Basically you have a bone hierarchy. That is root - hips - spine - chest - neck, etc.;
from the chest you have left/right shoulder - upper arm - lower arm - hand, etc., and from the hips you have the legs and feet.
Every bone has 3 position coordinates (X, Y, Z) and a quaternion (X, Y, Z, W). This means that if you want to find the position of some bone in the world coordinate system, you have to walk the whole hierarchy (starting from the root), applying quaternions and adding positions.
For example, if I want to find the 'neck' bone position I have to:
take the 'root' coordinates and apply the 'root' quaternion;
take the 'hips' position, apply the 'hips' quaternion, and add the resulting coordinates to the 'root' coordinates;
take the 'spine' coordinates, apply the 'spine' quaternion, and add the resulting coordinates to the 'hips' result;
take the 'chest' coordinates, apply the 'chest' quaternion, and add the resulting coordinates to the 'spine' result;
take the 'neck' coordinates, apply the 'neck' quaternion, and add the resulting coordinates to the 'chest' result.
Also, 'applying a quaternion' means that you also keep the previous quaternion in mind (you do that by multiplication); that is, the resulting quaternion for 'neck' would be
q_neck_res = q_neck * q_chest * q_spine * q_hips * q_root
There is a procedure to convert between Euler angles and quaternions if needed.
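As an illustration, here is a minimal sketch of that traversal using three.js math types; the bone record layout and the chain used are assumptions for the example, not part of the VRM format itself:
// Hypothetical bone records: each entry holds a local position (THREE.Vector3)
// and a local rotation (THREE.Quaternion).
const chain = ['root', 'hips', 'spine', 'chest', 'neck'];

function neckWorldPosition(bones) {
    const worldPos = new THREE.Vector3();
    const worldQuat = new THREE.Quaternion(); // identity
    for (const name of chain) {
        const bone = bones[name];
        // Rotate the local offset by the rotation accumulated so far, then add it.
        worldPos.add(bone.position.clone().applyQuaternion(worldQuat));
        // Accumulate the rotation for the next bone in the chain
        // (parent rotation on the left, which is three.js's convention).
        worldQuat.multiply(bone.quaternion);
    }
    return worldPos;
}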

3D model in HTML/CSS; Calculate Euler rotation of triangle

TLDR; Given a set of triangle vertices and a normal vector (all in unit space), how do I calculate X, Y, Z Euler rotation angles of the triangle in world space?
I am attempting to display a 3D model in HTML - with actual HTML tags and CSS transforms. I've already loaded an OBJ file into a JavaScript class instance.
The model is triangulated. My first aim is just to display the triangles as planes (HTML elements are rectangular) - I'll be 'cutting out' the triangle shapes with CSS clip-path later on.
I am really struggling to understand and get the triangles of the model rotated correctly.
I thought a rotation matrix could help me out, but my only experience with those is where I already have the rotation vector and I need to convert and send that to WebGL. This time there is no WebGL (or tutorials) to make things easier.
The following excerpt shows the creation/'rendering' of the faces. I'm using the face normal as the rotation, but I know this is wrong.
for (const face of _obj.faces) {
    const vertices = face.vertices.map(_index => _obj.vertices[_index]);
    const center = [
        (vertices[0][0] + vertices[1][0] + vertices[2][0]) / 3,
        (vertices[0][1] + vertices[1][1] + vertices[2][1]) / 3,
        (vertices[0][2] + vertices[1][2] + vertices[2][2]) / 3
    ];
    // Each vertex has a normal but I am just picking the first vertex' normal
    // to use as the 'face normal'.
    const normals = face.normals.map(_index => _obj.normals[_index]);
    const normal = normals[0];
    // HTML element creation code goes here; reference is 'element'.
    // Set face position (unit space)
    element.style.setProperty('--posX', center[0]);
    element.style.setProperty('--posY', center[1]);
    element.style.setProperty('--posZ', center[2]);
    // Set face rotation, converting to degrees also.
    const rotation = [
        normal[0] * toDeg,
        normal[1] * toDeg,
        normal[2] * toDeg,
    ];
    element.style.setProperty('--rotX', rotation[0]);
    element.style.setProperty('--rotY', rotation[1]);
    element.style.setProperty('--rotZ', rotation[2]);
}
The CSS first translates the face on X,Y,Z, then rotates it on X,Y,Z in that order.
I think I need to 'decompose' my triangles' rotation into separate axis rotations - i.e. rotate on X, then on Y, then on Z to get the correct rotation as per the model face.
I realise that the normal vector gives me an orientation but not a rotation around itself - I need to calculate that. I think I have to determine a vector along one triangle side and cross it with the normal, but this is something I am not clear on.
I have spent hours looking at similar questions on SO but I'm not smart enough to understand or make them work for me.
Is it possible to describe what steps to take without LaTeX equations? I'm good with pseudocode but my math skills are severely lacking.
The full code is here: https://whoshotdk.co.uk/cssfps/ (view HTML source)
The mesh building function is at line 422.
The OBJ file is here: https://whoshotdk.co.uk/cssfps/data/model/test.obj
The Blender file is here: https://whoshotdk.co.uk/cssfps/data/model/test.blend
The mesh is just a single plane at an angle, displayed in my example (wrongly) in pink.
The world is set up so that -X is left, -Y is up, and -Z is into the screen.
Thank You!
If you have a plane and want to rotate it to be in the same direction as some normal, you need to figure out the angles between that plane's normal vector and the normal vector you want. The Euler angles between two 3D vectors can be complicated, but in this case the initial plane normal should always be the same, so I'll assume the plane normal starts pointing towards positive X to make the maths simpler.
You also probably want to rotate before you translate; that keeps things simpler, since you'll be rotating around the origin of the coordinate system.
By taking the general 3D rotation matrix (all three 3D rotation matrices multiplied together, you can find it on the Wikipedia page) and applying it to the vector (1,0,0) you can then get the equations for the three angles a, b, and c needed to rotate that initial vector to the vector (x,y,z). This results in:
x = cos(a)*cos(b)
y = sin(a)*cos(b)
z = -sin(b)
Then rearranging these equations to find a, b and c, which will be the three angles you need (the three values of the rotation array, respectively):
a = atan(y/x)   (use atan2(y, x) in code so the correct quadrant is chosen)
b = asin(-z)
c = 0
So in your code this would look like:
const rotation = [
    Math.atan2(normal[1], normal[0]) * toDeg,
    Math.asin(-normal[2]) * toDeg,
    0
];
It may be that you need to use a different rotation matrix (if the order of the rotations is not what you expected) or a different starting vector (although you can just use this method and then do an extra 90 degree rotation if each plane actually starts in the positive Y direction, for example).

Project visible pixels in one view onto another

In WebGL or in pure matrix math I would like to match the pixels in one view to another view. That is, imagine I take pixel with x,y = 0,0. This pixel lies on the surface of a 3d object in my world. I then orbit around the object slightly. Where does that pixel that was at 0,0 now lie in my new view?
How would I calculate a correspondence between each pixel in the first view and each pixel in the second view?
The goal of all this is to run a genetic algorithm to generate camouflage patterns that disrupt a shape from multiple directions.
So I want to know what the effect of adding a texture over the object would be from multiple angles. I want the pixel correspondences because rendering all the time would be too slow.
To transform a point from world to screen coordinates, you multiply it by the view and projection matrices. So if you have a pixel on the screen, you can multiply its coordinates (in the range -1..1 for all three axes) by the inverse transforms to find the corresponding point in world space, then multiply it by the new view/projection matrices for the next frame.
The catch is that you need the correct depth (Z coordinate) if you want to find the movement of mesh points. For that, you can either trace a ray through that pixel and find its intersection with your mesh the hard way, or you can simply read the contents of the Z-buffer by rendering it to a texture first.
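On the JavaScript side, a minimal sketch of that reprojection using three.js (the camera names are illustrative; depth is the value read back from the Z-buffer, remapped from 0..1 to -1..1):
function reprojectNDC(ndcX, ndcY, depth, cameraA, cameraB) {
    const p = new THREE.Vector3(ndcX, ndcY, depth);
    p.unproject(cameraA); // NDC in the first view -> world space
    p.project(cameraB);   // world space -> NDC in the second view
    return p;             // p.x, p.y give the new screen position in -1..1
}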
A similar technique is used for motion blur, where the velocity of each pixel is calculated in the fragment shader. A detailed explanation can be found in GPU Gems 3 ch27.
I made a jsfiddle with this technique: http://jsfiddle.net/Rivvy/f9kpxeaw/126/
Here's the relevant fragment code:
// reconstruct normalized device coordinates
ivec2 coord = ivec2(gl_FragCoord.xy);
vec4 pos = vec4(v_Position, texelFetch(u_Depth, coord, 0).x * 2.0 - 1.0, 1.0);
// convert to previous frame
pos = u_ToPrevFrame * pos;
vec2 prevCoord = pos.xy / pos.w;
// calculate velocity
vec2 velocity = -(v_Position - prevCoord) / 8.0;

three js Object3D rotation

I'm still new to using three.js. When I'm rotating a camera object, there is a rotation property that has x, y, z values.
I'm wondering where the x, y, z in Object3D rotation come from. I know the x, y, z represent the object's Euler angles in radians, but according to the link the three.js documentation provides, https://en.wikipedia.org/wiki/Euler_angles, the range of α and γ covers 2π radians and the range of β covers π radians. However, the range of each of x, y, z only covers π radians when I test it.
Starting from x=0, y=0, z=0 and looking straight up, why did only the x value change? And if the Object3D is a camera, does that mean the center pixel in the camera view represents the x-axis?
I appreciate your help.
You might work on formulating your question a bit more clearly and give code to reproduce your issue.
On the question of why only the x-value changes when you look straight up from (0,0,0): x, y and z represent the axes that you rotate around.
The standard in three.js and in 3D graphics in general is to have x as the horizontal axis, y as the vertical axis and z as the "depth" axis.
Looking straight up you will rotate 90 degrees around the horizontal x-axis, thus changing the x-value of the rotation.
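A minimal sketch to illustrate (the camera parameters are arbitrary; the printed values are approximate):
const camera = new THREE.PerspectiveCamera(45, 1, 0.1, 100);
// Pitch the camera 90 degrees up: only the rotation about the horizontal x-axis changes.
camera.rotation.set(Math.PI / 2, 0, 0);
console.log(camera.rotation.x, camera.rotation.y, camera.rotation.z); // ~1.5708, 0, 0
// The same orientation as stored internally in the quaternion:
console.log(camera.quaternion); // ~(x: 0.7071, y: 0, z: 0, w: 0.7071)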
I can give you the link to
https://threejs.org/docs/#api/math/Euler
Three.js provides two ways of representing 3D rotations: Euler angles and Quaternions, as well as methods for converting between the two. Euler angles are subject to a problem called "gimbal lock," where certain configurations can lose a degree of freedom (preventing the object from being rotated about one axis). For this reason, object rotations are always stored in the object's quaternion.
Previous versions of the library included a useQuaternion property which, when set to false, would cause the object's matrix to be calculated from an Euler angle. This practice is deprecated; instead, you should use the setRotationFromEuler method, which will update the quaternion.
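For example (a minimal sketch; the object and angle are just illustrative):
const object = new THREE.Object3D();
// setRotationFromEuler updates both the Euler rotation and the underlying quaternion.
object.setRotationFromEuler(new THREE.Euler(Math.PI / 2, 0, 0, 'XYZ'));
console.log(object.quaternion); // ~(0.7071, 0, 0, 0.7071)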
https://threejs.org/docs/manual/introduction/Matrix-transformations.html

calculate canvas context.setTransform matrix based on a 4x4 matrix

I am trying to simulate 3D rotation of a bitmap using JavaScript and the HTML5 canvas (without WebGL). I think the way to go is to compute the terms for the 2D matrix of the canvas context.setTransform method, but I can't figure out how to obtain that matrix, ideally from a general 4x4 matrix representing the desired transformation, or from the desired final position of the bitmap in pixels (I can compute the desired final corner coordinates in pixels by projecting the corners with the 4x4 matrix and the projection-view matrix).
In this fiddle I have been playing with a couple of angles (representing the rotation of a 3D camera with two degrees of freedom) to calculate shear terms for the setTransform matrix, but I can't get a clear picture of the ideal procedure to follow: http://jsfiddle.net/cesarpachon/GQvp2/
context.save();
// alpha and beta drive the skew terms; scale and offset are left at defaults.
var scalex = 1;
var scaley = 1;
var offx = 0;
var offy = 0;
var skewx = Math.sin(alpha);
var skewy = Math.sin(beta);
context.setTransform(scalex, skewx, skewy, scaley, offx, offy);
context.drawImage(image, 0, 0, width, height);
context.restore();
A severe limitation when projecting 3D into 2D is that 2D canvas transforms only use an affine matrix.
This means any 2D transform will always result in a parallelogram.
3D perspective most often requires at least a trapezoidal result (not achievable with affine transforms).
Bottom line: you can't consistently get 3D perspective from 2D transforms.
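If a parallelogram (affine) approximation is acceptable, though, you can fit the setTransform terms from the projected positions of three corners of the bitmap. A minimal sketch; p0, p1 and p2 are assumed to be your projected pixel positions for the image's top-left, top-right and bottom-left corners:
// p0, p1, p2: projected screen positions {x, y} of the image's
// top-left, top-right and bottom-left corners; w, h: image size in pixels.
function setAffineFromCorners(context, p0, p1, p2, w, h) {
    // setTransform(a, b, c, d, e, f) maps (x, y) -> (a*x + c*y + e, b*x + d*y + f)
    var a = (p1.x - p0.x) / w;
    var b = (p1.y - p0.y) / w;
    var c = (p2.x - p0.x) / h;
    var d = (p2.y - p0.y) / h;
    context.setTransform(a, b, c, d, p0.x, p0.y);
}
// usage: setAffineFromCorners(context, p0, p1, p2, image.width, image.height);
//        context.drawImage(image, 0, 0);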
One workaround for true perspective is to use image slicing to fit an image into 3D coordinates:
http://www.subshell.com/en/subshell/blog/image-manipulation-html5-canvas102.html
Another more mathematically intense workaround is to map an image onto perspective triangles:
http://mathworld.wolfram.com/PerspectiveTriangles.html
Good luck with your project! :-)
