I'm working on a JavaScript/Canvas 3D FPS-like engine and desperately need a normal vector (or look-at vector, if you will) for near and far plane clipping. I have the x and y axis rotation angles and can do it easily with only one of them at a time, but I just can't figure out how to get both of them...
The idea is to use this vector to calculate a point in front of the camera; the near and far clipping planes must also be definable by constants, so the vector has to be normalized. I hoped that with only the angles it would be possible to get this vector to length 1 without explicitly normalizing, but that's not the main problem.
I don't have any roll (rotation around z axis) so it's that much easier.
My math looks like this:
zNear = 200; // near plane at an arbitrary 200 "points" away from camera position
// normal calculated with only y rotation angle (vertical axis)
normal = {
  x: Math.sin(rotation.y),
  y: 0,
  z: Math.cos(rotation.y)
};
Then clip a point in 3D space by testing the vector from the plane to it by means of a dot product.
nearPlane = {
  x: position.x + normal.x * zNear,
  y: position.y + normal.y * zNear,
  z: position.z + normal.z * zNear
};
// test a point at x, y, z against the near clipping plane
if (
  (nearPlane.x - x) * normal.x +
  (nearPlane.y - y) * normal.y +
  (nearPlane.z - z) * normal.z < 0
) {
  return;
}
// then project the 3D point to screen
When a point is behind the player, its projected coordinates are mirrored (x = -x, y = -y) and nothing makes sense any more, which is why I'm trying to clip those points out.
I want that green arrow there, but in 3D.
After some intensive brain processing I figured out that
My original look-at vector was (0, 0, 1)
The z-rotation angle (roll) was always 0
There was no reason for the rotation matrix found on Wikipedia not to work
By applying the full rotation matrix to the (0, 0, 1) vector and taking into account that rz = 0, the solution I got was:
normal = {
  x: Math.cos(camera.rotation.x) * Math.sin(camera.rotation.y),
  y: -Math.sin(camera.rotation.x),
  z: Math.cos(camera.rotation.y) * Math.cos(camera.rotation.x)
};
And now everything works perfectly. The error was using only the x and y rotation matrices separately, without taking into account that rz = 0 for all angles, which changes the combined matrix a little.
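For reference, here is the same result written out with the standard rotation matrices from the Wikipedia article (R_z drops out because rz = 0); this is just the derivation above spelled out under those conventions:

$$
R_y(r_y)\,R_x(r_x)\begin{pmatrix}0\\0\\1\end{pmatrix}
= R_y(r_y)\begin{pmatrix}0\\-\sin r_x\\\cos r_x\end{pmatrix}
= \begin{pmatrix}\cos r_x\,\sin r_y\\-\sin r_x\\\cos r_x\,\cos r_y\end{pmatrix}
$$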
I am recreating conic sections in P5.js and need to find the equation of a square plane.
I know the size of the square plane, and the rotation in each axis in degrees from its center point (as dictated by the P5 sliders).
I want to calculate the coordinates (x, y, z) of the four vertices of this square plane, given known values for rotation.
This is my P5 sketch.
https://editor.p5js.org/inglog/sketches/HsMUb8UPA
I want to use these coordinates to create an equation for the plane, in the form ax+by+cz+d=0
Once I have the vertices of the square, I will use this calculator to get the equation of the plane: https://keisan.casio.com/exec/system/1223596129
Any advice on how to calculate the coordinates of the vertices of the plane, given a known rotation about its center point?
Is this related to conversion between Cylindrical and Cartesian Coordinates? I also wonder if this answer is connected to the solution (Rotating vertices about point)
Thank you in advance for reading through.
Since there is a unique plane that goes through a given set of 3 (noncollinear) points, you don't need the vertices of the square in order to find an equation for the plane. You just need 3 random points on the plane.
Answer:
A = (0, slider.value(), 0)
B = (1, slider.value(), 0)
C = (0, slider.value() - sin(slider2.value()), cos(slider2.value()))
From here, you can get the plane equation as they describe on the site you gave:
AB = (1, 0, 0)
AC = (0, -sin, cos)
AB x AC = (0, -cos, -sin)
Equation:
0x - cos(slider2.value())*y - sin(slider2.value())*z + cos(slider2.value())*slider.value() = 0
simplifies to
cos(slider2.value())*(y - slider.value()) + sin(slider2.value())*z = 0
or
z = (slider.value() - y)/tan(slider2.value())
I'm about 99.9% sure this is the right equation. I used it to overlay points on the plane in your program, and it looked right.
How I got the three points:
We know that there will be one point at the center, which is on the y-axis. At this point, x=z=0 and y=slider.value(). So that is point A: (0, slider.value(), 0).
We also know that the plane intersects the xy-plane in the line defined by y=slider.value(). So point B can be any point on this line, let's arbitrarily pick (1, slider.value(), 0).
The third point is the hardest, since we can't have z=0, and we have to consider the angle. Starting at the center, let's walk one unit along the plane, keeping x=0 and moving in the positive z direction.
It's hard to convey that point over text, but this is a classic unit circle problem: x=0, and y, z are on a unit circle centered at point A: C = (0, slider.value() - sin(slider2.value()), cos(slider2.value())).
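To tie it together, here is a rough JavaScript sketch of the same computation (yOffset and angle are stand-ins for slider.value() and slider2.value(), with the angle in radians):
// Rough sketch: plane equation ax + by + cz + d = 0 from the three points above.
function planeEquation(yOffset, angle) {
  const A = [0, yOffset, 0];
  const B = [1, yOffset, 0];
  const C = [0, yOffset - Math.sin(angle), Math.cos(angle)];

  // edge vectors AB and AC
  const AB = [B[0] - A[0], B[1] - A[1], B[2] - A[2]];
  const AC = [C[0] - A[0], C[1] - A[1], C[2] - A[2]];

  // normal = AB x AC gives the a, b, c coefficients
  const a = AB[1] * AC[2] - AB[2] * AC[1];
  const b = AB[2] * AC[0] - AB[0] * AC[2];
  const c = AB[0] * AC[1] - AB[1] * AC[0];

  // d comes from plugging point A into ax + by + cz + d = 0
  const d = -(a * A[0] + b * A[1] + c * A[2]);

  return { a: a, b: b, c: c, d: d };
}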
I am trying to adapt this code (which itself is an implementation of this). I have gotten the general visualization and rendering to work, but I'm now trying to animate some realistic movement.
The position of the light source is determined by a normalized vector (for example THREE.Vector3(1, 0.75, 0); the light will appear to be coming from the top right). Now what I would like to do is have the vector rotate around the sphere in such a way that it seems like the sphere is orbiting the light source. But I can't figure out how to do this. I've tried updating/changing the position of the vector, but I'm not sure how to calculate the proper next x, y, z values. I've tried applying Euler angles and a rotation matrix, like so:
euler = new THREE.Euler(f,g,h, 'XYZ');
matrix = new THREE.Matrix4().makeRotationFromEuler(euler);
light = vector.applyMatrix4(matrix);
But here, I'm not sure how to get the correct values of f,g,h such that the light doesn't just wobble around the sphere.
Am I even on the right track?
Working example: http://jsfiddle.net/VsWb9/3890/
You are linearly increasing two of the three Euler angles; that is the issue.
For rotations other than around the x/y/z axes, quaternions are the best choice for understanding, avoiding issues, coding, and computational cost.
They are rather intuitive, in three.js too. They are made of 4 components, which together encode a rotation axis and a rotation angle.
var quat=new THREE.Quaternion();
//we set the axis around which the rotation will occur. It needs to be normalized
var axis=new THREE.Vector3(0,1,0).normalize();
//and the angle value (radians)
var angle=0;
//this is your light vector (=original position of your light)
var light=new THREE.Vector3(1,0,0).normalize();
Then in your render loop you change the angle value and tell the quaternion to use the axis above and the updated angle:
angle += .001; // (or angle -= if your axis points in the +y direction, like here)
quat.setFromAxisAngle(axis,angle);
And finally apply it:
light.applyQuaternion(quat);
Updated fiddle : http://jsfiddle.net/Atrahasis/0v93p2xy/
Note about normalized vectors:
a normalized vector has a length of 1 unit.
(1, 0, 0), (0, 1, 0), (0, 0, 1) are natively normalized vectors (unit vectors along the axes).
(1, .75, 0) is not a normalized vector; its length is √(1² + .75²) = 1.25 (Pythagoras).
For example, (0.6614..., .75, 0) is a normalized vector.
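As a quick sanity check in three.js (a small illustrative snippet, not part of the original answer):
// (1, 0.75, 0) has length 1.25, so normalize() divides each component by 1.25
var v = new THREE.Vector3(1, 0.75, 0);
v.normalize(); // v is now (0.8, 0.6, 0), and 0.8² + 0.6² = 1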
I am getting the intersections of a mouse click with Three.js like this:
me.vector.set(
(event.clientX / window.innerWidth) * 2 - 1,
-(event.clientY / window.innerHeight) * 2 + 1,
0.5);
me.vector.unproject(app.camera);
me.ray.set(app.camera.position, me.vector.sub(app.camera.position).normalize());
var intersects = me.ray.intersectObjects(app.colliders, false);
So I get the intersects perfectly, with the following properties:
distance, face, faceIndex, object, point, and then I execute a function.
The problem is the following:
I want to detect when I click a face of a cube that acts like a floor; in the example it would be the gray face.
Sorry about my English D:
WebGL defines vertices and faces with coordinates, colors, and normals. A face normal is a normalized vector, perpendicular to the face plane (and generally pointing 'outside' the mesh). It defines the face's orientation and enables, for instance, lighting calculations. In three.js you can access it via face.normal.
If your floor-like faces are strictly horizontal, then their normals are all precisely {x: 0, y: 1, z: 0}. And since normals are normalized, simply checking whether face.normal.y === 1 also guarantees that x and z equal 0.
If your faces are not strictly horizontal, you may need to set a limit angle with the y-axis. You can calculate this angle with var angle=Math.acos(Yaxis.dot(faceNormal)) where Yaxis=new THREE.Vector3(0,1,0).
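For example, a rough sketch of the check inside the click handler (the 25° threshold is an arbitrary choice for "nearly horizontal", and face.normal is assumed to be in the same space as the y axis, i.e. the cube itself is not rotated):
var upAxis = new THREE.Vector3(0, 1, 0);
if (intersects.length > 0) {
  var faceNormal = intersects[0].face.normal; // normal of the clicked face
  // strictly horizontal face:
  if (faceNormal.y === 1) {
    // clicked the floor-like (gray) face
  }
  // or, tolerate faces tilted up to ~25 degrees from horizontal:
  var angle = Math.acos(upAxis.dot(faceNormal));
  if (angle < 25 * Math.PI / 180) {
    // close enough to horizontal
  }
}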
I want to extrude a shape and create an ExtrudeGeometry, but the shape has to be extruded into a certain direction. I have the direction in a Vector3.
The shape is drawn in the x, y plane, and normally z is the extrude direction (extrusion depth). So a direction vector of (0, 0, 1) would result in the default extrusion, but for example (0, 0, -1) would extrude the shape in the other direction.
I first tried to use an extrude path to achieve this, but when using a path the shape is allowed to "spin" freely and the initial orientation is arbitrary. This is not what I need; the shape must stay oriented as is. You can read details on this in my previous question.
I already came up with the idea of applying a matrix to the second half of the vertices of the resulting ExtrudeGeometry, but I cannot seem to get the geometry I want. Maybe it is my clumsy use of matrices, but I think the face normals are pointing inside out after this trick.
Note: the direction vector will never be orthogonal to the z axis, since this would give invalid shapes.
So the question:
How do I get a reliable solution to extrude my shape into the given direction? Here is an example: the shape is a square in the x, y plane (width and length 2000), the extrusion depth is also 2000, and there are three different vectors with a drawing of the expected result seen in 2D (front view) and 3D.
Extrude your geometry in the usual way by specifying an extrusion depth, and then apply a shear matrix to your geometry.
Here is how to specify a shear matrix that will tilt a geometry.
var matrix = new THREE.Matrix4();
var dir = new THREE.Vector3( 0.25, 1, 0.25 ); // you set this. a unit-length vector is not required.
var Syx = dir.x / dir.y,
Syz = dir.z / dir.y;
matrix.set( 1, Syx, 0, 0,
0, 1, 0, 0,
0, Syz, 1, 0,
0, 0, 0, 1 );
geometry.applyMatrix4( matrix );
(The three.js coordinate system has the y-axis up -- unlike in your illustration. You will have to accommodate.)
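As a usage sketch (not the answer's exact code): since ExtrudeGeometry extrudes along the shape's local z axis, the same shear idea with z as the shear axis looks roughly like this, using the 2000 × 2000 square and depth from the question:
// 2000 x 2000 square in the x,y plane, centered on the origin
var shape = new THREE.Shape();
shape.moveTo(-1000, -1000);
shape.lineTo( 1000, -1000);
shape.lineTo( 1000,  1000);
shape.lineTo(-1000,  1000);
shape.closePath();

// extrude straight along +z first
var geometry = new THREE.ExtrudeGeometry(shape, { depth: 2000, bevelEnabled: false });

// then shear toward the desired direction (here z is the extrusion axis,
// so the direction must not be orthogonal to z)
var dir = new THREE.Vector3(0.25, 0.25, 1);
var shear = new THREE.Matrix4();
shear.set(1, 0, dir.x / dir.z, 0,
          0, 1, dir.y / dir.z, 0,
          0, 0, 1,             0,
          0, 0, 0,             1);
geometry.applyMatrix4(shear);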
three.js r.113
I am trying to draw a simple rectangle in WebGL (I use WebGL like a 2D API). The idea is to send attributes (the points) and transform them in the vertex shader to fit on the screen. But when I render with the vertex shader gl_Position = vec4(a_point, 0.0, 1.0); I don't see anything. I looked at WebGL Fundamentals for 2D WebGL and it does not seem to work on my computer. There are rectangles, but I think they are not at the right coordinates!
Can you explain how to draw a rectangle in a custom coordinate system:
-width/2 < x < width/2
-height/2 < y < height/2
and then transform them in the vertex shader so they end up at the same position in each browser (Chrome, Firefox, Internet Explorer 11)? It seems like it should be simple, but I have not reached my goal. I tried applying a transformation to the vertices in the vertex shader too. Maybe I can use the viewport?
In WebGL, all coordinates are in the range from -1.0 to +1.0 (as floats). They automatically map to whatever canvas width and height you have. By default, you don't use absolute pixel numbers in WebGL.
If you set a vertex (point) to x = 0.0 and y = 0.0, it will be in the center of your screen. If one of the coordinates goes below -1 or above +1, it will be outside of your rendered canvas and some pixels of your triangle won't even be passed to the fragment shader (a fragment is roughly a pixel of your framebuffer).
This guarantees that all of your objects keep the same relative position and size, no matter how many pixels your canvas has.
If you want to have an object of a specific pixel size, you can pre-calculate it like this:
var objectWidth = OBJECT_WIDTH_IN_PIXEL / CANVAS_WIDTH * 2.0; // clip space spans 2 units horizontally
var objectHeight = OBJECT_HEIGHT_IN_PIXEL / CANVAS_HEIGHT * 2.0; // and 2 units vertically
In some cases, as you can see below, it's useful to know the half width and half height in the floating-point -1.0 to +1.0 universe. To position this object's center in the center of your canvas, you set up your vertex data like this:
var vVertices = new Float32Array([
  -(objectWidth / 2.0), -(objectHeight / 2.0), 0.0, // Point 1, Triangle 1
  +(objectWidth / 2.0), -(objectHeight / 2.0), 0.0, // Point 2, Triangle 1
  -(objectWidth / 2.0), +(objectHeight / 2.0), 0.0, // Point 3, Triangle 1
  -(objectWidth / 2.0), +(objectHeight / 2.0), 0.0, // Point 4, Triangle 2
  +(objectWidth / 2.0), -(objectHeight / 2.0), 0.0, // Point 5, Triangle 2
  +(objectWidth / 2.0), +(objectHeight / 2.0), 0.0  // Point 6, Triangle 2
]);
The above vertex data sets up two triangles to create a rectangle in the center of your screen. Many of these things can be found in tutorials like WebGL Fundamentals by Greggman.
Please have a look at this post:
http://games.greggman.com/game/webgl-fundamentals/
It shows how to do 2d drawing with WebGL.
I guess you can easily adapt it to suit your need for custom 2d space coordinates.
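For instance, one way to get the -width/2 < x < width/2 coordinate system from the question is to pass the canvas size as a uniform and scale in the vertex shader. A minimal sketch (u_resolution and a_point are illustrative names):
// Vertex shader source as a JS string; a_point is in pixels, centered on the canvas,
// so dividing by half the resolution maps -width/2..width/2 to clip space -1..1.
var vertexShaderSource = [
  'attribute vec2 a_point;',
  'uniform vec2 u_resolution;',
  'void main() {',
  '  vec2 clipSpace = a_point / (u_resolution * 0.5);',
  '  gl_Position = vec4(clipSpace, 0.0, 1.0);',
  '}'
].join('\n');

// on the JavaScript side, after linking and using the program:
// gl.uniform2f(gl.getUniformLocation(program, 'u_resolution'), canvas.width, canvas.height);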