THREE.js Jittering/Shaking Vertices

I'm building a tiled slippy map globe in THREE.js much like Cesium. The issue I've run into is that when, at high zoom levels, I rotate the Perspective/OrbitControls camera, the tiles jitter/shake. It's probably a precision issue but I don't know how to work around this one.
How I'm projecting tiles:
Create a THREE.PlaneBufferGeometry centered at the origin of the parent planet (an Object3D at (0, -planetRadius, 0)).
Project each of the plane's position attribute vertices (one by one) to the point matching its correct latitude, longitude and elevation (lonLatToVector3()).
I then have a camera that can zoom in on a plane/tile, and the distance from the camera to the tile determines which zoom-level tiles to render.
Things that don't solve the issue:
A near clipping plane of 1.
A close far clipping range (such as near: 1, far: 2).
A logarithmic or non-logarithmic depth buffer.
Keeping the camera lookAt at (0, 0, 0).
Panning by rotating the planet instead of moving the camera.
Scaling the entire world up or down.
This question most closely relates to this Stack Overflow question, and I've modified that fiddle to better represent the core of my issue:
https://jsfiddle.net/6qfaf5h0/
for (var p = 0; p < mesh.geometry.attributes.position.array.length; p += 3) {
    var xyzPos = lonLatToVector3(
        0 + (((p / 3) % tileRes) / (tileRes - 1)) * anglePlane,
        0 + (parseInt((p / 3) / tileRes) / (tileRes - 1)) * anglePlane,
        0
    );
    mesh.geometry.attributes.position.array[p] = xyzPos.x;
    mesh.geometry.attributes.position.array[p + 1] = xyzPos.y;
    mesh.geometry.attributes.position.array[p + 2] = xyzPos.z;
    //mesh.geometry.attributes.position.array[p + 2] = 0; // <- uncommenting fixes the shake but I need this axis
}
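For reference, a typical lon/lat-to-Cartesian helper along the lines of the lonLatToVector3() used above might look like the following sketch (the fiddle's actual implementation may differ; planetRadius is assumed to be in scope and the angles in degrees):
// Hedged sketch of a typical spherical projection helper; the fiddle's
// actual lonLatToVector3() may differ.
function lonLatToVector3(lon, lat, height) {
    var radius = planetRadius + height;    // planetRadius assumed in scope
    var phi = (90 - lat) * Math.PI / 180;  // polar angle
    var theta = lon * Math.PI / 180;       // azimuthal angle
    return new THREE.Vector3(
        radius * Math.sin(phi) * Math.cos(theta),
        radius * Math.cos(phi),
        radius * Math.sin(phi) * Math.sin(theta)
    );
}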
I want a mesh with an anglePlane < 0.01 that (when zoomed in to a point where only a few vertices are visible) doesn't shake.
Other possibly helpful notes:
Sprites don't shake despite using the same projection algorithm.
In some cases even a zoomed in flat (elevation = 0) plane can look all wavy.
Any help would be greatly appreciated.

Related

Clamping rotated camera in Three.js

In Three.js, I have a 3D scene that contains a floor and an orthographic camera.
I have it set up so that the user can move the camera around the scene with their mouse. I want to limit the camera's movement to the dimensions of the floor.
I got it working if the camera is rotated to -90 deg on the x-axis, i.e. if the camera is looking straight down at the floor from above.
But changing the camera to any other angle causes issues with the clamp limits. For example, if I change the camera angle to -40 instead, I can pan further up and down than I should be able to, and hence see parts of the scene that I should not be able to.
How can I integrate the camera's rotation into my below solution?
// Create camera
this.camera = new THREE.OrthographicCamera(...);
this.camera.rotation.x = THREE.MathUtils.degToRad(-90);

// The size of the floor
const modelBoundingBoxWidth = 14;
const modelBoundingBoxHeight = 14;

// The limits for the camera (in world units)
const cameraLimits = {
    left: -modelBoundingBoxWidth / 2,
    right: modelBoundingBoxWidth / 2,
    top: -modelBoundingBoxHeight / 2,
    bottom: modelBoundingBoxHeight / 2,
};

// Calculate the camera's new position due to the mouse pan
// (...)

// Apply the camera limits to the new camera position
if ((cameraPosNew.x - cameraHalfWidth) < cameraLimits.left) {
    cameraPosNew.x = cameraLimits.left + cameraHalfWidth;
} else if ((cameraPosNew.x + cameraHalfWidth) > cameraLimits.right) {
    cameraPosNew.x = cameraLimits.right - cameraHalfWidth;
}
if ((cameraPosNew.z - cameraHalfHeight) < cameraLimits.top) {
    cameraPosNew.z = cameraLimits.top + cameraHalfHeight;
} else if ((cameraPosNew.z + cameraHalfHeight) > cameraLimits.bottom) {
    cameraPosNew.z = cameraLimits.bottom - cameraHalfHeight;
}

// Move the camera to the new position
this.camera.position.set(cameraPosNew.x, cameraPosNew.y, cameraPosNew.z);
I believe I need to project the floor's vertical length onto the camera's vertical length using the camera's rotation angle, so that I can determine how much I need to reduce the vertical clamp limits by (due to the rotation). But I don't know where to start regarding the math. I tried various dot product / vector projection approaches but didn't get anywhere.
I also noticed that at an angle of -40, the space above and below the floor is not equal, meaning either the top and bottom clamp limits need to be different, or perhaps I need to move the camera back by some value (due to the rotation)?
Also note that due to the rotation to -40, I can see more of the scene than I could at -90.
Update: I think this question is a little unclear, due to me bringing panning into it when I think I need to first improve my understanding of how to calculate what the camera sees when rotated. I have created a separate question for specifically that: How does rotation influence an orthographic camera in Three.js
If you're trying to simply clamp the position vector, you could use the THREE.Vector3.clamp() method.
const boxMin = new THREE.Vector3(-7, 0, -7);
const boxMax = new THREE.Vector3(7, 100, 7);
// Calculate the camera's new position due to the mouse pan
// (...)
// Perform clamping so it doesn't go outside box
cameraPosNew.clamp(boxMin, boxMax);
this.camera.position.copy(cameraPosNew);
You're not showing how the rotations are affecting the position within this box, but if you still have to take rotations into consideration to calculate the min/max box bounds, you might need to do some manual calculations with some basic trigonometric functions.
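As a hedged sketch of those manual calculations, assuming the camera is only tilted about the x-axis and the floor lies in the XZ plane (tiltRad and floorHalfDepth below are illustrative names, not the asker's code): project the camera's vertical half-extent onto the floor before clamping. This doesn't address the top/bottom asymmetry the asker mentions, but it captures the basic trigonometry.
// Sketch: project the camera's vertical half-extent onto the floor.
// At -90 deg (looking straight down) this reduces to cameraHalfHeight.
const tiltRad = Math.abs(this.camera.rotation.x);            // e.g. -40 deg -> ~0.698 rad
const floorHalfDepth = cameraHalfHeight / Math.sin(tiltRad); // view footprint on the floor along z
if ((cameraPosNew.z - floorHalfDepth) < cameraLimits.top) {
    cameraPosNew.z = cameraLimits.top + floorHalfDepth;
} else if ((cameraPosNew.z + floorHalfDepth) > cameraLimits.bottom) {
    cameraPosNew.z = cameraLimits.bottom - floorHalfDepth;
}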

Three.js - Distance from camera to the origin plane

In the process of developing my own zoom system besides the ones provided by the Three.js controls, I need to find the current distance from the camera to the plane that contains the (0, 0, 0) origin point and is perpendicular to the camera direction (let's call that value opdistance), so I can use it to translate the camera along its direction according to a zoomfactor value and a -1 or 1 bias value given by the mouse wheel, like so:
camera.translateZ(opdistance * zoomfactor ** bias - opdistance);
In 2D-triangle terms, that distance is the camera's .distanceTo() the (0, 0, 0) origin point multiplied by the cosine of the angle between the camera direction and the camera-to-origin direction vectors, and it can be calculated like:
var cameradirection = new THREE.Vector3();
camera.getWorldDirection(cameradirection);
var distancedirection = new THREE.Vector3();
camera.getWorldPosition(distancedirection).negate();
var positionangle = cameradirection.angleTo(distancedirection);
var opdistance = camera.position.distanceTo(new THREE.Vector3(0, 0, 0)) * Math.cos(positionangle);
camera.translateZ(opdistance * zoomfactor ** bias - opdistance);
but I'm interested in a simpler way using Three.js' standard methods, which I'm sure exists.
The reason for asking is that I initially used the camera.position.z value as the opdistance, but that only works if the camera is instantiated with the (0, 0, 0) point "in front" of it along the Z axis, and it's otherwise changing to other values, depending on the camera position and direction (for example, the opdistance becomes the camera.position.y if the camera is placed "above" the (0, 0, 0) point and looking at it, and so on).
P.S. I could also achieve this via camera.position.setLength(finaldistance) but that produces some unwanted side effects as the camera jumps to other positions on resuming actions via other controls (probably some missing matrix update issue), so alternative ideas are welcomed.
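Incidentally, the cosine formulation above collapses into a single dot product: |position| * cos(angle) is exactly the (negated) projection of the camera position onto the view direction, since the plane passes through the origin. A hedged condensed version of the same snippet:
// Sketch: same math as above, collapsed into one dot product. The plane
// passes through the origin, so the camera-to-plane distance along the view
// direction is -position . direction.
var dir = new THREE.Vector3();
camera.getWorldDirection(dir);
var opdistance = -camera.position.dot(dir); // equals distanceTo(origin) * cos(angle)
camera.translateZ(opdistance * zoomfactor ** bias - opdistance);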

3D model in HTML/CSS; Calculate Euler rotation of triangle

TL;DR: Given a set of triangle vertices and a normal vector (all in unit space), how do I calculate the X, Y, Z Euler rotation angles of the triangle in world space?
I am attempting to display a 3D model in HTML - with actual HTML tags and CSS transforms. I've already loaded an OBJ file into a Javascript class instance.
The model is triangulated. My first aim is just to display the triangles as planes (HTML elements are rectangular) - I'll be 'cutting out' the triangle shapes with CSS clip-path later on.
I am really struggling to understand and get the triangles of the model rotated correctly.
I thought a rotation matrix could help me out, but my only experience with those is where I already have the rotation vector and I need to convert and send that to WebGL. This time there is no WebGL (or tutorials) to make things easier.
The following excerpt shows the face creation/'rendering' of faces. I'm using the face normal as the rotation but I know this is wrong.
for (const face of _obj.faces) {
    const vertices = face.vertices.map(_index => _obj.vertices[_index]);
    const center = [
        (vertices[0][0] + vertices[1][0] + vertices[2][0]) / 3,
        (vertices[0][1] + vertices[1][1] + vertices[2][1]) / 3,
        (vertices[0][2] + vertices[1][2] + vertices[2][2]) / 3
    ];

    // Each vertex has a normal but I am just picking the first vertex' normal
    // to use as the 'face normal'.
    const normals = face.normals.map(_index => _obj.normals[_index]);
    const normal = normals[0];

    // HTML element creation code goes here; reference is 'element'.

    // Set face position (unit space)
    element.style.setProperty('--posX', center[0]);
    element.style.setProperty('--posY', center[1]);
    element.style.setProperty('--posZ', center[2]);

    // Set face rotation, converting to degrees also.
    const rotation = [
        normal[0] * toDeg,
        normal[1] * toDeg,
        normal[2] * toDeg,
    ];
    element.style.setProperty('--rotX', rotation[0]);
    element.style.setProperty('--rotY', rotation[1]);
    element.style.setProperty('--rotZ', rotation[2]);
}
The CSS first translates the face on X,Y,Z, then rotates it on X,Y,Z in that order.
I think I need to 'decompose' my triangles' rotation into separate axis rotations - i.e. rotate on X, then on Y, then on Z to get the correct rotation as per the model face.
I realise that the normal vector gives me an orientation but not a rotation around itself - I need to calculate that. I think I have to determine a vector along one triangle side and cross it with the normal, but this is something I am not clear on.
I have spent hours looking at similar questions on SO but I'm not smart enough to understand or make them work for me.
Is it possible to describe what steps to take without Latex equations? I'm good with pseudo code but my Math skills are severely lacking.
The full code is here: https://whoshotdk.co.uk/cssfps/ (view HTML source)
The mesh building function is at line 422.
The OBJ file is here: https://whoshotdk.co.uk/cssfps/data/model/test.obj
The Blender file is here: https://whoshotdk.co.uk/cssfps/data/model/test.blend
The mesh is just a single plane at an angle, displayed in my example (wrongly) in pink.
The world is set up so that -X is left, -Y is up, -Z is into the screen.
Thank You!
If you have a plane and want to rotate it to be in the same direction as some normal, you need to figure out the angles between that plane's normal vector and the normal vector you want. The Euler angles between two 3D vectors can be complicated, but in this case the initial plane normal should always be the same, so I'll assume the plane normal starts pointing towards positive X to make the maths simpler.
You also probably want to rotate before you translate, so that everything is easier since you'll be rotating around the origin of the coordinate system.
By taking the general 3D rotation matrix (all three 3D rotation matrices multiplied together, you can find it on the Wikipedia page) and applying it to the vector (1,0,0) you can then get the equations for the three angles a, b, and c needed to rotate that initial vector to the vector (x,y,z). This results in:
x = cos(a)*cos(b)
y = sin(a)*cos(b)
z = -sin(b)
Then rearranging these equations to find a, b and c, which will be the three angles you need (the three values of the rotation array, respectively):
a = atan(y/x)
b = asin(-z)
c = 0
So in your code this would look like:
const rotation = [
Math.atan2(normal[1], normal[0]) * toDeg,
Math.asin(-normal[2]) * toDeg,
0
];
It may be that you need to use a different rotation matrix (if the order of the rotations is not what you expected) or a different starting vector (although you can just use this method and then do an extra 90 degree rotation if each plane actually starts in the positive Y direction, for example).
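On that last point about rotation order and the spin around the normal: if three.js happens to be available just for the math, a hedged alternative is to build an orthonormal basis from the normal plus one triangle edge and read the Euler angles from it. Here v0 and v1 are assumed to be two of the triangle's vertices as THREE.Vector3s, and the 'XYZ' order is an assumption that must match the order the CSS applies its rotations:
// Hedged sketch - assumes three.js is used purely for the math.
// v0, v1: two triangle vertices as THREE.Vector3; normal: the face normal array.
const xAxis = new THREE.Vector3().subVectors(v1, v0).normalize();         // in-plane x
const zAxis = new THREE.Vector3().fromArray(normal).normalize();          // the normal
const yAxis = new THREE.Vector3().crossVectors(zAxis, xAxis).normalize(); // in-plane y
const basis = new THREE.Matrix4().makeBasis(xAxis, yAxis, zAxis);
const euler = new THREE.Euler().setFromRotationMatrix(basis, 'XYZ');
// euler.x/y/z are radians; multiply by toDeg before writing the CSS properties.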

Get face rotation Three.js

I am getting the intersections of a mouse click with Three.js like this:
me.vector.set(
    (event.clientX / window.innerWidth) * 2 - 1,
    -(event.clientY / window.innerHeight) * 2 + 1,
    0.5
);
me.vector.unproject(app.camera);
me.ray.set(app.camera.position, me.vector.sub(app.camera.position).normalize());
var intersects = me.ray.intersectObjects(app.colliders, false);
So, I get the intersects perfectly, with the following properties:
distance, face, faceIndex, object, point - and then I execute a function.
The problem is the following:
I want to detect when I click a face of a cube that is like a floor; in the example it would be the gray face.
Sorry about my English D:
WebGL defines vertices and faces with coordinates, colors, and normals. A face normal is a normalized vector, perpendicular to the face plane (and generally pointing 'outside' the mesh). It defines the face's orientation and enables the calculation of lighting, for instance. In three.js you can access it via face.normal.
If your floor-like faces are strictly horizontal, then their normals are all precisely {x:0, y:1, z:0}. And since normals are normalized, simply checking whether face.normal.y === 1 also checks that x and z equal 0.
If your faces are not strictly horizontal, you may need to set a limit angle with the y-axis. You can calculate this angle with var angle=Math.acos(Yaxis.dot(faceNormal)) where Yaxis=new THREE.Vector3(0,1,0).
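Putting that together, a hedged sketch (maxAngle is an arbitrary tolerance to tune; depending on the three.js version, the face normal may also need transforming into world space first):
// Sketch: treat a clicked face as floor-like if its normal is within
// maxAngle of straight up. Yaxis and maxAngle are illustrative values.
var Yaxis = new THREE.Vector3(0, 1, 0);
var maxAngle = 15 * Math.PI / 180; // tolerance in radians, arbitrary
var face = intersects[0].face;
var angle = Math.acos(Yaxis.dot(face.normal)); // both vectors are unit length
if (angle < maxAngle) {
    // the clicked face counts as floor
}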

Three.js Projector and Ray objects

I have been trying to work with the Projector and Ray classes in order to do some collision detection demos. I have started just trying to use the mouse to select objects or to drag them. I have looked at examples that use the objects, but none of them seem to have comments explaining what exactly some of the methods of Projector and Ray are doing. I have a couple questions that I am hoping will be easy for someone to answer.
What exactly is happening and what is the difference between Projector.projectVector() and Projector.unprojectVector()? I notice that it seems in all the examples using both projector and ray objects the unproject method is called before the ray is created. When would you use projectVector?
I am using the following code in this demo to spin the cube when dragged on with the mouse. Can someone explain in simple terms what exactly is happening when I unproject with the mouse3D and camera and then create the Ray? Does the ray depend on the call to unprojectVector()?
/** Event fired when the mouse button is pressed down */
function onDocumentMouseDown(event) {
    event.preventDefault();
    mouseDown = true;
    mouse3D.x = mouse2D.x = mouseDown2D.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse3D.y = mouse2D.y = mouseDown2D.y = -(event.clientY / window.innerHeight) * 2 + 1;
    mouse3D.z = 0.5;

    /** Project from camera through the mouse and create a ray */
    projector.unprojectVector(mouse3D, camera);
    var ray = new THREE.Ray(camera.position, mouse3D.subSelf(camera.position).normalize());
    var intersects = ray.intersectObject(crateMesh); // store intersecting objects

    if (intersects.length > 0) {
        SELECTED = intersects[0].object;
        var intersects = ray.intersectObject(plane);
    }
}

/** This event handler is only fired after the mouse down event and
    before the mouse up event and only when the mouse moves */
function onDocumentMouseMove(event) {
    event.preventDefault();
    mouse3D.x = mouse2D.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse3D.y = mouse2D.y = -(event.clientY / window.innerHeight) * 2 + 1;
    mouse3D.z = 0.5;
    projector.unprojectVector(mouse3D, camera);
    var ray = new THREE.Ray(camera.position, mouse3D.subSelf(camera.position).normalize());

    if (SELECTED) {
        var intersects = ray.intersectObject(plane);
        dragVector.sub(mouse2D, mouseDown2D);
        return;
    }

    var intersects = ray.intersectObject(crateMesh);
    if (intersects.length > 0) {
        if (INTERSECTED != intersects[0].object) {
            INTERSECTED = intersects[0].object;
        }
    } else {
        INTERSECTED = null;
    }
}

/** Removes event listeners when the mouse button is let go */
function onDocumentMouseUp(event) {
    event.preventDefault();
    /** Update mouse position */
    mouse3D.x = mouse2D.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse3D.y = mouse2D.y = -(event.clientY / window.innerHeight) * 2 + 1;
    mouse3D.z = 0.5;

    if (INTERSECTED) {
        SELECTED = null;
    }
    mouseDown = false;
    dragVector.set(0, 0);
}

/** Removes event listeners if the mouse runs off the renderer */
function onDocumentMouseOut(event) {
    event.preventDefault();
    if (INTERSECTED) {
        plane.position.copy(INTERSECTED.position);
        SELECTED = null;
    }
    mouseDown = false;
    dragVector.set(0, 0);
}
I found that I needed to go a bit deeper under the surface to work outside of the scope of the sample code (such as having a canvas that does not fill the screen or having additional effects). I wrote a blog post about it here. This is a shortened version, but should cover pretty much everything I found.
How to do it
The following code (similar to that already provided by @mrdoob) will change the color of a cube when clicked:
var mouse3D = new THREE.Vector3(
    (event.clientX / window.innerWidth) * 2 - 1,   // x
    -(event.clientY / window.innerHeight) * 2 + 1, // y
    0.5                                            // z
);
projector.unprojectVector(mouse3D, camera);
mouse3D.sub(camera.position);
mouse3D.normalize();
var raycaster = new THREE.Raycaster(camera.position, mouse3D);
var intersects = raycaster.intersectObjects(objects);
// Change color if hit block
if (intersects.length > 0) {
    intersects[0].object.material.color.setHex(Math.random() * 0xffffff);
}
With the more recent three.js releases (around r55 and later), you can use pickingRay which simplifies things even further so that this becomes:
var mouse3D = new THREE.Vector3(
    (event.clientX / window.innerWidth) * 2 - 1,   // x
    -(event.clientY / window.innerHeight) * 2 + 1, // y
    0.5                                            // z
);
var raycaster = projector.pickingRay(mouse3D.clone(), camera);
var intersects = raycaster.intersectObjects(objects);
// Change color if hit block
if (intersects.length > 0) {
    intersects[0].object.material.color.setHex(Math.random() * 0xffffff);
}
Let's stick with the old approach as it gives more insight into what is happening under the hood. You can see this working here; simply click on the cube to change its colour.
What's happening?
var mouse3D = new THREE.Vector3(
    (event.clientX / window.innerWidth) * 2 - 1,   // x
    -(event.clientY / window.innerHeight) * 2 + 1, // y
    0.5                                            // z
);
event.clientX is the x coordinate of the click position. Dividing by window.innerWidth gives the position of the click in proportion of the full window width. Basically, this is translating from screen coordinates that start at (0,0) at the top left through to (window.innerWidth,window.innerHeight) at the bottom right, to the cartesian coordinates with center (0,0) and ranging from (-1,-1) to (1,1) as shown below:
Note that z has a value of 0.5. I won't go into too much detail about the z value at this point except to say that this is the depth of the point away from the camera that we are projecting into 3D space along the z axis. More on this later.
Next:
projector.unprojectVector( mouse3D, camera );
If you look at the three.js code you will see that this is really an inversion of the projection matrix from the 3D world to the camera. Bear in mind that in order to get from 3D world coordinates to a projection on the screen, the 3D world needs to be projected onto the 2D surface of the camera (which is what you see on your screen). We are basically doing the inverse.
Note that mouse3D will now contain this unprojected value. This is the position of a point in 3D space along the ray/trajectory that we are interested in. The exact point depends on the z value (we will see this later).
At this point, it may be useful to have a look at the following image:
The point that we have just calculated (mouse3D) is shown by the green dot. Note that the size of the dots is purely illustrative; it has no bearing on the size of the camera or the mouse3D point. We are more interested in the coordinates at the center of the dots.
Now, we don't just want a single point in 3D space, but instead we want a ray/trajectory (shown by the black dots) so that we can determine whether an object is positioned along this ray/trajectory. Note that the points shown along the ray are just arbitrary points, the ray is a direction from the camera, not a set of points.
Fortunately, because we have a point along the ray and we know that the trajectory must pass from the camera to this point, we can determine the direction of the ray. The next step is therefore to subtract the camera position from the mouse3D position; this gives a directional vector rather than just a single point:
mouse3D.sub( camera.position );
mouse3D.normalize();
We now have a direction from the camera to this point in 3D space (mouse3D now contains this direction). This is then turned into a unit vector by normalizing it.
The next step is to create a ray (Raycaster) starting from the camera position and using the direction (mouse3D) to cast the ray:
var raycaster = new THREE.Raycaster( camera.position, mouse3D );
The rest of the code determines whether the objects in 3D space are intersected by the ray or not. Happily, it is all taken care of for us behind the scenes by intersectObjects.
The Demo
OK, so let's look at a demo from my site here that shows these rays being cast in 3D space. When you click anywhere, the camera rotates around the object to show you how the ray is cast. Note that when the camera returns to its original position, you only see a single dot. This is because all the other dots are along the line of the projection and therefore blocked from view by the front dot. This is similar to when you look down the line of an arrow pointing directly away from you - all that you see is the base. Of course, the same applies when looking down the line of an arrow that is travelling directly towards you (you only see the head), which is generally a bad situation to be in.
The z coordinate
Let's take another look at that z coordinate. Refer to this demo as you read through this section and experiment with different values for z.
OK, let's take another look at this function:
var mouse3D = new THREE.Vector3(
    (event.clientX / window.innerWidth) * 2 - 1,   // x
    -(event.clientY / window.innerHeight) * 2 + 1, // y
    0.5                                            // z
);
We chose 0.5 as the value. I mentioned earlier that the z coordinate dictates the depth of the projection into 3D. So, let's have a look at different values for z to see what effect it has. To do this, I have placed a blue dot where the camera is, and a line of green dots from the camera to the unprojected position. Then, after the intersections have been calculated, I move the camera back and to the side to show the ray. Best seen with a few examples.
First, a z value of 0.5:
Note the green line of dots from the camera (blue dot) to the unprojected value (the coordinate in 3D space). This is like the barrel of a gun, pointing in the direction that the ray should be cast. The green line essentially represents the direction that is calculated before being normalised.
OK, let's try a value of 0.9:
As you can see, the green line has now extended further into 3D space. 0.99 extends even further.
I do not know if there is any importance as to how big the value of z is. It seems that a bigger value would be more precise (like a longer gun barrel), but since we are calculating the direction, even a short distance should be pretty accurate. The examples that I have seen use 0.5, so that is what I will stick with unless told otherwise.
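A quick hedged way to convince yourself of that, using the same (deprecated) projector API as above: unproject the same mouse position at two different z values and compare the normalised directions (mouseX/mouseY below are assumed NDC coordinates):
// Sketch: any z strictly between 0 and 1 should give a (numerically) identical
// direction once normalised, since every unprojected point lies on the same ray.
function rayDirection(z) {
    var p = new THREE.Vector3(mouseX, mouseY, z);
    projector.unprojectVector(p, camera);
    return p.sub(camera.position).normalize();
}
console.log(rayDirection(0.5), rayDirection(0.9)); // expect near-identical vectors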
Projection when the canvas is not full screen
Now that we know a bit more about what is going on, we can figure out what the values should be when the canvas does not fill the window and is positioned on the page. Say, for example, that:
the div containing the three.js canvas is offsetX from the left and offsetY from the top of the screen.
the canvas has a width equal to viewWidth and height equal to viewHeight.
The code would then be:
var mouse3D = new THREE.Vector3(
    ((event.clientX - offsetX) / viewWidth) * 2 - 1,
    -((event.clientY - offsetY) / viewHeight) * 2 + 1,
    0.5
);
Basically, what we are doing is calculating the position of the mouse click relative to the canvas (for x: event.clientX - offsetX). Then we determine proportionally where the click occurred (for x: /viewWidth) similar to when the canvas filled the window.
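In practice, a hedged way to avoid hard-coding offsetX/offsetY and the view size is to read them off the canvas element itself:
// Sketch: derive the offsets and view size from the renderer's canvas.
var rect = renderer.domElement.getBoundingClientRect();
var mouse3D = new THREE.Vector3(
    ((event.clientX - rect.left) / rect.width) * 2 - 1,
    -((event.clientY - rect.top) / rect.height) * 2 + 1,
    0.5
);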
That's it, hopefully it helps.
Basically, you need to translate between the 3D world space and the 2D screen space.
Renderers use projectVector for translating 3D points to the 2D screen. unprojectVector is basically for doing the inverse, unprojecting 2D points into the 3D world. For both methods you pass the camera you're viewing the scene through.
So, in this code you're creating a normalised vector in 2D space. To be honest, I was never too sure about the z = 0.5 logic.
mouse3D.x = (event.clientX / window.innerWidth) * 2 - 1;
mouse3D.y = -(event.clientY / window.innerHeight) * 2 + 1;
mouse3D.z = 0.5;
Then, this code uses the camera projection matrix to transform it to our 3D world space.
projector.unprojectVector(mouse3D, camera);
With the mouse3D point converted into 3D space, we can now use it to get the direction, and then use the camera position as the origin to cast a ray from.
var ray = new THREE.Ray(camera.position, mouse3D.subSelf(camera.position).normalize());
var intersects = ray.intersectObject(plane);
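As for when you would use projectVector (the other half of the question): a hedged example of the forward direction is pinning an HTML label over a 3D object - project the object's position to NDC, then map NDC to pixels. This assumes a full-window renderer and uses the same deprecated-era projector API as above:
// Sketch: 3D -> 2D with projectVector (hypothetical label-pinning example).
var pos = mesh.position.clone();
projector.projectVector(pos, camera);          // pos is now in NDC (-1..1)
var x = (pos.x + 1) / 2 * window.innerWidth;   // NDC -> CSS pixels
var y = -(pos.y - 1) / 2 * window.innerHeight; // y flips: NDC up vs. screen down
// position an absolutely-positioned HTML label at (x, y) here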
As of release r70, Projector.unprojectVector and Projector.pickingRay are deprecated. Instead, we have raycaster.setFromCamera, which makes life easier when finding the objects under the mouse pointer.
var mouse = new THREE.Vector2();
mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
var raycaster = new THREE.Raycaster();
raycaster.setFromCamera(mouse, camera);
var intersects = raycaster.intersectObjects(scene.children);
intersects[0].object gives the object under the mouse pointer and intersects[0].point gives the point on the object where the mouse pointer was clicked.
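For completeness, a hedged sketch of wiring the modern API into a click handler (assumes a full-window renderer; use canvas-relative coordinates otherwise):
// Sketch: click-to-pick with the modern Raycaster API.
window.addEventListener('click', function (event) {
    mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
    raycaster.setFromCamera(mouse, camera);
    var intersects = raycaster.intersectObjects(scene.children);
    if (intersects.length > 0) {
        console.log('hit', intersects[0].object, 'at', intersects[0].point);
    }
});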
Projector.unprojectVector() treats the vec3 as a position. During the process the vector gets translated, hence we use .sub(camera.position) on it. Plus, we need to normalize it after this operation.
I will add some graphics to this post but for now I can describe the geometry of the operation.
We can think of the camera as a pyramid in terms of geometry. We in fact define it with 6 planes - left, right, top, bottom, near and far (near being the plane closest to the tip).
If we were standing in some 3D scene and observing these operations, we would see this pyramid in an arbitrary position with an arbitrary rotation in space. Let's say that this pyramid's origin is at its tip, and its negative z axis runs towards the bottom.
Whatever ends up being contained within those 6 planes will end up being rendered on our screen if we apply the correct sequence of matrix transformations, which in OpenGL goes something like this:
NDC_or_homogenous_coordinates = projectionMatrix * viewMatrix * modelMatrix * position.xyzw;
This takes our mesh from its object space into world space, then into camera space, and finally applies the perspective projection matrix, which essentially puts everything into a small cube (NDC, with ranges from -1 to 1).
Object space can be a neat set of xyz coordinates in which you generate something procedurally, or say a 3D model that an artist modeled using symmetry and which thus sits neatly aligned with the coordinate space, as opposed to an architectural model obtained from something like REVIT or AutoCAD.
An objectMatrix could happen in between the model matrix and the view matrix, but this is usually taken care of ahead of time. Say, flipping y and z, or bringing a model that's far away from the origin into bounds, converting units, etc.
If we think of our flat 2D screen as if it had depth, it could be described the same way as the NDC cube, albeit slightly distorted. This is why we supply the aspect ratio to the camera. If we imagine a square the size of our screen height, the remainder is the aspect ratio, which is how much we need to scale our x coordinates by.
Now back to 3d space.
We're standing in a 3D scene and we see the pyramid. If we cut everything around the pyramid, then take the pyramid along with the part of the scene contained in it, put its tip at (0,0,0), and point the bottom towards the -z axis, we will end up here:
viewMatrix * modelMatrix * position.xyzw
Multiplying this by the projection matrix will be the same as if we took the tip and started pulling it apart along the x and y axes, creating a square out of that one point and turning the pyramid into a box.
In this process the box gets scaled to -1 and 1 and we get our perspective projection and we end up here:
projectionMatrix * viewMatrix * modelMatrix * position.xyzw;
In this space, we have control over a 2 dimensional mouse event. Since it's on our screen, we know that it's two dimensional, and that it's somewhere within the NDC cube. If it's two dimensional, we can say that we know X and Y but not the Z, hence the need for ray casting.
So when we cast a ray, we are basically sending a line through the cube, perpendicular to one of its sides.
Now we need to figure out if that ray hits something in the scene, and in order to do that we need to transform the ray from this cube, into some space suitable for computation. We want the ray in world space.
Ray is an infinite line in space. It's different from a vector because it has a direction, and it must pass through a point in space. And indeed this is how the Raycaster takes its arguments.
So if we squeeze the top of the box along with the line back into the pyramid, the line will originate from the tip and run down and intersect the bottom of the pyramid somewhere between -mouse.x * farRange and -mouse.y * farRange.
(-1 and 1 at first, but view space is in world scale, just rotated and moved)
Since this is the default location of the camera, so to speak (its object space), if we apply the camera's own world matrix to the ray, we will transform it along with the camera.
Since the ray passes through 0,0,0, we only have it's direction and THREE.Vector3 has a method for transforming a direction:
THREE.Vector3.transformDirection()
It also normalizes the vector in the process.
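In code, a hedged condensed version of that whole pipeline with current three.js methods (Vector3.unproject has replaced the old Projector.unprojectVector):
// Sketch: unproject a point on the ray into world space, then recover the
// direction by removing the camera translation, as described above.
var point = new THREE.Vector3(mouse.x, mouse.y, 0.5).unproject(camera);
var direction = point.sub(camera.position).normalize();
var ray = new THREE.Ray(camera.position.clone(), direction);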
The Z coordinate in the method above
This essentially works with any value, and acts the same because of the way the NDC cube works.
The near plane and far plane are projected onto -1 and 1.
So when you say, shoot a ray at:
[ mouse.x | mouse.y | someZpositive ]
you send a line, through a point (mouse.x, mouse.y, 1) in the direction of (0,0,someZpositive)
If you relate this to the box/pyramid example, this point is at the bottom, and since the line originates from the camera it goes through that point as well.
BUT, in the NDC space, this point is stretched to infinity, and this line ends up being parallel with the left,top,right,bottom planes.
Unprojecting with the above method essentially turns this into a position/point. The far plane just gets mapped into world space, so our point sits somewhere at z = -1, between -cameraAspect and +cameraAspect on x, and between -1 and 1 on y.
Since it's a point, applying the camera's world matrix will not only rotate it but translate it as well, hence the need to bring it back to the origin by subtracting the camera's position.
