How to use Three.js plane - javascript

I'm trying to use three.js plane to get the distance from a point to a plane.
I have three points a, b, c, from which I calculate the normal like so:
const v = a.clone().sub(c);
const u = b.clone().sub(c);
const normal = u.cross(v);
Then
const plane = new THREE.Plane(normal, (?))
What are you supposed to give in the second argument?
From the docs:
the negative distance from the origin to the plane along the normal vector. Default is 0.
What does that mean?
If I pass in the distance from one of the points a, b, c to (0,0,0) (both positive and negative), e.g. const dist = a.distanceTo(new THREE.Vector3(0,0,0)), and then call:
plane.distanceToPoint(a);
I get a huge number instead of zero. The same happens if I leave that argument empty.
So how can I place that plane at its correct place so that the distance to points on that plane will be zero as it should?

As you have three coplanar points, you can use .setFromCoplanarPoints(a, b, c) method of THREE.Plane().
An example of using it is in this SO answer.
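For reference, the second constructor argument is the plane constant: THREE.Plane expects a unit-length normal, and for a point p on the plane the constant equals -normal.dot(p). The cross product in the question is not unit-length, which is one reason distanceToPoint() returned huge values (the zero constant is the other: it places the plane through the origin). A minimal sketch (assuming a, b and c are THREE.Vector3 instances):
// Let three.js compute the unit normal and constant for you.
const plane = new THREE.Plane().setFromCoplanarPoints(a, b, c);
console.log(plane.distanceToPoint(a)); // ~0 (floating-point error aside)

// Equivalent by hand: normalize the normal, then constant = -n.dot(a).
const n = normal.clone().normalize();
const manual = new THREE.Plane(n, -n.dot(a));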

Related

3D model in HTML/CSS; Calculate Euler rotation of triangle

TLDR; Given a set of triangle vertices and a normal vector (all in unit space), how do I calculate X, Y, Z Euler rotation angles of the triangle in world space?
I am attempting to display a 3D model in HTML - with actual HTML tags and CSS transforms. I've already loaded an OBJ file into a JavaScript class instance.
The model is triangulated. My first aim is just to display the triangles as planes (HTML elements are rectangular) - I'll be 'cutting out' the triangle shapes with CSS clip-path later on.
I am really struggling to understand and get the triangles of the model rotated correctly.
I thought a rotation matrix could help me out, but my only experience with those is where I already have the rotation vector and I need to convert and send that to WebGL. This time there is no WebGL (or tutorials) to make things easier.
The following excerpt shows the creation/'rendering' of the faces. I'm using the face normal as the rotation but I know this is wrong.
for (const face of _obj.faces) {
  const vertices = face.vertices.map(_index => _obj.vertices[_index]);
  const center = [
    (vertices[0][0] + vertices[1][0] + vertices[2][0]) / 3,
    (vertices[0][1] + vertices[1][1] + vertices[2][1]) / 3,
    (vertices[0][2] + vertices[1][2] + vertices[2][2]) / 3
  ];

  // Each vertex has a normal but I am just picking the first vertex's normal
  // to use as the 'face normal'.
  const normals = face.normals.map(_index => _obj.normals[_index]);
  const normal = normals[0];

  // HTML element creation code goes here; reference is 'element'.

  // Set face position (unit space)
  element.style.setProperty('--posX', center[0]);
  element.style.setProperty('--posY', center[1]);
  element.style.setProperty('--posZ', center[2]);

  // Set face rotation, converting to degrees also.
  const rotation = [
    normal[0] * toDeg,
    normal[1] * toDeg,
    normal[2] * toDeg,
  ];
  element.style.setProperty('--rotX', rotation[0]);
  element.style.setProperty('--rotY', rotation[1]);
  element.style.setProperty('--rotZ', rotation[2]);
}
The CSS first translates the face on X,Y,Z, then rotates it on X,Y,Z in that order.
I think I need to 'decompose' my triangles' rotation into separate axis rotations - i.e. rotate on X, then on Y, then on Z - to get the correct rotation as per the model face.
I realise that the normal vector gives me an orientation but not a rotation around itself - I need to calculate that. I think I have to determine a vector along one triangle side and cross it with the normal, but this is something I am not clear on.
I have spent hours looking at similar questions on SO but I'm not smart enough to understand or make them work for me.
Is it possible to describe what steps to take without LaTeX equations? I'm good with pseudocode but my Math skills are severely lacking.
The full code is here: https://whoshotdk.co.uk/cssfps/ (view HTML source)
The mesh building function is at line 422.
The OBJ file is here: https://whoshotdk.co.uk/cssfps/data/model/test.obj
The Blender file is here: https://whoshotdk.co.uk/cssfps/data/model/test.blend
The mesh is just a single plane at an angle, displayed in my example (wrongly) in pink.
The world is setup so that -X is left, -Y is up, -Z is into the screen.
Thank You!
If you have a plane and want to rotate it to be in the same direction as some normal, you need to figure out the angles between that plane's normal vector and the normal vector you want. The Euler angles between two 3D vectors can be complicated, but in this case the initial plane normal should always be the same, so I'll assume the plane normal starts pointing towards positive X to make the maths simpler.
You also probably want to rotate before you translate, so that everything is easier since you'll be rotating around the origin of the coordinate system.
By taking the general 3D rotation matrix (all three axis rotation matrices multiplied together; you can find it on the Wikipedia page) and applying it to the vector (1,0,0), you get the equations relating the three angles a, b and c to the rotated vector (x,y,z). This results in:
x = cos(a)*cos(b)
y = sin(a)*cos(b)
z = -sin(b)
Then rearranging these equations to find a, b and c, which will be the three angles you need (the three values of the rotation array, respectively) - using atan2 instead of atan(y/x) so that all quadrants are handled:
a = atan2(y, x)
b = asin(-z)
c = 0
So in your code this would look like:
const rotation = [
  Math.atan2(normal[1], normal[0]) * toDeg,
  Math.asin(-normal[2]) * toDeg,
  0
];
It may be that you need to use a different rotation matrix (if the order of the rotations is not what you expected) or a different starting vector (although you can just use this method and then do an extra 90 degree rotation if each plane actually starts in the positive Y direction, for example).
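As a quick sanity check, here's a small sketch (the sample normal is just an illustration) that recovers the angles and plugs them back into the forward equations above; the output should reproduce the input normal:
// Recover a and b from a unit normal, per the equations above.
function anglesFromNormal([x, y, z]) {
  const a = Math.atan2(y, x); // rotation about Z
  const b = Math.asin(-z);    // rotation about Y
  return [a, b, 0];           // c stays 0 (spin around the normal)
}

const [a, b] = anglesFromNormal([0, 0.6, -0.8]); // arbitrary unit normal
console.log(
  Math.cos(a) * Math.cos(b), // 0
  Math.sin(a) * Math.cos(b), // 0.6
  -Math.sin(b)               // -0.8
);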

Create a triangle around a point, perpendicular to a normal

I'd like to get the points of a triangle around a point, where the face would point in the direction of a specified normal. I'll be using THREE.js to add them to a BufferGeometry.
Very crude drawing: (image omitted)
Here's the code I have so far:
//The XYZ location of a point:
var x = model.points[i*3];
var y = model.points[i*3+1];
var z = model.points[i*3+2];
//The normal vector direction:
var nx = model.normals[i*3];
var ny = model.normals[i*3+1];
var nz = model.normals[i*3+2];
How can I pick 3 more points around this point that are all perpendicular to the normal and the same distance from the point / each other?
THANKS!
1) Take cross product of the normal with an arbitrary non-parallel vector. This will get you a vector perpendicular to the normal vector.
1.5) Normalize and scale the perpendicular vector to desired size. The length of this vector will be the distance from the triangle's centroid to each of its vertices.
2) Rotate the perpendicular vector by 2PI/3 and 4PI/3 around the normal vector.
3) Add the 3 vectors to the center point.
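A minimal sketch of these steps with three.js vector math (assuming center and normal are THREE.Vector3 instances, normal is unit-length, and r is the desired centroid-to-vertex distance):
// Sketch of steps 1-3 above, under the stated assumptions.
function triangleAroundPoint(center, normal, r) {
  // 1) Cross the normal with an arbitrary non-parallel vector.
  const arbitrary = Math.abs(normal.x) < 0.9
    ? new THREE.Vector3(1, 0, 0)
    : new THREE.Vector3(0, 1, 0);
  const perp = new THREE.Vector3().crossVectors(normal, arbitrary);
  // 1.5) Normalize and scale to the centroid-to-vertex distance.
  perp.normalize().multiplyScalar(r);
  // 2) Rotate by 0, 2PI/3 and 4PI/3 around the normal,
  // 3) then add each rotated vector to the center point.
  return [0, 2 * Math.PI / 3, 4 * Math.PI / 3].map(angle =>
    perp.clone().applyAxisAngle(normal, angle).add(center)
  );
}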
Note that there are infinitely many triangles that fit your criteria, even if we limit to only equilateral triangles. This is because there is an entire plane which is perpendicular to the given vector <nx, ny, nz> through the given point (x, y, z). Read here to see how to derive the equation for that plane. From there, you will need to pick a point on the plane. Then you can calculate the other two points by rotating around the given point at (x, y, z).
You need to find the plane perpendicular to the normal and containing the point (there is only one), then pick any point in that plane at the specified distance and rotate it twice by 120 degrees around the central point.

THREE.js raycasting very slow against single > 500k poly (faces) object, line intersection with globe

In my project I have a player walk around a globe. The globe is not just a sphere; it has mountains and valleys, so I need the player's z position to change. For this I'm raycasting a single ray from the player's position against a single object (the globe), getting the point where they intersect, and changing the player's position accordingly. I'm only raycasting when the player moves, not on every frame.
For a complex object it takes forever: ~200ms for an object with ~1M polys (faces) (a 1024x512-segment sphere). Does raycasting test against every single face?
Is there a traditional fast way to achieve this in THREE, like some acceleration structure (octree? BVH? - tbh my Google searches don't seem to turn up such a thing included in THREE), or some other thinking-out-of-the-box (no raycasting) method?
var dir = g_Game.earthPosition.clone();
var startPoint = g_Game.cubePlayer.position.clone();
var directionVector = dir.sub(startPoint.multiplyScalar(10));
g_Game.raycaster.set(startPoint, directionVector.clone().normalize());
var t1 = new Date().getTime();
var rayIntersects = g_Game.raycaster.intersectObject(g_Game.earth, true);
if (rayIntersects[0]) {
  var dist = rayIntersects[0].point.distanceTo(g_Game.earthPosition);
  dist = Math.round(dist * 100 + Number.EPSILON) / 100;
  g_Player.DistanceFromCenter = dist + 5;
}
var t2 = new Date().getTime();
console.log(t2 - t1);
Thank you in advance
Do not use the three.js Raycaster.
Consider Ray.js, which offers the function intersectTriangle(a, b, c, backfaceCulling, target).
Suggested optimizations:
If the player starts from a known position ⇒ you already know the initial height - no need to raycast (or do a one-time slow full-mesh intersection).
If the player moves in small steps ⇒ the next raycast will most likely hit the same face as before.
Optimization #1 - remember the previous face, and raycast against it first.
If the player does not jump ⇒ the next raycast will most likely hit a face adjacent to the face the player was on before.
Optimization #2 - build a cache so that, given a face index, you can retrieve its adjacent faces in O(1) time.
This cache may be loaded from a file if your planet is not generated in real time.
So with this approach, each move costs one O(1) cache read plus a raycast against 1-6 faces.
Win!
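A rough sketch of that idea (illustrative only, not the answerer's code; it assumes an indexed BufferGeometry, a ray already transformed into the mesh's local space, and a prebuilt adjacentFaces Map from face index to neighbouring face indices):
// Illustrative sketch of the cache-based raycast described above.
// Assumptions: `ray` is a THREE.Ray in the mesh's local space and
// `adjacentFaces` is a Map<number, number[]> built offline.
const target = new THREE.Vector3();

function raycastFace(geometry, ray, faceIndex) {
  const index = geometry.index.array;
  const pos = geometry.attributes.position;
  const a = new THREE.Vector3().fromBufferAttribute(pos, index[faceIndex * 3]);
  const b = new THREE.Vector3().fromBufferAttribute(pos, index[faceIndex * 3 + 1]);
  const c = new THREE.Vector3().fromBufferAttribute(pos, index[faceIndex * 3 + 2]);
  return ray.intersectTriangle(a, b, c, false, target); // null on miss
}

// Try the previous face first, then its neighbours, before falling
// back to a full (slow) raycast over the whole mesh.
function fastHeightRaycast(geometry, ray, lastFaceIndex) {
  const candidates = [lastFaceIndex, ...adjacentFaces.get(lastFaceIndex)];
  for (const idx of candidates) {
    if (raycastFace(geometry, ray, idx) !== null) {
      return { faceIndex: idx, point: target.clone() };
    }
  }
  return null; // caller should fall back to the full raycast
}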
For a complex object it takes forever. It takes ~200ms for an object with ~1m polys (faces) (1024x512 segments sphere). Does raycasting cast against every single face ?
Out of the box THREE.js does check every triangle when performing a raycast against a mesh and there are no acceleration structures built into THREE.
I've worked with others on the three-mesh-bvh package (github, npm) to help address this problem, though, which may help you get up to the speeds you're looking for. Here's how you might use it:
import * as THREE from 'three';
import { MeshBVH, acceleratedRaycast } from 'three-mesh-bvh';
THREE.Mesh.prototype.raycast = acceleratedRaycast;
// ... initialize the scene...
globeMesh.geometry.boundsTree = new MeshBVH(globeMesh.geometry);
// ... initialize raycaster...
// Optional. Improves the performance of the raycast
// if you only need the first collision
raycaster.firstHitOnly = true;
const intersects = raycaster.intersectObject(globeMesh, true);
// do something with the intersections
There are some caveats mentioned in the README so keep those in mind (the mesh index is modified, only nonanimated BufferGeometry is supported, etc). And there's still some memory optimization that could be done but there are some tweakable options to help tune that.
I'll be interested to hear how this works for you! Feel free to leave feedback in the issues on how to improve the package, as well. Hope that helps!
I think you should pre-render the height map of your globe into a texture, assuming your terrain is not dynamic. Read all of it into a typed array, and then whenever your player moves, you only need to back-project her coordinates into that texture, query it, offset and multiply and you should get what you need in O(1) time.
It's up to you how you generate that height map. Actually if you have a bumpy globe, then you should probably start with height map in the first place, and use that in your vertex shader to render the globe (with the input sphere being perfectly smooth). Then you can use the same height map to query the player's Z.
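A rough sketch of that lookup (all names here - heights, width, height, minRadius, maxRadius - are assumptions, as is the equirectangular mapping and a globe centered at the origin):
// Illustrative O(1) height lookup from a pre-rendered equirectangular
// height map, read into a Float32Array `heights` of size width * height
// with values normalized to 0..1.
function radiusAt(playerPos) {
  // Back-project the 3D position onto lat/lon texture coordinates.
  const dir = playerPos.clone().normalize();
  const u = 0.5 + Math.atan2(dir.z, dir.x) / (2 * Math.PI);
  const v = 0.5 - Math.asin(dir.y) / Math.PI;
  const x = Math.min(width - 1, Math.floor(u * width));
  const y = Math.min(height - 1, Math.floor(v * height));
  const h = heights[y * width + x];
  // Offset and multiply: map 0..1 back onto the globe's radius range.
  return minRadius + h * (maxRadius - minRadius);
}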
Edit: Danger! This may cause someone's death one day. The edge case I see here is that the nearest collision will not be seen, because searchRange will not contain the nearest triangle but will contain the second-nearest one, returning it as the closest. I.e. a robotic arm may stop near the torso instead of stopping at the arm right in front of it.
anyway
Here's a hack for when you're raycasting not too far from the previous result, i.e. during consecutive mousemove events. This will not work for completely random rays.
Mesh raycasting supports drawRange to limit how many triangles will be searched. Also, each raycast result comes with a faceIndex telling which triangle was hit. If you're raycasting continuously, i.e. on mousemove or with a laser linearly scanning a mesh, you can first search the area nearby* the previous hit.
* Triangles close together in the data may look like neighbours, but it's not guaranteed they are sorted in any way. Still, it's very likely that triangles close in the data are also close in space.
let lastFaceIndex = null
const searchRange = 2000 * 3

function raycast(mesh, raycaster) {
  // limited search
  if (lastFaceIndex !== null) {
    const drawRange = mesh.geometry.drawRange
    drawRange.start = Math.max(0, lastFaceIndex * 3 - searchRange)
    drawRange.count = searchRange * 2
    const intersects = raycaster.intersectObjects([mesh]);
    drawRange.start = 0
    drawRange.count = Infinity
    if (intersects.length) {
      lastFaceIndex = intersects[0].faceIndex
      return intersects[0]
    }
  }
  // regular search
  const intersects = raycaster.intersectObjects([mesh]);
  if (!intersects.length) {
    lastFaceIndex = null
    return null
  }
  lastFaceIndex = intersects[0].faceIndex
  return intersects[0]
}

Add Points to a Point Cloud with User Mouse Clicks

I'm using Three.js to render point cloud data retrieved from a server.
For each data set, I loop over the data points and create a Three.js Vector3 object with x, y & z values corresponding to each data point. I push each of these vertices onto a list which I then pass into the vertices prop of my geometry component within my points component.
render() {
  this.pointCloudVertices = [];
  if (this.props.points) {
    const points = this.props.points;
    for (let i = 0; i < points.x.length; i++) {
      const vertex = new THREE.Vector3();
      vertex.x = points.x[i];
      vertex.y = points.y[i];
      vertex.z = points.z[i];
      this.pointCloudVertices.push(vertex);
    }
  }
  return (
    <points>
      <geometry vertices={this.pointCloudVertices}/>
      <pointsMaterial
        color={ (Math.floor(Math.random()*16777215)) }
        size={ 0.2 }
      />
    </points>
  );
}
https://github.com/caseysiebel/pc-client/blob/master/src/components/PointCloud.js
I'd like the user to be able to use their mouse to add points to another point cloud (points component) by clicking inside the canvas.
I found a lot of resources pointing to Three.js' Raycaster, but this tool seems to be more for selecting objects already in the canvas. In my case I'd like the user to be able to click on an area of the canvas not occupied by an object, have the client work out the x, y & z coordinates of that click, and then add a vertex with those x/y/z values to a points component (likely empty until the user adds points via this modality).
I'm a little confused as to how I will convert 2D mouse events into a 3D vertex value. If anyone knows any good resources on this subject I'd love to check them out.
With THREE.Raycaster(), I see several solutions:
1. Use the .at() method of the .ray property. Like this:
raycaster.ray.at(100 + Math.random() * 150, rndPoint);
Here you can set constraints on the distance from the origin of the ray. (The original answer includes screenshots showing the result from the original camera and from the side.)
jsfiddle example. You can switch the lines off there.
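A rough sketch of solution 1 wired to a click handler (camera, renderer and addPointToCloud are assumed from the surrounding app):
// Illustrative: convert a click to normalized device coordinates,
// cast a ray from the camera, and place a point along that ray.
renderer.domElement.addEventListener('click', (event) => {
  const rect = renderer.domElement.getBoundingClientRect();
  const ndc = new THREE.Vector2(
    ((event.clientX - rect.left) / rect.width) * 2 - 1,
    -((event.clientY - rect.top) / rect.height) * 2 + 1
  );
  const raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(ndc, camera);
  const point = new THREE.Vector3();
  raycaster.ray.at(100, point);  // 100 units along the ray; adjust to taste
  addPointToCloud(point);        // assumed helper that updates the cloud
});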
2. Use the .intersectObjects() method, where the intersected objects are planes of constraint. For example, say we have planes forming a cube. When we cast a ray through them, we always intersect two planes, and the array of intersected objects is sorted by distance from the origin of the ray, so the first element in the array is the closest plane. Once we know both intersection points, we can subtract point1 from point2 to get a new vector (its length and direction), and then set a point at a random place along the vector from point1 to point2:
intersected = raycaster.intersectObjects(planes.children);
if (intersected.length > 0){
  var point1 = intersected[0].point;
  var point2 = intersected[1].point;
  var diff = point2.clone().sub(point1);
  var diffLength = diff.length();
  rndPoint = point1.clone().addScaledVector(diff.normalize(), Math.random() * diffLength);
  . . .
}
(Again, the original answer includes screenshots showing the result from the front camera and from the side.)
jsfiddle example. Lines are switchable here too.
Or you can use THREE.Raycaster() with THREE.OrthographicCamera(), which is simpler.

Helper Function Needed to Turn WebGL / Three.js Lengths to Pixels

I'm searching for how WebGL / Three.js in general set their heights and widths.
As in, what number system do they use to set x, y, z?
In the example below, the arrow is pointing straight up with Y set to 1, but on screen it looks like 150-200 pixels.
Is there a helper function I can write that I could pass 100 pixels into, and it would return the correct float number to use with THREE.js?
Excuse me if I am not talking in correct terms when it comes to number systems, but this is the only way I know how to reference it at this point.
The only thing I am missing below is creating the scene, but the rest is there; the image shows what it looks like.
Once again, is there a helper function I can pass pixels to and get back the correct float for use with THREE.js?
Here is my arrow:
//scene.remove(cube);
scene.remove(group);
// create a new one
var sphere = createMesh(new THREE.SphereGeometry(5, 10, 10));
var cube = createMesh(new THREE.BoxGeometry(6, 6, 6));
sphere.position.set(controls.spherePosX, controls.spherePosY, controls.spherePosZ);
cube.position.set(controls.cubePosX, controls.cubePosY, controls.cubePosZ);
// add it to the scene.
// also create a group, only used for rotating
var group = new THREE.Group();
group.add(sphere);
group.add(cube);
scene.add(group);
controls.positionBoundingBox();
var arrow = new THREE.ArrowHelper(new THREE.Vector3(0, 1, 0), new THREE.Vector3(0, 0, 0), 10, 0x0000ff); // origin must be a Vector3, not 0
scene.add(arrow);
I receive these JS objects with the pixel values and then write to the screen, but how do I convert the pixels down to usable units in 3D?
The lengths in 3D do not translate to lengths in 2D uniformly. Especially when perspective projection is employed.
Let's consider your example: Two arrows of the same 3D length and the same orientation would render to different 2D lengths depending on their distance from the camera. The arrow that is closer to camera will be rendered longer than the arrow farther from camera.
In order to maintain a certain pixel length for a certain arrow, you'd have to adjust the 3D length of the arrow every time some parameter of the camera changes (e.g. position, orientation, FOV), and also every time the position or orientation of the arrow changes. This is possible (see the comment by @WacławJasper) but rather complicated.
If you could explain the bigger picture of what you wish to achieve there might be a simpler solution to your problem.
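For the case where the arrow sits at a known, fixed distance from a perspective camera, a rough sketch of such a helper (dist and viewportHeightPx are assumptions) could look like this:
// Illustrative: estimate the world-space length that spans `pixels`
// on screen at distance `dist` from a perspective camera. The frustum
// height in world units at distance d is 2 * d * tan(fov / 2), and it
// maps onto the full viewport height in pixels.
function worldLengthForPixels(pixels, dist, camera, viewportHeightPx) {
  // Older three.js releases expose this as THREE.Math.degToRad.
  const fovRad = THREE.MathUtils.degToRad(camera.fov); // vertical FOV
  const frustumHeight = 2 * dist * Math.tan(fovRad / 2);
  return (pixels / viewportHeightPx) * frustumHeight;
}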
