Helper Function Needed to Turn WebGL / Three.js Lengths to Pixels - javascript

I am searching for how WebGL / Three.js sets heights and widths in general, as in what number system it uses for x, y, z.
In the example below, the arrow is pointing straight up with Y set to 1, but in pixels it looks like 150 - 200 pixels long.
Is there a helper function I could write that takes 100 for the pixels and returns the correct float to use with THREE.js?
Excuse me if I am not using the correct terms when it comes to number systems, but this is the only way I know how to describe it at this point.
The only thing missing below is the creation of the scene, but the rest is there, and the image shows what it looks like.
Once again: is there a helper function I can pass pixels to and get back the correct float for use with THREE.js?
Here is my arrow:
//scene.remove(cube);
scene.remove(group);
// create a new one
var sphere = createMesh(new THREE.SphereGeometry(5, 10, 10));
var cube = createMesh(new THREE.BoxGeometry(6, 6, 6));
sphere.position.set(controls.spherePosX, controls.spherePosY, controls.spherePosZ);
cube.position.set(controls.cubePosX, controls.cubePosY, controls.cubePosZ);
// add it to the scene.
// also create a group, only used for rotating
var group = new THREE.Group();
group.add(sphere);
group.add(cube);
scene.add(group);
controls.positionBoundingBox();
var arrow = new THREE.ArrowHelper(new THREE.Vector3(0, 1, 0), new THREE.Vector3(0, 0, 0), 10, 0x0000ff); // (direction, origin, length, color)
scene.add(arrow);
I receive these JS objects with the pixel values and then write to the screen, but how do I convert the pixels down to usable units in 3D?

Lengths in 3D do not translate uniformly to lengths in 2D, especially when a perspective projection is employed.
Consider your example: two arrows of the same 3D length and orientation will render at different 2D lengths depending on their distance from the camera. The arrow closer to the camera is rendered longer than the arrow farther away.
In order to maintain a certain pixel length for a certain arrow, you'd have to adjust the 3D length of the arrow every time any camera parameter changes (e.g. position, orientation, FOV), and also every time the position or orientation of the arrow changes. This is possible (see the comment by @WacławJasper) but rather complicated.
If you could explain the bigger picture of what you wish to achieve there might be a simpler solution to your problem.
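That said, for a fixed distance the conversion is straightforward. Here is a minimal sketch of such a helper, assuming a THREE.PerspectiveCamera and that the object sits at a known distance from the camera; pixelsToWorldUnits is a hypothetical name:

// Convert an on-screen length in pixels to world units, valid for a
// perspective camera and for geometry at the given distance from it.
function pixelsToWorldUnits(pixels, camera, distance, screenHeightPx) {
    // Height of the view frustum (in world units) at that distance.
    var fovRad = camera.fov * Math.PI / 180;
    var frustumHeight = 2 * distance * Math.tan(fovRad / 2);
    // Scale by the world-units-per-pixel ratio at that distance.
    return pixels * (frustumHeight / screenHeightPx);
}

// Usage: make the arrow appear roughly 100 px long on screen.
var distance = camera.position.distanceTo(arrow.position);
var length = pixelsToWorldUnits(100, camera, distance, renderer.domElement.clientHeight);
arrow.setLength(length);

Note this only holds while the camera and arrow keep that distance; as explained above, you would need to recompute it whenever either one moves.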

Related

3D model in HTML/CSS; Calculate Euler rotation of triangle

TLDR; Given a set of triangle vertices and a normal vector (all in unit space), how do I calculate X, Y, Z Euler rotation angles of the triangle in world space?
I am attempting to display a 3D model in HTML - with actual HTML tags and CSS transforms. I've already loaded an OBJ file into a JavaScript class instance.
The model is triangulated. My first aim is just to display the triangles as planes (HTML elements are rectangular) - I'll be 'cutting out' the triangle shapes with CSS clip-path later on.
I am really struggling to understand and get the triangles of the model rotated correctly.
I thought a rotation matrix could help me out, but my only experience with those is where I already have the rotation vector and need to convert and send it to WebGL. This time there is no WebGL (or tutorials) to make things easier.
The following excerpt shows the creation/'rendering' of the faces. I'm using the face normal as the rotation, but I know this is wrong.
for (const face of _obj.faces) {
    const vertices = face.vertices.map(_index => _obj.vertices[_index]);

    const center = [
        (vertices[0][0] + vertices[1][0] + vertices[2][0]) / 3,
        (vertices[0][1] + vertices[1][1] + vertices[2][1]) / 3,
        (vertices[0][2] + vertices[1][2] + vertices[2][2]) / 3
    ];

    // Each vertex has a normal but I am just picking the first vertex'
    // normal to use as the 'face normal'.
    const normals = face.normals.map(_index => _obj.normals[_index]);
    const normal = normals[0];

    // HTML element creation code goes here; reference is 'element'.

    // Set face position (unit space)
    element.style.setProperty('--posX', center[0]);
    element.style.setProperty('--posY', center[1]);
    element.style.setProperty('--posZ', center[2]);

    // Set face rotation, converting to degrees also.
    const rotation = [
        normal[0] * toDeg,
        normal[1] * toDeg,
        normal[2] * toDeg,
    ];

    element.style.setProperty('--rotX', rotation[0]);
    element.style.setProperty('--rotY', rotation[1]);
    element.style.setProperty('--rotZ', rotation[2]);
}
The CSS first translates the face on X,Y,Z, then rotates it on X,Y,Z in that order.
I think I need to 'decompose' my triangles' rotation into separate axis rotations - i.e. rotate on X, then on Y, then on Z to get the correct rotation as per the model face.
I realise that the normal vector gives me an orientation but not a rotation around itself - I need to calculate that. I think I have to determine a vector along one triangle side and cross it with the normal, but this is something I am not clear on.
I have spent hours looking at similar questions on SO but I'm not smart enough to understand or make them work for me.
Is it possible to describe what steps to take without LaTeX equations? I'm good with pseudo code but my maths skills are severely lacking.
The full code is here: https://whoshotdk.co.uk/cssfps/ (view HTML source)
The mesh building function is at line 422.
The OBJ file is here: https://whoshotdk.co.uk/cssfps/data/model/test.obj
The Blender file is here: https://whoshotdk.co.uk/cssfps/data/model/test.blend
The mesh is just a single plane at an angle, displayed in my example (wrongly) in pink.
The world is setup so that -X is left, -Y is up, -Z is into the screen.
Thank You!
If you have a plane and want to rotate it to be in the same direction as some normal, you need to figure out the angles between that plane's normal vector and the normal vector you want. The Euler angles between two 3D vectors can be complicated, but in this case the initial plane normal should always be the same, so I'll assume the plane normal starts pointing towards positive X to make the maths simpler.
You also probably want to rotate before you translate, so that everything is easier since you'll be rotating around the origin of the coordinate system.
By taking the general 3D rotation matrix (all three 3D rotation matrices multiplied together; you can find it on the Wikipedia page) and applying it to the vector (1,0,0), you get equations for the three angles a, b and c needed to rotate that initial vector to the vector (x,y,z). This results in:
x = cos(a)*cos(b)
y = sin(a)*cos(b)
z = -sin(b)
Then rearranging these equations to find a, b and c, which will be the three angles you need (the three values of the rotation array, respectively):
a = atan(y/x)
b = asin(-z)
c = 0
So in your code this would look like:
const rotation = [
    Math.atan2(normal[1], normal[0]) * toDeg,
    Math.asin(-normal[2]) * toDeg,
    0
];
It may be that you need to use a different rotation matrix (if the order of the rotations is not what you expected) or a different starting vector (although you can just use this method and then do an extra 90 degree rotation if each plane actually starts in the positive Y direction, for example).

three.js lookAt() : how to point some local axis which *isn't* the positive Z axis towards another object

I'm creating an app where a person (right now I'm using a cone-shape) is standing on some surface (right now I'm using a cylinder laid lengthwise) and I'd like their feet to orient toward some point (right now it's the center of the cylinder).
(edit: I just realized that my Z axis in this photo is pointing in the wrong direction; it should be pointing towards the camera, but the question remains unchanged.)
Here is a version of the code similar to what I'm trying to accomplish. https://codepen.io/liamcorbett/pen/YMWayJ (Use arrow keys to move the cone)
//...
person = CreatePerson();
person.mesh.up = new THREE.Vector3(0, 0, 1);

// ...

function updateObj(obj, aboutObj = false) {
    let mesh = obj.mesh;
    if (aboutObj) {
        mesh.lookAt(
            aboutObj.mesh.position.x,
            aboutObj.mesh.position.y,
            mesh.position.z
        );
    }
}

// ...

function animate() {
    // ...
    updateObj(person);
    // ...
}
The code above gives me something similar to what I'm looking for, but the issue is that lookAt() seems to always point the local Positive Z-axis in some direction, and I'd much prefer that it point the local Negative Y-axis instead.
I'd prefer to not change the x,y,z axes of the model itself, as I feel that's going to be a pain to deal with when I'm applying other logic to the person object.
Is there a way to change which axis lookAt() uses? Or am I going to have to roll my own lookAt() function? Thanks ~
Is there a way to change which axis lookAt() uses?
No, the default local forward vector for 3D objects (excluding cameras) is (0, 0, 1). Unlike some other engines, three.js does not allow you to configure the forward vector, only the up vector, and that is not really helpful in your case.
You can try to transform the geometry in order to achieve a similar effect.
If you don't want to do this for some reason and you still want to use Object3D.lookAt(), you have to compute a different target vector (not the cylinder's center).
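A minimal sketch of the geometry route, assuming the cone's geometry initially points up along +Y (as THREE.ConeGeometry does): bake a one-time rotation into the geometry so the feet (-Y) end up on +Z, the axis lookAt() aims at the target.

// One-time transform at creation: rotate the geometry -90 degrees
// about X so the former -Y (feet) direction becomes +Z. After this,
// mesh.lookAt(target) points the feet at the target.
person.mesh.geometry.rotateX(-Math.PI / 2);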
Even if the forward vector of the lookAt() method can't be changed (as @Mugen87 said), you can still adjust the local rotation afterwards, since you know in advance the difference between the forward Z axis that is used and the axis you consider your mesh's 'up' (e.g. a person standing up on the Y axis).
Basically, in your case, just add this line after the lookAt() call:
mesh.rotateOnAxis( new THREE.Vector3(1,0,0), Math.PI * -0.5 );
And the cone will look up :)
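Put together, the question's updateObj() would then look something like this (a sketch; the -90 degree offset is the adjustment just described):

function updateObj(obj, aboutObj = false) {
    let mesh = obj.mesh;
    if (aboutObj) {
        // lookAt() aims the local +Z axis at the target...
        mesh.lookAt(
            aboutObj.mesh.position.x,
            aboutObj.mesh.position.y,
            mesh.position.z
        );
        // ...then pitch -90 degrees about local X so the local -Y axis
        // (the feet) faces the target instead.
        mesh.rotateOnAxis(new THREE.Vector3(1, 0, 0), Math.PI * -0.5);
    }
}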

THREE.JS raycasting performance

I am trying to find the closest distance from a point to a large, complex mesh, along a plane, over a range of directions:
for (var zDown in verticalDistances) {
    var myIntersect = {};

    for (var theta = Math.PI / 2 - 0.5; theta < Math.PI / 2 + 0.5; theta += 0.3) {
        var rayDirection = new THREE.Vector3(
            Math.cos(theta),
            Math.sin(theta),
            0
        ).transformDirection(object.matrixWorld);
        // console.log(rayDirection);

        _raycaster.set(verticalDistances[zDown].minFacePoint, rayDirection, 0, 50);

        // console.time('raycast: ');
        var intersect = _raycaster.intersectObject(planeBufferMesh);
        // console.timeEnd('raycast: '); // this is huge!!! ~ 2,300 ms
        // console.log(_raycaster);
        // console.log(intersect);

        if (intersect.length == 0) continue;

        if ((!('distance' in myIntersect)) || myIntersect.distance > intersect[0].distance) {
            myIntersect.distance = intersect[0].distance;
            myIntersect.point = intersect[0].point.clone();
        }
    }
    // do stuff
}
I get great results with mouse hover on the same surface, but inside this loop each raycast takes over 2 seconds. The only thing I can think of is that the BackSide of the DoubleSide material is far slower?
I also notice that as I space my verticalDistances[zDown].minFacePoint values farther apart, the raycasts start to speed up (500 ms per cast). So as the distance between verticalDistances[i].minFacePoint and verticalDistances[i+1].minFacePoint increases, the raycaster performs faster.
I would go the route of using an octree, but the mouse hover event works extremely well on the exact same planeBuffer. Is this a side-of-material issue that could be solved by loading two FrontSide meshes pointing in opposite directions?
Thank You!!!!
EDIT: It is not a front/back issue. I ran my raycast down the front and the back side of the plane buffer geometry with the same result. Live example coming.
EDIT 2: Working example here. Performance is a little better than the original case but still too slow. I need to move the cylinder in real time. I can optimize a bit by finding certain things, but mouse hover is instant. When you look at the console times, the first two (~500 ms) are the results I am getting for all raycasts.
EDIT 3: Added a mouse hover event that performs the same as the other raycasters. I am not getting the results in my working code that I get in this sample, though. The results I get for all raycasts are the same as the first one or two in the sample, around 500 ms. If I could get it down to 200 ms I could target the items I am looking for and do far less raycasting. I am completely open to suggestions on better methods. Is an octree the way to go?
raycast: : 467.27001953125ms
raycast: : 443.830810546875ms
EDIT 4: @pailhead Here is my plan.
1. Find the closest grid vertex to the point on the plane. I can do a scan of the vertices in the x/y direction and then calculate the minimum distance (see the sketch below).
2. Once I have that closest vertex, I know the closest point has to be on a face containing that vertex. So I will find all faces with that vertex using object.mesh.index.array and calculate the plane-to-point distance for each face. It seems like a raycast should be a little smarter than a full scan when intersecting a mesh, and should at least cull points based on max distance. @WestLangley, any suggestions?
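A brute-force sketch of step 1, assuming planeBufferMesh holds an indexed PlaneBufferGeometry and point is the THREE.Vector3 to measure from (names are illustrative):

var pos = planeBufferMesh.geometry.attributes.position;
var v = new THREE.Vector3();
var bestIndex = -1;
var bestDistSq = Infinity;

for (var i = 0; i < pos.count; i++) {
    v.fromBufferAttribute(pos, i);
    var d = v.distanceToSquared(point);
    if (d < bestDistSq) {
        bestDistSq = d;
        bestIndex = i;
    }
}
// bestIndex is the closest grid vertex; the faces containing it can be
// looked up in planeBufferMesh.geometry.index.array for step 2.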
EDIT 5:
@pailhead, thank you for the help. It's appreciated. I have really simplified my example (<200 lines, with many more comments). Is the raycaster checking every face? It would be much quicker to pick out the faces within the raycasting range specified in the constructor and do a face-to-point calculation. There is no way this should be looping over every face to raycast. I'm going to write my own PlaneBufferGeometry raycast function tonight, after taking a peek at the source code and checking out octrees. I would think that if we have a range in the raycaster constructor, we could pull out the plane buffer vertices within that range, ignoring z, and then just raycast those or do a point-to-plane calculation. I guess I could also create a "mini" surface from that bounding circle and raycast against it. But the fact that the max distance (the manual calls it "far") doesn't affect the speed of the raycaster makes me wonder how much it is optimized for PlaneBuffer geometries. FYI, your 300k loop is ~3 ms on jsfiddle.
EDIT 6: It looks like all meshes are treated the same in the raycast function, which means it won't smartly hunt out the area for a PlaneBufferGeometry. Looking at mesh.js line 266, we loop over the entire index array. I guess for a regular mesh you don't know which faces are where because it's a TIN, but a PlaneBufferGeometry could really use a bounding box/sphere rule, because your x/y positions are in a known order and only the z values are unknown. Last edit; the answer is next.
FYI: for maximum speed, you could use math; there is no need to use raycasting at all. https://brilliant.org/wiki/3d-coordinate-geometry-equation-of-a-plane/
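A minimal sketch of the point-to-plane distance that link describes, for a plane ax + by + cz + d = 0 and a point p:

// Distance from point p (a THREE.Vector3) to the plane
// a*x + b*y + c*z + d = 0.
function distanceToPlane(a, b, c, d, p) {
    return Math.abs(a * p.x + b * p.y + c * p.z + d) /
           Math.sqrt(a * a + b * b + c * c);
}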
The biggest issue resolved was filtering out faces of the PlaneBufferGeometry based on vertex index. With a PlaneBufferGeometry you can find a bounding sphere or rectangle that gives you exactly the faces you need to check; they are ordered in x/y in the index array, which filters out many of the faces. I did an indexOf of the bottom-left position and a lastIndexOf of the top-right corner position in the index array. RAYCASTING CHECKS EVERY FACE.
I also gave up on finding the distance from each face of the object and instead used a vertical path down the center of the object. This decreased the number of raycasts needed.
Lastly, I did my own face walk-through and used the triangle.closestPointToPoint() function on each face.
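A sketch of that face walk, assuming an indexed PlaneBufferGeometry on planeBufferMesh and a THREE.Vector3 named point to measure from; in practice you would only loop over the index range filtered out above, and transform into world space first if the mesh is not at the origin:

var geo = planeBufferMesh.geometry;
var pos = geo.attributes.position;
var index = geo.index.array;

var triangle = new THREE.Triangle();
var closest = new THREE.Vector3();
var bestPoint = new THREE.Vector3();
var bestDistSq = Infinity;

for (var i = 0; i < index.length; i += 3) {
    // Load the three corners of this face from the position attribute.
    triangle.a.fromBufferAttribute(pos, index[i]);
    triangle.b.fromBufferAttribute(pos, index[i + 1]);
    triangle.c.fromBufferAttribute(pos, index[i + 2]);

    // Closest point on this face to `point`, written into `closest`.
    triangle.closestPointToPoint(point, closest);

    var distSq = closest.distanceToSquared(point);
    if (distSq < bestDistSq) {
        bestDistSq = distSq;
        bestPoint.copy(closest);
    }
}
// bestPoint is the closest point on the surface;
// Math.sqrt(bestDistSq) is the distance to it.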
I ended up getting around 10 ms per point-to-surface calculation (single raycast) and around 100 ms per object (10 vertical slices) to the surface. I was seeing 2.5 seconds per raycast and 25+ seconds per object prior to optimization.

XTK Toolkit: the cube moves when it should only rotate

I'm a newbie in 3D computer graphics and have seen an odd thing.
I used the XTK Toolkit, which is great with DICOM. I added a cube to the scene and translated it far from the center (http://jsfiddle.net/64L47wtd/2/).
When the cube rotates, it looks like it is moving.
Is this a bug in XTK, or a problem in principle with 3D rendering?
window.onload = function() {
    // create and initialize a 3D renderer
    var r = new X.renderer3D();
    r.init();

    // create a cube
    cube = new X.cube();
    // skin it..
    cube.texture.file = 'http://x.babymri.org/?xtk.png';
    cube.transform.translateX(250);
    cube.transform.translateY(200);
    cube.transform.translateX(270);

    r.add(cube); // add the cube to the renderer
    r.render(); // ..and render it

    // add some animation
    r.onRender = function() {
        // rotation by 1 degree in X and Y directions
        cube.transform.rotateX(1);
        cube.transform.rotateY(1);
    };
};
You are missing that the cube is a compound object consisting of several vertices, edges and/or faces. As a compound object it uses a local coordinate system with the axes X, Y, Z; the actual cube is described internally using vertex coordinates relative to that local coordinate system.
By "translating" you adjust those relative vertex coordinates inside the local coordinate system. Rotation then still works on the axes of that local coordinate system, so the translated cube sweeps around the local origin instead of spinning in place.
Thus, this isn't an error in the X toolkit.
You might need to put the cube into another (probably fully transparent) container object, translate/move the container, but keep rotating the cube itself.
I tried to extend your fiddle accordingly but didn't succeed. Given the obvious intentions of the X Toolkit, this might be an intended limitation: it doesn't obviously support programmatic construction of complex scenes with multi-level object hierarchies through its API alone.
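For illustration, here is the container idea expressed in three.js terms (not XTK; scene, camera and renderer are assumed to exist), since the principle is the same: translate the parent, rotate the child.

// Translation lives on a parent container; rotation lives on the child.
// The cube then spins in place at its translated position.
var container = new THREE.Group();
var cube = new THREE.Mesh(
    new THREE.BoxGeometry(20, 20, 20),
    new THREE.MeshNormalMaterial()
);
container.add(cube);
container.position.set(250, 200, 270); // translate the parent
scene.add(container);

function animate() {
    requestAnimationFrame(animate);
    // rotate the child about its own (local) center
    cube.rotation.x += 0.0175; // ~1 degree per frame
    cube.rotation.y += 0.0175;
    renderer.render(scene, camera);
}
animate();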

Three.JS: Get position of rotated object

In Three.JS, I am capable of rotating an object about its origin. If I were to do this with a line, for instance, the line rotates, but the positions of its vertices are not updated with their new locations. Is there some way to apply the rotation matrix to the positions of the vertices to find their new locations? Say I rotate a line with points at (0,0,0) and (0,100,100) by 45° on the x, 20° on the y, and 100° on the z. How would I go about finding the actual position of the vertices with respect to the entire scene?
Thanks
Yes, 'entire scene' means world position.
THREE.Vector3 has an applyMatrix4() method, so you can do the same thing the shader does. To transform a vertex into world space:
yourPoint.applyMatrix4(yourObject.matrixWorld);
To transform that into camera (view) space, apply the inverse of the camera's world matrix next:
yourPoint.applyMatrix4(camera.matrixWorldInverse);
To get an actual screen position in -1 to 1 (normalized device coordinates):
yourPoint.applyMatrix4(camera.projectionMatrix);
You would access your point like this:
var yourPoint = yourObject.geometry.vertices[0]; // first vertex
Also, rather than doing this in three steps, you can combine the matrices. Untested, but something along these lines (note the right-to-left order: the model matrix is applied first, the projection last):
var neededPVMmatrix = new THREE.Matrix4().multiplyMatrices(camera.matrixWorldInverse, yourObject.matrixWorld);
neededPVMmatrix.multiplyMatrices(camera.projectionMatrix, neededPVMmatrix);
If you need a good tutorial on what this does under the hood, I recommend this.
Alteredq posted everything there is to know about three.js matrices here.
edit
One thing to note though: if you want just the rotation, not the translation, you need to use the upper 3x3 portion of the model's world matrix, which is the rotation matrix (assuming no scaling). This might be slightly more complicated. I forget exactly what three.js gives you, but I think the normalMatrix would do the trick; or you can convert your THREE.Vector3 to a THREE.Vector4 and set .w to 0, which prevents any translation from being applied.
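For example, the Vector4 trick might look like this (a sketch; the direction (0,100,100) is taken from the question's line):

// w = 0 marks this as a direction, so the matrix's translation
// column has no effect; only rotation (and scale) are applied.
var dir = new THREE.Vector4(0, 100, 100, 0);
dir.applyMatrix4(yourObject.matrixWorld);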
edit2
If you want to move the line point in your example, instead of applying it to the particle, apply it to a copy of the vertex:
var yourVertexWorldPosition = new THREE.Vector3().copy(geo.vertices[1]); // your second line point, whatever you set it to in your init function
yourVertexWorldPosition.applyMatrix4(line.matrixWorld); // transforms the vector into world space based on the line's world matrix
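Putting it together for the example in the question (a compact sketch; assumes the legacy THREE.Geometry with a .vertices array and a line object rotated as described):

// Make sure the world matrix reflects the latest rotation.
line.updateMatrixWorld();

// World-space position of the second vertex, i.e. where (0, 100, 100)
// ended up after the 45/20/100 degree rotations.
var worldPos = line.geometry.vertices[1].clone().applyMatrix4(line.matrixWorld);

// Optional: normalized device coordinates (-1 to 1) on screen.
// project() applies the view and projection matrices for you.
var ndc = worldPos.clone().project(camera);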
