ThreeJS | Detect when an object leaves another object - javascript

I'm making a ThreeJS project in which I have planes (Object3D) flying inside a sphere (Mesh).
I'm trying to detect the collision between a plane and the border of the sphere so I can delete the plane and make it reappear at another place inside the sphere.
My question is: how do I detect when an object leaves another object?
The code I have now:
detectCollision(plane, sphere) {
    var boxPlane = new THREE.Box3().setFromObject(plane);
    boxPlane.applyMatrix4(plane.matrixWorld);
    var boxSphere = new THREE.Box3().setFromObject(sphere);
    boxSphere.applyMatrix4(sphere.matrixWorld);
    return boxPlane.intersectsBox(boxSphere);
}
In my render function:
var collision = this.detectCollision(plane, this.radar);
if (collision == true) {
    console.log("the plane is inside the sphere");
} else {
    console.log("the plane is outside the sphere");
}
The problem is that while the planes are inside the sphere I get both true and false more or less all the time; only once all the planes have left the sphere do I get a consistent false.

Box3 is not what you want to use to calculate sphere and plane collisions because the box won't respect the sphere's curvature, nor will it follow the plane's rotation.
Three.js has a class THREE.Sphere that is closer to what you need. Keep in mind that this class is not the same as a Mesh with a SphereGeometry; it is more of a math helper that doesn't render to the canvas. You can use its .containsPoint() method for what you need:
var sphereCalc = new THREE.Sphere( center, radius );
var point = new THREE.Vector3(10, 4, -6);

detectCollision() {
    var collided = sphereCalc.containsPoint(point);
    if (collided) {
        console.log("Point is in sphere");
    } else {
        console.log("No collision");
    }
    return collided;
}
You'll have to apply transforms and check all 4 points of each plane in a loop. Notice there's a Sphere.intersectsPlane() method that sounds like it would do this for you, but it's not the same because it uses an infinite plane to calculate the intersection, not one with a defined width and height, so don't use this.
Edit:
To clarify, each plane typically has 4 verts, so you'll have to check each vertex in a for() loop to see if the sphere contains each one of the 4 points.
Additionally, the plane will probably have been moved and rotated, so its original vertex positions will have a transform matrix applied to them. I think you were already taking this into account in your example, but it would be something like:
point.copy(vertex1);
point.applyMatrix4(plane.matrixWorld);
sphereCalc.containsPoint(point);

point.copy(vertex2);
point.applyMatrix4(plane.matrixWorld);
sphereCalc.containsPoint(point);

// ... and so on
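For example, a minimal sketch of that check as a loop (assuming an older, Geometry-based three.js where the plane's corner vertices are exposed on plane.geometry.vertices; with BufferGeometry you would read them from the position attribute instead):
var tempPoint = new THREE.Vector3();

function planeIsInsideSphere(plane, sphereCalc) {
    var vertices = plane.geometry.vertices; // the 4 corners of the plane
    for (var i = 0; i < vertices.length; i++) {
        tempPoint.copy(vertices[i]);
        tempPoint.applyMatrix4(plane.matrixWorld); // local -> world space
        if (!sphereCalc.containsPoint(tempPoint)) {
            return false; // at least one corner has left the sphere
        }
    }
    return true; // all corners are still inside
}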

Related

How can I calculate normals of closed shape in three.js?

I am trying to write my own mesh importer for my own file format. The format contains no normal data, so I am trying to calculate the normals of a closed shape and apply them to the mesh.
My approach is to calculate a normal vector for every face of the geometry and cast a ray from the middle of each face in the direction of that normal. If the ray hits something (another face), the normal points inward and I flip it; if it hits nothing, I leave it as it is.
I wrote a function with this logic, but the normals don't change at all.
function calculateNormals(object) {
    for (var i = 0; i < object.geometry.faces.length; i++) {
        var vertices = object.geometry.vertices;
        var face = object.geometry.faces[i];
        var a = vertices[face.a];
        var b = vertices[face.b];
        var c = vertices[face.c];
        console.log(face.a + " " + face.b + " " + face.c + " " + face.normal.z);
        console.log(face);

        var edge0 = new THREE.Vector3(0, 0, 0);
        edge0.subVectors(a, b);
        var edge1 = new THREE.Vector3(0, 0, 0);
        edge1.subVectors(b, c);
        var planeNormal = new THREE.Vector3(0, 0, 0);
        planeNormal.crossVectors(edge0, edge1);
        // console.log(planeNormal);

        // Raycast from the middle point towards the plane normal direction.
        // If it hits anything it means the normal direction is wrong.
        var midPoint = calculateMiddlePoint([a, b, c]);
        var raycaster = new THREE.Raycaster(midPoint, planeNormal);
        var intersects = raycaster.intersectObjects([object]);
        if (intersects.length == 0) {
            console.log("Normal is true");
            face.normal = planeNormal;
        } else {
            console.log("Normal is wrong, you should flip normal direction, length: " + intersects.length);
            console.log("Old face normal");
            console.log(face.normal);
            var newNormal = new THREE.Vector3(-1 * planeNormal.x, -1 * planeNormal.y, -1 * planeNormal.z);
            console.log(newNormal);
            face.normal = newNormal;
            console.log("New face normal");
            console.log(face.normal);
            console.log(face);
        }
        object.geometry.faces[i] = face;
        // console.log(object.geometry.faces);
    }
    return object;
}
Matey makes a good point. The winding order is what determines the face normal, meaning which side is considered "front." Vertex normals are used for shading (with MeshPhongMaterial for example). If your vertex normals point in the opposite direction from your face normal, you'll end up with unintended results (anything from bad shading to a totally black face).
All that said, Geometry has helper functions for calculating normals.
Geometry.computeFaceNormals (based on winding order)
Geometry.computeFlatVertexNormals (sets a vertex normal to be the same as the associated face normal)
Geometry.computeVertexNormals (sets a vertex normal to the average of its surrounding face normals)
Once you've computed the normals, you could make a second pass to try and correct them, either by re-ordering the vertices (to correct the face normal), or by re-calculating the vertex normals yourself.
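For example, a rough sketch using those helpers on the legacy Geometry API from the question (object is the same mesh variable as above):
object.geometry.computeFaceNormals();     // face normals derived from winding order
object.geometry.computeVertexNormals();   // vertex normals averaged from adjacent faces
object.geometry.normalsNeedUpdate = true; // flag the renderer to re-upload the normals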

EdgesGeometry: raycasting not accurate

I'm using EdgesGeometry on a PlaneGeometry and it seems to create a larger hitbox for mouse events. This, however, isn't evident when using CircleGeometry. I have the following:
createPanel = function(width, height, widthSegments) {
    var geometry = new THREE.PlaneBufferGeometry(width, height, widthSegments);
    var edges = new THREE.EdgesGeometry( geometry );
    var panel = new THREE.LineSegments( edges,
        new THREE.LineBasicMaterial({ color: 0xffffff }));
    return panel;
}
var tile = createPanel(1.45, .6, 1);
Now I'm using a library called RayInput which does all the raycasting for me, but imagine I'm just using a normal raycaster for mouse events. Without the edges, using just the plane, the collision boundaries are accurate.
After adding EdgesGeometry, the vertical hitbox seems to have increased dramatically, so the object is detected as clicked when I'm not even clicking on it. The horizontal hitbox seems to have increased only slightly. I've never used EdgesGeometry before, so does anyone have a clue what is going on?
Thanks in advance.
If you are raycasting against THREE.Line or THREE.LineSegments, you should set the Line.threshold parameter to a value appropriate to the scale of your scene:
raycaster.params.Line.threshold = 0.1; // default is 1
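For example, a minimal sketch of raycasting against the panel created above (mouseNDC and camera are placeholders for your pointer position in normalized device coordinates and your own camera):
var raycaster = new THREE.Raycaster();
raycaster.params.Line.threshold = 0.1;             // pick tolerance in world units
raycaster.setFromCamera(mouseNDC, camera);
var hits = raycaster.intersectObject(tile, false); // tile is the LineSegments panel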
three.js r.114

Imported meshes can't be picked deterministically

I've got a grid of cylinder meshes created simply by
var tile = BABYLON.MeshBuilder.CreateCylinder("tile-" + i, { tessellation: 6, height: 0.1 }, scene);
then I have the following event callback
window.addEventListener("click", function (evt) {
    // try to pick an object
    var pickResult = scene.pick(evt.clientX, evt.clientY);
    if (pickResult.pickedMesh != null) {
        alert(pickResult.pickedMesh.name);
    }
});
Then a mouse-click on one of the tiles raises a message box with the correct tile name.
When I add some new meshes (a 3D model inside a .babylon file) by
var house;
BABYLON.SceneLoader.ImportMesh("", "../Content/",
    "house.babylon",
    scene,
    function (newMeshes) {
        house = newMeshes[0];
    });
To give a better picture: it's a house model built from four different meshes, placed over the grid of cylinder tiles.
It's displayed fine, but on mouse-click it far too often behaves as if the mesh weren't there at all, so pickResult.pickedMesh is either null or pickResult.pickedMesh.name points to the tile underlying my imported mesh at the point I've clicked.
Only roughly 5% of the mesh area responds properly to mouse-clicks (say, the middle of the roof or the middle of the walls).
I've tried setting a virtual (hidden) house.parent mesh, one not created by the mesh import, but that seems to be a dead end.
Is there a way to make scene.pick(evt.clientX, evt.clientY); respect the mesh hierarchy and consider all visible parts of the overlaid model?
Just for completeness, I'm working with the middle part of this 3D model (I removed the left and right houses from it).
EDIT: Demo on BabylonJS playground
You could try changing
var pickResult = scene.pick(evt.clientX, evt.clientY);
to
var pickResult = scene.pick(scene.pointerX, scene.pointerY);
since the evt coordinates are relative to the whole page, while scene.pointerX and scene.pointerY are relative to the rendering canvas.
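Putting that together, a minimal sketch of the adjusted click handler (same picking logic as in the question, only the coordinates change):
window.addEventListener("click", function () {
    // scene.pointerX / scene.pointerY are already relative to the render canvas
    var pickResult = scene.pick(scene.pointerX, scene.pointerY);
    if (pickResult.hit && pickResult.pickedMesh != null) {
        console.log(pickResult.pickedMesh.name);
    }
});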

Rotate object on specific axis anywhere in Three.js - including outside of mesh

Trying to rotate an object around any axis.
For example like a door hinge (on edge of object) or planet around the sun (outside of object).
The problem seems to be defining the axis. The unit vector below results in the axis remaining at the object's origin (centre), therefore identical to a standard rotation:
object2.rotateOnAxis(new THREE.Vector3(1,0,0), 0.01);
// same as
object1.rotation.x += 0.01;
See code example: JSFiddle
EDIT: Looking for a way that one can rotate around a pivot without using nested children. Rotating a child's parent provides an easy way to manipulate the child's pivot point, but modifying the pivot point is not viable.
Example below: if you wanted to rotate the cube in a figure-8 motion, it would be achievable with this method by changing the parent. But one would have to ensure that the new parent's position and orientation are precisely configured so the child jumps seamlessly between parents, and complex motions that do not repeat or loop would become very complicated. Instead, I would like to (and I will paraphrase the question title) rotate an object on a specific axis without using object nesting anywhere in the scene, including outside of the object's mesh.
See code example: JSFiddle with pivots
If you want to rotate an object around an arbitrary line in world space, you can use the following method. The line is specified by a 3D point and a direction vector (axis).
THREE.Object3D.prototype.rotateAroundWorldAxis = function () {

    // rotate object around axis in world space (the axis passes through point)
    // axis is assumed to be normalized
    // assumes object does not have a rotated parent

    var q = new THREE.Quaternion();

    return function rotateAroundWorldAxis( point, axis, angle ) {

        q.setFromAxisAngle( axis, angle );

        this.applyQuaternion( q );

        this.position.sub( point );
        this.position.applyQuaternion( q );
        this.position.add( point );

        return this;

    };

}();
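A minimal usage sketch, assuming mesh is the object to rotate and the axis passes through a chosen pivot point in world space (call it once per frame from the render loop):
var pivot = new THREE.Vector3(5, 0, 0);        // a point the rotation axis passes through
var axis = new THREE.Vector3(0, 1, 0);         // must be normalized
mesh.rotateAroundWorldAxis(pivot, axis, 0.01); // small angle per frame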
three.js r.85

Three.js: Updating Geometries vs Replacing

I have a scene with lots of objects using ExtrudeGeometry. Each of these needs to update every frame, where the shape being extruded is changing, along with the amount of extrusion. The shapes are being generated using d3's voronoi algorithm.
See example.
Right now I am achieving this by removing every object from the scene and redrawing them each frame. This is very costly and causing performance issues. Is there a way to edit each mesh/geometry instead of removing from the scene? Would this help with performance? Or is there a more efficient way of redrawing the scene?
I'd need to edit both the shape of the extrusion and the amount of extrusion.
Thanks for taking a look!
If you're not changing the number of faces, you can use morph targets http://threejs.org/examples/webgl_morphtargets.html
You should
Create your geometry
Clone the geometry and make your modifications to it, such as the maximum length of your geometry pillar
Set both geometries as morph targets to your base geometry, for example
baseGeo.morphTargets.push(
    { name: "targetName", vertices: [ modifiedVertexArray ] }
);
After that, you can animate the mesh using mesh.updateMorphTargets()
See http://threejs.org/examples/webgl_morphtargets.html
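As a rough sketch of the rest of the setup with the legacy Geometry-based morph-target workflow (the material type and influence value here are placeholders, not part of the original answer):
var material = new THREE.MeshLambertMaterial({ morphTargets: true }); // enable morphing on the material
var mesh = new THREE.Mesh(baseGeo, material);
mesh.updateMorphTargets();            // picks up the targets pushed onto baseGeo
mesh.morphTargetInfluences[0] = 0.5;  // blend halfway towards the modified geometry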
So I managed to come up with a way of not having to redraw the scene every time and it massively improved performance.
http://jsfiddle.net/x00xsdrt/4/
This is how I did it:
Created a "template geometry" with ExtrudeGeometry using a dummy
10 sided polygon.
As before, created a bunch of "points", this time assigning each
point one of these template geometries.
On each frame, iterated through each geometry, updating each vertex
to that of the new one (using the voronoi alg as before).
If there are extra vertices left over, "bunch" them up into a single point. (see http://github.com/mrdoob/three.js/wiki/Updates.)
Looking at it now, it's quite a simple process. Before, the thought of manipulating each vertex seemed otherworldly to me, but it's not actually too tricky with simple shapes!
Here's how I did the iteration. polyColumn is just a 2-item array with the same polygon in each item:
// Set the vertex index
var v = 0;

// Iterate over both top and bottom of poly
for (var p = 0; p < polyColumn.length; p++) {

    // Iterate over half the vertices
    for (var j = 0; j < verts.length / 2; j++) {

        // create correct z-index depending on top/bottom
        if (p == 1) {
            var z = point.extrudeAmount;
        } else {
            var z = 0;
        }

        // If there are still legitimate verts
        if (j < poly.length) {
            verts[v].x = poly[j][0];
            verts[v].y = poly[j][1];
            verts[v].z = z;

        // If we've got extra verts, bunch them up in the same place
        } else {
            verts[v].x = verts[v - 1].x;
            verts[v].y = verts[v - 1].y;
            verts[v].z = z;
        }

        v++;
    }
}

point.mesh.geometry.verticesNeedUpdate = true;
