Three.js raycaster.intersectObjects gives wrong results [duplicate] - javascript

I am trying to get collision detection for meshes I lay out on my Three.js scene. I am confused about how the Raycaster really works and whether I am using it correctly.
Here is a fiddle to describe what I have a problem with:
// Add cube at y = 40
geometry = new THREE.CubeGeometry(20, 20, 20);
material = new THREE.MeshNormalMaterial();
mesh = new THREE.Mesh(geometry, material);
mesh.position.setY(40);
scene.add(mesh);
// Add ray
var origin = new THREE.Vector3(50, 0, 0),
    direction = new THREE.Vector3(-1, 0, 0),
    ray = new THREE.Raycaster(origin, direction),
    collisionResults = ray.intersectObjects([mesh]);
if (collisionResults.length !== 0) {
    alert('Ray collides with mesh. Distance :' + collisionResults[0].distance);
}
// Add arrow to show the ray
scene.add(new THREE.ArrowHelper(direction, origin, 50, 0x000000));
Not working:
http://jsfiddle.net/FredricBerling/LwfPL/1/
Working:
http://jsfiddle.net/FredricBerling/LwfPL/3/
Basically the fiddle lays out a cube, and from the point (50, 0, 0) I shoot a ray in a given direction. The problem is that it reports a hit even when it shouldn't.
I added an ArrowHelper to show where I expect the Raycaster to shoot its ray.
From other tests it seems like the direction used by the Raycaster differs from the one used by the ArrowHelper; the Raycaster appears to shoot the ray towards (0, 0, 0) of the scene. I am confused.
EDIT: Rob gave the answer. I needed to make sure the meshes were rendered first so that their world matrices were applied. The fiddle has been updated with the corrected code, which tests the Raycaster as expected.

The apparent false positive you're seeing is due to the fact that even though you have set the box's position, its world transformation matrix has not been updated yet. This normally only happens just before rendering.
If you move the raycast test to after the first render (or call updateMatrixWorld() on the object manually), you won't get a hit.
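A minimal sketch of that fix, reusing the cube from the question's code (updateMatrixWorld() forces the same update that rendering would otherwise perform):
mesh.updateMatrixWorld(true); // or: renderer.render(scene, camera); before raycasting

var raycaster = new THREE.Raycaster(
    new THREE.Vector3(50, 0, 0),  // origin
    new THREE.Vector3(-1, 0, 0)   // direction (already normalized)
);
var hits = raycaster.intersectObjects([mesh]);
console.log(hits.length); // 0 -- the ray travels along y = 0 and the cube sits at y = 40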

Related

Dynamically adding faces to a three.js geometry

I'm working on some WebGL software for generating 3D models and am relying on dynamic geometry. I've observed some very bizarre behavior that I've been able to isolate in this jsfiddle.
It seems that any new faces added after a geometry instance has been added to the scene will not be rendered (properly). In wireframe mode (as in the example), the new geometry is not rendered at all. When using textured materials, I also observed that sometimes new geometry is not rendered depending on the angle of the camera.
Here's a video of that in action.
Back to the jsfiddle: I used an existing three.js code sample (misc_exporter_obj.html) as a scaffold, but on line 7 I made a generic function to add a triangle to the geometry. The addGeometry function is called on startup, and if you uncomment line 36 you can see what the expected result should have been:
var material = new THREE.MeshBasicMaterial( { wireframe : true} );
geometry = new THREE.Geometry();
addTriangle(-50, -50, 50, -50, 50, 50);
//addTriangle(-50, -50, -50, 50, 50, 50); // UNCOMMENT TO TEST WHAT FINAL OUTPUT SHOULD LOOK LIKE.
scene.add( new THREE.Mesh( geometry, material ) );
And as per the threejs guide on how to update things, lines 43-47 attempt to add a new triangle when you click the "transform triangle" button by setting the verticesNeedUpdate and elementsNeedUpdate flags:
function addTriangleFace() {
    addTriangle(-50, -50, -50, 50, 50, 50);
    geometry.verticesNeedUpdate = true;
    geometry.elementsNeedUpdate = true;
}
Am I doing this wrong? Or should I submit a bug report?
Thanks.
Disappearing Mesh Update:
I may have discovered the cause of the weird behavior that was causing my mesh to be erased based on camera orientation. This answer suggests that Three.js may have thought that the mesh was not inside the camera's frustum.
I'm guessing the new vertices were not being considered when trying to determine whether the object was in the frustum, so I just disabled culling since the object being drawn is the main object in the scene.
You want to add faces to an existing geometry.
Since buffers can't be resized, the best solution is to switch to BufferGeometry, preallocate sufficiently-sized buffers, and set the drawRange. See this SO answer. This answer, too.
If you add vertices, you will need to recompute the bounding sphere for frustum culling to work correctly.
geometry.computeBoundingSphere();
Or, as you said, you can disable frustum culling:
mesh.frustumCulled = false;
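A rough sketch of that approach, assuming the scene and wireframe material from the question (the buffer size and this re-implementation of addTriangle are illustrative, not taken from the fiddle):
var MAX_TRIANGLES = 1000; // hypothetical upper bound, preallocated up front

var geometry = new THREE.BufferGeometry();
var positions = new Float32Array(MAX_TRIANGLES * 3 * 3); // 3 vertices * xyz per triangle
geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setDrawRange(0, 0); // draw nothing until triangles are added

var mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ wireframe: true }));
scene.add(mesh);

var triangleCount = 0;

function addTriangle(x1, y1, x2, y2, x3, y3) {
    // Write the new triangle (z = 0) into the preallocated buffer.
    positions.set([x1, y1, 0, x2, y2, 0, x3, y3, 0], triangleCount * 9);
    triangleCount++;
    geometry.setDrawRange(0, triangleCount * 3); // draw one more triangle
    geometry.attributes.position.needsUpdate = true;
    geometry.computeBoundingSphere(); // keep frustum culling correct
}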
three.js r.91

EdgesGeometry: raycasting not accurate

I'm using EdgesGeometry on PlaneGeometry, and it seems to create a larger hitbox in mouse events. This, however, isn't evident when using CircleGeometry. I have the following:
createPanel = function (width, height, widthSegments) {
    var geometry = new THREE.PlaneBufferGeometry(width, height, widthSegments);
    var edges = new THREE.EdgesGeometry(geometry);
    var panel = new THREE.LineSegments(edges, new THREE.LineBasicMaterial({
        color: 0xffffff
    }));
    return panel;
};
var tile = createPanel(1.45, .6, 1);
Now I'm using a library called RayInput which does all the raycasting for me, but imagine I'm just using a normal raycaster for mouse events. Without the edges, using just the plane, the collision boundaries are accurate.
After adding EdgesGeometry, the vertical hitbox seems to have increased dramatically, so the object is detected as clicked when I'm not even clicking on it. The horizontal hitbox seems to have increased only slightly. I've never used EdgesGeometry before, so does anyone have a clue what is going on?
Thanks in advance.
If you are raycasting against THREE.Line or THREE.LineSegments, you should set the Line.threshold parameter to a value appropriate to the scale of your scene:
raycaster.params.Line.threshold = 0.1; // default is 1
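For example, a minimal sketch of a click raycast against the tile created above, assuming a plain camera and full-window renderer instead of RayInput (the threshold value is just an example for a panel of roughly 1.45 x 0.6 units):
var raycaster = new THREE.Raycaster();
raycaster.params.Line.threshold = 0.05; // tighten the line hitbox to the panel's scale

var mouse = new THREE.Vector2();

window.addEventListener('click', function (event) {
    // Convert the click position to normalized device coordinates (-1 to +1).
    mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
    raycaster.setFromCamera(mouse, camera);

    var hits = raycaster.intersectObject(tile);
    if (hits.length > 0) {
        console.log('Panel clicked at', hits[0].point);
    }
});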
three.js r.114

three.js - Invert camera rotation matrix

I have a scene with objects and a camera controlled by a trackball. When I add a new object to the root object, I want it in the orientation it would have had before the camera moved around. For example, if you don't rotate the camera, a torus will show up with the hole facing the screen, the ring in the x,y screen plane.
I tried to apply the inverse matrix of the camera, but that doesn't work.
var m = THREE.Matrix4()
m.getInverse(camera.matrixWorld)
obj.setRotationFromMatrix(m)
What am I missing?
The solution was simply to apply the camera rotation:
obj.setRotationFromMatrix(camera.matrixWorld)
The object is then facing the camera.
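For example, a minimal sketch of that fix, using the torus from the question's description (the geometry parameters are illustrative):
// Create a torus and orient it the way the camera currently sees the world,
// so its hole faces the screen no matter how the trackball has rotated the view.
var torus = new THREE.Mesh(
    new THREE.TorusGeometry(10, 3, 16, 100),
    new THREE.MeshNormalMaterial()
);
torus.setRotationFromMatrix(camera.matrixWorld);
scene.add(torus);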
You need to create the matrix with new, and reorder the wording of the rotation call slightly. Try this:
var m = new THREE.Matrix4();
m.getInverse( camera.matrixWorld );
obj.rotation.setFromRotationMatrix(m);

three.js Cube Geometry - how to update parameters?

Possibly a dumb question, but here goes. Three.js geometries have 'parameters' fields associated with them; see the box geometry here...
BoxGeometry parameters
I am trying to update these parameters like this...
var nodeSize = 10;
var geometry = new THREE.CubeGeometry(nodeSize, nodeSize, nodeSize);
mesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial({side:THREE.DoubleSide}));
scene.add(mesh);
mesh.geometry.parameters.depth=20;
But of course, the geometry remains unchanged. Is there a way of updating the geometry by editing these parameters?
fiddle here https://jsfiddle.net/kn3owveg/2/
Any help appreciated!
parameters.depth is only used at geometry construction time; modifying it afterwards has no effect. You can think of it as read-only.
Use the example at BoxGeometry and the GUI on the right to see how to achieve what you want.
Gaitat is right: you can't change a geometry by changing its parameters.
There is another solution, though: scaling your cube.
function setSize(myMesh, xSize, ySize, zSize) {
    var scaleFactorX = xSize / myMesh.geometry.parameters.width;
    var scaleFactorY = ySize / myMesh.geometry.parameters.height;
    var scaleFactorZ = zSize / myMesh.geometry.parameters.depth;
    myMesh.scale.set(scaleFactorX, scaleFactorY, scaleFactorZ);
}
...
setSize(mesh, 10, 10, 20);
jsfiddle example
Technically, scaling only creates the illusion of an updated geometry. I would say a better approach would be to reassign the geometry value of your mesh to a new geometry.
mesh.geometry = new THREE.CubeGeometry(newSize, newSize, newSize)
With this approach you can update any aspect of the geometry, including depth segments, for example. This is especially useful when working with non-cube geometries like cylinders or spheres.
Here is a full rework of your original code using this approach; really only the last line has changed:
var nodeSize = 10;
var geometry = new THREE.CubeGeometry(nodeSize, nodeSize, nodeSize);
mesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial({side:THREE.DoubleSide}));
scene.add(mesh);
mesh.geometry = new THREE.CubeGeometry(nodeSize, nodeSize, 20);
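One caveat worth adding (not part of the original answer): the old geometry is not freed automatically when you reassign mesh.geometry, so if you swap geometries frequently you may want to dispose of the old one first. A minimal sketch:
// Release the GPU resources held by the old geometry before replacing it.
mesh.geometry.dispose();
mesh.geometry = new THREE.CubeGeometry(nodeSize, nodeSize, 20);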

Overlaying texture onto STL loaded mesh

I'm looking for an efficient method of overlaying a texture to cover a mesh. I'm not an expert, more a novice, when it comes to three-dimensional mapping/objects. Below shows how I would like the end product to look.
When attempting to apply the texture with the following code, the end result looks similar to the image below. I have not done any UV mapping, and I believe my answer may lie there. As you can see from the image, it roughly takes the general shade of the picture, but I get the impression that the texture is being drawn between each vertex of the model rather than across its entirety.
var textureLoader = new THREE.TextureLoader();
var texture = textureLoader.load('resource/images/materials/Mahogany.jpg');
var STLLoader = new THREE.STLLoader();
STLLoader.load('test.stl', function (geometry1) {
    var meshMaterial = new THREE.MeshBasicMaterial({ map: texture });
    var mesh = new THREE.Mesh(geometry1, meshMaterial);
    mesh.scale.set(1, 1, 1);
    mesh.position.set(5, 20, 80);
    scene.add(mesh);
});
The cube has the correct texturing, whereas my STL-loaded mesh does not.
Please ignore the rotation of the object in the above picture; I will move on to unioning my objects together once I have fixed my texturing issues.
I'm fairly new at asking questions on here, so please do comment to help me expand my question if it's too general or not precise enough. Thank you.
You may use
THREE.MeshPhongMaterial()
instead of
THREE.MeshBasicMaterial()
THREE.MeshPhongMaterial() will wrap the material around the outside of the object, so the material follows the object's curvature.
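A rough sketch of that suggestion, reusing the loader code from the question (the directional light is an addition of mine, since MeshPhongMaterial is only visible when the scene is lit; whether the texture maps cleanly will still depend on the geometry's UVs):
// Hypothetical rework of the question's loader callback with a Phong material.
var textureLoader = new THREE.TextureLoader();
var texture = textureLoader.load('resource/images/materials/Mahogany.jpg');

// Phong shading needs at least one light in the scene to show up.
var light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(0, 1, 1);
scene.add(light);

var loader = new THREE.STLLoader();
loader.load('test.stl', function (geometry1) {
    var meshMaterial = new THREE.MeshPhongMaterial({ map: texture });
    var mesh = new THREE.Mesh(geometry1, meshMaterial);
    mesh.position.set(5, 20, 80);
    scene.add(mesh);
});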
