Mesh becomes very angular after subtracting with ThreeCSG - javascript

I experience a problem when subtracting a mesh from another mesh using ThreeCSG. My main mesh is a ring and the mesh to subtract is a diamond. Before the operation the scene looks like this: Mesh fine. But after subtracting the meshes the ring becomes angular: Mesh Broken. I apply the same material / shading as before. Here is the code I use:
var ring_bsp = new ThreeBSP(ring);
var stone_bsp = new ThreeBSP(stone);
var subtract_bsp = ring_bsp.subtract( stone_bsp );
var result = subtract_bsp.toMesh( ringMaterial );
result.geometry.computeVertexNormals();
result.material.needsUpdate = true;
result.geometry.buffersNeedUpdate = true;
result.geometry.uvsNeedUpdate = true;
result.scale.x = result.scale.y = result.scale.z = 19;
scene.remove(ring);
scene.add(result);
Update one:
If I remove "result.geometry.computeVertexNormals();" the result looks even worse: link.
Update two:
I created a jsfiddle with a minimal case.
Update three:
After looking some more into the problem and Wilt's last update, I saw that after I use ThreeBSP the vertices are messed up. You can see this very well in this fiddle.
Update four:
The problem seems to be within the "fromGeometry / toGeometry" functions, as I get the same broken mesh even if I don't do any subtraction at all.

It looks like (some of) your vertex normals get lost in translation (converting your geometry to CSG and back to Three.js). You should check the source code to see where this goes wrong.
UPDATE 1:
I looked into the source code of ThreeCSG.js and it seems there is a bug on line 48.
It should be:
vertex = new ThreeBSP.Vertex( vertex.x, vertex.y, vertex.z, face.vertexNormals[1], uvs );
The index for the vertexNormals should be 1 instead of 2.
Maybe that bug causes the wrong export result.
UPDATE 2:
Try updating the vertexNormals of the geometry before you convert to CSG:
var geometry = ring.geometry;
geometry.computeFaceNormals();
geometry.computeVertexNormals();
Note: you need to call computeFaceNormals() first for the correct result.
UPDATE 3:
In the conversion of faces from Three.js geometries to CSG geometries, the ThreeBSP.Polygon.prototype.classifySide method checks whether the vertex of the adjacent face is in front of, in back of, or coplanar with the current face. If the point is coplanar, the CSG face will be defined as a face with four vertex points. Because of this process, some of your THREE.Face3 faces get converted to 4-point CSG faces. When they are later translated back to THREE.Face3, the face vertexNormals end up different from their initial values.
The vertex is classified FRONT, BACK or COPLANAR using an EPSILON value to compare the vertex normal with the face normal. If the difference is too small, the vertex is considered coplanar. By increasing the EPSILON value in your ThreeBSP library you can control this precision.
If you set EPSILON to 10 your triangles will never be considered coplanar and the conversion result will be correct.
So at line 5 of your ThreeBSP library set:
EPSILON = 10,
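Putting the updates together, a minimal sketch of the full workflow (assuming the EPSILON patch above and the same ring, stone and ringMaterial objects from the question):

// Sketch only: recompute normals before converting to CSG (see UPDATE 2),
// then again on the mesh that comes back from the CSG conversion.
ring.geometry.computeFaceNormals();
ring.geometry.computeVertexNormals();

var ring_bsp = new ThreeBSP(ring);
var stone_bsp = new ThreeBSP(stone);
var result = ring_bsp.subtract(stone_bsp).toMesh(ringMaterial);

result.geometry.computeFaceNormals();
result.geometry.computeVertexNormals();
scene.remove(ring);
scene.add(result);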

Related

THREE.js raycasting very slow against single > 500k poly (faces) object, line intersection with globe

In my project I have a player walk around a globe. The globe is not just a sphere; it has mountains and valleys, so I need the player's z position to change. For this I'm casting a single ray from the player's position against a single object (the globe), and I get the point where they intersect and change the player's position accordingly. I'm only raycasting when the player moves, not on every frame.
For a complex object it takes forever. It takes ~200 ms for an object with ~1M polys (faces) (a sphere with 1024x512 segments). Does raycasting test against every single face?
Is there a traditionally fast way to achieve this in THREE, like some acceleration structure (octree? BVH? To be honest, from my Google searches I don't seem to find such a thing included in THREE), or some other thinking-out-of-the-box (no raycasting) method?
var dir = g_Game.earthPosition.clone();
var startPoint = g_Game.cubePlayer.position.clone();
var directionVector = dir.sub(startPoint.multiplyScalar(10));
g_Game.raycaster.set(startPoint, directionVector.clone().normalize());
var t1 = new Date().getTime();
var rayIntersects = g_Game.raycaster.intersectObject(g_Game.earth, true);
if (rayIntersects[0]) {
var dist = rayIntersects[0].point.distanceTo(g_Game.earthPosition);
dist = Math.round(dist * 100 + Number.EPSILON) / 100;
g_Player.DistanceFromCenter = dist + 5;
}
var t2 = new Date().getTime();
console.log(t2-t1);
Thank you in advance
Do not use the three.js Raycaster.
Consider Ray.js, which offers the function intersectTriangle(a, b, c, backfaceCulling, target).
Suggested optimizations:
If the player starts from some known position ⇒ you must know their initial height − no need to raycast (or just do a one-time full-mesh slow intersection).
If the player moves in small steps ⇒ the next raycast will most likely intersect the same face as before.
Optimization #1 − remember the previous face, and raycast against it first.
If the player does not jump ⇒ the next raycast will most likely intersect a face adjacent to the face the player was on before.
Optimization #2 − build up a cache so that, given a face index, you can retrieve its adjacent faces in O(1) time (a sketch follows below).
This cache may be loaded from a file, if your planet is not generated in real time.
So with my approach, on each move you do one O(1) read from the cache and raycast against 1-6 faces.
Win!
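A minimal sketch of Optimization #2, assuming an indexed BufferGeometry and a ray already transformed into the mesh's local space (buildAdjacencyCache and intersectFace are hypothetical helper names):

import * as THREE from 'three';

// Map each vertex index to the faces that use it, then derive face adjacency.
// Face f uses index entries [3f, 3f + 1, 3f + 2].
function buildAdjacencyCache(geometry) {
  const index = geometry.index.array;
  const faceCount = index.length / 3;
  const facesPerVertex = new Map();
  for (let f = 0; f < faceCount; f++) {
    for (let k = 0; k < 3; k++) {
      const v = index[3 * f + k];
      if (!facesPerVertex.has(v)) facesPerVertex.set(v, []);
      facesPerVertex.get(v).push(f);
    }
  }
  // adjacent[f] = all faces sharing at least one vertex with face f
  const adjacent = [];
  for (let f = 0; f < faceCount; f++) {
    const neighbours = new Set();
    for (let k = 0; k < 3; k++) {
      for (const g of facesPerVertex.get(index[3 * f + k])) {
        if (g !== f) neighbours.add(g);
      }
    }
    adjacent.push([...neighbours]);
  }
  return adjacent;
}

// Test a single face with Ray.intersectTriangle instead of Raycaster.
function intersectFace(ray, geometry, faceIndex, target) {
  const index = geometry.index.array;
  const pos = geometry.attributes.position;
  const a = new THREE.Vector3().fromBufferAttribute(pos, index[3 * faceIndex]);
  const b = new THREE.Vector3().fromBufferAttribute(pos, index[3 * faceIndex + 1]);
  const c = new THREE.Vector3().fromBufferAttribute(pos, index[3 * faceIndex + 2]);
  return ray.intersectTriangle(a, b, c, false, target); // null if no hit
}

On each move you would first call intersectFace for the previous face, then for each entry of adjacent[previousFace], and only fall back to a full-mesh raycast if all of those miss.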
For a complex object it takes forever. It takes ~200ms for an object with ~1m polys (faces) (1024x512 segments sphere). Does raycasting cast against every single face ?
Out of the box THREE.js does check every triangle when performing a raycast against a mesh, and there are no acceleration structures built into THREE.
I've worked with others on the three-mesh-bvh package (github, npm) to help address this problem, though, which may help you get up to the speeds you're looking for. Here's how you might use it:
import * as THREE from 'three';
import { MeshBVH, acceleratedRaycast } from 'three-mesh-bvh';
THREE.Mesh.prototype.raycast = acceleratedRaycast;
// ... initialize the scene...
globeMesh.geometry.boundsTree = new MeshBVH(globeMesh.geometry);
// ... initialize raycaster...
// Optional. Improves the performance of the raycast
// if you only need the first collision
raycaster.firstHitOnly = true;
const intersects = raycaster.intersectObject(globeMesh, true);
// do something with the intersections
There are some caveats mentioned in the README, so keep those in mind (the mesh index is modified, only non-animated BufferGeometry is supported, etc.). And there's still some memory optimization that could be done, but there are some tweakable options to help tune that.
I'll be interested to hear how this works for you! Feel free to leave feedback in the issues on how to improve the package, as well. Hope that helps!
I think you should pre-render the height map of your globe into a texture, assuming your terrain is not dynamic. Read all of it into a typed array, and then whenever your player moves, you only need to back-project their coordinates into that texture, query it, offset and multiply, and you should get what you need in O(1) time.
It's up to you how you generate that height map. Actually, if you have a bumpy globe, then you should probably start with a height map in the first place, and use it in your vertex shader to render the globe (with the input sphere being perfectly smooth). Then you can use the same height map to query the player's Z.
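A minimal sketch of that O(1) query, assuming an equirectangular height map already read into a typed array (heightData, mapWidth and mapHeight are hypothetical names) and a player position expressed relative to the globe's center:

import * as THREE from 'three';

// Sketch only: back-project the player's position onto an equirectangular
// height map and read the height directly from the typed array.
function sampleHeight(playerPos, heightData, mapWidth, mapHeight) {
  const p = playerPos.clone().normalize(); // direction from globe center
  const u = 0.5 + Math.atan2(p.z, p.x) / (2 * Math.PI);
  const v = 0.5 - Math.asin(p.y) / Math.PI;
  const x = Math.min(mapWidth - 1, Math.floor(u * mapWidth));
  const y = Math.min(mapHeight - 1, Math.floor(v * mapHeight));
  return heightData[y * mapWidth + x]; // raw value; offset/scale as needed
}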
Edit: Danger! This may cause someone's death one day. The edge case I see here is that the nearest collision will not be seen because searchRange will not contain the nearest triangle but will contain the second-nearest one, returning it as the closest. I.e. a robotic arm may stop near the torso instead of stopping at the arm right in front of it.
Anyway, here's a hack for when you are raycasting not too far from the previous result, i.e. during consecutive mousemove events. It will not work for completely random rays.
A mesh raycast supports drawRange to limit how many triangles will be searched. Also, each raycast result comes with a faceIndex telling you which triangle was hit. If you're continuously raycasting, i.e. on mousemove, or a laser is linearly scanning a mesh, you can first search the area near* the previous hit.
* Triangles that are close in the data may look like neighbours, but it's not guaranteed they are sorted in any way. Still, it's very likely that triangles close in the data are also close in space.
let lastFaceIndex = null;
const searchRange = 2000 * 3; // number of index entries to search around the last hit

function raycast(mesh, raycaster) {
  // limited search around the previous hit
  if (lastFaceIndex !== null) {
    const drawRange = mesh.geometry.drawRange;
    drawRange.start = Math.max(0, lastFaceIndex * 3 - searchRange);
    drawRange.count = searchRange * 2;
    const intersects = raycaster.intersectObjects([mesh]);
    drawRange.start = 0;
    drawRange.count = Infinity;
    if (intersects.length) {
      lastFaceIndex = intersects[0].faceIndex;
      return intersects[0];
    }
  }
  // regular full search
  const intersects = raycaster.intersectObjects([mesh]);
  if (!intersects.length) {
    lastFaceIndex = null;
    return null;
  }
  lastFaceIndex = intersects[0].faceIndex;
  return intersects[0];
}

THREE.js raycasting performance

I am trying to find the closest distance from a point to a large, complex mesh along a plane, over a range of directions:
for (var zDown in verticalDistances) {
    var myIntersect = {};
    for (var theta = Math.PI / 2 - 0.5; theta < Math.PI / 2 + 0.5; theta += 0.3) {
        var rayDirection = new THREE.Vector3(
            Math.cos(theta),
            Math.sin(theta),
            0
        ).transformDirection(object.matrixWorld);
        // console.log(rayDirection);
        _raycaster.set(verticalDistances[zDown].minFacePoint, rayDirection, 0, 50);
        // console.time('raycast: ');
        var intersect = _raycaster.intersectObject(planeBufferMesh);
        // console.timeEnd('raycast: '); // this is huge!!! ~ 2,300 ms
        // console.log(_raycaster);
        // console.log(intersect);
        if (intersect.length == 0) continue;
        if ((!('distance' in myIntersect)) || myIntersect.distance > intersect[0].distance) {
            myIntersect.distance = intersect[0].distance;
            myIntersect.point = intersect[0].point.clone();
        }
    }
    // do stuff
}
I get great results with mouse hover on the same surface, but when performing this loop the raycasting takes over 2 seconds per cast. The only thing I can think of is that the BackSide of the DoubleSide material is much slower?
Also, I notice that as I space my verticalDistances[zDown].minFacePoint values farther apart, the raycasts start to speed up (500 ms/cast). So as the distance between verticalDistances[i].minFacePoint and verticalDistances[i+1].minFacePoint increases, the raycaster performs faster.
I would go the route of using an octree, but the mouse hover event works extremely well on the exact same planeBuffer. Is this a material side issue that could be solved by loading two FrontSide meshes pointing in opposite directions?
Thank You!!!!
EDIT: It is not a front/back side issue. I ran my raycast down the front and the back side of the plane buffer geometry with the same result. Live example coming.
EDIT 2: Working example here. Performance is a little better than the original case but still too slow. I need to move the cylinder in real time. I can optimize a bit by finding certain things, but mouse hover is instant. When you look at the console times, the first two (500 ms) are the results I am getting for all casts.
EDIT 3: Added a mouse hover event that performs the same as the other raycasters. I am not getting the results in my working code that I get in this sample, though. The results I get for all raycasts are the same as for the first 1 or 2 in the sample, around 500 ms. If I could get it down to 200 ms I could target the items I am looking for and do far less raycasting. I am completely open to suggestions on better methods. Is an octree the way to go?
raycast: : 467.27001953125ms
raycast: : 443.830810546875ms
EDIT 4: @pailhead Here is my plan.
1. Find the closest grid vertex to the point on the plane. I can do a scan of the vertices in the x/y direction and then calculate the minimum distance.
2. Once I have that closest vertex, I know that my closest point has to be on a face containing that vertex. So I will find all the faces with that vertex using object.mesh.index.array (a sketch follows below) and calculate the plane-to-point distance for each face. It seems like a raycast should be a bit smarter than a full scan when intersecting a mesh, and at least cull points based on max distance? @WestLangley, any suggestions?
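A minimal sketch of step 2 of this plan, assuming an indexed BufferGeometry (facesContainingVertex is a hypothetical helper name):

// Collect the faces that contain a given vertex index by scanning the
// geometry's index array; face f owns index entries [3f, 3f + 1, 3f + 2].
function facesContainingVertex(geometry, vertexIndex) {
  const index = geometry.index.array;
  const faces = [];
  for (let i = 0; i < index.length; i++) {
    if (index[i] === vertexIndex) faces.push(Math.floor(i / 3));
  }
  return faces;
}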
EDIT 5:
@pailhead Thank you for the help, it's appreciated. I have really simplified my example (<200 lines with tons more comments). Is the raycaster checking every face? It would be much quicker to pick out the faces within the raycasting range specified in the constructor and do a face-to-point calculation. There is no way this should be looping over every face to raycast. I'm going to write my own PlaneBufferGeometry raycast function tonight, after taking a peek at the source code and checking octree. I would think that if we have a range in the raycaster constructor, we could pull out the plane buffer vertices within that range, ignoring z, and then just raycast those or do a point-to-plane calculation. I guess I could just create a "mini" surface from that bounding circle and then raycast against it. But the fact that the max distance (the manual uses "far") doesn't affect the speed of the raycaster makes me wonder how much it is optimized for PlaneBufferGeometry. FYI, your 300k loop is ~3 ms on jsfiddle.
EDIT 6: It looks like all meshes are treated the same in the raycast function. That means it won't smartly hunt out the area for a PlaneBufferGeometry. Looking at Mesh.js line 266, we loop over the entire index array. I guess for a regular mesh you don't know which faces are where because it's a TIN, but a PlaneBufferGeometry could really use a bounding box/sphere rule, because your x/y positions are in a known order and only the z values are unknown. Last edit; the answer will be next.
FYI: for maximum speed, you could use math; there is no need to use raycasting. https://brilliant.org/wiki/3d-coordinate-geometry-equation-of-a-plane/
The biggest issue resolved was filtering out faces of the PlaneBufferGeometry based on vertex index. With a PlaneBufferGeometry you can find a bounding sphere or rectangle that gives you the faces you need to check. They are ordered in x/y in the index array, which filters out many of the faces. I did an indexOf on the bottom-left position and a lastIndexOf on the top-right corner position in the index array. RAYCASTING CHECKS EVERY FACE.
I also gave up on finding the distance from each face of the object and instead used a vertical path down the center of the object. This decreased the raycasts needed.
Lastly, I did my own face walk and used the triangle.closestPointToPoint() function on each face (a sketch follows below).
I ended up getting around 10 ms per point-to-surface calculation (single raycast) and around 100 ms per object (10 vertical slices) to surface. I was seeing 2.5 seconds per raycast and 25+ seconds per object prior to optimization.
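A minimal sketch of that face walk, assuming an indexed PlaneBufferGeometry and a [firstFace, lastFace] candidate range already narrowed down via the index array as described above (closestPointOnFaces is a hypothetical helper name):

import * as THREE from 'three';

const tri = new THREE.Triangle();
const closest = new THREE.Vector3();

// Walk the candidate faces and keep the closest point found on any of them.
function closestPointOnFaces(point, geometry, firstFace, lastFace) {
  const index = geometry.index.array;
  const pos = geometry.attributes.position;
  const best = { distanceSq: Infinity, point: new THREE.Vector3() };
  for (let f = firstFace; f <= lastFace; f++) {
    tri.a.fromBufferAttribute(pos, index[3 * f]);
    tri.b.fromBufferAttribute(pos, index[3 * f + 1]);
    tri.c.fromBufferAttribute(pos, index[3 * f + 2]);
    tri.closestPointToPoint(point, closest);
    const d = point.distanceToSquared(closest);
    if (d < best.distanceSq) {
      best.distanceSq = d;
      best.point.copy(closest);
    }
  }
  return best;
}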

How to improve merging by computing new faces in ThreeJS

I've been learning ThreeJS for 4 months, applying it to a personal project.
Yesterday, I achieved building a stronghold using most of the ThreeJS geometries and some CSG tricks. The result looks fine, but I like precision and my geometry is kind of a mess (mostly after the CSG subtractions).
[Question] I wonder if there's a known way to merge two geometries and replace their old faces with newly computed faces? There is a JSFiddle to illustrate my question.
[Edit: Updated the fiddle with a fourth and a fifth mesh]
// FIGURE 1 : Basic merged geometry
var figure1 = new THREE.Geometry();
figure1.merge(box1Geometry);
figure1.merge(box2Geometry);
figure1.merge(box3Geometry);
figure1.computeFaceNormals();
figure1.computeVertexNormals();
var mesh = new THREE.Mesh(figure1, material);
scene.add(mesh);
// FIGURE 2 : Merged geometry with merged vertices
var figure2 = figure1.clone();
figure2.mergeVertices();
figure2.computeFaceNormals();
figure2.computeVertexNormals();
mesh = new THREE.Mesh(figure2, material);
// FIGURE 3 : Expected merged geometry (less faces)
var figure3 = new THREE.Geometry();
figure3.vertices.push(
    // manually create the vertices here
);
figure3.faces.push(
    // manually create the faces here
);
figure3.computeBoundingSphere();
figure3.computeFaceNormals();
figure3.computeVertexNormals();
mesh = new THREE.Mesh(figure3, material);
scene.add(mesh);
Three ways to get the same mesh
The first mesh on the left is a basic merged geometry composed of three BoxGeometries.
The second mesh in the middle is exactly the same mesh after calling the mergeVertices() function. This saves 4 vertices. But the faces inside the mesh are still there. The result not only looks bad (to me), it also causes issues when texturing or lighting these parts (face normals aren't where they should be).
The last mesh on the right is the mesh I would expect after merging. Look at the faces below the middle box; they only cover what they should.
The fact that it leads to texture and lighting issues (look at the JSFiddle; it lights the inner parts of the mesh) makes me think there must be a simple and well-known way to solve this, and I'm just feeling like a big noob.
This issue is directly linked to another question I'll ask if I don't find (or understand) any answer on SO (and maybe it'll help you understand why I want to do this): is there a way to apply a texture to this merged geometry without creating a unique material for each face of each geometry (because of the different UV mappings and mesh sizes)? I can't imagine doing it manually for each face of my huge stronghold...
[EDIT] Writing my question, I just realized that ThreeCSG and its union() function do the trick. But I don't like the mess of vertices it creates. Even for basic geometry like these boxes, ThreeCSG will create strange vertices and faces on parts of the geometry where everything was already fine.
I updated the JSFiddle with a fourth mesh (CSG). In this simple use case, we can see that there are 2 more vertices and 2 more faces than expected. It seems that it kept the old faces (look at the wireframe!).
Is ThreeCSG union the best option for now?
[EDIT 2] Fiddle updated with a native CSG geometry. It gives the result I expected, with only 20 vertices and 32 faces. Thanks to Wilt for this idea. The issue is that hard-coding the polygons takes too long (take a look at the code for only three boxes). I have no JSON file from which to load and generate the polygons; I only have ThreeJS geometries. So I'll look at the conversion between ThreeJS and ThreeCSG geometries, and I hope to understand why the conversion gives a bad result.

Face normals on dynamic geometry

I'm trying to create a vertex animation for a mesh.
Just imagine a vertex shader, but in software instead of hardware.
Basically what I do is apply a transformation matrix to each vertex. The mesh looks OK but the normals don't look good at all.
I've tried using both computeVertexNormals() and computeFaceNormals(), but it just doesn't work.
The following code is the one I used for the animation (initialVertices are the initial vertices generated by the CubeGeometry):
for (var i = 0; i < mesh1.geometry.vertices.length; i++)
{
    var vtx = initialVertices[i].clone();
    var dist = vtx.y;
    var rot = clock.getElapsedTime() - dist * 0.02;
    matrix.makeRotationY(rot);
    vtx.applyMatrix4(matrix);
    mesh1.geometry.vertices[i] = vtx;
}
mesh1.geometry.verticesNeedUpdate = true;
Here are two examples, one working correctly with CanvasRenderer:
http://kile.stravaganza.org/lab/js/dynamic/canvas.html
and one that doesn't work in WebGL:
http://kile.stravaganza.org/lab/js/dynamic/webgl.html
Any idea what I'm missing?
You are missing several things.
(1) You need to set the ambient reflectance of the material. It is reasonable to set it equal to the diffuse reflectance, or color.
var material = new THREE.MeshLambertMaterial( {
color:0xff0000,
ambient:0xff0000
} );
(2) If you are moving vertices, you need to update centroids, face normals, and vertex normals -- in the proper order. See the source code.
mesh1.geometry.computeCentroids();
mesh1.geometry.computeFaceNormals();
mesh1.geometry.computeVertexNormals();
(3) When you are using WebGLRenderer, you need to set the required update flags:
mesh1.geometry.verticesNeedUpdate = true;
mesh1.geometry.normalsNeedUpdate = true;
Tip: it is a good idea to avoid new and clone() in tight loops.
three.js r.63

Three.js outlines

Is it possible to have a black outline on my 3D models with three.js?
I would like graphics that look like Borderlands 2 (toon shading + black outlines).
I'm sure I came in late. Let's hope this solves someone's question later.
Here's the deal: you don't need to render everything twice; the overhead actually is not substantial. All you need to do is duplicate the mesh and set the duplicate mesh's material side to THREE.BackSide. No double passes. You will be rendering two meshes instead, with most of the outline's geometry culled by WebGL's backface culling.
Here's an example:
var scene = new THREE.Scene();
//Create main object
var mesh_geo = new THREE.BoxGeometry(1, 1, 1);
var mesh_mat = new THREE.MeshBasicMaterial({color : 0xff0000});
var mesh = new THREE.Mesh(mesh_geo, mesh_mat);
scene.add(mesh);
//Create outline object
var outline_geo = new THREE.BoxGeometry(1, 1, 1);
//Notice the second parameter of the material
var outline_mat = new THREE.MeshBasicMaterial({color : 0x00ff00, side: THREE.BackSide});
var outline = new THREE.Mesh(outline_geo, outline_mat);
//Scale the object up to have an outline (as discussed in previous answer)
outline.scale.multiplyScalar(1.5);
scene.add(outline);
For more details on backface culling, check out: http://en.wikipedia.org/wiki/Back-face_culling
The above approach works well if you want to add an outline to objects, without adding a toon shader, and thus losing "realism".
Toon shading by itself supports edge detection. They developed the 'cel' shader in Borderlands to achieve this effect.
In cel shading, devs can either use the object-duplication method (done at the [low] pipeline level) or use image-processing filters for edge detection. This is the point at which the performance tradeoff between the two techniques is weighed.
More info on cel: http://en.wikipedia.org/wiki/Cel_shading
Cheers!
Yes, it is possible, but not in a simple out-of-the-box way. For toon shading there are even shaders included in /examples/js/ShaderToon.js.
For the outlines, I think the most commonly suggested method is to render in two passes. The first pass renders the models in black, at a slightly larger scale. The second pass is at normal scale and with the toon shaders. This way you'll see the larger black models as an outline. It's not perfect, but I don't think there's an easy way out. You might have more success searching for "three.js hidden line rendering"; while it's a different look, a somewhat similar method is used to achieve it.
It's an old question, but here is what I did.
I created an outlined cel-shader for my CG course. Unfortunately it takes 3 rendering passes. I'm currently trying to figure out how to remove one pass.
Here's the idea:
1) Render a normal-depth image to a texture.
In the vertex shader you do what you normally do: transform the position to screen space and the normal to screen space.
In the fragment shader you calculate the depth of the pixel and then create the normal color with the depth as the alpha value:
float ndcDepth = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) / (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
2) Render the scene onto a texture with cel shading. I changed the scene override material.
3) Make a quad, render both textures onto the quad, and have an orthographic camera look at it. The cel-shaded texture is just rendered on the quad as-is, but on the normal-depth texture you run some edge detection, and with that you know when a pixel needs to be black (an edge). A sketch of this composite pass follows below.
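A minimal sketch of step 3 in three.js terms, assuming the two passes above were rendered into WebGLRenderTargets (celTarget and normalDepthTarget) and that edgeVertexShader / edgeFragmentShader hold the GLSL, with the edge detection running on the normal-depth texture:

import * as THREE from 'three';

// Full-screen quad + orthographic camera for the composite pass.
const postScene = new THREE.Scene();
const postCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

const quadMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tCel: { value: celTarget.texture },                 // cel-shaded pass
    tNormalDepth: { value: normalDepthTarget.texture }, // normal + depth pass
    resolution: { value: new THREE.Vector2(window.innerWidth, window.innerHeight) }
  },
  vertexShader: edgeVertexShader,     // pass-through quad vertex shader
  fragmentShader: edgeFragmentShader  // samples tCel, blackens edges found in tNormalDepth
});

postScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), quadMaterial));

// After rendering the scene into celTarget and normalDepthTarget:
// renderer.setRenderTarget(null);
// renderer.render(postScene, postCamera);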
