How to improve merging by computing new faces in ThreeJS - javascript

I've been learning ThreeJS for 4 months now, applying it to a personal project.
Yesterday, I managed to build a stronghold using most of the ThreeJS geometries and some CSG tricks. The result looks fine, but I like precision and my geometry is kind of a mess (mostly after CSG subtractions).
[Question] I wonder if there's a known way to merge two geometries and replace their old faces with newly computed faces? There is a JSFiddle to illustrate my question.
[Edit: Updated the fiddle with a fourth and a fifth mesh]
// FIGURE 1 : Basic merged geometry
var figure1 = new THREE.Geometry();
figure1.merge(box1Geometry);
figure1.merge(box2Geometry);
figure1.merge(box3Geometry);
figure1.computeFaceNormals();
figure1.computeVertexNormals();
var mesh = new THREE.Mesh(figure1, material);
scene.add(mesh);
// FIGURE 2 : Merged geometry with merged vertices
var figure2 = figure1.clone();
figure2.mergeVertices();
figure2.computeFaceNormals();
figure2.computeVertexNormals();
mesh = new THREE.Mesh(figure2, material);
// FIGURE 3 : Expected merged geometry (less faces)
var figure3 = new THREE.Geometry();
figure3.vertices.push(
// manually create vertices here
);
figure3.faces.push(
// manually create the faces here
);
figure3.computeBoundingSphere();
figure3.computeFaceNormals();
figure3.computeVertexNormals();
mesh = new THREE.Mesh(figure3, material);
scene.add(mesh);
Three ways to get the same mesh
The first mesh on the left is a basic merged geometry composed of three BoxGeometry instances.
The second mesh in the middle is exactly the same mesh after calling the mergeVertices() function. This saves 4 vertices. But the faces inside the mesh are still there. This not only looks bad (to me), but also causes issues when texturing or lighting these parts (the face normals aren't where they should be).
The last mesh on the right is the mesh I would expect after merging. Look at the faces below the middle box: they cover only what they should.
The fact that this leads to texture and lighting issues (look at the JSFiddle: it lights the inner parts of the mesh) makes me think there must be a simple, well-known way to solve it and I'm just feeling like a big noob.
This issue is directly linked to another question I'll ask if I don't find (or understand) any answer on SO (and maybe it will help you understand why I want to do this): is there a way to apply a texture to this merged geometry without creating a unique material for each face of each geometry (because of the different UV mappings and mesh sizes)? I can't imagine doing it manually for every face of my huge stronghold...
[EDIT] While writing my question, I realized that ThreeCSG and its union() function do the trick. But I don't like the mess of vertices it creates. Even for basic geometries like these boxes, ThreeCSG creates strange vertices and faces on parts of the geometry that were already fine.
I updated the JSFiddle with a fourth mesh (CSG). In this simple use case, we can see that there are 2 more vertices and 2 more faces than expected. It seems that it kept the old faces (look at the wireframe!).
Is ThreeCSG union the best option for now?
[EDIT 2] Fiddle updated with a native CSG geometry. It gives the result I expected, with only 20 vertices and 32 faces. Thanks to Wilt for this idea. The issue is that hard-coding the polygons takes too long (take a look at the code for only three boxes). I have no JSON file to load and generate the polygons from, I only have ThreeJS geometries. So I'll look at the conversion between ThreeJS and ThreeCSG geometries and try to understand why the conversion gives a bad result.
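For reference, here is a minimal sketch of the ThreeCSG union round-trip discussed above, assuming the common ThreeBSP build of ThreeCSG (the exact constructor and method names may differ between forks):
// Hedged sketch: union the three boxes with ThreeBSP, then convert back.
// Assumes box1Mesh/box2Mesh/box3Mesh are THREE.Mesh instances and that the
// loaded ThreeCSG fork exposes ThreeBSP(mesh), union() and toMesh(material).
var bsp1 = new ThreeBSP(box1Mesh);
var bsp2 = new ThreeBSP(box2Mesh);
var bsp3 = new ThreeBSP(box3Mesh);
var unionBSP = bsp1.union(bsp2).union(bsp3);
var unionMesh = unionBSP.toMesh(material);
unionMesh.geometry.computeFaceNormals();
unionMesh.geometry.computeVertexNormals();
scene.add(unionMesh);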

Related

resizing individual models in a single geometry

I have a 3D model of my home town. I would like to use real time data to change the height of the buildings. In my first try, I loaded the buildings as individual meshes and called scene.add(buildingMesh) during setup.
var threeObjects = [];
var buildingMesh = new THREE.Mesh(geometry, material);
threeObjects.push(buildingMesh);
$.each(threeObjects, function(i, buildingMesh) {
    buildingMesh.rotation.x += -3.1415 * 0.5; // rotate -90° around the x axis
    buildingMesh.castShadow = true;
    buildingMesh.receiveShadow = true;
    scene.add(buildingMesh);
});
This is too slow, as my dataset consists of roughly 10,000 buildings.
So I took the approach of merging all the meshes' geometries into a single geometry and wrapping that in one mesh to add to the scene:
singleGeometry.merge(buildingMesh.geometry, buildingMesh.matrix); //in a loop
var faceColorMaterial = new THREE.MeshLambertMaterial( { color: 0xffffff, vertexColors: THREE.VertexColors } );
combinedMesh = new THREE.Mesh(singleGeometry, faceColorMaterial);
scene.add(combinedMesh);
Just as a proof of concept, I'm trying to change the height of a building when I click it. Alas, this is not working.
By adding a new id field, I can get a reference to the faces and vertices and change the color of the building, but I cannot, for the life of me, get them to change height.
In my first version, I would just use something like:
buildingMesh.scale.z=2;
But as I have no meshes anymore, I'm kinda lost.
Can anybody help?
disclaimer: I'm new to Three.js, so my question might be stupid...hope it's not :)
If you combine all of your buildings into a single geometry, you're obliterating everything that makes the buildings distinct from each other. Now you can't tell building A from building B because it's all one big geometry, and geometry at its basic level is literally just arrays of points and polygons with no way of telling any of it apart. So I think it's the wrong approach to merge it all together.
Instead, you should take advantage of three.js's efficient scene graph architecture. You had the right idea at first to just add all the buildings to a single root Object3D ("scene"). That way you get all the efficiencies of the scene graph but can still individually address the buildings.
To make it load more efficiently, instead of creating the scene graph in three.js every time you load the app, you should do it ahead of time in a 3D modeling program. Build the parent/child relationships there, and export it as a single model containing all of the buildings as child nodes. When you import it into three.js, it should retain its structure.
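As a minimal sketch of that idea (the node name and the buildingsRoot object are assumptions for illustration):
// Hedged sketch: keep buildings as individually addressable children.
// Assumes the exported model arrives as an Object3D whose children are
// named building nodes (e.g. "building_0042").
scene.add(buildingsRoot);
// Later, driven by real-time data: find one building and change its height.
var building = buildingsRoot.getObjectByName('building_0042');
if (building) {
    building.scale.z = 2; // same per-mesh scaling as in the original version
}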
JCD: That was not quite the question I asked.
But anyhow, I found a solution to the problem.
What I did was merge all the geometries, but instead of relying on the standard cloning done by geometry.merge(), I used a shallow reference. That made it possible to use the references in threeObjects to find the correct building, resize that part of the geometry using Mesh.scale, and then set geometry.verticesNeedUpdate = true;
For further optimization, I split the model into 5 different geometries and only updated the geometry that contained the building.
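A hedged sketch of that bookkeeping, assuming you record each building's vertex range while merging (the names below are hypothetical, not from the original answer):
// In the merge loop: remember which slice of the merged geometry
// belongs to each building.
var ranges = {}; // hypothetical map: buildingId -> [start, end)
var start = singleGeometry.vertices.length;
singleGeometry.merge(buildingMesh.geometry, buildingMesh.matrix);
ranges[buildingId] = [start, singleGeometry.vertices.length];
// Later: change one building's height by scaling its vertices directly.
function setBuildingHeight(id, scaleZ) {
    var range = ranges[id];
    for (var i = range[0]; i < range[1]; i++) {
        singleGeometry.vertices[i].z *= scaleZ;
    }
    singleGeometry.verticesNeedUpdate = true;
}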

Distorted UVs on a single object in my Three.js scene

I've been putting together a 3D model of a house, and right now I'm stuck with yet another aggravating roadblock like those three.js has gotten me accustomed to.
I'm creating my scene in Maya and using the OBJ exporter to write obj and mtl files that I then import into three.js. I have about 9 objects in my scene, ungrouped, children only to the world, history deleted, with texture maps (ambient occlusion and lighting baked in) assigned to them via shading maps.
I've had little luck actually using the mtl file, so I just copied my texture maps, loaded them separately, and created materials from them in three.js.
Now, all of these objects look just fine in the browser, except for the simplest one, the walls and floor object. This is what the object looks like in Maya:
As you can see, a rather simple mesh with minimal polys looking beautiful in Maya.
I've learned that when I export objects into obj files, only one UV channel is supported, so I copy my UVs into the default channel and delete all other UV channels before exporting. This is the UV map:
But when I assign this material in the browser, I get a strange texture distortion like so:
It's like the UVs are all over the place. I would seriously doubt that my approach is anywhere close to being on target if it weren't for those 8 other (more complex, mind you) objects which all display fine, including part of the wall that I've cut out of the problematic piece, which is part of the bathroom.
Does anyone have a clue as to how I can troubleshoot this? I've tried exporting straight to js from Maya, but I'm having even more problems with that approach. I've tried converting the obj file into js using the packaged browser-based converter. I've spent days on this and am not making any progress.
Here's some relevant code.
scene = new THREE.Scene();
renderer = new THREE.WebGLRenderer({ antialias: true });
var wallTexture = THREE.ImageUtils.loadTexture("obj/final_walls.jpg");
var wallMaterial = new THREE.MeshLambertMaterial({ color: 0x929EAC, map: wallTexture });
var manager = new THREE.LoadingManager();
var loader = new THREE.OBJMTLLoader(manager);
loader.load('obj/wallOnly.obj', 'obj/wallOnly.mtl', function (object) {
    // assign our own material to the wall/floor child
    object.children[2].material = wallMaterial;
    floorplan.add(object);
    camera.lookAt(object.position); // lookAt expects a position, not a mesh
});
Please help!!
OMG! After banging my head against the wall, I finally found the solution!
I discovered that the problem was not so much a distortion of the texture as a random swapping of UV faces. That's right! For some reason, the WebGL renderer randomly swapped some faces of the object with others.
Out of total coincidence I turned my mesh into quads instead of triangles and, voilà, that fixed everything. QUADS!!! I wasted friggin' 3 solid days on triangles!!!

Rendering lots of similar but not identical meshes on scene

We have 1 geometry that gets attached to every mesh in our scene.
var geometry = new three.PlaneGeometry(1, 1, 1, 1);
Everything has a texture that we generate and cache to create a new material and a mesh for each object.
this.material = new three.MeshLambertMaterial({
transparent: true,
emissive: 0xffffff
});
// get the cached texture
this.material.map = this.getTexture(this.attributes);
this.shape = new three.Mesh(geometry, this.material);
Afterwards we add these shapes into various Object3Ds in order to move large groups of shapes around.
This all works great on nicer devices and up to 5000 circles, but then our framerate starts to drop. On weaker devices this is dramatically slower even with say 100 meshes. We know that merging geometries can speed things up; however, we only have a single geometry that is shared. Is it possible to merge meshes? Does that even make sense? Note: These shapes are interactive (movable/clickable). What are our options?
Other notes:
We are using Ejecta on mobile devices, which is great at low mesh counts, but not so great past 100 meshes. I don't think it's Ejecta's fault, but rather our lack of knowledge about how to optimize! Also, even on desktop, our app shows CPU usage that we find suspicious.
Figured it out! We went from being able to render 5k things at 60fps to 100k things at approx 40fps.
We followed what most people are saying out there about merging meshes, but it took some experimentation to really understand what was happening and getting multiple textures/materials to work.
var sceneGeometry = new three.Geometry(); // merge target (declaration assumed, not shown in the original)
for (var i = 0; i < 100000; i++) {
    // creates a mesh from the geometry and material from the question and returns an object
    circle = ourCircleFactory.create();
    circle.shape.updateMatrix();
    sceneGeometry.merge(circle.shape.geometry, circle.shape.matrix, circle.cachedMaterialIndex);
}
var finalMesh = new three.Mesh(sceneGeometry, new three.MeshFaceMaterial(cachedMaterials));
scene.add(finalMesh);
That code will create 1 geometry per cached material. cachedMaterialIndex is something we created to cache textures and indicate which material to use.
This code will likely create 1 geometry per combination of material and geometry. E.g., if you have 5 geometries and they are interchangeable with 5 materials, you will get 25 geometries. It seems that it doesn't matter how many objects you have on screen. Note: we were getting 15fps with 5000 geometries, so I think this is a fairly cheap solution.
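The answer doesn't show how cachedMaterialIndex and cachedMaterials are maintained; here is a hypothetical sketch of that caching (all names are assumptions, not from the original answer):
// Hypothetical material cache: one entry per distinct texture.
var cachedMaterials = [];
var materialIndexByTexture = {};
function getCachedMaterialIndex(textureKey, texture) {
    if (materialIndexByTexture[textureKey] === undefined) {
        var material = new three.MeshLambertMaterial({
            transparent: true,
            emissive: 0xffffff,
            map: texture
        });
        materialIndexByTexture[textureKey] = cachedMaterials.length;
        cachedMaterials.push(material);
    }
    return materialIndexByTexture[textureKey];
}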

Mesh becomes very angular after subtracting with ThreeCSG

I experience a problem when subtracting a mesh from another mesh using ThreeCSG. My main mesh is a ring and the mesh to subtract is a diamond. Before the process the scene looks like this: Mesh fine. But after subtracting the meshes the ring becomes angular: Mesh broken. I apply the same material / shading as before. Here is the code I use:
var ring_bsp = new ThreeBSP(ring);
var stone_bsp = new ThreeBSP(stone);
var substract_bsp = ring_bsp.subtract( stone_bsp );
var result = substract_bsp.toMesh( ringMaterial );
result.geometry.computeVertexNormals();
result.material.needsUpdate = true;
result.geometry.buffersNeedUpdate = true;
result.geometry.uvsNeedUpdate = true;
result.scale.x = result.scale.y = result.scale.z = 19;
scene.remove(ring);
scene.add(result);
Update one:
If I remove "result.geometry.computeVertexNormals();" the result looks even worse: link.
Update two:
I created a jsfiddle with a minimal case
Update three:
After looking some more into the problem and Wilt's last update, I saw that after I use ThreeBSP the vertices are messed up. You can see this very well in this fiddle.
Update four:
The problem seems to be within the "fromGeometry / toGeometry" functions, as I get the same broken mesh even if I don't do any subtraction at all.
It looks like (some of) your vertex normals get lost during translation (translating your geometry to CSG and translating back to Three.js). You should check out the source code to see where this goes wrong.
UPDATE 1:
I looked into the source code of ThreeCSG.js it seems there is a bug on line 48.
It should be:
vertex = new ThreeBSP.Vertex( vertex.x, vertex.y, vertex.z, face.vertexNormals[1], uvs );
The index for the vertexNormals should be 1 instead of 2.
Maybe that bug causes the wrong export result.
UPDATE 2:
Try updating the vertexNormals of the geometry before you convert to CSG:
var geometry = ring.geometry;
geometry.computeFaceNormals();
geometry.computeVertexNormals();
Note: you need to call computeFaceNormals() first for the correct result.
UPDATE 3:
In the conversion of faces from Three.js geometries to CSG geometries, the ThreeBSP.Polygon.prototype.classifySide method checks whether the vertex of the adjacent face is in front of, behind, or coplanar with the current face. If the point is coplanar, the CSG face will be defined as a face with four vertex points. Because of this process, some of your THREE.Face3 faces get converted to 4-point CSG faces. When these are later translated back to THREE.Face3, the face vertexNormals differ from their initial values.
The vertex is classified FRONT, BACK or COPLANAR using an EPSILON value to compare the vertex normal with the face normal. If the difference is too small the Vertex is considered coplanar. By increasing the EPSILON value in your ThreeBSP library you can control the precision.
If you set EPSILON to 10 your triangles will never be considered coplanar and the conversion result will be correct.
So at line 5 of your ThreeBSP library set:
EPSILON = 10,

Three.js outlines

Is it possible to have a black outline on my 3D models with three.js?
I would like graphics that look like Borderlands 2 (toon shading + black outlines).
I'm sure I came in late; let's hope this solves someone's question later.
Here's the deal: you don't need to render everything twice, and the overhead actually is not substantial. All you need to do is duplicate the mesh and set the duplicate mesh's material side to THREE.BackSide. No double passes. You will be rendering two meshes instead, with most of the outline's geometry culled by WebGL's back-face culling.
Here's an example:
var scene = new THREE.Scene();
//Create main object
var mesh_geo = new THREE.BoxGeometry(1, 1, 1);
var mesh_mat = new THREE.MeshBasicMaterial({color : 0xff0000});
var mesh = new THREE.Mesh(mesh_geo, mesh_mat);
scene.add(mesh);
//Create outline object
var outline_geo = new THREE.BoxGeometry(1, 1, 1);
//Notice the second parameter of the material
var outline_mat = new THREE.MeshBasicMaterial({color : 0x00ff00, side: THREE.BackSide});
var outline = new THREE.Mesh(outline_geo, outline_mat);
//Scale the object up to have an outline (as discussed in previous answer)
outline.scale.multiplyScalar(1.5);
scene.add(outline);
For more details on backface culling, check out: http://en.wikipedia.org/wiki/Back-face_culling
The above approach works well if you want to add an outline to objects without adding a toon shader, and thus without losing "realism".
Toon shading by itself supports edge detection; the developers of Borderlands built a 'cel' shader to achieve this effect.
In cel shading, devs can either use the object-duplication method (done at the [low] pipeline level) or use image-processing filters for edge detection. This is where the performance tradeoff between the two techniques comes in.
More info on cel: http://en.wikipedia.org/wiki/Cel_shading
Cheers!
Yes, it is possible, but not in a simple out-of-the-box way. For toon shading there are even shaders included in /examples/js/ShaderToon.js.
For the outlines, I think the most commonly suggested method is to render in two passes. The first pass renders the models in black at a slightly larger scale; the second pass is at normal scale and with the toon shaders. This way you'll see the larger black models as an outline. It's not perfect, but I don't think there's an easy way out. You might have more success searching for "three.js hidden line rendering": while that is a different look, a somewhat similar method is used to achieve it.
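A minimal sketch of that two-pass idea, assuming an existing scene, camera, renderer, and a single mesh (the scale factor and the depth-only clear are illustrative choices, not a definitive recipe):
// Pass 1: draw the model slightly enlarged and flat black.
var blackMaterial = new THREE.MeshBasicMaterial({ color: 0x000000 });
function renderWithOutline() {
    renderer.autoClear = false;
    renderer.clear();
    var originalMaterial = mesh.material;
    mesh.material = blackMaterial;
    mesh.scale.multiplyScalar(1.05);
    renderer.render(scene, camera);
    // Pass 2: normal scale with the real (toon) material, drawn on top.
    renderer.clear(false, true, false); // clear depth only, keep the black silhouette
    mesh.material = originalMaterial;
    mesh.scale.multiplyScalar(1 / 1.05);
    renderer.render(scene, camera);
}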
It's an old question, but here is what I did.
I created an outlined cel-shader for my CG course. Unfortunately it takes 3 rendering passes; I'm currently trying to figure out how to remove one pass.
Here's the idea:
1) Render a normal-depth image to a texture.
In the vertex shader you do what you normally do: transform the position and the normal to screen space.
In the fragment shader you calculate the depth of the pixel and then output the normal as the color, with the depth as the alpha value:
// depth of this fragment in normalized device coordinates, then clip space
float ndcDepth = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) / (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
2) Render the scene to a texture with cel shading. I changed the scene override material for this.
3) Make a quad, render both textures onto it, and have an orthographic camera look at it. The cel-shaded texture is rendered on the quad as-is, but on the normal-depth texture you run some edge detection; with that you know when a pixel needs to be black (an edge).
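A hypothetical sketch of the three passes (the helper names and shaders are assumptions, and the render-target/uniform syntax varies between three.js versions):
var normalDepthTarget = new THREE.WebGLRenderTarget(width, height);
var celTarget = new THREE.WebGLRenderTarget(width, height);
// Pass 1: scene with the normal-depth material into a texture.
scene.overrideMaterial = normalDepthMaterial; // assumed ShaderMaterial from step 1
renderer.render(scene, camera, normalDepthTarget);
// Pass 2: scene with the cel-shading material into a texture.
scene.overrideMaterial = celMaterial; // assumed cel ShaderMaterial
renderer.render(scene, camera, celTarget);
scene.overrideMaterial = null;
// Pass 3: full-screen quad whose shader shows the cel texture and blackens
// pixels where edge detection on the normal-depth texture finds an edge.
var quadScene = new THREE.Scene();
var quadCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
var quadMaterial = new THREE.ShaderMaterial({
    uniforms: {
        tCel: { type: 't', value: celTarget },
        tNormalDepth: { type: 't', value: normalDepthTarget }
    },
    vertexShader: quadVertexShader,    // assumed: pass-through positions/UVs
    fragmentShader: edgeFragmentShader // assumed: samples tNormalDepth neighbours
});
quadScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), quadMaterial));
renderer.render(quadScene, quadCamera);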
