Three.js: Updating Geometries vs Replacing

I have a scene with lots of objects using ExtrudeGeometry. Each of these needs to update every frame: the shape being extruded is changing, along with the amount of extrusion. The shapes are generated using d3's Voronoi algorithm.
See example.
Right now I am achieving this by removing every object from the scene and redrawing them each frame. This is very costly and is causing performance issues. Is there a way to edit each mesh/geometry instead of removing it from the scene? Would this help with performance? Or is there a more efficient way of redrawing the scene?
I'd need to edit both the shape of the extrusion and the amount of extrusion.
Thanks for taking a look!

If you're not changing the number of faces, you can use morph targets: http://threejs.org/examples/webgl_morphtargets.html
You should:
1. Create your geometry.
2. Clone the geometry and make your modifications to it, such as the maximum length of your geometry pillar.
3. Set the modified geometry's vertices as a morph target on your base geometry, for example:
baseGeo.morphTargets.push(
    { name: "targetName", vertices: modifiedVertexArray } // an array of THREE.Vector3
);
After that, you can animate the mesh by calling mesh.updateMorphTargets() and then driving mesh.morphTargetInfluences each frame. See http://threejs.org/examples/webgl_morphtargets.html
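For concreteness, here is a minimal sketch of that flow against the legacy THREE.Geometry API this answer assumes; baseGeo, tallGeo, and the cylinder dimensions are illustrative, and an existing renderer, scene, and camera are assumed:
var baseGeo = new THREE.CylinderGeometry(1, 1, 1, 8);

// Clone and modify: stretch the pillar to its maximum height.
var tallGeo = baseGeo.clone();
for (var i = 0; i < tallGeo.vertices.length; i++) {
    tallGeo.vertices[i].y *= 10;
}

// Register the modified vertices as a morph target on the base geometry.
baseGeo.morphTargets.push({ name: "tall", vertices: tallGeo.vertices });

var material = new THREE.MeshLambertMaterial({ morphTargets: true });
var mesh = new THREE.Mesh(baseGeo, material);
mesh.updateMorphTargets();

// Each frame, blend between the base shape (influence 0) and the target (1).
function animate(time) {
    requestAnimationFrame(animate);
    mesh.morphTargetInfluences[0] = 0.5 + 0.5 * Math.sin(time / 1000);
    renderer.render(scene, camera);
}
requestAnimationFrame(animate);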

So I managed to come up with a way of not having to redraw the scene every time and it massively improved performance.
http://jsfiddle.net/x00xsdrt/4/
This is how I did it:
Created a "template geometry" with ExtrudeGeometry using a dummy
10 sided polygon.
As before, created a bunch of "points", this time assigning each
point one of these template geometries.
On each frame, iterated through each geometry, updating each vertex
to that of the new one (using the voronoi alg as before).
If there are extra vertices left over, "bunch" them up into a single point. (see http://github.com/mrdoob/three.js/wiki/Updates.)
Looking at it now, it's quite a simple process. Before, the thought of manipulating each vertex seemed otherworldly to me, but it's not actually too tricky with simple shapes!
Here's how I did the iteration; polyColumn is just a two-item array with the same polygon in each item:
// Set the vertex index
var v = 0;

// Iterate over both top and bottom of the polygon column
for (var p = 0; p < polyColumn.length; p++) {

    // Iterate over half the vertices
    for (var j = 0; j < verts.length / 2; j++) {

        // Use the correct z value depending on top/bottom
        var z = (p == 1) ? point.extrudeAmount : 0;

        // If there are still legitimate verts
        if (j < poly.length) {
            verts[v].x = poly[j][0];
            verts[v].y = poly[j][1];
            verts[v].z = z;

        // If we've got extra verts, bunch them up in the same place
        } else {
            verts[v].x = verts[v - 1].x;
            verts[v].y = verts[v - 1].y;
            verts[v].z = z;
        }

        v++;
    }
}

point.mesh.geometry.verticesNeedUpdate = true;

Related

THREE.js raycasting very slow against single > 500k poly (faces) object, line intersection with globe

In my project I have a player walking around a globe. The globe is not just a sphere; it has mountains and valleys, so I need the player's z position to change. For this I'm raycasting a single ray from the player's position against a single object (the globe), getting the point where they intersect, and changing the player's position accordingly. I'm only raycasting when the player moves, not on every frame.
For a complex object it takes forever. It takes ~200ms for an object with ~1M polys (faces) (a 1024x512-segment sphere). Does raycasting test against every single face?
Is there a traditional fast way to achieve this in THREE, like some acceleration structure (octree? BVH? To be honest, from my Google searches I can't seem to find such a thing included in THREE) or some other thinking-outside-the-box (no raycasting) method?
var dir = g_Game.earthPosition.clone();
var startPoint = g_Game.cubePlayer.position.clone();
var directionVector = dir.sub(startPoint.multiplyScalar(10));
g_Game.raycaster.set(startPoint, directionVector.clone().normalize());

var t1 = new Date().getTime();
var rayIntersects = g_Game.raycaster.intersectObject(g_Game.earth, true);
if (rayIntersects[0]) {
    var dist = rayIntersects[0].point.distanceTo(g_Game.earthPosition);
    dist = Math.round(dist * 100 + Number.EPSILON) / 100;
    g_Player.DistanceFromCenter = dist + 5;
}
var t2 = new Date().getTime();
console.log(t2 - t1);
Thank you in advance
Do not use the three.js Raycaster for this.
Consider THREE.Ray (Ray.js), which offers the method intersectTriangle(a, b, c, backfaceCulling, target) for testing a single triangle.
Suggested optimizations:
- If the player starts from a known position, you already know their initial height, so there is no need to raycast (or just do a one-time, full-mesh slow intersection).
- If the player moves in small steps, the next raycast will most likely hit the same face as before. Optimization #1: remember the previous face and test it first.
- If the player does not jump, the next raycast will most likely hit a face adjacent to the one the player was on before. Optimization #2: build up a cache so that, given a face index, you can retrieve the adjacent faces in O(1) time (a sketch of such a cache follows below). This cache may be loaded from a file if your planet is not generated in real time.
So with this approach, each move costs one O(1) cache read plus a raycast against only 1-6 faces.
Win!
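A minimal sketch of optimization #2, assuming an indexed BufferGeometry whose world transform is identity (otherwise, convert the ray into the mesh's local space first). buildAdjacencyCache and testFaces are illustrative names; Ray.intersectTriangle is the real three.js method named above:
// Build an adjacency cache once: two faces are adjacent if they share an edge.
function buildAdjacencyCache(geometry) {
    var index = geometry.index.array;
    var faceCount = index.length / 3;
    var edgeToFaces = {};   // "a_b" with a < b -> faces sharing that edge
    var adjacency = [];     // face index -> up to 3 neighbouring face indices

    for (var f = 0; f < faceCount; f++) {
        adjacency.push([]);
        for (var e = 0; e < 3; e++) {
            var a = index[f * 3 + e];
            var b = index[f * 3 + (e + 1) % 3];
            var key = a < b ? a + "_" + b : b + "_" + a;
            (edgeToFaces[key] = edgeToFaces[key] || []).push(f);
        }
    }
    Object.keys(edgeToFaces).forEach(function (key) {
        var faces = edgeToFaces[key];
        if (faces.length === 2) {
            adjacency[faces[0]].push(faces[1]);
            adjacency[faces[1]].push(faces[0]);
        }
    });
    return adjacency;
}

// On each move, test only the previous face and its neighbours.
function testFaces(ray, geometry, faceIndices) {
    var index = geometry.index.array;
    var pos = geometry.attributes.position;
    var a = new THREE.Vector3(), b = new THREE.Vector3(), c = new THREE.Vector3();
    var hit = new THREE.Vector3();
    for (var i = 0; i < faceIndices.length; i++) {
        var f = faceIndices[i];
        a.fromBufferAttribute(pos, index[f * 3]);
        b.fromBufferAttribute(pos, index[f * 3 + 1]);
        c.fromBufferAttribute(pos, index[f * 3 + 2]);
        if (ray.intersectTriangle(a, b, c, false, hit) !== null) {
            return { faceIndex: f, point: hit.clone() };
        }
    }
    return null; // miss: fall back to a full (slow) raycast
}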
For a complex object it takes forever. It takes ~200ms for an object with ~1M polys (faces) (a 1024x512-segment sphere). Does raycasting test against every single face?
Out of the box, THREE.js does check every triangle when performing a raycast against a mesh, and there are no acceleration structures built into THREE.
I've worked with others on the three-mesh-bvh package (github, npm) to help address this problem, which may help you get up to the speeds you're looking for. Here's how you might use it:
import * as THREE from 'three';
import { MeshBVH, acceleratedRaycast } from 'three-mesh-bvh';
THREE.Mesh.prototype.raycast = acceleratedRaycast;
// ... initialize the scene...
globeMesh.geometry.boundsTree = new MeshBVH(globeMesh.geometry);
// ... initialize raycaster...
// Optional. Improves the performance of the raycast
// if you only need the first collision
raycaster.firstHitOnly = true;
const intersects = raycaster.intersectObject(globeMesh, true);
// do something with the intersections
There are some caveats mentioned in the README, so keep those in mind (the mesh index is modified, only non-animated BufferGeometry is supported, etc.). And there's still some memory optimization that could be done, but there are some tweakable options to help tune that.
I'll be interested to hear how this works for you! Feel free to leave feedback in the issues on how to improve the package, as well. Hope that helps!
I think you should pre-render the height map of your globe into a texture, assuming your terrain is not dynamic. Read all of it into a typed array, and then whenever your player moves, you only need to back-project her coordinates into that texture, query it, offset and multiply, and you should get what you need in O(1) time.
It's up to you how you generate that height map. Actually, if you have a bumpy globe, then you should probably start with a height map in the first place, and use it in your vertex shader to render the globe (with the input sphere being perfectly smooth). Then you can use the same height map to query the player's z.
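A rough sketch of that lookup, assuming the height map is equirectangular and has already been read back into a Float32Array; every name here is illustrative, and the u/v convention must match however the map was generated:
function getTerrainHeight(playerPos, globeCenter, heightData, width, height) {
    // Direction from the globe centre to the player, on the unit sphere.
    var dir = playerPos.clone().sub(globeCenter).normalize();
    // Equirectangular back-projection into texture coordinates.
    var u = 0.5 + Math.atan2(dir.z, dir.x) / (2 * Math.PI);
    var v = 0.5 - Math.asin(dir.y) / Math.PI;
    // Nearest-texel lookup; bilinear filtering would smooth the result.
    var x = Math.min(width - 1, Math.floor(u * width));
    var y = Math.min(height - 1, Math.floor(v * height));
    return heightData[y * width + x]; // height offset above the base radius
}

// Usage sketch:
// g_Player.DistanceFromCenter =
//     baseRadius + getTerrainHeight(player.position, globe.position, data, w, h);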
Edit: Danger! This may cause someone's death one day. The edge case I see here is that the nearest collision will not be seen, because searchRange will not contain the nearest triangle but will contain the second-nearest one, which is then returned as the closest. I.e. a robotic arm may stop near the torso instead of stopping at the arm right in front of it.
Anyway, here's a hack for when you are raycasting not too far from the previous result, e.g. during consecutive mousemove events. This will not work for completely random rays.
Mesh raycasting supports drawRange to limit how many triangles will be searched, and each raycast result comes with a faceIndex telling you which triangle was hit. If you're raycasting continuously, e.g. on mousemove, or a laser is linearly scanning a mesh, you can first search the area near* the previous hit.
* Triangles that are neighbours in the data are not guaranteed to be sorted in any way, so they may not be neighbours in space. Still, it's very likely that faces close in the data are close in space.
let lastFaceIndex = null
const searchRange = 2000 * 3

function raycast(mesh, raycaster) {
    // limited search around the previous hit
    if (lastFaceIndex !== null) {
        const drawRange = mesh.geometry.drawRange
        drawRange.start = Math.max(0, lastFaceIndex * 3 - searchRange)
        drawRange.count = searchRange * 2
        const intersects = raycaster.intersectObjects([mesh]);
        drawRange.start = 0
        drawRange.count = Infinity
        if (intersects.length) {
            lastFaceIndex = intersects[0].faceIndex
            return intersects[0]
        }
    }

    // regular full search
    const intersects = raycaster.intersectObjects([mesh]);
    if (!intersects.length) {
        lastFaceIndex = null
        return null
    }
    lastFaceIndex = intersects[0].faceIndex
    return intersects[0]
}

Experiencing something odd when using THREE.Raycaster for collision detection (r68)

I've been using the THREE.Raycaster successfully to test collisions for many things in my game engine so far, it's great and it works well.
However, recently I've run into something quite peculiar which I cannot seem to figure out. From my point of view, my logic and code are sound but the expected result is not correct.
Perhaps I'm just missing something obvious so I thought I'd ask for some help.
I am casting rays out from the center of the top of a group of meshes, one by one, in a circular arc. The meshes are all children of a parent Object3D and the goal is to test collisions between the origin mesh and other meshes which are also children of the parent. To test my rays, I am using the THREE.ArrowHelper.
Here's an image of the result of my code - http://imgur.com/ipzYUsa
In this image, the ArrowHelper objects are positioned (origin:direction) exactly how I want them. But yeah, there's something wrong with this picture; the code that produces it is:
var degree = Math.PI / 16,
    tiles = this.tilesContainer.children,
    tilesNum = tiles.length,
    raycaster = new THREE.Raycaster(),
    rayDirections, rayDirectionsNum, rayOrigin, rayDirection, collisions,
    tile, i, j, k;

for (i = 0; i < tilesNum; i++) {
    tile = tiles[i];
    rayOrigin = new THREE.Vector3(
        tile.position.x,
        tile.geometry.boundingBox.max.y,
        tile.position.z
    );
    rayDirections = [];
    for (j = 0; j < Math.PI * 2; j += degree) {
        rayDirections.push(new THREE.Vector3(Math.sin(j), 0, Math.cos(j)).normalize());
    }
    rayDirectionsNum = rayDirections.length;
    for (k = 0; k < rayDirectionsNum; k++) {
        rayDirection = rayDirections[k];
        raycaster.set(rayOrigin, rayDirection);
        collisions = raycaster.intersectObjects(tiles);
        this.testRay(rayOrigin, rayDirection, collisions);
    }
}
The testRay method looks like this:
testRay: function (origin, direction, collisions) {
    var arrowHelper = new THREE.ArrowHelper(
        direction,
        origin,
        1,
        (collisions.length === 0) ? 0xFF0000 : 0x0000FF
    );
    this.scene.add(arrowHelper);
}
Now, obviously, something is off about this image. The rays that collide with other meshes should be blue, while those that do not collide should be red.
It's clear from this image that something is totally out of whack, and when I inspect the collisions, I get some really off results. For a lot of those rays which appear blue in the image, I'm getting a huge number of collisions, something like 30 collisions for a single ray sometimes, but nothing for the others even when they are right next to other tiles.
I just can't figure out what it might be. How can it be that so many rays that should be blue are red? And how can rays from tiles at the edge of the level register collisions with tiles that do not exist?
Really scratching my head (read: bashing my head repeatedly) over this one, any help would be super appreciated!
The solution was actually outside this code and, I believe, not related to the outdated r68 build.
When making the tile meshes, I needed to set three properties on them
tileMesh.matrixAutoUpdate = false;
tileMesh.updateMatrix();
tileMesh.updateMatrixWorld(); // this is new
I was doing the first two, just not the last one. Why this is necessary, I do not know; it seems a little odd to me, but this is what fixed my problem. (Most likely the Raycaster tests against each mesh's matrixWorld, which, with matrixAutoUpdate disabled, is only computed during a render or by an explicit updateMatrixWorld() call; since the rays were cast before the first render, the world matrices were still stale.) I had an AxisHelper in the scene; if you look at the original image, you'll notice that all the blue ArrowHelper objects are actually pointing towards the AxisHelper. This is really weird, because the AxisHelper was added to the scene, not to tilesContainer. Adding the ArrowHelper objects to tilesContainer did not help.
The rendering process ran the raycaster code before the AxisHelper was added to the scene and before the initial render happened. The problem was also fixed if I moved the raycaster call to after the AxisHelper was added, but that was a hacky solution.
So the true fix was to add .updateMatrixWorld() to the tiles. The result now looks like this http://imgur.com/8LewqxL, which is correct (the ArrowHelper objects have been shortened in length so they don't overlap).
Big thanks to Manthrax for his help on this one.
I think you are making a local vs. global space error. I don't immediately see where exactly you go wrong, but all your position and direction calculations seem to be in the local system of the tilesContainer. Are you consistent in your handling of local vs. global coordinate systems?
For example, you add your arrowHelper to the scene instead of to the tilesContainer. It could be that the tilesContainer has some rotation set, and because of this the arrows are pointing in another direction than you expected.
What happens, for example, if you add the arrows to the tilesContainer instead?
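To illustrate, a hedged sketch of casting in world space when the tiles live inside a transformed parent, reusing the asker's variable names (localToWorld and transformDirection are standard three.js methods):
// Convert a tile-local ray into world space before raycasting.
function castFromTile(tile, angle, raycaster, targets) {
    // Local-space origin: the tile's position, at the top of its bounding box.
    var origin = new THREE.Vector3(
        tile.position.x,
        tile.geometry.boundingBox.max.y,
        tile.position.z
    );
    // Bring the origin into world coordinates via the parent's transform.
    tile.parent.localToWorld(origin);

    // Rotate the local direction by the parent's world matrix
    // (transformDirection ignores translation and renormalizes).
    var direction = new THREE.Vector3(Math.sin(angle), 0, Math.cos(angle))
        .transformDirection(tile.parent.matrixWorld);

    raycaster.set(origin, direction);
    return raycaster.intersectObjects(targets);
}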

ThreeJS - adding objects in different order affects alpha / display

My program creates a dynamic number of point-cloud objects with custom attributes that include the alpha value of each particle. This works fine; however, when the objects are nested within each other (say, spheres), the smaller (inner) ones are obscured by the bigger ones, even though their particles' alpha values are set properly. When I reverse the order in which the point-cloud objects are added to the scene, starting with the bigger ones and going down to the smaller ones, I can see the smaller ones through the bigger ones.
My question is whether there is a way to tell the renderer to update or recalculate the alpha values or re-render the smaller inner objects so that they show up?
I ran into the same problem as you did. I fixed it by calculating and setting the renderDepth for each mesh. For this you need the camera position and the center of your mesh.
You have probably already created meshes for each object. If you save all these meshes into an array, it's easier to calculate and set the renderDepth on them.
Here's an example how I did it.
updateRenderDepthOnRooms(cameraPosition: THREE.Vector3): void {
    var rooms: Room[] = this.getAllRooms();
    rooms.forEach((room) => {
        var roomCenter = getCenter(room.mesh.geometry);
        var renderDepth = 0 - roomCenter.distanceToSquared(cameraPosition);
        room.mesh.renderDepth = renderDepth;
    });
}

function getCenter(geometry: THREE.Geometry): THREE.Vector3 {
    geometry.computeBoundingBox();
    var bb = geometry.boundingBox;
    var offset = new THREE.Vector3();
    offset.addVectors(bb.min, bb.max);
    offset.multiplyScalar(0.5);
    return offset;
}
So, to get the center of your object, take the geometry from your mesh and use the getCenter(..) function from my example. Then calculate the render depth with the three.js function distanceToSquared(..), and set that render depth on your mesh.
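A hedged usage sketch, assuming the method above lives on some owner object (roomManager is a made-up name) and a conventional render loop:
function animate() {
    requestAnimationFrame(animate);
    // Recompute depths every frame so the ordering tracks the moving camera.
    roomManager.updateRenderDepthOnRooms(camera.position);
    renderer.render(scene, camera);
}
animate();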
That's it. Hope this will help you.

Remove adjoining faces in three.js

I'm trying to optimize a scene where I'm rendering cubes based off of an image's pixel data:
http://jsfiddle.net/majman/4sukB/
The code simply checks each pixel in an image and creates & positions a cube mesh accordingly.
However, as you can see if you toggle wireframes on, there is an abundance of unnecessary internal faces.
I am using mergeVertices as well as THREE.GeometryUtils.merge - so things are partially optimized.
I ran across this approach of comparing all the faces of the merged geometry, but because each cube face is now a pair of triangles, they are difficult to compare: the two triangles of adjoining faces will be flipped relative to each other.
I've also looked at the minecraft example, but I haven't been able to wrap my head around that approach.
Ok, with WestLangley's help - I was able to get there.
http://jsfiddle.net/majman/4sukB/2/
Took some fiddling to figure out which faces to adjust within buildPlane. After that, comparing centroids was relatively straightforward:
function removeDuplicateFaces(geometry) {
    for (var i = 0; i < geometry.faces.length; i++) {
        var centroid = geometry.faces[i].centroid;
        for (var j = 0; j < i; j++) {
            var f2 = geometry.faces[j];
            if (f2 !== undefined) {
                var centroid2 = f2.centroid;
                if (centroid.equals(centroid2)) {
                    delete geometry.faces[i];
                    delete geometry.faces[j];
                }
            }
        }
    }
    geometry.faces = geometry.faces.filter(function (a) { return a !== undefined });
    return geometry;
}

Face normals on dynamic geometry

I'm trying to create a vertex animation for a mesh.
Just imagine a vertex shader, but in software instead of hardware.
Basically, what I do is apply a transformation matrix to each vertex. The mesh is OK, but the normals don't look good at all.
I've tried using both computeVertexNormals() and computeFaceNormals(), but it just doesn't work.
The following code is the one I used for the animation (initialVertices are the initial vertices generated by the CubeGeometry):
for (var i = 0; i < mesh1.geometry.vertices.length; i++) {
    var vtx = initialVertices[i].clone();
    var dist = vtx.y;
    var rot = clock.getElapsedTime() - dist * 0.02;
    matrix.makeRotationY(rot);
    vtx.applyMatrix4(matrix);
    mesh1.geometry.vertices[i] = vtx;
}
mesh1.geometry.verticesNeedUpdate = true;
Here are two examples, one working correctly with CanvasRenderer:
http://kile.stravaganza.org/lab/js/dynamic/canvas.html
and one that doesn't work with WebGLRenderer:
http://kile.stravaganza.org/lab/js/dynamic/webgl.html
Any idea what I'm missing?
You are missing several things.
(1) You need to set the ambient reflectance of the material. It is reasonable to set it equal to the diffuse reflectance, or color.
var material = new THREE.MeshLambertMaterial( {
    color: 0xff0000,
    ambient: 0xff0000
} );
(2) If you are moving vertices, you need to update centroids, face normals, and vertex normals -- in the proper order. See the source code.
mesh1.geometry.computeCentroids();
mesh1.geometry.computeFaceNormals();
mesh1.geometry.computeVertexNormals();
(3) When you are using WebGLRenderer, you need to set the required update flags:
mesh1.geometry.verticesNeedUpdate = true;
mesh1.geometry.normalsNeedUpdate = true;
Tip: it is a good idea to avoid new and clone() in tight loops.
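To illustrate the tip, here is a hedged rework of the loop above with the temporaries hoisted out of the loop (same behaviour, names from the question):
// Allocate scratch objects once, outside the loop.
var tmpVertex = new THREE.Vector3();
var rotation = new THREE.Matrix4();

for (var i = 0; i < mesh1.geometry.vertices.length; i++) {
    tmpVertex.copy(initialVertices[i]);  // reuse instead of clone()
    rotation.makeRotationY(clock.getElapsedTime() - tmpVertex.y * 0.02);
    mesh1.geometry.vertices[i].copy(tmpVertex.applyMatrix4(rotation));
}
mesh1.geometry.verticesNeedUpdate = true;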
three.js r.63
