Replace TubeBufferGeometry's contents with new TubeBufferGeometry in place, without allocation - javascript

I have a mesh whose geometry is a TubeBufferGeometry. Each frame of an animation cycle, the path of the TubeBufferGeometry will change (the path will be determined by values supplied at runtime), so I want to update the geometry every frame with a new TubeBufferGeometry. Of course, I can update the mesh's geometry like so:
mesh.geometry.dispose()
mesh.geometry = new THREE.TubeBufferGeometry(newPath, params)
But this is wasteful as it requires allocating a whole new BufferGeometry each frame. Ideally, I could simply give the TubeBufferGeometry constructor an existing geometry to overwrite, and instead of allocating a whole new geometry it would write its contents to that geometry's buffers. Something like this:
THREE.TubeBufferGeometry.overwrite(mesh.geometry, newPath, params)
(Because they'd use the same params, the old geometry's buffers would be sufficiently large to store the new geometry.)
Is something like this possible? Having TubeBufferGeometry compute the vertex positions for me is much more convenient than computing them by hand, but I just need a way for it to compute them in an existing buffer instead of allocating a new one each frame.

The geometry generators of three.js are intended for one-time creation of geometries. They are not intended to be used per frame in order to animate the structure of a mesh.
This approach is in general wasteful even without allocating new buffers. You should consider authoring the animation as a morph target animation in Blender.
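If you nevertheless want to keep reusing the existing geometry object, one possible workaround (a rough sketch, not an official API: it assumes the tube parameters other than the path stay constant so the buffer sizes match, and it still allocates temporary typed arrays on the CPU each frame) is to generate a throwaway TubeBufferGeometry and copy its attribute data into the existing buffers:
// Sketch: overwrite the existing tube's buffers with freshly generated data.
// Assumes tubularSegments, radialSegments and closed are unchanged, so the
// attribute arrays have identical sizes. Names are illustrative.
function overwriteTube(mesh, newPath, tubularSegments, radius, radialSegments, closed) {
  const temp = new THREE.TubeBufferGeometry(newPath, tubularSegments, radius, radialSegments, closed);
  const dst = mesh.geometry;
  ['position', 'normal', 'uv'].forEach(function (name) {
    dst.attributes[name].array.set(temp.attributes[name].array); // copy in place
    dst.attributes[name].needsUpdate = true;                     // flag for re-upload to the GPU
  });
  temp.dispose(); // the temporary geometry never reached the GPU, but disposing is harmless
}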

Related

THREE.js: What do these vertices mean in the box geometry?

I have created a box geometry as below,
const hand1geo = new THREE.BoxGeometry(2, 0.01, 0.2);
const material_sidehand = new THREE.MeshBasicMaterial({ color: 0x3cc1b7 });
const sidehand = new THREE.Mesh(hand1geo, material_sidehand);
What I want to do is extract the vertices from this box, and I use this:
this.sidehand.attributes.position.array
And what I got is the following (screenshot of the logged array omitted). I really don't understand why it just spans 72 elements (24 vectors) with the same values. Why are there 24 vectors here, and where have they been defined? I want to use a raycaster to do collision detection later on.
I tried to use this.sidehand.vertices but it doesn't work.
I tried to use this.sidehand.vertices but it doesn't work.
I don't know what references you used, but Mesh never had a property called vertices. You are probably referring to the former Geometry class, which indeed had this property. However, that class has been deprecated, and BufferGeometry is used now instead.
I really don't understand why it just spans 72 elements (24 vectors) with the same values.
The values are not identical. BoxGeometry defines all vertices of the box in local space in a flat array so the data can be directly used by the WebGL API (which is good for performance).
There are 24 vectors because the geometry defines four vertices for each of the box's six sides. Each side is composed of two triangles. This is done so it's possible to generate proper normals and texture coordinates.
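If you want to inspect those vertices yourself, a minimal sketch (just logging them) reads them back from the position attribute:
// Sketch: read the 24 vertices out of the BufferGeometry position attribute.
const position = sidehand.geometry.attributes.position;
const vertex = new THREE.Vector3();
for (let i = 0; i < position.count; i++) { // position.count === 24 for BoxGeometry
  vertex.fromBufferAttribute(position, i); // copies x, y, z of vertex i
  console.log(i, vertex.x, vertex.y, vertex.z);
}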
I suggest you reconsider using raw geometry data for collision detection. You will achieve much better performance by working with bounding volumes instead.
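For example, a rough sketch of a coarse test with axis-aligned bounding boxes (otherMesh is a hypothetical second object):
// Sketch: world-space bounding-box intersection test instead of raw vertices.
const boxA = new THREE.Box3().setFromObject(sidehand);
const boxB = new THREE.Box3().setFromObject(otherMesh);
if (boxA.intersectsBox(boxB)) {
  // handle the collision
}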

Correctly disposing of curves in THREE.JS

I have a sphere with multiple moving points on it, and I am drawing curves connecting the points.
Since the points are moving, I draw these curves for every frame, and thus there is a lot of memory overhead that I am worried about.
Each curve is drawn with
// points = array of 40 THREE.Vector3 instances
path = new THREE.CatmullRomCurve3(points)
mesh = new THREE.Mesh(
  new THREE.TubeGeometry(path, 64, 0.5, false),     // geometry
  new THREE.MeshBasicMaterial({ color: 0x0000ff })  // material
)
scene.add(mesh)
and for disposal:
scene.remove(mesh)
mesh.material.dispose()
mesh.geometry.dispose()
However, this does not let me dispose of my array of 40 Three.js vectors (points) or of my CatmullRomCurve3 (path).
What is the issue, and how do I dispose of the new THREE.Vector3() and new THREE.CatmullRomCurve3() instances?
What is the issue, and how do I dispose of the new THREE.Vector3() and new THREE.CatmullRomCurve3() instances?
dispose() methods in three.js are mainly intended to free GPU memory that is associated with JS objects like geometries, materials, textures, or render targets. Instantiating curves and plain math entities like Vector3 does not cause any allocation of GPU memory.
Hence, it should be sufficient to just remove any references to path so it can be cleaned up by the GC.
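As a rough sketch of the per-frame cleanup (variable names as in the question), it is enough to dispose of the GPU-backed objects and drop the plain JS references:
// Free GPU-side resources explicitly...
scene.remove(mesh);
mesh.geometry.dispose();
mesh.material.dispose();
// ...and simply drop the JS references; the garbage collector reclaims the
// curve and its Vector3 points once nothing references them anymore.
mesh = null;
path = null;
points = null;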

resizing individual models in a single geometry

I have a 3D model of my home town. I would like to use real time data to change the height of the buildings. In my first try, I loaded the buildings as individual meshes and called scene.add(buildingMesh) during setup.
var threeObjects = [];
var buildingMesh = new THREE.Mesh(geometry, material);
threeObjects.push(buildingMesh);

$.each(threeObjects, function (i, buildingMesh) {
  buildingMesh.rotation.x += -3.1415 * 0.5;
  buildingMesh.castShadow = true;
  buildingMesh.receiveShadow = true;
  scene.add(buildingMesh);
});
This is too slow, as my dataset consists of roughly 10,000 buildings.
So instead I merged the geometries of all the meshes into a single geometry and wrapped that in one mesh to add to the scene:
singleGeometry.merge(buildingMesh.geometry, buildingMesh.matrix); //in a loop
var faceColorMaterial = new THREE.MeshLambertMaterial( { color: 0xffffff, vertexColors: THREE.VertexColors } );
combinedMesh = new THREE.Mesh(singleGeometry, faceColorMaterial);
scene.add(combinedMesh);
Just to make a proof of concept, I'm trying to change the height of a building when I click it. Alas, this is not working.
By adding a new id field, I can get a reference to the faces and vertices and change the color of the building, but I cannot, for the life of me, get them to change height.
In my first version, I would just use something like:
buildingMesh.scale.z=2;
But as I have no meshes anymore, I'm kinda lost.
Can anybody help?
disclaimer: I'm new to Three.js, so my question might be stupid...hope it's not :)
If you combine all of your buildings into a single geometry, you're obliterating everything that makes the buildings distinct from each other. Now you can't tell building A from building B because it's all one big geometry, and geometry at its basic level is literally just arrays of points and polygons with no way of telling any of it apart. So I think it's the wrong approach to merge it all together.
Instead, you should take advantage of three.js's efficient scene graph architecture. You had the right idea at first to just add all the buildings to a single root Object3D ("scene"). That way you get all the efficiencies of the scene graph but can still individually address the buildings.
To make it load more efficiently, instead of creating the scene graph in three.js every time you load the app, you should do it ahead of time in a 3D modeling program. Build the parent/child relationships there, and export it as a single model containing all of the buildings as child nodes. When you import it into three.js, it should retain its structure.
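As a rough sketch of keeping the buildings as individually addressable children of one parent, whether built in code or imported from a modeling tool (buildingGeometries and the naming scheme here are illustrative):
// Sketch: one parent Object3D with one child mesh per building.
var city = new THREE.Object3D();
buildingGeometries.forEach(function (geometry, i) {
  var buildingMesh = new THREE.Mesh(geometry, material);
  buildingMesh.name = 'building_' + i;
  city.add(buildingMesh);
});
scene.add(city);

// Later, drive the height of a single building from real-time data:
var building = city.getObjectByName('building_42');
building.scale.z = 2;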
JCD: That was not quite the question I asked.
But anyhow, I found a solution to the problem.
What I did was merge all the geometries, but instead of using the standard clone in geometry.merge() I used a shallow reference. That made it possible to use the references in threeObjects to find the correct building, resize that part of the geometry using Mesh.scale, and then set geometry.verticesNeedUpdate = true.
For further optimization, I split the model into 5 different geometries and only updated the geometry that contained the building.

How to Change a Box's dimensions/size after creation?

One can easily create a THREE.BoxGeometry by passing width, height, and depth as three separate arguments at creation time.
I would like to create any and all THREE[types]() with no parameters and set the values after that.
Is there a way to set the dimensions/size of the box geometry after creation (possibly when it's already buried in a Mesh too), other than scaling etc.?
I couldn't find this in the documentation; if it is possible, maybe it's just a documentation gap, and if not, perhaps a feature request. Any thoughts on how to classify this?
If you want to scale a mesh, you have two choices: scale the mesh
mesh.scale.set( x, y, z );
or scale the mesh's geometry
mesh.geometry.scale( x, y, z );
The first method modifies the mesh's matrix transform.
The second method modifies the vertices of the geometry.
Look at the source code so you understand what each scale method is doing.
three.js r.73
When you instantiate a BoxGeometry object, or any other geometry for that matter, the vertex and other attribute buffers are created on the spot using the parameters provided. As such, it is not possible to simply change a property of the geometry and have the vertices update; the entire object must be re-instantiated.
You will need to create your geometries once you have their parameters available. You can, however, create meshes with placeholder geometries, add them to the scene, and update each mesh's geometry property once you have enough information to instantiate the real object. Failing that, you could also set a default size at first and then scale to reach your target.
Technically, scaling only creates the illusion of an updated geometry, and the question did say "other than scaling". So I would say a better approach is to reassign the geometry property of your mesh to a new geometry.
mesh.geometry = new THREE.BoxGeometry(newSize, newSize, newSize)
With this approach you can update any aspect of the geometry, including, for example, the number of width segments. This is especially useful when working with non-box geometries like cylinders or spheres.
Here is a full working example using this approach:
let size = 10
let newSize = 20
// Create a blank geometry and make a mesh from it.
let geometry = new THREE.BoxGeometry()
let material = new THREE.MeshNormalMaterial()
let mesh = new THREE.Mesh(geometry, material)
// Adding this mesh to the scene won't display anything because ...
// the geometry has no parameters yet.
scene.add(mesh)
// Unless you intend to reuse the old geometry, dispose of it;
// this will significantly reduce the memory footprint.
mesh.geometry.dispose()
// Update the mesh geometry to a new geometry with whatever parameters you desire.
// You will now see these changes reflected in the scene.
mesh.geometry = new THREE.BoxGeometry(size, size, size)
// You can update the geometry as many times as you like,
// before or after adding the mesh to the scene. Dispose of the
// geometry being replaced each time to avoid leaking it.
mesh.geometry.dispose()
mesh.geometry = new THREE.BoxGeometry(newSize, newSize, newSize)

Three.js outlines

Is it possible to have a black outline on my 3D models with three.js?
I would like graphics that look like Borderlands 2 (toon shading + black outlines).
I'm sure I'm coming in late; let's hope this still helps someone later.
Here's the deal: you don't need to render everything twice, and the overhead is actually not substantial. All you need to do is duplicate the mesh and set the duplicate mesh's material side to THREE.BackSide. No double passes. You will be rendering two meshes instead, with most of the outline's geometry culled by WebGL's back-face culling.
Here's an example:
var scene = new THREE.Scene();
//Create main object
var mesh_geo = new THREE.BoxGeometry(1, 1, 1);
var mesh_mat = new THREE.MeshBasicMaterial({color : 0xff0000});
var mesh = new THREE.Mesh(mesh_geo, mesh_mat);
scene.add(mesh);
//Create outline object
var outline_geo = new THREE.BoxGeometry(1, 1, 1);
//Notice the second parameter of the material
var outline_mat = new THREE.MeshBasicMaterial({color : 0x00ff00, side: THREE.BackSide});
var outline = new THREE.Mesh(outline_geo, outline_mat);
//Scale the object up to have an outline (as discussed in previous answer)
outline.scale.multiplyScalar(1.5);
scene.add(outline);
For more details on backface culling, check out: http://en.wikipedia.org/wiki/Back-face_culling
The above approach works well if you want to add an outline to objects without adding a toon shader (which would mean losing "realism").
Toon shading by itself supports edge detection; the Borderlands developers built a 'cel' shader to achieve this effect.
In cel shading you can either use the object-duplication method (done at the [low] pipeline level) or use image-processing filters for edge detection. This is where the performance tradeoff between the two techniques comes in.
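For the image-processing route, three.js ships example post-processing passes that do edge-detection-based outlining around selected objects. A rough sketch (assuming the EffectComposer, RenderPass and OutlinePass files from the three.js examples are loaded; object names are illustrative):
// Sketch: outline via post-processing instead of mesh duplication.
var composer = new THREE.EffectComposer(renderer);
composer.addPass(new THREE.RenderPass(scene, camera));

var outlinePass = new THREE.OutlinePass(
  new THREE.Vector2(window.innerWidth, window.innerHeight), scene, camera);
outlinePass.selectedObjects = [mesh];       // objects to outline
outlinePass.visibleEdgeColor.set(0x000000); // black outline
composer.addPass(outlinePass);

// In the render loop, call composer.render() instead of renderer.render().
composer.render();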
More info on cel: http://en.wikipedia.org/wiki/Cel_shading
Cheers!
Yes, it is possible, but not in a simple out-of-the-box way. For toon shading there are even shaders included in /examples/js/ShaderToon.js.
For the outlines, I think the most commonly suggested method is to render in two passes. The first pass renders the models in black and at a slightly larger scale; the second pass renders at normal scale with the toon shaders. This way you'll see the larger black models as an outline. It's not perfect, but I don't think there's an easy way out. You might have more success searching for "three.js hidden line rendering"; while the look is different, a somewhat similar method is used to achieve it.
It's an old question, but here is what I did.
I created an outlined cel-shader for my CG course. Unfortunately it takes three rendering passes. I'm currently trying to figure out how to remove one pass.
Here's the idea:
1) Render NormalDepth image to texture.
In the vertex shader you do what you normally do: transform the position to screen space, along with the normal.
In the fragment shader you calculate the depth of the pixel and then output the normal as the color, with the depth as the alpha value:
float ndcDepth = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) / (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
2) Render the scene to a texture with cel shading. I changed the scene's override material for this.
3) Make a full-screen quad, render both textures onto it, and have an orthographic camera look at it. The cel-shaded texture is rendered onto the quad as-is, but on the normal-depth texture you run edge detection, and with that you know when a pixel needs to be black (an edge).
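A rough sketch of how the passes can be driven from the render loop (normalDepthMaterial and celMaterial are hypothetical ShaderMaterials built from the shaders described above, quadScene/orthoCamera hold the full-screen quad, and a three.js version with renderer.setRenderTarget is assumed):
// Sketch: render the scene twice into textures using override materials,
// then composite on a full-screen quad with an orthographic camera.
var normalDepthTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
var celTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);

// Pass 1: normals + depth
scene.overrideMaterial = normalDepthMaterial;
renderer.setRenderTarget(normalDepthTarget);
renderer.render(scene, camera);

// Pass 2: cel shading
scene.overrideMaterial = celMaterial;
renderer.setRenderTarget(celTarget);
renderer.render(scene, camera);

// Pass 3: edge detection + composite; the quad's shader samples both
// normalDepthTarget.texture and celTarget.texture.
scene.overrideMaterial = null;
renderer.setRenderTarget(null);
renderer.render(quadScene, orthoCamera);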
