I have a sphere with multiple moving points on it, and I am drawing curves connecting the points like this:
Since the points are moving, I draw these curves for every frame, and thus there is a lot of memory overhead that I am worried about.
Each curve is drawn with
// points = an array of 40 THREE.Vector3 instances
path = new THREE.CatmullRomCurve3(points)
mesh = new THREE.Mesh(
  new THREE.TubeGeometry(path, 64, 0.5, false), // geometry
  new THREE.MeshBasicMaterial({ color: 0x0000ff }) // material
)
scene.add(mesh)
and for disposal:
scene.remove(mesh)
mesh.material.dispose()
mesh.geometry.dispose()
However, this does not let me dispose of my array of 40 THREE.Vector3 points or of my CatmullRomCurve3 path.
What is the issue, and how do I dispose of the new THREE.Vector3() and new THREE.CatmullRomCurve3() instances?
The dispose() methods in three.js are mainly intended to free GPU memory that is associated with JS objects like geometries, materials, textures, or render targets. Instantiating curves and plain math entities like Vector3 does not allocate any GPU memory.
Hence, it is sufficient to simply remove all references to path and points so they can be cleaned up by the GC.
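As a minimal sketch (reusing the names from the question), a complete per-frame teardown would therefore look like this:
scene.remove(mesh)
mesh.geometry.dispose() // frees the GPU buffers of the TubeGeometry
mesh.material.dispose() // frees the GPU-side material resources
mesh = null   // path and points hold no GPU memory, so simply
path = null   // dropping every reference to them lets the garbage
points = null // collector reclaim them on its own schedule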
Related
I have a mesh whose geometry is a TubeBufferGeometry. Each frame of an animation cycle, the path of the TubeBufferGeometry will change (the path will be determined by values supplied at runtime), so I want to update the geometry every frame with a new TubeBufferGeometry. Of course, I can update the mesh's geometry like so:
mesh.geometry.dispose()
mesh.geometry = new THREE.TubeBufferGeometry(newPath, params)
But this is wasteful as it requires allocating a whole new BufferGeometry each frame. Ideally, I could simply give the TubeBufferGeometry constructor an existing geometry to overwrite, and instead of allocating a whole new geometry it would write its contents to that geometry's buffers. Something like this:
THREE.TubeBufferGeometry.overwrite(mesh.geometry, newPath, params)
(Because they'd use the same params, the old geometry's buffers would be sufficiently large to store the new geometry.)
Is something like this possible? Having TubeBufferGeometry compute the vertex positions for me is much more convenient than computing them by hand, but I just need a way for it to compute them in an existing buffer instead of allocating a new one each frame.
The geometry generators in three.js are intended for one-time creation of geometries. They are not designed to be called every frame in order to animate the structure of a mesh.
This approach is in general wasteful even without allocating new buffers. You should consider authoring the animation as a morph target animation in Blender instead.
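That said, if you do want to rebuild the tube every frame, one workaround (not an official three.js API, just a sketch under the assumption that tubularSegments, radialSegments, and closed never change) is to generate a throwaway TubeBufferGeometry and copy its vertex data into the existing geometry's buffers, so the GPU buffers are updated in place instead of reallocated:
// Hypothetical helper: all segment counts must match those used to
// create `geometry`, so every attribute array has an identical length.
function overwriteTube(geometry, newPath, tubularSegments, radius, radialSegments, closed) {
  var tmp = new THREE.TubeBufferGeometry(newPath, tubularSegments, radius, radialSegments, closed);
  geometry.attributes.position.array.set(tmp.attributes.position.array);
  geometry.attributes.normal.array.set(tmp.attributes.normal.array);
  geometry.attributes.position.needsUpdate = true; // re-upload to the GPU
  geometry.attributes.normal.needsUpdate = true;
  tmp.dispose(); // the temporary was never rendered, so this is cheap
}
This still allocates a temporary geometry on the CPU each frame, but it avoids creating new GPU buffers and new Mesh objects.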
I'm new to the area of geometry generation and manipulation, and I'm planning on doing this on an intricate and large scale. I know the basic way of doing this is as shown in the answer to this question:
var geom = new THREE.Geometry();
var v1 = new THREE.Vector3(0,0,0);
var v2 = new THREE.Vector3(0,500,0);
var v3 = new THREE.Vector3(0,500,500);
geom.vertices.push(v1);
geom.vertices.push(v2);
geom.vertices.push(v3);
geom.faces.push( new THREE.Face3( 0, 1, 2 ) );
geom.computeFaceNormals();
var object = new THREE.Mesh( geom, new THREE.MeshNormalMaterial() );
object.position.z = -100; // move it back a bit - a size of 500 is a bit big
object.rotation.y = -Math.PI * .5; // the triangle points away in depth; rotate it -90 degrees on Y
scene.add(object);
But I do have experience with image manipulation, working directly with a typed-array image buffer on the GPU, which is essentially the same thing as manipulating 3D points: colors are effectively 3D points on a 2D grid (in the case of a buffer, flattened out to a 1D typed array), and I know just how much faster that kind of large-scale manipulation is when processed with shaders on the GPU.
So I'm wondering if I can access the geometry in three.js directly as a typed array buffer. If so, I can use gpu.js to manipulate it on the GPU rather than the CPU and get a large performance boost.
Basically, I'm asking if there's something like canvas's getImageData method for three.js geometry.
As ThJim01 mentioned in the comments, THREE.BufferGeometry is the way to go, but if you insist on using THREE.Geometry to initialize your list of triangles, you can use the BufferGeometry.fromGeometry function to generate a BufferGeometry from the Geometry you originally made.
var geometry = new THREE.Geometry();
// ... initialize verts and faces ...
// Initialize the BufferGeometry
var buffGeom = new THREE.BufferGeometry();
buffGeom.fromGeometry(geometry);
// Print the typed array for the position of the vertices
console.log(buffGeom.getAttribute('position').array);
Note that the resultant geometry will not have an index array; it is just a list of disjoint triangles (which is how it was represented in the first place!).
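Once you have that typed array you can also write to it directly; as a minimal sketch, this pushes every vertex 10 units along Z and flags the attribute for re-upload:
var pos = buffGeom.getAttribute('position');
for (var i = 0; i < pos.count; i++) {
  pos.setZ(i, pos.getZ(i) + 10); // mutate the underlying typed array
}
pos.needsUpdate = true; // tell three.js to re-upload the buffer to the GPU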
Hope that helps!
One can easily create a THREE.BoxGeometry, where you pass width, height, and depth as three separate constructor arguments.
I would like to create any THREE[type]() with no parameters and set its values after that.
Is there a way to set the dimensions/size of a box geometry after creation (possibly when it is already buried in a Mesh), other than scaling etc.?
I couldn't find this in the documentation. If it isn't possible, maybe it is a major feature request, if not a bug; any thoughts on how to classify this? Maybe it's just a documentation change.
If you want to scale a mesh, you have two choices: scale the mesh
mesh.scale.set( x, y, z );
or scale the mesh's geometry
mesh.geometry.scale( x, y, z );
The first method modifies the mesh's matrix transform.
The second method modifies the vertices of the geometry.
Look at the source code so you understand what each scale method is doing.
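As a quick sketch of the difference (assuming the mesh uses a BufferGeometry):
// Scaling the mesh only changes its matrix; the vertex data is untouched:
mesh.scale.set( 2, 2, 2 );
console.log( mesh.geometry.attributes.position.getX( 0 ) ); // unchanged

// Scaling the geometry rewrites the vertices themselves:
mesh.geometry.scale( 2, 2, 2 );
console.log( mesh.geometry.attributes.position.getX( 0 ) ); // now doubled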
three.js r.73
When you instantiate a BoxGeometry object, or any other geometry for that matter, its vertices and other buffers are created on the spot from the parameters provided. As such, it is not possible to simply change a property of the geometry and have the vertices update; the entire object must be re-instantiated.
You will need to create your geometries once you have their parameters available. You can, however, create meshes without geometries, add them to the scene, and assign the mesh's geometry property once you have enough information to instantiate the object. Failing that, you could also start from a default size and then scale the mesh to reach your target.
Technically, scaling only creates the illusion of an updated geometry, and the question did say "other than scaling". So I would say a better approach is to reassign the geometry property of your mesh to a new geometry.
mesh.geometry = new THREE.BoxGeometry(newSize, newSize, newSize)
With this approach you can update any aspect of the geometry including width segments for example. This is especially useful when working with non box geometries like cylinders or spheres.
Here is a full working example using this approach:
let size = 10
let newSize = 20
// Create a blank geometry and make a mesh from it.
let geometry = new THREE.BoxGeometry()
let material = new THREE.MeshNormalMaterial()
let mesh = new THREE.Mesh(geometry, material)
// Depending on your three.js version, adding this mesh to the scene will
// display either nothing or a default 1x1x1 cube.
scene.add(mesh)
// Unless you intend to reuse your old geometry, dispose of it;
// this will significantly reduce the memory footprint.
mesh.geometry.dispose()
// Update the mesh geometry to a new geometry with whatever parameters you desire.
// You will now see these changes reflected in the scene.
mesh.geometry = new THREE.BoxGeometry(size, size, size)
// You can update the geometry as many times as you like.
// This can be done before or after adding the mesh to the scene.
mesh.geometry = new THREE.BoxGeometry(newSize, newSize, newSize)
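If you swap geometries often, it can help to wrap the pattern in a small helper (hypothetical, not part of three.js) so the old geometry is always disposed:
function replaceGeometry(mesh, newGeometry) {
  mesh.geometry.dispose() // free the old geometry's GPU buffers
  mesh.geometry = newGeometry
}
replaceGeometry(mesh, new THREE.BoxGeometry(newSize, newSize, newSize))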
I've been putting together a 3D model of a house, and right now I'm stuck with yet another aggravating roadblock of the kind three.js has gotten me accustomed to.
I'm creating my scene in Maya and using the OBJ exporter to write obj and mtl files that I then import into three.js. I have about 9 objects in my scene: ungrouped, children only to the world, history deleted, and with texture maps (with ambient occlusion and lighting baked in) assigned to them via shading maps.
I've had little luck actually using the mtl file, so I just copied my texture maps, loaded them separately, and created materials from them in three.js.
Now, all of these objects look just fine in the browser, except for the simplest one, the walls and floor object. This is what the object looks like in Maya:
As you can see, a rather simple mesh with minimal polys looking beautiful in Maya.
I've learned that when I export objects into obj files, only one UV channel is supported, so I copy my UVs into the default channel and delete all other UV channels before exporting. This is the UV map:
But when I assign this material in the browser, I get a strange texture distortion like so:
It's like the UVs are all over the place. I would seriously doubt that my approach is anywhere close to being on target if it weren't for those 8 other (more complex, mind you) objects which all display fine, including part of the wall that I've cut out of the problematic piece, which is part of the bathroom.
Does anyone have a clue as to how I can troubleshoot this? I've tried exporting straight to js from Maya, but I'm having even more problems with that approach. I've tried converting the obj file into js using the packaged browser-based converter. I've spent days on this and am not making any progress.
Here's some relevant code.
scene = new THREE.Scene();
renderer = new THREE.WebGLRenderer({antialias: true} );
var wallTexture = THREE.ImageUtils.loadTexture("obj/final_walls.jpg");
var wallMaterial = new THREE.MeshLambertMaterial( {color: 0x929EAC, map:wallTexture} );
var manager = new THREE.LoadingManager();
var loader = new THREE.OBJMTLLoader( manager );
loader.load( 'obj/wallOnly.obj', 'obj/wallOnly.mtl', function ( object ) {
    object.children[2].material = wallMaterial;
    floorplan.add(object);
    camera.lookAt( object );
} );
Please help!!
OMG! After banging my head against the wall, I finally found the solution!
I discovered that the problem was not so much a distortion of the texture as a random swapping of UV faces. That's right! For some reason, the WebGL renderer randomly swapped some faces of the object with others.
Out of sheer coincidence I turned my mesh into quads instead of triangles and, voila!, that fixed everything. QUADS!!! I wasted 3 solid days on triangles!!!
We have 1 geometry that gets attached to every mesh in our scene.
var geometry = new three.PlaneGeometry(1, 1, 1, 1);
Everything has a texture that we generate and cache to create a new material and a mesh for each object.
this.material = new three.MeshLambertMaterial({
transparent: true,
emissive: 0xffffff
});
// get the cached texture
this.material.map = this.getTexture(this.attributes);
this.shape = new three.Mesh(geometry, this.material);
Afterwards we add these shapes into various Object3Ds in order to move large groups of shapes around.
This all works great on nicer devices with up to 5000 circles, but then our framerate starts to drop. On weaker devices it is dramatically slower, even with, say, 100 meshes. We know that merging geometries can speed things up; however, we only have a single geometry that is shared. Is it possible to merge meshes? Does that even make sense? Note: these shapes are interactive (movable/clickable). What are our options?
Other notes:
We are using Ejecta on mobile devices, which is great at low mesh counts but not so great past 100 meshes. I don't think it's Ejecta's fault, but rather our lack of knowledge about how to optimize! Also, even on desktop our app shows a suspiciously high amount of CPU usage.
Figured it out! We went from being able to render 5k things at 60fps to 100k things at approximately 40fps.
We followed what most people out there say about merging meshes, but it took some experimentation to really understand what was happening and to get multiple textures/materials to work.
var sceneGeometry = new three.Geometry(); // accumulates all merged circles
for (var i = 0; i < 100000; i++) {
    // creates a mesh from the geometry and material from the question and returns an object
    circle = ourCircleFactory.create();
    circle.shape.updateMatrix();
    sceneGeometry.merge(circle.shape.geometry, circle.shape.matrix, circle.cachedMaterialIndex);
}
var finalMesh = new three.Mesh(sceneGeometry, new three.MeshFaceMaterial(cachedMaterials));
scene.add(finalMesh);
That code will create 1 geometry per cached material. cachedMaterialIndex is something we created to cache textures and indicate which material to use.
It is likely that this code will create one geometry per combination of material and geometry, e.g., if you have 5 geometries and they are interchangeable with 5 materials, you will get 25 merged geometries. It seems that it doesn't matter how many objects you have on screen. Note: we were getting 15fps with 5000 separate geometries, so I think this is a fairly cheap solution.
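For reference, the cache behind cachedMaterialIndex could look something like this (an illustrative sketch; the names are ours, not a three.js API):
// One entry per distinct texture; the array index doubles as the
// materialIndex handed to sceneGeometry.merge() above.
var cachedMaterials = [];
var materialIndexByKey = {};
function getCachedMaterialIndex(key, texture) {
    if (materialIndexByKey[key] === undefined) {
        cachedMaterials.push(new three.MeshLambertMaterial({
            transparent: true,
            map: texture
        }));
        materialIndexByKey[key] = cachedMaterials.length - 1;
    }
    return materialIndexByKey[key];
}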