So I have this glTF model that I import into a three.js scene. The model itself is created in Blender, where I made a character, rigged it, and created a few animations with separate names (IdleAction, StartAimingAction, etc.). I then save this as a glb file, which includes the animations I created. I can successfully load this glb into my three.js scene and display any of the animations.
I now tried to duplicate the character (both mesh and armature) so that I have 2 characters. I did this by selecting both the mesh and the armature in Blender and pressing Shift+D to duplicate. I then save to glb and load it into my three.js scene. When I do this, I get two instances of every animation (check the image below). For instance, I have 2 animations both named "IdleAction", one for each character. But there seems to be no real way of knowing which animation belongs to which object. Likewise, I have 2 animations named "StartAimingAction" (one for each character), but how can I programmatically choose to, for example, play one animation for character 1 and another animation for character 2 if I can't see which animation belongs to which character?
So far this is just a test, because my aim is to have a scene with 100 of these soldier characters, which will be incredibly messy if that means 100 copies of every animation. Since I have around 5 different animations, that would result in 500 animations.
It would be so much nicer if the animations became part of the actual character object instead (see the arrow in the image below), but those lists are always empty; instead, all animations are part of the "animations" list on the glTF root object. (BlueSoldierTemplate and BlueSoldierTemplate001 are characters 1 and 2 in the image.)
I am trying to create a clock using TextGeometry. In order to update the time, I need to update the text in the TextGeometry, which can be done by removing the old geometry and creating a new one.
Every time I add a new TextGeometry, it freezes my browser:
// Remove old mesh
earthClockMesh.geometry.dispose();
earthClockMesh.material.dispose();
group.remove(earthClockMesh);
// Add new mesh
earthClockMesh = this.getTextMesh(
new Date(diluatedTime).toLocaleString(),
textMaterial
);
group.add(earthClockMesh);
Does anybody know a better way to update the text in TextGeometry without freezing the browser?
Live Example
https://codesandbox.io/s/peaceful-boyd-x859m
You can see the particles freeze for a moment when the TextGeometry is changed.
Using THREE.TextBufferGeometry will improve the performance, since it produces far fewer object allocations than TextGeometry. Besides, each instance of THREE.Geometry is internally converted to THREE.BufferGeometry before the first rendering anyway. If you additionally lower the amount of curveSegments, there should be almost no noticeable lag anymore.
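A minimal sketch of the swap, assuming a loaded font variable and the existing mesh and material from the question (the size values and the updateClockText name are illustrative, not from the post):

// Replace the mesh's geometry with a freshly built, cheaper TextBufferGeometry.
function updateClockText(mesh, text, font) {
  mesh.geometry.dispose(); // free the GPU resources of the old geometry
  mesh.geometry = new THREE.TextBufferGeometry(text, {
    font: font,
    size: 1,
    height: 0.1,
    curveSegments: 4 // fewer curve segments = far fewer triangles to rebuild
  });
}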
three.js R107
The reason for this is that you're deleting the generated geometry, then re-building thousands and thousands of triangles each second. This is very computationally expensive, and you see the animation freeze while the CPU tries to catch up. This is what you're doing:
Text geometry gets disposed
CPU re-builds all characters with an updated second (first bottleneck)
New geometry data gets passed to GPU (second bottleneck)
Scene gets rendered smoothly while no geometry is being rebuilt
With real-time graphics (videogames, visualizers, etc), the geometry construction typically happens at the beginning of the app to avoid these mid-game stutters. Try to generate the geometry only once, then swap it out as necessary:
Create a "dictionary" of all 16 necessary characters as individual Mesh objects: 0123456789:/APM,
With this base dictionary, you can .clone() the needed characters, then place them with .position. You can use .clone(false), since the glyph meshes have no children to copy.
At the end of each second, .clone() the geometry from the dictionary into the few characters you need to update.
The beauty of cloning Meshes like this is that the geometry is only generated once. You don't need to spend tons of processing power re-building it.
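A minimal sketch of that idea, assuming a loaded font and the textMaterial from the question (the sizes and the fixed glyph advance are placeholder assumptions):

// Build each glyph mesh exactly once, up front.
const glyphs = {};
for (const ch of '0123456789:/APM,') {
  const geometry = new THREE.TextBufferGeometry(ch, {
    font: font, size: 1, height: 0.1, curveSegments: 4
  });
  glyphs[ch] = new THREE.Mesh(geometry, textMaterial);
}

// Each second: rebuild the string from cheap clones instead of new geometry.
function setText(group, text) {
  while (group.children.length) group.remove(group.children[0]);
  let x = 0;
  for (const ch of text) {
    const mesh = glyphs[ch].clone(false); // shares the cached geometry and material
    mesh.position.x = x;
    x += 0.8; // crude fixed advance; real code would measure each glyph
    group.add(mesh);
  }
}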
I have a demo where I used hundreds of cubes that have exactly the same geometry and texture. Example:
texture = THREE.ImageUtils.loadTexture ...
material = new THREE.MeshLambertMaterial( map: texture )
geometry = new THREE.BoxGeometry( 1, 1, 1 )

cubes = []
for i in [0..1000]
  cubes.push new THREE.Mesh geometry, material

# ... on every frame
for cube in cubes
  # do something with each cube
Once all the cubes are created, I start moving them on the screen.
All of them have the same texture and the same size; they just change position and rotation. The problem is that when I start using many hundreds of cubes, the computer starts to struggle to render them.
Is there any way I could tell Three.js / WebGL that all those objects are the same object, that they are identical copies just in different positions?
I read something about BufferGeometry and Geometry2 being able to boost performance in this sort of situation, but I'm not exactly sure what would be best in this case.
Thank you
Is there any way I could tell Three.js / WebGL that all those objects are the same object, that they are identical copies just in different positions?
Unfortunately, there's nothing that can automatically determine and optimize drawcalls in that regard. That would be pretty awesome.
I read something about BufferGeometry and Geometry2 being able to boost performance in this sort of situation, but I'm not exactly sure what would be best in this case.
So, the deal here is this: the normal THREE.Geometry class three.js provides is built for developer convenience, but is a bit removed from how data is handled by WebGL. This is what DirectGeometry (earlier called Geometry2) and BufferGeometry are for. A BufferGeometry is a representation of how WebGL expects data for drawcalls to be provided: it contains a typed array for every attribute of the geometry. The conversion from Geometry to BufferGeometry happens automatically every time geometry.verticesNeedUpdate is set to true.
If you don't change any of the attributes, this conversion will happen once per geometry (of which you have one), so this is completely fine, and moving to a BufferGeometry won't help (simply because you are already using it).
The main problem you face with several hundred geometries is the number of drawcalls required to render the scene. Generally speaking, every instance of THREE.Mesh represents a single drawcall. And those drawcalls are expensive: a single drawcall that outputs hundreds of thousands of triangles is no problem at all, but thousands of drawcalls with 100 triangles each will very quickly become a serious performance problem.
Now, there are different ways the number of drawcalls can be reduced with three.js. The first is (as already mentioned in the comments) to combine multiple meshes/geometries into a single one (in the end, meshes are just a collection of triangles, so there's no requirement that they form a single "body" or anything like that). This isn't too practical in your case, as it would involve applying the position and rotation of each of your cubes via JS and updating the vertex arrays accordingly on every frame.
What you are really looking for is a WebGL feature called geometry instancing.
This is not as easy to use as regular meshes and geometries, but not too complicated either.
With instancing, you can create a huge number of objects in a single drawcall. All of the rendered objects will share a single geometry (your cube geometry with its vertices, normals, and UV coordinates). The instancing happens when you add special attributes of type InstancedBufferAttribute, which can contain independent values for each instance. So you could add two per-instance attributes for position and rotation (or a single per-instance transformation matrix if you like).
These examples should pretty much be what you are looking for:
http://threejs.org/examples/?q=instancing
The only difficulty with instancing as of now is the material: you will need to provide a custom vertex shader that knows how to apply your per-instance attributes to the vertex positions from the original geometry (this can also be seen in the code of the examples).
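A rough sketch of what such a setup can look like; the names and values are illustrative, and newer three.js releases also offer THREE.InstancedMesh, which hides most of this plumbing:

const COUNT = 1000;

// All instances share the vertex data of a single unit cube.
const box = new THREE.BoxBufferGeometry(1, 1, 1);
const geometry = new THREE.InstancedBufferGeometry();
geometry.index = box.index;
geometry.attributes.position = box.attributes.position;

// One extra vec3 per instance: its position in the world.
const offsets = new Float32Array(COUNT * 3);
for (let i = 0; i < offsets.length; i++) {
  offsets[i] = (Math.random() - 0.5) * 100;
}
geometry.setAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3));
// Note: releases before r110 use geometry.addAttribute instead of setAttribute.

// The custom vertex shader applies the per-instance offset.
const material = new THREE.RawShaderMaterial({
  vertexShader: `
    precision highp float;
    uniform mat4 modelViewMatrix;
    uniform mat4 projectionMatrix;
    attribute vec3 position;
    attribute vec3 offset; // per-instance attribute
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position + offset, 1.0);
    }
  `,
  fragmentShader: `
    precision highp float;
    void main() { gl_FragColor = vec4(1.0); }
  `
});

// All 1000 cubes are rendered with a single drawcall.
scene.add(new THREE.Mesh(geometry, material));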
You have a webgl tag, so I'm going to give a non-three.js answer.
The best way to handle this is to allocate a float texture holding the model transform matrix data (or just vec3 positions if that's all you need). Then you allocate a mesh chunk containing all your cube data. You need to add an additional attribute, which I refer to as the modelTransform index. For each "cube instance" in the mesh chunk, write the modelTransform index value corresponding to the correct offset in the model transform data texture.
On each frame, you calculate the correct model transform data for all the cubes and write it to the model transform data texture at the correct offsets. Then upload the texture to the GPU.
In the vertex shader, access the model transform data via the modelTransform index attribute and the float texture. The rest is the same.
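A rough sketch of this scheme, assuming a WebGL2 context (texelFetch and RGBA32F need it); names such as modelTexture and aTransformIndex are illustrative:

// Vertex shader: look this cube's transform up in the float texture.
const vertexShaderSource = `#version 300 es
in vec3 aPosition;
in float aTransformIndex;      // written once per cube instance in the chunk
uniform sampler2D uModelData;  // one RGBA32F texel per cube
uniform mat4 uViewProjection;

void main() {
  vec3 offset = texelFetch(uModelData, ivec2(int(aTransformIndex), 0), 0).xyz;
  gl_Position = uViewProjection * vec4(aPosition + offset, 1.0);
}`;

// JS side, each frame: recompute the transforms and re-upload the texture.
gl.bindTexture(gl.TEXTURE_2D, modelTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA32F, cubeCount, 1, 0,
              gl.RGBA, gl.FLOAT, transformData); // Float32Array, 4 floats per cube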
This is what I am using in my engine, and it works well for smallish objects such as cubes. Note, however, that updating 150,000 cubes at 60 FPS will likely take most of your CPU resources in JS. This is unavoidable regardless of which instancing scheme you take.
If the motion/animation of each cube is fixed, an even better way to do it is to upload a velocity attribute and an initial creation timestamp attribute for each cube instance. On each frame, send the current time as a uniform and calculate the position as "pos += attr_velocity * getDeltaTime(attr_initTime, unif_currentTime);". This skips the CPU work altogether and allows you to render a much higher number of cubes.
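The shader-side idea, sketched with the attribute and uniform names from the answer (getDeltaTime is just the time difference written out, and animatedPosition is an illustrative helper):

// GLSL chunk implementing the fixed-motion idea; extends the shader above.
const motionChunk = `
in vec3 attr_velocity;        // constant velocity of this cube
in float attr_initTime;       // timestamp of when the cube was created
uniform float unif_currentTime;

vec3 animatedPosition(vec3 basePos) {
  float dt = unif_currentTime - attr_initTime; // getDeltaTime(...)
  return basePos + attr_velocity * dt;         // no per-frame CPU update needed
}`;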
I'm working on porting an existing three.js project to WebVR + Oculus Rift. Basically, this app takes an STL file as input, creates a THREE.Mesh based on it, and renders it in an empty scene. I managed to make it work in Firefox Nightly with the VREffect plugin to three.js and VRControls. The problem I have is that models rendered in VR aren't really 3D. Namely, when I move the HMD back and forth, an active 3D model doesn't get closer/farther, and I can't see different sides of the model. It looks as if the model were a flat background image stuck in its position. If I add a THREE.AxisHelper to the scene, it is transformed correctly when the HMD is moved.
Originally, THREE.OrbitControls were used in the app and models were rotated and moved properly.
There's quite a lot of source code, so I'll post snippets on demand.
It turned out that technically there was no problem. The issue was essentially the different scales of my models and the Oculus movements. When VRControls is used with default settings, it reports the position of the HMD as it reads it from the Oculus, in meters. So the range of movement of my head could barely exceed 1 m, whereas the average size of my models is a few dozen of their own units. Used together in the same scene, it was as if the viewer were an ant looking at a giant model. Naturally, the ant has to walk a while to see another side of the model. That's why the model didn't seem like a 3D body.
Fortunately, there's a scale property on VRControls that should be used for adjusting the scale of HMD movements. When I set it to about 30, everything worked pretty well.
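For reference, that is a one-line change (camera here stands for the app's scene camera; 30 is the value from this answer):

const controls = new THREE.VRControls(camera);
controls.scale = 30; // 1 m of real head movement = 30 scene units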
Thanks to @brianpeiris's comment, I decided to check the coordinates of the model and camera once again to make sure they weren't knit to each other. And that led me to the solution.
I have written an app using Three.js (r73) that allows the user to load multiple .dae files using the ColladaLoader.
If the user selects a sufficient number of objects, the texture will not show for any of the objects... at this point I get this:
WebGLRenderer: trying to use 26 texture units while this GPU supports only 16
The error message seems fairly self-explanatory - does this mean I can only load 16 textures at any one time? Is there a way around this? Can I render my scene with half my objects, clear the texture units, and then render the other half?
Quite new to Three.js - so sorry if it's a stupid question.
This number is based on what your GPU supports; you can see it listed at WebGL Report, under "Max Texture Image Units: 16".
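You can also query the limit for your own GPU directly through the standard WebGL API:

const gl = document.createElement('canvas').getContext('webgl');
console.log(gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS)); // e.g. 16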
Many people confuse this number with how many textures you can have in a single scene; that is not what it means. It represents how many textures you can use for a single object (i.e., in a single draw call).
So if you have an extremely complicated object with hundreds of separate textures, you'll have to find a way either to merge the textures together or to split the object into multiple objects that can be drawn separately.
However, if you draw 1000 separate objects, each with a different texture, this shouldn't be a problem.
The warning comes from exceeding the maximum number of "total" texture units, not the vertex texture units. Refer to WebGLRenderer.js, function getTextureUnit(), for the reasoning behind this and the printing of this error message (e.g. https://searchcode.com/codesearch/view/96702746/, line 4730).
To avoid the warning, analyse the shaders and reduce the number of texture units they require for rendering.
I am writing a simple minecraft clone in THREE.js.
However, the result is very laggy.
I am using box geometry for the voxels, but I need to remove blocks when they are broken, and I need to use mouse picking.
I have heard that merging geometries speeds things up, but as far as I am aware, that would mean you cannot remove any of the voxels or use mouse picking.
What other ways are there to speed things up in THREE.js?
Using a box per voxel (making one draw call per voxel) will be too slow on any machine, even if you wrote it in assembly language.
You need to build a mesh for every section of your world: divide the world into 48x48x48-unit chunks and build one mesh that contains all the voxels in each chunk. When the user edits a box, you edit the mesh (the vertices) rather than removing a Box object.
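A rough sketch of that approach; the helpers isSolid and addVisibleFaces are hypothetical placeholders for your own voxel lookup and face generation:

const CHUNK = 48;

function buildChunkGeometry(world, cx, cy, cz) {
  const positions = [];
  for (let x = 0; x < CHUNK; x++)
    for (let y = 0; y < CHUNK; y++)
      for (let z = 0; z < CHUNK; z++)
        if (isSolid(world, cx + x, cy + y, cz + z))
          // Only emit faces that border air, so hidden faces cost nothing.
          addVisibleFaces(positions, world, cx + x, cy + y, cz + z);

  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position',
    new THREE.Float32BufferAttribute(positions, 3));
  geometry.computeVertexNormals();
  return geometry; // one mesh (one draw call) per 48x48x48 chunk
}

When a block is broken, rebuild only the chunk that contains it. Mouse picking still works: a THREE.Raycaster hit on the chunk mesh gives you the face position, which maps back to a voxel coordinate.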