I have a project with 100 BoxGeometries, each with its own image (around 10 KB) mapped onto it. I've noticed performance problems and skipped frames on mobile devices when the boxes are first rendered, which I'd like to eliminate.
At the moment I load up all images first, and create textures, before adding the boxes:
let loader = new THREE.TextureLoader()
for (let img of assets) {
  loader.load(img.url, texture => {
    textures[img.name] = texture
    assets.splice(assets.indexOf(img), 1)
    if (!assets.length) {
      addBoxes()
    }
  })
}
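As a side note on the completion check above: splicing the assets array works, but wrapping each callback in a Promise gives a single completion point without mutating the array. This is a minimal sketch; `loadTexture` is a stand-in for `THREE.TextureLoader#load` (url, onLoad), so the logic can be shown without the library.

```javascript
// Wraps each texture load in a Promise so one .then() fires once every
// texture is in, replacing the splice-based countdown.
function preloadTextures(assets, loadTexture) {
  const textures = {};
  return Promise.all(
    assets.map(img => new Promise(resolve => {
      loadTexture(img.url, texture => {
        textures[img.name] = texture; // keyed by name, as in the question
        resolve();
      });
    }))
  ).then(() => textures);
}
```

With three.js you would pass `(url, onLoad) => new THREE.TextureLoader().load(url, onLoad)`; a `THREE.LoadingManager` with its `onLoad` callback achieves the same end without Promises.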
I then lay out 100 boxes in a grid; here's some pseudo-code to illustrate:
textures.forEach((texture, i) => {
  let box = new THREE.Mesh(
    new THREE.BoxGeometry(1, .04, 1),
    [
      blackMaterial,
      blackMaterial,
      new THREE.MeshBasicMaterial({ map: texture }),
      blackMaterial,
      blackMaterial,
      blackMaterial
    ]
  )
  box.position.x = (i % 10) - 5
  box.position.z = Math.floor(i / 10) - 5
  scene.add(box)
})
requestAnimationFrame( step )
I have a THREE.OrthographicCamera that can pan and zoom around these boxes. I've noticed that when boxes first come into view, memory spikes; but once every box has been seen, the heap size falls drastically, performance becomes smooth, and no frames are dropped.
Please note that after about 6 seconds the memory usage suddenly flattens out; this is the point at which every box has been seen once.
To combat this, I have tried the frustumCulled property on the boxes:
box.frustumCulled = false
This solves the issue in some ways: once loaded, performance is extremely smooth from the start and the memory issues are gone. However, I don't seem to have a way to detect when all the meshes have finished uploading, so the initial load is slow, and my intro animation and early interactions are janky and performance-intensive because they start too early.
I understand that eager-loading everything will increase the initial load time, which would be fine for this project if it avoids the memory issues of lazy loading. But what other options do I have? Perhaps box.frustumCulled isn't the right approach.
And is there a way to attach event listeners to this loading activity? Ideally I would fully load all the boxes, as if each had been seen once, behind a preloader, and fire an init method once the system is ready.
A few ideas:
1. Share geometry
All your meshes are cubes, so they can share a single geometry definition. Even if you want them to be different sizes, you can apply a scale transform to each mesh instead.
let boxGeo = new THREE.BoxGeometry(1, .04, 1)
textures.forEach((texture, i) => {
  let box = new THREE.Mesh(
    boxGeo,
    [
      blackMaterial,
      blackMaterial,
      new THREE.MeshBasicMaterial({ map: texture }),
      blackMaterial,
      blackMaterial,
      blackMaterial
    ]
  )
  // ...position and add to the scene as before
})
Now your program only needs to upload one geometry definition to the GPU, rather than one per box.
2. Render before textures
This is a shot in the dark, but try creating your cubes up-front with a transparent placeholder material and applying your textures later. My thought is that getting the geometry upload out of the way up front will shave some time off your initial render.
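A minimal sketch of that idea, with a plain object standing in for a THREE.MeshBasicMaterial: create every box up-front with a placeholder material, then swap the texture in when it arrives.

```javascript
// Assigns a freshly loaded texture to an existing material. Going from
// no map to a map changes the material's shader, so three.js requires
// needsUpdate = true to trigger a recompile.
function applyLoadedTexture(material, texture) {
  material.map = texture;
  material.needsUpdate = true;
  return material;
}
```

In the loader callback you would then call `applyLoadedTexture(box.material[2], texture)` for each box (index 2 being the textured face in the question's material array).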
3. Instances
I'm not up to date on how three.js handles instancing, but you might be able to use InstancedMesh to create your cubes and improve overall rendering performance by collapsing the boxes into a single draw call.
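With InstancedMesh, per-instance placement goes through `setMatrixAt`; note that per-instance textures would need a texture atlas or texture array, since all instances share one material. Here is the question's index-to-grid-cell mapping factored out, with the assumed three.js calls shown only as comments:

```javascript
// Maps instance index i to the grid cell used in the question
// (10 columns, centered on the origin). With three.js you would then do:
//   const mesh = new THREE.InstancedMesh(boxGeo, material, count);
//   matrix.setPosition(x, 0, z); mesh.setMatrixAt(i, matrix);
function gridCell(i, cols = 10) {
  return { x: (i % cols) - cols / 2, z: Math.floor(i / cols) - cols / 2 };
}
```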
Related
I'm using an augmented reality library that does some fancy image tracking stuff. After learning a whole lot about this project, I'm now beyond my current ability and could use some help. For our purposes, the library creates an (empty) anchor point at the center of an IRL image target in-camera. Then moves the virtual world around the IRL camera.
My goal is to drive plane.rotation to always face the camera, while keeping plane.position locked to the anchor point. Additionally, plane.rotation values will be referenced later in development.
const THREE = window.MINDAR.IMAGE.THREE;

document.addEventListener('DOMContentLoaded', () => {
  const start = async () => {
    // initialize MindAR
    const mindarThree = new window.MINDAR.IMAGE.MindARThree({
      container: document.body,
      imageTargetSrc: '../../assets/targets/testQR.mind',
    });
    const { renderer, scene, camera } = mindarThree;
    // create AR object
    const geometry = new THREE.PlaneGeometry(1, 1.25);
    const material = new THREE.MeshBasicMaterial({ color: 0x00ffff, transparent: true, opacity: 0.5 });
    const plane = new THREE.Mesh(geometry, material);
    // create anchor
    const anchor = mindarThree.addAnchor(0);
    anchor.group.add(plane);
    // start AR
    await mindarThree.start();
    renderer.setAnimationLoop(() => {
      renderer.render(scene, camera);
    });
  };
  start();
});
Everything I've tried so far is already massaged into the (functioning) draft code above. I have, however, done some research and found a couple of avenues that might or might not work; I'm tossing them out to see what sticks or inspires another solution. Skill-wise I'm still in the beginner category, so any help figuring this out is much appreciated.
identify plane object by its group index number;
drive (override lib?) object rotation (x, y, z) to face camera;
possible solutions from dev:
"You can get those values through the anchor object, e.g. anchor.group.position. Meaning that you can use the current three.js API and get those values but without using it for rendering i.e. don't append the renderer.domElement to document."
"You can hack into the source code of mindar (it's open source)."
"Another way might be easier for you to try is to just create another camera yourself. I believe you can have multiple cameras, and just render another layer on top using your new camera."
I think it may be as simple as calling lookAt in the animation loop function:
// start AR
await mindarThree.start();
renderer.setAnimationLoop(() => {
  plane.lookAt(new THREE.Vector3());
  renderer.render(scene, camera);
});
This assumes the camera is always located at (0,0,0) (i.e., new THREE.Vector3()). This seems to be true from my limited testing. I found it helpful to debug by copy-pasting the MindAR three.js example into this codepen and printing some relevant values to the console.
Also note that, internally, MindAR's three.js module seems to directly modify the world matrix of the anchor.group object without modifying the position/rotation/scale parameters.
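Because the world matrix is written directly, reading `anchor.group.position` will not reflect tracking; decomposing the matrix (e.g. `matrixWorld.decompose(position, quaternion, scale)`) does. As a minimal illustration, three.js stores `Matrix4.elements` column-major, so the world translation sits at indices 12 through 14:

```javascript
// Extracts the translation from a column-major 4x4 matrix array, the
// layout used by THREE.Matrix4#elements. For rotation, use the full
// matrixWorld.decompose(position, quaternion, scale) instead.
function translationOf(elements) {
  return { x: elements[12], y: elements[13], z: elements[14] };
}
```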
How does one traverse a mesh loaded with GLTFLoader properly to walk through all layers?
I am trying to do a simple selective bloom pass on a model by traversing all of the model's parts, setting them to the bloom layer, and then rendering the combined original + bloomed layers. However, as the images below show, only the yellow outer part of the model is picked up during the traversal. Does anyone know how to reach the rest of the model for layer setting?
For reproduction, the model can be downloaded from here:
https://github.com/whatsmycode/Models/blob/master/PrimaryIonDrive.glb
This is the code I currently use:
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const BLOOM_LAYER = 1;

new GLTFLoader().load('models/PrimaryIonDrive.glb', function (gltf) {
  const model = gltf.scene;
  model.traverse(function (child) {
    child.layers.enable(BLOOM_LAYER);
  });
  scene.add(model);
});
This is the resulting image; bloom is applied to the yellow outer rings only.
This is the bloom mask only.
The issue was that I had not added the point and ambient lights to both layers. The bloomed object has materials that require light to show color for all parts except the emissive yellow rings. To fix the problem, I simply enabled the lights on both layers before adding them to the scene.
const pointLight = new THREE.PointLight(0xffffff);
pointLight.layers.enable(ENTIRE_LAYER);
pointLight.layers.enable(BLOOM_LAYER);
const ambientLight = new THREE.AmbientLight(0xffffff);
ambientLight.layers.enable(ENTIRE_LAYER);
ambientLight.layers.enable(BLOOM_LAYER);
scene.add(pointLight, ambientLight);
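The reason both `enable` calls are needed: three.js Layers are a 32-bit mask, and a light affects an object only when their masks share a set bit. A minimal model of that semantics (the mask values here are illustrative, not taken from the answer):

```javascript
// THREE.Layers in miniature: enable(n) sets bit n of the mask; two
// objects interact when their masks intersect. New objects default to
// layer 0, i.e. mask 1.
const enableLayer = (mask, layer) => mask | (1 << layer);
const layersIntersect = (a, b) => (a & b) !== 0;
```

So a light enabled on layers 0 and 1 (mask 3) reaches a bloom camera that only tests layer 1 (mask 2), while a light left on the default layer 0 (mask 1) does not.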
Hello, I am trying to adjust the bump map scale on my Collada model.
I tried this, but it did not work:
Three.js ColladaLoader bumpScale/weighting? Way to adjust bump map intensity
I am using r77
And this example: http://threejs.org/examples/#webgl_loader_collada
I replaced the current model with a more complex one that consists of 3 meshes, exported together in one .dae file; they contain a couple of materials and textures.
All textures sit next to the .dae in the same folder and work fine.
It is just the normal map that is not working, and the alpha textures are a bit odd.
I tried different things, like so:
// ------ none of these work ------
if (child instanceof THREE.SkinnedMesh) {
  child.material.normalScale = (0.03, 0.03); // adjusting bump height
  // trying to change the bump value:
  //controlPanel.children[0].material = new THREE.MeshPhongMaterial({ map: controlPanel.children[0].material.map });
  //collada.scene.children[0].children[0].material.normalScale = (0.03, 0.03);
  //dae.children[2].material = new THREE.MeshBasicMaterial({ color: 0x333333, wireframe: true });
  //child.material.color.setRGB(1, 1, 0);
  child.material.bumpScale = 0.03;
}
None of them seem to have any effect on my model at all, and I cannot find a good source that explains the reasons.
Hopefully someone here knows the problem!
Greets.
OK, so with the help of a guy named "bai", I found out that Blender's Collada exporter does not write the bump type inside the bump section of the .dae:
<bump bumptype="NORMALMAP">
Instead it only writes:
<bump>
which results in the normal map not working.
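Separately, there is a plain JavaScript gotcha in the attempts from the question above: `(0.03, 0.03)` is the comma operator, which evaluates to the single number 0.03, so `child.material.normalScale = (0.03, 0.03)` assigns a number where three.js expects a two-component vector (`new THREE.Vector2(0.03, 0.03)`, or `child.material.normalScale.set(0.03, 0.03)`).

```javascript
// The comma operator evaluates both operands and yields the last one,
// so this is just the number 0.03, not an (x, y) pair.
const notAVector = (0.03, 0.03);
```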
I've exported an animated model from Blender which instantiates without any issue. I'm able to create the THREE.Animation and the model, but I found there was no animation. I realized I needed to set skinning to true on each material, but when I do, the entire mesh goes missing.
Below is my (quick and messy) code trying to get everything to work.
function loadModel() {
  var loader = new THREE.JSONLoader();
  loader.load('assets/models/Robot.js', function (geom, mat) {
    _mesh = new THREE.Object3D();
    _scene.add(_mesh);
    geom.computeBoundingBox();
    ensureLoop(geom.animation);
    THREE.AnimationHandler.add(geom.animation);
    for (var i = 0; i < mat.length; i++) {
      var m = mat[i];
      //m.skinning = true; <-- Uncommenting this makes the model disappear
      //m.morphTargets = true; <-- This causes all sorts of WebGL warnings
      m.wrapAround = true;
    }
    var mesh = new THREE.SkinnedMesh(geom, new THREE.MeshFaceMaterial(mat));
    mesh.scale.set(400, 400, 400);
    mesh.position.set(0, -200, 0);
    mesh.rotation.set(Utils.toRadians(-90), 0, 0);
    _mesh.add(mesh);
    _robot = mesh;
    Render.startRender(loop);
    var animation = new THREE.Animation(mesh, geom.animation.name);
    animation.JITCompile = false;
    animation.interpolationType = THREE.AnimationHandler.LINEAR;
    animation.play();
  });
}
I believe I'm updating the AnimationHandler correctly in my loop
function loop() {
  _mesh.rotation.y += 0.01;
  var delta = 0.75 * _clock.getDelta();
  THREE.AnimationHandler.update(delta);
}
In the metadata section of the exported JSON file, are the numbers of morphTargets and bones both greater than 0?
I think you followed the example here:
http://threejs.org/examples/#webgl_animation_skinning_morph
in which the animated model uses both morph target and skeletal animation (see Wikipedia for the theoretical concepts).
If the animated model uses only skeletal animation, as in this example: http://alteredqualia.com/three/examples/webgl_animation_skinning_tf2.html
you have to instantiate a THREE.SkinnedMesh object and then set only the m.skinning property to true.
I was having the same problem just now. What worked for me was to remake the model with its scale applied and with keyframes for LocRotScale, not just location.
Lately, I've encountered a similar issue of a mesh disappearing while exporting a Blender skinning animation to JSON. It turned out the mesh I was using had doubled vertices (one vertex hiding another). Everything looks fine while creating the vertex groups and animations in Blender, but when I imported the mesh via three.js, it kept disappearing as soon as the animation started. In other words, if even one vertex of your mesh is omitted from the vertex groups, you will see this disappearing behavior. To prevent this, I now use Blender's "remove doubles" function to validate the mesh's integrity before exporting to JSON. You might have hit the same issue, and redoing your mesh may fix it. Anyway, the question is pretty old, but the topic is still valid today, so I hope this fresh info helps someone out there.
Peace INF1
I've downloaded a sphere example from: http://aerotwist.com/lab/getting-started-with-three-js/ and I can see the nice red sphere. I'd like to use a texture on it. I've tried this:
var texture = THREE.ImageUtils.loadTexture("ball-texture.jpg");
texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
texture.repeat.set( 125, 125 );
texture.offset.set( 15, 15 );
texture.needsUpdate = true;
var sphereMaterial = new THREE.MeshBasicMaterial( { map: texture } );
var sphere = new THREE.Mesh(new THREE.Sphere(radius, segments, rings),sphereMaterial);
but I can't see anything, all is black. Does anyone have a working example for sphere texture?
You might have two problems.
First, try loading it like this:
var texture = THREE.ImageUtils.loadTexture('ball-texture.jpg', {}, function () {
  renderer.render(scene, camera);
});
texture.needsUpdate = true;
Second, make sure that the texture size is a power of two (e.g. 512x512 px).
Are you using Firefox? This could be a browser problem: Firefox blocks cross-domain WebGL textures, and the result is a black texture instead. See this post for more info: http://hacks.mozilla.org/2011/06/cross-domain-webgl-textures-disabled-in-firefox-5/
Do you have a rendering loop, or did you render the scene just once?
You need to have a rendering loop so that when the THREE.ImageUtils loads the image and updates the texture, you re-render the scene with the now updated texture.
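That pattern can be sketched independently of the browser; here `raf` stands in for requestAnimationFrame and `render` for `renderer.render(scene, camera)`, so the structure is visible without a WebGL context:

```javascript
// A continuous render loop: schedule the next frame, then draw. Because
// drawing repeats every frame, a texture that finishes loading later is
// simply picked up on the next pass, with no extra wiring.
function makeRenderLoop(raf, render) {
  function tick() {
    raf(tick);  // queue the next frame first
    render();   // draw with whatever resources are ready now
  }
  return tick;
}
```

In the browser you would start it with `makeRenderLoop(requestAnimationFrame, () => renderer.render(scene, camera))()`.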
All the three.js examples seem to rely on this technique: fire off several async operations that fetch remote resources, start the rendering loop, and let the scene update as the resources arrive.
IMHO this is three.js's biggest gotcha for JavaScript newbies (like me) who are not familiar with how async operations work.
I had this problem too: if you are loading the HTML as a file (i.e. locally, not from a web server), many browsers (Chrome, for example) will not let you load images the standard three.js way, as it is a security violation.