I am attempting to render multiple scenes using the same renderer. I have already looked at other Stack Overflow answers to this question, but I am unable to do this successfully.
I load each of my glTF models separately. In my HTML file, they are assigned to the same canvas. When I render, the first scene doesn't appear, and the second one shows a black box!
Conceptually, I want to render two meshes without using two WebGL renderers. My understanding is that the approach below lets me render one scene, clear the depth buffer, then render the second on top, with the first render still visible behind it.
Here are the relevant snippets of code:
// Load the first glTF resource
loader.load(
    // my first gltf file here
    function ( gltf ) {
        scene.add( gltf.scene );
    }
);

// Load the second glTF resource
loader.load(
    // my second gltf file here
    function ( gltf ) {
        scene2.add( gltf.scene );
    }
);
// Canvas
const canvas = document.querySelector('canvas name here')
// Scene
const scene = new THREE.Scene()
const scene2 = new THREE.Scene();
// Renderer
const renderer = new THREE.WebGLRenderer({
canvas: canvas,
antialias: true,
alpha: true
})
renderer.autoClear = false       // take manual control of clearing
renderer.clear()                 // clear color/depth/stencil once per frame
renderer.render(scene, camera)   // draw the first scene
renderer.clearDepth()            // clear only the depth buffer, keep the color
renderer.render(scene2, camera)  // draw the second scene on top
Edit: to be clear, I am trying to replicate the effect shown on this webpage: https://threejs.org/examples/?q=multiple#webgl_multiple_elements . Notice how each scene has its own container at a different location on the webpage, yet they all share the same renderer. I am not randomizing my geometry.
Adding scene2 to the renderer is overwriting scene1. I believe the renderer can only handle one scene.
You can add both GLTFs to the same scene.
loader.load('exampleOne.gltf', (gltf) => {
scene.add(gltf.scene);
});
loader.load('exampleTwo.gltf', (gltf) => {
scene.add(gltf.scene);
});
To refactor the above, you can create one GLTFLoader and call it twice.
const onLoad = (gltf) => {
    scene.add(gltf.scene);
};
loader.load('exampleOne.gltf', onLoad);
loader.load('exampleTwo.gltf', onLoad);
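As for the edit: the linked webgl_multiple_elements example uses exactly one renderer on a single full-page canvas and draws each scene into the screen rectangle of its own placeholder element via the scissor test. A minimal sketch of that approach, assuming each scene stores its placeholder div and camera in scene.userData the way the example does:
renderer.setScissorTest(true);

function render() {
    for (const scene of scenes) {
        // screen rectangle of this scene's placeholder element
        const rect = scene.userData.element.getBoundingClientRect();
        const width = rect.right - rect.left;
        const height = rect.bottom - rect.top;
        const left = rect.left;
        // WebGL's y axis starts at the bottom of the canvas
        const bottom = renderer.domElement.clientHeight - rect.bottom;

        renderer.setViewport(left, bottom, width, height);
        renderer.setScissor(left, bottom, width, height);
        renderer.render(scene, scene.userData.camera);
    }
}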
Related
How does one traverse a mesh loaded with GLTFLoader properly to walk through all layers?
I am trying to do a simple selective bloom pass on a model by traversing all of the model's parts, setting them to the bloom layer, and then rendering the combined original + bloomed layers. However, as the images below show, only the yellow outer part of the model appears to be affected by the traversal. Does anyone know how to extract the rest of the model for layer setting?
For reproduction, the model can be downloaded from here:
https://github.com/whatsmycode/Models/blob/master/PrimaryIonDrive.glb
This is the code I currently use:
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
let BLOOM_LAYER = 1;
new GLTFLoader().load( 'models/PrimaryIonDrive.glb', function ( gltf ) {
const model = gltf.scene;
model.traverse( function( child ) {
child.layers.enable(BLOOM_LAYER);
});
scene.add( model );
});
This is the resulting image; bloom is applied to the yellow outer rings only.
This is the bloom mask only.
The issue was that I had not added the point and ambient lights to both layers. The bloomed object has materials that require light to show color for all parts except the emissive yellow rings. To fix the problem, I simply enabled the lights for both layers before adding them to the scene.
const ENTIRE_LAYER = 0; // layer 0 is the default layer every object starts on

const pointLight = new THREE.PointLight(0xffffff);
pointLight.layers.enable(ENTIRE_LAYER);
pointLight.layers.enable(BLOOM_LAYER);

const ambientLight = new THREE.AmbientLight(0xffffff);
ambientLight.layers.enable(ENTIRE_LAYER);
ambientLight.layers.enable(BLOOM_LAYER);

scene.add(pointLight, ambientLight);
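For completeness, the layer setup then drives a two-pass render loop. A rough sketch along the lines of the official selective-bloom example; bloomComposer and finalComposer are assumed to be EffectComposer instances wired as in that example (bloomComposer holds a RenderPass plus an UnrealBloomPass with renderToScreen = false, finalComposer holds a RenderPass plus a ShaderPass that additively blends bloomComposer.renderTarget2.texture):
function render() {
    // pass 1: render only bloom-layer objects into the bloom buffer
    camera.layers.set(BLOOM_LAYER);
    bloomComposer.render();

    // pass 2: render the full scene; the final shader pass composites
    // the bloom texture on top
    camera.layers.set(ENTIRE_LAYER);
    finalComposer.render();
}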
I've been working on a three.js project to try to learn the framework. I've got a basic model floating around that works fine in desktop browsers but crashes repeatedly on mobile. I uploaded the project to my server: http://threedeesneaker.404vanity.com/
Is there any way to optimize this for mobile devices? I tried both Chrome and Safari on iPhone and iPad.
The code itself:
(function() {
var scene, camera, renderer;
var geometry, material, mesh, sneaker;
init();
animate();
function init() {
scene = new THREE.Scene();
var WIDTH = window.innerWidth,
HEIGHT = window.innerHeight;
var ambient = new THREE.AmbientLight( 0x444444 );
scene.add( ambient );
camera = new THREE.PerspectiveCamera( 3, WIDTH / HEIGHT, 1, 20000 );
camera.position.z = 1000;
window.addEventListener('resize', function() {
var WIDTH = window.innerWidth,
HEIGHT = window.innerHeight;
renderer.setSize(WIDTH, HEIGHT);
camera.aspect = WIDTH / HEIGHT;
camera.updateProjectionMatrix();
});
geometry = new THREE.BoxGeometry( 200, 200, 200 );
material = new THREE.MeshBasicMaterial( { color: 0xff0000, wireframe: true } );
mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
// prepare loader and load the model
var oLoader = new THREE.OBJMTLLoader();
oLoader.load('models/sneaker.obj', 'models/sneaker.mtl', function(object) {
object.scale.set(1, 1, 1);
object.rotation.y = 600;
object.rotation.z= 600;
sneaker = object;
scene.add(sneaker);
});
// var loader = new THREE.OBJLoader();
// loader.load('models/sneaker.obj', function(object) {
// sneaker = object;
// sneaker.scale.set(1,1,1);
// sneaker.rotation.y = 600;
// sneaker.rotation.z= 600;
// scene.add(sneaker);
// });
renderer = new THREE.WebGLRenderer();
renderer.setSize( WIDTH, HEIGHT );
renderer.setClearColor(0x333F47, 1);
var light = new THREE.PointLight(0xffffff);
light.position.set(-100,200,100);
scene.add(light);
document.body.appendChild( renderer.domElement );
}
function animate() {
requestAnimationFrame( animate );
mesh.rotation.x += 0.01;
mesh.rotation.y += 0.02;
sneaker.rotation.x += 0.01;
sneaker.rotation.y += 0.02;
renderer.render( scene, camera );
}
})();
First, a comment on your JS: check if typeof sneaker !== 'undefined' in your render loop before rotating the mesh; until the model has loaded, it generates errors.
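For example, a minimal guard (your loop from above, with the check added):
function animate() {
    requestAnimationFrame( animate );
    mesh.rotation.x += 0.01;
    mesh.rotation.y += 0.02;
    // the OBJ/MTL pair loads asynchronously, so sneaker stays undefined
    // until the load callback has run
    if (typeof sneaker !== 'undefined') {
        sneaker.rotation.x += 0.01;
        sneaker.rotation.y += 0.02;
    }
    renderer.render( scene, camera );
}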
Your scene crashes because you are using materials that are too detailed; I can see a 4096x4096 bump map, for instance. It strongly increases frame rendering time on desktop and is probably the reason the page is unresponsive on mobile: the fragment shader computations become too heavy.
However, it would be a shame to completely delete those details you spent time on. What you can do instead is add a device detector in your JS and use it to serve two different models, one for desktop and one for mobile.
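A minimal sketch of that idea; the userAgent test is a common heuristic rather than a reliable detector, and the low-detail model paths are hypothetical:
var isMobile = /Mobi|Android|iPhone|iPad/i.test(navigator.userAgent);

// hypothetical low-detail variants of the same model for mobile
var objUrl = isMobile ? 'models/sneaker-low.obj' : 'models/sneaker.obj';
var mtlUrl = isMobile ? 'models/sneaker-low.mtl' : 'models/sneaker.mtl';

oLoader.load(objUrl, mtlUrl, function(object) {
    sneaker = object;
    scene.add(sneaker);
});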
But there are further important improvements you can bring. As they are part of my original post I let them there :) :
Resize your textures. You are using two 4096 x 4096 JPGs of 4.5 MB each; this is heavy (note that WebGL-enabled smartphones with only 500 MB of RAM are still being released these days). Moreover, you have very few details that justify it. You could change your UVs to shrink the parts with no detail, and probably resize the picture to 512 x 512. Finally, use a JPG compressor to cut the file size by another 70-80%; depending on your picture, PNG can be a better choice too. The device's GPU memory is a separate concern: if you still need to improve performance, you can check in the script whether the client supports the .pvr or .ktx texture formats, which are optimized for GPU memory.
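A sketch of that support check through the renderer; these are the standard WebGL extension names, and the texture paths are hypothetical:
// extensions.get returns the extension object if supported, null otherwise
var supportsPVR  = !!renderer.extensions.get('WEBGL_compressed_texture_pvrtc');
var supportsETC1 = !!renderer.extensions.get('WEBGL_compressed_texture_etc1');

// fall back to a plain compressed JPG when neither format is available
var textureUrl = supportsPVR  ? 'textures/sneaker.pvr' :
                 supportsETC1 ? 'textures/sneaker.ktx' :
                                'textures/sneaker.jpg';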
An important problem that makes your visualization inappropriate for mobile devices is that you have ... 23 render calls, because you are using 15 textures and 23 geometries.
What this means is that, for each frame, 23 different geometries have to be bound before the final frame renders. Some mobile CPU-GPU combinations cannot do that 60 times per second. Don't plan on more than 10 render calls for average mobile devices. That means fewer geometries with fewer materials. Merge, as sketched below.
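As an illustration of the merging step in current three.js (the helper was called mergeBufferGeometries in older releases; a single shared material, sharedMaterial, is assumed here):
import * as BufferGeometryUtils from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// collect the geometries of all meshes, baking world transforms in
object.updateMatrixWorld(true);
const geometries = [];
object.traverse((child) => {
    if (child.isMesh) {
        const g = child.geometry.clone();
        g.applyMatrix4(child.matrixWorld);
        geometries.push(g);
    }
});

// one geometry, one material, one render call
const merged = BufferGeometryUtils.mergeGeometries(geometries);
scene.add(new THREE.Mesh(merged, sharedMaterial)); // sharedMaterial is assumed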
I have not inspected your .obj file in detail to understand how you end up with 23 geometries, nor where all your textures come from; that part is up to you.
A lot of 3D apps (OpenGL) in the app stores have more than 23 objects, of course. But the stores know the apps and they know your phone, so they can do the compatibility work and hide an app from devices that are too weak.
Here is a tip to check the render calls, geometries and materials in your scene: in your main function, after setting up the renderer, store a reference to it on the window object with window.renderer = renderer. Now, at runtime in your console, once the resources have loaded, type renderer.info. It returns those data in an object.
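For instance (the field layout below matches current three.js; older versions group the counters slightly differently):
window.renderer = renderer; // expose the renderer for console inspection

// later, in the devtools console:
renderer.info.render.calls;      // draw calls in the last frame
renderer.info.memory.geometries; // geometries resident in GPU memory
renderer.info.memory.textures;   // textures resident in GPU memory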
Is it possible to differentiate between the meshes within one .js file exported from blender and animate them separately using Three.js?
The cube I would like to select, named "Cube", loads properly. However, when I try to get it by name or ID, the var item1 comes back undefined.
loader = new THREE.JSONLoader();
loader.load('engine.js', function (geometry, materials) {
var mesh, material;
material = new THREE.MeshFaceMaterial(materials);
mesh = new THREE.Mesh(geometry, material);
mesh.scale.set(1, 1, 1);
var item1 = scene.getObjectByName("Cube");
item1.position.x = 15;
scene.add(mesh);
});
I found this post but it seems unresolved: Three.js load multiple separated objects / JSONLoader
What is the best approach to loading multiple meshes via JSONLoader? I'd prefer to load them together as one .js file and just select the ones I would like to animate.
Thanks for your help!
In your Blender scene, you need to name every mesh you want to access independently in three.js. Then you can use Object3D.getObjectByName() to access each mesh in three.js.
Yes, it is possible to load an entire scene with several meshes from a json file exported from Blender!
You can see the complete process described in my answer to the cited post.
So, you can differentiate between the meshes using the getObjectByName method and manipulate them separately. But it is important to know that the loaded object isn't a Geometry anymore; it now has the Scene type and must be handled in a different way.
You must change the loading code to something like this:
loader = new THREE.JSONLoader();
loader.load( "obj/Books.json", function ( loadedObj ) {
var surface = loadedObj.getObjectByName("Surface");
var outline = loadedObj.getObjectByName("Outline");
var mask = loadedObj.getObjectByName("Mask");
mask.scale.set(0.9, 0.9, 0.9);
scene.add(surface);
scene.add(outline);
scene.add(mask);
} );
In the above code we can indeed animate the surface, outline and mask meshes independently.
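For instance, assuming surface, outline and mask are stored somewhere the render loop can reach, each one can be driven separately (the motions below are arbitrary examples):
function animate() {
    requestAnimationFrame(animate);
    surface.rotation.y += 0.01;                     // spin one mesh
    outline.rotation.y -= 0.01;                     // counter-spin another
    mask.position.y = Math.sin(Date.now() * 0.001); // bob the third
    renderer.render(scene, camera);
}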
I'm looking at the THREE.js example located here and wondering how to prevent the 'flattening' of scenes rendered as textures. In other words, the scene loses the illusion of depth when rendered to a WebGLRenderTarget.
I have looked everywhere, including in the THREE.js documentation, and have found no mention of this kind of functionality, probably because it would put a significant, unnecessary load on the user's processor (except in very particular cases). Perhaps this is possible in pure WebGL, though?
EDIT: Downvoters - why is this question poor? I have done significant research into this matter, but since I'm new to WebGL, I can't exactly spout senseless code... How do I improve my query?
I think you want to use screen-space projections instead of UV projections, if that makes sense. Given your TV example, the screen has UV points that get transformed as you move the camera around. You want something that stays put, i.e. no matter how much you move, you're looking at the same thing. I'm not sure how this is done without shaders, but in fragment shaders you have gl_FragCoord.
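A sketch of that idea as a three.js ShaderMaterial; rtTexture is assumed to be the WebGLRenderTarget from the example (in current three.js its color attachment is rtTexture.texture), and the resolution uniform must be kept in sync with the canvas size:
const screenSpaceMaterial = new THREE.ShaderMaterial({
    uniforms: {
        map: { value: rtTexture.texture },
        resolution: { value: new THREE.Vector2(window.innerWidth, window.innerHeight) }
    },
    fragmentShader: `
        uniform sampler2D map;
        uniform vec2 resolution;
        void main() {
            // sample by screen position, not by the mesh's UVs,
            // so the image stays put as the object moves
            vec2 uv = gl_FragCoord.xy / resolution;
            gl_FragColor = texture2D(map, uv);
        }
    `
});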
Because THREE.js "flattens" every scene it renders, all that's needed is a change of perspective (relative to the main camera of the main scene) to maintain the illusion of depth in render targets. Here's a skeleton of something that would do that:
var scene = new THREE.Scene(),
rtScene = new THREE.Scene(),
camera = new THREE.PerspectiveCamera( ..... ),
rtCamera = new THREE.PerspectiveCamera( ..... ),
rtCube = new THREE.Mesh(new THREE.CubeGeometry(1,1,1), new THREE.MeshBasicMaterial({ color: 0x0000ff })),
rtTexture = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, format: THREE.RGBFormat }),
material = new THREE.MeshBasicMaterial({ map: rtTexture }),
cube = new THREE.Mesh(new THREE.CubeGeometry(1,1,1), material);
function init() {
//manipulate cameras
//add any textures, lighting, etc
rtScene.add( rtCube );
scene.add( cube );
}
function update() {
//some function of cube.rotation & cube.position
//that changes the rtCamera rotation & position,
//depending on the desired effect.
}
function animate() {
requestAnimationFrame( animate );
render();
}
function render() {
renderer.clear();
update();
// render the inner scene into the texture first; in newer three.js this is
// done with renderer.setRenderTarget(rtTexture) before calling render()
renderer.render( rtScene, rtCamera, rtTexture, true );
renderer.render( scene, camera );
}
init();
animate();
I assumed in my code that camera remains stationary while cube rotates around the y axis. Each face has its own updating instance of material. The update() function for each face is a bunch of trigonometric gibberish that can be derived easily with the law of cosines. I will post a jsFiddle example as soon as I have my local copy working properly.
I'm trying to render a scene in two different renderers (successively not at the same time) but it leads to the error "GL_INVALID_OPERATION".
Here is a sample script:
var scene1 = new THREE.Scene();
var camera1 = new THREE.PerspectiveCamera( ... );
var renderer1= new THREE.WebGLRenderer( ... );
var renderer2= new THREE.WebGLRenderer( ... );
var camera2 = new THREE.PerspectiveCamera( ... );
//Render scene1 in renderer1
renderer1.render( scene1, camera1 );
//[After some user event...]
//Render scene1 in renderer2
renderer2.render( scene1, camera2 ); //This fails. getError()=1282 (i.e. GL_INVALID_OPERATION)
I know it is often discouraged to render a scene in two different renderers, even not at the same time, but I could think of no other way of solving my issue, as it is part of a very big project.
I understand there are GL data associated with scene1 that are linked to renderer1, but how can I remove those data so that I can render scene1 again in another renderer?
Beware that I am not trying to render the scene in the two renderers simultaneously (which is a different problem from https://github.com/mrdoob/three.js/issues/189).
Thanks for the help.
Regards.
The problem was related to objects/materials/textures being bound to specific OpenGL buffers. The solution is thus to unbind all of the object's children from any buffer before removing the object from one scene and adding it to another.
I'll post the code of my solution asap.
Regards.
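Until then, here is a sketch of what that unbinding looks like with today's API: geometries, materials and textures all expose dispose(), which frees the GL resources the first renderer created, so the second renderer re-uploads them on its own context (a simple material/map layout is assumed):
// free GL resources tied to the first renderer before reusing the object
object.traverse(function (child) {
    if (child.isMesh) {
        child.geometry.dispose();
        var materials = Array.isArray(child.material) ? child.material : [child.material];
        materials.forEach(function (material) {
            if (material.map) material.map.dispose(); // dispose textures too
            material.dispose();
        });
    }
});
// three.js re-uploads everything the next time a renderer draws the object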