I'm trying to render a scene in two different renderers (successively, not at the same time), but it leads to the error "GL_INVALID_OPERATION".
Here is a sample script:
var scene1 = new THREE.Scene();
var camera1 = new THREE.PerspectiveCamera( ... );
var renderer1 = new THREE.WebGLRenderer( ... );
var renderer2 = new THREE.WebGLRenderer( ... );
var camera2 = new THREE.PerspectiveCamera( ... );
//Render scene1 in renderer1
renderer1.render( scene1, camera1 );
//[After some user event...]
//Render scene1 in renderer2
renderer2.render( scene1, camera2 ); //This fails. getError()=1282 (i.e. GL_INVALID_OPERATION)
I know it is often discouraged to render a scene with two different renderers, even when not doing so at the same time, but I could think of no other way of solving my issue, as it is part of a very big project.
I understand there is GL data associated with scene1 that is linked to renderer1, but how can I remove that data so that I can render scene1 again in another renderer?
Note that I am not trying to render the scene in the two renderers simultaneously (which is different from https://github.com/mrdoob/three.js/issues/189).
Thanks for the help.
Regards.
The problem was related to objects/materials/textures being bound to specific OpenGL buffers. The solution is thus to unbind all of an object's children from any buffers before removing the object from one scene and adding it to another.
I'll post the code of my solution asap.
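In the meantime, here is a minimal sketch of the idea, assuming the dispose() APIs available in more recent three.js versions (the helper name is mine): walk the object tree and release each child's geometry, material, and textures so the GL resources created by the first renderer are freed before the object is handed to the other scene.
// Hypothetical helper: release GL resources created by the first
// renderer before the object is rendered by another one.
function releaseGLResources( root ) {
    root.traverse( function ( child ) {
        if ( child.geometry ) child.geometry.dispose();
        if ( child.material ) {
            var materials = Array.isArray( child.material ) ? child.material : [ child.material ];
            materials.forEach( function ( material ) {
                if ( material.map ) material.map.dispose(); // free any texture
                material.dispose();
            } );
        }
    } );
}
scene1.remove( object );
releaseGLResources( object );
scene2.add( object );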
Regards.
I am attempting to render multiple scenes using the same renderer. I have already referenced other Stack Overflow answers to this question, but I am unable to do it successfully.
I load each of my glTF models separately. In my HTML file, they are assigned to the same canvas. When I attempt to render, the first render doesn't appear, and the second one shows a black box!
Conceptually, I want to render two meshes, but I don't want to use two WebGL renderers. My understanding is that the aforementioned method lets me render one scene, clear the depth buffer, then render the second, with the first render still visible.
Here are the relevant snippets of code:
// Load the first glTF resource
loader.load(
    // my first gltf file here
    function ( gltf ) {
        scene.add( gltf.scene );
    }
);
// Load the second glTF resource
loader.load(
    // my second gltf file here
    function ( gltf ) {
        scene2.add( gltf.scene );
    }
);
// Canvas
const canvas = document.querySelector('canvas name here')
// Scene
const scene = new THREE.Scene()
const scene2 = new THREE.Scene();
// Renderer
const renderer = new THREE.WebGLRenderer({
    canvas: canvas,
    antialias: true,
    alpha: true
})
renderer.autoClear = false
renderer.clear()
renderer.render(scene, camera)
renderer.clearDepth()
renderer.render(scene2, camera)
Edit: to be clear, I am trying to replicate the effect shown on this webpage: https://threejs.org/examples/?q=multiple#webgl_multiple_elements. Notice how each scene has its own container at a different location on the webpage, yet uses the same renderer. I am not randomizing my geometry.
Rendering scene2 is overwriting the output of scene1. I believe the simplest approach here is to give the renderer a single scene.
You can add both GLTFs to the same scene.
loader.load('exampleOne.gltf', (gltf) => {
    scene.add(gltf.scene);
});
loader.load('exampleTwo.gltf', (gltf) => {
    scene.add(gltf.scene);
});
To refactor the above, you can create one GLTFLoader and call it twice.
const onLoad = (gltf) => {
    scene.add(gltf.scene);
};
loader.load('exampleOne.gltf', onLoad);
loader.load('exampleTwo.gltf', onLoad);
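That said, the webgl_multiple_elements example linked in the edit draws several scenes with one renderer by using the scissor test. Here is a minimal sketch of that approach (the rectangle coordinates are placeholders, and it assumes renderer.autoClear is left at its default of true; scene, scene2, and camera are the ones from the question):
// Restrict clearing and drawing to a rectangle of the shared canvas.
renderer.setScissorTest( true );
function renderRegion( scene, camera, x, y, width, height ) {
    renderer.setViewport( x, y, width, height );
    renderer.setScissor( x, y, width, height );
    renderer.render( scene, camera );
}
function animate() {
    requestAnimationFrame( animate );
    renderRegion( scene, camera, 0, 0, 400, 300 );    // left region
    renderRegion( scene2, camera, 400, 0, 400, 300 ); // right region
}
animate();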
How does one properly traverse a mesh loaded with GLTFLoader so that a layer can be set on all of its parts?
I am trying to do a simple selective bloom pass on a model by traversing all of the model's parts, setting them to the bloom layer, and then rendering the combined original + bloomed layers. However, as the images below show, only the yellow outer part of the model is actually found during the traversal. Does anyone know how to reach the rest of the model for layer setting?
For reproduction, the model can be downloaded from here:
https://github.com/whatsmycode/Models/blob/master/PrimaryIonDrive.glb
This is the code I currently use:
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
let BLOOM_LAYER = 1;
new GLTFLoader().load( 'models/PrimaryIonDrive.glb', function ( gltf ) {
    const model = gltf.scene;
    // Enable the bloom layer on every descendant of the model.
    model.traverse( function ( child ) {
        child.layers.enable( BLOOM_LAYER );
    } );
    scene.add( model );
} );
This is the resulting image; bloom is applied to the yellow outer rings only.
This is the bloom mask only.
The issue was that I had not added the point and ambient lights to both layers. The bloomed object has materials that require light to show color for all parts except the emissive yellow rings. To fix the problem, I simply enabled the lights for both layers before adding the lights to the scene.
// Layer 0 is the default layer; ENTIRE_LAYER is an assumed name for it.
const ENTIRE_LAYER = 0, BLOOM_LAYER = 1;
const pointLight = new THREE.PointLight(0xffffff);
pointLight.layers.enable(ENTIRE_LAYER);
pointLight.layers.enable(BLOOM_LAYER);
const ambientLight = new THREE.AmbientLight(0xffffff);
ambientLight.layers.enable(ENTIRE_LAYER);
ambientLight.layers.enable(BLOOM_LAYER);
scene.add(pointLight, ambientLight);
Not sure if I worded my title right, but I'm getting my feet wet with three.js. Right now I have a simple glb model that I would like to import into my scene, but I can't get the lighting right. The image below is what I want to accomplish.
But when I import my glb into my scene and add some lighting, this is what it looks like:
The model is quite dark and I can't get it to light up properly. I've tried adding ambient lights, top-down point lights, point lights as a child of the camera instance, hemisphere lights, etc., but I just can't get it to look right. Below is the code for the current lighting; I'm trying to achieve the look using point lights at the moment.
var light = new THREE.PointLight( 0xffffff, 10 );
light.position.z = 10
camera.add(light)
var light2 = new THREE.PointLight( 0xffffff, 10 );
light2.position.set(0, -20, 30)
scene.add(light2)
If anyone could give me some insight into the proper way to achieve what I'm after, that would be great.
So I did some digging, and it turns out that Blender includes this thing called an environment map:
https://discourse.threejs.org/t/exporting-blender-scene-lighting-issues/11887/8
So I had to recreate the environment in my scene as well.
After importing RoomEnvironment like so:
import { RoomEnvironment } from 'three/examples/jsm/environments/RoomEnvironment';
I created the room environment:
const environment = new RoomEnvironment();
const pmremGenerator = new THREE.PMREMGenerator( renderer );
scene.environment = pmremGenerator.fromScene( environment ).texture;
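As a small optional follow-up: the PMREM generator keeps internal render targets around, so once the environment texture has been generated it can be released.
// Free the generator's internal render targets once the texture exists.
pmremGenerator.dispose();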
Then I set the following properties on my renderer:
renderer.toneMapping = THREE.ACESFilmicToneMapping;
renderer.toneMappingExposure = 1.2;
renderer.outputEncoding = THREE.sRGBEncoding;
After that, it lights up just fine! I honestly don't really know what this tone mapping stuff does at the moment, but for now it solves my problem.
I have imported a model into my Three.js scene. I am able to move and rotate the bones, but the model's geometry does not move with the bones.
Here is the code I have used to import the JSON file and add it to the scene:
/*load JSON file*/
// instantiate a loader
var loader = new THREE.JSONLoader();
loader.load( 'https://cdn.rawgit.com/wpdildine/wpdildine.github.com/master/models/cylinder.json', addModel );
var helpset;
var scaleVal = 3;
function addModel( geometry, materials ){
    materials.skinning = true;
    var cs = scaleVal * Math.random();
    mesh = new THREE.SkinnedMesh( geometry, new THREE.MeshFaceMaterial( materials ) );
    scene.add( mesh );
    helpset = new THREE.SkeletonHelper( mesh );
    scene.add( helpset );
}
The JSON file that I have imported includes weights, so I did not think I had to add them myself. Could it have anything to do with binding the skeleton to the mesh?
Here is a link to my code - https://jsfiddle.net/joeob61k/1/ (New link with scripts, thanks @Mr. Polywhirl)
As you can see, 'Bone_2' in the GUI controls moves one of the bones but not the mesh.
EDIT: I have tried accessing the bones of the mesh in the render() function, using the following line of code:
mesh.skeleton.bones[2].rotation = 0.1;
I get the following error: 'Cannot read property 'skeleton' of undefined(…)', where the undefined value is the mesh variable. Is there a new way of accessing the bones of a SkinnedMesh that I need to use?
The problem was with the line,
materials.skinning = true;
It needs to be the following to work:
materials[0].skinning = true;
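If the model ever carries more than one material, it may be safer to enable skinning on every entry of the materials array (a small generalization of the fix above, using the same loader callback):
// Enable skinning on every material returned by the loader.
materials.forEach( function ( material ) {
    material.skinning = true;
} );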
I'm looking at the THREE.js example located here and am wondering how to prevent the 'flattening' of scenes rendered as textures. In other words, the scene loses the illusion of having depth when rendered to a WebGLRenderTarget.
I have looked everywhere, including in the THREE.js documentation, and have found no mention of this kind of functionality, probably because it would put a significant unnecessary load on the user's processor (except in very particular cases). Perhaps this is possible in pure WebGL, though?
EDIT: Downvoters, why is this question poor? I have done significant research into this matter, but since I'm new to WebGL, I can't exactly spout senseless code... How do I improve my query?
I think you want to use screen-space projections instead of UV projections, if that makes sense. Given your TV example, the screen would have UV points that get transformed as you move the camera around. You want something that stays put, i.e. no matter how much you move, you're looking at the same thing. I'm not sure how this is done without shaders, but in fragment shaders you have gl_FragCoord.
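To sketch the idea (the uniform plumbing varies across three.js versions, and rtTexture and the resolution value are assumed names): sample the render-target texture at a screen position derived from gl_FragCoord instead of at the mesh's UVs, so the image stays put on screen however the mesh moves.
// Sketch: a material that samples the render target by screen position.
var screenMaterial = new THREE.ShaderMaterial( {
    uniforms: {
        map: { type: 't', value: rtTexture },
        resolution: { type: 'v2', value: new THREE.Vector2( window.innerWidth, window.innerHeight ) }
    },
    fragmentShader: [
        'uniform sampler2D map;',
        'uniform vec2 resolution;',
        'void main() {',
        '    // gl_FragCoord is in window pixels; normalize to [0, 1]',
        '    vec2 screenUV = gl_FragCoord.xy / resolution;',
        '    gl_FragColor = texture2D( map, screenUV );',
        '}'
    ].join( '\n' )
} );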
Because THREE.js "flattens" every scene it renders, all that's needed is a change of perspective (relative to the main camera of the main scene) to maintain the illusion of depth in render targets. Here's a skeleton of something that would do that:
var scene = new THREE.Scene(),
    rtScene = new THREE.Scene(),
    camera = new THREE.PerspectiveCamera( ..... ),
    rtCamera = new THREE.PerspectiveCamera( ..... ),
    rtCube = new THREE.Mesh( new THREE.CubeGeometry( 1, 1, 1 ), new THREE.MeshBasicMaterial( { color: 0x0000ff } ) ),
    rtTexture = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, format: THREE.RGBFormat } ),
    material = new THREE.MeshBasicMaterial( { map: rtTexture } ),
    cube = new THREE.Mesh( new THREE.CubeGeometry( 1, 1, 1 ), material );

function init() {
    //manipulate cameras
    //add any textures, lighting, etc
    rtScene.add( rtCube );
    scene.add( cube );
}

function update() {
    //some function of cube.rotation & cube.position
    //that changes the rtCamera rotation & position,
    //depending on the desired effect.
}

function animate() {
    requestAnimationFrame( animate );
    render();
}

function render() {
    renderer.clear();
    update();
    // First pass: draw the inner scene into the render target...
    renderer.render( rtScene, rtCamera, rtTexture, true );
    // ...then draw the main scene, whose cube is textured with that target.
    renderer.render( scene, camera );
}

init();
animate();
I assumed in my code that camera remains stationary while cube rotates around the y-axis. Each face has its own updating instance of material. The update() function for each face is a bunch of trigonometric gibberish that can be derived easily with the law of cosines. I will post a jsFiddle example as soon as I have my local copy working properly.
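Until then, here is a hypothetical sketch of one such update() (not the author's exact math; the radius and rotation axis are assumptions): orbit rtCamera around its scene in step with the cube's rotation, so the texture on the face appears to have depth behind it.
// Hypothetical update(): move rtCamera opposite the cube's spin so the
// render-target view shifts as if the face were a window into a room.
function update() {
    var radius = 5; // assumed distance of rtCamera from the inner scene
    rtCamera.position.set(
        radius * Math.sin( cube.rotation.y ),
        0,
        radius * Math.cos( cube.rotation.y )
    );
    rtCamera.lookAt( rtScene.position );
}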