I want a reflective cube surface on a WebGL page with Three.js. It should resemble a mobile phone display: it reflects some light, but it still has to be black.
I have created an example of a reflecting cube (and also a reflective sphere) with detailed comments. The live version is at
http://stemkoski.github.com/Three.js/Reflection.html
with nicely formatted code at
https://github.com/stemkoski/stemkoski.github.com/blob/master/Three.js/Reflection.html
(This is part of a collection of tutorial examples at http://stemkoski.github.com/Three.js/)
The main points are:
- add to your scene a second camera (a CubeCamera), positioned at the object whose surface should be reflective
- create a material and set its environment map to the render target of this second camera, for example:

    var mirrorCubeMaterial = new THREE.MeshBasicMaterial(
        { envMap: mirrorCubeCamera.renderTarget } );

- in your render function, render from all your cameras: temporarily hide the reflective object (so that it doesn't get in the way of the cube camera), render from that camera, then unhide the reflective object. For example:

    mirrorCube.visible = false;
    mirrorCubeCamera.updateCubeMap( renderer, scene );
    mirrorCube.visible = true;
These code snippets are from the links I posted above; check them out!
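The hide/render/unhide dance is easy to get wrong if anything throws mid-render, leaving the mirror object permanently hidden. As a small sketch in plain JavaScript (not from the linked examples; `obj` just needs a boolean `visible` property, which a THREE.Mesh has), a helper that guarantees the previous visibility is restored:

```javascript
// Temporarily hide `obj` while running `renderFn`, restoring the previous
// visibility even if rendering throws. Works with any object exposing a
// boolean `visible` property, e.g. a THREE.Mesh.
function renderWithHidden(obj, renderFn) {
  var wasVisible = obj.visible;
  obj.visible = false;
  try {
    return renderFn();
  } finally {
    obj.visible = wasVisible;
  }
}
```

In the render loop this would be used as `renderWithHidden(mirrorCube, function () { mirrorCubeCamera.updateCubeMap(renderer, scene); });`.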
Related
I am writing a 3D game in JavaScript with three.js. I made a skybox, and it works, but it disappears if I make my camera's near and far distances too small.
I understand why this happens: the camera attached to my player doesn't see as far as the skybox. If I make the camera's "near" and "far" attributes large enough (corresponding to the size of my game map), the skybox is always within range, but I don't want that, since I don't want the camera to see all the objects that far away.
Any ideas on how to force the camera to see the skybox while still keeping a small "far" attribute, so as not to see all the objects in the world?
Any help would be greatly appreciated.
There’s scene.background, which can be set to a CubeTexture.
Just want to add an example, because someone might find it useful here:
var loader = new THREE.CubeTextureLoader();
// face order: +X, -X, +Y, -Y, +Z, -Z
loader.load( [
    './img/sky/galaxy+X.jpg', './img/sky/galaxy-X.jpg',
    './img/sky/galaxy+Y.jpg', './img/sky/galaxy-Y.jpg',
    './img/sky/galaxy+Z.jpg', './img/sky/galaxy-Z.jpg'
], function ( texture ) {
    scene.background = texture;
} );
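One gotcha: the order of the six URLs matters, since CubeTextureLoader expects the faces as +X, -X, +Y, -Y, +Z, -Z. A small helper can build the list in the right order; the `galaxy` prefix and `.jpg` extension here are just the names from the snippet above:

```javascript
// Build the six cube-face URLs in the order CubeTextureLoader expects:
// px, nx, py, ny, pz, nz.
function cubeFaceUrls(prefix, ext) {
  return ['+X', '-X', '+Y', '-Y', '+Z', '-Z'].map(function (face) {
    return prefix + face + ext;
  });
}
```

Then the call becomes `loader.load(cubeFaceUrls('./img/sky/galaxy', '.jpg'), function (texture) { scene.background = texture; });`.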
I have a forked three.js CodePen experiment that has square particles floating around.
I'm trying to modify it so that I can pass text (geometry?) into it, replacing the square particles, somewhat like a word/tag cloud. Is this possible?
Link to current codepen:
https://codepen.io/farisk/pen/pWQGxB
Here's what I wish to achieve:
I'm currently not sure where to start. I was thinking of somehow using a text geometry:
var loader = new THREE.FontLoader();
var font = loader.parse( fontJSON );
var geometry = new THREE.TextGeometry( "hello", { font: font, size: 120, height: 10 } );
But someone mentioned that this might not be the right way. I'm pretty new to three.js / HTML canvas, so any help is appreciated.
Passing in a different geometry per particle is usually not possible in particle systems, because the fact that it's the same geometry for each particle is what makes these systems efficient.
To achieve the effect that you are looking for there are basically two options:
Render all texts into a single sprite texture and provide texture coordinates for each particle so that each particle renders the correct text. (Only two-dimensional rendering of the text, and not scalable to a large number of texts.) See this example.
Make each text object its own geometry and render them without a particle system. (You lose the performance gain of particle systems.)
If you really just want to achieve a tag cloud you could also just use pure JavaScript and transform the position of the text elements according to some calculated 3D positions.
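For that pure-JavaScript route you still need a 3D position for each of the N text elements. A common choice for tag clouds (my own sketch, not part of the answer above) is the Fibonacci/golden-spiral lattice, which spreads points roughly evenly over a sphere:

```javascript
// Distribute n points roughly evenly on a sphere of the given radius
// using the golden-angle (Fibonacci) lattice. Returns an array of {x, y, z}.
function fibonacciSphere(n, radius) {
  var golden = Math.PI * (3 - Math.sqrt(5)); // golden angle in radians
  var points = [];
  for (var i = 0; i < n; i++) {
    var y = 1 - (2 * i + 1) / n;       // y runs from ~1 down to ~-1
    var r = Math.sqrt(1 - y * y);      // radius of the horizontal circle at y
    var theta = golden * i;            // spiral around the sphere
    points.push({
      x: radius * r * Math.cos(theta),
      y: radius * y,
      z: radius * r * Math.sin(theta)
    });
  }
  return points;
}
```

Each returned point can become the `translate3d(...)` transform of a DOM text element, or the `position` of a TextGeometry mesh.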
Hi,
I want to make a room in three.js and I want the walls that have objects behind them (from the pov of the camera) to become transparent (.5 opacity) as I rotate the room.
To clarify a little bit:
Imagine you have a room with walls, and in that room you place furniture. The camera looks at the room, and I want a wall to be transparent only if, from the camera's point of view, it has other objects behind it (so you can see into the room through the walls). The walls at the back should have opacity 1. That way, wherever you move the camera (while looking at the room) you can see all the elements; otherwise some walls would block the view.
You don't provide much detail about how you are moving the camera, but this can be done fairly easily. All meshes have a material property with an opacity value.
Here is a jsFiddle - http://jsfiddle.net/Komsomol/xu2mjwdk/
I added the entire OrbitControls.js inline and added a boolean:
var doneMoving = false;
which I toggle in the mouseup and mousedown handlers of OrbitControls, just to capture when the camera is not moving.
There are some specific options that need to be added in the renderer and the object.
renderer = new THREE.WebGLRenderer( { alpha: true } );
The object
torusMat = new THREE.MeshPhongMaterial();
torusMat.needsUpdate = true;
torusMat.transparent = true;
And finally add some control code in the Animate method to kick off whatever changes you want.
if ( doneMoving ) {
    torusMat.opacity = 0.5;
} else {
    torusMat.opacity = 1;
}
That's about it. This should give you enough of an idea how to implement this.
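To get closer to what the question actually asks (fade only the walls between the camera and the room, not everything), you can decide per wall using the wall's outward normal: a wall whose outward normal points toward the camera sits between the camera and the interior, so it should fade. A hedged sketch in plain JavaScript; the vectors here are plain {x, y, z} objects, where with Three.js you would use THREE.Vector3 and the wall's world-space normal:

```javascript
// Returns the opacity a wall should get: faded when its outward normal
// faces the camera (i.e. the wall is between the camera and the interior),
// fully opaque otherwise.
function wallOpacity(wallNormal, wallPosition, cameraPosition) {
  var toCamera = {
    x: cameraPosition.x - wallPosition.x,
    y: cameraPosition.y - wallPosition.y,
    z: cameraPosition.z - wallPosition.z
  };
  var dot = wallNormal.x * toCamera.x +
            wallNormal.y * toCamera.y +
            wallNormal.z * toCamera.z;
  return dot > 0 ? 0.5 : 1; // facing the camera -> see-through
}
```

Each frame, set `wall.material.opacity = wallOpacity(...)` for every wall (with `material.transparent = true` on the wall materials).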
I have a web page that is based on the following ThreeJS sample:
http://threejs.org/examples/#css3d_molecules
I create new nodes (atoms) dynamically by attaching them to existing nodes (atoms). After creating a new atom and adding it to the scene, I want to bring it to the forefront using the minimum number of rotate, pan, and zoom operations needed to do so.
However, although I know how to perform each of those operations, I don't know how to calculate the optimal sequence based on the current position/quaternion of the node (atom) where I created it in the scene, and its new position, which will be (0, 0, ).
I don't want to simply assign the new position/quaternion to the element; I want to animate the change over time, performing a slice of each operation (pan/zoom/rotate) on every animation frame.
Are there any ThreeJS methods to help me do this?
See "how to get the global/world position of a child object" to get the world position of any mesh in your hierarchy. Then point your camera at that location with camera.lookAt().
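To illustrate what "world position" means here: a child's position is local to its parent, so the world position comes from walking up the parent chain. The sketch below is a simplification in plain JavaScript that only accumulates translations (ignoring rotation and scale); in Three.js proper you would call `object.getWorldPosition(target)`, which uses the full world matrix:

```javascript
// Compute an object's world position by summing positions up the parent
// chain. Simplified: assumes parents only translate (no rotation/scale).
// In Three.js, use object.getWorldPosition(new THREE.Vector3()) instead.
function worldPosition(obj) {
  var p = { x: 0, y: 0, z: 0 };
  for (var node = obj; node; node = node.parent) {
    p.x += node.position.x;
    p.y += node.position.y;
    p.z += node.position.z;
  }
  return p;
}
```

The resulting point is what you would pass to `camera.lookAt()`.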
I am working on an open source tool for navigating and visualizing human anatomy.
The main object of interest is a 'chessboard' taking up most of the screen. Dragging the board around performs certain CSS3 3D-transforms. A 3D object (a head, in the example below) is shown hovering over the board, rendered with three.js.
The transformation of the head is synchronized with that of the board, but the synchronization is currently very imperfect, achieved by trial and error.
How do the 'CSS 3D world' and the 'three.js/WebGL 3D world' correspond? For example, where is the 'camera' in the CSS world? Is there a way to synchronize the two properly? Even better, is there a library?
They do synchronize. Try hacking http://threejs.org/examples/css3d_sandbox.html.
You should be able to create a CSS3DObject and a PlaneGeometry that line up perfectly, assuming the same camera is used to render both.
var geometry = new THREE.PlaneGeometry( 100, 100 );
var mesh = new THREE.Mesh( geometry, material );
mesh.position.copy( object.position );
mesh.rotation.copy( object.rotation );
mesh.scale.copy( object.scale );
scene.add( mesh );
In fact, here is a fiddle: http://jsfiddle.net/L9cUN/
three.js r.66
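As for where the "camera" lives in the CSS world: it is implied by the CSS perspective value, which the CSS3D renderer derives from the WebGL camera's vertical field of view and the viewport height. A sketch of that relationship, which follows the standard pinhole-camera formula (treat it as an approximation of what the renderer computes, not its exact source):

```javascript
// CSS `perspective` value (in px) that matches a perspective camera with
// the given vertical field of view (in degrees) rendering to a viewport
// of the given height (in px): perspective = (height / 2) / tan(fov / 2).
function cssPerspective(fovDegrees, viewportHeightPx) {
  var fovRad = fovDegrees * Math.PI / 180;
  return 0.5 * viewportHeightPx / Math.tan(0.5 * fovRad);
}
```

For example, a 90° camera on a 1000px-tall viewport corresponds to a CSS perspective of about 500px; keeping this value in sync is what makes the CSS3D and WebGL scenes line up.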