Hi,
I want to make a room in three.js, and I want the walls that have objects behind them (from the camera's point of view) to become transparent (0.5 opacity) as I rotate the room.
To clarify a little bit:
Imagine you have a room with walls, and you place furniture in that room. The camera looks at the room, and I want a wall to be transparent only if, from the camera's point of view, it has other objects behind it (so you can see into the room through that wall). The walls at the back should keep opacity 1. That way, wherever you move the camera (while looking at the room), you can see all the elements; otherwise some walls would block the view.
You don't provide much detail about how you are moving the camera, but this can be done fairly easily: every mesh has a material property, and every material has an opacity.
Here is a jsFiddle - http://jsfiddle.net/Komsomol/xu2mjwdk/
I added the entire OrbitControls.js source inside and added a boolean:
var doneMoving = false;
which I toggle in the mouseup and mousedown handlers of OrbitControls, just to capture when we are not moving the camera.
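A minimal sketch of that change, assuming OrbitControls' internal handler names (the original handler bodies are elided):
function onMouseDown(event) {
    doneMoving = false; // the camera is about to move
    // ... original OrbitControls onMouseDown code ...
}
function onMouseUp(event) {
    doneMoving = true; // the camera has stopped moving
    // ... original OrbitControls onMouseUp code ...
}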
There are some specific options that need to be set on the renderer and on the object's material.
renderer = new THREE.WebGLRenderer({
    alpha: true
});
The object's material:
torusMat = new THREE.MeshPhongMaterial();
torusMat.transparent = true; // required for opacity changes to take effect
torusMat.needsUpdate = true;
And finally, add some control code in the animate method to kick off whatever changes you want:
if (doneMoving) {
    torusMat.opacity = 0.5;
} else {
    torusMat.opacity = 1;
}
That's about it. This should give you enough of an idea of how to implement this.
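To make the fading depend on what is actually behind each wall, as the question asks, here is a hedged sketch: cast a ray from the camera toward each piece of furniture and fade only the walls the ray passes through. The walls and furniture arrays are assumptions (not part of the fiddle), and every wall material is assumed to have transparent = true:
var raycaster = new THREE.Raycaster();
function updateWallOpacity(camera, walls, furniture) {
    // Start with every wall fully opaque.
    walls.forEach(function (wall) { wall.material.opacity = 1; });
    furniture.forEach(function (item) {
        var toItem = item.position.clone().sub(camera.position);
        var distance = toItem.length();
        raycaster.set(camera.position, toItem.normalize());
        // Any wall hit closer than the item stands between the camera and the item.
        raycaster.intersectObjects(walls).forEach(function (hit) {
            if (hit.distance < distance) hit.object.material.opacity = 0.5;
        });
    });
}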
I have created a raytracing algorithm which currently displays a triangle on screen, and am now having trouble moving the camera around correctly.
I would like my code to have the arrow keys rotate the camera, and WASD move the camera in 3D space. The camera currently rotates correctly using two rotation matrices for y and z rotations.
The Problem
Moving the camera, rather than rotating it, is where the issue arises. To move the camera, I require two vectors: cameraForward and cameraRight. These vectors can be added to the camera's position when input is detected. They also need to change when the camera is rotated, with the same rotation that all the rays experience. But when I apply these rotation matrices to the vectors representing cameraRight and cameraForward, there seems to be an issue: holding down the A or D key results in the camera moving unexpectedly in circles or odd wavy lines.
I managed to fix the issue with cameraForward by using a different method. I added a couple of lines of code which detect when the ray at the centre has been 'fired' and then set cameraForward to that ray. Therefore cameraForward always follows the central ray being sent out, i.e. the centre of the field of view. However, I cannot do the same with cameraRight, as this vector is not in the field of view.
Before solving cameraForward with this method, the same issue arose with moving forwards and backwards.
I also tried taking the cross product of one of the other rays with the cameraForward vector, which I thought might produce cameraRight, but to no avail: more sporadic camera movement.
I do not have the vector for cameraUp either, so I cannot calculate the cross product to find cameraRight.
I also thought maybe the code was being run too many times, so the vector was being rotated multiple times. However, moving the code elsewhere had no effect, and the method it was already in runs every frame, so I do not believe that is the issue.
Here is my code to rotate the camera to the right, and the method which does the rotation. camRight's initial value is (0, 0, 1).
camRight = normalize(rotateVector(camRight, rotationZ));
camRight = normalize(rotateVector(camRight, rotationY));
function rotateVector(v, m) {
    // m.a, m.b, m.c appear to be the rows of the rotation matrix
    return new Vector3(dot(m.a, v), dot(m.b, v), dot(m.c, v));
}
I know this code works, because the code rotating the camera view functions correctly using the same matrices and methods.
(the following code)
myDirection = normalize(rotateVector(myDirection, rotationZ));
myDirection = normalize(rotateVector(myDirection, rotationY));
When the user presses A or D, the following code is run:
if (keys[65]) {
    camPos = add(camPos, normalize(camRight));
    requestAnimationFrame(render);
}
if (keys[68]) {
    camPos = subtract(camPos, normalize(camRight));
    requestAnimationFrame(render);
}
The camera moves forwards and backwards correctly, as previously mentioned. Initially, the camera moves left and right correctly too (as its initial value of (0, 0, 1) is correct), but as soon as I rotate the camera, the values for cameraRight go wild.
Have I assumed something wrongly, or is there a flaw in my logic somewhere?
Thanks for any help.
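A hedged guess at the cause: if camRight is re-rotated by the full rotation matrices every frame, the rotations compound from frame to frame, which would produce exactly this circling. A sketch of the usual fix, recomputing the basis each frame from a fixed initial vector (this assumes rotationZ and rotationY hold the total rotation, as they appear to for the rays; WORLD_RIGHT and updateCameraBasis are illustrative names, not from the question):
var WORLD_RIGHT = new Vector3(0, 0, 1); // camRight's stated initial value
function updateCameraBasis() {
    // Rotate the fixed vector, not last frame's result, so nothing accumulates.
    camRight = normalize(rotateVector(WORLD_RIGHT, rotationZ));
    camRight = normalize(rotateVector(camRight, rotationY));
}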
I am new to Three.js. I am using this example, with a six-image cube for a panorama effect, where one can pan and zoom in and out around the cube.
https://threejs.org/examples/?q=panorama#webgl_panorama_equirectangular
I want to figure out how, at the maximum zoom-in level, I can transition the user into a different panorama cube (with a different image source), mapped to that particular part of the cube. So I would, sort of, open the next scene to take the user further to the next level in their journey.
This is nearly what Google Street View does when you click on arrows to move forward down the road.
I do not see many examples out there. From my research it seems this may be possible by creating two scenes? Any ideas on how to make it work would be appreciated.
Detecting WHEN to transition:
In the example given, the mouse events are all handled for you. The zoom is handled in onDocumentMouseWheel by adjusting the camera's fov property: zooming in reduces the fov, and zooming out increases it. It would be trivial to detect when the fov has reached a minimum/maximum value, which would trigger your transition to a new scene.
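A sketch of that detection, patterned on the example's wheel handler (MIN_FOV, MAX_FOV, and startTransition are assumed names, not part of the example):
function onDocumentMouseWheel(event) {
    var fov = camera.fov + event.deltaY * 0.05;
    camera.fov = Math.max(MIN_FOV, Math.min(MAX_FOV, fov)); // clamp the zoom range
    camera.updateProjectionMatrix();
    if (camera.fov <= MIN_FOV) {
        startTransition(); // fully zoomed in: time to change scenes
    }
}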
Detecting WHERE to transition:
The next step is determining into which new scene you will transition. You could do something hotspot-like, where you shoot a ray from the camera to see if it hits a particular place (for example a THREE.Sphere which you have strategically positioned). But for simplicity, let's assume you only have the 6 directions you mentioned, and that you're still using the example's mouse control.
Camera movement is handled in onDocumentMouseMove by updating the lat and lon variables (which appear to be in degrees). (Note: lon seems to increase without bounds, so for clarity it might be good to wrap it so it always stays between 0.0 and 359.99 or so.) You can get all math-y to check the corners properly, or you could simply check your 45s:
if (lat > 45) {
    // you're looking up
} else if (lat < -45) {
    // you're looking down
} else {
    // you're looking at a side; check "lon" instead
}
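For the side cases, a wrapped check on lon might look like this (the quadrant boundaries here are assumptions; adjust them to match your cube faces):
var wrapped = ((lon % 360) + 360) % 360; // keep lon in [0, 360)
if (wrapped >= 315 || wrapped < 45) {
    // front face
} else if (wrapped < 135) {
    // right face
} else if (wrapped < 225) {
    // back face
} else {
    // left face
}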
Your look direction determines to which scene you will transition, should you encounter your maximum zoom.
Transitioning
There are lots of ways you can do this. You could simply replace the texture on the cube that makes up the panorama. You could swap in a totally different THREE.Scene. You could reset the camera, or not. You could play with the lights dimming out and in while the transition happens. You could apply some post-processing to obscure the transition effect. This part is all style, and it's all up to you.
Addressing @Marquizzo's concern:
The lighting is simply a suggestion for a transition. The example doesn't use a light source because the material is a MeshBasicMaterial (which doesn't require lighting). The example also doesn't use scene.background, but applies the texture to an inverted sphere. There are other methods you can use if you simply can't affect the "brightness" of the texture (such as CSS transitions).
I added the following code to the example to make it fade in and out, just as an example.
// These are in the global scope, defined just before the call to init();
// I moved "mesh" to the global scope to access its material during the animation loop.
var mesh = null,
colorChange = -0.01;
// This code is inside the "update" function, just before the call to renderer.render(...);
// It causes the color of the material to vary between white/black, giving the fading effect.
mesh.material.color.addScalar(colorChange);
if (mesh.material.color.r + colorChange < 0 || mesh.material.color.r + colorChange > 1) { // not going full epsilon checking for an example...
    colorChange = -colorChange;
}
One could even animate the opacity value of the material to make one sphere fade away and another sphere fade into place.
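A minimal sketch of that crossfade, assuming two panorama spheres named oldMesh and newMesh (the names and the per-frame delta are assumptions) whose materials have transparent = true:
newMesh.material.opacity = 0; // the incoming sphere starts invisible
// Call once per frame while the transition runs.
function crossfadeStep(delta) {
    oldMesh.material.opacity = Math.max(0, oldMesh.material.opacity - delta);
    newMesh.material.opacity = Math.min(1, newMesh.material.opacity + delta);
}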
My main point is that the transition can be accomplished in a variety of ways, and that it's up to @Vad to decide what kind of effect to use.
I'm playing around with the Phaser framework, trying to make a simple "fall down" game. The goal is to fall fast enough without getting pushed out at the top of the screen.
To accomplish this I set the canvas size to 800x600:
var game = new Phaser.Game(800, 600, ...);
and resize the world in create() to 800x6000:
game.world.resize(800, 6000);
In update() I move the camera down by 1:
game.camera.y += 1;
and check if the ball is still inside the camera:
if (!ball.inCamera) {
    // ...
}
My Question is:
On the left and on the right, the world borders limit the movability of the ball (because of that, the ball can't leave the camera there). How can I prevent the ball from "falling" out of the camera at the bottom, while still allowing it to be pushed out of the camera at the top?
Is there something similar to
game.physics.arcade.checkCollision.down = true;
but for the camera bounds?
Edit
This is how the ball is created:
ball = game.add.sprite(game.world.width / 2, 20, "ball");
game.physics.arcade.enable(ball);
ball.body.gravity.y = 1000;
To move the ball I check for key presses and then change the ball.body.velocity parameters.
Physics has nothing to do with the camera (and indeed it shouldn't). An elegant solution would be to create an invisible body, align its top with the camera's bottom bound, and move it with the camera. That still allows the ball to be pushed out of view at the top, but stops it from falling out at the bottom.
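A sketch of that idea with Phaser 2 arcade physics (the floor sprite and its 32px height are assumptions; ball and the camera scrolling come from the question):
// In create(): an invisible, immovable body at the camera's bottom edge.
var floor = game.add.sprite(0, game.camera.y + game.camera.height, null);
game.physics.arcade.enable(floor);
floor.body.setSize(game.width, 32, 0, 0); // full-width, 32px tall
floor.body.immovable = true;
floor.body.allowGravity = false;
// In update(): keep it glued to the camera and collide the ball against it.
floor.y = game.camera.y + game.camera.height;
game.physics.arcade.collide(ball, floor);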
I am currently creating a VR web app using three.js. For the camera controls I am using the device orientation controls used here in the Google Cardboard three.js demo.
What I need to do is add keyboard controls to this (e.g. Up arrow to go forward, etc.). I've fiddled around with moving the camera on the two axes (x and z) here:
if (e.keyCode == '38') {
    camera.position.set(0, 10, camera.position.z + 4);
    controls.target.set(
        camera.position.x + 4,
        camera.position.y,
        camera.position.z
    );
    effect.render(scene, camera);
...
However, I want the character to move relative to where they are looking (e.g. you look one way, press the Up arrow, and the character moves the way you are looking), like a first-person view.
Does anyone have any ideas on how this is done? I've tried using the first-person controls from three.js, but those eliminate the head tracking, which is essential for a VR game.
Any answers would be greatly appreciated. (My source code is practically just the Google Cardboard three.js demo code with a function added in to detect key presses.)
I solved this with a different approach. I created an Object3D which moves in the scene; the model and the camera are children of this object.
I rotate the Object3D together with the camera, and at the same time rotate the model in the opposite direction, so the model appears to keep its direction while the camera turns. When I want to move, I just translateX the Object3D (camera included) and animate the model's rotation back to 0. That did the trick.
Over long distances (I have millions of units) the movement started to get jerky. The reason is lost floating-point precision.
I solved that by keeping the object's position at (0, 0, 0) and moving everything else in the opposite direction. That way the model always stays at the (0, 0, 0) coordinates with the right rotation, and the world moves around it.
Simplest example:
You are trying something like
scene.add(character_model);
scene.add(camera);
//camera.rotate ....
character_model.translateX(1);
character_model.rotateX(1);
//etc ...
and now you are trying to move the camera around the pivot (character_model), but that makes the mathematics overcomplicated.
Try:
var controls_dimension = new THREE.Object3D();
scene.add(controls_dimension);
controls_dimension.add(character_model);
controls_dimension.add(camera);
// Use your controls to rotate this wrapper object, not character_model.
controls_dimension.rotateX(2);
// At the same time, rotate the model in the opposite direction. This creates
// the illusion of rotating the camera rather than the model.
character_model.rotateX(2 * -1);
/*
When you want to move in the camera's direction, call
controls_dimension.translateX(1). While moving, you only need to animate the
model's rotation back toward controls_dimension's rotation.
*/
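For comparison, a hedged alternative that skips the wrapper object entirely: read the camera's current look direction with three.js's getWorldDirection and translate along it (speed is an assumed scalar; flattening to the ground plane is a design choice, not a requirement):
var dir = new THREE.Vector3();
camera.getWorldDirection(dir); // unit vector the camera is facing
dir.y = 0;                     // keep the movement horizontal
dir.normalize();
camera.position.addScaledVector(dir, speed);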
I want to have a reflective cube surface in a WebGL page with Three.js. It should resemble a mobile phone display, which reflects some light but still stays black.
I have created an example of a reflecting cube (and also a reflective sphere) with detailed comments. The live version is at
http://stemkoski.github.com/Three.js/Reflection.html
with nicely formatted code at
https://github.com/stemkoski/stemkoski.github.com/blob/master/Three.js/Reflection.html
(This is part of a collection of tutorial examples at http://stemkoski.github.com/Three.js/)
The main points are:
add to your scene a second camera (a CubeCamera) positioned at the object whose surface should be reflective
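for example (a sketch in the same era's API, where the constructor arguments are near, far, and cube-map resolution, and mirrorCube is the reflective mesh from the snippets below):
var mirrorCubeCamera = new THREE.CubeCamera(0.1, 5000, 512);
mirrorCubeCamera.position.copy(mirrorCube.position); // sit at the reflective object
scene.add(mirrorCubeCamera);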
create a material and set its environment map to the render target of this second camera,
for example:
var mirrorCubeMaterial = new THREE.MeshBasicMaterial(
    { envMap: mirrorCubeCamera.renderTarget } );
in your render function, you have to render from all your cameras. Temporarily hide the reflective object (so that it doesn't get in the way of the camera you are about to use), render from that camera, then unhide the reflective object.
for example:
mirrorCube.visible = false;
mirrorCubeCamera.updateCubeMap( renderer, scene );
mirrorCube.visible = true;
These code snippets are from the links I posted above; check them out!