I am having problems regarding the switching of two viewpoints.
Whenever I move from viewpoint1 to viewpoint2 the rotation when I arrive at viewpoint2 seems off.
At first I wanted to switch viewpoints by setting the camera position directly. This doesn't seem possible, so I had to wrap the camera in a container entity, like so:
<a-entity><a-camera></a-camera></a-entity>
I then call .setAttribute('position', xyz); on the a-entity tag (I know the object3D approach is preferable; I still need to refactor this).
Now when I look from viewpoint1 to viewpoint2 (call this movement deltaRotation) and switch viewpoints, I change the rotation of the entity to my preferred point of view, let's say 0, 45, 0.
What I expect is that I end up at viewpoint2 with the camera looking at 0, 45, 0. The camera however seems to be looking at 0, 45, 0 + deltaRotation.
Is this the way I am supposed to switch viewpoints? If so, is this "tank"-style way of working intended, or should I take another approach?
What's happening is that when you move the camera around with the default look-controls, the controls change the rotation attribute of the camera itself, inside the "camera rig" wrapper entity. So when you change the rotation of the wrapper entity, the camera ends up looking in a direction that is the sum of the rotations of the camera and the camera rig.
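A rough sketch of that summation, with hypothetical numbers and an assumed id on the wrapper:
var rigEl = document.querySelector('#rig'); // the wrapper <a-entity>
rigEl.setAttribute('rotation', {x: 0, y: 45, z: 0}); // rig yaw: 45
// Meanwhile look-controls has set the camera's own rotation to, say, y: 30,
// so the effective world yaw is 45 + 30 = 75, not the 45 you just set.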
Unfortunately I'm not sure how to work around this, because setting or animating the rotation of the main camera directly simply doesn't work. Scripting the perspective of the first-person camera is something A-Frame isn't equipped to handle in its current state; reverting to an earlier version of A-Frame may be your best option.
Related
I have a Three.js setup where the camera can move through a 3D world, changing its lookAt multiple times as it moves. So let's assume that camera.getWorldDirection() will always be more or less random.
Now I need the camera to move exactly left / right / up / down relative to camera.getWorldDirection().
You can't just use something like camera.position.x += 1 because that only applies for a world direction of Vector3(0, 0, -1). If the world direction changes to e.g. (1, -1, 0), moving the camera to the right requires changes on both the X and the Z axes.
I had a look at quaternions and 4×4 matrices, but I can't get my monkey brain around them. It would be really nice if someone could help me out.
Here is a demo: https://normanwink.com/demo/room/
I found the answer here in a non-accepted answer.
You have to use camera.translateX() and the related functions to translate the camera along its local axes. That way you can still manipulate camera.position to move it around in world space, and use the translate functions to add offsets relative to the camera's viewing angle.
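As a minimal sketch of the difference (a standard perspective camera looks down its local -Z axis):
camera.position.x += 1; // moves 1 unit along the world X axis, ignoring the view
camera.translateX(1);   // moves 1 unit to the camera's local right
camera.translateY(1);   // moves 1 unit along the camera's local up
camera.translateZ(-1);  // moves 1 unit forward, into the view direction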
I have created a raytracing algorithm which currently displays a triangle on screen, and am now having trouble moving the camera around correctly.
I would like my code to have the arrow keys rotate the camera, and WASD move the camera in 3D space. The camera currently rotates correctly using two rotation matrices for y and z rotations.
The Problem
Moving the camera, rather than rotating it, is where the issue arises. To move the camera, I require two vectors - cameraForward and cameraRight. These vectors can be added to the position of the camera when input is detected. They also need to change when the camera is rotated, by the same rotation that all the rays experience. But when I apply these rotation matrices to the vectors representing cameraRight and cameraForward, there seems to be an issue: holding down the A or D key results in the camera moving unexpectedly in circles or odd wavy lines.
I managed to fix the issue with cameraForward by using a different method. I added a couple of lines of code which detect when the ray at the centre has been 'fired', and set cameraForward to that ray. Therefore cameraForward always follows the central ray being sent out, i.e. the centre of the field of view. However, I cannot do the same with cameraRight, as this vector is not in the field of view.
Before solving cameraForward using this method the same issue arose with moving forwards and backwards.
I also tried taking the cross product of one of the other rays with the cameraForward vector, which I thought might produce cameraRight, but to no avail - more sporadic camera movement.
I do not have the vector for cameraUp either, so I cannot calculate the cross product to find cameraRight.
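(For reference: a common derivation crosses cameraForward with the world up axis instead of a camera up vector. A sketch, assuming a cross() helper in the same style as the normalize() and dot() helpers used here, and that the camera never looks straight along the world up axis:
var worldUp = new Vector3(0, 1, 0);
// With forward = (1, 0, 0) this gives (0, 0, 1), matching camRight's initial value.
camRight = normalize(cross(cameraForward, worldUp));
)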
I also thought maybe the code was being run too many times, so the vector was rotated multiple times. However, moving the code elsewhere had no effect, and the method it was already in runs every frame, so I do not believe that is the issue.
Here is my code to rotate the camRight vector, along with the method which does the rotation.
camRight's initial value is (0, 0, 1).
// Rotate camRight by the Z rotation, then the Y rotation, renormalising each time.
camRight = normalize(rotateVector(camRight, rotationZ));
camRight = normalize(rotateVector(camRight, rotationY));
// Multiply a vector by a 3x3 matrix whose rows are m.a, m.b and m.c.
function rotateVector(v, m) {
    return new Vector3(dot(m.a, v), dot(m.b, v), dot(m.c, v));
}
I know this code works, as the code rotating the camera view (below) functions correctly using the same matrices and method:
myDirection = normalize(rotateVector(myDirection, rotationZ));
myDirection = normalize(rotateVector(myDirection, rotationY));
When the user presses A or D, the following code is run:
if (keys[65]) { // 'A' key
    camPos = add(camPos, normalize(camRight));
    requestAnimationFrame(render);
}
if (keys[68]) { // 'D' key
    camPos = subtract(camPos, normalize(camRight));
    requestAnimationFrame(render);
}
The camera moves forwards and backwards correctly, as previously mentioned. Initially, the camera moves left and right correctly too (as camRight's initial value of (0, 0, 1) is correct), but if I rotate the camera, the values for cameraRight go wild.
Have I assumed something wrongly, or is there a flaw in my logic somewhere?
Thanks for any help.
I am new to Three.js. I am using this example with a six-image cube for a panorama effect, where one can pan and zoom in and out around the cube.
https://threejs.org/examples/?q=panorama#webgl_panorama_equirectangular
I want to figure out how, at maximum zoom-in level, I can transition the user into a different panorama cube (with a different image source) mapped to that particular part of the cube. So I would, sort of, open the next scene to take the user further to the next level in their journey.
This is nearly what Google Street View does when you click on arrows to move forward down the road.
I do not see many examples out there. I researched and saw this may be possible by creating 2 scenes? I would appreciate any ideas on how to make it functional.
Detecting WHEN to transition:
In the example, the mouse events are already wired up. The zoom is handled in onDocumentMouseWheel by adjusting the camera's fov property: "zoom in" reduces the fov, and "zoom out" increases it. It would be trivial to detect when the fov has reached a minimum/maximum value, which would trigger your transition to a new scene.
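A sketch of that check, with an assumed minimum fov of 10 and a hypothetical triggerTransition() callback:
var MIN_FOV = 10; // assumed zoom-in limit
function onDocumentMouseWheel(event) {
    camera.fov = Math.min(Math.max(camera.fov + event.deltaY * 0.05, MIN_FOV), 75);
    camera.updateProjectionMatrix();
    if (camera.fov === MIN_FOV) {
        triggerTransition(); // hypothetical: pick the target scene (see below), then transition
    }
}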
Detecting WHERE to transition:
The next step is determining into which new scene you will transition. You could do something hotspot-like, where you shoot a ray from the camera to see if it hits a particular place (for example a THREE.Sphere which you have strategically positioned). But for simplicity, let's assume you only have the 6 directions you mentioned, and that you're still using the example's mouse control.
Camera movement is handled in onDocumentMouseMove by updating the lat and lon variables (which appear to be in degrees). (Note: it seems lon increases without bounds, so for clarity it might be good to wrap it so it can only ever be between 0.0 and 359.99 or so.) You can get all math-y to check the corners better, or you could simply check your 45's:
if(lat > 45){
// you're looking up
}
else if(lat < -45){
// you're looking down
}
else{
// you're looking at a side, check "lon" instead
}
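The lon check could then look something like this, assuming lon has been wrapped into [0, 360) and that the four sides are centred on 0/90/180/270 (the exact mapping depends on your textures):
lon = ((lon % 360) + 360) % 360; // wrap into [0, 360)
if (lon >= 315 || lon < 45) {
    // side 1
} else if (lon < 135) {
    // side 2
} else if (lon < 225) {
    // side 3
} else {
    // side 4
}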
Your look direction determines to which scene you will transition, should you encounter your maximum zoom.
Transitioning
There are lots of ways you can do this. You could simply replace the texture on the cube that makes up the panorama. You could swap in a totally different THREE.Scene. You could reset the camera--or not. You could play with the lights dimming out/in while the transition happens. You could apply some post-processing to obscure the transition effect. This part is all style, and it's all up to you.
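For the first of those options, swapping the texture could look roughly like this (the image URL is assumed, and "mesh" is the panorama mesh from the example):
new THREE.TextureLoader().load('panorama-2.jpg', function (texture) {
    mesh.material.map = texture;
    mesh.material.needsUpdate = true;
});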
Addressing #Marquizzo's concern:
The lighting is simply a suggestion for a transition. The example doesn't use a light source because the material is a MeshBasicMaterial (doesn't require lighting). The example also doesn't use scene.background, but applies the texture to an inverted sphere. There are other methods one can use if you simply can't affect the "brightness" of the texture (such as CSS transitions).
I added the following code to the example to make it fade in and out, just as an example.
// These are in the global scope, defined just before the call to init();
// I moved "mesh" to the global scope to access its material during the animation loop.
var mesh = null,
colorChange = -0.01;
// This code is inside the "update" function, just before the call to renderer.render(...);
// It causes the color of the material to vary between white/black, giving the fading effect.
mesh.material.color.addScalar(colorChange);
if(mesh.material.color.r + colorChange < 0 || mesh.material.color.r + colorChange > 1){ // not going full epsilon checking for an example...
colorChange = -colorChange;
}
One could even affect the opacity value of the material to make one sphere fade away, and another sphere fade into place.
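A rough sketch of that variant, assuming two sphere meshes whose materials were created with transparent: true:
// Inside the update loop: fade sphere A out while sphere B fades in.
meshA.material.opacity = Math.max(0, meshA.material.opacity - 0.01);
meshB.material.opacity = Math.min(1, meshB.material.opacity + 0.01);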
My main point is that the transition can be accomplished in a variety of ways, and that it's up to #Vad to decide what kind of effect to use.
I am currently creating a VR web app using three.js. For the camera controls I am using the device orientation controls used here in the Google Cardboard three.js demo.
What I need to do is add keyboard controls to this (e.g. Up arrow to go forward, etc.). I've fiddled around with moving the camera on the two axes (x and z) here:
if (e.keyCode == '38') {
camera.position.set(0, 10, camera.position.z+4);
controls.target.set(
camera.position.x +4,
camera.position.y,
camera.position.z
);
effect.render(scene, camera);
...
However, I want to make the character move relative to where they are looking (e.g. you look one way, press the Up arrow, and the character moves the way you are looking), like a first-person view.
Does anyone have any ideas on how this is done? I've tried using the first-person controls from three.js, but this eliminates the head tracking, which is essential for a VR game.
Any answers would be greatly appreciated. (My source code is practically just the Google Cardboard three.js demo code with a function added in to detect key presses.)
I solved this with a different approach. I created an Object3D which moves through the scene; the model and the camera are children of this object.
I rotate the Object3D together with the camera, and at the same time rotate the model in the opposite direction. When I rotate the camera, the object appears to keep its direction. When I want to move the object, I just translateX the object (with the camera) and animate the model's rotation back to 0. That did the trick.
Over long distances (I have millions of units) the movement started to get jerky. The reason is lost floating-point precision.
I solved that by keeping the object's position at 0,0,0 and moving everything else in the opposite direction. That way the model stays at the 0,0,0 coordinates with the right rotation, and the world moves around it.
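That last trick might look roughly like this; the "world" group is an assumption, a container holding everything except the camera rig:
var world = new THREE.Group(); // holds the terrain, props, etc. (assumed)
scene.add(world);
// Instead of moving the rig by moveDelta, keep it at the origin
// and shift the world the opposite way:
world.position.sub(moveDelta);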
Simplest example:
You are trying something like:
scene.add(character_model);
scene.add(camera);
//camera.rotate ....
character_model.translateX(1);
character_model.rotateX(1);
//etc ...
Now you are trying to move the camera around the pivot (character_model), but this leads to overcomplicated mathematics.
Try:
var controls_dimension = new THREE.Object3D();
scene.add(controls_dimension);
controls_dimension.add(character_model);
controls_dimension.add(camera);
// Use your controls to rotate this wrapper object, not character_model.
controls_dimension.rotateX(2);
// At the same time, rotate the model in the opposite direction. This creates
// the illusion of rotating the camera rather than the model.
character_model.rotateX(2 * -1);
// When you want to move in the camera's direction, translate the wrapper:
// controls_dimension.translateX(1)
// while animating the model's rotation back toward controls_dimension.rotation.
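A hedged sketch of how per-frame movement could then look (the key-state object and the easing helper are hypothetical):
if (keys[38]) { // Up arrow: move in the direction the rig is facing
    controls_dimension.translateX(1);
    // Hypothetical helper: ease the model's rotation toward the rig's rotation,
    // so the character turns to face its direction of travel.
    animateModelRotationTo(character_model, controls_dimension.rotation);
}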
I want to be able to drag an object, but have it 'snap' to the surface of another.
I don't care if it only 'snaps' when I release the mouse, or if it updates live.
Ultimate goal: create a smooth shape in Blender, import it, and be able to snap-drag onto that shape. I know I can do this with procedurally generated surfaces with a bit of math, but I'm looking to use some non-procedurally generated surfaces - or at least, I haven't yet figured out how to procedurally generate the surfaces I want to use.
I tried taking this example: http://stemkoski.github.io/Three.js/Mouse-Click.html and changing the 'click' effect to a 'drag' effect. Then I incorporated it with this example: http://mrdoob.github.io/three.js/examples/webgl_interactive_draggablecubes.html
The effect is working, in that I can drag any of the cubes across the sphere, and the face highlights.
Then I tried taking the dragged object, and snapping it to the face of the sphere using this flawed logic:
SELECTED.position.x = intersectsRay[ 0 ].face.normal.x;
SELECTED.position.y = intersectsRay[ 0 ].face.normal.y;
SELECTED.position.z = intersectsRay[ 0 ].face.normal.z;
The problem: the dragged objects always snap to the center of the sphere...
The reason being, I think, that the face 'normal' is relative to the center of the sphere in this case.
Does anyone know a way to find the x,y,z of the FACE (no matter what the shape), or any other way to implement this concept?
An easy solution would be to not use the local (and normalized) face normal; instead, for example, you could use the vertex index.
Something like:
intersectedOBJ.geometry.vertices[intersect.face.a]
This way, you would snap your dragged object to one of the face's vertices.
There is also a face "centroid" that you can use, or you could calculate the center of the face on your own.
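A minimal sketch of the vertex approach, using the question's variable names (this assumes the old THREE.Geometry with a vertices array):
var hit = intersectsRay[0];
// Take one vertex of the intersected face (in the mesh's local space)...
var vertex = hit.object.geometry.vertices[hit.face.a].clone();
// ...convert it to world space, and snap the dragged object to it.
hit.object.localToWorld(vertex);
SELECTED.position.copy(vertex);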