I'm trying to synchronize, upon the press of a button, the orientation of two objects (for example two cubes - not necessarily of the same dimensions) which live in two separate three.js scenes.
For the specific application at hand, I have access to the 4×4 camera orientation matrix of the first scene, $M$ (it is an NGLView instance, so I believe I need to use its _camera_orientation trait). Based on the discussion at https://www.wikiwand.com/en/Camera_resectioning, I have attempted to set the orientation of the second scene by passing in the camera position of the first scene in world coordinates, $C = -R^T T$, along with the camera direction $D = R^T (0, 0, 1)^T$. To get the correct relative orientation and position, I multiplied the 4×4 matrix $M$ by the inverse of the initial 4×4 matrix of the first scene, $M_1^{-1}$, and by the initial 4×4 matrix of the second scene, $M_2$: $M \to M_2 M_1^{-1} M$. Since the second scene uses OrbitControls, I set the camera position equal to $C$ and the OrbitControls target to the camera direction.
It seems pretty close but the two scenes are not quite synced correctly and I'm not entirely sure what I'm doing wrong here. I was wondering if anyone had any ideas?
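For reference, a sketch of the transform described above in three.js terms (flatM, M1 and M2 are placeholder names for the NGLView _camera_orientation array and the two stored initial matrices), along with one possible culprit: OrbitControls.target is a point in space, so it should arguably be set to C plus the direction D rather than to D itself.

// Sketch: apply M -> M2 * M1^-1 * M, then extract C = -R^T T and D = R^T (0, 0, 1)^T.
const M = new THREE.Matrix4().fromArray(flatM);              // placeholder input
const Mrel = new THREE.Matrix4()
    .multiplyMatrices(M2, new THREE.Matrix4().copy(M1).invert())
    .multiply(M);                                            // M2 * M1^-1 * M

const R  = new THREE.Matrix3().setFromMatrix4(Mrel);         // rotation block
const T  = new THREE.Vector3().setFromMatrixPosition(Mrel);  // translation
const Rt = new THREE.Matrix3().copy(R).transpose();

const C = T.clone().applyMatrix3(Rt).negate();               // C = -R^T T
const D = new THREE.Vector3(0, 0, 1).applyMatrix3(Rt);       // D = R^T (0, 0, 1)^T

camera.position.copy(C);
controls.target.copy(C).add(D);  // target = a point along D, not D itself
controls.update();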
Related
What I have is an OrthographicCamera set up so that it has an isometric view of the scene, and OrbitControls added to allow panning and zooming but not rotation.
What I'd like to have is a button that will centre the objects in the scene and zoom the OrthographicCamera so that the objects fit within the canvas area while keeping the isometric view, i.e. keeping the angle between camera.position and the camera.lookAt (controls.target) point.
What I’ve tried is to set the controls.target at the centre of the bounding box of the objects in the scene.
I have 2 problems with the code at the moment.
The first is that I couldn't work out how to calculate the zoom level needed to make sure the objects in the scene are all in view, so I've hard-coded a value for now.
The second is that with the current code, if the camera is panned so that the objects appear nearly off the screen, either up or down, then after centring the angle of the camera changes. This was also happening when the camera was panned far left or right, but setting the max and min azimuth angle seems to prevent that.
[Image: camera rotates after centring]
The image above shows the scene when loaded, and then after centring when the camera had been panned so that the objects were going off the top of the screen.
I have tried a number of ways to do this after looking at answers to similar questions as this but am still having problems getting it to work.
function fitDrawingToPage(){
    // Variables Bbox etc. are set outside the function
    Bbox = new THREE.Box3();
    for (const object of sceneObjects) Bbox.expandByObject(object);

    // Re-centre the orbit target on the bounding box
    newTarget = Bbox.getCenter(new THREE.Vector3());
    controls.target.set( newTarget.x, newTarget.y, newTarget.z );
    controls.update();

    // Problem 1: the zoom is hard-coded rather than calculated
    camera.zoom = 0.5;
    camera.updateProjectionMatrix();
    camera.updateMatrix();
    render();
}
current example of code in jsfiddle
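For the first problem, one possible way to calculate the zoom (a sketch, not code from the fiddle, under the assumption that controls.update() has already aimed the camera at the box centre) is to project the eight corners of the Box3 with zoom = 1 and scale so the widest corner spread fills the normalized device coordinate range [-1, 1]:

camera.zoom = 1;
camera.updateProjectionMatrix();

let maxX = 0, maxY = 0;
const corner = new THREE.Vector3();
for (let i = 0; i < 8; i++) {
    corner.set(
        i & 1 ? Bbox.max.x : Bbox.min.x,
        i & 2 ? Bbox.max.y : Bbox.min.y,
        i & 4 ? Bbox.max.z : Bbox.min.z
    ).project(camera);                     // world -> NDC; visible range is [-1, 1]
    maxX = Math.max(maxX, Math.abs(corner.x));
    maxY = Math.max(maxY, Math.abs(corner.y));
}
camera.zoom = 0.9 / Math.max(maxX, maxY);  // 0.9 leaves a small margin
camera.updateProjectionMatrix();

Because the corners are projected through the actual camera, this only changes the zoom and leaves the isometric angle intact.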
I have created a raytracing algorithm which currently displays a triangle on screen, and am now having trouble moving the camera around correctly.
I would like my code to have the arrow keys rotate the camera, and WASD move the camera in 3D space. The camera currently rotates correctly using two rotation matrices for y and z rotations.
The Problem
Moving the camera, rather than rotating it, is where the issue arises. To move the camera, I require two vectors: cameraForward and cameraRight. These vectors can be added to the camera's position when input is detected. They also need to change when the camera is rotated, by the same rotation that all the rays experience. But when I apply these rotation matrices to the vectors representing cameraRight and cameraForward, something goes wrong: holding down the A or D key results in the camera moving unexpectedly in circles or odd wavy lines.
I managed to fix the issue with cameraForward by using a different method. I added a couple of lines of code which detect when the ray at the centre has been 'fired' and then set cameraForward to that ray. Therefore cameraForward always follows the central ray being sent out, i.e. the centre of the field of view. However, I cannot do the same with cameraRight, as this vector is not in the field of view.
Before I solved cameraForward with this method, the same issue arose with moving forwards and backwards.
I also tried taking the cross product of one of the other rays with the cameraForward vector, which I thought might produce cameraRight, but to no avail: more sporadic camera movement.
I do not have a cameraUp vector either, so I cannot calculate the cross product to find cameraRight that way.
I also thought maybe the code was being run too many times and the vector was being rotated multiple times. However, moving the code elsewhere had no effect, and the method it was already in runs every frame, so I do not believe that is the issue.
Here is my code to rotate the camera right, along with the method that does the rotation.
camRight's initial value is (0, 0, 1).

camRight = normalize(rotateVector(camRight, rotationZ));
camRight = normalize(rotateVector(camRight, rotationY));

// Applies a 3x3 rotation matrix m (with rows a, b, c) to vector v
function rotateVector(v, m) {
    return new Vector3(dot(m.a, v), dot(m.b, v), dot(m.c, v));
}
I know these helpers work, because the code rotating the camera view functions correctly using the same matrices and methods (the following code):
myDirection = normalize(rotateVector(myDirection, rotationZ));
myDirection = normalize(rotateVector(myDirection, rotationY));
When the user presses A or D, the following code is run:

if (keys[65]) { // A key
    camPos = add(camPos, normalize(camRight));
    requestAnimationFrame(render);
}
if (keys[68]) { // D key
    camPos = subtract(camPos, normalize(camRight));
    requestAnimationFrame(render);
}
The camera moves forwards and backwards correctly, as previously mentioned. Initially, the camera moves left and right correctly too (as camRight's initial value of (0, 0, 1) is correct), but once I rotate the camera, the values for cameraRight go wild.
Have I assumed something wrongly, or is there a flaw in my logic somewhere?
Thanks for any help.
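One possible cause, offered as a guess rather than a diagnosis: if rotationZ and rotationY hold the accumulated rotation and are reapplied to the already-rotated camRight every frame, the rotation compounds, which would produce exactly this kind of circling drift. A sketch that instead re-derives both basis vectors from constant initial axes whenever the angles change (the initial forward axis is an assumption; the initial right axis (0, 0, 1) is from the post):

// Reuses the post's rotateVector/normalize helpers and rotation matrices.
const INITIAL_RIGHT   = new Vector3(0, 0, 1); // from the post
const INITIAL_FORWARD = new Vector3(1, 0, 0); // assumed

function updateCameraBasis() {
    // Always start from the constant axes, never from last frame's result.
    camRight   = normalize(rotateVector(rotateVector(INITIAL_RIGHT, rotationZ), rotationY));
    camForward = normalize(rotateVector(rotateVector(INITIAL_FORWARD, rotationZ), rotationY));
}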
I want to remove the clipping planes in three.js but I can't seem to find any information on how to do this. What I did find is that an orthographic camera can have a negative value for its near clipping plane.
If I put a negative value in the near clipping plane of a perspective camera, it doesn't throw an error, but it doesn't show any objects either.
I draw relatively large objects, and the near clipping plane is very frustrating when I try to explore them: they disappear completely if they are behind the camera, even when most of the object should still be visible. Is there a way to remove it so my objects always get drawn?
Here are the camera values I use:
var camera = new THREE.PerspectiveCamera(90, size.x/size.y, 0.1, 1000);
When I move, I move the camera instead of moving all the objects relative to a fixed camera; I guessed it would be more performant, but I don't think that matters for this problem.
All of the objects have position (0, 0, 0) but can have parts extending up to 10-15 units away from their position.
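For context, a perspective projection divides by the camera-space depth, so its near plane has to stay positive, while an orthographic camera has no such restriction. A sketch of the two options (w and h are assumed half-extents of the view):

// Perspective: near must be > 0; a tiny value keeps nearby geometry visible
// longer, at the cost of depth-buffer precision.
var camera = new THREE.PerspectiveCamera(90, size.x / size.y, 0.01, 1000);

// Orthographic: a negative near plane is allowed, so geometry "behind" the
// camera plane can still be drawn.
var ortho = new THREE.OrthographicCamera(-w, w, h, -h, -100, 1000);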
I am currently creating a VR web app using three.js. For the camera controls I am using the device orientation controls used here in the Google Cardboard three.js demo.
What I need to do is add keyboard controls to this (e.g. Up arrow to go forward, etc.). I've fiddled around with moving the camera on the two axes (x and z) here:
if (e.keyCode == '38') { // Up arrow
    camera.position.set(0, 10, camera.position.z + 4);
    controls.target.set(
        camera.position.x + 4,
        camera.position.y,
        camera.position.z
    );
    effect.render(scene, camera);
}
...
However, I want the character to move relative to where they are looking (e.g. you look one way, press the Up arrow, and the character moves the way you are looking), like a first-person view.
Does anyone have any ideas how this is done? I've tried using the first-person controls from three.js, but they eliminate the head tracking, which is essential for a VR game.
Any answers would be greatly appreciated. (My source code is practically just the Google Cardboard three.js demo code with a function added in to detect key presses.)
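One common approach (a sketch, not taken from the demo code) is to read the camera's current look direction each frame and translate along it; this works alongside the device-orientation head tracking because it never overwrites the rotation:

// Sketch: move along the view direction when the Up arrow is held.
var dir = new THREE.Vector3();
function moveForward(step) {
    camera.getWorldDirection(dir);  // unit vector the camera is facing
    dir.y = 0;                      // optional: stay on the horizontal plane
    dir.normalize();
    camera.position.addScaledVector(dir, step);
}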
I solved this with a different approach. I created an Object3D which moves in the scene; the model and the camera are children of this object.
I rotate the Object3D (together with the camera) and at the same time rotate the model in the opposite direction, so when the camera rotates, the object appears to keep its direction. When I want to move the object, I just translateX the parent object (with the camera) and animate the model's rotation back to 0. That did the trick.
Over long distances (I have millions of units) movement started to get jerky. The reason is lost floating-point precision.
I solved that by keeping the object's position at (0, 0, 0) and moving everything else in the opposite direction. That way the model stays at the origin with the right rotation, and the world moves around it.
Simplest example:
You are probably trying something like
scene.add(character_model);
scene.add(camera);
//camera.rotate ....
character_model.translateX(1);
character_model.rotateX(1);
//etc ...
and then trying to move the camera around the pivot (character_model), but that leads to overcomplicated mathematics.
Try:
var controls_dimension = new THREE.Object3D();
scene.add(controls_dimension);
controls_dimension.add(character_model);
controls_dimension.add(camera);

// Use the controls to rotate this parent object, not character_model
controls_dimension.rotateX(2);
// At the same time, rotate the model in the opposite direction. This
// creates the illusion of rotating the camera rather than the model.
character_model.rotateX(2 * -1);

/*
When you want to move in the camera's direction, call
controls_dimension.translateX(1); while moving, you only have to animate
the model's rotation toward controls_dimension.rotation.
*/
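And a sketch of the floating-origin idea mentioned above (the world container is an assumed name): keep the pivot group at the origin and move everything else the opposite way.

var world = new THREE.Object3D();   // holds everything except the pivot group
scene.add(world);

function moveForward(step) {
    // Instead of controls_dimension.translateX(step), shift the world back:
    var dir = new THREE.Vector3(1, 0, 0)
        .applyQuaternion(controls_dimension.quaternion); // pivot's +X axis in world space
    world.position.addScaledVector(dir, -step);
}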
I'm a newbie in three.js and WebGL.
In my application there is a 3D scene containing two objects:
1st object - a big sphere;
2nd object - a smaller sphere, located on the surface of the first.
The big sphere rotates around its axis, and it is also possible to rotate the camera around the spheres.
Since the small sphere sits on the surface of the big sphere, it rotates with it. The small sphere is visible to us when the big sphere turns it toward the camera, and it is not visible when the big sphere is in front of it.
The question is: how do I determine when the small sphere is visible to the camera and when it is not?
Also, I need to get the 2D screen coordinates of the small sphere when it is visible. How can I do this?
This can be accomplished with three.js's built-in raycaster and projection functionality. To start, try taking a look at this demo and its source code. Here is another example. In this way you can determine which objects are closest to an invisible line emitted from the camera's position.
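A sketch of that occlusion test (the object names big and small are assumed, matching the snippet below): cast a ray from the camera toward the small sphere and check whether the big sphere is hit first.

var raycaster = new THREE.Raycaster();
var dir = small.position.clone().sub(camera.position).normalize();
raycaster.set(camera.position, dir);
var hits = raycaster.intersectObjects([big, small]); // sorted nearest-first
var smallIsVisible = hits.length > 0 && hits[0].object === small;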
Otherwise, if you are simply interested in which of the two objects is closer to the camera, you can simply check which of their positions has the lesser distance to the camera's coordinates. The three-dimensional distance formula comes in handy:
bigSphereDistance = Math.sqrt( Math.pow(camera.position.x - big.position.x, 2) +
                               Math.pow(camera.position.y - big.position.y, 2) +
                               Math.pow(camera.position.z - big.position.z, 2) );

smallSphereDistance = Math.sqrt( Math.pow(camera.position.x - small.position.x, 2) +
                                 Math.pow(camera.position.y - small.position.y, 2) +
                                 Math.pow(camera.position.z - small.position.z, 2) );

// equivalent shortcut: camera.position.distanceTo(big.position)

// then check...
bigSphereDistance > smallSphereDistance ? /*case*/ : /*case*/;
Intuitively, the small sphere is visible when its distance to the camera is less than that of the big sphere, give or take a buffer of the small sphere's radius.
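As a sketch of that check (assuming both meshes were built with SphereGeometry, which keeps its radius in geometry.parameters):

var smallRadius = small.geometry.parameters.radius;
var smallIsVisible = smallSphereDistance < bigSphereDistance + smallRadius;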
To answer your second question, finding any object's 2D screen coordinates can be accomplished like this.
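In outline (a sketch of the usual projection approach, not the linked answer verbatim): project the object's world position through the camera, then map the normalized device coordinates to canvas pixels.

var pos = small.position.clone();
pos.project(camera);                        // NDC: x and y fall in [-1, 1] when visible
var halfW = renderer.domElement.width / 2;
var halfH = renderer.domElement.height / 2;
var screenX =  pos.x * halfW + halfW;
var screenY = -pos.y * halfH + halfH;       // canvas y axis points down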