I built my 3D globe and placed it into a parent bounding box / pivot like this:
var globe = new THREE.Group();
if (earthmesh) {globe.add(earthmesh);}
if (linesmesh) {globe.add(linesmesh);}
if (cloudmesh) {globe.add(cloudmesh);}
if (atmosmesh) {globe.add(atmosmesh);}
pivot = new THREE.Group();
if (globe) {pivot.add(globe);}
if (pivot) {scene.add(pivot);}
so that when applying, say, a 23 degree rotation around the Z (far to near) axis on the parent:
pivot.rotation.set(THREE.Math.degToRad(0), THREE.Math.degToRad(0), THREE.Math.degToRad(23));
the globe would rotate around the Y (down to up) axis relative to its parent in the animation function below (in case you wonder, v is just an object holding the values for .loop, i.e. 360 degrees, .rate, i.e. the rotation speed, and the .roll, .spin and .tilt variables corresponding to the X, Y and Z rotations of the globe, in degrees):
function animate()
{
    delta = clock.getDelta();
    resize();
    v.spin = (v.loop + v.spin + v.rate * delta) % v.loop; // advance the spin and wrap it to [0, v.loop) degrees
    globe.rotation.set(THREE.Math.degToRad(v.roll), THREE.Math.degToRad(v.spin), THREE.Math.degToRad(v.tilt));
    renderer.render(scene, camera);
    requestAnimationFrame(animate);
}
as the green arrow indicates below, instead of the way the red arrow indicates it would rotate if the globe had no rotated parent:
This is very simple and it works well if I do something like:
pivot.rotation.set(THREE.Math.degToRad(v.roll), THREE.Math.degToRad(0), THREE.Math.degToRad(v.tilt));
globe.rotation.set(THREE.Math.degToRad(0), THREE.Math.degToRad(v.spin), THREE.Math.degToRad(0));
in the animation function above and in my on-demand / manual rotation system, without changing the camera position / direction, as that is handled as desired by the ArcballControls library of Three.js (also visible above).
However, I would like to achieve this object rotation inside its parent by using other tools available in Three.js, like matrices, vectors, quaternions or other built-in functions, without having to set a parent for the globe and distribute the rotations between the parent and the child. The idea is to do the globe rotations normally, like in the original animation function above, and apply some of those other tools (or even formulas) to alter the rotation and produce the desired effect. How can I do that?
In the interest of clarity, the benefit of such an approach over creating a parent for the globe would be that:
I would be able to easily toggle between the "normal" (red arrow) and the "altered" (green arrow) rotation systems in an if (...) {...} condition, without having to destroy, recreate or reset the parent-child construct
I would be able to freely use the rotations around all 3 axes for both the globe and the altering method, instead of applying some rotations on the parent and some on the child
Also, in case I eventually settle on the parent-child approach above (if the other methods prove too complicated), how would I "reset" the rotation system to the "original" (i.e. red arrow) way, without changing the object's visual orientation? I mean, resetting the pivot rotation to (0, 0, 0) is clear, but what rotation would have to be applied to the globe child so that it looks the same even though its parent is no longer rotated?
Note: For simplicity, a (0, 0, 0) position is assumed for both the parent and the child groups / objects, and the parent rotation is illustrated on a single axis, but ideally the alternative should allow different hypothetical positions for the pivot and the object, while the altering rotation of the globe should be possible on all 3 axes.
P.S. Feel free to suggest improvements to how the question is formulated in order to make it clearer or more compact (sorry, I'm not an expert in technical terms, like world or local rotations or the like), but try not to rush into marking it as a duplicate or closing it before you are absolutely sure about it.
Your question is pretty long, but I think what you're looking for is "rotation order". By default, three.js applies rotations around the x axis first, then y, and finally z.
You might want to apply the earth's tilt (z) first and the daily spin (y) last; you can change this with globe.rotation.order = "ZXY"
See the THREE.Euler documentation for the official details on rotation order
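For illustration, a minimal sketch of how that suggestion could be applied to the original, parent-less setup; the only change compared to the asker's animation code is the Euler order, set once on the globe (rotation.set(x, y, z) keeps whatever order was previously assigned, so it does not need to be passed again):

globe.rotation.order = "ZXY"; // set once, e.g. right after building the globe group

function animate()
{
    delta = clock.getDelta();
    resize();
    v.spin = (v.loop + v.spin + v.rate * delta) % v.loop;
    // same angles as before; only the order in which the axis rotations are applied has changed
    globe.rotation.set(THREE.Math.degToRad(v.roll), THREE.Math.degToRad(v.spin), THREE.Math.degToRad(v.tilt));
    renderer.render(scene, camera);
    requestAnimationFrame(animate);
}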
Related
So I am simulating a forward kinematic chain in Three.js. The chain consists of cylindrical joints and static links in between. The links are static and should only be rotated and translated according to the kinematics. I get displacement matrices M with some rotation and a position vector for the joints. The displacement matrix also determines the rotation and position of the following joints, but you have to add the cylindrical rotation and translation for the joint to be in the correct place.
function updateLink(link, jointAxis, trans, rot_3, rotOffset, transOffset){
    pre_link = link.clone() // start from the old link in its zero configuration
    pre_link.matrix.setFromMatrix3(rot_3) // change the rotation due to the movement of the chain
    pre_link.matrix.setPosition(trans) // ...and the position
    pre_link.matrixAutoUpdate = false; // keep the manually composed matrix
The translation is just the translational offset along the joint axis (which is normalized):
trans = anchor + transOffSet * jointAxis
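For reference, that formula maps directly onto THREE.Vector3 operations; a small sketch, assuming anchor and jointAxis are THREE.Vector3 instances and transOffset is a number:

// trans = anchor + transOffset * jointAxis, written with Vector3 methods
trans = anchor.clone().addScaledVector(jointAxis, transOffset);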
https://i.stack.imgur.com/Ni5Xb.png
Now the link is at the bottom of the joint, but it needs to get rotated around the joint axis to get in place, while not affecting the rotation and translation it got from the displacement matrix.
I tried out a bunch of things, e.g. adding the link to the cylinder and then rotating the parent:
cylinder.add(link)
cylinder.rotateOnAxis(jointAxis, rotOffset)
With this code, the link gets rotated around an axis that goes through the origin, but I want to rotate around general lines not going through the origin (the cylinder's up axis, which is some kind of normalized vector at a certain point in the coordinate system).
Any help or ideas would be much appreciated.
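For what it's worth, a rotation about a line that does not pass through the origin is usually expressed as translate, rotate, translate back. A rough sketch with a hypothetical helper, not part of the question's code (point is any point on the line, axis its normalized direction, both in world space):

function rotateAboutLine(object, point, axis, angle) {
    object.position.sub(point); // shift so the line passes through the origin
    object.position.applyAxisAngle(axis, angle); // rotate the position about the axis
    object.position.add(point); // shift back
    object.rotateOnWorldAxis(axis, angle); // rotate the orientation by the same angle (assumes no rotated parent)
}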
I have a Three.js setup where the camera can move through a 3D world and, while moving, it will change its lookAt multiple times. So let's assume that camera.getWorldDirection() will always be more or less random.
Now I need the camera to move exactly left / right / up / down relative to camera.getWorldDirection().
You can't just use something like camera.position.x += 1 because that only applies for a world direction of Vector3(0, 0, -1). If the world direction changes to e.g. (1, -1, 0), moving the camera to the right requires changes on both the X and the Z axes.
I had a look at quaternions and 4x4 matrices but I can't get my monkey brain around them. Would be really nice if someone could help me out.
Here is a demo: https://normanwink.com/demo/room/
I found the answer here in a non-accepted answer.
You gotta use camera.translateX() and similar functions to transform its local position. That way you can still manipulate camera.position to move it around, and use the translate functions to add offsets relative to the camera's viewing angle.
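A rough sketch of that idea (the keys object and the step size are made up, not from the original post); the translate* methods move the camera along its own local axes, so "right" and "up" follow the current view direction:

var step = 0.1;
if (keys.left) camera.translateX(-step); // strafe left relative to the view
if (keys.right) camera.translateX(step); // strafe right
if (keys.up) camera.translateY(step); // move up relative to the view
if (keys.down) camera.translateY(-step); // move down
// camera.position can still be read or set directly for absolute, world-space placement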
I have created a raytracing algorithm which currently displays a triangle on screen, and am now having trouble moving the camera around correctly.
I would like my code to have the arrow keys rotate the camera, and WASD move the camera in 3D space. The camera currently rotates correctly using two rotation matrices for y and z rotations.
The Problem
Moving the camera, rather than rotating, is where the issue arises. To move the camera, I require two vectors: cameraForward and cameraRight. These vectors can be added on to the position of the camera when input is detected. These vectors will also need to change when the camera is rotated, with the same rotation that all the rays experience. But when I apply these rotation matrices to the vectors representing cameraRight and cameraForward, there seems to be an issue. Holding down the A or D key will result in the camera moving unexpectedly in circles or odd wavy lines.
I managed to fix the issue with cameraForward by using a different method. I added a couple of lines of code which calculate when the ray at the centre has been 'fired' and proceed to set cameraForward to that ray. Therefore cameraForward will always follow the central ray being sent out, i.e. the centre of the field of view. However, I cannot do the same with cameraRight, as this vector is not in the field of view.
Before solving cameraForward using this method, the same issue arose with moving forwards and backwards.
I also tried taking the cross product of one of the other rays with the cameraForward vector, which I thought might produce cameraRight, but to no avail: more sporadic camera movement.
I do not have the vector for cameraUp either, so I cannot calculate the cross product to find cameraRight.
I also thought maybe the code was being run too many times and the vector was rotated multiple times. However, moving the code elsewhere had no effect, and the method it was already in runs every frame, so I do not believe that is the issue.
Here is my code to rotate the cameraRight vector, and the method which does the rotation.
camRight's initial value is (0, 0, 1)
camRight = normalize(rotateVector(camRight, rotationZ));
camRight = normalize(rotateVector(camRight, rotationY));
// multiply vector v by matrix m, where m.a, m.b and m.c are the rows of the matrix
function rotateVector(v, m) {
    return new Vector3(dot(m.a, v), dot(m.b, v), dot(m.c, v));
}
I know this code works as the code rotating the camera view functions correctly using the same matrices and methods.
(the following code)
myDirection = normalize(rotateVector(myDirection, rotationZ));
myDirection = normalize(rotateVector(myDirection, rotationY));
When the user presses A or D, the following code is run:
if (keys[65]) {
    camPos = add(camPos, normalize(camRight));
    requestAnimationFrame(render);
}
if (keys[68]) {
    camPos = subtract(camPos, normalize(camRight));
    requestAnimationFrame(render);
}
The camera moves forwards and backwards correctly, as previously mentioned. Initially, the camera moves left and right correctly too (as its initial value of (0, 0, 1) is correct), but if I rotate the camera, the values for cameraRight go wild.
Have I assumed something wrongly? or is there a flaw in my logic somewhere?
Thanks for any help.
I am new to Three.js. I am using this example with a 6-image cube for a panorama effect, where one can pan and zoom in and out around the cube.
https://threejs.org/examples/?q=panorama#webgl_panorama_equirectangular
I want to figure out how, at the maximum zoom-in level, I can transition the user into a different panorama cube (with a different image source) mapped to this particular cube face. So I would, sort of, open the next scene to take the user further to the next level of their journey.
This is nearly what Google Street View does when you click on arrows to move forward down the road.
I do not see many examples out there. I researched and saw this may be possible by creating 2 scenes? Any ideas on how to make it functional would be appreciated.
Detecting WHEN to transition:
In the example given, the mouse events are all handled for you. The zoom is handled in onDocumentMouseWheel by adjusting the camera's fov property. "Zoom In" reduces the fov, and "Zoom Out" increases it. It would be trivial to detect when the fov has reached a minimum/maximum value, which would trigger your transition to a new scene.
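A sketch of what that check could look like (MIN_FOV, MAX_FOV and transitionToNextScene() are made-up names, and the exact wheel handling in the example may differ):

var MIN_FOV = 10, MAX_FOV = 75;

function onDocumentMouseWheel(event) {
    var fov = camera.fov + event.deltaY * 0.05; // adjust fov from the wheel delta
    camera.fov = THREE.MathUtils.clamp(fov, MIN_FOV, MAX_FOV); // THREE.Math.clamp in older releases
    camera.updateProjectionMatrix();

    if (camera.fov === MIN_FOV) {
        // fully zoomed in: trigger the transition to the next panorama here
        transitionToNextScene();
    }
}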
Detecting WHERE to transition:
The next step is determining into which new scene you will transition. You could do something hotspot-like, where you shoot a ray from the camera to see if it hits a particular place (for example a THREE.Sphere which you have strategically positioned). But for simplicity, let's assume you only have the 6 directions you mentioned, and that you're still using the example's mouse control.
Camera movement is handled in onDocumentMouseMove by updating the lat and lon variables (which appear to be in degrees). (Note: It seems lon increases without bounds, so for clarity it might be good to give it a reset value so it can only ever be between 0.0 and 359.99 or something.) You can get all math-y to check the corners better, or you could simply check your 45's:
if(lat > 45){
    // you're looking up
}
else if(lat < -45){
    // you're looking down
}
else{
    // you're looking at a side, check "lon" instead
}
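As a sketch of that last branch (the 90-degree side boundaries and the side names are assumptions, not from the example):

lon = ((lon % 360) + 360) % 360; // keep lon in [0, 360)

var side;
if (lon >= 315 || lon < 45) side = 'front';
else if (lon < 135) side = 'right';
else if (lon < 225) side = 'back';
else side = 'left';
// "side", together with the lat checks above for up/down, picks the scene to transition into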
Your look direction determines to which scene you will transition, should you encounter your maximum zoom.
Transitioning
There are lots of ways you can do this. You could simply replace the texture on the cube that makes up the panorama. You could swap in a totally different THREE.Scene. You could reset the camera--or not. You could play with the lights dimming out/in while the transition happens. You could apply some post-processing to obscure the transition effect. This part is all style, and it's all up to you.
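For example, the first option (swapping the texture on the existing panorama mesh) could look roughly like this; the file name is a placeholder, and mesh is the panorama mesh from the example:

new THREE.TextureLoader().load('panorama-next.jpg', function (texture) {
    mesh.material.map = texture; // point the existing material at the new panorama image
    mesh.material.needsUpdate = true; // tell three.js the material has changed
});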
Addressing #Marquizzo's concern:
The lighting is simply a suggestion for a transition. The example doesn't use a light source because the material is a MeshBasicMaterial (doesn't require lighting). The example also doesn't use scene.background, but applies the texture to an inverted sphere. There are other methods one can use if you simply can't affect the "brightness" of the texture (such as CSS transitions).
I added the following code to the example to make it fade in and out, just as an example.
// These are in the global scope, defined just before the call to init();
// I moved "mesh" to the global scope to access its material during the animation loop.
var mesh = null,
colorChange = -0.01;
// This code is inside the "update" function, just before the call to renderer.render(...);
// It causes the color of the material to vary between white/black, giving the fading effect.
mesh.material.color.addScalar(colorChange);
if(mesh.material.color.r + colorChange < 0 || mesh.material.color.r + colorChange > 1){ // not going full epsilon checking for an example...
    colorChange = -colorChange;
}
One could even affect the opacity value of the material to make one sphere fade away, and another sphere fade into place.
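As a sketch of that opacity idea (oldMesh and newMesh are made-up names for two overlapping panorama meshes whose materials have transparent set to true):

// inside the update/animation loop:
oldMesh.material.opacity = Math.max(0, oldMesh.material.opacity - 0.01); // fade the old panorama out
newMesh.material.opacity = Math.min(1, newMesh.material.opacity + 0.01); // fade the new one in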
My main point is that the transition can be accomplished in a variety of ways, and that it's up to #Vad to decide what kind of effect to use.
I'm trying to figure out how I can get the correct "active" tile under the mouse when I have "ramp" and +1 height tiles (see picture below).
When my world is flat, everything works no problem. Once I add a tile with a height of, say, +1, along with a ramp going back to +0, my screen -> map routine still acts as if everything were "flat".
In the picture above, the green "ramp" is the real tile I want to render and calculate mouse -> map against; however, the blue tile you see "below" it is the area which gets calculated. So if you move your mouse into any of the dark green areas, it thinks you're on another tile.
Here is my map render (very simple):
canvas.width = canvas.width; // cheap clear in firefox 3.6, does not work in other browsers
for(i=0;i<map_y;i++){
    for(j=0;j<map_x;j++){
        var xpos = (i-j)*tile_h + current_x;
        var ypos = (i+j)*tile_h/2 + current_y;
        context.beginPath();
        context.moveTo(xpos, ypos+(tile_h/2));
        context.lineTo(xpos+(tile_w/2), ypos);
        context.lineTo(xpos+(tile_w), ypos+(tile_h/2));
        context.lineTo(xpos+(tile_w/2), ypos+(tile_h));
        context.fill();
    }
}
And here is my mouse -> map routine:
ymouse=( (2*(ev.pageY-canvas.offsetTop-current_y)-ev.pageX+canvas.offsetLeft+current_x)/2 );
xmouse=( ev.pageX+ymouse-current_x-(tile_w/2)-canvas.offsetLeft );
ymouse=Math.round(ymouse/tile_h);
xmouse=Math.round(xmouse/(tile_w/2));
current_tile=[xmouse,ymouse];
I have a feeling I'll have to start over and implement a world-based map system rather than a simple screen -> map routine.
Thanks.
Your assumption is correct. In order to "pick" against world geometry, your routine needs to be aware of the world (and not just the base-level tile configuration). That is, without any concept of the height of the tiles near the one that is currently picked (by your current algorithm), there's no way to determine whether a neighboring tile (or one even further away, depending on the permitted height) should be intercepted by the picking ray.
You've already got the final possible point of your picking ray. What remains is to define the remainder of the ray, in world space, and to check that ray for intersections with world geometry.
If, like the picture, your view angle is always 45 degrees and always from the same direction, your mouse -> map routine could use an algorithm something like:
1. calculate i, j of the tile as you're doing currently (your final value of xmouse, ymouse)
2. look up the height and angle of the tile at i, j
3. given the height and angle, does this tile intersect the picking ray? If so, set lasti, lastj = i, j
4. increment/decrement i, j one step diagonally toward the viewer
5. have we fallen off the edge of the map? If so, return lasti, lastj. Otherwise go back to 2.
Depending on the maximum height of a tile, you might have to check only 2 tiles, rather than going all the way to the edge of the map.
Step 3 is the tricky part, and depends on your world geometry. Draw some triangles and you should be able to figure it out. Or you might try looking at the function intersect_quadrilateral_ray() here. A rough sketch of the overall loop follows below.
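A rough sketch of the walk described by the numbered steps, in the same plain-JavaScript style as the question; insideMap(), tileAt() and rayIntersectsTile() are hypothetical helpers that depend on your map data and view setup:

function pickTile(xmouse, ymouse) {
    var i = xmouse, j = ymouse; // step 1: the flat-ground guess you already compute
    var lasti = i, lastj = j;

    while (insideMap(i, j)) { // step 5: stop once we fall off the map
        var tile = tileAt(i, j); // step 2: height and angle of this tile
        if (rayIntersectsTile(tile, i, j)) { // step 3: hypothetical ray-vs-tile test
            lasti = i;
            lastj = j;
        }
        i++; // step 4: one step diagonally toward the viewer
        j++; // (the sign of the step depends on your projection)
    }
    return [lasti, lastj];
}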