Consider this simple example of a cube centered on the origin of the world. Since the camera is looking directly at it, the rendered image shows the cube in the middle of the 2D output with only its front face visible. I'd like to have control over that cube's placement, i.e. I'd like to shift the rendered output up and to the left by some amount. That way I can, for example, shift everything by half of the canvas's width and height and have the cube centered on the top-left corner of the rendered output.
To be clear: I don't want to move the camera or the object in the 3D world (nor the canvas). I just want the rendered result itself to shift, and I'd like to define this shift in 2D screen units rather than in 3D space. This means that after the shift the sides of the cube will still not be visible, only the front face, as it is currently. It also means that if I shift the output to the left, geometry on the right side of the view that was previously out of frame would now shift into view and get rendered.
In some 3D software I've encountered the ability to do this by modifying the camera's X and Y "center shift". Maybe in three.js I'd have to do it by applying a transformation to the camera or to the renderer? I'm not familiar enough with the library to know where to dig.
There's no relevant code to share, but StackOverflow won't let me submit this question without some code ;)
You can offset the camera's view using a pattern like this:
var fullWidth = window.innerWidth;
var fullHeight = window.innerHeight;
var xPixels = 600; // x offset of the sub-view within the notional full image, in pixels
var yPixels = 200; // y offset of the sub-view within the notional full image, in pixels
camera.setViewOffset( fullWidth, fullHeight, xPixels, yPixels, fullWidth, fullHeight );
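For the specific shift described in the question (moving the cube from the centre of the output to its top-left corner), offsetting by half the canvas size should do it. A minimal sketch, assuming a full-window renderer:

var fullWidth = window.innerWidth;
var fullHeight = window.innerHeight;
// view the region whose top-left corner is the centre of the notional full image,
// so whatever was at the centre now appears at the top-left of the output
camera.setViewOffset( fullWidth, fullHeight, fullWidth / 2, fullHeight / 2, fullWidth, fullHeight );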
To undo it, call
camera.clearViewOffset();
See the docs for more info about this method and multi-monitor setups. It works for OrthographicCamera, too.
three.js r.84
Why don't you just use canvas translate?
Just do something like this:
// adjust for camera
ctx.translate(-this.camera.x, -this.camera.y);
// render scene here
// end of camera view
ctx.translate(this.camera.x, this.camera.y);
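A small variation on the same idea: the 2D context's state stack can undo the translate for you, so the camera offset can't get out of sync. A sketch, assuming the same this.camera fields:

ctx.save();                                    // remember the untransformed state
ctx.translate(-this.camera.x, -this.camera.y); // adjust for camera
// render scene here
ctx.restore();                                 // back to the untransformed state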
Related
What I have is an OrthographicCamera set up so that it has an isometric view of the scene, with OrbitControls added to allow panning and zooming but not rotation.
What I'd like to have is a button that will centre the objects in a scene and zoom the OrthographicCamera so that the objects fit within the canvas area while keeping the isometric view, i.e. the angle between camera.position and the camera.lookAt (controls.target) point.
What I’ve tried is to set the controls.target at the centre of the bounding box of the objects in the scene.
I have 2 problems with the code at the moment.
The first is that I couldn't work out how to calculate the zoom level needed to make sure the objects in the scene are all in view; I've hard-coded a value for now.
The second is that, with the current code, if the camera is panned so that the objects appear nearly off the screen (either up or down), then the angle of the camera changes when they are centred. This was also happening when the camera was panned far left or right, but setting the max and min azimuth angles seems to prevent it.
[image: camera rotates after centring]
The image above shows the scene when loaded, and then after centring while the camera was panned so that the objects were going off the top of the screen.
I have tried a number of ways to do this after looking at answers to similar questions, but I am still having problems getting it to work.
function fitDrawingToPage(){
    // variables Bbox etc. are declared outside the function
    Bbox = new THREE.Box3();
    for (const object of sceneObjects) Bbox.expandByObject(object);

    newTarget = Bbox.getCenter(new THREE.Vector3());
    controls.target.set(newTarget.x, newTarget.y, newTarget.z);
    controls.update();

    camera.zoom = 0.5; // hard-coded for now (problem 1 above)
    camera.updateProjectionMatrix();
    camera.updateMatrix();
    render();
}
current example of code in jsfiddle
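For the first problem, one way to derive the zoom instead of hard-coding it is to fit the scene's bounding sphere into the orthographic frustum. This is only a sketch (not from the original post), assuming the Bbox and camera used above; the padding factor is an arbitrary margin:

// fit the scene's bounding sphere into the camera's view
var sphere = Bbox.getBoundingSphere(new THREE.Sphere());
var viewWidth = camera.right - camera.left;   // frustum width at zoom = 1
var viewHeight = camera.top - camera.bottom;  // frustum height at zoom = 1
var padding = 1.1;                            // leave roughly a 10% margin around the objects
camera.zoom = Math.min(viewWidth, viewHeight) / (2 * sphere.radius * padding);
camera.updateProjectionMatrix();

Using the bounding sphere rather than the box keeps the fit independent of the isometric viewing angle, at the cost of slightly over-estimating the required area.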
I have created a raytracing algorithm which currently displays a triangle on screen, and am now having trouble moving the camera around correctly.
I would like my code to have the arrow keys rotate the camera, and WASD move the camera in 3D space. The camera currently rotates correctly using two rotation matrices for y and z rotations.
The Problem
Moving the camera, rather than rotating it, is where the issue arises. To move the camera, I require two vectors: cameraForward and cameraRight. These vectors can be added to the position of the camera when input is detected. They also need to change when the camera is rotated, by the same rotation that all the rays experience. But when I apply these rotation matrices to the vectors representing cameraRight and cameraForward, there seems to be an issue: holding down the A or D key results in the camera moving unexpectedly in circles or odd wavy lines.
I managed to fix the issue with cameraForward by using a different method. I added a couple of lines of code which detect when the ray at the centre has been 'fired' and then set cameraForward to that ray. Therefore cameraForward always follows the central ray being sent out, i.e. the centre of the field of view. However, I cannot do the same with cameraRight, as this vector is not in the field of view.
Before solving cameraForward this way, the same issue arose with moving forwards and backwards.
I also tried taking the cross product of one of the other rays with the cameraForward vector, which I thought might produce cameraRight, but to no avail: the camera movement was even more sporadic.
I do not have the vector for cameraUp either, so I cannot calculate the cross product to find cameraRight.
I also thought maybe the code was being run too many times and the vector was being rotated multiple times. However, moving the code elsewhere had no effect, and the method it was already in runs every frame, so I do not believe that is the issue.
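For reference, one way to sidestep needing a cameraUp vector at all is to recompute cameraRight each frame from cameraForward and a fixed world-up vector. This is only a sketch, not part of the original code: it assumes a Y-up, right-handed setup, a camForward variable holding the central-ray direction, and a hypothetical cross helper (the sign of the result may need flipping for other conventions):

// hypothetical helper: standard 3D cross product
function cross(a, b) {
    return new Vector3(
        a.y * b.z - a.z * b.y,
        a.z * b.x - a.x * b.z,
        a.x * b.y - a.y * b.x
    );
}

var worldUp = new Vector3(0, 1, 0);                // fixed up axis, never rotated
camRight = normalize(cross(camForward, worldUp));  // recomputed from forward every frame

Because camRight is derived from camForward each frame rather than rotated cumulatively, it cannot drift out of sync with the view.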
Here is my code to rotate the camera right and the method which does the rotation.
camRight's initial value is (0, 0, 1).
camRight = normalize(rotateVector(camRight, rotationZ));
camRight = normalize(rotateVector(camRight, rotationY));
function rotateVector(v, m) {
    // m.a, m.b and m.c are the rows of the rotation matrix
    return new Vector3(dot(m.a, v), dot(m.b, v), dot(m.c, v));
}
I know this code works as the code rotating the camera view functions correctly using the same matrices and methods.
(the following code)
myDirection = normalize(rotateVector(myDirection, rotationZ));
myDirection = normalize(rotateVector(myDirection, rotationY));
When the user presses A or D, the following code is run:
if (keys[65]) {
    camPos = add(camPos, normalize(camRight));
    requestAnimationFrame(render);
}

if (keys[68]) {
    camPos = subtract(camPos, normalize(camRight));
    requestAnimationFrame(render);
}
The camera moves forwards and backwards correctly, as previously mentioned. Initially, the camera moves left and right correctly too (as its initial value of (0, 0, 1) is correct), but if I rotate the camera, the values for cameraRight go wild.
Have I assumed something wrongly, or is there a flaw in my logic somewhere?
Thanks for any help.
I am currently creating a VR web app using three.js. For the camera controls I am using the device orientation controls used here in the Google Cardboard three.js demo.
What I need to do is add keyboard controls to this (e.g. up arrow to go forward, etc.). I've fiddled around with moving the camera on two axes (x and z) here:
if (e.keyCode == '38') {
    camera.position.set(0, 10, camera.position.z + 4);
    controls.target.set(
        camera.position.x + 4,
        camera.position.y,
        camera.position.z
    );
    effect.render(scene, camera);
    ...
However, I want to make the character move relative to where they are looking (e.g. you look one way, press the up arrow, and the character moves the way you are looking), like a first-person view.
Does anyone have any ideas on how this is done? I've tried using the first-person controls from three.js, but this eliminates the head tracking, which is essential for a VR game.
Any answers would be greatly appreciated. (My source code is practically just the Google Cardboard three.js demo code with a function added in to detect key presses.)
I solved this with a different approach. I created an Object3D which moves around the scene; the model and the camera are children of this object.
I rotate the Object3D (and with it the camera) and at the same time rotate the model in the opposite direction, so when the camera rotates, the model appears to keep its direction. When I want to move, I just call translateX on the Object3D (which carries the camera with it) and set the model's rotation back to 0. That did the trick.
Over long distances (I have millions of units) the movement started to get jerky. The reason is lost floating-point precision.
I solved that by keeping the object's position at 0,0,0 and moving everything else in the opposite direction. That way the model stays at the 0,0,0 coordinates with the right rotation, and the world moves around it.
Simplest example:
You are probably trying something like:
scene.add(character_model);
scene.add(camera);
//camera.rotate ....
character_model.translateX(1);
character_model.rotateX(1);
//etc ...
and now you are trying to move the camera around the pivot (character_model), but that makes the mathematics overcomplicated.
Try:
var controls_dimension = new THREE.Object3D();
scene.add(controls_dimension);
controls_dimension.add(character_model);
controls_dimension.add(camera);

// use the controls to rotate this object, not character_model
controls_dimension.rotateX(2);
// at the same time, rotate the model in the opposite direction;
// this creates the illusion of rotating the camera rather than the model
character_model.rotateX(2 * -1);
/*
When you want to move in the camera's direction, just translate the parent:
controls_dimension.translateX(1) moves the whole group, and you then only
need to animate the model's rotation toward controls_dimension.rotation.
*/
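For comparison, another commonly used way to move relative to where the camera is looking (independent of the parent-object setup above) is to read the camera's world direction and translate along it. This is not part of the answer above, just a sketch; moveSpeed is a hypothetical variable:

var direction = new THREE.Vector3();
camera.getWorldDirection(direction);   // unit vector pointing where the camera looks
direction.y = 0;                       // optional: keep the movement in the horizontal plane
direction.normalize();
camera.position.addScaledVector(direction, moveSpeed); // step forward by moveSpeed

This keeps the device-orientation head tracking intact, because it only reads the camera's orientation instead of overriding it.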
Using a canvas covering the entire view, I want to draw a rotating triangle in the area marked by a div container.
However, Firefox does not always draw the triangle inside the div placeholder while scrolling. See the picture (ignore the repeating background image) and the demo. Chromium renders the triangle correctly while scrolling.
Is my code wrong, or is the Firefox implementation not fast enough to render the triangle at the correct position while scrolling?
Algorithm:

Initialization:
- create a large canvas covering the entire view, get the WebGL context
- allocate buffers for rendering the triangle

Rendering loop:
- if the div placeholder is in the current view, then:
  - set up rendering coordinates with gl.viewport to match the div placeholder's position
  - render the triangle (the actual orientation is derived from Date())
Code:
var triangle;
var gl;

function drawScenes() {
    gl.clear(gl.COLOR_BUFFER_BIT);
    if (isScrolledIntoView('#div0')) {
        // set up viewport for rendering on top of the div placeholder
        var docViewTop = $(window).scrollTop();
        gl.viewport(0, $('#canvas').height() + docViewTop - 400, 200, 200);
        triangle.render();
    }
    requestAnimFrame(drawScenes);
}

function start() {
    createOverlayCanvas('canvas');
    gl = initGL('canvas');
    triangle = new Triangle(gl); // sets up the buffers
    drawScenes();
}
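A side note for the multi-placeholder case mentioned below: gl.viewport only controls where geometry is mapped, while gl.clear still affects the whole canvas. A scissor rectangle confines clearing (and drawing) to one placeholder at a time. This is just a sketch, not part of the original code; x, y, w, h stand for the placeholder rectangle in the canvas's bottom-left-origin coordinates, the same values passed to gl.viewport:

gl.enable(gl.SCISSOR_TEST);
gl.scissor(x, y, w, h);   // clip clearing and drawing to the placeholder
gl.viewport(x, y, w, h);  // map the triangle's clip space onto the placeholder
gl.clear(gl.COLOR_BUFFER_BIT);
triangle.render();
gl.disable(gl.SCISSOR_TEST);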
I am using Ubuntu 11.10 with Nvidia proprietary drivers.
The motivation behind this is that I want to have multiple placeholders and render different objects into each one of them.
Why am I not using multiple canvases?
Initializing one canvas is faster.
When drawing one specific object to multiple placeholders, the object's data can be shared. With multiple canvases we can't share state between contexts.
Thank you for your help.
I tested it right now with my Firefox 9.0.1 (on Windows 7) and it works exactly like Chrome...
Try updating your Firefox or report the bug to Mozilla...
I'm trying to figure out how I can get the correct "active" tile under the mouse when I have "ramp" and +1 height tiles (see picture below).
When my world is flat, everything works no problem. Once I add a tile with a height of say +1, along with a ramp going back to +0, my screen -> map routine still behaves as if everything were "flat".
In the picture above, the green "ramp" is the real tile I want to render and calculate mouse -> map, however the blue tile you see "below" it is the area which gets calculated. So if you move your mouse into any of the dark green areas, it thinks you're on another tile.
Here is my map render (very simple):
canvas.width = canvas.width; // cheap clear in Firefox 3.6, does not work in other browsers
for (i = 0; i < map_y; i++) {
    for (j = 0; j < map_x; j++) {
        var xpos = (i - j) * tile_h + current_x;
        var ypos = (i + j) * tile_h / 2 + current_y;

        context.beginPath();
        context.moveTo(xpos, ypos + (tile_h / 2));
        context.lineTo(xpos + (tile_w / 2), ypos);
        context.lineTo(xpos + tile_w, ypos + (tile_h / 2));
        context.lineTo(xpos + (tile_w / 2), ypos + tile_h);
        context.fill();
    }
}
And here is my mouse -> map routine:
// invert the isometric projection, assuming a flat map
ymouse = ((2 * (ev.pageY - canvas.offsetTop - current_y) - ev.pageX + canvas.offsetLeft + current_x) / 2);
xmouse = (ev.pageX + ymouse - current_x - (tile_w / 2) - canvas.offsetLeft);
ymouse = Math.round(ymouse / tile_h);
xmouse = Math.round(xmouse / (tile_w / 2));
current_tile = [xmouse, ymouse];
I have a feeling I'll have to start over and implement a world based map system rather than a simple screen -> map routine.
Thanks.
Your assumption is correct. In order to "pick" against world geometry, your routine needs to be aware of the world (and not just the base-level tile configuration). That is, without any concept of the height of the tiles near the one currently picked by your current algorithm, there's no way to determine whether a neighbouring tile (or one even further away, depending on the permitted height) should intercept the picking ray.
You've already got the final possible point of your picking ray. What remains is to define the rest of the ray in world space and check that ray for intersections with world geometry.
If, like the picture, your view angle is always 45 degrees and always from the same direction, your mouse -> map routine could use an algorithm something like:
1. calculate i,j of the tile as you're doing currently (your final values of xmouse, ymouse)
2. look up the height and angle of the tile at i,j
3. given the height and angle, does this tile intersect the picking ray? If so, set lasti, lastj = i, j
4. increment/decrement i,j one step diagonally, toward the viewer
5. have we fallen off the edge of the map? If so, return lasti, lastj. Otherwise go back to 2.
Depending on the maximum height of a tile, you might have to check only 2 tiles, rather than going all the way to the edge of the map.
Step 3 is the tricky part, and it depends on your world geometry. Draw some triangles and you should be able to figure it out. Or you might try looking at the function intersect_quadrilateral_ray() here.
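A rough sketch of the walk above (not from the original answer). The helpers tileAt(i, j) and tileIntersectsPick(tile, i, j, mouseX, mouseY) are hypothetical: the first returns the tile's height/ramp data, the second is the step-3 geometry test. Given how the render loop above lays out the map, stepping "toward the viewer" means incrementing i and j together; flip the direction if your axes run the other way:

function pickTile(flatI, flatJ, mouseX, mouseY) {
    var i = flatI, j = flatJ;           // step 1: the flat-map guess
    var lastI = flatI, lastJ = flatJ;
    while (i < map_y && j < map_x) {    // step 5: stop when we leave the map
        var tile = tileAt(i, j);        // step 2: height and angle at i, j
        if (tileIntersectsPick(tile, i, j, mouseX, mouseY)) { // step 3
            lastI = i; lastJ = j;       // nearer tiles occlude, so keep the latest hit
        }
        i++; j++;                       // step 4: one step diagonally toward the viewer
    }
    return [lastI, lastJ];
}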