I'm playing around with the Phaser framework and trying to make a simple "fall down" game. The goal is to fall fast enough without getting pushed out at the top of the screen:
To accomplish this I set the canvas size to 800x600:
var game = new Phaser.Game(800, 600, ...);
and resize the world in create() to 800x6000:
game.world.resize(800, 6000);
In update() I move the camera down by 1:
game.camera.y += 1;
and check if the ball is still inside the camera:
if (!ball.inCamera) {
// ...
}
My Question is:
On the left and right, the world borders limit the movement of the ball (because of that, the ball can't leave the camera there). How can I prevent the ball from "falling" out of the camera at the bottom, but still allow it to be pushed out of the camera at the top?
Is there something similar to
game.physics.arcade.checkCollision.down = true;
but for the camera bounds?
Edit
This is how the ball is created:
ball = game.add.sprite(game.world.width / 2, 20, "ball");
game.physics.arcade.enable(ball);
ball.body.gravity.y = 1000;
To move the ball I check for key presses and then change the ball.body.velocity parameters.
Physics has nothing to do with the camera (and indeed it shouldn't). An elegant solution would be to create an invisible body, align its top with the bottom camera bound, and move it with the camera, thus still allowing the ball to be moved out of the view at the top but not allowing it to fall out at the bottom.
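A minimal sketch of that idea, assuming Phaser 2 arcade physics; the floor sprite and the sizes used here are illustrative, not part of the question's code:
// In create(): an invisible, immovable body spanning the width of the view
floor = game.add.sprite(0, game.camera.y + game.camera.height, null);
game.physics.arcade.enable(floor);
floor.body.setSize(game.camera.width, 32);
floor.body.immovable = true;      // the ball shouldn't push it away
floor.body.allowGravity = false;

// In update(), after moving the camera: keep the floor glued to the bottom edge
floor.body.y = game.camera.y + game.camera.height;
game.physics.arcade.collide(ball, floor);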
Related
What I have is an OrthographicCamera set up so that it has an isometric view of the scene, and OrbitControls added to allow for panning around and zooming but not for rotation.
What I’d like to have is a button that will centre the objects in a scene and zoom the OrthographicCamera so that the objects fit within the canvas area while keeping the isometric view, i.e. the angle between the camera.position and the camera.lookAt (control.target) point.
What I’ve tried is to set the controls.target at the centre of the bounding box of the objects in the scene.
I have 2 problems with the code at the moment.
The first is that I couldn't work out how to calculate the zoom level needed to make sure the objects in the scene are all in view. I've hard-coded a value for now.
The second is that with the current code, if the camera is panned so that the objects are nearly off the screen, either up or down, then the angle of the camera changes when centring. This was also happening when the camera was panned far left or right, but setting the max and min azimuth angle seems to prevent that.
camera rotates after centring
The image above shows the scene when loaded, and then after centring, when the camera had been panned so that the objects were going off the top of the screen.
I have tried a number of ways to do this after looking at answers to similar questions, but am still having problems getting it to work.
function fitDrawingToPage(){
// Variables Bbox etc are set outside the function
Bbox = new THREE.Box3();
for (const object of sceneObjects) Bbox.expandByObject(object);
newTarget = Bbox.getCenter(new THREE.Vector3());
controls.target.set( newTarget.x, newTarget.y, newTarget.z );
controls.update();
camera.zoom = 0.5;
camera.updateProjectionMatrix();
camera.updateMatrix();
render();
}
current example of code in jsfiddle
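For the zoom, one rough approach is to compare the bounding-box size with the camera frustum size. A sketch that assumes camera.left/right/top/bottom describe the view at zoom = 1 and that ignores how the isometric angle projects the box; the 1.1 padding factor is arbitrary:
var size = Bbox.getSize(new THREE.Vector3());
var maxDim = Math.max(size.x, size.y, size.z);
var viewWidth = camera.right - camera.left;    // frustum extents at zoom = 1
var viewHeight = camera.top - camera.bottom;
camera.zoom = Math.min(viewWidth, viewHeight) / (maxDim * 1.1);
camera.updateProjectionMatrix();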
I'm using the translateZ function so that the camera can move forward/backward along its look-at direction. But translateZ moves it relatively, so camera.translateZ(10); followed by another camera.translateZ(10); will move the camera by 20 units in total. Because of this, I can't use it in a tween animation where I tween the parameter passed to that function.
So I was wondering if there's a way to translate along Z in absolute coordinates. For example, if I'm looking at an object at the origin, I'd like to set distance_from_origin while still maintaining the look-at direction:
camera.absTranslateZ(distance_from_origin);
I use something similar on all my camera rigs. I only need 2 variables: the focus position (vec3) and the z-distance. I then reset the position on each frame:
var focusPos = new THREE.Vector3();
var zDist = 10;
// Called once per frame
function update() {
// reset camera position
camera.position.copy(focusPos);
// Set camera rotations if needed
// camera.rotation.set(rotX, rotY, 0);
// Apply z-translation to dolly in/out
camera.translateZ(zDist);
}
I like this approach because it lets you change and even tween the focus position, so you're not always looking at the origin. However, it depends on how you're calculating your camera rotations; you might run into gimbal lock if you need more complex rotations.
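For example, tweening zDist by hand to dolly out over one second (a sketch; no tween library is assumed, and the target value 30 is arbitrary):
var startZ = zDist, endZ = 30, startTime = performance.now();
function animateDolly(now) {
  var t = Math.min((now - startTime) / 1000, 1); // 0..1 over one second
  zDist = startZ + (endZ - startZ) * t;          // linear interpolation
  if (t < 1) requestAnimationFrame(animateDolly);
}
requestAnimationFrame(animateDolly);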
I have created a raytracing algorithm which currently displays a triangle on screen, and am now having trouble moving the camera around correctly.
I would like my code to have the arrow keys rotate the camera, and WASD move the camera in 3D space. The camera currently rotates correctly using two rotation matrices for y and z rotations.
The Problem
Moving the camera, rather than rotating it, is where the issue arises. To move the camera, I require two vectors - cameraForward and cameraRight. These vectors can be added to the position of the camera when input is detected. These vectors will also need to change when the camera is rotated, with the same rotation as all the rays experience. But when I apply these rotation matrices to the vectors representing cameraRight and cameraForward, there seems to be an issue. Holding down the A or D key results in the camera moving unexpectedly in circles or odd wavy lines.
I managed to fix the issue with cameraForward by using a different method. I added a couple of lines of code which calculate when the ray at the centre has been 'fired' and then set cameraForward to that ray. Therefore cameraForward will always follow the central ray being sent out, i.e. the centre of the field of view. However, I cannot do the same with cameraRight, as this vector is not in the field of view.
Before solving cameraForward using this method the same issue arose with moving forwards and backwards.
I also tried taking the cross product of one of the other rays with the cameraForward vector, which I thought might produce cameraRight, but to no avail - more sporadic camera movement.
I do not have the vector for cameraUp either so cannot calculate the cross product to find cameraRight.
I also thought maybe the code was being run too many times and the vector was being rotated multiple times. However, moving the code elsewhere had no effect, and the method it was already in runs every frame, so I do not believe that is the issue.
Here is my code to rotate the camera right and the method which does the rotation.
camRight's initial value is (0, 0, 1)
camRight = normalize(rotateVector(camRight, rotationZ));
camRight = normalize(rotateVector(camRight, rotationY));
function rotateVector(v, m) {
return new Vector3(dot(m.a, v), dot(m.b, v), dot(m.c, v));
}
I know this code works as the code rotating the camera view functions correctly using the same matrices and methods.
(the following code)
myDirection = normalize(rotateVector(myDirection, rotationZ));
myDirection = normalize(rotateVector(myDirection, rotationY));
When the user presses A or D the following code is run
if (keys[65]) {
camPos = add(camPos, normalize(camRight));
requestAnimationFrame(render);
}
if (keys[68]) {
camPos = subtract(camPos, normalize(camRight));
requestAnimationFrame(render);
}
The camera moves forwards and backwards correctly, as previously mentioned. Initially, the camera moves left and right correctly too (as its initial value of (0, 0, 1) is correct), but if I rotate the camera, the values for cameraRight go wild.
Have I assumed something wrongly? Or is there a flaw in my logic somewhere?
Thanks for any help.
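(For reference, one way around the missing cameraUp is to recompute the right vector from the already-working forward vector and a fixed world up each time the camera rotates. A sketch, assuming a cross helper alongside the normalize/dot helpers above; the world-up axis and the operand order depend on the handedness and axis convention in use:)
function cross(a, b) {
  return new Vector3(
    a.y * b.z - a.z * b.y,
    a.z * b.x - a.x * b.z,
    a.x * b.y - a.y * b.x
  );
}
var worldUp = new Vector3(0, 1, 0);                   // assumed world-up axis
camRight = normalize(cross(cameraForward, worldUp));  // swap the operands if the result points left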
I am currently creating a VR web app using three.js. As the camera controls I am using the device orientation controls used here in the google cardboard three.js demo.
What I need to do is add keyboard controls to this (e.g. Up arrow to go forward, etc.). I've fiddled around with moving the camera on the two axes (x and z) here:
if (e.keyCode == '38') {
camera.position.set(0, 10, camera.position.z+4);
controls.target.set(
camera.position.x +4,
camera.position.y,
camera.position.z
);
effect.render(scene, camera);
...
However, I want to make the character move relative to where they are looking (e.g. you look one way, press the Up arrow, and the character moves the way you are looking), like a first-person view.
Does anyone have any ideas on how this is done? I've tried using the first person controls from three.js, but this eliminates the head tracking, which is essential for a VR game.
Any answers would be greatly appreciated. (My source code is practically just the Google Cardboard three.js demo code with a function added in to detect key presses.)
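(For reference, one common pattern for moving along the current view direction in three.js is to query the camera's world direction each frame. A sketch, not the demo's code, with moveSpeed and the key handling assumed:)
if (e.keyCode == '38') {                  // Up arrow
  var dir = new THREE.Vector3();
  camera.getWorldDirection(dir);          // unit vector the camera is looking along
  dir.y = 0;                              // stay on the ground plane
  dir.normalize();
  camera.position.add(dir.multiplyScalar(moveSpeed));
}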
I solved this with a different approach. I created an Object3D which moves in the scene; the model and the camera are children of this object.
I rotate the Object3D (with the camera) and at the same time rotate the model in the opposite direction. When I rotate the camera, the object appears to keep its direction. When I want to move the object, I just translateX the Object3D (with the camera) and animate the model's rotation back to 0. That did the trick.
Over long distances (I have millions of units) the movement started to get jerky. The reason is lost floating-point precision.
I solved that by keeping the object's position at 0,0,0 and moving everything else in the opposite direction. That way your model stays at the 0,0,0 coordinates with the right rotation, and the world moves around it.
Simplest example:
You are probably trying something like:
scene.add(character_model);
scene.add(camera);
//camera.rotate ....
character_model.translateX(1);
character_model.rotateX(1);
//etc ...
and now you are trying to move the camera around the pivot (character_model), but that leads to overcomplicated mathematics.
Try:
var controls_dimension = new THREE.Object3D();
scene.add(controls_dimension);
controls_dimension.add(character_model);
controls_dimension.add(camera);
//use controls to rotate with this object, not with character_model
controls_dimension.rotateX(2);
// at the same time, rotate the model in the opposite direction; this creates the
// illusion of rotating the camera, not the model.
character_model.rotateX(2*-1);
/*
When you want to move in the camera's direction, call controls_dimension.translateX(1)
and you are moving (you then only need to animate the model's rotation back to match
controls_dimension.rotation).
*/
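A minimal sketch of the floating-origin trick mentioned above, assuming everything except the player rig is grouped under a world Object3D (a name made up here) and that local X is the forward direction, as in the example:
// Instead of controls_dimension.translateX(1), keep the rig at the origin
// and shift the world group by the same amount in the opposite direction.
var forward = new THREE.Vector3(1, 0, 0).applyQuaternion(controls_dimension.quaternion);
world.position.sub(forward); // one unit per step, mirroring translateX(1)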
I am working on a school project that includes these conditions:
Make a maze using only JS, HTML5 and CSS.
Make a torch effect around the character. The light must not shine through walls.
I started making this game with the use of canvas.
I have succeeded in making a torch effect around the character, as shown here:
http://people.inf.elte.hu/tunyooo/web2/HTML5-Maze.html
However, I cannot make it NOT light through walls.
I am fairly sure I should do something like this:
Start a loop in all directions from the current position of the character until it reaches the view distance OR until context.getImageData() returns [0,0,0,255]. This way, I could get the character's distance from the northern, eastern, western and southern walls.
Then, I could light the maze around the character with a (viewdistance-DistanceFrom*Wall) rectangle.
Unfortunately, after 15 hours of thinking about this I am running out of ideas on how to make it work.
Any tips are appreciated.
A simpler way of doing this is (PS: I get a "forbidden" error on the link provided, so I cannot see what you did):
Have a matte version of the maze: a transparent and white image where white represents the allowed drawing areas. This matte image should match the maze image in size and placement.
Create an off-screen canvas the size of the torch image
When you need to draw the torch, first draw the matte image into the off-screen canvas, offset so the correct part of the matte is drawn. For example: if the torch will be drawn at position 100,100 on the maze, then draw the matte into the off-screen canvas at -100,-100 - or simply create the canvas the same size as the maze and draw the matte at 0,0 and the torch at its relative position. More memory is used, but it is simpler to maintain.
Change composite mode to source-in and then draw the torch. Change composite mode back to copy for the next draw.
Now your torch is clipped to fit within the wall. Now simply draw the off-screen canvas to your main canvas instead of the torch.
Note: it's important that the torch is made such that it cannot reach the other side of the wall (diameter size), or it will shine "under" the maze walls - this can be solved in other ways, though, by using mattes for different zones chosen depending on the player's position (not shown here).
To move in the demo below just move the mouse over the canvas area.
Live demo
function mousemoved(e) {
var rect = canvas.getBoundingClientRect(), // adjust mouse pos.:
x = e.clientX - rect.left - iTorch.width * 0.5, // center of torch
y = e.clientY - rect.top - iTorch.height * 0.5;
octx.drawImage(iMatte, 0, 0); // draw matte to off-screen
octx.globalCompositeOperation = 'source-in'; // change comp mode
octx.drawImage(iTorch, x, y); // clip torch
octx.globalCompositeOperation = 'copy'; // change comp mode for next
ctx.drawImage(iMaze, 0, 0); // redraw maze
ctx.drawImage(ocanvas, 0, 0); // draw clipped torch on top
}
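The snippet above assumes the canvases and images already exist. A minimal sketch of that setup, using the maze-sized off-screen canvas variant that the code implies (the file names and the single onload are simplifications):
var canvas = document.querySelector('canvas'),
    ctx = canvas.getContext('2d'),
    ocanvas = document.createElement('canvas'),  // off-screen canvas, same size as the maze
    octx = ocanvas.getContext('2d');
ocanvas.width = canvas.width;
ocanvas.height = canvas.height;

var iMaze = new Image(), iMatte = new Image(), iTorch = new Image();
iMaze.src = 'maze.png';    // the maze drawing
iMatte.src = 'matte.png';  // white where light is allowed, transparent over walls
iTorch.src = 'torch.png';  // radial torch gradient
iTorch.onload = function() {
  canvas.addEventListener('mousemove', mousemoved);
};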
In the demo the torch is of more or less random size - a bit too big, in fact; something I made quick and dirty. But try to move within the maze path to see it being clipped. The off-screen canvas is shown beside the main canvas so you can see what goes on.
An added bonus is that you could use the same matte for hit-testing.
Make your maze hallways into clipping paths.
Your torch effects will be contained within the clipping paths.
[ Addition to answer based on questioner's comments ]
To create a clipping path from your existing maze image:
Open up your maze image in a paint program. The mouse cursor's X/Y position is usually displayed as you move over the maze image.
Record the top-left and bottom-right of each maze hallway in an array.
var hallways=[];
hallways.push({left:100, top:50, right:150, bottom:65}); // for each hallway
Listen for mouse events and determine which hallway the mouse is in.
// hallwayIndex is the index of the hallway the mouse is inside
var hallwayIndex=-1;
// x=mouse's x coordinate, y=mouse's y coordinate
for(var i=0;i<hallways.length;i++){
var hall=hallways[i];
if(x>=hall.left &&
x<=hall.right &&
y>=hall.top &&
y<=hall.bottom)
{ hallwayIndex=i; }
}
Redraw the maze on the canvas
Create a clipping path for the current hallway:
var width=hall.right-hall.left;
var height=hall.bottom-hall.top;
ctx.beginPath();
ctx.rect(hall.left,hall.top,width,height);
ctx.clip();
Draw the player+torch into the hallway (the torch will not glow thru the walls).
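One caveat worth noting: ctx.clip() is cumulative, so wrapping the clip in save()/restore() keeps each frame independent. A sketch combining the steps above, assuming the maze and torch images and the torch position (x, y) are available as in the earlier snippet:
ctx.drawImage(iMaze, 0, 0);                    // redraw the maze
var hall = hallways[hallwayIndex];
ctx.save();                                    // so the clip can be undone next frame
ctx.beginPath();
ctx.rect(hall.left, hall.top, hall.right - hall.left, hall.bottom - hall.top);
ctx.clip();
ctx.drawImage(iTorch, x, y);                   // the torch only shows inside this hallway
ctx.restore();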
There is a brilliant article on this topic: http://www.redblobgames.com/articles/visibility/
Doing it accurately like that, however, is a lot of work. If you want to go with a quick and dirty solution, I would suggest the following: build the world from large blocks (think retro pixels). This makes collision detection simpler too. Now you can consider all points within the torch radius: walk in a straight line from the character to each point, and if you reach the point without hitting a wall, make it bright.
(You could do the same with 1-pixel blocks too, but you might hit performance issues.)
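A sketch of that quick-and-dirty idea, assuming the maze is stored as a 2D array of blocks where 1 means wall (all names here are made up):
// True if the straight line from (x0, y0) to (x1, y1), in block coordinates,
// reaches the target block without crossing a wall block.
function isLit(maze, x0, y0, x1, y1) {
  var steps = Math.max(Math.abs(x1 - x0), Math.abs(y1 - y0));
  for (var i = 1; i <= steps; i++) {
    var t = i / steps;
    var x = Math.round(x0 + (x1 - x0) * t);
    var y = Math.round(y0 + (y1 - y0) * t);
    if (maze[y][x] === 1) return false;        // hit a wall on the way
  }
  return true;
}
// For every block within the torch radius, light it only if isLit(...) returns true.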