SVG zoom in on mouse - mathematical model - javascript

Before you think "why is this guy asking for help on this problem, surely this has been implemented 1000x": you are mostly correct, but I have attempted to solve this problem with several open source libs, and yet here I am.
I am attempting to implement an SVG based "zoom in on mouse wheel, focusing on the mouse" from scratch.
I know there are many libraries that accomplish this, d3 and svg-pan-zoom to name a couple. Unfortunately, my implementations using those libs are falling short of my expectations. I was hoping that I could get some help from the community with the underlying mathematical model for this type of UI feature.
Basically, the desired behavior is like Google Maps: a user has their mouse hovering over a location, they scroll the mouse wheel (inward), the scale of the map image increases, and the location being hovered over becomes the horizontal and vertical center of the viewport.
Naturally, I have access to the width / height of the viewport and the x / y of the mouse.
In this example I will focus only on the x axis: the viewport is 900 units wide, the square is 100 units wide, its x offset is 400 units, and the scale is 1:1.
<g transform="translate(0 0) scale(1)">
Assuming the mouse x position is at or near 450 units, if the user wheels in until the scale reaches 2:1, I would expect the x offset to reach -450 units, centering the point of focus like so:
<g transform="translate(-450 0) scale(2)">
The x and y offsets need to be recalculated on each increment of wheel scroll as a function of the current scale / mouse offsets.
All of my attempts have fallen utterly short of the desired behavior; any advice is appreciated.
While I appreciate any help, please refrain from answering with suggestions for 3rd party libraries, jQuery plugins, and things of that nature. My aim here is to understand the mathematical model behind this problem in a general sense; my use of SVG is primarily illustrative.

What I usually do is maintain three variables: offset x, offset y, and scale. They are applied as a transform to a container group, like your element <g transform="translate(0 0) scale(1)">.
If the mouse were over the origin, the new translation would be trivial to calculate: you just multiply the x and y offsets by the ratio of the scales:
offsetX = offsetX * newScale/scale
offsetY = offsetY * newScale/scale
So what you can do is translate the offset so that the mouse is at the origin, then scale, and then translate everything back. Have a look at this TypeScript class, whose scaleRelativeTo method does just that:
export class Point implements Interfaces.IPoint {
    x: number;
    y: number;

    public constructor(x: number, y: number) {
        this.x = x;
        this.y = y;
    }

    add(p: Interfaces.IPoint): Point {
        return new Point(this.x + p.x, this.y + p.y);
    }

    snapTo(gridX: number, gridY: number): Point {
        const x = Math.round(this.x / gridX) * gridX;
        const y = Math.round(this.y / gridY) * gridY;
        return new Point(x, y);
    }

    scale(factor: number): Point {
        return new Point(this.x * factor, this.y * factor);
    }

    scaleRelativeTo(point: Interfaces.IPoint, factor: number): Point {
        return this.subtract(point).scale(factor).add(point);
    }

    subtract(p: Interfaces.IPoint): Point {
        return new Point(this.x - p.x, this.y - p.y);
    }
}
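As a quick sanity check with the numbers from the question (translation (0,0), mouse at x = 450, scale going from 1 to 2), the method reproduces the expected offset; a minimal sketch, assuming the class above is available:
const offset = new Point(0, 0).scaleRelativeTo(new Point(450, 0), 2 / 1);
// offset is (-450, 0), i.e. translate(-450 0) scale(2)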
So if you have a transform given by translate(offsetX,offsetY) scale(scale), and a scroll event takes place at (mouseX, mouseY) leading to a new scale newScale, you calculate the new transform by:
offsetX = (offsetX - mouseX) * newScale/scale + mouseX
offsetY = (offsetY - mouseY) * newScale/scale + mouseY
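A minimal sketch of those two lines inside a wheel handler, assuming a hypothetical svg element and container group g (the names, the 1.1 zoom step, and the use of getBoundingClientRect are illustrative assumptions, not part of the answer above):
let scale = 1, offsetX = 0, offsetY = 0;

svg.addEventListener('wheel', function (evt) {
    evt.preventDefault();
    // mouse position relative to the viewport's top left corner
    const rect = svg.getBoundingClientRect();
    const mouseX = evt.clientX - rect.left;
    const mouseY = evt.clientY - rect.top;
    const newScale = scale * (evt.deltaY < 0 ? 1.1 : 1 / 1.1);
    // keep the point under the mouse fixed while the scale changes
    offsetX = (offsetX - mouseX) * newScale / scale + mouseX;
    offsetY = (offsetY - mouseY) * newScale / scale + mouseY;
    scale = newScale;
    g.setAttribute('transform',
        'translate(' + offsetX + ' ' + offsetY + ') scale(' + scale + ')');
});
As a sanity check with the question's numbers: the mouse at 450 and the scale going from 1 to 2 gives offsetX = (0 - 450) * 2 / 1 + 450 = -450, the expected translate(-450 0) scale(2).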

Related

Calculate coordinates of a point after zooming in JavaScript

I have a requirement to make some annotations on an image. This image is scalable (can be zoomed in and out). Now the challenge is that the annotations should also move with the scaling. How can I achieve this? I understand that the 'direction' of zooming depends on the point considered as the 'centre' when zooming, so assuming that this 'centre' is the absolute centre of the image container (width/2, height/2), how do I get the coordinates of the same point on the image after zooming?
As an example, consider the following two images:
Image-1 (Normal scale):
Image-2 (Zoomed-in):
If I know the coordinates of the red point in Image-1 (which is at normal scale), how do I get the coordinates (x,y) of the same red point in Image-2? Note that the image container's width and height will remain same throughout the zooming process.
This function should return your new X and Y measured from the top left of the image.
Bear in mind that the new coordinates can be outside the width/height of your image, as the point you picked might be "zoomed off the edge".
/**
 * width: integer, width of image in px
 * height: integer, height of image in px
 * x: integer, horizontal distance from left
 * y: integer, vertical distance from top
 * scale: float, scale factor (1.5 = 150%)
 */
const scaleCoordinates = (width, height, x, y, scale) => {
    const centerX = width / 2;
    const centerY = height / 2;
    const relX = x - centerX;
    const relY = y - centerY;
    const scaledX = relX * scale;
    const scaledY = relY * scale;
    return { x: scaledX + centerX, y: scaledY + centerY };
}
console.log(scaleCoordinates(100,100,25,50, 1.2));
First, you'd want to determine the coordinates of the annotation with respect to the center of the image.
So for example on an image of 200 x 100, the point (120,60) with the origin in the left top corner would be (20,-10) when you take the center of the image as your origin.
If you scale the image to 150%, your new coordinates would be those coordinates multiplied by 1.5 (= 150%).
In our example that would be 30, -15.
Then you can convert that back to absolute values, using the original point of origin.

Accounting for Canvas Size Differences when Drawing on Image with Stored Coordinates

I'm struggling to find a method/strategy to handle drawing with stored coordinates and the variation in canvas dimensions across various devices and screen sizes for my web app.
Basically I want to display an image on the canvas. The user will mark two points on an area of the image, and the app records where these markers were placed. The idea is that the user will use the app every other day, see where all the previous points were drawn, and add two new ones in places not already marked by previous markers. The canvas is currently set up with height = window.innerHeight and width = window.innerWidth/2.
My initial thought was recording the coordinates of each pair of points and retrieving them as required so they can be redrawn. But these coordinates don't match up if the canvas changes size, as discovered when I tested the web page on different devices. How can I record the previous coordinates and use them to mark the same area of the image regardless of canvas dimensions?
Use percentages! Example:
So let's say on Device 1 the canvas size is 150x200,
and the user puts a marker at pixel 25x30. You can do some math to get the percentage.
And then you save that percentage, not the location.
Example:
let userX = 25; //where the user placed a marker
let canvasWidth = 150;
//Use a calculator to verify :D
let percent = 100 / (canvasWidth / userX); //16.666%
And now that you have the percent you can set the marker's location based on that percent.
Example:
let markerX = (canvasWidth * percent) / 100; //24.999
canvasWidth = 400; //Lets change the canvas size!
markerX = (canvasWidth * percent) / 100; //66.664;
And voila :D just grab the canvas size and you can determine marker's location every time.
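The same idea wrapped into a pair of helpers, a minimal sketch (the function names are mine, not from the answer above):
// save: convert an absolute canvas coordinate into a percentage
function toPercent(value, canvasSize) {
    return (value / canvasSize) * 100;   // toPercent(25, 150) ≈ 16.666
}
// restore: convert a saved percentage back for the current canvas size
function fromPercent(percent, canvasSize) {
    return (canvasSize * percent) / 100; // fromPercent(16.666, 400) ≈ 66.664
}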
Virtual Canvas
You must define a virtual canvas. This is the ideal canvas with a predefined size; all coordinates are relative to this canvas. The center of the virtual canvas is coordinate (0,0).
When a coordinate is entered it is converted to the virtual coordinates and stored. When rendered they are converted to the device screen coordinates.
Different devices have various aspect ratios; even a single device can be tilted, which changes the aspect. That means the virtual canvas will not fit exactly on all devices. The best you can do is ensure that the whole virtual canvas is visible without stretching it in the x or y direction; this is called scale to fit.
Scale to fit
To render to the device canvas you need to scale the coordinates so that the whole virtual canvas can fit. You use the canvas transform to apply the scaling.
To create the device scale matrix:
const vWidth = 1920; // virtual canvas size
const vHeight = 1080;

function scaleToFitMatrix(dWidth, dHeight) {
    const scale = Math.min(dWidth / vWidth, dHeight / vHeight);
    return [scale, 0, 0, scale, dWidth / 2, dHeight / 2];
}

const scaleMatrix = scaleToFitMatrix(innerWidth, innerHeight);
Scale position not pixels
A point is defined as a position on the virtual canvas. However, the transform will also scale line widths and feature sizes, which you would not want on very low or high resolution devices.
To keep features at a fixed pixel size you use the inverse scale, and reset the transform just before you stroke, as follows (a 4-pixel box centered over the point):
const point  = { x: 0, y: 0 };                      // center of virtual canvas
const point1 = { x: -vWidth / 2, y: -vHeight / 2 }; // top left of virtual canvas
const point2 = { x:  vWidth / 2, y:  vHeight / 2 }; // bottom right of virtual canvas

function drawPoint(ctx, matrix, vX, vY, pW, pH) { // vX, vY: virtual coordinates
    const invScale = 1 / matrix[0]; // to scale to pixel size
    ctx.setTransform(...matrix);
    ctx.lineWidth = 1; // width of line
    ctx.beginPath();
    ctx.rect(vX - pW * 0.5 * invScale, vY - pH * 0.5 * invScale, pW * invScale, pH * invScale);
    ctx.setTransform(1, 0, 0, 1, 0, 0); // reset transform for line width to be correct
    ctx.fill();
    ctx.stroke();
}
const ctx = canvas.getContext("2d");
drawPoint(ctx, scaleMatrix, point.x, point.y, 4, 4);
Transforming via CPU
To convert a point from device coordinates to virtual coordinates you need to apply the inverse matrix to that point. For example, if you get the pageX, pageY coordinates from a mouse event, you convert them using the scale matrix as follows:
function pointToVirtual(matrix, point) {
    point.x = (point.x - matrix[4]) / matrix[0];
    point.y = (point.y - matrix[5]) / matrix[3];
    return point;
}
To convert from virtual to device:
function virtualToPoint(matrix, point) {
    point.x = (point.x * matrix[0]) + matrix[4];
    point.y = (point.y * matrix[3]) + matrix[5];
    return point;
}
Check bounds
There may be an area above/below or left/right of the canvas that is outside the virtual canvas coordinates. To check whether a point is inside the virtual canvas, call the following:
function isInVirtual(vPoint) {
    return !(vPoint.x < -vWidth / 2 ||
             vPoint.y < -vHeight / 2 ||
             vPoint.x >= vWidth / 2 ||
             vPoint.y >= vHeight / 2);
}

const dPoint = { x: page.x, y: page.y }; // coordinate in device coords
if (isInVirtual(pointToVirtual(scaleMatrix, dPoint))) {
    console.log("Point inside");
} else {
    console.log("Point out of bounds.");
}
Extra points
The above assumes that the canvas is aligned to the screen.
Some devices will be zoomed (pinch scaled). You will need to check the device pixel scale for the best results.
It is best to set the virtual canvas size to the max screen resolution you expect.
Always work in virtual coordinates, only convert to device coordinates when you need to render.
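For the pinch-zoom / pixel-scale point above, a minimal sketch of the usual approach, assuming the standard window.devicePixelRatio property (this addition is mine, not part of the answer): size the canvas backing store in device pixels and feed those dimensions into scaleToFitMatrix:
const dpr = window.devicePixelRatio || 1;
canvas.width  = Math.round(innerWidth * dpr);  // backing store in device pixels
canvas.height = Math.round(innerHeight * dpr);
canvas.style.width  = innerWidth + "px";       // CSS size stays in CSS pixels
canvas.style.height = innerHeight + "px";
const scaleMatrix = scaleToFitMatrix(canvas.width, canvas.height);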

How to convert cartesian coordinates to computer screen coordinates [duplicate]

For my game I need functions to translate between two coordinate systems. Well, it's mainly a math question, but what I need is the C++ code to do it and a bit of explanation of how to solve my issue.
Screen coordinates:
a) top left corner is 0,0
b) no negative values
c) right += x (the larger x is, the further right the point)
d) bottom += y
Cartesian 2D coordinates:
a) middle point is (0, 0)
b) negative values do exist
c) right += x
d) bottom -= y (the smaller y is, the further down the point)
I need an easy way to translate from one system to the other and vice versa. To do that, I think I need to know where screen (0, 0) [the top left corner in screen coordinates] is placed in cartesian coordinates.
However, there is a problem: for some points in cartesian coordinates, after translating to screen coordinates the position may be negative, which is nonsense. I can't put the top left corner of screen coordinates at (-infinity, +infinity) cartesian coords...
How can I solve this? The only solution I can think of is to place screen (0, 0) at cartesian (0, 0) and only use quadrant IV of the cartesian system, but in that case using the cartesian system is pointless...
I'm sure there are ways of translating screen coordinates into cartesian coordinates and vice versa, but I'm doing something wrong in my thinking with the negative values.
The basic algorithm to translate from cartesian coordinates to screen coordinates is:
screenX = cartX + screen_width/2
screenY = screen_height/2 - cartY
But as you mentioned, cartesian space is infinite, and your screen space is not. This can be solved easily by changing the resolution between screen space and cartesian space. The above algorithm makes 1 unit in cartesian space = 1 pixel in screen space. If you allow for other ratios, you can "zoom" your screen space out or in to cover all of the cartesian space necessary.
This would change the above algorithm to
screenX = zoom_factor*cartX + screen_width/2
screenY = screen_height/2 - zoom_factor*cartY
Now you handle negative (or overly large) screenX and screenY by modifying your zoom factor until all your cartesian coordinates fit on the screen.
You could also allow for panning of the coordinate space too, meaning, allowing the center of cartesian space to be off-center of the screen. This could also help in allowing your zoom_factor to stay as tight as possible but also fit data which isn't evenly distributed around the origin of cartesian space.
This would change the algorithm to
screenX = zoom_factor*cartX + screen_width/2 + offsetX
screenY = screen_height/2 - zoom_factor*cartY + offsetY
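Put together, the two directions look like this; a minimal sketch in JavaScript (the math is language-independent, and a C++ version is a direct transliteration):
function cartToScreen(cartX, cartY, screenWidth, screenHeight, zoom, offsetX, offsetY) {
    return {
        x: zoom * cartX + screenWidth / 2 + offsetX,
        y: screenHeight / 2 - zoom * cartY + offsetY
    };
}

// exact inverse of cartToScreen, solving each equation for the cartesian term
function screenToCart(screenX, screenY, screenWidth, screenHeight, zoom, offsetX, offsetY) {
    return {
        x: (screenX - screenWidth / 2 - offsetX) / zoom,
        y: (screenHeight / 2 - screenY + offsetY) / zoom
    };
}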
You must know the size of the screen in order to be able to convert.
Convert to Cartesian:
cartesianx = screenx - screenwidth / 2;
cartesiany = -screeny + screenheight / 2;
Convert to Screen:
screenx = cartesianx + screenwidth / 2;
screeny = -cartesiany + screenheight / 2;
For cases where you have a negative screen value:
I would not worry about this; the content will simply be clipped, so the user will not see it. If this is a problem, I would add constraints that prevent the cartesian coordinates from being too large. Another solution, since you can't have the edges be +/- infinity, is to scale your coordinates (e.g. 1 pixel = 10 cartesian units). Let's call this scalefactor. The equations are now:
Convert to Cartesian with scale factor:
cartesianx = scalefactor*screenx - screenwidth / 2;
cartesiany = -scalefactor*screeny + screenheight / 2;
Convert to Screen with scale factor:
screenx = (cartesianx + screenwidth / 2) / scalefactor;
screeny = (-cartesiany + screenheight / 2) / scalefactor;
You need to know the width and height of the screen.
Then you can do:
cartX = screenX - (width / 2);
cartY = -(screenY - (height / 2));
And:
screenX = cartX + (width / 2);
screenY = -cartY + (height / 2);
You will always have the problem that the result could be off the screen -- either as a negative value, or as a value larger than the available screen size.
Sometimes that won't matter: e.g., if your graphical API accepts negative values and clips your drawing for you. Sometimes it will matter, and for those cases you should have a function that checks if a set of screen coordinates is on the screen.
You could also write your own clipping functions that try to do something reasonable with coordinates that fall off the screen (such as truncating negative screen coordinates to 0, and coordinates that are too large to the maximum onscreen coordinate). However, keep in mind that "reasonable" depends on what you're trying to do, so it might be best to hold off on defining such functions until you actually need them.
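Such a check is one comparison per edge; a minimal sketch (shown in JavaScript for consistency with the rest of the page, the C++ is identical apart from types):
function isOnScreen(screenX, screenY, screenWidth, screenHeight) {
    return screenX >= 0 && screenX < screenWidth &&
           screenY >= 0 && screenY < screenHeight;
}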
In any case, as other answers have noted, you can convert between the coordinate systems as:
cart.x = screen.x - width/2;
cart.y = height/2 - screen.y;
and
screen.x = cart.x + width/2;
screen.y = height/2 - cart.y;
I've got some Boost C++ for you, based on a Microsoft article:
https://msdn.microsoft.com/en-us/library/jj635757(v=vs.85).aspx
You just need to know two screen points and two points in your coordinate system. Then you can convert a point from one system to the other.
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/vector_proxy.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/triangular.hpp>
#include <boost/numeric/ublas/lu.hpp>
#include <boost/numeric/ublas/io.hpp>

/* Matrix inversion routine.
   Uses lu_factorize and lu_substitute in uBLAS to invert a matrix */
template<class T>
bool InvertMatrix(const boost::numeric::ublas::matrix<T>& input, boost::numeric::ublas::matrix<T>& inverse)
{
    typedef boost::numeric::ublas::permutation_matrix<std::size_t> pmatrix;
    // create a working copy of the input
    boost::numeric::ublas::matrix<T> A(input);
    // create a permutation matrix for the LU-factorization
    pmatrix pm(A.size1());
    // perform LU-factorization
    int res = lu_factorize(A, pm);
    if (res != 0)
        return false;
    // create identity matrix of "inverse"
    inverse.assign(boost::numeric::ublas::identity_matrix<T>(A.size1()));
    // backsubstitute to get the inverse
    lu_substitute(A, pm, inverse);
    return true;
}

PointF ConvertCoordinates(PointF pt_in,
    PointF pt1, PointF pt2, PointF pt1_, PointF pt2_)
{
    float matrix1[] = {
        pt1.X,  pt1.Y, 1.0f, 0.0f,
        -pt1.Y, pt1.X, 0.0f, 1.0f,
        pt2.X,  pt2.Y, 1.0f, 0.0f,
        -pt2.Y, pt2.X, 0.0f, 1.0f
    };
    boost::numeric::ublas::matrix<float> M(4, 4);
    CopyMemory(&M.data()[0], matrix1, sizeof(matrix1));
    boost::numeric::ublas::matrix<float> M_1(4, 4);
    InvertMatrix<float>(M, M_1);
    boost::numeric::ublas::vector<float> u(4);
    boost::numeric::ublas::vector<float> u1(4);
    u(0) = pt1_.X;
    u(1) = pt1_.Y;
    u(2) = pt2_.X;
    u(3) = pt2_.Y;
    u1 = boost::numeric::ublas::prod(M_1, u);
    PointF pt;
    pt.X = u1(0) * pt_in.X + u1(1) * pt_in.Y + u1(2);
    pt.Y = u1(1) * pt_in.X - u1(0) * pt_in.Y + u1(3);
    return pt;
}

HTML5 canvas get coordinates after zoom and translate

BACKGROUND: I have an HTML5 canvas and I have an image drawn on it. Now when the image is first loaded, it is loaded at a scale of 100%. The image is 5000 x 5000. And the canvas size is 600 x 600. So onload, I only see the first 600 x-pixels and 600 y-pixels. I have the option of scaling and translating the image on the canvas.
MY ISSUE: I am trying to figure out an algorithm that returns the pixel coordinates of a mouse click relative to the image, not the canvas, while taking into account scaling and translating.
I know there are a lot of topics on this already, but nothing I've seen has worked. My issue is with multiple translations and scalings: I can zoom once and get the correct coordinates, and I can then scale and get the right coordinates again, but once I zoom or scale more than once, the coordinates are off.
Here is what I have so far.
//get pixel coordinates from canvas mousePos.x, mousePos.y
(mousePos.x - x_translation)/scale //same for mousePos.y
annotationCanvas.addEventListener('mouseup', function (evt) {
    dragStart = null;
    if (!dragged) {
        var mousePos = getMousePos(canvas, evt);
        var message1 = " mouse x: " + (mousePos.x) + ' ' + "mouse y: " + (mousePos.y);
        var message = " x: " + ((mousePos.x + accX) / currentZoom * currentZoom) + ' ' + "y: " + ((mousePos.y + accY) / currentZoom);
        console.log(message);
        console.log(message1);
        console.log("zoomAcc = " + zoomAcc);
        console.log("currentZoom = " + currentZoom);
        ctx.fillStyle = "#FF0000";
        ctx.fillRect((mousePos.x + accX) / currentZoom, (mousePos.y + accY) / currentZoom, -5, -5);
    }
}, true);
//accX and accY are the cumulative shifts for x and y respectively, and xShift and yShift are the incremental shifts of x and y respectively
where currentZoom is the accumulated zoom and zoomAcc is the zoom for a single iteration at that point. So in this case, when I zoom in, zoomAcc is always 1.1, and currentZoom = currentZoom*zoomAcc.
Why is this wrong? If someone can show me how to track these transformations and then apply them to mousePos.x and mousePos.y, I would be grateful.
Thanks
UPDATE:
In the image, the green dot is where I clicked, and the red dot is where my calculation of that point lands using markE's method. The m values are the matrix values from markE's method.
When you command the context to translate and scale, these are known as canvas transformations.
Canvas transformations are based on a matrix that can be represented by 6 array elements:
// an array representing the canvas affine transformation matrix
var matrix=[1,0,0,1,0,0];
If you do context.translate or context.scale and also simultaneously update the matrix, then you can use the matrix to convert untransformed X/Y coordinates (like mouse events) into transformed image coordinates.
context.translate:
You can simultaneously do context.translate(x,y) and track that translation in the matrix like this:
// do the translate
// but also save the translate in the matrix
function translate(x, y) {
    matrix[4] += matrix[0] * x + matrix[2] * y;
    matrix[5] += matrix[1] * x + matrix[3] * y;
    ctx.translate(x, y);
}
context.scale:
You can simultaneously do context.scale(x,y) and track that scaling in the matrix like this:
// do the scale
// but also save the scale in the matrix
function scale(x, y) {
    matrix[0] *= x;
    matrix[1] *= x;
    matrix[2] *= y;
    matrix[3] *= y;
    ctx.scale(x, y);
}
Converting mouse coordinates to transformed image coordinates
The problem is that the browser is unaware you have transformed your canvas coordinate system, so it will return mouse coordinates relative to the browser window, not relative to the transformed canvas.
Fortunately the transformation matrix has been tracking all your accumulated translations and scalings.
You can convert the browser’s window coordinates to transformed coordinates like this:
// convert mouseX/mouseY coordinates
// into transformed coordinates
function getXY(mouseX, mouseY) {
    var newX = mouseX * matrix[0] + mouseY * matrix[2] + matrix[4];
    var newY = mouseX * matrix[1] + mouseY * matrix[3] + matrix[5];
    return { x: newX, y: newY };
}
There's a DOMMatrix object that will apply transformations to coordinates. I calculated coordinates for translated and rotated shapes by putting my x and y coordinates into a DOMPoint and using a method of the DOMMatrix returned by CanvasRenderingContext2D.getTransform. This allowed a click handler to figure out which shape on the canvas was being clicked. This code apparently performs the calculation in markE's answer:
const oldX = 1, oldY = 1; // your values here
const transform = context.getTransform();
// Destructure to get the x and y values out of the transformed DOMPoint.
const { x, y } = transform.transformPoint(new DOMPoint(oldX, oldY));
DOMMatrix also has methods for translating and scaling and other operations, so you don't need to manually write those out anymore. MDN doesn't fully document them but does link to a page with the specification of non-mutating and mutating methods.
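Note that transformPoint as used above maps coordinates through the current transform; if you want the opposite direction, e.g. mapping a canvas-relative mouse click back into the transformed drawing space, DOMMatrix can also invert the transform. A minimal sketch (mouseX/mouseY are assumed to be canvas-relative coordinates):
const inverse = context.getTransform().inverse();
const p = inverse.transformPoint(new DOMPoint(mouseX, mouseY));
// p.x and p.y are now expressed in the transformed (drawing) coordinate space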

Simulate a physical 3d ball throw on a 2d js canvas from mouse click into the scene

I'd like to throw a ball (with an image) into a 2d scene and check it for a collision when it reaches some distance. But I can't make it "fly" correctly. It seems like this has been asked a million times, but the more I find, the more confused I get.
Now I followed this answer, but it seems like the ball behaves very differently than I expect. In fact, it's moving to the top left of my canvas and becoming too small way too fast. Of course I could adjust this by setting vz to 0.01 or similar, but then I don't see a ball at all...
This is my object (simplified) / link to the full source for anyone interested. The important parts are update() and render():
var ball = function (x, y) {
    this.x = x;
    this.y = y;
    this.z = 0;
    this.r = 0;
    this.src = 'img/ball.png';
    this.gravity = -0.097;
    this.scaleX = 1;
    this.scaleY = 1;
    this.vx = 0;
    this.vy = 3.0;
    this.vz = 5.0;
    this.isLoaded = false;

    // update is called inside the window.requestAnimationFrame game loop
    this.update = function () {
        if (this.isLoaded) {
            // ball should fly 'into' the scene
            this.x += this.vx;
            this.y += this.vy;
            this.z += this.vz;
            // do more stuff like removing it when it hits the ground or check for collision
            //this.r += ?
            this.vz += this.gravity;
        }
    };

    // render is called inside the window.requestAnimationFrame game loop after this.update()
    this.render = function () {
        if (this.isLoaded) {
            var x = this.x / this.z;
            var y = this.y / this.z;
            this.scaleX = this.scaleX / this.z;
            this.scaleY = this.scaleY / this.z;
            var width = this.img.width * this.scaleX;
            var height = this.img.height * this.scaleY;
            canvasContext.drawImage(this.img, x, y, width, height);
        }
    };

    // load image
    var self = this;
    this.img = new Image();
    this.img.onload = function () { // note: 'onload', lowercase -- 'onLoad' never fires
        self.isLoaded = true;
        // update offset to spawn the ball in the middle of the click
        self.x = this.width / 2;
        self.y = this.height / 2;
        // set radius for collision detection because the ball is round
        self.r = this.width / 2; // was 'this.x', which is undefined on an Image
    };
    this.img.src = this.src;
}
I'm also wondering which velocity parameters are appropriate when rendering the canvas at ~60 fps using requestAnimationFrame, to get a "natural" flying animation.
I'd appreciate it very much if anyone could point me in the right direction (pseudocode explaining the logic is fine, of course).
Thanks
I think the best way is to simulate the situation first in the metric system.
speed = 30;            // 30 meters per second or 108 km/hour -- quite fast ...
angle = 30 * pi / 180; // 30 degree angle, converted to radians
speed_x = speed * cos(angle);
speed_y = speed * sin(angle); // now you have the initial direction vector

x_coord = 0;
y_coord = 0; // assuming quadrant 1 of the traditional cartesian coordinate system
time_step = 1.0 / 60.0; // every frame...

// at most 100 meters and while not below ground
// (note: >= 0, otherwise the loop would never start from y_coord = 0)
while (y_coord >= 0 && x_coord < 100) {
    x_coord += speed_x * time_step;
    y_coord += speed_y * time_step;
    speed_y -= 9.81 * time_step; // in one second the speed changes by 9.81 m/s

    // Final stage: ball shape, mass and viscosity of air cause a counter force
    // that is proportional to the speed of the object. This is a funny part:
    // just multiply each speed component separately by a factor (< 1.0).
    // (You can calculate the actual factor by noticing that there is a limit for speed,
    // speed == (speed - 9.81 * time_step) * 0.99, called _terminal velocity_.
    // If you know or guesstimate that, you don't need to remember _rho_,
    // projected area or any other terms for the counter force.)
    speed_x *= 0.99;
    speed_y *= 0.99;
}
Now you'll have a time / position series, which starts at (0,0) (you can calculate this with Excel or OpenOffice Calc):
speed_x        speed_y         position_x      position_y      time
25.9807687475   14.9999885096    0               0                0
25.72096106     14.6881236245    0.4286826843    0.2448020604     1/60
25.4637514494   14.3793773883    0.8530785418    0.4844583502     2/60
25.2091139349   14.0737186144    1.2732304407    0.7190203271     3/60
...
 5.9296028059   -9.0687933774   33.0844238036    0.0565651137   147/60
 5.8703067779   -9.1399704437   33.1822622499   -0.0957677271   148/60
From that sheet one can first estimate the distance and time at which the ball hits the ground:
they are 33.08 meters and 2.45 seconds (or 148 frames). By continuing the simulation in Excel, one also notices that the terminal velocity will be ~58 km/h, which is not much.
Deciding that a terminal velocity of 60 m/s (216 km/h) is more suitable, the correct decay factor would be 0.9972824054451614 (solving the terminal-velocity condition above for the factor: 60 / (60 + 9.81 * time_step) = 60 / (60 + 9.81/60)).
Now the only remaining task is to decide how wide (in meters) the screen will be and multiply pos_x, pos_y by the corresponding scaling factor. If a screen of 1024 pixels were 32 meters, each pixel would correspond to 3.125 centimeters. Depending on the application, one may wish to "improve" on reality and make the ball much larger.
EDIT: Another thing is how to project this into 3D. I suggest you make the path generated by the former algorithm (or Excel) a visible object (consisting of line segments), which you will be able to rotate & translate.
The origin of the bad behaviour you're seeing is the projection you use: it is centered on (0,0), and more generally too simple to look nice.
You need a more complete projection with a center, a scale, etc.
I use this one for adding a little 3D:
projectOnScreen: function (wx, wy, wz) {
    var screenX = ... real X size of your canvas here ... ;
    var screenY = ... real Y size of your canvas here ... ;
    var scale = ... the scale you use between world / screen coordinates ... ;
    var ZOffset = 3000; // the bigger, the less effect z has
    var k = ZOffset;    // coefficient so that projected point = point for z=0
    var zScale = 2.0;   // the bigger, the more effect a change in Z has
    var worldCenterX = screenX / (2 * scale);
    var worldCenterY = screenY / (2 * scale);
    var sizeAt = ig.system.scale * k / (ZOffset + zScale * wz);
    return {
        x: screenX / 2 + sizeAt * (wx - worldCenterX),
        y: screenY / 2 + sizeAt * (wy - worldCenterY),
        sizeAt: sizeAt
    };
}
Obviously you can optimize depending on your game. For instance, if resolution and scale don't change, you can compute some parameters once, outside that function.
sizeAt is the zoom factor (canvas scale) you will have to apply to your images.
Edit: for your update/render code, as pointed out in Aki Suihkonen's post, you need to use a 'dt', the time between two updates. That way, if you later change the frames per second (fps), or if you have a temporary slowdown in the game, the dt changes and everything still behaves the same.
The equations become x += vx*dt / ... / vx += gravity*dt.
You should have the speed and gravity computed relative to screen height, to get the same behaviour whatever the screen size.
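A minimal sketch of such a dt-based loop (names are illustrative; the 0.99 per-frame decay from Aki Suihkonen's answer is converted to a frame-rate-independent form with Math.pow):
let last = performance.now();

function loop(now) {
    const dt = (now - last) / 1000; // seconds elapsed since the previous frame
    last = now;
    ball.x += ball.vx * dt;
    ball.y += ball.vy * dt; // cartesian-style y (up is positive), as in the simulation above
    ball.vy -= 9.81 * dt;   // gravity, in units per second squared
    const decay = Math.pow(0.99, dt * 60); // air resistance, equal to 0.99 per frame at 60 fps
    ball.vx *= decay;
    ball.vy *= decay;
    requestAnimationFrame(loop);
}
requestAnimationFrame(loop);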
I would also use a negative z to start with, to have a bigger ball first.
Also, I would separate concerns:
- Handle loading of the image separately. Your game should start after all necessary assets are loaded. Some free and tiny frameworks can do a lot for you; just one example: crafty.js, but there are a lot of good ones.
- Adjustment relative to the click position and the image size should be done in the render, and x, y are just the mouse coordinates.
var currWidth = this.width *scaleAt, currHeight= this.height*scaleAt;
canvasContext.drawImage(this.img, x-currWidth/2, y-currHeight/2, currWidth, currHeight);
Or you can have the canvas do the scaling. The bonus is that you can easily rotate this way:
ctx.save();
ctx.translate(x, y);
ctx.scale(scaleAt, scaleAt); // or scaleAt * worldToScreenScale if you have
                             // a scaling factor
// ctx.rotate(someAngle); // if you want...
ctx.drawImage(this.img, -this.width / 2, -this.height / 2); // centered on (x, y) after the translate
ctx.restore();
