How does a camera convert from clip space into screen space? - javascript

I want to render a bunch of 3D points onto a 2D canvas without WebGL.
I thought clip space and screen space were the same thing, and that the camera is what converts from 3D world space to 2D screen space, but apparently they are not.
So in WebGL, when you set gl_Position, it is in clip space; later this position is converted to screen space by WebGL, and gl_FragCoord is set. How is this calculation done, and where?
And the camera matrix and view projection matrix have nothing to do with converting clip space to screen space, right? If I have a 3D world space that fits into clip space, I wouldn't need to use a camera, right?
If all my assumptions are true, I need to learn how to convert from clip space into screen space.
Here's my code:
const uMatrix = mvpMatrix(modelMatrix(transform));

// transform each vertex into 2d screen space
vertices = vertices.map(vertex => {
    let res = mat4.multiplyVector(uMatrix, [...vertex, 1.0]);
    // res is a vec4, in clip space,
    // how to transform this into screen space?
    return [res[0], res[1]];
});
// viewProjectionMatrix calculation
const mvpMatrix = modelMatrix => {
    const { pos: camPos, target, up } = camera;
    const { fov, aspect, near, far } = camera;

    let camMatrix = mat4.lookAt(camPos, target, up);
    let viewMatrix = mat4.inverse(camMatrix);
    let projectionMatrix = mat4.perspective(fov, aspect, near, far);
    let viewProjectionMatrix = mat4.multiply(projectionMatrix, viewMatrix);

    return mat4.multiply(viewProjectionMatrix, modelMatrix);
};
The camera mentioned in this article seems to transform clip space to screen space; if so, it shouldn't be called a camera, right?

First the geometry is clipped, according to the clip space coordinate (gl_Position). The clip space coordinate is a homogeneous coordinate. The condition for a homogeneous coordinate to be inside the clip volume is:
-w <= x, y, z <= w
The clip space coordinate is transformed to a Cartesian coordinate in normalized device space by the perspective divide:
ndc_position = gl_Position.xyz / gl_Position.w
The normalized device space is a cube, with a left, bottom, front corner of (-1, -1, -1) and a right, top, back corner of (1, 1, 1).
The x and y components of the normalized device space coordinate are linearly mapped to the viewport, which is set by gl.viewport (see WebGL Viewport). The viewport is a rectangle with an origin (x, y), a width and a height:
xw = (ndc_position.x + 1) * (width / 2) + x
yw = (ndc_position.y + 1) * (height / 2 ) + y
xw and yw can be accessed by gl_FragCoord.xy in the fragment shader.
The z component of the normalized device space coordinate is linearly mapped to the depth range, which is [0.0, 1.0] by default but can be set by gl.depthRange (see Viewport Depth Range). The depth range consists of a near value and a far value. far has to be greater than near, and both values have to be in [0.0, 1.0]:
depth = (ndc_position.z + 1) * (far-near) / 2 + near
The depth can be accessed by gl_FragCoord.z in the fragment shader.
All these operations are performed automatically by the rendering pipeline and are part of the Vertex Post-Processing stage.
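For the use case in the question, projecting points onto a 2D canvas without WebGL, those two steps can be reproduced in plain JavaScript. The following is a minimal sketch: clip is the vec4 you would otherwise write to gl_Position, the viewport argument plays the role of gl.viewport and the depthRange argument the role of gl.depthRange; the parameter names and default values are only assumptions for the sketch.
// Sketch: perspective divide + viewport transform, done manually.
function clipToScreen(clip,
                      viewport = { x: 0, y: 0, width: 640, height: 480 },
                      depthRange = { near: 0.0, far: 1.0 }) {
    const [x, y, z, w] = clip;

    // Perspective divide: clip space -> normalized device coordinates
    const ndc = [x / w, y / w, z / w];

    // Viewport transform: NDC [-1, 1] -> window coordinates
    const xw = (ndc[0] + 1) * (viewport.width / 2) + viewport.x;
    const yw = (ndc[1] + 1) * (viewport.height / 2) + viewport.y;

    // Depth range mapping: NDC z in [-1, 1] -> [near, far]
    const depth = (ndc[2] + 1) * (depthRange.far - depthRange.near) / 2 + depthRange.near;

    return { x: xw, y: yw, depth };
}
For a 2D canvas you would then draw at (x, canvas.height - y), since the canvas origin is the top-left corner rather than the bottom-left one.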

Related

Accounting for Canvas Size Differences when Drawing on Image with Stored Coordinates

I'm struggling to find a method/strategy to handle drawing with stored coordinates and the variation in canvas dimensions across various devices and screen sizes for my web app.
Basically I want to display an image on the canvas. The user will mark two points on an area of the image, and the app records where these markers were placed. The idea is that the user will use the app every odd day, be able to see where the previous points were drawn, and be able to add two new ones to the area mentioned, in places not already marked by previous markers. The canvas is currently set up with height = window.innerHeight and width = window.innerWidth/2.
My initial thought was recording the coordinates of each pair of points and retrieving them as required so they can be redrawn. But these coordinates don't match up if the canvas changes size, as discovered when I tested the web page on different devices. How can I record the previous coordinates and use them to mark the same area of the image regardless of canvas dimensions?
Use percentages! Example:
Let's say on Device 1 the canvas size is 150x200,
and the user puts a marker on pixel 25x30. You can do some math to get the percentage,
and then you SAVE that percentage, not the pixel location.
Example:
let userX = 25; //where the user placed a marker
let canvasWidth = 150;
//Use a calculator to verify :D
let percent = 100 / (canvasWidth / userX); //16.666%
And now that you have the percent you can set the marker's location based on that percent.
Example:
let markerX = (canvasWidth * percent) / 100; //24.999
canvasWidth = 400; //Lets change the canvas size!
markerX = (canvasWidth * percent) / 100; //66.664;
And voila :D just grab the canvas size and you can determine the marker's location every time.
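The same idea as a pair of small reusable helpers (just a sketch; the function names and the marker object shape are assumptions):
// Store markers as fractions of the canvas size, not as pixels.
function toNormalized(px, py, canvas) {
    return { x: px / canvas.width, y: py / canvas.height };
}

// Convert a stored fraction back to pixels for the current canvas size.
function toPixels(marker, canvas) {
    return { x: marker.x * canvas.width, y: marker.y * canvas.height };
}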
Virtual Canvas
You must define a virtual canvas. This is an ideal canvas with a predefined size; all coordinates are relative to this canvas, and its center is the coordinate 0,0.
When a coordinate is entered it is converted to virtual coordinates and stored. When rendered, it is converted back to device screen coordinates.
Different devices have various aspect ratios, and even a single device can be tilted, which changes the aspect. That means the virtual canvas will not fit exactly on all devices. The best you can do is ensure that the whole virtual canvas is visible without stretching it in the x or y direction. This is called scale to fit.
Scale to fit
To render to the device canvas you need to scale the coordinates so that the whole virtual canvas can fit. You use the canvas transform to apply the scaling.
To create the device scale matrix
const vWidth = 1920;  // virtual canvas size
const vHeight = 1080;

function scaleToFitMatrix(dWidth, dHeight) {
    const scale = Math.min(dWidth / vWidth, dHeight / vHeight);
    return [scale, 0, 0, scale, dWidth / 2, dHeight / 2];
}

const scaleMatrix = scaleToFitMatrix(innerWidth, innerHeight);
Scale position not pixels
A point is defined as a position on the virtual canvas. However, the transform will also scale line widths and feature sizes, which you would not want on very low or high resolution devices.
To keep features at the same pixel size, but still position them in virtual coordinates, you use the inverse scale and reset the transform just before you stroke, as follows (a 4 pixel box centered over the point):
const point = { x: 0, y: 0 };                        // center of virtual canvas
const point1 = { x: -vWidth / 2, y: -vHeight / 2 };  // top left of virtual canvas
const point2 = { x: vWidth / 2, y: vHeight / 2 };    // bottom right of virtual canvas

function drawPoint(ctx, matrix, vX, vY, pW, pH) {  // vX, vY virtual coordinate
    const invScale = 1 / matrix[0];  // to scale to pixel size
    ctx.setTransform(...matrix);
    ctx.lineWidth = 1;  // width of line
    ctx.beginPath();
    ctx.rect(vX - pW * 0.5 * invScale, vY - pH * 0.5 * invScale, pW * invScale, pH * invScale);
    ctx.setTransform(1, 0, 0, 1, 0, 0);  // reset transform for line width to be correct
    ctx.fill();
    ctx.stroke();
}

const ctx = canvas.getContext("2d");
drawPoint(ctx, scaleMatrix, point.x, point.y, 4, 4);
Transforming via CPU
To convert a point from device coordinates to virtual coordinates you need to apply the inverse matrix to that point. For example, when you get the pageX, pageY coordinates from a mouse event, you convert them using the scale matrix as follows:
function pointToVirtual(matrix, point) {
    point.x = (point.x - matrix[4]) / matrix[0];
    point.y = (point.y - matrix[5]) / matrix[3];
    return point;
}
To convert from virtual to device
function virtualToPoint(matrix, point) {
    point.x = (point.x * matrix[0]) + matrix[4];
    point.y = (point.y * matrix[3]) + matrix[5];
    return point;
}
Check bounds
There may be an area above/below or left/right of the canvas that is outside the virtual canvas coordinates. To check if inside the virtual canvas call the following
function isInVirtual(vPoint) {
    return !(vPoint.x < -vWidth / 2 ||
             vPoint.y < -vHeight / 2 ||
             vPoint.x >= vWidth / 2 ||
             vPoint.y >= vHeight / 2);
}

const dPoint = { x: page.x, y: page.y };  // coordinate in device coords
if (isInVirtual(pointToVirtual(scaleMatrix, dPoint))) {
    console.log("Point inside");
} else {
    console.log("Point out of bounds.");
}
Extra points
The above assumes that the canvas is aligned to the screen.
Some devices will be zoomed (pinch scaled). You will need to check the device pixel scale for the best results (see the sketch below).
It is best to set the virtual canvas size to the max screen resolution you expect.
Always work in virtual coordinates, only convert to device coordinates when you need to render.
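As a rough sketch of the device pixel scale point, assuming the canvas backing store is sized from window.devicePixelRatio and reusing scaleToFitMatrix from above (how your page manages the canvas element is an assumption):
// Size the canvas backing store for the device pixel scale, then
// recompute the scale-to-fit matrix for the new pixel dimensions.
function resizeForDevice(canvas) {
    const dpr = window.devicePixelRatio || 1;
    canvas.width = Math.round(canvas.clientWidth * dpr);
    canvas.height = Math.round(canvas.clientHeight * dpr);
    return scaleToFitMatrix(canvas.width, canvas.height);
}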

How to calculate FOV from VRFrameData?

There used to be field of view information in the VREyeParameters, but that was deprecated. So now I am wondering: is it possible to calculate it using the view/projection matrices provided by VRFrameData?
The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. It transforms from view space to clip space. Clip space coordinates are homogeneous coordinates. The coordinates in clip space are transformed to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates.
At a perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport. The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
If you want to know the corners of the camera frustum in view space, then you have to transform the corners of the normalized device space (-1, -1, -1), ..., (1, 1, 1) by the inverse projection matrix. To get Cartesian coordinates, the X, Y and Z components of the result have to be divided by the W (4th) component of the result.
glMatrix is a library which provides matrix operations and data types such as mat4 and vec4:
projection = mat4.clone( VRFrameData.leftProjectionMatrix );
inverse_prj = mat4.create();
mat4.invert( inverse_prj, projection );
pt_ndc = [-1, -1, -1];
v4_ndc = vec4.fromValues( pt_ndc[0], pt_ndc[1], pt_ndc[2], 1 );
v4_view = vec4.create();
vec4.transformMat4( v4_view, v4_ndc, inverse_prj );
pt_view = [v4_view[0]/v4_view[3], v4_view[1]/v4_view[3], v4_view[2]/v4_view[3]];
The transformation from view coordinates to world coordinates can be done with the inverse view matrix.
view = mat4.clone( VRFrameData.leftViewMatrix );
inverse_view = mat4.create();
mat4.invert( inverse_view, view );
v3_view = vec3.clone( pt_view );
v3_world = vec3.create();
vec3.transformMat4( v3_world, v3_view, inverse_view );
Note, the left and right projection matrix are not symmetric. This means the line of sight is not in the center of the frustum and they are different for the left and the right eye.
Further note, a perspective projection matrix looks like this:
r = right, l = left, b = bottom, t = top, n = near, f = far
2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)   -1
0              0              -2*f*n/(f-n)    0
where :
a = w / h
ta = tan( fov_y / 2 );
2 * n / (r-l) = 1 / (ta * a)
2 * n / (t-b) = 1 / ta
If the projection is symmetric, where the line of sight is in the center of the view port and the field of view is not displaced, then the matrix can be simplified:
1/(ta*a)   0      0               0
0          1/ta   0               0
0          0      -(f+n)/(f-n)   -1
0          0      -2*f*n/(f-n)    0
This means the field of view angle can be calculated by:
fov_y = Math.atan( 1/prjMat[5] ) * 2; // prjMat[5] is prjMat[1][1]
and the aspect ratio by:
aspect = prjMat[5] / prjMat[0];
The calculation of the field of view angle also works if the projection matrix is only symmetric about the horizontal axis, that is, if -bottom is equal to top. For the projection matrices of the two eyes this should be the case.
Furthermore:
z_ndc = 2.0 * depth - 1.0;
z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));
by substituting the fields of the projection matrix this is:
A = prj_mat[2][2]
B = prj_mat[3][2]
z_eye = B / (A + z_ndc)
This means the distance to the near plane and to the far plane can be calculated by:
A = prj_mat[10]; // prj_mat[10] is prj_mat[2][2]
B = prj_mat[14]; // prj_mat[14] is prj_mat[3][2]
near = - B / (A - 1);
far = - B / (A + 1);
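Putting those formulas together, here is a small sketch that recovers all four values from a 16-element, column-major projection matrix (the glMatrix layout is assumed; fovY and aspect are only meaningful if the projection is symmetric):
// Sketch: recover fov, aspect, near and far from a column-major
// perspective matrix (glMatrix layout).
function decomposeProjection(prjMat) {
    const fovY   = Math.atan(1 / prjMat[5]) * 2;  // prjMat[5] is m[1][1]
    const aspect = prjMat[5] / prjMat[0];         // m[1][1] / m[0][0]
    const A = prjMat[10];                         // m[2][2]
    const B = prjMat[14];                         // m[3][2]
    const near = -B / (A - 1);
    const far  = -B / (A + 1);
    return { fovY, aspect, near, far };
}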
SOHCAHTOA pronounced "So", "cah", "toe-ah"
SOH -> Sine(angle) = Opposite over Hypotenuse
CAH -> Cosine(angle) = Adjacent over Hypotenuse
TOA -> Tangent(angle) = Opposite over Adjacent
Tells us the relationships of the various sides of a right triangle to various trigonometry functions
So looking at a frustum image we can take the right triangle from the eye to the near plane to the top of the frustum to compute the tangent of the field of view and we can use the arc tangent to turn a tangent back into an angle.
Since we know the projection matrix takes our world space frustum and converts it to clip space, and ultimately to normalized device space (-1, -1, -1) to (+1, +1, +1), we can get the positions we need by multiplying the corresponding points in NDC space by the inverse of the projection matrix.
eye = 0,0,0
centerAtNearPlane = inverseProjectionMatrix * (0,0,-1)
topCenterAtNearPlane = inverseProjectionMatrix * (0, 1, -1)
Then
opposite = topCenterAtNearPlane.y
adjacent = -centerAtNearPlane.z
halfFieldOfView = Math.atan2(opposite, adjacent)
fieldOfView = halfFieldOfView * 2
Let's test
const m4 = twgl.m4;
const fovValueElem = document.querySelector("#fovValue");
const resultElem = document.querySelector("#result");
let fov = degToRad(45);

function updateFOV() {
    fovValueElem.textContent = radToDeg(fov).toFixed(1);

    // get a projection matrix from somewhere (like VR)
    const projection = getProjectionMatrix();

    // now that we have projection matrix recompute the FOV from it
    const inverseProjection = m4.inverse(projection);
    const centerAtZNear = m4.transformPoint(inverseProjection, [0, 0, -1]);
    const topCenterAtZNear = m4.transformPoint(inverseProjection, [0, 1, -1]);

    const opposite = topCenterAtZNear[1];
    const adjacent = -centerAtZNear[2];
    const halfFieldOfView = Math.atan2(opposite, adjacent);
    const fieldOfView = halfFieldOfView * 2;

    resultElem.textContent = radToDeg(fieldOfView).toFixed(1);
}

updateFOV();

function getProjectionMatrix() {
    // doesn't matter. We just want a projection matrix as though
    // someone else made it for us.
    const aspect = 2 / 1;
    // choose some zNear and zFar
    const zNear = .5;
    const zFar = 100;
    return m4.perspective(fov, aspect, zNear, zFar);
}

function radToDeg(rad) {
    return rad / Math.PI * 180;
}

function degToRad(deg) {
    return deg / 180 * Math.PI;
}

document.querySelector("input").addEventListener('input', (e) => {
    fov = degToRad(parseInt(e.target.value));
    updateFOV();
});
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<input id="fov" type="range" min="1" max="179" value="45"><label>fov: <span id="fovValue"></span></label>
<div>computed fov: <span id="result"></span></div>
Note this assumes the center of the frustum is directly in front of the eye. If it's not then you'd probably have to compute adjacent by computing the length of the vector from the eye to centerAtZNear
const v3 = twgl.v3;
...
const adjacent = v3.length(centerAtZNear);

How to convert cartesian coordinates to computer screen coordinates [duplicate]

For my game I need functions to translate between two coordinate systems. Well it's mainly math question but what I need is the C++ code to do it and a bit of explanation how to solve my issue.
Screen coordinates:
a) top left corner is 0,0
b) no negative values
c) right += x (the larger the x value, the further right the point)
d) bottom += y
Cartesian 2D coordinates:
a) middle point is (0, 0)
b) negative values do exist
c) right += x
d) bottom -= y (the smaller the y value, the further down the point)
I need an easy way to translate from one system to the other and vice versa. To do that, (I think) I need some knowledge like where the (0, 0) of screen coordinates [its top left corner] is placed in the cartesian coordinates.
However, there is a problem: for some point in cartesian coordinates, after translating it to screen ones, the screen position may be negative, which is nonsense. I can't put the top left corner of screen coordinates at (-infinity, +infinity) cartesian coords...
How can I solve this? The only solution I can think of is to place the screen (0, 0) at cartesian (0, 0) and only use quadrant IV of the cartesian system, but in that case using the cartesian system is pointless...
I'm sure there are ways to translate screen coordinates into cartesian coordinates and vice versa, but I'm doing something wrong in my thinking with those negative values.
The basic algorithm to translate from cartesian coordinates to screen coordinates is:
screenX = cartX + screen_width/2
screenY = screen_height/2 - cartY
But as you mentioned, cartesian space is infinite and your screen space is not. This can be solved easily by introducing a scale between screen space and cartesian space. The above algorithm maps 1 unit in cartesian space to 1 pixel in screen space. If you allow for other ratios, you can "zoom" your screen space out or in to cover as much of the cartesian space as necessary.
This would change the above algorithm to
screenX = zoom_factor*cartX + screen_width/2
screenY = screen_height/2 - zoom_factor*cartY
Now you handle negative (or overly large) screenX and screenY by modifying your zoom factor until all your cartesian coordinates will fit on the screen.
You could also allow for panning of the coordinate space too, meaning, allowing the center of cartesian space to be off-center of the screen. This could also help in allowing your zoom_factor to stay as tight as possible but also fit data which isn't evenly distributed around the origin of cartesian space.
This would change the algorithm to
screenX = zoom_factor*cartX + screen_width/2 + offsetX
screenY = screen_height/2 - zoom_factor*cartY + offsetY
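In that notation the forward and inverse mappings could look like this (a sketch in JavaScript, since the math is language independent; zoom, offsetX and offsetY are the quantities from the formulas above):
// Sketch: cartesian <-> screen with a zoom factor and an optional pan offset.
function cartToScreen(cartX, cartY, w, h, zoom, offsetX = 0, offsetY = 0) {
    return {
        x: zoom * cartX + w / 2 + offsetX,
        y: h / 2 - zoom * cartY + offsetY
    };
}

function screenToCart(screenX, screenY, w, h, zoom, offsetX = 0, offsetY = 0) {
    return {
        x: (screenX - w / 2 - offsetX) / zoom,
        y: (h / 2 - screenY + offsetY) / zoom
    };
}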
You must know the size of the screen in order to be able to convert
Convert to Cartesian:
cartesianx = screenx - screenwidth / 2;
cartesiany = -screeny + screenheight / 2;
Convert to Screen:
screenx = cartesianx + screenwidth / 2;
screeny = -cartesiany + screenheight / 2;
For cases where you have a negative screen value:
I would not worry about this; that content will simply be clipped, so the user will not see it. If it is a problem, I would add some constraints that prevent the cartesian coordinates from being too large. Another solution, since you can't have the edges be +/- infinity, would be to scale your coordinates (e.g. 1 pixel = 10 cartesian units). Let's call this scalefactor. The equations are now:
Convert to Cartesian with scale factor:
cartesianx = scalefactor * (screenx - screenwidth / 2);
cartesiany = scalefactor * (screenheight / 2 - screeny);
Convert to Screen with scale factor:
screenx = cartesianx / scalefactor + screenwidth / 2;
screeny = screenheight / 2 - cartesiany / scalefactor;
You need to know the width and height of the screen.
Then you can do:
cartX = screenX - (width / 2);
cartY = -(screenY - (height / 2));
And:
screenX = cartX + (width / 2);
screenY = -cartY + (height / 2);
You will always have the problem that the result could be off the screen -- either as a negative value, or as a value larger than the available screen size.
Sometimes that won't matter: e.g., if your graphical API accepts negative values and clips your drawing for you. Sometimes it will matter, and for those cases you should have a function that checks if a set of screen coordinates is on the screen.
You could also write your own clipping functions that try to do something reasonable with coordinates that fall off the screen (such as truncating negative screen coordinates to 0, and coordinates that are too large to the maximum onscreen coordinate). However, keep in mind that "reasonable" depends on what you're trying to do, so it might be best to hold off on defining such functions until you actually need them.
In any case, as other answers have noted, you can convert between the coordinate systems as:
cart.x = screen.x - width/2;
cart.y = height/2 - screen.y;
and
screen.x = cart.x + width/2;
screen.y = height/2 - cart.y;
I've got some Boost C++ for you, based on a Microsoft article:
https://msdn.microsoft.com/en-us/library/jj635757(v=vs.85).aspx
You just need to know two screen points and two points in your coordinate system. Then you can convert point from one system to another.
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/vector_proxy.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/triangular.hpp>
#include <boost/numeric/ublas/lu.hpp>
#include <boost/numeric/ublas/io.hpp>
/* Matrix inversion routine.
Uses lu_factorize and lu_substitute in uBLAS to invert a matrix */
template<class T>
bool InvertMatrix(const boost::numeric::ublas::matrix<T>& input, boost::numeric::ublas::matrix<T>& inverse)
{
    typedef boost::numeric::ublas::permutation_matrix<std::size_t> pmatrix;
    // create a working copy of the input
    boost::numeric::ublas::matrix<T> A(input);
    // create a permutation matrix for the LU-factorization
    pmatrix pm(A.size1());
    // perform LU-factorization
    int res = lu_factorize(A, pm);
    if (res != 0)
        return false;
    // create identity matrix of "inverse"
    inverse.assign(boost::numeric::ublas::identity_matrix<T>(A.size1()));
    // backsubstitute to get the inverse
    lu_substitute(A, pm, inverse);
    return true;
}
PointF ConvertCoordinates(PointF pt_in,
    PointF pt1, PointF pt2, PointF pt1_, PointF pt2_)
{
    // Build the 4x4 system that maps (pt1, pt2) onto (pt1_, pt2_)
    // with a uniform scale, rotation and translation.
    float matrix1[] = {
        pt1.X, pt1.Y, 1.0f, 0.0f,
        -pt1.Y, pt1.X, 0.0f, 1.0f,
        pt2.X, pt2.Y, 1.0f, 0.0f,
        -pt2.Y, pt2.X, 0.0f, 1.0f
    };
    boost::numeric::ublas::matrix<float> M(4, 4);
    CopyMemory(&M.data()[0], matrix1, sizeof(matrix1));

    boost::numeric::ublas::matrix<float> M_1(4, 4);
    InvertMatrix<float>(M, M_1);

    boost::numeric::ublas::vector<float> u(4);
    boost::numeric::ublas::vector<float> u1(4);
    u(0) = pt1_.X;
    u(1) = pt1_.Y;
    u(2) = pt2_.X;
    u(3) = pt2_.Y;
    // u1 holds the transform coefficients (a, b, c, d)
    u1 = boost::numeric::ublas::prod(M_1, u);

    PointF pt;
    pt.X = u1(0) * pt_in.X + u1(1) * pt_in.Y + u1(2);
    pt.Y = u1(1) * pt_in.X - u1(0) * pt_in.Y + u1(3);
    return pt;
}
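For reference, the same two-point fit (the transform x' = a*x + b*y + c, y' = b*x - a*y + d that the 4x4 system above solves for) also has a closed-form solution that needs no matrix library. Here it is as a JavaScript sketch, to match the rest of this page; the function name and point object shape are assumptions:
// Sketch: fit the two-point transform in closed form, then return a
// function that applies it to a point.
function fitTwoPointTransform(p1, p2, q1, q2) {
    // p1, p2: points in the source system; q1, q2: the same points in the target system
    const dx = p1.x - p2.x, dy = p1.y - p2.y;
    const dX = q1.x - q2.x, dY = q1.y - q2.y;
    const det = dx * dx + dy * dy;  // zero only if p1 and p2 coincide
    const a = (dX * dx - dY * dy) / det;
    const b = (dX * dy + dY * dx) / det;
    const c = q1.x - a * p1.x - b * p1.y;
    const d = q1.y - b * p1.x + a * p1.y;
    return pt => ({ x: a * pt.x + b * pt.y + c,
                    y: b * pt.x - a * pt.y + d });
}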

Emulate texture2D in Javascript

I have written a shader that transforms vertex positions by a heightmap texture. Because the geometry is being transformed in the vertex shader, I can't use traditional picking algorithms in JavaScript without reverse engineering the shader to get the vertices into their transformed positions. I seem to be having a problem with my understanding of the texture2D function in GLSL. If you ignore the texture wrapping, how would you go about emulating the same function in JS? This is how I currently do it:
/**
* Gets the normalized value of an image's pixel from the x and y coordinates. The x and y coordinates expected here must be between 0 and 1
*/
sample( data, x, y, wrappingS, wrappingT )
{
    var tempVec = new Vec2();

    // Checks the texture wrapping and modifies the x and y accordingly
    this.clampCoords( x, y, wrappingS, wrappingT, tempVec );

    // Convert the normalized units into pixel coords
    x = Math.floor( tempVec.x * data.width );
    y = Math.floor( tempVec.y * data.height );

    if ( x >= data.width )
        x = data.width - 1;
    if ( y >= data.height )
        y = data.height - 1;

    var val = data.data[ ((data.width * y) + x) * 4 ];
    return val / 255.0;
}
This function seems to produce the right results. I have a texture that is 409 pixels wide by 434 pixels high. I coloured the image black except for the very last pixel, which I coloured red (408, 434). So when I call my sampler function in JS:
this.sample(imgData, 0.9999, 0.9999, wrapS, wrapT)
the result is 1, which to me is correct, as it's referring to the red pixel.
However this doesn't seem to be what GLSL gives me. In GLSL I use this (as a test):
float coarseHeight = texture2D( heightfield, vec2( 0.9999, 0.9999 ) ).r;
I would expect coarseHeight to be 1 as well, but instead it's 0. I don't understand this... Could someone give me some insight into where I'm going wrong?
You may have already noticed that any rendered textures are y-mirrored.
The OpenGL, and by extension WebGL, texture origin is in the lower-left corner, whereas your buffer data, when loaded using a canvas 2D method, has an upper-left corner origin.
So you either need to rewrite your buffer or invert your v coordinate.
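In practice that means flipping the v coordinate before sampling, roughly like this (a sketch against the sample() method from the question; u, v, wrapS and wrapT are placeholders for your own values):
// Flip v so the canvas-2d pixel buffer (origin top-left) lines up with
// GLSL's texture2D sampling (origin bottom-left).
var coarseHeight = this.sample( imgData, u, 1.0 - v, wrapS, wrapT );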

Convert 3D world space coordinates to SVG viewport coordinates

I've been attempting to display the positions of players in a 3D game, on a web page, from an overhead view perspective (like a minimap). I'm simply superimposing markers (via SVG) on a 1024x1024 image of the game's level (an overhead view, taken from the game itself). I'm only concerned with the x, y coordinates, since I'm not using z in an overhead view.
The game's world has equal min/max coordinates for both x and y: -4096 to 4096. The 0, 0 coordinate is the center of the world.
To make things even more interesting, the initial position of a game level, within the game world, is arbitrary. So, for example, the upper-left world coordinate for the particular level I've been testing with is -2440, 3383.
My initial confusion comes from the fact that the 0,0 coordinate in a web page is the top-left, versus center in the world space.
How do I correctly convert the 3D world space coordinates to properly display in a web page viewport of any dimension?
Here's what I've tried (I've been attempting to use the viewBox attribute in SVG to handle the upper-left world coordinate offset):
scalePosition: function (targetWidth, targetHeight) {
    // in-game positions
    var gamePosition = this.get('pos');

    var MAX_X = 8192,
        MAX_Y = 8192;

    // Flip y
    gamePosition.y = -gamePosition.y;

    // Ensure game coordinates are only positive values
    // In game = -4096 < x|y < 4096 (0,0 = center)
    // Browser = 0 < x|y < 8192 (0,0 = top-left)
    gamePosition.x += 4096;
    gamePosition.y += 4096;

    // Target dimensions
    targetWidth = (targetWidth || 1024);
    targetHeight = (targetHeight || 1024);

    // Find scale between game dimensions and target dimensions
    var xScale = MAX_X / targetWidth,
        yScale = MAX_Y / targetHeight;

    // Convert in-game coords to target coords
    var targetX = gamePosition.x / xScale,
        targetY = gamePosition.y / yScale;

    return {
        x: targetX,
        y: targetY
    };
},
