mouse coordinates to isometric coordinates - javascript

I'm trying to get the right tile in the isometric world by mouse position.
I've read some threads about this, but it doesn't seem to work for me.
The basic idea is to convert the normal mouse coordinates into isometric tile coordinates.
As you can see, the mouse cursor is at tile 5/4, but the calculation is wrong (the tile selected is 4/5). This is my code:
var params = { tx: 100, ty: 54 },
    PI = Math.PI,
    x1 = x_mouse - params.tx * 5,
    y1 = y_mouse * -2,
    xr = Math.cos(PI / 4) * x1 - Math.sin(PI / 4) * y1,
    yr = Math.sin(PI / 4) * x1 + Math.cos(PI / 4) * y1,
    diag = params.ty * Math.sqrt(2),
    x2 = Math.trunc(xr / diag) + 1,
    y2 = Math.trunc(yr * -1 / diag) + 1;
The original height of a tile is 54px but, as you can see, only the border tiles show their full height; the rest of the tiles are cut off by 4 pixels.
Please help me; maybe my whole formula is wrong.
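For comparison, the standard 2:1 diamond projection can be inverted directly, without any trigonometry. This is only a sketch under assumptions not stated in the post: `tileW`/`tileH` are the full tile dimensions (100 and 54 here), and `originX`/`originY` is the screen position of the top corner of tile 0/0.

```javascript
// Invert the usual diamond projection:
//   screenX = (tx - ty) * tileW / 2
//   screenY = (tx + ty) * tileH / 2
// Adding and subtracting the normalized offsets recovers tx and ty.
function screenToTile(mouseX, mouseY, tileW, tileH, originX, originY) {
    var dx = mouseX - originX;
    var dy = mouseY - originY;
    var tx = Math.floor(dy / tileH + dx / tileW);
    var ty = Math.floor(dy / tileH - dx / tileW);
    return { x: tx, y: ty };
}
```

For example, with 100x54 tiles the center of tile 5/4 projects to screen point (50, 270), and `screenToTile(50, 270, 100, 54, 0, 0)` returns tile 5/4.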


Zoom in on a photo in canvas using javascript

I was searching for a couple of days for a way to solve this problem, and I decided to ask for help here.
The thing is, I made a canvas that is 640x480px and preloaded it with an image.
I then used the mouse to select the area to zoom in on (with a draggable square, the same kind you get when you drag-select multiple icons on the Windows desktop). Since the zoomed-in part of the photo is a square, I changed the canvas to 480x480px and displayed the zoomed-in part of the photo within that new canvas.
My question is: I'm doing all of this to zoom in on someone's face, so a user can more precisely place dots on the eyes and mouth (a face-recognition-software-like thing). How can I get the real coordinates of these dots with respect to the original image and the original 640x480px canvas?
Everything is in pure JavaScript: no jQuery and no other JS libraries.
Thank you
The same way you'd convert between Fahrenheit and Celsius: decide on a reference point and adjust your scale. The reference point is easy: (0, 0) in the zoomed context is the upper left corner of the selected area in the original context. For the scale, convert the zoomed click point from pixels to percentages. A click at (120, 240) is a click at (25%, 50%). Then multiply that percentage by the size of the selected area and add the reference point offset.
// Assume the user selected in the 640x480 canvas a 223x223
// square whose upper left corner is (174, 36),
let zoomArea = {x: 174, y: 36, size: 223};
// and then clicked (120, 260) in the new 480x480 canvas.
let pointClicked = {x: 120, y: 260};

function getOriginalCoords(area, clicked) {
    const ZOOMED_SIZE = 480;
    // Get the coordinates of the clicked point in the zoomed
    // area, on a scale of 0 to 1.
    let clickedPercent = {
        x: clicked.x / ZOOMED_SIZE,
        y: clicked.y / ZOOMED_SIZE
    };
    return {
        x: clickedPercent.x * area.size + area.x,
        y: clickedPercent.y * area.size + area.y
    };
}

console.log(getOriginalCoords(zoomArea, pointClicked));
In the end, I did it this way:
// get bounding rect of canvas
var rectangle = canvas.getBoundingClientRect();
// position of the point in respect to new 480x480 canvas
var xPositionZoom = e.clientX - crosshairOffSet - rectangle.left;
var yPositionZoom = e.clientY - crosshairOffSet - rectangle.top;
// position of the point in respect to original 640x480 canvas
var xPosition = rect.startX + (rect.w * (xPositionZoom / canvas.width));
var yPosition = rect.startY + (rect.h * (yPositionZoom / canvas.height));

Trigonometry Issue causing distortion when drawing floor textures in raycaster

I'm creating a game with raycasted 3D graphics, like Wolfenstein 3D, but using line segments instead of a grid of blocks for walls. Everything is fine when drawing the floors, until the player view is rotated.
The floor should be aligned against the walls.
Here is the view in 2D, with each pixel of the floor on the screen rendered as a blue point:
The top image shows the player's rotation at Math.PI. In the bottom image it is rotated slightly.
A significant feature is that the beginning of the cone of points is aligned along the y axis; it should look like a frustum.
Here is the code I am using to find the x and y coordinates of each point where a texture is drawn on the floor. This code is run for each x value on the screen.
The variable "projPlane" is the projection plane, which is the size of the screen.
projDistance is the distance from the player to the projection plane so that it fits within the field of view, or (projPlane.width/2)/Math.tan(VectorMath.toRadians(fov/2))
pHeight is the player's height.
The variable "x" is the x value of the column being rendered on the screen.
//FLOOR TEXTURE
var floorSize = Math.floor((projPlane.height - wallSize) / 2); //draw the floor from under the wall
var floorTextureIndex = 1;
//for texture y
if (floorSize > 0) { // values need to be positive
    //find the point on the floor
    var textureWidth = textures[floorTextureIndex].textureImage.width;
    var textureHeight = textures[floorTextureIndex].textureImage.height;
    for (var ty = 0; ty < floorSize; ty++) {
        //angle is tan
        var yAngle = projPlane.distance / (ty + wallSize / 2); //ty + wallSize/2 is the point on the projection plane
        var yDistance = yAngle * pHeight; //pHeight is player height
        var worldY = player.y + Math.sin(player.vector) * yDistance;
        var coordY = Math.floor(worldY % textureHeight);

        var xAngle = Math.atan((projPlane.width / 2 - x) / projPlane.distance);
        var xDistance = yDistance / Math.cos(xAngle);
        var worldX = player.x + Math.cos(player.vector - xAngle) * xDistance;
        var coordX = Math.floor(worldX % textureWidth);

        floorPoints.push(new Point(worldX, worldY));
        var tempTexture = textures[floorTextureIndex];
        if (tempTexture.textureData[coordX] != undefined) {
            // a different function draws these to the screen in descending order
            floorTextureColors.push(tempTexture.textureData[coordX][coordY]);
        }
    }
}
It doesn't seem to be an issue with the y value, since the y coordinates of the floor texture appear where they should. (EDIT: it actually was to do with the y value. Adding the xAngle to player.vector when finding the y position returns a correct y position, but there is still a "curved" distortion. I hope one of you can propose a more concrete solution.)
What I do to find the x coordinate is form a triangle, with the distance from the player to the floor point as the side opposite the angle that the point makes with the player. The hypotenuse should then be the magnitude of the distance to the point's x coordinate.
Then I multiply the cosine of the angle by that magnitude to get the x value.
It works whenever the character isn't pointing west or east. What is causing all the first points to have the same y value? I think that's the biggest clue to the distortion occurring here.
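For comparison, the usual way to avoid this kind of curved distortion is to interpolate a ray direction across a camera plane per screen column, rather than computing a per-column angle. The sketch below uses assumed names, not the names in the question: `dirX`/`dirY` is the unit view direction, `planeX`/`planeY` is the camera plane (magnitude tan(fov/2)), and the camera height is fixed at half the screen height.

```javascript
// Direction-vector floor casting: one ray direction per screen column,
// interpolated linearly across the camera plane. Because the screen is a
// flat projection plane (not an arc), this avoids the fisheye/curving
// that per-column angles introduce.
function floorPoint(playerX, playerY, dirX, dirY, planeX, planeY,
                    column, row, screenW, screenH) {
    // Horizontal position of this column on the camera plane, -1..+1.
    var cameraX = 2 * column / screenW - 1;
    var rayDirX = dirX + planeX * cameraX;
    var rayDirY = dirY + planeY * cameraX;

    // By similar triangles, a row p pixels below the horizon sees the
    // floor at distance (camera height) / p along the view axis.
    var p = row - screenH / 2;          // must be > 0 (below the horizon)
    var rowDistance = (screenH / 2) / p;

    return {
        x: playerX + rowDistance * rayDirX,
        y: playerY + rowDistance * rayDirY
    };
}
```

With this formulation, the floor points for any fixed screen row lie on a straight line parallel to the camera plane at every rotation, which is exactly the frustum shape described above.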

How to find the angle from the center of a rectangle to its vertices

I'm developing a collision detection system in Javascript, and I need to find from which side of the rectangle a ball collided.
Anyway, what I need right now is to find the angle from the center of a rectangle to its vertices. Like this:
As you can see in the image, I want to find that angle, but also the rest of the angles to the bottom left and top left vertices.
I know this is math, but I need to code the formula in Javascript anyway.
Let's say I have this:
var box = {
    width: 200,
    height: 100
};
var boxCenter = { x: box.width / 2, y: box.height / 2 };
var angleRight = // ... ;
var angleBottom = // ... ;
And so on
The angle (red) may be calculated with:
var angle = 2 * Math.atan(height / width);
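A sketch of the same idea extended to all four corners, using `Math.atan2` so each quadrant comes out with the correct sign (assuming screen coordinates, with y increasing downward; the function name is mine):

```javascript
// Angles, in radians from the positive x-axis, from the center of a box
// to each of its four corners. Math.atan2(y, x) handles all quadrants.
function cornerAngles(box) {
    var hw = box.width / 2, hh = box.height / 2;
    return {
        topRight:    Math.atan2(-hh,  hw),
        bottomRight: Math.atan2( hh,  hw),
        bottomLeft:  Math.atan2( hh, -hw),
        topLeft:     Math.atan2(-hh, -hw)
    };
}
```

The red angle between the two right-hand corners is then `bottomRight - topRight`, which reduces to the `2 * Math.atan(height / width)` formula above.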

Calculating relative item positions based on camera position and rotation

For a 2d game, I have these concepts:
stage: container in which items and cameras are placed.
item: a visible entity, located on a certain point in the world, anchored from center.
camera: an invisible entity, used to generate relative images of world, located on a certain point in the world, anchored from center.
In the illustrations, you can see how they are related, and what the end result should be.
Here is the code I have: (dumbed down to make it easier to read)
Note1: This is not happening on canvas, so I will not use canvas translation or rotation (and even then, I don't think it would make the problem any easier).
Note2: Item and camera positions are center coordinates.
var sin = Math.sin(rotationRad);
var cos = Math.cos(rotationRad);

var difX = item.x - camera.x;
var difY = item.y - camera.y;

var offsetX = camera.width / 2;
var offsetY = camera.height / 2;

view.x = (cos * difX) - (sin * difY) + _Ax + _Bx;
view.y = (sin * difX) + (cos * difY) + _Ay + _By;
This is supposed to calculate an item's new position by:
calculating new position of item by rotating it around camera center
(_A*) adjusting item position by offsetting camera position
(_B*) adjusting item position by offsetting camera size
I tried several different solutions to use for _A* and _B* here, but none of them work.
What is the correct way to do this, and if possible, what is the explanation?
You first subtract the new origin position from the object position. You then rotate it by the inverse of the rotation. The new origin can be the camera position or the top left corner of the viewport. Of course, if you know the viewport center, its top left corner is computed by subtracting half of its dimensions.
Like this:
var topLeft = Camera.Position - Camera.Size / 2;
var newPosition = Object.Position - topLeft;
newPosition = Rotate(newPosition, -Camera.Angle);
Rotation is very simple:
rotatedX = x * cos(angle) - y * sin(angle)
rotatedY = y * cos(angle) + x * sin(angle)
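Putting those steps together in JavaScript (a sketch only; the `camera = {x, y, width, height, angle}` and `item = {x, y}` shapes are assumptions, not names from the question):

```javascript
// World-to-view transform: shift the origin to the viewport's top-left
// corner, then rotate by the inverse of the camera rotation.
function worldToView(item, camera) {
    // 1. New origin: the viewport's top-left corner.
    var topLeftX = camera.x - camera.width / 2;
    var topLeftY = camera.y - camera.height / 2;
    var dx = item.x - topLeftX;
    var dy = item.y - topLeftY;

    // 2. Rotate by -camera.angle using the rotation formulas above.
    var cos = Math.cos(-camera.angle);
    var sin = Math.sin(-camera.angle);
    return {
        x: dx * cos - dy * sin,
        y: dy * cos + dx * sin
    };
}
```

With `camera.angle = 0` this collapses to a plain translation, which is an easy sanity check before adding rotation.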

KinectJS: Algorithm required to determine new X,Y coords after image resize

BACKGROUND:
The app allows users to upload a photo of themselves and then place a pair of glasses over their face to see what it looks like. For the most part, it is working fine. After the user selects the location of the 2 pupils, I auto zoom the image based on the ratio between the distance of the pupils and then already known distance between the center points of the glasses. All is working fine there, but now I need to automatically place the glasses image over the eyes.
I am using KinectJS, but the problem is not with regard to that library or JavaScript; it is more of an algorithm requirement.
WHAT I HAVE TO WORK WITH:
Distance between pupils (eyes)
Distance between pupils (glasses)
Glasses width
Glasses height
Zoom ratio
SOME CODE:
//.. code before here just zooms the image, etc..

//problem is here (this is wrong, but I need to know what is the right way to calculate this)
var newLeftEyeX = self.leftEyePosition.x * ratio;
var newLeftEyeY = self.leftEyePosition.y * ratio;

//create a blue dot for testing (remove later)
var newEyePosition = new Kinetic.Circle({
    radius: 3,
    fill: "blue",
    stroke: "blue",
    strokeWidth: 0,
    x: newLeftEyeX,
    y: newLeftEyeY
});
self.pointsLayer.add(newEyePosition);

var glassesWidth = glassesImage.getWidth();
var glassesHeight = glassesImage.getHeight();

// this code below works perfectly, as I can see the glasses center over the blue dot created above
newGlassesPosition.x = newLeftEyeX - (glassesWidth / 4);
newGlassesPosition.y = newLeftEyeY - (glassesHeight / 2);
NEEDED
A math genius to give me the algorithm to determine where the new left eye position should be AFTER the image has been resized
UPDATE
After researching this for the past 6 hours or so, I think I need to do some sort of "translate transform", but the examples I see only allow setting this by x and y amounts, whereas I will only know the scale of the underlying image. Here's the example I found (which cannot help me):
http://tutorials.jenkov.com/html5-canvas/transformation.html
and here is something which looks interesting, but it is for Silverlight:
Get element position after transform
Is there perhaps some way to do the same in HTML5 and/or KinectJS? Or perhaps I am going down the wrong road here... any ideas, people?
UPDATE 2
I tried this:
// if zoomFactor > 1, then the picture got bigger, so...
if (zoomFactor > 1) {
    // if x = 10 (for example) and zoomFactor = 2, the new x should be 5
    // current x / zoomFactor => 10 / 2 = 5
    newLeftEyeX = self.leftEyePosition.x / zoomFactor;
    // same for y
    newLeftEyeY = self.leftEyePosition.y / zoomFactor;
} else {
    // else the picture got smaller, so...
    // if x = 10 (for example) and zoomFactor = 0.5, the new x should be 20
    // current x * (1 / zoomFactor) => 10 * (1 / 0.5) = 10 * 2 = 20
    newLeftEyeX = self.leftEyePosition.x * (1 / zoomFactor);
    // same for y
    newLeftEyeY = self.leftEyePosition.y * (1 / zoomFactor);
}
that didn't work, so then I tried an implementation of Rody Oldenhuis' suggestion (thanks Rody):
var xFromCenter = self.leftEyePosition.x - self.xCenter;
var yFromCenter = self.leftEyePosition.y - self.yCenter;

var angle = Math.atan2(yFromCenter, xFromCenter);
var length = Math.hypot(xFromCenter, yFromCenter);

var xNew = zoomFactor * length * Math.cos(angle);
var yNew = zoomFactor * length * Math.sin(angle);

newLeftEyeX = xNew + self.xCenter;
newLeftEyeY = yNew + self.yCenter;
However, that is still not working as expected. So, I am not sure what the issue is currently. If anyone has worked with KinectJS before and has an idea of what the issue may be, please let me know.
UPDATE 3
I checked Rody's calculations on paper and they seem fine, so there is obviously something else messing things up. I got the coordinates of the left pupil at zoom factors 1 and 2. With those coordinates, maybe someone can figure out what the issue is:
Zoom Factor 1: x = 239, y = 209
Zoom Factor 2: x = 201, y = 133
OK, since it's an algorithmic question, I'm going to keep this generic and only write pseudo-code.
If I understand you correctly, what you want is the following:
Transform all coordinates such that the origin of your coordinate system is at the zoom center (usually, central pixel)
Compute the angle a line drawn from this new origin to a point of interest makes with the positive x-axis. Compute also the length of this line.
The new x and y coordinates after zooming are defined by elongating this line, such that the new line is the zoom factor times the length of the original line.
Transform the newly found x and y coordinates back to a coordinate system that makes sense to the computer (e.g., top left pixel = 0,0)
Repeat for all points of interest.
In pseudo-code (with formulas):
x_center = image_width/2
y_center = image_height/2
x_from_zoom_center = x_from_topleft - x_center
y_from_zoom_center = y_from_topleft - y_center
angle = atan2(y_from_zoom_center, x_from_zoom_center)
length = hypot(x_from_zoom_center, y_from_zoom_center)
x_new = zoom_factor * length * cos(angle)
y_new = zoom_factor * length * sin(angle)
x_new_topleft = x_new + x_center
y_new_topleft = y_new + y_center
Note that this assumes the number of pixels used for length and width stays the same after zooming. Note also that some rounding should take place (keep everything in double precision, and only round to int after all calculations).
In the code above, atan2 is the four-quadrant arctangent, available in most programming languages, and hypot is simply sqrt(x*x + y*y), but computed more carefully (e.g., to avoid overflow), also available in most programming languages.
Is this indeed what you were after?
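For reference, the pseudo-code above translates almost line for line into JavaScript; the function name `zoomPoint` is mine, and `Math.hypot` provides the careful sqrt(x*x + y*y):

```javascript
// Scale a point about the image center by zoomFactor, via the
// angle-and-length formulation from the pseudo-code above.
function zoomPoint(x, y, zoomFactor, imageWidth, imageHeight) {
    var xCenter = imageWidth / 2;
    var yCenter = imageHeight / 2;

    var dx = x - xCenter;
    var dy = y - yCenter;
    var angle = Math.atan2(dy, dx);
    var length = Math.hypot(dx, dy);

    return {
        x: zoomFactor * length * Math.cos(angle) + xCenter,
        y: zoomFactor * length * Math.sin(angle) + yCenter
    };
}
```

Geometrically, the angle-and-length detour simplifies to `center + zoomFactor * (point - center)`: a plain scaling about the zoom center.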
