SVG, Raphael.js, Drawing - javascript

This is, I guess, more of a maths question, or maybe an SVG question. I was looking at modifying some example code I found on the raphael.js site. I have already modified it to use a custom centre point. Now I want to modify it so that I can specify the angle at which the arc starts (similar to d3.js, so I can use it for something like a bar chart with the middle missing).
However, I have no idea where or how to begin. My maths is terrible: I have no idea what alpha is or what the a variable does, or why x and y are calculated that way. I have been reading the SVG specification over and over, but I am missing some crucial basic knowledge.
Can someone point me in the right direction so I can begin to understand this stuff?
window.onload = function () {
    var r = Raphael("holder", 600, 600),
        R = 200,
        init = true,
        param = {stroke: "#fff", "stroke-width": 30};
    // Custom Attribute
    r.customAttributes.arc = function (xPos, yPos, value, total, R) {
        var alpha = 360 / total * value,
            a = (90 - alpha) * Math.PI / 180,
            x = xPos + R * Math.cos(a),
            y = yPos - R * Math.sin(a),
            path = [["M", xPos, yPos - R], ["A", R, R, 0, +(alpha > 180), 1, x, y]];
        return {path: path};
    };
    var sec = r.path().attr(param).attr({arc: [300, 300, 3, 60, R]});
};
Running the code produces:
<svg height="600" version="1.1" width="600" xmlns="http://www.w3.org/2000/svg" style="overflow: hidden; position: relative;">
    <path style="" fill="none" stroke="#bfb5b5" d="M300,100A200,200,0,0,1,361.8033988749895,109.7886967409693" stroke-width="30"/>
</svg>
Also I have no idea how the arc parameters work together to draw what they are drawing.
Apologies for the lack of focus on the question.
EDIT:
It's based on the polar clock example. http://raphaeljs.com/polar-clock.html

I think the author of the example is trying to create a custom attribute in order to make it easy to create arcs based on clock rotation.
Basically, the total parameter of the custom attribute represents the total movement of the clock (60 seconds), while value (3 in your case) represents the length (in seconds) of the arc you are trying to draw. So basically you have an arc of 3 seconds.
Now for the math:
alpha : the angle (in degrees) of the arc. Notice the conversion from seconds to degrees: 3 seconds -> 18 degrees.
a : the angle in radians. Trigonometric formulas use radians, not degrees, so you need this conversion. For some reason that I don't understand, this is the complementary angle (90 - alpha).
Edit: the complementary angle is (probably) used to compensate for the fact that in trigonometry the y-axis points upwards, while on the svg canvas it points downwards.
x, y : the end point of the path (arc) you are drawing. These are calculated using elementary trigonometry (sorry... you're not getting any help here).
The parameters for the svg arc are described here: http://www.w3.org/TR/SVG/paths.html#PathDataEllipticalArcCommands
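To make the geometry concrete, here is the same calculation as a standalone sketch (the function name is my own; this is not part of Raphael):

```javascript
// Given a centre (cx, cy), radius R, and a value out of a total (e.g. seconds
// out of 60), compute the arc's end point and the SVG large-arc flag,
// exactly as the custom attribute above does.
function arcEndPoint(cx, cy, value, total, R) {
    var alpha = 360 / total * value;        // arc sweep in degrees
    var a = (90 - alpha) * Math.PI / 180;   // radians, measured from 12 o'clock
    return {
        x: cx + R * Math.cos(a),
        y: cy - R * Math.sin(a),            // minus: SVG's y-axis points down
        largeArc: alpha > 180 ? 1 : 0       // the fifth parameter of the "A" command
    };
}

// 15 of 60 seconds is a quarter turn, so the end point lands at 3 o'clock:
var p = arcEndPoint(300, 300, 15, 60, 200);
// p.x is 500 and p.y is 300, up to floating-point rounding
```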

Related

a-frame entity-positioning and rotation

Sadly I am not familiar with the positioning and rotation of entities in 3D space, so I want to create a function that positions an entity with easier-to-understand parameters, like:
createEntity(vertical, horizontal, distance)
for
<a-entity position="-2 0 -2" rotation="-10 30 0"></a-entity>
where vertical and horizontal are float values between 0 and 360, and distance is a float where 0 is position "0 0 0" and the higher the value, the farther away the entity goes.
The rotation should face the camera at init.
Are there helper functions for the calculations?
It sounds like you want to use the Spherical coordinate system to position the elements, and the look-at component to rotate the objects towards the camera.
I'm not aware of any helpers, but it's quite easy to do this with a custom component, like this:
// Register the component
AFRAME.registerComponent('fromspherical', {
    // we will use two angles and a radius provided by the user
    schema: {
        fi: {},
        theta: {},
        r: {},
    },
    init: function () {
        // let's change it to radians
        let fi = this.data.fi * Math.PI / 180;
        let theta = this.data.theta * Math.PI / 180;
        // The 'horizontal' axis is x. The 'vertical' one is y.
        // The calculations below are straight from the wiki site.
        let z = (-1) * Math.sin(theta) * Math.cos(fi) * this.data.r;
        let x = Math.sin(theta) * Math.sin(fi) * this.data.r;
        let y = Math.cos(theta) * this.data.r;
        // position the element using the provided data
        this.el.setAttribute('position', {
            x: x,
            y: y,
            z: z
        });
        // rotate the element towards the camera
        this.el.setAttribute('look-at', '[camera]');
    }
});
Check it out in this fiddle.
The calculations are in a different order than on the wiki website. This is because of how the XYZ space is oriented in aframe: the camera is looking along the negative Z axis upon default initialization.
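For reference, here is the same conversion as a standalone function (my own sketch, not an aframe helper), which makes the axis conventions easy to check:

```javascript
// Spherical-to-Cartesian conversion using aframe's axes
// (y up, camera looking along negative z by default).
// fi and theta are in degrees, r is the radius; names follow the component above.
function fromSpherical(fiDeg, thetaDeg, r) {
    var fi = fiDeg * Math.PI / 180;
    var theta = thetaDeg * Math.PI / 180;
    return {
        x: r * Math.sin(theta) * Math.sin(fi),
        y: r * Math.cos(theta),
        z: -r * Math.sin(theta) * Math.cos(fi)
    };
}

// theta = 90, fi = 0 puts the entity straight ahead of the default camera:
var p = fromSpherical(0, 90, 2);
// p is approximately { x: 0, y: 0, z: -2 }
```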

Intersection of two Moving Objects

I'm trying to use the answer provided here: Intersection of two Moving Objects with Latitude/Longitude Coordinates
But I have some questions..
What is this angle:
var angle = Math.PI + dir - target.dir
I was thinking that the angle that should be used in the law of cosines is already "alpha or target.dir". What is that line doing? Also, in these two steps:
var x = target.x + target.vel * time * Math.cos(target.dir);
var y = target.y + target.vel * time * Math.sin(target.dir);
Shouldn't the code be using the angle between x- or y-axis and the target velocity vector? Why is the author using alpha here?
What is this angle:
var angle = Math.PI + dir - target.dir
The variable named angle is indeed the angle alpha. Because the direction dir is the direction from chaser to target, and we need it the other way round for this calculation, we add π to it before we subtract target.dir.
Maybe using the word angle as a variable name was a bit vague; I'll change it to alpha, the name I used for this angle in the images.
Shouldn't the code be using the angle between x- or y-axis and the target velocity vector? Why is the author using alpha here?
var x = target.x + target.vel * time * Math.cos(target.dir);
var y = target.y + target.vel * time * Math.sin(target.dir);
We are indeed using target.dir, which is the direction of the target, i.e. the angle between the x-axis and the target vector, to calculate the coordinates of the interception point, and not the angle alpha.
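In other words, those two lines just advance the target along its own heading for a given time. A minimal sketch (the function name is my own):

```javascript
// Advance a target along its heading for a given time.
// target.dir is the angle between the positive x-axis and the velocity
// vector, so cos gives the x component and sin the y component of the motion.
function targetPositionAt(target, time) {
    return {
        x: target.x + target.vel * time * Math.cos(target.dir),
        y: target.y + target.vel * time * Math.sin(target.dir)
    };
}

// A target at the origin heading straight up (dir = PI/2) at speed 2,
// after 3 time units:
var p = targetPositionAt({ x: 0, y: 0, vel: 2, dir: Math.PI / 2 }, 3);
// p is approximately { x: 0, y: 6 }
```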

webgl rotate an object around another object in one axis

I'm trying to rotate an object around another object while maintaining its own rotation. I have each object's rotation done; I'm just not sure how to rotate an object around another object. For example, I have an array called Planets[Sun, Mercury]. I want the sun to be stationary and allow mercury to rotate around the sun on one axis.
Currently I have the sun and mercury rotating by themselves. This is done by:
First changing degrees to radians:
function degToRad(degrees) {
    return degrees * Math.PI / 180;
}
Then in my drawScene() I rotate the matrix:
mat4.rotate(mvMatrix, degToRad(rCube), [0, 1, 0]);
and then lastly when I animate the scene I move the object using:
var lastTime = 0;
function animate() {
    var timeNow = new Date().getTime();
    if (lastTime != 0) {
        var elapsed = timeNow - lastTime;
        rCube -= (75 * elapsed) / 1000.0;
    }
    lastTime = timeNow;
}
Is there any way I can pass an origin point into
mat4.rotate(mvMatrix, degToRad(rCube), [0, 1, 0]);
to make it like:
mat4.rotate(mvMatrix, ObjectToRotateAround, degToRad(rCube), [0, 1, 0]);
I feel as if I'm not explaining the code I have well. If you wish to have a look, it can be found here:
https://copy.com/iIXsTtziJaJztzbe
I think you need to do a sequence of matrix operations, and the order of the matrix operations matters.
What you probably want in this case is to first translate Mercury to the position of the Sun, then do the rotation, then reverse the first translation. I have not yet implemented hierarchical objects myself, so I don't want to confuse you. But here is the code for my implementation of an orbit camera, whose yaw function rotates the camera around a target point; you may find it useful:
yaw: function (radian) {
    this.q = quat.axisAngle(this.q, this.GLOBALUP, radian);
    vec3.rotateByQuat(this.dir, this.dir, this.q);
    vec3.cross(this.side, this.GLOBALUP, this.dir);
    vec3.normalize(this.side, this.side);
    this.pos[0] = this.target[0] - this.dir[0] * this.dist;
    this.pos[1] = this.target[1] - this.dir[1] * this.dist;
    this.pos[2] = this.target[2] - this.dir[2] * this.dist;
}
Where this.dir is a normalized vector that always gives the direction from the camera to the target, and this.dist is the distance between camera and target. You can use matrix rotation instead of quaternion rotation.
Edit: just to add, the direction can be calculated by taking the difference in position of the two objects and then normalizing it.
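The translate-rotate-translate idea from the beginning of this answer can also be sketched without any matrix or quaternion library. Assuming a rotation about the y axis (the [0, 1, 0] axis used in the question), a plain-JavaScript sketch (names are my own):

```javascript
// Rotate point p around centre c by `angle` radians about the y-axis:
// translate so c is the origin, rotate in the x-z plane, translate back.
// This is the same composition a matrix library would perform.
function orbitY(p, c, angle) {
    var dx = p.x - c.x;                 // translate to the centre
    var dz = p.z - c.z;
    var cos = Math.cos(angle);
    var sin = Math.sin(angle);
    return {
        x: c.x + dx * cos + dz * sin,   // rotate, then translate back
        y: p.y,
        z: c.z - dx * sin + dz * cos
    };
}

// Mercury 5 units to the right of a sun at (2, 0, 0), rotated a quarter turn:
var mercury = orbitY({ x: 7, y: 0, z: 0 }, { x: 2, y: 0, z: 0 }, Math.PI / 2);
// mercury is approximately { x: 2, y: 0, z: -5 }
```

Calling this every frame with a slowly increasing angle produces the orbit; the sun's position never changes.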

Incorrect angle, wrong side calculated

I need to calculate the angle between 3 points. For this, I do the following:
Grab the 3 points (previous, current and next, it's within a loop)
Calculate the distance between the points with Pythagoras
Calculate the angle using Math.acos
This seems to work fine for shapes without angles of over 180 degrees; however, if a shape has such a corner, it calculates the angle on the short side. Here's an illustration to show what I mean (the red values are wrong):
This is the code that does the calculations:
// Pythagoras for calculating distance between two points (2D)
pointDistance = function (p1x, p1y, p2x, p2y) {
    return Math.sqrt((p1x - p2x) * (p1x - p2x) + (p1y - p2y) * (p1y - p2y));
};
// Get the distance between the previous, current and next points
// vprev, vcur and vnext are objects that look like this:
// { x:float, y:float, z:float }
lcn = pointDistance(vcur.x, vcur.z, vnext.x, vnext.z);
lnp = pointDistance(vnext.x, vnext.z, vprev.x, vprev.z);
lpc = pointDistance(vprev.x, vprev.z, vcur.x, vcur.z);
// Calculate and print the angle
Math.acos((lcn*lcn + lpc*lpc - lnp*lnp)/(2*lcn*lpc))*180/Math.PI
Is there something wrong in the code, did I forget to do something, or should it be done a completely different way?
Hi there, your math and calculations are perfect. You're running into the same problem most people do on calculators, which is orientation. What I would do is find out whether the point lies to the left or right of the vector made by the first two points, using the code below, which I found here:
Determine which side of a line a point lies
isLeft = function (ax, ay, bx, by, cx, cy) {
    return ((bx - ax) * (cy - ay) - (by - ay) * (cx - ax)) > 0;
};
Where ax and ay make up your first point, bx and by your second, and cx and cy your third.
If the point is to the left, the interior angle is the reflex one: subtract the angle you calculated from 360.
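Putting the law-of-cosines angle and the side test together, a sketch (which side counts as the reflex one depends on your polygon's winding order, so you may need to flip the test):

```javascript
// Interior angle at `cur` in degrees, allowing reflex angles (> 180).
// prev, cur, next are objects with x and y fields.
function interiorAngle(prev, cur, next) {
    var lcn = Math.hypot(cur.x - next.x, cur.y - next.y);
    var lnp = Math.hypot(next.x - prev.x, next.y - prev.y);
    var lpc = Math.hypot(prev.x - cur.x, prev.y - cur.y);
    // The law of cosines always yields the angle on the short side (0..180):
    var angle = Math.acos((lcn * lcn + lpc * lpc - lnp * lnp) / (2 * lcn * lpc)) * 180 / Math.PI;
    // If `next` lies to the left of the line prev -> cur, take the reflex angle.
    // (Flip this test if your polygon winds the other way.)
    var left = (cur.x - prev.x) * (next.y - prev.y) - (cur.y - prev.y) * (next.x - prev.x) > 0;
    return left ? 360 - angle : angle;
}

// A right-angle corner gives 90; traversed the other way it gives 270.
```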
I've got a working but not necessarily brief example of how this can work:
var point1x = 0, point1y = 0,
point2x = 10, point2y = 10,
point3x = 20, point3y = 10,
point4x = 10, point4y = 20;
var slope1 = Math.atan2(point2y-point1y,point2x-point1x)*180/Math.PI;
var slope2 = Math.atan2(point3y-point2y,point3x-point2x)*180/Math.PI;
var slope3 = Math.atan2(point4y-point3y,point4x-point3x)*180/Math.PI;
alert(slope1);
alert(slope2);
alert(slope3);
var Angle1 = slope1-slope2;
var Angle2 = slope2-slope3;
alert(180-Angle1);
alert(180-Angle2);
(see http://jsfiddle.net/ZUESt/1/)
To explain the multiple steps: the slopeN variables are the slopes of the individual line segments, and AngleN is the amount turned at each junction (i.e. point N+1). A positive angle is a right turn and a negative angle a left turn.
You can then subtract this angle from 180 to get the actual interior angle that you want.
It should be noted that this code can of course be compressed, and that five lines are merely outputting variables so you can see what is going on. I'll let you worry about optimizing it for your own use, with this being a proof of concept.
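For reference, the same idea condensed into one function (a sketch using my own names; points are objects with x and y fields):

```javascript
// Interior angle in degrees at point b of the path a -> b -> c,
// computed from the slopes of the two segments.
function angleAt(a, b, c) {
    var slopeIn = Math.atan2(b.y - a.y, b.x - a.x) * 180 / Math.PI;
    var slopeOut = Math.atan2(c.y - b.y, c.x - b.x) * 180 / Math.PI;
    var turn = slopeIn - slopeOut;   // right turn positive, left turn negative
    var angle = 180 - turn;          // interior angle
    return (angle + 360) % 360;      // normalise into 0..360
}

// The first junction from the example above (points 1, 2, 3):
// slope1 = 45 and slope2 = 0, so the interior angle is 180 - 45 = 135.
```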
You need to check boundary conditions (for instance, collinear points) and apply the proper calculation to find the angle.
Also, a triangle can't have any (interior) angle greater than 180 degrees; the sum of the angles of a triangle is 180 degrees.

KineticJS: Algorithm required to determine new X,Y coords after image resize

BACKGROUND:
The app allows users to upload a photo of themselves and then place a pair of glasses over their face to see what it looks like. For the most part, it is working fine. After the user selects the location of the 2 pupils, I auto-zoom the image based on the ratio between the distance between the pupils and the already-known distance between the center points of the glasses. All is working fine there, but now I need to automatically place the glasses image over the eyes.
I am using KineticJS, but the problem is not with regard to that library or javascript; it is more of an algorithm requirement.
WHAT I HAVE TO WORK WITH:
Distance between pupils (eyes)
Distance between pupils (glasses)
Glasses width
Glasses height
Zoom ratio
SOME CODE:
//.. code before here just zooms the image, etc..
//problem is here (this is wrong, but I need to know what is the right way to calculate this)
var newLeftEyeX = self.leftEyePosition.x * ratio;
var newLeftEyeY = self.leftEyePosition.y * ratio;
//create a blue dot for testing (remove later)
var newEyePosition = new Kinetic.Circle({
    radius: 3,
    fill: "blue",
    stroke: "blue",
    strokeWidth: 0,
    x: newLeftEyeX,
    y: newLeftEyeY
});
self.pointsLayer.add(newEyePosition);
var glassesWidth = glassesImage.getWidth();
var glassesHeight = glassesImage.getHeight();
// this code below works perfectly, as I can see the glasses centered over the blue dot created above
newGlassesPosition.x = newLeftEyeX - (glassesWidth / 4);
newGlassesPosition.y = newLeftEyeY - (glassesHeight / 2);
NEEDED
A math genius to give me the algorithm to determine where the new left eye position should be AFTER the image has been resized
UPDATE
After researching this for the past 6 hours or so, I think I need to do some sort of "translate transform", but the examples I see only allow setting this by x and y amounts, whereas I will only know the scale of the underlying image. Here's the example I found (which cannot help me):
http://tutorials.jenkov.com/html5-canvas/transformation.html
and here is something which looks interesting, but it is for Silverlight:
Get element position after transform
Is there perhaps some way to do the same in Html5 and/or KineticJS? Or perhaps I am going down the wrong road here... any ideas, people?
UPDATE 2
I tried this:
// if zoomFactor > 1, then the picture got bigger, so...
if (zoomFactor > 1) {
    // if x = 10 (for example) and zoomFactor = 2, the new x should be 5
    // current x / zoomFactor => 10 / 2 = 5
    newLeftEyeX = self.leftEyePosition.x / zoomFactor;
    // same for y
    newLeftEyeY = self.leftEyePosition.y / zoomFactor;
} else {
    // else the picture got smaller, so...
    // if x = 10 (for example) and zoomFactor = 0.5, the new x should be 20
    // current x * (1 / zoomFactor) => 10 * (1 / 0.5) = 10 * 2 = 20
    newLeftEyeX = self.leftEyePosition.x * (1 / zoomFactor);
    // same for y
    newLeftEyeY = self.leftEyePosition.y * (1 / zoomFactor);
}
that didn't work, so then I tried an implementation of Rody Oldenhuis' suggestion (thanks Rody):
var xFromCenter = self.leftEyePosition.x - self.xCenter;
var yFromCenter = self.leftEyePosition.y - self.yCenter;
var angle = Math.atan2(yFromCenter, xFromCenter);
var length = Math.hypot(xFromCenter, yFromCenter);
var xNew = zoomFactor * length * Math.cos(angle);
var yNew = zoomFactor * length * Math.sin(angle);
newLeftEyeX = xNew + self.xCenter;
newLeftEyeY = yNew + self.yCenter;
However, that is still not working as expected, so I am not sure what the issue is currently. If anyone has worked with KineticJS before and has an idea of what the issue may be, please let me know.
UPDATE 3
I checked Rody's calculations on paper and they seem fine, so there is obviously something else messing things up. I got the coordinates of the left pupil at zoom factors 1 and 2. With those coordinates, maybe someone can figure out what the issue is:
Zoom Factor 1: x = 239, y = 209
Zoom Factor 2: x = 201, y = 133
OK, since it's an algorithmic question, I'm going to keep this generic and only write pseudo code.
If I understand you correctly, what you want is the following:
Transform all coordinates such that the origin of your coordinate system is at the zoom center (usually, central pixel)
Compute the angle a line drawn from this new origin to a point of interest makes with the positive x-axis. Compute also the length of this line.
The new x and y coordinates after zooming are defined by elongating this line, such that the new line is the zoom factor times the length of the original line.
Transform the newly found x and y coordinates back to a coordinate system that makes sense to the computer (e.g., top left pixel = 0,0)
Repeat for all points of interest.
In pseudo-code (with formulas):
x_center = image_width/2
y_center = image_height/2
x_from_zoom_center = x_from_topleft - x_center
y_from_zoom_center = y_from_topleft - y_center
angle = atan2(y_from_zoom_center, x_from_zoom_center)
length = hypot(x_from_zoom_center, y_from_zoom_center)
x_new = zoom_factor * length * cos(angle)
y_new = zoom_factor * length * sin(angle)
x_new_topleft = x_new + x_center
y_new_topleft = y_new + y_center
Note that this assumes the number of pixels used for length and width stays the same after zooming. Note also that some rounding should take place (keep everything double precision, and only round to int after all calculations)
In the code above, atan2 is the four-quadrant arctangent, available in most programming languages, and hypot is simply sqrt(x*x + y*y), but computed more carefully (e.g., to avoid overflow), also available in most programming languages.
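The pseudo-code above drops into JavaScript almost verbatim; a sketch (the function name is my own):

```javascript
// Map a point to its location after zooming about the image centre,
// following the pseudo-code above step by step.
function zoomPoint(x, y, imageWidth, imageHeight, zoomFactor) {
    var xCenter = imageWidth / 2;
    var yCenter = imageHeight / 2;
    var xFromCenter = x - xCenter;             // shift origin to zoom centre
    var yFromCenter = y - yCenter;
    var angle = Math.atan2(yFromCenter, xFromCenter);
    var length = Math.hypot(xFromCenter, yFromCenter);
    return {                                   // elongate the line, shift back
        x: zoomFactor * length * Math.cos(angle) + xCenter,
        y: zoomFactor * length * Math.sin(angle) + yCenter
    };
}

// Zooming 2x about the centre of a 400x400 image moves (150, 150)
// twice as far from the centre (200, 200):
var p = zoomPoint(150, 150, 400, 400, 2);
// p is approximately { x: 100, y: 100 }
```

Note that since length * cos(angle) is just xFromCenter (and likewise for y), the whole thing reduces to scaling the offset from the centre: x_new = zoomFactor * xFromCenter + xCenter, with no trigonometry needed.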
Is this indeed what you were after?
