I want to draw StackOverflow's logo with this Neural Network:
The NN should ideally learn [r, g, b] = f([x, y]). In other words, it should return RGB colors for a given pair of coordinates. The FFNN works pretty well for simple shapes like a circle or a box. For example, after several thousand epochs a circle looks like this:
Try it yourself: https://codepen.io/adelriosantiago/pen/PoNGeLw
However, since StackOverflow's logo is far more complex, even after several thousand iterations the FFNN's results are somewhat poor:
From left to right:
StackOverflow's logo at 256 colors.
With 15 hidden neurons: The left handle never appears.
With 50 hidden neurons: Pretty poor result in general.
With a learning rate of 0.03: Blue shows up in the result (blue is not in the original image).
With a time-decreasing learning rate: The left handle appears, but other details are now lost.
Try it yourself: https://codepen.io/adelriosantiago/pen/xxVEjeJ
Some parameters of interest are the synaptic.Architect.Perceptron definition and the learningRate value.
How can I improve the accuracy of this NN?
Could you improve the snippet? If so, please explain what you did. If there is a better NN architecture to tackle this type of job, could you please provide an example?
Additional info:
Artificial Neural Network library used: Synaptic.js
To run this example in your localhost: See repository
By adding another layer, you get better results:
let perceptron = new synaptic.Architect.Perceptron(2, 15, 10, 3)
There are also small improvements you can make to increase efficiency (marginally):
Here is my optimized code:
const width = 125
const height = 125
const outputCtx = document.getElementById("output").getContext("2d")
const iterationLabel = document.getElementById("iteration")
const stopAtIteration = 3000
let perceptron = new synaptic.Architect.Perceptron(2, 15, 10, 3)
let iteration = 0
let inputData = (() => {
const tempCtx = document.createElement("canvas").getContext("2d")
tempCtx.drawImage(document.getElementById("input"), 0, 0)
return tempCtx.getImageData(0, 0, width, height)
})()
const getRGB = (img, x, y) => {
var k = (height * y + x) * 4;
return [
img.data[k] / 255, // R
img.data[k + 1] / 255, // G
img.data[k + 2] / 255, // B
//img.data[(height * y + x) * 4 + 3], // Alpha not used
]
}
const paint = () => {
var imageData = outputCtx.getImageData(0, 0, width, height)
for (let x = 0; x < width; x++) {
for (let y = 0; y < height; y++) {
var rgb = perceptron.activate([x / width, y / height])
var k = (height * y + x) * 4;
imageData.data[k] = rgb[0] * 255
imageData.data[k + 1] = rgb[1] * 255
imageData.data[k + 2] = rgb[2] * 255
imageData.data[k + 3] = 255 // Alpha not used
}
}
outputCtx.putImageData(imageData, 0, 0)
setTimeout(train, 0)
}
const train = () => {
iterationLabel.innerHTML = ++iteration
if (iteration > stopAtIteration) return
let learningRate = 0.01 / (1 + 0.0005 * iteration) // Attempt with dynamic learning rate
//let learningRate = 0.01 // Attempt with non-dynamic learning rate
for (let x = 0; x < width; x += 1) {
for (let y = 0; y < height; y += 1) {
perceptron.activate([x / width, y / height])
perceptron.propagate(learningRate, getRGB(inputData, x, y))
}
}
paint()
}
const startTraining = (btn) => {
btn.disabled = true
train()
}
EDIT: I made another CodePen with even better results:
https://codepen.io/xurei/pen/KKzWLxg
It is likely overfitted, by the way.
The perceptron definition:
let perceptron = new synaptic.Architect.Perceptron(2, 8, 15, 7, 3)
Taking some insights from the lecture slides of Bhiksha Raj (from slide 62 onwards), and summarizing below:
Each node can be thought of as a linear classifier, and a combination of several nodes in a single layer of a neural network can approximate any basic shape. For example, a rectangle can be formed by 4 nodes, one per line, assuming each node contributes one line, and the shape can then be approximated by the final output layer.
For complex shapes such as a circle, a single layer may require an effectively infinite number of nodes. The same would likely hold for a single layer with two disjoint shapes (a non-overlapping triangle and rectangle). However, this can still be learnt using more than one hidden layer, where the 1st layer learns the basic shapes and the 2nd layer approximates their disjoint combinations.
Thus, you can treat this logo as a combination of disjoint rectangles (5 rectangles for the orange part and 3 rectangles for the grey part). We can use at least 32 nodes in the 1st hidden layer and a few nodes in the 2nd hidden layer. However, we don't have control over what each node learns, so a few more neurons than strictly required should be helpful.
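As a concrete illustration of this reasoning, a Synaptic.js perceptron along those lines could be declared as follows (just a sketch; 32 and 8 are rough sizes implied by the argument above, not tuned values):
// 2 inputs (x, y), a wide 1st hidden layer for the basic shapes,
// a smaller 2nd hidden layer to combine them, 3 outputs (r, g, b)
let perceptron = new synaptic.Architect.Perceptron(2, 32, 8, 3)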
Summary
This question is in JavaScript, but an answer in any language, pseudo-code, or just the maths would be great!
I have been trying to implement the Separating-Axis-Theorem to accomplish the following:
Detecting an intersection between a convex polygon and a circle.
Finding out a translation that can be applied to the circle to resolve the intersection, so that the circle is barely touching the polygon but no longer inside.
Determining the axis of the collision (details at the end of the question).
I have successfully completed the first bullet point and you can see my JavaScript code at the end of the question. I am having difficulties with the other parts.
Resolving the intersection
There are plenty of examples online on how to resolve the intersection in the direction with the smallest/shortest overlap of the circle. You can see in my code at the end that I already have this calculated.
However this does not suit my needs. I must resolve the collision in the opposite direction of the circle's trajectory (assume I already have the circle's trajectory and would like to pass it into my function as a unit-vector or angle, whichever suits).
You can see the difference between the shortest resolution and the intended resolution in the below image:
How can I calculate the minimum translation vector for resolving the intersection inside my test_CIRCLE_POLY function, but that is to be applied in a specific direction, the opposite of the circle's trajectory?
My ideas/attempts:
My first idea was to add an additional axis to the axes that must be tested in the SAT algorithm, and this axis would be perpendicular to the circle's trajectory. I would then resolve based on the overlap when projecting onto this axis. This would sort of work, but would resolve way too far in most situations. It won't result in the minimum translation. So this won't be satisfactory.
My second idea was to continue to use magnitude of the shortest overlap, but change the direction to be the opposite of the circle's trajectory. This looks promising, but there are probably many edge-cases that I haven't accounted for. Maybe this is a nice place to start.
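For reference, a minimal sketch of this second idea inside test_CIRCLE_POLY, assuming circleTrajectory is passed in as a unit vector (untested, and subject to the edge cases mentioned above):
// After the SAT loop has found shortestOverlap, resolve along the reversed
// trajectory instead of along the shortest-overlap axis:
let resolutionDir = circleTrajectory.mul(-1); // opposite of the circle's movement
let resolutionVector = resolutionDir.mul(Math.abs(shortestOverlap)); // reuse the shortest overlap's magnitude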
Determining side/axis of collision
I've figured out a way to determine which sides of the polygon the circle is colliding with. For each tested axis of the polygon, I would simply check for overlap. If there is overlap, that side is colliding.
This solution will not be acceptable once again, as I would like to figure out only one side depending on the circle's trajectory.
My intended solution would tell me, in the example image below, that axis A is the axis of collision, and not axis B. This is because once the intersection is resolved, axis A is the axis corresponding to the side of the polygon that is just barely touching the circle.
My ideas/attempts:
Currently I assume the axis of collision is that perpendicular to the MTV (minimum translation vector). This is currently incorrect, but should be the correct axis once I've updated the intersection resolution process in the first half of the question. So that part should be tackled first.
Alternatively, I've considered creating a line from the circle's previous position to its current position + radius, and checking which sides intersect with this line. However, there's still ambiguity, because on occasion there will be more than one side intersecting with the line.
My code so far
function test_CIRCLE_POLY(circle, poly, circleTrajectory) {
// circleTrajectory is currently not being used
let axesToTest = [];
let shortestOverlap = +Infinity;
let shortestOverlapAxis;
// Figure out polygon axes that must be checked
for (let i = 0; i < poly.vertices.length; i++) {
let vertex1 = poly.vertices[i];
let vertex2 = poly.vertices[i + 1] || poly.vertices[0]; // neighbouring vertex
let axis = vertex1.sub(vertex2).perp_norm();
axesToTest.push(axis);
}
// Figure out circle axis that must be checked
let closestVertex;
let closestVertexDistSqr = +Infinity;
for (let vertex of poly.vertices) {
let distSqr = circle.center.sub(vertex).magSqr();
if (distSqr < closestVertexDistSqr) {
closestVertexDistSqr = distSqr;
closestVertex = vertex;
}
}
let axis = closestVertex.sub(circle.center).norm();
axesToTest.push(axis);
// Test for overlap
for (let axis of axesToTest) {
let circleProj = proj_CIRCLE(circle, axis);
let polyProj = proj_POLY(poly, axis);
let overlap = getLineOverlap(circleProj.min, circleProj.max, polyProj.min, polyProj.max);
if (overlap === 0) {
// guaranteed no intersection
return { intersecting: false };
}
if (Math.abs(overlap) < Math.abs(shortestOverlap)) {
shortestOverlap = overlap;
shortestOverlapAxis = axis;
}
}
return {
intersecting: true,
resolutionVector: shortestOverlapAxis.mul(-shortestOverlap),
// this resolution vector is not satisfactory, I need the shortest resolution with a given direction, which would be an angle passed into this function from the trajectory of the circle
collisionAxis: shortestOverlapAxis.perp(),
// this axis is incorrect, I need the axis to be based on the trajectory of the circle which I would pass into this function as an angle
};
}
function proj_POLY(poly, axis) {
let min = +Infinity;
let max = -Infinity;
for (let vertex of poly.vertices) {
let proj = vertex.projNorm_mag(axis);
min = Math.min(proj, min);
max = Math.max(proj, max);
}
return { min, max };
}
function proj_CIRCLE(circle, axis) {
let proj = circle.center.projNorm_mag(axis);
let min = proj - circle.radius;
let max = proj + circle.radius;
return { min, max };
}
// Check for overlap of two 1 dimensional lines
function getLineOverlap(min1, max1, min2, max2) {
let min = Math.max(min1, min2);
let max = Math.min(max1, max2);
// if negative, no overlap
let result = Math.max(max - min, 0);
// add positive/negative sign depending on direction of overlap
return result * ((min1 < min2) ? 1 : -1);
};
I am assuming the polygon is convex and that the circle is moving along a straight line (at least for some possibly small interval of time) and is not following some curved trajectory. If it is following a curved trajectory, then things get harder. In the case of curved trajectories, the basic ideas could be kept, but the actual point of collision (the point of collision resolution for the circle) might be harder to calculate. Still, I am outlining an idea, which could be extended to that case too. Plus, it could be adopted as a main approach for collision detection between a circle and a convex polygon.
I have not considered all possible cases, which may include special or extreme situations, but at least it gives you a direction to explore.
Transform in your mind the collision between the circle and the polygon into a collision between the center of the circle (a point) and a version of the polygon thickened by the circle's radius r, i.e. (i) each edge of the polygon is offset (translated) outwards by radius r along a vector perpendicular to it and pointing outside of the polygon, (ii) the vertices become circular arcs of radius r, centered at the polygon's vertices and connecting the endpoints of the appropriate neighboring offset edges (basically, put circles of radius r at the vertices of the polygon and take their convex hull).
Now, the current position of the circle's center is C = [ C[0], C[1] ] and it has been moving along a straight line with direction vector V = [ V[0], V[1] ] pointing along the direction of motion (or if you prefer, think of V as the velocity of the circle at the moment when you have detected the collision). Then, there is an axis (or let's say a ray - a directed half-line) defined by the vector equation X = C - t * V, where t >= 0 (this axis is pointing to the past trajectory). Basically, this is the half-line that passes through the center point C and is aligned with/parallel to the vector V. Now, the point of resolution, i.e. the point where you want to move your circle to is the point where the axis X = C - t * V intersects the boundary of the thickened polygon.
So you have to check (1) first the intersection of this axis with the offset edges, and then (2) its intersection with the circular arcs pertaining to the vertices of the original polygon.
Assume the polygon is given by an array of vertices P = [ P[0], P[1], ..., P[N], P[0] ] oriented counterclockwise.
(1) For each edge P[i-1]P[i] of the original polygon that is relevant to your collision, do the following. (The relevant edges could be the two neighboring edges meeting at the vertex based on which the collision was detected, or they could be all edges in the case of the circle moving with very high speed and a collision detected very late, so that the actual collision did not even happen there; I leave this up to you, because you know the details of your situation better.) You have as input data:
C = [ C[0], C[1] ]
V = [ V[0], V[1] ]
P[i-1] = [ P[i-1][0], P[i-1][1] ]
P[i] = [ P[i][0], P[i][1] ]
Do:
Normal = [ P[i-1][1] - P[i][1], P[i][0] - P[i-1][0] ];
Normal = Normal / sqrt((P[i-1][1] - P[i][1])^2 + ( P[i][0] - P[i-1][0] )^2);
// you may have calculated these already
Q_0[0] = P[i-1][0] + r*Normal[0];
Q_0[1] = P[i-1][1] + r*Normal[1];
Q_1[0] = P[i][0]+ r*Normal[0];
Q_1[1] = P[i][1]+ r*Normal[1];
Solve for s, t the linear system of equations (the equations for the intersection):
Q_0[0] + s*(Q_1[0] - Q_0[0]) = C[0] - t*V[0];
Q_0[1] + s*(Q_1[1] - Q_0[1]) = C[1] - t*V[1];
if 0 <= s <= 1 and t >= 0, you are done, and your point of resolution is
R[0] = C[0] - t*V[0];
R[1] = C[1] - t*V[1];
else
(2) For each vertex P[i] relevant to your collision, do the following:
solve for t the quadratic equation (there is an explicit formula)
norm(P[i] - C + t*V )^2 = r^2
or expanded:
(V[0]^2 + V[1]^2) * t^2 + 2 * ( (P[i][0] - C[0])*V[0] + (P[i][1] - C[1])*V[1] ) * t + (P[i][0] - C[0])^2 + (P[i][1] - C[1])^2 - r^2 = 0
or if you prefer in a more code-like way:
a = V[0]^2 + V[1]^2;
b = (P[i][0] - C[0])*V[0] + (P[i][1] - C[1])*V[1];
c = (P[i][0] - C[0])^2 + (P[i][1] - C[1])^2 - r^2;
D = b^2 - a*c;
if D < 0 there is no collision with the vertex
i.e. no intersection between the line X = C - t*V
and the circle of radius r centered at P[i]
else
D = sqrt(D);
t1 = ( - b - D) / a;
t2 = ( - b + D) / a;
where t2 >= t1
Then your point of resolution is
R[0] = C[0] - t2*V[0];
R[1] = C[1] - t2*V[1];
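For concreteness, here is a small JavaScript sketch of case (1), solving that 2x2 linear system with Cramer's rule (the function name and array-based points are illustrative, not taken from the question's code):
// Intersect the offset edge Q0->Q1 with the backward ray X = C - t*V.
// Returns { s, t, R } or null if the ray is parallel to the edge, misses the
// segment (s outside [0, 1]) or points away from it (t < 0).
function edgeRayResolution(Q0, Q1, C, V) {
  const Ex = Q1[0] - Q0[0], Ey = Q1[1] - Q0[1]; // edge direction
  const det = Ex * V[1] - Ey * V[0];            // determinant of the 2x2 system
  if (Math.abs(det) < 1e-9) return null;        // edge parallel to the trajectory
  const Rx = C[0] - Q0[0], Ry = C[1] - Q0[1];
  const s = (Rx * V[1] - Ry * V[0]) / det;      // parameter along the edge
  const t = (Ex * Ry - Ey * Rx) / det;          // parameter back along the trajectory
  if (s < 0 || s > 1 || t < 0) return null;
  return { s: s, t: t, R: [C[0] - t * V[0], C[1] - t * V[1]] }; // R is the resolution point
}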
Circle polygon intercept
If the ball is moving and if you can ensure that the ball always starts outside the polygon then the solution is rather simple.
We will call the ball and its movement the ball line. It starts at the ball's current location and ends at the position the ball will be at in the next frame.
To solve, you find the nearest intercept to the start of the ball line.
There are two types of intercept.
Line segment (ball line) with Line segment (polygon edge)
Line segment (ball line) with circle (circle at each (convex only) polygon corner)
The example code has a Line2 object that contains the two relevant intercept functions. The intercepts are returned as a Vec2 containing two unit distances. The intercept functions are for lines (infinite length), not line segments. If there is no intercept then the return is undefined.
For the line intercepts Line2.unitInterceptsLine(line, result = new Vec2()) the unit values (in result) are the unit distance along each line from the start. Negative values are behind the start.
To take the ball radius into account, each polygon edge is offset by the ball radius along its normal. It is important that the polygon edges have a consistent direction. In the example the normal is to the right of the line and the polygon points are in a clockwise direction.
For the line segment / circle intercepts Line2.unitInterceptsCircle(center, radius, result = new Vec2()) the unit values (in result) are the unit distance along the line where it intercepts the circle. result.x will always contain the closest intercept (assuming you start outside the circle). If there is an intercept there will always be two, even if they are at the same point.
Example
The example contains all that is needed
The objects of interest are ball and poly
ball defines the ball and its movement. There is also code to draw it for the example
poly holds the points of the polygon. It converts the points to offset lines depending on the ball radius, and is optimized so that it only recalculates the lines if the ball radius changes.
The function poly.movingBallIntercept is the function that does all the work. It takes a ball object and an optional result vector.
It returns the position as a Vec2 of the ball if it contacts the polygon.
It does this by finding the smallest unit distance to the offset lines and points (as circles), and uses that unit distance to position the result.
Note that if the ball is inside the polygon the intercepts with the corners are reversed. The function Line2.unitInterceptsCircle does provide the 2 unit distances where the line enters and exits the circle. However, you need to know if you are inside or outside to know which one to use. The example assumes you are outside the polygon.
Instructions
Move the mouse to change the ball's path.
Click to set the ball's starting position.
Math.EPSILON = 1e-6;
Math.isSmall = val => Math.abs(val) < Math.EPSILON;
Math.isUnit = u => !(u < 0 || u > 1);
Math.TAU = Math.PI * 2;
/* export {Vec2, Line2} */ // this should be a module
var temp;
function Vec2(x = 0, y = (temp = x, x === 0 ? (x = 0 , 0) : (x = x.x, temp.y))) {
this.x = x;
this.y = y;
}
Vec2.prototype = {
init(x, y = (temp = x, x = x.x, temp.y)) { this.x = x; this.y = y; return this }, // assumes x is a Vec2 if y is undefined
copy() { return new Vec2(this) },
equal(v) { return (this.x - v.x) === 0 && (this.y - v.y) === 0 },
isUnits() { return Math.isUnit(this.x) && Math.isUnit(this.y) },
add(v, res = this) { res.x = this.x + v.x; res.y = this.y + v.y; return res },
sub(v, res = this) { res.x = this.x - v.x; res.y = this.y - v.y; return res },
scale(val, res = this) { res.x = this.x * val; res.y = this.y * val; return res },
invScale(val, res = this) { res.x = this.x / val; res.y = this.y / val; return res },
dot(v) { return this.x * v.x + this.y * v.y },
uDot(v, div) { return (this.x * v.x + this.y * v.y) / div },
cross(v) { return this.x * v.y - this.y * v.x },
uCross(v, div) { return (this.x * v.y - this.y * v.x) / div },
get length() { return this.lengthSqr ** 0.5 },
set length(l) { this.scale(l / this.length) },
get lengthSqr() { return this.x * this.x + this.y * this.y },
rot90CW(res = this) {
const y = this.x;
res.x = -this.y;
res.y = y;
return res;
},
};
const wV1 = new Vec2(), wV2 = new Vec2(), wV3 = new Vec2(); // pre allocated work vectors used by Line2 functions
function Line2(p1 = new Vec2(), p2 = (temp = p1, p1 = p1.p1 ? p1.p1 : p1, temp.p2 ? temp.p2 : new Vec2())) {
this.p1 = p1;
this.p2 = p2;
}
Line2.prototype = {
init(p1, p2 = (temp = p1, p1 = p1.p1, temp.p2)) { this.p1.init(p1); this.p2.init(p2) },
copy() { return new Line2(this) },
asVec(res = new Vec2()) { return this.p2.sub(this.p1, res) },
unitDistOn(u, res = new Vec2()) { return this.p2.sub(this.p1, res).scale(u).add(this.p1) },
translate(vec, res = this) {
this.p1.add(vec, res.p1);
this.p2.add(vec, res.p2);
return res;
},
translateNormal(amount, res = this) {
this.asVec(wV1).rot90CW().length = -amount;
this.translate(wV1, res);
return res;
},
unitInterceptsLine(line, res = new Vec2()) { // segments
this.asVec(wV1);
line.asVec(wV2);
const c = wV1.cross(wV2);
if (Math.isSmall(c)) { return }
wV3.init(this.p1).sub(line.p1);
res.init(wV1.uCross(wV3, c), wV2.uCross(wV3, c));
return res;
},
unitInterceptsCircle(point, radius, res = new Vec2()) {
this.asVec(wV1);
var b = -2 * this.p1.sub(point, wV2).dot(wV1);
const c = 2 * wV1.lengthSqr;
const d = (b * b - 2 * c * (wV2.lengthSqr - radius * radius)) ** 0.5
if (isNaN(d)) { return }
return res.init((b - d) / c, (b + d) / c);
},
};
/* END of file */ // Vec2 and Line2 module
/* import {vec2, Line2} from "whateverfilename.jsm" */ // Should import vec2 and line2
const POLY_SCALE = 0.5;
const ball = {
pos: new Vec2(-150,0),
delta: new Vec2(10, 10),
radius: 20,
drawPath(ctx) {
ctx.beginPath();
ctx.arc(this.pos.x, this.pos.y, this.radius, 0, Math.TAU);
ctx.stroke();
},
}
const poly = {
bRadius: 0,
lines: [],
set ballRadius(radius) {
const len = this.points.length
this.bRadius = radius;
let i = 0;
while (i < len) {
let line = this.lines[i];
if (line) { line.init(this.points[i], this.points[(i + 1) % len]) }
else { line = new Line2(new Vec2(this.points[i]), new Vec2(this.points[(i + 1) % len])) }
this.lines[i++] = line.translateNormal(radius);
}
this.lines.length = i;
},
points: [
new Vec2(-200, -150).scale(POLY_SCALE),
new Vec2(200, -100).scale(POLY_SCALE),
new Vec2(100, 0).scale(POLY_SCALE),
new Vec2(200, 100).scale(POLY_SCALE),
new Vec2(-200, 75).scale(POLY_SCALE),
new Vec2(-150, -50).scale(POLY_SCALE),
],
drawBallLines(ctx) {
if (this.lines.length) {
const r = this.bRadius;
ctx.beginPath();
for (const l of this.lines) {
ctx.moveTo(l.p1.x, l.p1.y);
ctx.lineTo(l.p2.x, l.p2.y);
}
for (const p of this.points) {
ctx.moveTo(p.x + r, p.y);
ctx.arc(p.x, p.y, r, 0, Math.TAU);
}
ctx.stroke()
}
},
drawPath(ctx) {
ctx.beginPath();
for (const p of this.points) { ctx.lineTo(p.x, p.y) }
ctx.closePath();
ctx.stroke();
},
movingBallIntercept(ball, res = new Vec2()) {
if (this.bRadius !== ball.radius) { this.ballRadius = ball.radius }
var i = 0, nearest = Infinity, nearestGeom, units = new Vec2();
const ballT = new Line2(ball.pos, ball.pos.add(ball.delta, new Vec2()));
for (const p of this.points) {
const res = ballT.unitInterceptsCircle(p, ball.radius, units);
if (res && units.x < nearest && Math.isUnit(units.x)) { // assumes ball started outside poly so only need first point
nearest = units.x;
nearestGeom = ballT;
}
}
for (const line of this.lines) {
const res = line.unitInterceptsLine(ballT, units);
if (res && units.x < nearest && units.isUnits()) { // first unit.x is for unit dist on line
nearest = units.x;
nearestGeom = ballT;
}
}
if (nearestGeom) { return ballT.unitDistOn(nearest, res) }
return;
},
}
const ctx = canvas.getContext("2d");
var w = canvas.width, cw = w / 2;
var h = canvas.height, ch = h / 2
requestAnimationFrame(mainLoop);
// line and point for displaying mouse interaction. point holds the result if any
const line = new Line2(ball.pos, ball.pos.add(ball.delta, new Vec2())), point = new Vec2();
function mainLoop() {
ctx.setTransform(1,0,0,1,0,0); // reset transform
if(w !== innerWidth || h !== innerHeight){
cw = (w = canvas.width = innerWidth) / 2;
ch = (h = canvas.height = innerHeight) / 2;
}else{
ctx.clearRect(0,0,w,h);
}
ctx.setTransform(1,0,0,1,cw,ch); // center to canvas
if (mouse.button) { ball.pos.init(mouse.x - cw, mouse.y - ch) }
line.p2.init(mouse.x - cw, mouse.y - ch);
line.p2.sub(line.p1, ball.delta);
ctx.lineWidth = 1;
ctx.strokeStyle = "#000"
poly.drawPath(ctx)
ctx.strokeStyle = "#F804"
poly.drawBallLines(ctx);
ctx.strokeStyle = "#F00"
ctx.beginPath();
ctx.arc(ball.pos.x, ball.pos.y, ball.radius, 0, Math.TAU);
ctx.moveTo(line.p1.x, line.p1.y);
ctx.lineTo(line.p2.x, line.p2.y);
ctx.stroke();
ctx.strokeStyle = "#00f"
ctx.lineWidth = 2;
ctx.beginPath();
if (poly.movingBallIntercept(ball, point)) {
ctx.arc(point.x, point.y, ball.radius, 0, Math.TAU);
} else {
ctx.arc(line.p2.x, line.p2.y, ball.radius, 0, Math.TAU);
}
ctx.stroke();
requestAnimationFrame(mainLoop);
}
const mouse = {x:0, y:0, button: false};
function mouseEvents(e) {
const bounds = canvas.getBoundingClientRect();
mouse.x = e.pageX - bounds.left - scrollX;
mouse.y = e.pageY - bounds.top - scrollY;
mouse.button = e.type === "mousedown" ? true : e.type === "mouseup" ? false : mouse.button;
}
["mousedown","mouseup","mousemove"].forEach(name => document.addEventListener(name,mouseEvents));
#canvas {
position: absolute;
top: 0px;
left: 0px;
}
<canvas id="canvas"></canvas>
Click to position ball. Move mouse to test trajectory
Vec2 and Line2
To make it easier, a vector library will help. For the example I wrote a quick Vec2 and Line2 object. (Note: only the functions used in the example have been tested. The objects are designed for performance; inexperienced coders should avoid using these objects and opt for a more standard vector and line library.)
It's probably not what you're looking for, but here's a way to do it (if you're not looking for perfect precision):
You can try to approximate the position instead of calculating it.
The way you set up your code has a big advantage: you have the last position of the circle before the collision. Thanks to that, you can just "iterate" through the trajectory and try to find a position that is closest to the intersection position.
I'll assume you already have a function that tells you if a circle is intersecting with the polygon.
Code (C++) :
// What we need :
Vector startPos; // Last position of the circle before the collision
Vector currentPos; // Current, unwanted position
Vector dir; // Direction (a unit vector) of the circle's velocity
float distance = compute_distance(startPos, currentPos); // The distance from startPos to currentPos.
Polygon polygon; // The polygon
Circle circle; // The circle.
unsigned int iterations_count = 10; // The number of iterations that will be done. The higher this number, the more precise the resolution.
// The algorithm :
float currentDistance = distance / 2.f; // We start at the half of the distance.
Circle temp_copy; // A copy of the real circle to "play" with.
for (int i = 0; i < iterations_count; ++i) {
temp_copy.pos = startPos + currentDistance * dir;
if (checkForCollision(temp_copy, polygon)) {
currentDistance -= currentDistance / 2.f; // We go towards startPos by the half of the current distance.
}
else {
currentDistance += currentDistance / 2.f; // We go towards currentPos by the half of the current distance.
}
}
// currentDistance now contains the distance between startPos and the intersection point
// And this is where you should place your circle :
Vector intersectionPoint = startPos + currentDistance * dir;
I haven't tested this code, so I hope there's no big mistake in there. It's also not optimized, and there are a few problems with this approach (the intersection point could end up inside the polygon), so it still needs to be improved, but I think you get the idea.
The other (big, depending on what you're doing) problem with this is that it's an approximation and not a perfect answer.
Hope this helps!
I'm not sure if I understood the scenario correctly, but an efficient, straightforward approach would be to use a square bounding box of your circle first: calculating the intersection of that square with your polygon is extremely fast, much faster than using the circle. Once you detect an intersection of that square and the polygon, you need to decide which precision is most suitable for your scenario. If you need better precision than at this stage, you can go on from here as follows:
From the 90° corner of your square intersection, draw a 45° line until it touches your circle. At the point where it touches, draw a new square, but this time the square is inscribed in the circle. Let it run now until this new square intersects the polygon; once it intersects, you have a guaranteed circle intersection. Depending on the precision you need, you can simply keep refining like this.
I'm not sure what your next problem is from here. If the resolution has to be along the inverse of the circle's trajectory, then it simply must be that reverse direction; I'm really not sure what I'm missing here.
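For what it's worth, here is a rough sketch of that first, coarse pass, using the circle's bounding square against the polygon's axis-aligned bounding box (an even coarser check than the square-vs-polygon test described above; it assumes the circle.center, circle.radius and poly.vertices properties from the question, with x/y fields on the vectors):
// Broad-phase reject: compare the circle's bounding square with the polygon's bounding box.
// Only if this returns true is the full circle/polygon test worth running.
function boundingBoxesMayIntersect(circle, poly) {
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  for (const v of poly.vertices) { // polygon's axis-aligned bounding box
    minX = Math.min(minX, v.x); maxX = Math.max(maxX, v.x);
    minY = Math.min(minY, v.y); maxY = Math.max(maxY, v.y);
  }
  const r = circle.radius; // the circle's bounding square is center +/- r
  return circle.center.x + r >= minX && circle.center.x - r <= maxX &&
         circle.center.y + r >= minY && circle.center.y - r <= maxY;
}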
I'm stumped on what is probably some pretty simple math. I need to get the X and Y coordinates from each tile's referenced ID. The grid below shows the order the IDs are generated in. Each tile has a width and height of 32. Number one's x & y would be equal to (0,0). This is for a game I'm starting to make with canvas using a tileset.
1|2|3
4|5|6
7|8|9
So far for X, I've come up with...
(n % 3) * 32 - 32 // 3 is the width of the source image divided by 32
And for Y...
(n / 3) * 32
This is obviously wrong, but it's the closest I've come, and I don't think I'm too far off from the actual formula.
Here is my actual code so far:
function startGame() {
const canvas = document.getElementById("rpg");
const ctx = canvas.getContext("2d");
const tileSet = new Image();
tileSet.src = "dungeon_tiles.png";
let map = {
cols: 10,
rows: 10,
tsize: 32,
getTileX: function(counter, tiles) {
return ((tiles[counter] - 1) % 64) * 32;
},
getTileY: function(counter, tiles) {
return ((tiles[counter] - 1) / 64) * 32;
}
};
let counter = 0;
tileSet.onload = function() {
for (let c = 0; c < map.cols; c++) {
for (let r = 0; r < map.rows; r++) {
let x = map.getTileX(counter, mapObj.layers[0].data); // mapObj.layers[0].data is the array of values
let y = map.getTileY(counter, mapObj.layers[0].data);
counter += 1;
ctx.drawImage(
tileSet, // image
x, // source x
y, // source y
map.tsize, // source width
map.tsize, // source height
r * map.tsize, // target x
c * map.tsize, // target y
map.tsize, // target width
map.tsize // target height
);
}
}
};
}
If 1 is (0,0) and each tile is 32*32, then finding your horizontal position is a simple 32*(t-1) where t is your tile number. t-1 because your tiles start from 1 instead of 0. Now, you have 3 tiles per row so you want to reset every 3, so the final formula for your x is 32*((t-1)%3).
For the vertical position it's almost the same, but you want to increase your position by 32 only once every 3 tiles, so this is your y: 32*floor((t-1)/3).
floor((t-1)/3) is simply integer division since the numbers are always positive.
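In code, with tilesPerRow being the number of tiles per row in the tileset (3 in the example above, i.e. the source image width divided by 32), this could look like:
const TILE_SIZE = 32;
// source x: column within the tileset, times tile width
function getTileSourceX(t, tilesPerRow) {
  return TILE_SIZE * ((t - 1) % tilesPerRow);
}
// source y: row within the tileset, times tile height
function getTileSourceY(t, tilesPerRow) {
  return TILE_SIZE * Math.floor((t - 1) / tilesPerRow);
}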
If I understand this correctly, you want to get the 1|2|3 values based on x, y, correct? You can do something like this:
((y * total # of rows) + x) + 1
This would convert the 2D x, y index to a single index which is, as you stated, 1|2|3. This formula is based on your example where count starts at 1 and not 0. If you want to convert it to 0 base, just remove the + 1.
If you have the width and height, or more likely the location of the input/character, you can have a GetX(int posX) and a GetY(int posY) to get the x and y based on the position. Once you have converted the position to x, y values, use the formula above.
int GetX(int posX)
{
return (posX / 32);
}
int GetY(int posY)
{
return (posY / 32);
}
int GetIndex(int posX, int posY)
{
return ((GetY(posY) * totalRows) + GetX(posX)) + 1;
}
I have an array of objects. Each object represents a point and has an ID and an array with x, y coordinates, e.g.:
let points = [{id: 1, coords: [1,2]}, {id: 2, coords: [2,3]}]
I also have an array of arrays containing x, y coordinates. This array represents a polygon, e.g.:
let polygon = [[0,0], [0,3], [1,4], [0,2]]
The polygon is closed, so the last point of the array is linked to the first.
I use the following algorithm to check if a point is inside a polygon:
pointInPolygon = function (point, polygon) {
// from https://github.com/substack/point-in-polygon
let x = point.coords[0]
let y = point.coords[1]
let inside = false
for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
let xi = polygon[i][0]
let yi = polygon[i][1]
let xj = polygon[j][0]
let yj = polygon[j][1]
let intersect = ((yi > y) !== (yj > y)) &&
(x < (xj - xi) * (y - yi) / (yj - yi) + xi)
if (intersect) inside = !inside
}
return inside
}
The user draws the polygon with the mouse, which works like this:
http://bl.ocks.org/bycoffe/5575904
Every time the mouse moves (gets new coordinates), we have to add the current mouse location to the polygon, and then we have to loop through all the points and call the pointInPolygon function on each point. I have already throttled the event to improve performance:
handleCurrentMouseLocation = throttle(function (mouseLocation, points, polygon) {
let pointIDsInPolygon = []
polygon.push(mouseLocation)
for (let point of points) {
if (pointInPolygon(point, polygon)) {
pointIDsInPolygon.push(point.id)
}
}
return pointIDsInPolygon
}, 100)
This works fine when the number of points is not that high (<200), but in my current project, we have over 4000 points. Iterating through all these points and calling the pointInPolygon function for each point every 100 ms makes the whole thing very laggy.
I am looking for a quicker way to accomplish this. For example: maybe, instead of triggering this function every 100 ms when the mouse is drawing the polygon, we could look up some of the closest points to the mouse location and store this in a closestPoints array. Then, when the mouse x/y gets higher/lower than a certain value, it would only loop through the points in closestPoints and the points already in the polygon. But I don't know what these closestPoints would be, or if this whole approach even makes sense. But I do feel like the solution is in decreasing the number of points we have to loop through every time.
To be clear, the over 4000 points in my project are fixed; they are not generated dynamically, but always have exactly the same coordinates. In fact, the points represent centroids of polygons, which represent boundaries of municipalities on a map. So it is, for example, possible to calculate the closestPoints for every point in advance (in this case we would calculate this for the points, not the mouse location like in the previous paragraph).
Any computational geometry expert who could help me with this?
If I understand you correctly, a new point logged from the mouse will make the polygon one point larger. So if at a certain moment the polygon is defined by n points (0,1,...,n-1) and a new point p is logged, then the polygon becomes (0,1,...,n-1,p).
So this means that one edge is removed from the polygon and two are added to it instead.
For example, let's say we have 9 points on the polygon, numbered 0 to 8, where point 8 was the last point that was added to it:
The grey line is the edge that closes the polygon.
Now the mouse moves to point 9, which is added to the polygon:
The grey edge is removed from the polygon, and the two green ones are added to it. Now observe the following rule:
Points that are in the triangle formed by the grey and two green edges swap in/out of the polygon when compared to where they were before the change. All other points retain their previous in/out state.
So, if you would retain the status of each point in memory, then you only need to check for each point whether it is within the above mentioned triangle, and if so, you need to toggle the status of that point.
As the test for inclusion in a triangle will take less time than to test the same for a potentially complex polygon, this will lead to a more efficient algorithm.
You can further improve the efficiency if you take the bounding rectangle of the triangle, with corners at (x0, y0), (x1, y0), (x1, y1), (x0, y1). Then you can already skip over points that have an x or y coordinate that is out of range:
Any point outside of the blue box will not change state: if it was inside the polygon before the last point 9 was added, it still is now. Only for points within the box you'll need to do the pointInPolygon test, but on the triangle only, not the whole polygon. If that test returns true, then the state of the tested point must be toggled.
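Put together, the per-mouse-move update could look roughly like this (a sketch only; inTriangle is the triangle test with the bounding-box shortcut, as implemented in the full example further below):
// polygon is (first, ..., prevLast); p is the newly logged mouse point.
// Only points inside the triangle (first, prevLast, p) change state.
function onNewPolygonPoint(first, prevLast, p, points, insideSet) {
  for (const pt of points) {
    if (inTriangle(first, prevLast, p, pt)) {
      // toggle the stored in/out status of this point
      if (!insideSet.delete(pt)) insideSet.add(pt);
    }
  }
}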
Group points in square boxes
To further speed up the process you could divide the plane with a grid into square boxes, where each point belongs to one box, but a box will typically have many points. For determining which points are in the triangle, you could first identify which boxes overlap with the triangle.
For that you don't have to test each box, but can derive the boxes from the coordinates that are on the triangle's edges.
Then only the points in the remaining boxes would need to be tested individually. You could play with the box size and see how it impacts performance.
Here is a working example, implementing those ideas. There are 10000 points, but I have no lagging on my PC:
canvas.width = document.body.clientWidth;
const min = [0, 0],
max = [canvas.width, canvas.height],
points = Array.from(Array(10000), i => {
let x = Math.floor(Math.random() * (max[0]-min[0]) + min[0]);
let y = Math.floor(Math.random() * (max[1]-min[1]) + min[1]);
return [x, y];
}),
polygon = [],
boxSize = Math.ceil((max[0] - min[0]) / 50),
boxes = (function (xBoxes, yBoxes) {
return Array.from(Array(yBoxes), _ =>
Array.from(Array(xBoxes), _ => []));
})(toBox(0, max[0])+1, toBox(1, max[1])+1),
insidePoints = new Set,
ctx = canvas.getContext('2d');
function drawPoint(p) {
ctx.fillRect(p[0], p[1], 1, 1);
}
function drawPolygon(pol) {
ctx.beginPath();
ctx.moveTo(pol[0][0], pol[0][1]);
for (const p of pol) {
ctx.lineTo(p[0], p[1]);
}
ctx.stroke();
}
function segmentMap(a, b, dim, coord) {
// Find the coordinate where ab is intersected by a coaxial line at
// the given coord.
// First some boundary conditions:
const dim2 = 1 - dim;
if (a[dim] === coord) {
if (b[dim] === coord) return [a[dim2], b[dim2]];
return [a[dim2]];
}
if (b[dim] === coord) return [b[dim2]];
// See if there is no intersection:
if ((coord > a[dim]) === (coord > b[dim])) return [];
// There is an intersection point:
const res = (coord - a[dim]) * (b[dim2] - a[dim2]) / (b[dim] - a[dim]) + a[dim2];
return [res];
}
function isLeft(a, b, c) {
// Return true if c lies at the left of ab:
return (b[0] - a[0])*(c[1] - a[1]) - (b[1] - a[1])*(c[0] - a[0]) > 0;
}
function inTriangle(a, b, c, p) {
// First do a bounding box check:
if (p[0] < Math.min(a[0], b[0], c[0]) ||
p[0] > Math.max(a[0], b[0], c[0]) ||
p[1] < Math.min(a[1], b[1], c[1]) ||
p[1] > Math.max(a[1], b[1], c[1])) return false;
// Then check that the point is on the same side of each of the
// three edges:
const x = isLeft(a, b, p),
y = isLeft(b, c, p),
z = isLeft(c, a, p);
return x ? y && z : !y && !z;
}
function toBox(dim, coord) {
return Math.floor((coord - min[dim]) / boxSize);
}
function toWorld(dim, box) {
return box * boxSize + min[dim];
}
function drawBox(boxX, boxY) {
let x = toWorld(0, boxX);
let y = toWorld(1, boxY);
drawPolygon([[x, y], [x + boxSize, y], [x + boxSize, y + boxSize], [x, y + boxSize], [x, y]]);
}
function triangleTest(a, b, c, points, insidePoints) {
const markedBoxes = new Set(), // collection of boxes that overlap with triangle
box = [];
for (let dim = 0; dim < 2; dim++) {
const dim2 = 1-dim,
// Order triangle points by coordinate
[d, e, f] = [a, b, c].sort( (p, q) => p[dim] - q[dim] ),
lastBox = toBox(dim, f[dim]);
for (box[dim] = toBox(dim, d[dim]); box[dim] <= lastBox; box[dim]++) {
// Calculate intersections of the triangle edges with the row/column of boxes
const coord = toWorld(dim, box[dim]),
intersections =
[...new Set([...segmentMap(a, b, dim, coord),
...segmentMap(b, c, dim, coord),
...segmentMap(a, c, dim, coord)])];
if (!intersections.length) continue;
intersections.sort( (a,b) => a - b );
const lastBox2 = toBox(dim2, intersections.slice(-1)[0]);
// Mark all boxes between the two intersection points
for (box[dim2] = toBox(dim2, intersections[0]); box[dim2] <= lastBox2; box[dim2]++) {
markedBoxes.add(boxes[box[1]][box[0]]);
if (box[dim]) {
markedBoxes.add(boxes[box[1]-dim][box[0]-(dim2)]);
}
}
}
}
// Perform the triangle test for each individual point in the marked boxes
for (const box of markedBoxes) {
for (const p of box) {
if (inTriangle(a, b, c, p)) {
// Toggle in/out state of this point
if (insidePoints.delete(p)) {
ctx.fillStyle = '#000000';
} else {
ctx.fillStyle = '#e0e0e0';
insidePoints.add(p);
}
drawPoint(p);
}
}
}
}
// Draw points
points.forEach(drawPoint);
// Distribute points into boxes
for (const p of points) {
let hor = Math.floor((p[0] - min[0]) / boxSize);
let ver = Math.floor((p[1] - min[1]) / boxSize);
boxes[ver][hor].push(p);
}
canvas.addEventListener('mousemove', (e) => {
if (e.buttons !== 1) return;
polygon.push([Math.max(e.offsetX,0), Math.max(e.offsetY,0)]);
ctx.strokeStyle = '#000000';
drawPolygon(polygon);
const len = polygon.length;
if (len > 2) {
triangleTest(polygon[0], polygon[len-2+len%2], polygon[len-1-len%2], points, insidePoints);
}
});
canvas.addEventListener('mousedown', (e) => {
// Start a new polygon
polygon.length = 0;
});
Drag mouse to draw a shape:
<canvas id="canvas"></canvas>
Keep a background image where you perform polygon filling every time you update the polygon.
Then testing any point for interiorness will take constant time independently of the polygon complexity.
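A minimal sketch of that idea, assuming an offscreen canvas of the same size as the drawing area that is redrawn only when the polygon changes:
// Offscreen mask canvas; fill the current polygon into it on every polygon update.
const mask = document.createElement('canvas');
mask.width = 800; mask.height = 600; // match your drawing surface
const maskCtx = mask.getContext('2d');

function updateMask(polygon) {
  maskCtx.clearRect(0, 0, mask.width, mask.height);
  maskCtx.beginPath();
  maskCtx.moveTo(polygon[0][0], polygon[0][1]);
  for (const [x, y] of polygon) maskCtx.lineTo(x, y);
  maskCtx.closePath();
  maskCtx.fill();
}

// Each interiorness test is now a constant-time pixel read,
// independent of the polygon's complexity.
function pointInMask(x, y) {
  return maskCtx.getImageData(x, y, 1, 1).data[3] > 0; // filled pixels have non-zero alpha
}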
So, I'm trying to implement the Hough transform; this is a 1-dimensional version (the "all dimensions reduced to 1" optimization) based on the minor-axis properties.
Enclosed is my code, with a sample image... input and output.
The obvious question is: what am I doing wrong? I've triple-checked my logic and code and they look good, as do my parameters, but obviously I'm missing something.
Notice that the red pixels are supposed to be ellipse centers, while the blue pixels are edges to be removed (they belong to the ellipse that conforms to the mathematical equations).
Also, I'm not interested in using OpenCV / MATLAB / Octave / etc. (nothing against them).
Thank you very much!
var fs = require("fs"),
Canvas = require("canvas"),
Image = Canvas.Image;
var LEAST_REQUIRED_DISTANCE = 40, // LEAST required distance between 2 points , lets say smallest ellipse minor
LEAST_REQUIRED_ELLIPSES = 6, // number of found ellipse
arr_accum = [],
arr_edges = [],
edges_canvas,
xy,
x1y1,
x2y2,
x0,
y0,
a,
alpha,
d,
b,
max_votes,
cos_tau,
sin_tau_sqr,
f,
new_x0,
new_y0,
any_minor_dist,
max_minor,
i,
found_minor_in_accum,
arr_edges_len,
hough_file = 'sample_me2.jpg',
edges_canvas = drawImgToCanvasSync(hough_file); // make sure everything is black and white!
arr_edges = getEdgesArr(edges_canvas);
arr_edges_len = arr_edges.length;
var hough_canvas_img_data = edges_canvas.getContext('2d').getImageData(0, 0, edges_canvas.width,edges_canvas.height);
for(x1y1 = 0; x1y1 < arr_edges_len ; x1y1++){
if (arr_edges[x1y1].x === -1) { continue; }
for(x2y2 = 0 ; x2y2 < arr_edges_len; x2y2++){
if ((arr_edges[x2y2].x === -1) ||
(arr_edges[x2y2].x === arr_edges[x1y1].x && arr_edges[x2y2].y === arr_edges[x1y1].y)) { continue; }
if (distance(arr_edges[x1y1],arr_edges[x2y2]) > LEAST_REQUIRED_DISTANCE){
x0 = (arr_edges[x1y1].x + arr_edges[x2y2].x) / 2;
y0 = (arr_edges[x1y1].y + arr_edges[x2y2].y) / 2;
a = Math.sqrt((arr_edges[x1y1].x - arr_edges[x2y2].x) * (arr_edges[x1y1].x - arr_edges[x2y2].x) + (arr_edges[x1y1].y - arr_edges[x2y2].y) * (arr_edges[x1y1].y - arr_edges[x2y2].y)) / 2;
alpha = Math.atan((arr_edges[x2y2].y - arr_edges[x1y1].y) / (arr_edges[x2y2].x - arr_edges[x1y1].x));
for(xy = 0 ; xy < arr_edges_len; xy++){
if ((arr_edges[xy].x === -1) ||
(arr_edges[xy].x === arr_edges[x2y2].x && arr_edges[xy].y === arr_edges[x2y2].y) ||
(arr_edges[xy].x === arr_edges[x1y1].x && arr_edges[xy].y === arr_edges[x1y1].y)) { continue; }
d = distance({x: x0, y: y0},arr_edges[xy]);
if (d > LEAST_REQUIRED_DISTANCE){
f = distance(arr_edges[xy],arr_edges[x2y2]); // focus
cos_tau = (a * a + d * d - f * f) / (2 * a * d);
sin_tau_sqr = (1 - cos_tau * cos_tau);//Math.sqrt(1 - cos_tau * cos_tau); // getting sin out of cos
b = (a * a * d * d * sin_tau_sqr ) / (a * a - d * d * cos_tau * cos_tau);
b = Math.sqrt(b);
b = parseInt(b.toFixed(0));
d = parseInt(d.toFixed(0));
if (b > 0){
found_minor_in_accum = arr_accum.hasOwnProperty(b);
if (!found_minor_in_accum){
arr_accum[b] = {f: f, cos_tau: cos_tau, sin_tau_sqr: sin_tau_sqr, b: b, d: d, xy: xy, xy_point: JSON.stringify(arr_edges[xy]), x0: x0, y0: y0, accum: 0};
}
else{
arr_accum[b].accum++;
}
}// b
}// if2 - LEAST_REQUIRED_DISTANCE
}// for xy
max_votes = getMaxMinor(arr_accum);
// ONE ellipse has been detected
if (max_votes != null &&
(max_votes.max_votes > LEAST_REQUIRED_ELLIPSES)){
// output ellipse details
new_x0 = parseInt(arr_accum[max_votes.index].x0.toFixed(0)),
new_y0 = parseInt(arr_accum[max_votes.index].y0.toFixed(0));
setPixel(hough_canvas_img_data,new_x0,new_y0,255,0,0,255); // Red centers
// remove the pixels on the detected ellipse from edge pixel array
for (i=0; i < arr_edges.length; i++){
any_minor_dist = distance({x:new_x0, y: new_y0}, arr_edges[i]);
any_minor_dist = parseInt(any_minor_dist.toFixed(0));
max_minor = b;//Math.max(b,arr_accum[max_votes.index].d); // between the max and the min
// coloring in blue the edges we don't need
if (any_minor_dist <= max_minor){
setPixel(hough_canvas_img_data,arr_edges[i].x,arr_edges[i].y,0,0,255,255);
arr_edges[i] = {x: -1, y: -1};
}// if
}// for
}// if - LEAST_REQUIRED_ELLIPSES
// clear accumulated array
arr_accum = [];
}// if1 - LEAST_REQUIRED_DISTANCE
}// for x2y2
}// for xy
edges_canvas.getContext('2d').putImageData(hough_canvas_img_data, 0, 0);
writeCanvasToFile(edges_canvas, __dirname + '/hough.jpg', function() {
});
function getMaxMinor(accum_in){
var max_votes = -1,
max_votes_idx,
i,
accum_len = accum_in.length;
for(i in accum_in){
if (accum_in[i].accum > max_votes){
max_votes = accum_in[i].accum;
max_votes_idx = i;
} // if
}
if (max_votes > 0){
return {max_votes: max_votes, index: max_votes_idx};
}
return null;
}
function distance(point_a,point_b){
return Math.sqrt((point_a.x - point_b.x) * (point_a.x - point_b.x) + (point_a.y - point_b.y) * (point_a.y - point_b.y));
}
function getEdgesArr(canvas_in){
var x,
y,
width = canvas_in.width,
height = canvas_in.height,
pixel,
edges = [],
ctx = canvas_in.getContext('2d'),
img_data = ctx.getImageData(0, 0, width, height);
for(x = 0; x < width; x++){
for(y = 0; y < height; y++){
pixel = getPixel(img_data, x,y);
if (pixel.r !== 0 &&
pixel.g !== 0 &&
pixel.b !== 0 ){
edges.push({x: x, y: y});
}
} // for
}// for
return edges
} // getEdgesArr
function drawImgToCanvasSync(file) {
var data = fs.readFileSync(file)
var canvas = dataToCanvas(data);
return canvas;
}
function dataToCanvas(imagedata) {
img = new Canvas.Image();
img.src = new Buffer(imagedata, 'binary');
var canvas = new Canvas(img.width, img.height);
var ctx = canvas.getContext('2d');
ctx.patternQuality = "best";
ctx.drawImage(img, 0, 0, img.width, img.height,
0, 0, img.width, img.height);
return canvas;
}
function writeCanvasToFile(canvas, file, callback) {
var out = fs.createWriteStream(file)
var stream = canvas.createPNGStream();
stream.on('data', function(chunk) {
out.write(chunk);
});
stream.on('end', function() {
callback();
});
}
function setPixel(imageData, x, y, r, g, b, a) {
index = (x + y * imageData.width) * 4;
imageData.data[index+0] = r;
imageData.data[index+1] = g;
imageData.data[index+2] = b;
imageData.data[index+3] = a;
}
function getPixel(imageData, x, y) {
index = (x + y * imageData.width) * 4;
return {
r: imageData.data[index+0],
g: imageData.data[index+1],
b: imageData.data[index+2],
a: imageData.data[index+3]
}
}
It seems you are trying to implement the algorithm of Yonghong Xie and Qiang Ji (2002), "A New Efficient Ellipse Detection Method", p. 957.
Ellipse removal suffers from several bugs
In your code, you perform the removal of the found ellipse (step 12 of the original paper's algorithm) by resetting coordinates to {-1, -1}.
You need to add:
`if (arr_edges[x1y1].x === -1) break;`
at the end of the x2y2 block. Otherwise, the loop will consider -1, -1 as a white point.
More importantly, your algorithm consists in erasing every point whose distance to the center is smaller than b. b is supposed to be the minor axis half-length (per the original algorithm). But in your code, the variable b is actually the latest (and not the most frequent) half-length, and you erase points with a distance lower than b (instead of greater, since it's the minor axis). In other words, you clear all points inside a circle whose radius is the latest computed axis.
Your sample image can actually be processed by clearing all points inside a circle with a distance lower than the selected major axis, using:
max_minor = arr_accum[max_votes.index].d;
Indeed, you don't have overlapping ellipses and they are spread enough. Please consider a better algorithm for overlapping or closer ellipses.
The algorithm mixes major and minor axes
Step 6 of the paper reads:
For each third pixel (x, y), if the distance between (x, y) and (x0,
y0) is greater than the required least distance for a pair of pixels
to be considered then carry out the following steps from (7) to (9).
This clearly is an approximation. If you do so, you will end up considering points further than the minor axis half-length, and eventually on the major axis (with the axes swapped). You should make sure the distance between the considered point and the tested ellipse center is smaller than the currently considered major axis half-length (the condition should be d <= a). This will help with the ellipse-erasing part of the algorithm.
Also, if you compare with the least distance for a pair of pixels, as per the original paper, 40 is too large for the smaller ellipse in your picture. The comment in your code is wrong: the value should be at most half of the smallest ellipse's minor axis half-length.
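In terms of the code above, that amounts to tightening the test on the third pixel's distance d, something like this (a sketch of the change, not a drop-in diff):
// inside the xy loop, after d has been computed:
if (d > LEAST_REQUIRED_DISTANCE && d <= a){ // keep only third pixels no further from the center than the major half-axis
// ... compute f, cos_tau, sin_tau_sqr, b and vote as before ...
}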
LEAST_REQUIRED_ELLIPSES is too small
This parameter is also misnamed. It is the minimum number of votes an ellipse should get to be considered valid. Each vote corresponds to a pixel. So a value of 6 means that only 6+2 pixels make an ellipse. Since pixel coordinates are integers and you have more than one ellipse in your picture, the algorithm might detect ellipses that are not actually there, and eventually clear edges (especially when combined with the buggy ellipse-erasing algorithm). Based on tests, a value of 100 will find four of the five ellipses in your picture, while 80 will find them all. Smaller values will not find the proper centers of the ellipses.
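For example, for the sample picture:
var LEAST_REQUIRED_ELLIPSES = 80; // minimum number of votes for a candidate minor axis to be accepted as an ellipse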
Sample image is not black & white
Despite the comment, the sample image is not exactly black and white. You should convert it or apply some threshold (e.g. RGB values greater than 10 instead of simply different from 0).
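For instance, the test in getEdgesArr could use such a threshold instead of the strict non-zero check (10 being the example value mentioned above):
if (pixel.r > 10 &&
pixel.g > 10 &&
pixel.b > 10 ){
edges.push({x: x, y: y});
}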
Diff of minimum changes to make it work is available here:
https://gist.github.com/pguyot/26149fec29ffa47f0cfb/revisions
Finally, please note that parseInt(x.toFixed(0)) could be rewritten as Math.floor(x), and you probably don't want to truncate all floats like this, but rather round them, and only where needed: the algorithm that erases the ellipse from the picture would benefit from non-truncated values for the center coordinates. This code could definitely be improved further; for example, it currently computes the distance between points x1y1 and x2y2 twice.