How to make a moving graph in Javascript P5 - javascript

I have struggled to make a moving graph in P5. My idea is to drive the graph by the length of an array: if the length of the array increases the graph will move up, and if it decreases it will do the opposite.
My code can be seen at this link, where I have left a comment marking where the graph is drawn.

This example from #solub addresses exactly this issue and presents a decent solution to the problem.
const W = 680, H = 200; // dimensions of canvas
const time = 400; // number of x tick values
const step = W/time; // time step
let data = []; // to store number of infected people
let count = 0; // steps counter
let posx, fy, c, infected, colors, l, f;
function setup() {
createCanvas(W, H);
fill(255, 30, 70, 90);
// array containing the x positions of the line graph, scaled to fit the canvas
posx = Float32Array.from({ length: time }, (_, i) => map(i, 0, time, 0, W));
// function to map the number of infected people to a specific height (here the height of the canvas)
fy = _ => map(_, 3, 0, H, 10);
// colors based on height, stored in an array (d3.range and d3.interpolateWarm require the d3 library to be loaded)
colors = d3.range(H).map(i => d3.interpolateWarm(norm(i, 0, H)));
}
function draw() {
background('#fff');
// length of data list -1 (to access last item of data list)
l = data.length - 1;
// frameCount
f = frameCount;
// number of infected people (noised gaussian curved)
c = sin(f*0.008);
infected = (exp(-c*c/2.0) / sqrt(TWO_PI) / 0.2) + map(noise(f*0.02), 0, 1, -1, 1);
// store that number at each step (the x-axis tick values)
if (f&step) {
data.push(infected);
count += 1;
}
// iterate over data list to rebuild curve at each frame
for (let i = 0; i < l; i++) {
y1 = fy(data[i]);
y2 = fy(data[i+1]);
x1 = posx[i];
x2 = posx[i+1];
// vertical lines (x-values)
strokeWeight(0.2);
line(x1, H, x1, y1 + 2);
// polyline
strokeWeight(2);
stroke(colors[Math.floor(map(y1, H, 10, H, 0))] );
line(x1, y1, x2, y2);
}
// draw ellipse at last data point
if (count > 1) {
ellipse(posx[l], fy(data[l]), 4, 4);
}
// reset data and count
if (count%time===0) {
data = [];
count = 0;
}
}
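If the goal is literally to drive the curve from the length of an array, as the question describes, rather than from a noise function, a minimal sketch could look like the one below. The array items and the random push/pop that grows and shrinks it are made-up placeholders for the real logic:
const W = 680, H = 200;
let items = [];    // hypothetical array whose length drives the graph
let history = [];  // one sample of items.length per frame

function setup() {
  createCanvas(W, H);
}

function draw() {
  background(255);
  // placeholder: randomly grow or shrink the array (replace with your own logic)
  if (random() < 0.5) items.push(0); else items.pop();
  history.push(items.length);
  if (history.length > W) history.shift(); // keep one sample per pixel column
  // a larger length is drawn higher on the canvas
  stroke(255, 30, 70);
  noFill();
  beginShape();
  for (let i = 0; i < history.length; i++) {
    vertex(i, map(history[i], 0, 50, H - 10, 10)); // assumes the length stays below ~50
  }
  endShape();
}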

Related

Improving visualizer look

I am not happy with how this code draws this music visualizer using canvas and getByteFrequencyData. https://share.getcloudapp.com/Kou7AJb1
It seems the bars are too large, and I think it's because the FFT (Fast Fourier Transform) array contains a wide spectrum of data, but in my code I am generating n bars based on the width of the canvas.
Then, after having n bars, I am mapping the FFT to the same index of the bar, leaving out lots of useful information.
function convertRange(value: any, r1: any, r2: any) {
return ((value - r1[0]) * (r2[1] - r2[0])) / (r1[1] - r1[0]) + r2[0];
}
// line up and down
const drawVisualizer2 = ({ canvas, frameData, background }: any) => {
const ctx = canvas.getContext("2d");
ctx.clearRect(0, 0, canvas.width, canvas.height);
// TODO: Improve?
const bars = Math.round(canvas.width) / 15 - 1;
const max_of_array = Math.max.apply(Math, frameData.fft);
for (let i = 0; i < bars; i++) {
const height = convertRange(frameData.fft[i], [0, max_of_array], [0, canvas.height / 2 - 20]);
const centerY = canvas.height / 2;
// draw the bar
ctx.strokeStyle = background ? background.colors[0] : "#ffffff";
ctx.lineWidth = 10;
ctx.lineCap = "round";
ctx.beginPath();
ctx.moveTo((i + 1) * 15, centerY);
ctx.lineTo((i + 1) * 15, centerY + height);
ctx.stroke();
ctx.beginPath();
ctx.moveTo((i + 1) * 15, centerY);
ctx.lineTo((i + 1) * 15, centerY - height);
ctx.stroke();
}
};
export default drawVisualizer2;
What I think needs to be done is to average out the FFT based on the number of bars in the loop. If that makes sense, what is a practical approach, code-wise, to achieve that?
I hope this makes sense, happy to clarify if needed.
I assume that frameData.fft is the Uint8Array containing the actual frequency data returned by the AnalyserNode.getByteFrequencyData() method.
Your assumption is right - the number of bars of course doesn't match the number of items stored in the array, and with a loop like
for (let i = 0; i < bars; i++) {
...
frameData.fft[i]
...
}
you're just using the first few values, from zero up to the number of bars, and ultimately skipping the entire rest of the array.
The fix is quite simple, though:
Instead of grabbing values from the array in intervals of 1, the interval must be the number of elements in the array divided by the number of bars. This number is then multiplied by the loop variable i and rounded, as the division might result in a decimal number and the array's elements are at integer positions.
Here's an example:
let frameData = {
fft: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
};
let bars = 5;
let steps = frameData.fft.length / bars;
for (let i = 0; i < bars; i++) {
console.log(frameData.fft[Math.round(i * steps)]);
}
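Since the question also asks about averaging, a variant that averages all the FFT bins belonging to each bar, instead of sampling a single bin, could look like this (a sketch along the same lines, not a drop-in replacement for drawVisualizer2):
let frameData = {
  fft: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
};
let bars = 5;
let binsPerBar = frameData.fft.length / bars;
for (let i = 0; i < bars; i++) {
  const start = Math.round(i * binsPerBar);
  const end = Math.round((i + 1) * binsPerBar);
  let sum = 0;
  for (let j = start; j < end; j++) sum += frameData.fft[j];
  // use this average as the bar height input instead of a single bin
  console.log(sum / (end - start));
}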

Finding the closest indexed color value to the current color in javascript / p5.js

I have an array of "indexed" RGBA color values, and for any given image that I load, I want to be able to run through all the color values of the loaded pixels, and match them to the closest of my indexed color values. So, if the pixel in the image had a color of, say, RGBA(0,0,10,1), and the color RGBA(0,0,0,1) was my closest indexed value, it would adjust the loaded pixel to RGBA(0,0,0,1).
I know PHP has a function imagecolorclosest
int imagecolorclosest( $image, $red, $green, $blue )
Does javascript / p5.js / processing have anything similar? What's the easiest way to compare one color to another. Currently I can read the pixels of the image with this code (using P5.js):
let img;
function preload() {
img = loadImage('assets/00.jpg');
}
function setup() {
image(img, 0, 0, width, height);
let d = pixelDensity();
let fullImage = 4 * (width * d) * (height * d);
loadPixels();
for (let i = 0; i < fullImage; i+=4) {
let curR = pixels[i];
let curG = pixels[i+1];
let curB = pixels[i+2];
let curA = pixels[i+3];
}
updatePixels();
}
Each color consists of 3 color channels. Imagine the color as a point in a 3-dimensional space, where each color channel (red, green, blue) is associated with one dimension. You have to find the closest color (point) by the Euclidean distance. The color with the lowest distance is the "closest" color.
In p5.js you can use p5.Vector for vector arithmetic. The Euclidean distance between two points can be calculated by .dist(). So the distance between two points, respectively "colors", a and b can be expressed by:
let a = createVector(r1, g1, b1);
let b = createVector(r2, g2, b2);
let distance = a.dist(b);
Use the expression somehow like this:
colorTable = [[r0, g0, b0], [r1, g1, b1] ... ];
function closestColor(r, g, b) {
let a = createVector(r, g, b);
let minDistance;
let minI;
for (let i = 0; i < colorTable.length; ++i) {
let b = createVector(...colorTable[i]);
let distance = a.dist(b);
if (minDistance === undefined || distance < minDistance) {
minI = i; minDistance = distance;
}
}
return minI;
}
function setup() {
image(img, 0, 0, width, height);
let d = pixelDensity();
let fullImage = 4 * (width * d) * (height * d);
loadPixels();
for (let i = 0; i < fullImage; i+=4) {
let closestI = closestColor(pixels[i], pixels[i+1], pixels[i+2]);
pixels[i] = colorTable[closestI][0];
pixels[i+1] = colorTable[closestI][1];
pixels[i+2] = colorTable[closestI][2];
}
updatePixels();
}
If I understand you correctly, you want to keep the colors of an image within a certain limited palette. If so, you should apply this function to each pixel of your image. It will give you the closest color value to a supplied pixel from a set of limited colors (indexedColors).
// example color palette (no alpha)
indexedColors = [
[0, 10, 0],
[0, 50, 0]
];
// Takes pixel with no alpha value
function closestIndexedColor(color) {
var closest = {};
var dist;
for (var i = 0; i < indexedColors.length; i++) {
dist = Math.pow(indexedColors[i][0] - color[0], 2);
dist += Math.pow(indexedColors[i][1] - color[1], 2);
dist += Math.pow(indexedColors[i][2] - color[2], 2);
dist = Math.sqrt(dist);
if (closest.dist === undefined || closest.dist > dist) {
closest.dist = dist;
closest.color = indexedColors[i];
}
}
// returns closest match as RGB array without alpha
return closest.color;
}
// example usage
closestIndexedColor([0, 20, 0]); // returns [0, 10, 0]
It works the way the PHP function you mentioned does. If you treat the color values as 3D coordinate points, then the closest colors will be the ones with the smallest 3D "distance" between them. This 3D distance is calculated using the distance formula:
distance = sqrt((r1 - r2)^2 + (g1 - g2)^2 + (b1 - b2)^2)

Algorithm for reading image as lines (then get a result of them)?

Is there an algorithm to get the line strokes of an image (ignoring curves, circles, etc.; everything will be treated as lines, but still similar to vectors) from its pixels, and then get a result of them, like an array?
This is basically how reading would work:
In this way, each row of pixels would be read as one horizontal line (and I'd like to handle vertical lines as well); but if there's a round, fat line that takes up more than one row, it should be considered a single line whose line width is the number of pixel rows it spans.
For instance, let's suppose we have an array containing rows of pixels in the (red, green, blue, alpha) format (JavaScript):
/* formatted ImageData().data */
[
new Uint8Array([
/* first pixel */
255, 0, 0, 255,
/* second pixel */
255, 0, 0, 255
]),
new Uint8Array([
/* first pixel */
0, 0, 0, 0,
/* second pixel */
0, 0, 0, 0
])
]
This would be the image data of a 2x2 px image with a straight horizontal red line. So, from this array, I want to get an array containing data of lines, like:
[
// x, y: start point
// tx, ty: end point
// w: line width
// the straight horizontal red line of 1 pixel
{ x: 0, y: 0, tx: 2, ty: 0, w: 1, rgba: [255, 0, 0, 255] }
]
Note: I'd like to handle anti-aliasing.
This is my function to read pixels in the above format:
var getImagePixels = function(img){
var canvas = document.createElement('canvas'),
ctx = canvas.getContext('2d');
canvas.width = img.width;
canvas.height = img.height;
ctx.drawImage(img, 0, 0);
var imgData = ctx.getImageData(0, 0, img.width, img.height).data;
var nImgData = [];
var offWidth = img.width * 4;
var dataRow = (nImgData[0] = new Uint8Array(offWidth));
for (var b = 0, i = 0; b++ < img.height;) {
nImgData[b] = new Uint8Array(offWidth);
for (var arrI = 0, len = i + offWidth; i < len; i += 4, arrI += 4) {
nImgData[b][arrI] = imgData[i];
nImgData[b][arrI + 1] = imgData[i + 1];
nImgData[b][arrI + 2] = imgData[i + 2];
nImgData[b][arrI + 3] = imgData[i + 3];
}
}
return nImgData;
};
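For illustration only (ignoring anti-aliasing, vertical lines, and fat lines spanning several rows), a naive run-length scan over the row format above could emit records in the desired shape. rowsToLines is a hypothetical helper, not part of the question's code:
// nImgData: array of Uint8Array rows as returned by getImagePixels above
function rowsToLines(nImgData) {
  var lines = [];
  for (var y = 0; y < nImgData.length; y++) {
    var row = nImgData[y];
    var width = row.length / 4;
    var x = 0;
    while (x < width) {
      var r = row[x * 4], g = row[x * 4 + 1], b = row[x * 4 + 2], a = row[x * 4 + 3];
      if (a === 0) { x++; continue; } // skip fully transparent pixels
      var start = x;
      // extend the run while the color stays exactly the same
      while (x < width && row[x * 4] === r && row[x * 4 + 1] === g &&
             row[x * 4 + 2] === b && row[x * 4 + 3] === a) {
        x++;
      }
      lines.push({ x: start, y: y, tx: x, ty: y, w: 1, rgba: [r, g, b, a] });
    }
  }
  return lines;
}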
You can find all lines using the Hough transform. It will find only lines, not curves or circles. You may need to run edge detection before finding lines. Here is an example:
Here you can find opencv example of implementation.
I had a similar image-processing question once; you can read it here. But you can basically take the same idea for anything you want to do with an image.
The basic data can be seen as follows:
var img = new Image,
w = canvas.width,
h = canvas.height,
ctx = canvas.getContext('2d');
img.onload = imgprocess;
img.src = 'some.png';
function imgprocess() {
ctx.drawImage(this, 0, 0, w, h);
var idata = ctx.getImageData(0, 0, w, h),
buffer = idata.data,
buffer32 = new Uint32Array(buffer.buffer),
x, y,
x1 = w, y1 = h, x2 = 0, y2 = 0;
//You now have properties of the image from the canvas data. You will need to write your own loops to detect which pixels etc... See the example in the link for some ideas.
}
UPDATE:
Working example of finding color data:
var canvas = document.getElementById('canvas');
var canvasWidth = canvas.width;
var canvasHeight = canvas.height;
var ctx = canvas.getContext('2d');
var imageData = ctx.getImageData(0, 0, canvasWidth, canvasHeight);
var buf = new ArrayBuffer(imageData.data.length);
var buf8 = new Uint8ClampedArray(buf);
var data = new Uint32Array(buf);
for (var y = 0; y < canvasHeight; ++y) {
for (var x = 0; x < canvasWidth; ++x) {
var value = x * y & 0xff;
data[y * canvasWidth + x] =
(255 << 24) | // alpha
(value << 16) | // blue
(value << 8) | // green
value; // red
}
}
More examples can be seen here
The author above outlines pixel and line data:
The ImageData.data property referenced by the variable data is a one-dimensional array of integers, where each element is in the range 0..255. ImageData.data is arranged in a repeating sequence so that each element refers to an individual channel. That repeating sequence is as follows:
data[0] = red channel of first pixel on first row
data[1] = green channel of first pixel on first row
data[2] = blue channel of first pixel on first row
data[3] = alpha channel of first pixel on first row
data[4] = red channel of second pixel on first row
data[5] = green channel of second pixel on first row
data[6] = blue channel of second pixel on first row
data[7] = alpha channel of second pixel on first row
data[8] = red channel of third pixel on first row
data[9] = green channel of third pixel on first row
data[10] = blue channel of third pixel on first row
data[11] = alpha channel of third pixel on first row
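In other words, the four channels of the pixel at column x and row y can be addressed with a small helper like this (an illustration of the layout above, not code from the linked examples):
// data is ImageData.data, width is ImageData.width
function getPixelAt(data, width, x, y) {
  var index = (y * width + x) * 4;
  return {
    r: data[index],
    g: data[index + 1],
    b: data[index + 2],
    a: data[index + 3]
  };
}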

sampling an image a tile at a time using canvas, getImageData and a Web Worker

I am attempting to build a simple HTML5 canvas based image processor that takes an image and generates a tiled version of it, with each tile being the average color of the underlying image area.
This is easy enough to do outside the context of a Web Worker, but I'd like to use a worker so as not to block the UI thread. The Uint8ClampedArray form the data takes is giving me a headache with regard to how to process it tile by tile.
Below is a plunk demonstrating what I've done so far and how it's not working.
http://plnkr.co/edit/AiHmLM1lyJGztk8GHrso?p=preview
The relevant code is in worker.js
Here it is:
onmessage = function (e) {
var i,
j = 0,
k = 0,
data = e.data,
imageData = data.imageData,
tileWidth = Math.floor(data.tileWidth),
tileHeight = Math.floor(data.tileHeight),
width = imageData.width,
height = imageData.height,
tile = [],
len = imageData.data.length,
offset,
processedData = [],
tempData = [],
timesLooped = 0,
tileIncremented = 1;
function sampleTileData(tileData) {
var blockSize = 20, // only visit every x pixels
rgb = {r:0,g:0,b:0},
i = -4,
count = 0,
length = tileData.length;
while ((i += blockSize * 4) < length) {
if (tileData[i].r !== 0 && tileData[i].g !== 0 && tileData[i].b !== 0) {
++count;
rgb.r += tileData[i].r;
rgb.g += tileData[i].g;
rgb.b += tileData[i].b;
}
}
// ~~ used to floor values
rgb.r = ~~(rgb.r/count);
rgb.g = ~~(rgb.g/count);
rgb.b = ~~(rgb.b/count);
processedData.push(rgb);
}
top:
for (; j <= len; j += (width * 4) - (tileWidth * 4), timesLooped++) {
if (k === (tileWidth * 4) * tileHeight) {
k = 0;
offset = timesLooped - 1 < tileHeight ? 4 : 0;
j = ((tileWidth * 4) * tileIncremented) - offset;
timesLooped = 0;
tileIncremented++;
sampleTileData(tempData);
tempData = [];
//console.log('continue "top" loop for new tile');
continue top;
}
for (i = 0; i < tileWidth * 4; i++) {
k++;
tempData.push({r: imageData.data[j+i], g: imageData.data[j+i+1], b: imageData.data[j+i+2], a: imageData.data[j+i+3]});
}
//console.log('continue "top" loop for new row per tile');
}
postMessage(processedData);
};
I'm sure there's a better way of accomplishing what I'm trying to do starting at the labeled for loop. So any alternative methods or suggestions would be much appreciated.
Update:
I've taken a different approach to solving this:
http://jsfiddle.net/TunMn/425/
Close, but no.
I know what the problem is but I have no idea how to go about amending it. Again, any help would be much appreciated.
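As a side note, the index arithmetic that makes a flat Uint8ClampedArray tractable tile by tile is roughly the following. This is a hedged sketch of averaging a single tile (averageTile is a hypothetical helper, not a fix for the plunk above):
// average the RGB of one tile whose top-left corner is (tx, ty)
function averageTile(imageData, tx, ty, tileWidth, tileHeight) {
  var data = imageData.data, width = imageData.width;
  var r = 0, g = 0, b = 0, count = 0;
  for (var y = ty; y < ty + tileHeight; y++) {
    for (var x = tx; x < tx + tileWidth; x++) {
      var i = (y * width + x) * 4; // 4 bytes (RGBA) per pixel, rows are `width` pixels long
      r += data[i];
      g += data[i + 1];
      b += data[i + 2];
      count++;
    }
  }
  return { r: ~~(r / count), g: ~~(g / count), b: ~~(b / count) };
}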
Approach 1: Manually calculating average per tile
Here is one approach you can try:
There is only a need for reading; the update can be done later using HW acceleration
Use async calls for every row (or tile if the image is very wide)
This gives an accurate result but is slower and is subject to CORS restrictions (getImageData needs to read the pixels back).
Example
You can see the original image for a blink below; this shows that the asynchronous approach works, as it allows the UI to update while the tiles are processed in chunks.
window.onload = function() {
var img = document.querySelector("img"),
canvas = document.querySelector("canvas"),
ctx = canvas.getContext("2d"),
w = img.naturalWidth, h = img.naturalHeight,
// store average tile colors here:
tileColors = [];
// draw in image
canvas.width = w; canvas.height = h;
ctx.drawImage(img, 0, 0);
// MAIN CALL: calculate, when done the callback function will be invoked
avgTiles(function() {console.log("done!")});
// The tiling function
function avgTiles(callback) {
var cols = 8, // number of tiles per row (make sure it produces an integer value
rows = 8, // for tw/th below)
tw = (w / cols)|0, // pixel width/height of each tile
th = (h / rows)|0,
x = 0, y = 0;
(function process() { // for async processing
var data, len, count, r, g, b, i;
while(x < cols) { // get next tile on x axis
r = g = b = i = 0;
data = ctx.getImageData(x * tw, y * th, tw, th).data; // single tile
len = data.length;
count = len / 4;
while(i < len) { // calc this tile's color average
r += data[i++]; // add values for each component
g += data[i++];
b += data[i++];
i++; // skip the alpha channel
}
// store average color to array, no need to write back at this point
tileColors.push({
r: (r / count)|0,
g: (g / count)|0,
b: (b / count)|0
});
x++; // next tile
}
y++; // next row, but do an async break below:
if (y < rows) {
x = 0;
setTimeout(process, 9); // call it async to allow browser UI to update
}
else {
// draw tiles with average colors, fillRect is faster than setting each pixel:
for(y = 0; y < rows; y++) {
for(x = 0; x < cols; x++) {
var col = tileColors[y * cols + x]; // get stored color
ctx.fillStyle = "rgb(" + col.r + "," + col.g + "," + col.b + ")";
ctx.fillRect(x * tw, y * th, tw, th);
}
}
// we're done, invoke callback
callback()
}
})(); // to self-invoke process()
}
};
<canvas></canvas>
<img src="http://i.imgur.com/X7ZrRkn.png" crossOrigin="anonymous">
Approach 2: Letting the browser do the job
We can also let the browser do the whole job exploiting interpolation and sampling.
When the browser scales an image down it will calculate the average for each new pixel. If we then turn off linear interpolation when we scale up we will get each of those average pixels as square blocks:
Scale down image at a ratio producing number of tiles as number of pixels
Turn off image smoothing
Scale the small image back up to the desired size
This will be many times faster than the first approach, and you will be able to use CORS-restricted images. Just note that it may not be as accurate as the first approach; however, it is possible to increase the accuracy by scaling down the image in several steps, each half the size.
Example
window.onload = function() {
var img = document.querySelector("img"),
canvas = document.querySelector("canvas"),
ctx = canvas.getContext("2d"),
w = img.naturalWidth, h = img.naturalHeight;
// draw in image
canvas.width = w; canvas.height = h;
// scale down image so number of pixels represent number of tiles,
// here use two steps so we get a more accurate result:
ctx.drawImage(img, 0, 0, w, h, 0, 0, w*0.5, h*0.5); // 50%
ctx.drawImage(canvas, 0, 0, w*0.5, h*0.5, 0, 0, 8, 8); // 8 tiles
// turn off image-smoothing
ctx.imageSmoothingEnabled =
ctx.msImageSmoothingEnabled =
ctx.mozImageSmoothingEnabled =
ctx.webkitImageSmoothingEnabled = false;
// scale image back up
ctx.drawImage(canvas, 0, 0, 8, 8, 0, 0, w, h);
};
<canvas></canvas>
<img src="http://i.imgur.com/X7ZrRkn.png" crossOrigin="anonymous">

hough transform - javascript - node.js

So, I'm trying to implement the Hough transform. This is a 1-dimensional version (an optimization that reduces all dimensions to one), based on the minor-axis properties.
Enclosed is my code, with a sample image... input and output.
The obvious question is: what am I doing wrong? I've triple-checked my logic and code, and my parameters look fine, but obviously I'm missing something.
Notice that the red pixels are supposed to be ellipse centers, while the blue pixels are edges to be removed (they belong to the ellipse that conforms to the mathematical equations).
Also, I'm not interested in using OpenCV / MATLAB / Octave / etc. (nothing against them).
Thank you very much!
var fs = require("fs"),
Canvas = require("canvas"),
Image = Canvas.Image;
var LEAST_REQUIRED_DISTANCE = 40, // LEAST required distance between 2 points , lets say smallest ellipse minor
LEAST_REQUIRED_ELLIPSES = 6, // number of found ellipse
arr_accum = [],
arr_edges = [],
edges_canvas,
xy,
x1y1,
x2y2,
x0,
y0,
a,
alpha,
d,
b,
max_votes,
cos_tau,
sin_tau_sqr,
f,
new_x0,
new_y0,
any_minor_dist,
max_minor,
i,
found_minor_in_accum,
arr_edges_len,
hough_file = 'sample_me2.jpg',
edges_canvas = drawImgToCanvasSync(hough_file); // make sure everything is black and white!
arr_edges = getEdgesArr(edges_canvas);
arr_edges_len = arr_edges.length;
var hough_canvas_img_data = edges_canvas.getContext('2d').getImageData(0, 0, edges_canvas.width,edges_canvas.height);
for(x1y1 = 0; x1y1 < arr_edges_len ; x1y1++){
if (arr_edges[x1y1].x === -1) { continue; }
for(x2y2 = 0 ; x2y2 < arr_edges_len; x2y2++){
if ((arr_edges[x2y2].x === -1) ||
(arr_edges[x2y2].x === arr_edges[x1y1].x && arr_edges[x2y2].y === arr_edges[x1y1].y)) { continue; }
if (distance(arr_edges[x1y1],arr_edges[x2y2]) > LEAST_REQUIRED_DISTANCE){
x0 = (arr_edges[x1y1].x + arr_edges[x2y2].x) / 2;
y0 = (arr_edges[x1y1].y + arr_edges[x2y2].y) / 2;
a = Math.sqrt((arr_edges[x1y1].x - arr_edges[x2y2].x) * (arr_edges[x1y1].x - arr_edges[x2y2].x) + (arr_edges[x1y1].y - arr_edges[x2y2].y) * (arr_edges[x1y1].y - arr_edges[x2y2].y)) / 2;
alpha = Math.atan((arr_edges[x2y2].y - arr_edges[x1y1].y) / (arr_edges[x2y2].x - arr_edges[x1y1].x));
for(xy = 0 ; xy < arr_edges_len; xy++){
if ((arr_edges[xy].x === -1) ||
(arr_edges[xy].x === arr_edges[x2y2].x && arr_edges[xy].y === arr_edges[x2y2].y) ||
(arr_edges[xy].x === arr_edges[x1y1].x && arr_edges[xy].y === arr_edges[x1y1].y)) { continue; }
d = distance({x: x0, y: y0},arr_edges[xy]);
if (d > LEAST_REQUIRED_DISTANCE){
f = distance(arr_edges[xy],arr_edges[x2y2]); // focus
cos_tau = (a * a + d * d - f * f) / (2 * a * d);
sin_tau_sqr = (1 - cos_tau * cos_tau);//Math.sqrt(1 - cos_tau * cos_tau); // getting sin out of cos
b = (a * a * d * d * sin_tau_sqr ) / (a * a - d * d * cos_tau * cos_tau);
b = Math.sqrt(b);
b = parseInt(b.toFixed(0));
d = parseInt(d.toFixed(0));
if (b > 0){
found_minor_in_accum = arr_accum.hasOwnProperty(b);
if (!found_minor_in_accum){
arr_accum[b] = {f: f, cos_tau: cos_tau, sin_tau_sqr: sin_tau_sqr, b: b, d: d, xy: xy, xy_point: JSON.stringify(arr_edges[xy]), x0: x0, y0: y0, accum: 0};
}
else{
arr_accum[b].accum++;
}
}// b
}// if2 - LEAST_REQUIRED_DISTANCE
}// for xy
max_votes = getMaxMinor(arr_accum);
// ONE ellipse has been detected
if (max_votes != null &&
(max_votes.max_votes > LEAST_REQUIRED_ELLIPSES)){
// output ellipse details
new_x0 = parseInt(arr_accum[max_votes.index].x0.toFixed(0)),
new_y0 = parseInt(arr_accum[max_votes.index].y0.toFixed(0));
setPixel(hough_canvas_img_data,new_x0,new_y0,255,0,0,255); // Red centers
// remove the pixels on the detected ellipse from edge pixel array
for (i=0; i < arr_edges.length; i++){
any_minor_dist = distance({x:new_x0, y: new_y0}, arr_edges[i]);
any_minor_dist = parseInt(any_minor_dist.toFixed(0));
max_minor = b;//Math.max(b,arr_accum[max_votes.index].d); // between the max and the min
// coloring in blue the edges we don't need
if (any_minor_dist <= max_minor){
setPixel(hough_canvas_img_data,arr_edges[i].x,arr_edges[i].y,0,0,255,255);
arr_edges[i] = {x: -1, y: -1};
}// if
}// for
}// if - LEAST_REQUIRED_ELLIPSES
// clear accumulated array
arr_accum = [];
}// if1 - LEAST_REQUIRED_DISTANCE
}// for x2y2
}// for xy
edges_canvas.getContext('2d').putImageData(hough_canvas_img_data, 0, 0);
writeCanvasToFile(edges_canvas, __dirname + '/hough.jpg', function() {
});
function getMaxMinor(accum_in){
var max_votes = -1,
max_votes_idx,
i,
accum_len = accum_in.length;
for(i in accum_in){
if (accum_in[i].accum > max_votes){
max_votes = accum_in[i].accum;
max_votes_idx = i;
} // if
}
if (max_votes > 0){
return {max_votes: max_votes, index: max_votes_idx};
}
return null;
}
function distance(point_a,point_b){
return Math.sqrt((point_a.x - point_b.x) * (point_a.x - point_b.x) + (point_a.y - point_b.y) * (point_a.y - point_b.y));
}
function getEdgesArr(canvas_in){
var x,
y,
width = canvas_in.width,
height = canvas_in.height,
pixel,
edges = [],
ctx = canvas_in.getContext('2d'),
img_data = ctx.getImageData(0, 0, width, height);
for(x = 0; x < width; x++){
for(y = 0; y < height; y++){
pixel = getPixel(img_data, x,y);
if (pixel.r !== 0 &&
pixel.g !== 0 &&
pixel.b !== 0 ){
edges.push({x: x, y: y});
}
} // for
}// for
return edges
} // getEdgesArr
function drawImgToCanvasSync(file) {
var data = fs.readFileSync(file)
var canvas = dataToCanvas(data);
return canvas;
}
function dataToCanvas(imagedata) {
img = new Canvas.Image();
img.src = new Buffer(imagedata, 'binary');
var canvas = new Canvas(img.width, img.height);
var ctx = canvas.getContext('2d');
ctx.patternQuality = "best";
ctx.drawImage(img, 0, 0, img.width, img.height,
0, 0, img.width, img.height);
return canvas;
}
function writeCanvasToFile(canvas, file, callback) {
var out = fs.createWriteStream(file)
var stream = canvas.createPNGStream();
stream.on('data', function(chunk) {
out.write(chunk);
});
stream.on('end', function() {
callback();
});
}
function setPixel(imageData, x, y, r, g, b, a) {
index = (x + y * imageData.width) * 4;
imageData.data[index+0] = r;
imageData.data[index+1] = g;
imageData.data[index+2] = b;
imageData.data[index+3] = a;
}
function getPixel(imageData, x, y) {
index = (x + y * imageData.width) * 4;
return {
r: imageData.data[index+0],
g: imageData.data[index+1],
b: imageData.data[index+2],
a: imageData.data[index+3]
}
}
It seems you are trying to implement the algorithm of Yonghong Xie and Qiang Ji (2002), "A New Efficient Ellipse Detection Method", p. 957.
Ellipse removal suffers from several bugs
In your code, you perform the removal of the found ellipse (step 12 of the original paper's algorithm) by resetting coordinates to {-1, -1}.
You need to add:
if (arr_edges[x1y1].x === -1) break;
at the end of the x2y2 block. Otherwise, the loop will consider -1, -1 as a white point.
More importantly, your algorithm consists of erasing every point whose distance to the center is smaller than b. b is supposed to be the minor-axis half-length (per the original algorithm). But in your code, the variable b actually holds the latest (not the most frequent) half-length, and you erase points with a distance lower than b (instead of greater, since it's the minor axis). In other words, you clear all points inside a circle whose radius is the latest computed axis.
Your sample image can actually be processed by clearing all points inside a circle whose radius is the selected major-axis half-length, with:
max_minor = arr_accum[max_votes.index].d;
Indeed, you don't have overlapping ellipses and they are spread out enough. Please consider a better algorithm for overlapping or closer ellipses.
The algorithm mixes major and minor axes
Step 6 of the paper reads:
For each third pixel (x, y), if the distance between (x, y) and (x0,
y0) is greater than the required least distance for a pair of pixels
to be considered then carry out the following steps from (7) to (9).
This clearly is an approximation. If you do so, you will end up considering points further away than the minor-axis half-length, and eventually on the major axis (with the axes swapped). You should make sure the distance between the considered point and the tested ellipse center is smaller than the currently considered major-axis half-length (the condition should be d <= a). This will help with the ellipse-erasing part of the algorithm.
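A minimal sketch of that extra guard, using the variable names from the code above (the exact change is in the linked diff below):
// inside the xy loop, after computing d = distance({x: x0, y: y0}, arr_edges[xy]):
if (d > LEAST_REQUIRED_DISTANCE && d <= a) {
  // ... compute f, cos_tau, sin_tau_sqr and b, then vote, as before
}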
Also, if you compare with the least required distance for a pair of pixels, as per the original paper, 40 is too large for the smaller ellipse in your picture. The comment in your code is wrong; this value should be at most half the smallest ellipse's minor-axis half-length.
LEAST_REQUIRED_ELLIPSES is too small
This parameter is also misnamed. It is the minimum number of votes an ellipse should get to be considered valid. Each vote corresponds to a pixel, so a value of 6 means that only 6+2 pixels make an ellipse. Since pixel coordinates are integers and you have more than one ellipse in your picture, the algorithm might detect ellipses that aren't there and eventually clear edges (especially when combined with the buggy ellipse-erasing algorithm). Based on tests, a value of 100 will find four of the five ellipses in your picture, while 80 will find them all. Smaller values will not find the proper centers of the ellipses.
Sample image is not black & white
Despite the comment, the sample image is not exactly black and white. You should convert it or apply some threshold (e.g. treat RGB values greater than 10 as white, instead of simply different from 0).
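For example, the test in getEdgesArr could use a small threshold instead of comparing against exactly zero (a sketch of the suggestion above):
// inside getEdgesArr: ignore near-black (background) pixels
if (pixel.r > 10 &&
    pixel.g > 10 &&
    pixel.b > 10) {
  edges.push({x: x, y: y});
}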
Diff of minimum changes to make it work is available here:
https://gist.github.com/pguyot/26149fec29ffa47f0cfb/revisions
Finally, please note that parseInt(x.toFixed(0)) could be rewritten as Math.round(x), and you probably don't want to round all floats like this in the first place: the algorithm that erases the ellipse from the picture would benefit from non-truncated values for the center coordinates. This code could definitely be improved further; for example, it currently computes the distance between points x1y1 and x2y2 twice.
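For instance, the pair distance could be computed once and reused for the semi-major axis (a small sketch using the variable names from the code above):
// compute the distance between x1y1 and x2y2 once and reuse it
var pair_dist = distance(arr_edges[x1y1], arr_edges[x2y2]);
if (pair_dist > LEAST_REQUIRED_DISTANCE) {
  x0 = (arr_edges[x1y1].x + arr_edges[x2y2].x) / 2;
  y0 = (arr_edges[x1y1].y + arr_edges[x2y2].y) / 2;
  a = pair_dist / 2; // instead of recomputing the square root
  // ...
}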
