Smooth jagged pixels - javascript

I've created a pinch filter/effect on canvas using the following algorithm:
// iterate pixels
for (var i = 0; i < originalPixels.data.length; i += 4) {
    // calculate a pixel's position, distance, and angle
    var pixel = new Pixel(affectedPixels, i, origin);
    // check if the pixel is in the effect area
    if (pixel.dist < effectRadius) {
        // initial method (flawed)
        // iterate original pixels and calculate the new position of the current pixel in the affected pixels
        if (method.value == "org2aff") {
            var targetDist = ( pixel.dist - (1 - pixel.dist / effectRadius) * (effectStrength * effectRadius) ).clamp(0, effectRadius);
            var targetPos = calcPos(origin, pixel.angle, targetDist);
            setPixel(affectedPixels, targetPos.x, targetPos.y, getPixel(originalPixels, pixel.pos.x, pixel.pos.y));
        } else {
            // alternative method (better)
            // iterate affected pixels and calculate the original position of the current pixel in the original pixels
            var originalDist = (pixel.dist + (effectStrength * effectRadius)) / (1 + effectStrength);
            var originalPos = calcPos(origin, pixel.angle, originalDist);
            setPixel(affectedPixels, pixel.pos.x, pixel.pos.y, getPixel(originalPixels, originalPos.x, originalPos.y));
        }
    } else {
        // copy unaffected pixels from original to new image
        setPixel(affectedPixels, pixel.pos.x, pixel.pos.y, getPixel(originalPixels, pixel.pos.x, pixel.pos.y));
    }
}
I've struggled a lot to get it to this point and I'm quite happy with the result. Nevertheless, I have a small problem: jagged pixels. Compare the JS pinch with GIMP's:
I don't know what I'm missing. Do I need to apply another filter after the actual filter? Or is my algorithm wrong altogether?
I can't add the full code here (as a SO snippet) because it contains 4 base64 images/textures (65k chars in total). Instead, here's a JSFiddle.

One way to clean up the result is supersampling. Here's a simple example: https://jsfiddle.net/Lawmo4q8/
Basically, instead of calculating a single value for a single pixel, you take multiple value samples within/around the pixel...
let color =
    calcColor(x - 0.25, y - 0.25) + calcColor(x + 0.25, y - 0.25) +
    calcColor(x - 0.25, y + 0.25) + calcColor(x + 0.25, y + 0.25);
...and merge the results in some way.
color /= 4;
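For a more concrete picture, here is a minimal sketch of that 2x2 supersampling idea, assuming a hypothetical calcColor(x, y) helper that maps a (possibly fractional) target position through the inverse pinch and returns an {r, g, b, a} object read from originalPixels:
// Minimal 2x2 supersampling sketch. calcColor(x, y) is an assumed helper that
// runs the inverse pinch mapping for (x, y) and samples originalPixels there.
function supersample(x, y) {
    var offsets = [-0.25, 0.25];
    var r = 0, g = 0, b = 0, a = 0;
    for (var i = 0; i < offsets.length; i++) {
        for (var j = 0; j < offsets.length; j++) {
            var s = calcColor(x + offsets[i], y + offsets[j]); // one sub-sample
            r += s.r; g += s.g; b += s.b; a += s.a;
        }
    }
    // average the four sub-samples into one output pixel
    return { r: r / 4, g: g / 4, b: b / 4, a: a / 4 };
}
Sampling originalPixels at fractional positions with bilinear interpolation (instead of rounding to the nearest pixel) smooths the result further.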

Related

Is it possible to call a function over and over but to not reset its previous values?

So I am creating a Flappy Bird game (if you know the game, what follows will be much easier to understand), and inside this game I have two pipes. One pipe starts at the lowest y value (600px) and the other starts at a y value of 0, so the pipes essentially have opposite y values but share the same x value (the x value moves, but it stays the same for both pipes). The heights of the pipes are also randomly generated. The question starts here: after a number of tubeX pixels travelled (a time interval could also work), I want to add another set of pipes and do the same with them, but the values of the old pipes must stay the same. I think an array of some sort would be best, but my JavaScript abilities are quite low and I wouldn't have a clue how to implement something like that into my code until I see it.
This bit of code generates random heights:
function pipeY() {
    var top = Math.random() * -32 + 1;
    height1 = top * 10;
    var bottom = Math.random() * 32 + 1;
    height2 = bottom * 10;
    loop();
}
this creates a moving X value for the pipes:
tubeX = 400;
velocityX = 0;
force = -0.5;

function pipeX() {
    velocityX += force;
    velocityX *= 0.9;
    tubeX += velocityX;
}
This creates the two pipes that are opposite of each other:
function show() {
    var tubeHeight1 = 600;
    var tubeHeight2 = 0;
    ctx.fillStyle = "green";
    tube1 = ctx.fillRect(tubeX, tubeHeight1, 5, height1);
    tube2 = ctx.fillRect(tubeX, tubeHeight2, 5, height2);
    ctx.stroke();
}
What I ended up doing was creating a while loop inside the pipeX() function. I also changed the value of tubeX to 800 so it simply creates a new pipe at 400px. This probably isn't the most efficient way, but it looks good!
function pipeX() {
    velocityX += force;
    velocityX *= 0.9;
    tubeX += velocityX;
    // once the pipe has moved off the left edge, reset it to x = 400,
    // roll new random heights and redraw (in the original code the condition
    // `if (tubeX = 400)` was an assignment, which is what performed the reset)
    while (tubeX < -25) {
        tubeX = 400;
        pipeY();
        show();
    }
}
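As a hedged sketch (not from the question or answer), one way to get the "array of some sort" mentioned above is to keep every pipe pair in an array, so old pipes keep their own values while new ones are spawned; ctx, the canvas context, is assumed to exist already:
var pipes = [];

function spawnPipePair() {
    var top = Math.random() * -32 + 1;
    var bottom = Math.random() * 32 + 1;
    // each entry remembers its own x position and heights
    pipes.push({ x: 400, height1: top * 10, height2: bottom * 10 });
}

function updateAndDrawPipes() {
    ctx.fillStyle = "green";
    for (var i = 0; i < pipes.length; i++) {
        var p = pipes[i];
        p.x -= 2;                             // move this pair to the left
        ctx.fillRect(p.x, 600, 5, p.height1); // bottom pipe
        ctx.fillRect(p.x, 0, 5, p.height2);   // top pipe
    }
    // spawn a new pair once the newest one has travelled far enough
    if (pipes.length === 0 || pipes[pipes.length - 1].x < 200) {
        spawnPipePair();
    }
    // drop pairs that have moved off the left edge
    if (pipes.length > 0 && pipes[0].x < -25) {
        pipes.shift();
    }
}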

Auto adjust brightness/contrast to read text from images

I was wondering if anyone can point me in the right direction for auto-adjusting the brightness/contrast of an image taken with a phone camera, using JavaScript, to make reading text from the image easier.
Appreciate any help, many thanks.
To automatically adjust an image we can generate a histogram from it, then use a threshold to find a black point and a white point, and finally use those to stretch the pixel values towards both ends of the range.
In HTML5 we would need to use the canvas element in order to read pixel information.
Building a histogram
A histogram is an overview of which values are most represented in an image. For brightness-contrast we would be interested in the luma value (the perceived lightness of a pixel).
Example luma histogram
To calculate a luma value we can use the REC.709 (aka BT.709) or REC.601 coefficients; the formula below (and the example code) uses the REC.601 weights:
Y = 0.299 * R + 0.587 * G + 0.114 * B
We need to round this to an integer (iluma = Math.round(luma);), otherwise we would have a hard time building the histogram, which uses the integer values [0, 255] as bin indices (see the example code below).
The strategy for determining which range to use can vary, but for simplicity we can choose a threshold based on a minimum number of pixels represented at both ends.
Red line showing example threshold
To find the darkest value we scan from left to right and take the first luma bin whose count exceeds the threshold as the minimum value. If we reach the centre (or even just 33% in) without finding one, we abort and default to 0.
For the brightest value we do the same from right to left, defaulting to 255 if no bin exceeds the threshold.
You can of course use different threshold values for each end - it's all a game of trial-and-error with the values until you find something that suits your scenario.
We should now have two values representing the min-max range:
Min-max range based on threshold
Scaling the general luma level
First calculate the scale factor we need to use based on the min-max range:
scale = 255 / (max - min) * 2
We will always subtract min from each component even if that means it will clip (if < 0 set the value to 0). When subtracted we scale each component value using the scale factor. The x2 at the end is to compensate for the variations between luma and actual RGB values. Play around with this value like the others (here just an arbitrary example).
We do this for each component in each pixel (0-clip and scale):
component = max(0, component - min) * scale
When the image data is put back the contrast should be max based on the given threshold.
Tips
You don't have to use the entire image bitmap to analyze the histogram. If you deal with large image sources scale down to a small representation - you don't need much as we're after the brightest/darkest areas and not single pixels.
You can brighten and add contrast to an image by blending it with itself using blend modes such as multiply, lighten, hard-light/soft-light etc. (IE11 and earlier do not support blend modes). Adjust the formula for these, and just experiment.
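As a rough sketch of that blending idea (my own, assuming a canvas c that already holds the image), a single multiply pass of the canvas onto itself looks like this:
var bctx = c.getContext("2d");
bctx.globalCompositeOperation = "multiply";    // darkens mid-tones, increases contrast
bctx.drawImage(c, 0, 0);                       // blend the canvas with itself
bctx.globalCompositeOperation = "source-over"; // restore default compositing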
Example
This works on a buffer showing the techniques described above. There exist more complex and accurate methods, but this is given as a proof-of-concept (licensed under CC-3.0-by-sa, attribution required).
It starts out with a 10% threshold value. Use slider to see the difference in result using the threshold. The threshold can be calculated via other methods than the one shown here. Experiment!
var ctx = c.getContext("2d"),
    img = new Image();      // some demo image

img.crossOrigin = "";       // needed for demo
img.onload = setup;
img.src = "//i.imgur.com/VtNwHbU.jpg";

function setup() {
    // set canvas size based on image
    c.width = this.width;
    c.height = this.height;
    // draw in image to canvas
    ctx.drawImage(this, 0, 0);
    // keep the original for comparison and for demo
    org.src = c.toDataURL();
    process(this, +tv.value);
}

function process(img, thold) {      // thold = % of hist max
    var width = img.width, height = img.height,
        idata, data,
        i, min = -1, max = -1,      // to find min-max
        maxH = 0,                   // to find scale of histogram
        scale,
        hgram = new Uint32Array(256);  // histogram buffer, one bin per luma value [0, 255]

    // get image data
    idata = ctx.getImageData(0, 0, img.width, img.height);  // needed for later
    data = idata.data;                                       // the bitmap itself

    // get lumas and build histogram
    for (i = 0; i < data.length; i += 4) {
        var luma = Math.round(rgb2luma(data, i));
        hgram[luma]++;  // add to the luma bar (and why we need an integer)
    }

    // find tallest bar so we can use that to scale threshold
    for (i = 0; i < 256; i++) {
        if (hgram[i] > maxH) maxH = hgram[i];
    }

    // use that for threshold
    thold *= maxH;

    // find min value
    for (i = 0; i < 128; i++) {
        if (hgram[i] > thold) {
            min = i;
            break;
        }
    }
    if (min < 0) min = 0;    // not found, set to default 0

    // find max value
    for (i = 255; i > 128; i--) {
        if (hgram[i] > thold) {
            max = i;
            break;
        }
    }
    if (max < 0) max = 255;  // not found, set to default 255

    scale = 255 / (max - min) * 2;  // x2 compensates (play with value)

    out.innerHTML = "Min: " + min + " Max: " + max +
                    " Scale: " + scale.toFixed(1) + "x";

    // scale all pixels
    for (i = 0; i < data.length; i += 4) {
        data[i]     = Math.max(0, data[i]     - min) * scale;
        data[i + 1] = Math.max(0, data[i + 1] - min) * scale;
        data[i + 2] = Math.max(0, data[i + 2] - min) * scale;
    }
    ctx.putImageData(idata, 0, 0);
}

tv.oninput = function() {
    v.innerHTML = (tv.value * 100).toFixed(0) + "%";
    ctx.drawImage(img, 0, 0);
    process(img, +tv.value);
};

function rgb2luma(px, pos) {
    return px[pos] * 0.299 + px[pos + 1] * 0.587 + px[pos + 2] * 0.114;
}
<label>Threshold:
    <input id=tv type=range min=0 max=1 step=0.01 value=0.1></label>
<span id=v>10%</span><br>
<canvas id=c></canvas><br>
<div id=out></div>
<h3>Original:</h3>
<img id=org>

Mirroring right half of webcam image

I saw that you have helped David with his mirroring canvas problem before: Canvas - flip half the image
I have a similar problem and hope that maybe you could help me.
I want to apply the same mirror effect to my webcam canvas, but instead of the left side, I want to take the RIGHT half of the image, flip it and apply it to the LEFT.
This is the code you posted for David. It also works for my webcam canvas. I tried to change it so that it works for the other side, but unfortunately I haven't been able to get it right.
for (var y = 0; y < height; y++) {
    for (var x = 0; x < width / 2; x++) { // divide by 2 to only loop through the left half of the image
        var offset = ((width * y) + x) * 4; // pixel origin

        // get pixel
        var r = data[offset];
        var g = data[offset + 1];
        var b = data[offset + 2];
        var a = data[offset + 3];

        // calculate how far to the right the mirrored pixel is
        var mirrorOffset = (width - (x * 2)) * 4;

        // set the mirrored pixel's colours
        data[offset + mirrorOffset]     = r;
        data[offset + 1 + mirrorOffset] = g;
        data[offset + 2 + mirrorOffset] = b;
        data[offset + 3 + mirrorOffset] = a;
    }
}
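For reference, a hedged pixel-based sketch of what you are after (mirroring the RIGHT half onto the LEFT) would read the mirrored source index directly; the answer below takes a different and faster route:
for (var y = 0; y < height; y++) {
    for (var x = 0; x < width / 2; x++) {
        var dest = ((width * y) + x) * 4;               // left-half pixel to overwrite
        var src  = ((width * y) + (width - 1 - x)) * 4; // mirrored right-half pixel
        data[dest]     = data[src];
        data[dest + 1] = data[src + 1];
        data[dest + 2] = data[src + 2];
        data[dest + 3] = data[src + 3];
    }
}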
Even if the accepted answer you're relying on uses imageData, there's absolutely no need for that here.
Canvas allows, with drawImage and its transforms (scale, rotate, translate), to perform many operations, one of them being to safely copy the canvas onto itself.
The advantage is that it is way easier AND way, way faster than handling the image through its RGB components.
I'll let you read the code below; hopefully it's commented clearly enough.
The fiddle is here:
http://jsbin.com/betufeha/2/edit?js,output
One example output - I also took a mountain, a Canadian one :-) :
Original :
Output :
html
<canvas id='cv'></canvas>
javascript
var mountain = new Image();
mountain.onload = drawMe;
mountain.src = 'http://www.hdwallpapers.in/walls/brooks_mountain_range_alaska-normal.jpg';

function drawMe() {
    var cv = document.getElementById('cv');
    // set the width/height same as image.
    cv.width = mountain.width;
    cv.height = mountain.height;
    var ctx = cv.getContext('2d');
    // first copy the whole image.
    ctx.drawImage(mountain, 0, 0);
    // save to avoid messing up the context.
    ctx.save();
    // translate to the middle of the left part of the canvas = 1/4th of the image.
    ctx.translate(cv.width / 4, 0);
    // flip the x coordinates to have a mirror effect
    ctx.scale(-1, 1);
    // copy the right part onto the left part.
    ctx.drawImage(cv,
        /* source      */ cv.width / 2, 0, cv.width / 2, cv.height,
        /* destination */ -cv.width / 4, 0, cv.width / 2, cv.height);
    // restore the context
    ctx.restore();
}
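If the source is a webcam rather than a static image, the same drawImage trick can be run once per frame. A hedged sketch, assuming a <video id='vid'> element that is already playing the camera stream:
function drawFrame() {
    var cv = document.getElementById('cv');
    var vid = document.getElementById('vid');
    var ctx = cv.getContext('2d');
    // draw the current camera frame
    ctx.drawImage(vid, 0, 0, cv.width, cv.height);
    ctx.save();
    ctx.translate(cv.width / 4, 0); // middle of the left half
    ctx.scale(-1, 1);               // mirror the x axis
    ctx.drawImage(cv,
        cv.width / 2, 0, cv.width / 2, cv.height,   // source: right half
        -cv.width / 4, 0, cv.width / 2, cv.height); // destination: left half
    ctx.restore();
    requestAnimationFrame(drawFrame); // repeat every frame
}
requestAnimationFrame(drawFrame);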

KinectJS: Algorithm required to determine new X,Y coords after image resize

BACKGROUND:
The app allows users to upload a photo of themselves and then place a pair of glasses over their face to see what it looks like. For the most part, it is working fine. After the user selects the location of the 2 pupils, I auto zoom the image based on the ratio between the distance of the pupils and then already known distance between the center points of the glasses. All is working fine there, but now I need to automatically place the glasses image over the eyes.
I am using KinectJS, but the problem is not specific to that library or to JavaScript; it is more of an algorithm question.
WHAT I HAVE TO WORK WITH:
Distance between pupils (eyes)
Distance between pupils (glasses)
Glasses width
Glasses height
Zoom ratio
SOME CODE:
//.. code before here just zooms the image, etc..

// problem is here (this is wrong, but I need to know the right way to calculate this)
var newLeftEyeX = self.leftEyePosition.x * ratio;
var newLeftEyeY = self.leftEyePosition.y * ratio;

// create a blue dot for testing (remove later)
var newEyePosition = new Kinetic.Circle({
    radius: 3,
    fill: "blue",
    stroke: "blue",
    strokeWidth: 0,
    x: newLeftEyeX,
    y: newLeftEyeY
});
self.pointsLayer.add(newEyePosition);

var glassesWidth = glassesImage.getWidth();
var glassesHeight = glassesImage.getHeight();

// this code below works perfectly, as I can see the glasses centred over the blue dot created above
newGlassesPosition.x = newLeftEyeX - (glassesWidth / 4);
newGlassesPosition.y = newLeftEyeY - (glassesHeight / 2);
NEEDED
A math genius to give me the algorithm to determine where the new left eye position should be AFTER the image has been resized
UPDATE
After researching this for the past 6 hours or so, I think I need to do some sort of "translate transform", but the examples I see only allow setting this by x and y amounts, whereas I will only know the scale of the underlying image. Here's the example I found (which cannot help me):
http://tutorials.jenkov.com/html5-canvas/transformation.html
and here is something which looks interesting, but it is for Silverlight:
Get element position after transform
Is there perhaps some way to do the same in Html5 and/or KinectJS? Or perhaps I am going down the wrong road here... any ideas people?
UPDATE 2
I tried this:
// if zoomFactor > 1, then the picture got bigger, so...
if (zoomFactor > 1) {
    // if x = 10 (for example) and zoomFactor = 2, the new x should be 5
    // current x / zoomFactor => 10 / 2 = 5
    newLeftEyeX = self.leftEyePosition.x / zoomFactor;
    // same for y
    newLeftEyeY = self.leftEyePosition.y / zoomFactor;
} else {
    // else the picture got smaller, so...
    // if x = 10 (for example) and zoomFactor = 0.5, the new x should be 20
    // current x * (1 / zoomFactor) => 10 * (1 / 0.5) = 10 * 2 = 20
    newLeftEyeX = self.leftEyePosition.x * (1 / zoomFactor);
    // same for y
    newLeftEyeY = self.leftEyePosition.y * (1 / zoomFactor);
}
that didn't work, so then I tried an implementation of Rody Oldenhuis' suggestion (thanks Rody):
var xFromCenter = self.leftEyePosition.x - self.xCenter;
var yFromCenter = self.leftEyePosition.y - self.yCenter;
var angle = Math.atan2(yFromCenter, xFromCenter);
// note: the standard function is Math.hypot; Math.hypotenuse does not exist
var length = Math.hypot(xFromCenter, yFromCenter);
var xNew = zoomFactor * length * Math.cos(angle);
var yNew = zoomFactor * length * Math.sin(angle);
newLeftEyeX = xNew + self.xCenter;
newLeftEyeY = yNew + self.yCenter;
However, that is still not working as expected. So, I am not sure what the issue is currently. If anyone has worked with KinectJS before and has an idea of what the issue may be, please let me know.
UPDATE 3
I checked Rody's calculations on paper and they seem fine, so there is obviously something else here messing things up. I got the coordinates of the left pupil at zoom factors 1 and 2. With those coordinates, maybe someone can figure out what the issue is:
Zoom Factor 1: x = 239, y = 209
Zoom Factor 2: x = 201, y = 133
OK, since it's an algorithmic question, I'm going to keep this generic and only write pseudo code.
If I understand you correctly, what you want is the following:
Transform all coordinates such that the origin of your coordinate system is at the zoom center (usually, central pixel)
Compute the angle a line drawn from this new origin to a point of interest makes with the positive x-axis. Compute also the length of this line.
The new x and y coordinates after zooming are defined by elongating this line, such that the new line is the zoom factor times the length of the original line.
Transform the newly found x and y coordinates back to a coordinate system that makes sense to the computer (e.g., top left pixel = 0,0)
Repeat for all points of interest.
In pseudo-code (with formulas):
x_center = image_width/2
y_center = image_height/2
x_from_zoom_center = x_from_topleft - x_center
y_from_zoom_center = y_from_topleft - y_center
angle = atan2(y_from_zoom_center, x_from_zoom_center)
length = hypot(x_from_zoom_center, y_from_zoom_center)
x_new = zoom_factor * length * cos(angle)
y_new = zoom_factor * length * sin(angle)
x_new_topleft = x_new + x_center
y_new_topleft = y_new + y_center
Note that this assumes the number of pixels used for length and width stays the same after zooming. Note also that some rounding should take place (keep everything in double precision, and only round to int after all calculations).
In the code above, atan2 is the four-quadrant arctangent, available in most programming languages, and hypot is simply sqrt(x*x + y*y), but computed more carefully (e.g., to avoid overflow), also available in most programming languages.
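For completeness, a direct JavaScript translation of the pseudo code above (a sketch only; the function name and the centre-based zoom are my assumptions):
function zoomPoint(x, y, imageWidth, imageHeight, zoomFactor) {
    var xCenter = imageWidth / 2;
    var yCenter = imageHeight / 2;
    var dx = x - xCenter;            // coordinates relative to the zoom centre
    var dy = y - yCenter;
    var angle = Math.atan2(dy, dx);  // four-quadrant arctangent
    var length = Math.hypot(dx, dy); // sqrt(dx*dx + dy*dy)
    var xNew = zoomFactor * length * Math.cos(angle);
    var yNew = zoomFactor * length * Math.sin(angle);
    return {
        x: xNew + xCenter,           // back to top-left-origin coordinates
        y: yNew + yCenter
    };
}
Since length * cos(angle) is just dx (and length * sin(angle) is dy), this reduces to x_new = zoom_factor * dx + x_center, which is handy if no trigonometric functions are available.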
Is this indeed what you were after?

HTML5 Canvas: Splitting/Calculating Lines

I've been banging my head on the keyboard for about a week now and I can't figure out a proper solution for my problem. I think it's more math-related than HTML canvas... hopefully someone can point me in the right direction.
I have an HTML canvas where users can draw lines with their mouse using the very simple moveTo() and lineTo() functions. When the user is done I save the coordinates in MongoDB. When the user hits the page again later I want to display their drawing, BUT I don't want to load the entire picture with all stored coordinates at once; I want to return it in tiles (for better performance by caching each tile).
The tiles are 200x200 pixels (fixed offsets and width, starting at 0 -> 200 -> 400 -> ...).
Now, when the user draws a line from, say, 50,50 (x/y) to 250,250 (x/y), only one endpoint lies in each bounding box (tile). I need to split the lines and calculate the start and end points of each segment in each bounding box (tile). Otherwise I can't draw the image partially (in tiles). It gets even more complicated when a single line crosses multiple bounding boxes (tiles), for instance: 100,100 (x/y) -> -1234,-300 (x/y).
The lines can go from any point (+/-) to ANY direction for ANY distance.
Of course I looked at Bresenham's good old algorithm and it worked - partially, but it seems like the longest and most resource-hungry solution to me.
So, the reason I'm here is that I hope someone can point me into the right direction with (perhaps) another approach of calculating the start/ending points of my lines for each bounding box.
Code examples are very welcome in JavaScript or PHP.
Thank you for reading and thinking about it :)
tl;dr: Use planes, maths explained below. There's a canvas example at the bottom.
Given that all of your cells are axis-aligned bounding boxes, you could use the plane equation to find the intersection of your line with the edges.
Planes
You can think of your box as a set of four geometric planes. Each plane has a normal, or a vector of length one, indicating which direction is the "front" of the plane. The normals for the planes that make up your cell's sides would be:
top = {x: 0, y: -1};
bottom = {x: 0, y: 1};
left = {x: -1, y: 0};
right = {x: 1, y: 0};
Given a point on the plane, the plane has the equation:
distance = (normal.x * point.x) + (normal.y * point.y)
You can use this equation to calculate the distance of the plane. In this case, you know the top-left corner of your box (let's say x is 10 and y is 100) is on the top plane, so you can do:
distance = (0 * 10) + (-1 * 100)
distance = -100
Checking a point against a plane
Once you have the distance, you can reuse the equation to check where any point is, relative to the plane. For a random point p (where x is -50 and y is 90), you can do:
result = (normal.x * p.x) + (normal.y * p.y) - distance
result = (0 * -50) + (-1 * 90) - (-100)
result = 0 + (-90) - (-100)
result = -90 + 100
result = 10
There are two possible results:
if (result >= 0) {
    // point is in front of the plane, or coplanar.
    // zero means it is coplanar, but we don't need to distinguish.
} else {
    // point is behind the plane
}
Checking a line against a plane
You can check both endpoints of a line from a to b in this way:
result1 = (normal.x * a.x) + (normal.y * a.y) - distance
result2 = (normal.x * b.x) + (normal.y * b.y) - distance
There are four possible results:
if (result1 >= 0 && result2 >= 0) {
    // the line is completely in front of the plane
} else if (result1 < 0 && result2 < 0) {
    // the line is completely behind the plane
} else if (result1 >= 0 && result2 < 0) {
    // a is in front, but b is behind; the line is entering the plane
} else if (result1 < 0 && result2 >= 0) {
    // a is behind, but b is in front; the line is exiting the plane
}
When the line intersects the plane, you want to find the point of intersection. It helps to think of a line in vector terms:
a + t * (b - a)
If t == 0, you are at the start of the line, and t == 1 is the end of the line. In this context, you can calculate the point of intersection as:
time = result1 / (result1 - result2)
And the point of intersection as:
hit.x = a.x + (b.x - a.x) * time
hit.y = a.y + (b.y - a.y) * time
Checking a line against the box
With that math, you can figure out the lines of intersection with your box. You just need to test the endpoints of your line against each plane, and find the minimum and maximum values of time.
Because your box is a convex polygon, there is an early out in this check: if the line is completely in front of any one plane in your box, it cannot intersect with your box. You can skip checking the rest of the planes.
In JavaScript, your result might look something like this:
/**
 * Find the points where a line intersects a box.
 *
 * @param a  Start point for the line.
 * @param b  End point for the line.
 * @param tl Top left of the box.
 * @param br Bottom right of the box.
 * @return   Object {nearTime, farTime, nearHit, farHit}, or false.
 */
function intersectLineBox(a, b, tl, br) {
    var nearestTime = -Infinity;
    var furthestTime = Infinity;

    var planes = [
        {nx: 0, ny: -1, dist: -tl.y}, // top
        {nx: 0, ny: 1, dist: br.y},   // bottom
        {nx: -1, ny: 0, dist: -tl.x}, // left
        {nx: 1, ny: 0, dist: br.x}    // right
    ];

    for (var i = 0; i < 4; ++i) {
        var plane = planes[i];
        var nearDist = (plane.nx * a.x + plane.ny * a.y) - plane.dist;
        var farDist = (plane.nx * b.x + plane.ny * b.y) - plane.dist;

        if (nearDist >= 0 && farDist >= 0) {
            // both are in front of the plane, line doesn't hit box
            return false;
        } else if (nearDist < 0 && farDist < 0) {
            // both are behind the plane
            continue;
        } else {
            var time = nearDist / (nearDist - farDist);
            if (nearDist >= farDist) {
                // entering the plane
                if (time > nearestTime) {
                    nearestTime = time;
                }
            } else {
                // exiting the plane
                if (time < furthestTime) {
                    furthestTime = time;
                }
            }
        }
    }

    if (furthestTime < nearestTime) {
        return false;
    }

    return {
        nearTime: nearestTime,
        farTime: furthestTime,
        nearHit: {
            x: a.x + (b.x - a.x) * nearestTime,
            y: a.y + (b.y - a.y) * nearestTime
        },
        farHit: {
            x: a.x + (b.x - a.x) * furthestTime,
            y: a.y + (b.y - a.y) * furthestTime
        }
    };
}
If this is still too slow, you can also do broadphase culling by dividing the world up into big rects, and assigning lines to those rects. If your line and cell aren't in the same rect, they don't collide.
I've uploaded a canvas example of this.
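A hedged sketch of that broadphase idea, using the 200x200 tile grid from the question (names are illustrative): each line is assigned to every tile its bounding box overlaps, so only those lines need the full intersectLineBox test when a tile is rendered.
var TILE = 200;

// Return the keys of all tiles a line's bounding box overlaps.
// This is conservative: it may include tiles the line only passes near,
// which is fine for a broadphase.
function tilesForLine(a, b) {
    var minX = Math.floor(Math.min(a.x, b.x) / TILE);
    var maxX = Math.floor(Math.max(a.x, b.x) / TILE);
    var minY = Math.floor(Math.min(a.y, b.y) / TILE);
    var maxY = Math.floor(Math.max(a.y, b.y) / TILE);
    var keys = [];
    for (var ty = minY; ty <= maxY; ty++) {
        for (var tx = minX; tx <= maxX; tx++) {
            keys.push(tx + "," + ty); // one key per candidate tile
        }
    }
    return keys;
}
Building a map from tile key to the lines stored under it lets each tile load and draw only its own candidate segments.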
This looks like you'd have to figure out at what point each line intersects with the bounds of each tile.
Check out the answer to this question: Is there an easy way to detect line segment intersections?
The answers don't provide code, but it shouldn't be too hard to convert the equations to PHP or Javascript...
EDIT:
Why, exactly, do you want to split the lines? I understand you don't want to load all the lines at once, since that could take a while. But what's wrong with just loading and drawing the first few lines, and drawing the remainder later on?
Methinks that would be a lot simpler than having to cut up each line to fit in a specific tile. Tiling is a nice way of optimizing bitmap loading; I don't think it's very appropriate for vector-based drawings.
You could also consider sending an Ajax request, and start drawing the whole thing whenever it comes in; this would not interfere with the loading of the page.
