I plan to use it with JavaScript to crop an image to fit the entire window.
Edit: I'll be using a 3rd party component that only accepts the aspect ratio in the format like: 4:3, 16:9.
Edit, some 12 years later: questions like this are still rather interesting! There is something here, right? Absolutely!
I gather you're looking for a usable integer:integer aspect ratio solution like 16:9 rather than a float:1 solution like 1.77778:1.
If so, what you need to do is find the greatest common divisor (GCD) and divide both values by that. The GCD is the highest number that evenly divides both numbers. So the GCD for 6 and 10 is 2, the GCD for 44 and 99 is 11.
For example, a 1024x768 monitor has a GCD of 256. When you divide both values by that you get 4x3 or 4:3.
A (recursive) GCD algorithm:
function gcd (a, b):
    if b == 0:
        return a
    return gcd (b, a mod b)
In C:

#include <stdio.h>

static int gcd (int a, int b) {
    return (b == 0) ? a : gcd (b, a % b);
}

int main(void) {
    printf ("gcd(1024,768) = %d\n", gcd(1024, 768));
    return 0;
}
And here's some complete HTML/JavaScript which shows one way to detect the screen size and calculate the aspect ratio from that. This works in FF3; I'm not sure what support other browsers have for screen.width and screen.height.
<html><body>
<script type="text/javascript">
    function gcd (a, b) {
        return (b == 0) ? a : gcd (b, a % b);
    }
    var w = screen.width;
    var h = screen.height;
    var r = gcd (w, h);
    document.write ("<pre>");
    document.write ("Dimensions = ", w, " x ", h, "<br>");
    document.write ("Gcd = ", r, "<br>");
    document.write ("Aspect = ", w/r, ":", h/r);
    document.write ("</pre>");
</script>
</body></html>
It outputs (on my weird wide-screen monitor):
Dimensions = 1680 x 1050
Gcd = 210
Aspect = 8:5
Others that I tested this on:
Dimensions = 1280 x 1024
Gcd = 256
Aspect = 5:4
Dimensions = 1152 x 960
Gcd = 192
Aspect = 6:5
Dimensions = 1280 x 960
Gcd = 320
Aspect = 4:3
Dimensions = 1920 x 1080
Gcd = 120
Aspect = 16:9
I wish I had that last one at home but, no, it's a work machine unfortunately.
What you do if you find out the aspect ratio is not supported by your graphic resize tool is another matter. I suspect the best bet there would be to add letter-boxing lines (like the ones you get at the top and bottom of your old TV when you're watching a wide-screen movie on it). I'd add them at the top/bottom or the sides (whichever one results in the least number of letter-boxing lines) until the image meets the requirements.
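If it helps, here's a rough sketch of that padding calculation (the function name and the return shape are my own, purely for illustration):

function letterboxPadding(w, h, targetW, targetH) {
    var target = targetW / targetH;
    if (w / h > target) {
        // image is too wide for the target ratio: pad top/bottom
        return { padX: 0, padY: Math.round(w / target - h) };
    }
    // image is too tall (or already exact): pad left/right
    return { padX: Math.round(h * target - w), padY: 0 };
}

letterboxPadding(1024, 768, 16, 9); // { padX: 341, padY: 0 }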
One thing you may want to consider is the quality of a picture that's been changed from 16:9 to 5:4 - I still remember the incredibly tall, thin cowboys I used to watch in my youth on television before letter-boxing was introduced. You may be better off having one different image per aspect ratio and just resize the correct one for the actual screen dimensions before sending it down the wire.
aspectRatio = width / height
if that's what you're after. You can then multiply it by one of the dimensions of the target space to find the other, which maintains the ratio.
e.g.
widthT = heightT * aspectRatio
heightT = widthT / aspectRatio
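For instance, to fit a 16:9 source into a slot that's 300 pixels wide (the numbers are just an illustration):

var aspectRatio = 16 / 9;
var widthT = 300;
var heightT = widthT / aspectRatio; // 168.75, same ratio as the source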
paxdiablo's answer is great, but there are a lot of common resolutions that are just a few pixels off in one direction, and the greatest common divisor approach gives horrible results for them.
Take for example the well-behaved resolution of 1360x765, which gives a nice 16:9 ratio using the GCD approach. According to Steam, this resolution is used by only 0.01% of its users, while 1366x768 is used by a whopping 18.9%. Here is what we get using the GCD approach:
1360x765 - 16:9 (0.01%)
1360x768 - 85:48 (2.41%)
1366x768 - 683:384 (18.9%)
We'd want to round that 683:384 ratio to the closest standard ratio, 16:9.
I wrote a Python script that parses a text file with numbers pasted from the Steam Hardware Survey page, and prints all resolutions and their closest known ratios, as well as the prevalence of each ratio (which was my goal when I started this):
# Contents pasted from store.steampowered.com/hwsurvey, section 'Primary Display Resolution'
steam_file = './steam.txt'

# Taken from http://upload.wikimedia.org/wikipedia/commons/thumb/f/f0/Vector_Video_Standards4.svg/750px-Vector_Video_Standards4.svg.png
accepted_ratios = ['5:4', '4:3', '3:2', '8:5', '5:3', '16:9', '17:9']

#-------------------------------------------------------
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)

#-------------------------------------------------------
class ResData:

    #-------------------------------------------------------
    # Expected format: 1024 x 768 4.37% -0.21%  (w x h prevalence% change%)
    def __init__(self, steam_line):
        tokens = steam_line.split(' ')
        self.width = int(tokens[0])
        self.height = int(tokens[2])
        self.prevalence = float(tokens[3].replace('%', ''))

        # This part based on paxdiablo's gcd answer - http://stackoverflow.com/a/1186465/828681
        common = gcd(self.width, self.height)
        self.ratio = str(self.width // common) + ':' + str(self.height // common)
        self.ratio_error = 0

        # Special case: ratio is not well behaved
        if self.ratio not in accepted_ratios:
            lesser_error = 999
            lesser_index = -1
            my_ratio_normalized = float(self.width) / float(self.height)

            # Check how far this resolution is from each known aspect, and take the one with the smaller error
            for i in range(len(accepted_ratios)):
                ratio = accepted_ratios[i].split(':')
                w = float(ratio[0])
                h = float(ratio[1])
                known_ratio_normalized = w / h
                distance = abs(my_ratio_normalized - known_ratio_normalized)
                if distance < lesser_error:
                    lesser_index = i
                    lesser_error = distance
                    self.ratio_error = distance

            self.ratio = accepted_ratios[lesser_index]

    #-------------------------------------------------------
    def __str__(self):
        descr = str(self.width) + 'x' + str(self.height) + ' - ' + self.ratio + ' - ' + str(self.prevalence) + '%'
        if self.ratio_error > 0:
            descr += ' error: %.2f' % (self.ratio_error * 100) + '%'
        return descr

#-------------------------------------------------------
# Returns a list of ResData
def parse_steam_file(steam_file):
    result = []
    for line in open(steam_file):
        result.append(ResData(line))
    return result

#-------------------------------------------------------
ratios_prevalence = {}
data = parse_steam_file(steam_file)

print('Known Steam resolutions:')
for res in data:
    print(res)
    acc_prevalence = ratios_prevalence[res.ratio] if (res.ratio in ratios_prevalence) else 0
    ratios_prevalence[res.ratio] = acc_prevalence + res.prevalence

# Hack to fix 8:5, better known as 16:10
ratios_prevalence['16:10'] = ratios_prevalence['8:5']
del ratios_prevalence['8:5']

print('\nSteam screen ratio prevalences:')
sorted_ratios = sorted(ratios_prevalence.items(), key=lambda x: x[1], reverse=True)
for value in sorted_ratios:
    print(value[0] + ' -> ' + str(value[1]) + '%')
For the curious, this is the prevalence of screen ratios among Steam users (as of October 2012):
16:9 -> 58.9%
16:10 -> 24.0%
5:4 -> 9.57%
4:3 -> 6.38%
5:3 -> 0.84%
17:9 -> 0.11%
I guess you want to decide which of 4:3 and 16:9 is the best fit.
function getAspectRatio(width, height) {
    var ratio = width / height;
    return (Math.abs(ratio - 4/3) < Math.abs(ratio - 16/9)) ? '4:3' : '16:9';
}
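For example:

getAspectRatio(1280, 1024); // '4:3' (1.25 is nearer to 4/3 than to 16/9)
getAspectRatio(1920, 1080); // '16:9'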
John Farey's best rational approximation algorithm with an adjustable level of fuzziness, ported to JavaScript from aspect ratio calculation code originally written in Python.
The method takes a float (width/height) and an upper limit for the fraction numerator/denominator.
In the example below I set an upper limit of 50 because I need 1035x582 (1.77835051546) to be treated as 16:9 (1.77777777778) rather than the 345:194 you get with the plain GCD algorithm listed in other answers.
function aspect_ratio(val, lim) {
    var lower = [0, 1];
    var upper = [1, 0];

    while (true) {
        var mediant = [lower[0] + upper[0], lower[1] + upper[1]];

        if (val * mediant[1] > mediant[0]) {
            if (lim < mediant[1]) {
                return upper;
            }
            lower = mediant;
        } else if (val * mediant[1] == mediant[0]) {
            if (lim >= mediant[1]) {
                return mediant;
            }
            if (lower[1] < upper[1]) {
                return lower;
            }
            return upper;
        } else {
            if (lim < mediant[1]) {
                return lower;
            }
            upper = mediant;
        }
    }
}
console.log(aspect_ratio(801/600, 50));
console.log(aspect_ratio(1035/582, 50));
console.log(aspect_ratio(2560/1441, 50));
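Since the function returns a [numerator, denominator] pair, formatting it in the 4:3 / 16:9 style the question asks for is just a join (my own convenience wrapper, not part of the ported code):

var pair = aspect_ratio(1035 / 582, 50);
console.log(pair.join(':')); // "16:9"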
Just in case you're a performance freak...

The fastest way (in JavaScript) to compute a rectangle ratio is to use a true binary Greatest Common Divisor algorithm.

(All speed and timing tests have been done by others; you can check one benchmark here: https://lemire.me/blog/2013/12/26/fastest-way-to-compute-the-greatest-common-divisor/)

Here it is:
/* the binary Greatest Common Divisor calculator */
function gcd (u, v) {
    if (u === v) return u;
    if (u === 0) return v;
    if (v === 0) return u;

    if (~u & 1)
        if (v & 1)
            return gcd(u >> 1, v);
        else
            return gcd(u >> 1, v >> 1) << 1;

    if (~v & 1) return gcd(u, v >> 1);

    if (u > v) return gcd((u - v) >> 1, v);

    return gcd((v - u) >> 1, u);
}

/* returns an array with the ratio */
function ratio (w, h) {
    var d = gcd(w, h);
    return [w/d, h/d];
}
/* example */
var r1 = ratio(1600, 900);
var r2 = ratio(1440, 900);
var r3 = ratio(1366, 768);
var r4 = ratio(1280, 1024);
var r5 = ratio(1280, 720);
var r6 = ratio(1024, 768);
/* will output this:
r1: [16, 9]
r2: [8, 5]
r3: [683, 384]
r4: [5, 4]
r5: [16, 9]
r6: [4, 3]
*/
Here is my solution. It's pretty straightforward, since all I care about is not necessarily GCD or even exact ratios: otherwise you get weird things like 345/113, which are not human comprehensible.

I basically set acceptable landscape or portrait ratios and their "value" as a float. I then compare my float version of the ratio to each, and whichever has the lowest absolute difference is the ratio closest to the item. That way, when the user makes it 16:9 but then removes 10 pixels from the bottom, it still counts as 16:9.
accepted_ratios = {
    'landscape': (
        (u'5:4', 1.25),
        (u'4:3', 1.33333333333),
        (u'3:2', 1.5),
        (u'16:10', 1.6),
        (u'5:3', 1.66666666667),
        (u'16:9', 1.77777777778),
        (u'17:9', 1.88888888889),
        (u'21:9', 2.33333333333),
        (u'1:1', 1.0)
    ),
    'portrait': (
        (u'4:5', 0.8),
        (u'3:4', 0.75),
        (u'2:3', 0.66666666667),
        (u'10:16', 0.625),
        (u'3:5', 0.6),
        (u'9:16', 0.5625),
        (u'9:17', 0.5294117647),
        (u'9:21', 0.4285714286),
        (u'1:1', 1.0)
    ),
}

def find_closest_ratio(ratio):
    lowest_diff, best_std = 9999999999, '1:1'
    layout = 'portrait' if ratio < 1.0 else 'landscape'
    for pretty_str, std_ratio in accepted_ratios[layout]:
        diff = abs(std_ratio - ratio)
        if diff < lowest_diff:
            lowest_diff = diff
            best_std = pretty_str
    return best_std

def extract_ratio(width, height):
    try:
        divided = float(width) / float(height)
        if divided == 1.0:
            return '1:1'
        return find_closest_ratio(divided)
    except TypeError:
        return None
You can always start by making a lookup table based on common aspect ratios (see https://en.wikipedia.org/wiki/Display_aspect_ratio), then simply do the division.
For real life problems, you can do something like below
let ERROR_ALLOWED = 0.05
let STANDARD_ASPECT_RATIOS = [
    [1, '1:1'],
    [4/3, '4:3'],
    [5/4, '5:4'],
    [3/2, '3:2'],
    [16/10, '16:10'],
    [16/9, '16:9'],
    [21/9, '21:9'],
    [32/9, '32:9'],
]
// Sort numerically - the default sort() compares elements as strings
let RATIOS = STANDARD_ASPECT_RATIOS.map(function(tpl){ return tpl[0] }).sort(function(a, b){ return a - b })
let LOOKUP = Object()
for (let i = 0; i < STANDARD_ASPECT_RATIOS.length; i++){
    LOOKUP[STANDARD_ASPECT_RATIOS[i][0]] = STANDARD_ASPECT_RATIOS[i][1]
}

/*
Find the closest value in a sorted array
*/
function findClosest(arrSorted, value){
    let closest = arrSorted[0]
    let closestDiff = Math.abs(arrSorted[0] - value)
    for (let i = 1; i < arrSorted.length; i++){
        let diff = Math.abs(arrSorted[i] - value)
        if (diff < closestDiff){
            closestDiff = diff
            closest = arrSorted[i]
        } else {
            return closest
        }
    }
    return arrSorted[arrSorted.length - 1]
}

/*
Estimate the aspect ratio based on width x height (order doesn't matter)
*/
function estimateAspectRatio(dim1, dim2){
    let ratio = Math.max(dim1, dim2) / Math.min(dim1, dim2)
    if (ratio in LOOKUP){
        return LOOKUP[ratio]
    }

    // Look by approximation
    let closest = findClosest(RATIOS, ratio)
    if (Math.abs(closest - ratio) <= ERROR_ALLOWED){
        return '~' + LOOKUP[closest]
    }
    return 'non standard ratio: ' + Math.round(ratio * 100) / 100 + ':1'
}
Then you simply give the dimensions in any order
estimateAspectRatio(1920, 1080) // 16:9
estimateAspectRatio(1920, 1085) // ~16:9
estimateAspectRatio(1920, 1150) // non standard ratio: 1.65:1
estimateAspectRatio(1920, 1200) // 16:10
estimateAspectRatio(1920, 1220) // ~16:10
As an alternative to the GCD search, I suggest you check against a set of standard values. You can find a list on Wikipedia.
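A minimal sketch of that idea (the list of standard values here is illustrative, not exhaustive):

var standards = { '4:3': 4/3, '16:10': 16/10, '16:9': 16/9, '21:9': 21/9 };

function nearestStandard(w, h) {
    var best = null, bestDiff = Infinity;
    for (var name in standards) {
        var diff = Math.abs(w / h - standards[name]);
        if (diff < bestDiff) { bestDiff = diff; best = name; }
    }
    return best;
}

nearestStandard(1366, 768); // "16:9"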
I'm assuming you're talking about video here, in which case you may also need to worry about the pixel aspect ratio of the source video. For example:

PAL DV comes in a resolution of 720x576, which naively looks close to 4:3 (it's actually 5:4 in square pixels). But depending on the pixel aspect ratio (PAR), the screen ratio can be either 4:3 or 16:9.
For more info have a look here http://en.wikipedia.org/wiki/Pixel_aspect_ratio
You can get a square pixel aspect ratio, and a lot of web video uses that, but you may want to watch out for the other cases.
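For illustration, the display aspect is the storage aspect multiplied by the PAR. With the commonly quoted PAL DV figures (exact PAR values vary by convention, so treat these as assumptions):

var storage = 720 / 576;          // 1.25, i.e. 5:4 in square pixels
var dar43 = storage * (16 / 15);  // = 4/3  -> displayed as 4:3
var dar169 = storage * (64 / 45); // = 16/9 -> displayed as 16:9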
Hope this helps
Mark
Based on the other answers, here is how I got the numbers I needed in Python:
from decimal import Decimal

def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)

def closest_aspect_ratio(width, height):
    g = gcd(width, height)
    x = Decimal(str(float(width) / float(g)))
    y = Decimal(str(float(height) / float(g)))
    dec = Decimal(str(x / y))
    return dict(x=x, y=y, dec=dec)
>>> closest_aspect_ratio(1024, 768)
{'y': Decimal('3.0'),
'x': Decimal('4.0'),
'dec': Decimal('1.333333333333333333333333333')}
function ratio(w, h) {
    function mdc(w, h) { // greatest common divisor ("mdc" in Portuguese)
        var resto;
        do {
            resto = w % h;
            w = h;
            h = resto;
        } while (resto != 0);
        return w;
    }

    var divisor = mdc(w, h);
    var width = w / divisor;
    var height = h / divisor;
    console.log(width + ':' + height);
}

ratio(1920, 1080);
I think this does what you are asking for:
webdeveloper.com - decimal to fraction
Width/height gets you a decimal; converting it to a fraction with ":" in place of "/" gives you a "ratio".
This algorithm in Python gets you part of the way there.

Tell me what happens if the window is a funny size.
Maybe what you should have is a list of all acceptable ratios (to the 3rd party component). Then, find the closest match to your window and return that ratio from the list.
A bit of a strange way to do this, but you could use the resolution as the aspect.
E.G.
1024:768
or you can try
var w = screen.width;
var h = screen.height;
for (var i = 1, asp = w/h; i < 5000; i++){
    if (asp * i % 1 == 0){
        document.write(asp * i, ":", 1 * i);
        break;
    }
}
In my case I want something like:
[10,5,15,20,25] -> [ 2, 1, 3, 4, 5 ]
function ratio(array){
    let min = Math.min(...array);
    let ratio = array.map((element) => {
        return element / min;
    });
    return ratio;
}

document.write(ratio([10,5,15,20,25])); // [ 2, 1, 3, 4, 5 ]
I believe that aspect ratio is width divided by height.
r = w/h
Width / Height?
I want to draw StackOverflow's logo with this Neural Network:
The NN should ideally become [r, g, b] = f([x, y]). In other words, it should return RGB colors for a given pair of coordinates. The FFNN works pretty well for simple shapes like a circle or a box. For example after several thousands epochs a circle looks like this:
Try it yourself: https://codepen.io/adelriosantiago/pen/PoNGeLw
However, since StackOverflow's logo is far more complex, even after several thousand iterations the FFNN's results are somewhat poor:
From left to right:
StackOverflow's logo at 256 colors.
With 15 hidden neurons: The left handle never appears.
50 hidden neurons: Pretty poor result in general.
0.03 as learning rate: Shows blue in the results (blue is not in the original image)
A time-decreasing learning rate: The left handle appears but other details are now lost.
Try it yourself: https://codepen.io/adelriosantiago/pen/xxVEjeJ
Some parameters of interest are the synaptic.Architect.Perceptron definition and the learningRate value.
How can I improve the accuracy of this NN?
Could you improve the snippet? If so, please explain what you did. If there is a better NN architecture to tackle this type of job could you please provide an example?
Additional info:
Artificial Neural Network library used: Synaptic.js
To run this example in your localhost: See repository
By adding another layer, you get better results:
let perceptron = new synaptic.Architect.Perceptron(2, 15, 10, 3)
There are also some small changes you can make that improve efficiency (marginally).

Here is my optimized code:
const width = 125
const height = 125
const outputCtx = document.getElementById("output").getContext("2d")
const iterationLabel = document.getElementById("iteration")
const stopAtIteration = 3000

let perceptron = new synaptic.Architect.Perceptron(2, 15, 10, 3)
let iteration = 0

let inputData = (() => {
    const tempCtx = document.createElement("canvas").getContext("2d")
    tempCtx.drawImage(document.getElementById("input"), 0, 0)
    return tempCtx.getImageData(0, 0, width, height)
})()

const getRGB = (img, x, y) => {
    var k = (height * y + x) * 4; // OK here only because width === height
    return [
        img.data[k] / 255, // R
        img.data[k + 1] / 255, // G
        img.data[k + 2] / 255, // B
        //img.data[k + 3], // Alpha not used
    ]
}

const paint = () => {
    var imageData = outputCtx.getImageData(0, 0, width, height)
    for (let x = 0; x < width; x++) {
        for (let y = 0; y < height; y++) {
            var rgb = perceptron.activate([x / width, y / height])
            var k = (height * y + x) * 4;
            imageData.data[k] = rgb[0] * 255
            imageData.data[k + 1] = rgb[1] * 255
            imageData.data[k + 2] = rgb[2] * 255
            imageData.data[k + 3] = 255 // Alpha not used
        }
    }
    outputCtx.putImageData(imageData, 0, 0)
    setTimeout(train, 0)
}

const train = () => {
    iterationLabel.innerHTML = ++iteration
    if (iteration > stopAtIteration) return

    let learningRate = 0.01 / (1 + 0.0005 * iteration) // Attempt with dynamic learning rate
    //let learningRate = 0.01 // Attempt with non-dynamic learning rate

    for (let x = 0; x < width; x += 1) {
        for (let y = 0; y < height; y += 1) {
            perceptron.activate([x / width, y / height])
            perceptron.propagate(learningRate, getRGB(inputData, x, y))
        }
    }
    paint()
}

const startTraining = (btn) => {
    btn.disabled = true
    train()
}
EDIT: I made another CodePen with even better results:

https://codepen.io/xurei/pen/KKzWLxg

It is likely overfitted, by the way.
The perceptron definition:
let perceptron = new synaptic.Architect.Perceptron(2, 8, 15, 7, 3)
Taking some insights from the lecture slides of Bhiksha Raj (from slide 62 onwards), and summarizing below:

Each node can be viewed as a linear classifier, and a combination of several nodes in a single layer of a neural network can approximate any basic shape. For example, a rectangle can be formed by 4 nodes, one per line, with the shape approximated by the final output layer.

By the same reasoning, a complex shape such as a circle may require infinitely many nodes in a layer. The same would likely hold for a single layer with two disjoint shapes (a non-overlapping triangle and rectangle). However, this can still be learnt using more than one hidden layer, where the first layer learns the basic shapes and the second layer approximates their disjoint combinations.

Thus, you can assume that this logo is a combination of disjoint rectangles (5 rectangles for orange and 3 rectangles for grey). We could use at least 32 nodes in the first hidden layer and a few nodes in the second hidden layer. However, we don't have control over what each node learns, so a few more neurons than strictly required should be helpful; see the sketch after this paragraph.
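For example, an architecture along those lines might look like this in Synaptic (the layer sizes follow the reasoning above but are untested guesses, not a verified configuration):

// 2 inputs (x, y), 32 nodes for basic shapes, 8 for their combinations, 3 outputs (r, g, b)
let perceptron = new synaptic.Architect.Perceptron(2, 32, 8, 3);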
How does the CSS filter contrast() work? Is there a formula? I want to reproduce it in JavaScript, and I need a formula.

For example, the CSS filter brightness(2) takes each pixel and multiplies it by 2, but for contrast I don't have any idea.

Thanks
Multiplying by 2 is a contrast filter. All multiplication and division of an image's RGB values affects the contrast.

The function I like to use is an exponential ease function where the power controls the contrast.
function contrastPixel(r, g, b, power) {
    r /= 255; // normalize channels
    g /= 255;
    b /= 255;
    var rr = Math.pow(r, power); // raise each to the power
    var gg = Math.pow(g, power);
    var bb = Math.pow(b, power);
    r = Math.floor((rr / (rr + Math.pow(1 - r, power))) * 255);
    g = Math.floor((gg / (gg + Math.pow(1 - g, power))) * 255);
    b = Math.floor((bb / (bb + Math.pow(1 - b, power))) * 255);
    return {r, g, b};
}
Using it:

var dat = ctx.getImageData(0, 0, 100, 100);
var data = dat.data;
var i = 0;
while (i < data.length){
    var res = contrastPixel(data[i], data[i + 1], data[i + 2], power);
    data[i++] = res.r;
    data[i++] = res.g;
    data[i++] = res.b;
    i++; // skip alpha
}
ctx.putImageData(dat, 0, 0);
The argument power controls the contrast.
power = 1; // no change to the image
0 < power < 1; // reduces contrast
1 < power; // increases contrast
Because the scaling of power is logarithmic, it can be hard to control with a linear slider. To give the slider a linear feel, use the following to get the power value from a slider.

For a slider with a min of -100, a max of 100 and a center of 0 (0 being no contrast change), get the contrast power value using:
power = Math.pow(((Number(slider.value)* 0.0498) + 5)/5,Math.log2(10));
It's not perfectly linear, and the range is limited but will cover most needs.
The test image shows the results. Bottom center is the original; the others, from left to right, use slider values of -100, -50, 50 and 100 on the scale described in the paragraph above.
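As an aside, if the goal is to reproduce CSS contrast() exactly rather than use the ease curve above: to the best of my knowledge, the Filter Effects spec defines contrast(amount) as a plain linear transfer per channel, with slope = amount and intercept = 0.5 - amount / 2 on normalized values. A sketch of that reading:

function cssContrastPixel(r, g, b, amount) {
    // (v/255 - 0.5) * amount + 0.5, clamped to 0..1, then scaled back to 0..255
    var f = function (v) {
        var x = (v / 255 - 0.5) * amount + 0.5;
        return Math.round(Math.min(1, Math.max(0, x)) * 255);
    };
    return { r: f(r), g: f(g), b: f(b) };
}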
So, I'm trying to implement a Hough transform; this version is 1-dimensional (an optimization that reduces all dimensions to one), based on the minor-axis properties.

Enclosed is my code, with a sample image... input and output.

The obvious question is: what am I doing wrong? I've triple-checked my logic and code and they look good, as do my parameters. But obviously I'm missing something.

Notice that the red pixels are supposed to be ellipse centers, while the blue pixels are edges to be removed (they belong to an ellipse that conforms to the mathematical equations).

Also, I'm not interested in using OpenCV / MATLAB / Octave / etc. (nothing against them).

Thank you very much!
var fs = require("fs"),
    Canvas = require("canvas"),
    Image = Canvas.Image;

var LEAST_REQUIRED_DISTANCE = 40, // LEAST required distance between 2 points, let's say the smallest ellipse minor
    LEAST_REQUIRED_ELLIPSES = 6, // number of found ellipses
    arr_accum = [],
    arr_edges = [],
    edges_canvas,
    xy,
    x1y1,
    x2y2,
    x0,
    y0,
    a,
    alpha,
    d,
    b,
    max_votes,
    cos_tau,
    sin_tau_sqr,
    f,
    new_x0,
    new_y0,
    any_minor_dist,
    max_minor,
    i,
    found_minor_in_accum,
    arr_edges_len,
    hough_file = 'sample_me2.jpg';

edges_canvas = drawImgToCanvasSync(hough_file); // make sure everything is black and white!
arr_edges = getEdgesArr(edges_canvas);
arr_edges_len = arr_edges.length;

var hough_canvas_img_data = edges_canvas.getContext('2d').getImageData(0, 0, edges_canvas.width, edges_canvas.height);
for (x1y1 = 0; x1y1 < arr_edges_len; x1y1++){
    if (arr_edges[x1y1].x === -1) { continue; }

    for (x2y2 = 0; x2y2 < arr_edges_len; x2y2++){
        if ((arr_edges[x2y2].x === -1) ||
            (arr_edges[x2y2].x === arr_edges[x1y1].x && arr_edges[x2y2].y === arr_edges[x1y1].y)) { continue; }

        if (distance(arr_edges[x1y1], arr_edges[x2y2]) > LEAST_REQUIRED_DISTANCE){
            x0 = (arr_edges[x1y1].x + arr_edges[x2y2].x) / 2;
            y0 = (arr_edges[x1y1].y + arr_edges[x2y2].y) / 2;
            a = Math.sqrt((arr_edges[x1y1].x - arr_edges[x2y2].x) * (arr_edges[x1y1].x - arr_edges[x2y2].x) +
                          (arr_edges[x1y1].y - arr_edges[x2y2].y) * (arr_edges[x1y1].y - arr_edges[x2y2].y)) / 2;
            alpha = Math.atan((arr_edges[x2y2].y - arr_edges[x1y1].y) / (arr_edges[x2y2].x - arr_edges[x1y1].x));

            for (xy = 0; xy < arr_edges_len; xy++){
                if ((arr_edges[xy].x === -1) ||
                    (arr_edges[xy].x === arr_edges[x2y2].x && arr_edges[xy].y === arr_edges[x2y2].y) ||
                    (arr_edges[xy].x === arr_edges[x1y1].x && arr_edges[xy].y === arr_edges[x1y1].y)) { continue; }

                d = distance({x: x0, y: y0}, arr_edges[xy]);

                if (d > LEAST_REQUIRED_DISTANCE){
                    f = distance(arr_edges[xy], arr_edges[x2y2]); // focus
                    cos_tau = (a * a + d * d - f * f) / (2 * a * d);
                    sin_tau_sqr = (1 - cos_tau * cos_tau); //Math.sqrt(1 - cos_tau * cos_tau); // getting sin out of cos
                    b = (a * a * d * d * sin_tau_sqr) / (a * a - d * d * cos_tau * cos_tau);
                    b = Math.sqrt(b);
                    b = parseInt(b.toFixed(0));
                    d = parseInt(d.toFixed(0));

                    if (b > 0){
                        found_minor_in_accum = arr_accum.hasOwnProperty(b);

                        if (!found_minor_in_accum){
                            arr_accum[b] = {f: f, cos_tau: cos_tau, sin_tau_sqr: sin_tau_sqr, b: b, d: d, xy: xy,
                                            xy_point: JSON.stringify(arr_edges[xy]), x0: x0, y0: y0, accum: 0};
                        }
                        else {
                            arr_accum[b].accum++;
                        }
                    } // b
                } // if2 - LEAST_REQUIRED_DISTANCE
            } // for xy

            max_votes = getMaxMinor(arr_accum);

            // ONE ellipse has been detected
            if (max_votes != null &&
                (max_votes.max_votes > LEAST_REQUIRED_ELLIPSES)){

                // output ellipse details
                new_x0 = parseInt(arr_accum[max_votes.index].x0.toFixed(0)),
                new_y0 = parseInt(arr_accum[max_votes.index].y0.toFixed(0));

                setPixel(hough_canvas_img_data, new_x0, new_y0, 255, 0, 0, 255); // Red centers

                // remove the pixels on the detected ellipse from the edge pixel array
                for (i = 0; i < arr_edges.length; i++){
                    any_minor_dist = distance({x: new_x0, y: new_y0}, arr_edges[i]);
                    any_minor_dist = parseInt(any_minor_dist.toFixed(0));
                    max_minor = b; //Math.max(b,arr_accum[max_votes.index].d); // between the max and the min

                    // coloring in blue the edges we don't need
                    if (any_minor_dist <= max_minor){
                        setPixel(hough_canvas_img_data, arr_edges[i].x, arr_edges[i].y, 0, 0, 255, 255);
                        arr_edges[i] = {x: -1, y: -1};
                    } // if
                } // for
            } // if - LEAST_REQUIRED_ELLIPSES

            // clear accumulated array
            arr_accum = [];
        } // if1 - LEAST_REQUIRED_DISTANCE
    } // for x2y2
} // for x1y1

edges_canvas.getContext('2d').putImageData(hough_canvas_img_data, 0, 0);

writeCanvasToFile(edges_canvas, __dirname + '/hough.jpg', function() {
});
function getMaxMinor(accum_in){
    var max_votes = -1,
        max_votes_idx,
        i,
        accum_len = accum_in.length;

    for (i in accum_in){
        if (accum_in[i].accum > max_votes){
            max_votes = accum_in[i].accum;
            max_votes_idx = i;
        } // if
    }

    if (max_votes > 0){
        return {max_votes: max_votes, index: max_votes_idx};
    }
    return null;
}

function distance(point_a, point_b){
    return Math.sqrt((point_a.x - point_b.x) * (point_a.x - point_b.x) + (point_a.y - point_b.y) * (point_a.y - point_b.y));
}

function getEdgesArr(canvas_in){
    var x,
        y,
        width = canvas_in.width,
        height = canvas_in.height,
        pixel,
        edges = [],
        ctx = canvas_in.getContext('2d'),
        img_data = ctx.getImageData(0, 0, width, height);

    for (x = 0; x < width; x++){
        for (y = 0; y < height; y++){
            pixel = getPixel(img_data, x, y);
            if (pixel.r !== 0 &&
                pixel.g !== 0 &&
                pixel.b !== 0){
                edges.push({x: x, y: y});
            }
        } // for
    } // for
    return edges;
} // getEdgesArr

function drawImgToCanvasSync(file) {
    var data = fs.readFileSync(file);
    var canvas = dataToCanvas(data);
    return canvas;
}

function dataToCanvas(imagedata) {
    img = new Canvas.Image();
    img.src = new Buffer(imagedata, 'binary');
    var canvas = new Canvas(img.width, img.height);
    var ctx = canvas.getContext('2d');
    ctx.patternQuality = "best";
    ctx.drawImage(img, 0, 0, img.width, img.height,
                  0, 0, img.width, img.height);
    return canvas;
}

function writeCanvasToFile(canvas, file, callback) {
    var out = fs.createWriteStream(file);
    var stream = canvas.createPNGStream();
    stream.on('data', function(chunk) {
        out.write(chunk);
    });
    stream.on('end', function() {
        callback();
    });
}

function setPixel(imageData, x, y, r, g, b, a) {
    index = (x + y * imageData.width) * 4;
    imageData.data[index + 0] = r;
    imageData.data[index + 1] = g;
    imageData.data[index + 2] = b;
    imageData.data[index + 3] = a;
}

function getPixel(imageData, x, y) {
    index = (x + y * imageData.width) * 4;
    return {
        r: imageData.data[index + 0],
        g: imageData.data[index + 1],
        b: imageData.data[index + 2],
        a: imageData.data[index + 3]
    };
}
It seems you are trying to implement the algorithm of Yonghong Xie and Qiang Ji (2002), "A New Efficient Ellipse Detection Method", vol. 2, p. 957.
Ellipse removal suffers from several bugs
In your code, you perform the removal of a found ellipse (step 12 of the original paper's algorithm) by resetting coordinates to {-1, -1}.
You need to add:
`if (arr_edges[x1y1].x === -1) break;`
at the end of the x2y2 block. Otherwise, the loop will consider -1, -1 as a white point.
More importantly, your algorithm consists of erasing every point whose distance to the center is smaller than b. b is supposed to be the minor axis half-length (per the original algorithm). But in your code, the variable b actually holds the latest (not the most frequent) half-length, and you erase points with a distance lower than b (instead of greater, since it's the minor axis). In other words, you clear all points inside a circle whose radius is the latest computed axis.
Your sample image can actually be processed by clearing all points inside a circle with a distance lower than the selected major axis, with:
max_minor = arr_accum[max_votes.index].d;
Indeed, you don't have overlapping ellipses and they are spread enough. Please consider a better algorithm for overlapping or closer ellipses.
The algorithm mixes major and minor axes
Step 6 of the paper reads:
For each third pixel (x, y), if the distance between (x, y) and (x0,
y0) is greater than the required least distance for a pair of pixels
to be considered then carry out the following steps from (7) to (9).
This clearly is an approximation. If you do so, you will end up considering points further away than the minor axis half-length, and eventually on the major axis (with axes swapped). You should make sure the distance between the considered point and the tested ellipse center is smaller than the currently considered major axis half-length (the condition should be d <= a). This will help with the ellipse-erasing part of the algorithm.
Also, if you compare against the least distance for a pair of pixels, as per the original paper, 40 is too large for the smaller ellipse in your picture. The comment in your code is wrong: it should be at most half the smallest ellipse's minor axis half-length.
LEAST_REQUIRED_ELLIPSES is too small
This parameter is also misnamed. It is the minimum number of votes an ellipse should get to be considered valid. Each vote corresponds to a pixel, so a value of 6 means that only 6+2 pixels make an ellipse. Since pixel coordinates are integers and you have more than one ellipse in your picture, the algorithm might detect ellipses that are not there, and eventually clear edges (especially when combined with the buggy ellipse-erasing algorithm). Based on tests, a value of 100 will find four of the five ellipses of your picture, while 80 will find them all. Smaller values will not find the proper centers of the ellipses.
Sample image is not black & white
Despite the comment, the sample image is not exactly black and white. You should convert it or apply some threshold (e.g., treat RGB values greater than 10 as white, instead of simply different from 0).
Diff of minimum changes to make it work is available here:
https://gist.github.com/pguyot/26149fec29ffa47f0cfb/revisions
Finally, please note that parseInt(x.toFixed(0)) could be rewritten as Math.floor(x), and you probably don't want to truncate all floats like this, but rather round them where needed: the algorithm that erases the ellipse from the picture would benefit from non-truncated values for the center coordinates. This code could definitely be improved further; for example, it currently computes the distance between points x1y1 and x2y2 twice.
I have a rectangular area that I want to fit a varying number of square items into. Here is an image to help with the problem.

Could anyone help me with a formula to calculate the width/height (Bw/Bh) of the items?

I tried √(W×H/N).

But with an example of W = 1400, H = 380, N = 16, that gave me 182. But 1400/182 only gives 7.7 boxes of width and 380/182 gives 2.08 of height. (Multiplied I get my 16, but I need them to fit within the area.)
Any ideas?
EDIT:
Getting closer, I think. What I really need to know is based around the aspect ratio and how to work out a grid that accommodates the items. E.g. 16 boxes below in 254 x 133 is 6 by 3.
EDIT:
I've now written the following code to work out the grid (JavaScript). The problem is that it uses a trial and error method.
var W = 254,
    H = 133,
    N = 16,
    Bh = H;

while (((Math.floor(W/Bh)) * (Math.floor(H/Bh))) < N){
    Bh--;
}

alert('Columns: ' + Math.floor(W/Bh) + ', Rows: ' + Math.floor(H/Bh) + ', Box width: ' + Bh);
See http://jsfiddle.net/GVp4X/ to test the code. I'm still certain there is a better way though.
You should define some meaningful constraint on the aspect ratio of the (small) boxes. For example, you can always divide the big box into N parts vertically or horizontally, but I don't think that is what you want to do. And for prime N, that is the only thing you can do. Would it be okay to add "padding" of empty boxes in this case?
EDIT:
If N is reasonably small, you can just loop through all possible values of w, the number of boxes per row, and minimize a suitable penalty function for wrong aspect ratio and number of unused boxes. Here's an example (in Matlab code):
N = 123;
target_aspect = 4/3;
W = 80;
H = 60;

min_F = inf;
for w=1:N,
    h = ceil(N/w);
    Bh = H/h;
    Bw = W/w;
    padding = h*w-N;
    aspect = Bw / Bh;
    %# The penalty function to minimize
    F = abs(aspect-target_aspect) + padding * 0.05;
    if F < min_F,
        min_F = F;
        best_w = w;
    end
end
EDIT2:
It is also possible to do this with a fixed aspect ratio if empty space ("ypadding") is allowed, for example, at the bottom margin. Then the loop body could be something like
Bw = W/w;
Bh = Bw/aspect;
h = floor(H/Bh);
n = w*h;
if n >= N,
    ypadding = H-Bh*h;
    padding = h*w-N;
    %# penalty function
    F = (ypadding/Bh)*0.3 + (padding / w)*0.2;
    if F < min_F,
        min_F = F;
        best_w = w;
    end
end
In this case the search range for w can also be reduced by solving a quadratic problem: since n ≈ w²·H·aspect/W must be at least N, only values w ≥ √(N·W/(H·aspect)) need to be tried.
I'm working with a canvas element with a height of 600 to 1000 pixels and a width of several tens or hundreds of thousands of pixels. However, after a certain number of pixels (evidently unknown), the canvas no longer displays shapes I draw with JS.
Does anyone know if there's a limit?
Tested both in Chrome 12 and Firefox 4.
Updated 10/13/2014
All tested browsers have limits to the height/width of canvas elements, but many browsers also limit the total area of the canvas element. The limits are as follows for the browsers I'm able to test:
Chrome:
Maximum height/width: 32,767 pixels
Maximum area: 268,435,456 pixels (e.g., 16,384 x 16,384)
Firefox:
Maximum height/width: 32,767 pixels
Maximum area: 472,907,776 pixels (e.g., 22,528 x 20,992)
IE:
Maximum height/width: 8,192 pixels
Maximum area: N/A
IE Mobile:
Maximum height/width: 4,096 pixels
Maximum area: N/A
Other:
I'm not able to test other browsers at this time. Refer to the other answers on this page for additional limits.
Exceeding the maximum length/width/area on most browsers renders the canvas unusable. (It will ignore any draw commands, even in the usable area.) IE and IE Mobile will honor all draw commands within the usable space.
The accepted answer is outdated and incomplete.
Browsers impose different canvas size limitations, but these limitations often change based on the platform and hardware available. This makes it difficult to make statements like "the maximum canvas [area/height/width] of [browser] is [value]" because [value] can change based on the operating system, available RAM, or GPU type.
There are two approaches to working with large HTML <canvas> elements:
Limit canvas dimensions to those known to work on all supported platforms.
Programmatically determine canvas limitations on the client before rendering.
Those looking to programmatically determine canvas limitations on the client should consider using canvas-size.
GitHub: https://github.com/jhildenbiddle/canvas-size
NPM: https://www.npmjs.com/package/canvas-size
From the docs:
The HTML canvas element is widely supported by modern and legacy
browsers, but each browser and platform combination imposes unique
size limitations that will render a canvas unusable when exceeded.
Unfortunately, browsers do not provide a way to determine what their
limitations are, nor do they provide any kind of feedback after an
unusable canvas has been created. This makes working with large canvas
elements a challenge, especially for applications that support a
variety of browsers and platforms.
This micro-library provides the maximum area, height, and width of an
HTML canvas element supported by the browser as well as the ability to
test custom canvas dimensions. By collecting this information before a
new canvas element is created, applications are able to reliably set
canvas dimensions within the size limitations of each
browser/platform.
Test results for a variety of platform and browser combinations are available here:
Test Results: https://github.com/jhildenbiddle/canvas-size#test-results
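Usage looks roughly like this; I'm quoting the API from memory, so treat the method name and callback signature as assumptions and check the README:

// Find the largest canvas area the current browser/platform supports
canvasSize.maxArea({
    onSuccess: function (width, height) {
        console.log('Max canvas area:', width, 'x', height);
    }
});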
Full disclosure, I am the author of the library. I created it back in 2014 and recently revisited the code for a new canvas-related project. I was surprised to find the same lack of available tools for detecting canvas size limitations in 2018 so I updated code, released it, and hope it helps others running into similar issues.
I've run into out-of-memory errors on Firefox with canvas heights greater than 8000; Chrome seems to handle much higher, at least up to 32000.
EDIT: After running some more tests, I've found some very strange errors with Firefox 16.0.2.
First, I seem to get different behavior from an in-memory canvas (created in JavaScript) as opposed to an HTML-declared canvas.
Second, if you don't have the proper html tag and meta charset, the canvas might be restricted to 8196, otherwise you can go up to 32767.
Third, if you get the 2d context of the canvas and then change the canvas size, you might be restricted to 8196 as well. Simply setting the canvas size before grabbing the 2d context allows you to have up to 32767 without getting memory errors.
I haven't been able to consistently get the memory errors, sometimes it's only on the first page load, and then subsequent height changes work. This is the html file I was testing with http://pastebin.com/zK8nZxdE.
iOS max canvas size (width x height):
iPod Touch 16GB = 1448x1448
iPad Mini = 2290x2289
iPhone 3 = 1448x1448
iPhone 5 = 2290x2289
Tested in March 2014.
To expand a bit on @FredericCharette's answer:

As per Safari's content guide, under the section "Know iOS Resource Limits":
The maximum size for a canvas element is 3 megapixels for devices with less than 256 MB RAM and 5 megapixels for devices with greater or equal than 256 MB RAM
Therefore, any size combination totaling at most 5242880 (5 x 1024 x 1024) pixels will work on large-memory devices; otherwise the limit is 3145728 pixels.
Example for 5 megapixel canvas (width x height):
Any total <= 5242880
--------------------
5 x 1048576 ~= 5MP (1048576 = 1024 x 1024)
50 x 104857 ~= 5MP
500 x 10485 ~= 5MP
and so on..
The largest SQUARE canvases are ("MiB" = 1024x1024 bytes):

device < 256 MiB     device >= 256 MiB     iPhone 6 [not confirmed]
-----------------    -----------------     ------------------------
<= 3145728 pixels    <= 5242880 pixels     <= 16 x 1024 x 1024 px
1773 x 1773          2289 x 2289           4096 x 4096
According to the w3 specs, the width/height interface is an unsigned long - so 0 to 4,294,967,295 (if I remember that number right -- might be off a few).

EDIT: Strangely, it says unsigned long, but testing shows a normal signed long value as the max: 2147483647. JSFiddle: 2147483647 works, but go up to 2147483648 and it reverts back to the default.
Even though the canvas will allow you to set height=2147483647, when you start drawing, nothing will happen.

Drawing happens only when I bring the height back to 32767.
iOS has different limits.
Using the iOS 7 simulator I was able to demonstrate the limit is 5MB like this:
var canvas = document.createElement('canvas');
canvas.width = 1024 * 5;
canvas.height = 1024;
alert(canvas.toDataURL('image/jpeg').length);
// prints "110087" - the expected length of the dataURL
but if I nudge the canvas size up by a single row of pixels:
var canvas = document.createElement('canvas');
canvas.width = 1024 * 5;
canvas.height = 1025;
alert(canvas.toDataURL('image/jpeg'));
// prints "data:," - a broken dataURL
On PC -

I don't think there is a hard restriction, but you can get an out-of-memory exception.

On mobile devices -

Here are the restrictions for the canvas on mobile devices:

The maximum size for a canvas element is 3 megapixels for devices with less than 256 MB RAM and 5 megapixels for devices with 256 MB RAM or more.

So for example - if you want to support Apple's older hardware, the size of your canvas cannot exceed 2048x1464.

Hope these resources help pull you out.
When you are using WebGL canvases, browsers (including desktop ones) impose extra limits on the size of the underlying buffer. Even if your canvas is big, e.g. 16,000x16,000, most browsers will render a smaller (let's say 4096x4096) picture and scale it up. That can cause ugly pixelation, etc.

I have written some code to determine the maximum size using exponential search, if anyone ever needs it. determineMaxCanvasSize() is the function you are interested in.
function makeGLCanvas()
{
    // Get a WebGL context
    var canvas = document.createElement('canvas');
    var contextNames = ["webgl", "experimental-webgl"];
    var gl = null;
    for (var i = 0; i < contextNames.length; ++i)
    {
        try
        {
            gl = canvas.getContext(contextNames[i], {
                // Used so that the buffer contains valid information, and bytes can
                // be retrieved from it. Otherwise, WebGL will switch to the back buffer
                preserveDrawingBuffer: true
            });
        }
        catch(e) {}
        if (gl != null)
        {
            break;
        }
    }
    if (gl == null)
    {
        alert("WebGL not supported.\nGlobus won't work\nTry using browsers such as Mozilla " +
              "Firefox, Google Chrome or Opera");
        // TODO: Expecting that the canvas will be collected. If that is not the case, it will
        // need to be destroyed somehow.
        return;
    }
    return [canvas, gl];
}

// From Wikipedia
function gcd(a, b) {
    a = Math.abs(a);
    b = Math.abs(b);
    if (b > a) { var temp = a; a = b; b = temp; }
    while (true) {
        if (b == 0) return a;
        a %= b;
        if (a == 0) return b;
        b %= a;
    }
}

function isGlContextFillingTheCanvas(gl) {
    return gl.canvas.width == gl.drawingBufferWidth && gl.canvas.height == gl.drawingBufferHeight;
}

// (See issue #2) All browsers reduce the size of the WebGL draw buffer for large canvases
// (usually over 4096px in width or height). This function uses a variant of binary search to
// find the maximum size for a canvas given the provided x to y size ratio.
//
// To produce exact results, this function expects an integer ratio. The ratio will be equal to:
// xRatio/yRatio.
function determineMaxCanvasSize(xRatio, yRatio) {
    // This function works experimentally, by creating an actual canvas and finding the maximum
    // value the browser allows.
    [canvas, gl] = makeGLCanvas();

    // Reduce the ratio to a minimum
    gcdOfRatios = gcd(xRatio, yRatio);
    [xRatio, yRatio] = [xRatio/gcdOfRatios, yRatio/gcdOfRatios];

    // If the browser cannot handle the minimum ratio, there is not much we can do
    canvas.width = xRatio;
    canvas.height = yRatio;
    if (!isGlContextFillingTheCanvas(gl)) {
        throw "The browser is unable to use WebGL canvases with the specified ratio: " +
              xRatio + ":" + yRatio;
    }

    // First find an upper bound
    var ratioMultiple = 1; // to maintain the exact ratio, we will keep the multiplier that
                           // resulted in the upper bound for the canvas size
    while (isGlContextFillingTheCanvas(gl)) {
        canvas.width *= 2;
        canvas.height *= 2;
        ratioMultiple *= 2;
    }

    // Search with minVal inclusive, maxVal exclusive
    function binarySearch(minVal, maxVal) {
        if (minVal == maxVal) {
            return minVal;
        }
        middle = Math.floor((maxVal - minVal)/2) + minVal;
        canvas.width = middle * xRatio;
        canvas.height = middle * yRatio;
        if (isGlContextFillingTheCanvas(gl)) {
            return binarySearch(middle + 1, maxVal);
        } else {
            return binarySearch(minVal, middle);
        }
    }

    ratioMultiple = binarySearch(1, ratioMultiple);
    return [xRatio * ratioMultiple, yRatio * ratioMultiple];
}
Also in a jsfiddle https://jsfiddle.net/1sh47wfk/1/
The limitations for Safari (all platforms) are much lower.
Known iOS/Safari Limitations
For example, I had a 6400x6400px canvas buffer with data drawn onto it. By tracing/exporting the content and by testing on other browsers, I was able to see that everything was fine. But on Safari, it would skip the drawing of this specific buffer onto my main context.
I tried to programmatically figure out the limit: setting the canvas size starting from 35000 and stepping down by 100 until a valid size is found. In every step it writes the bottom-right pixel and then reads it back. It works - with caution.

The speed is acceptable if either width or height is set to some low value (e.g. 10-200), like this: get_max_canvas_size('height', 20).

But if called without a width or height, as in get_max_canvas_size(), the created canvas is so big that reading a SINGLE pixel's color is very slow, and in IE it causes a serious hang.

If this kind of test could be done somehow without reading a pixel value, the speed would be acceptable.

Of course the easiest way to detect the maximum size would be a native way to query the max width and height. But Canvas is 'a living standard', so maybe it is coming some day.

http://jsfiddle.net/timo2012/tcg6363r/2/ (Be aware! Your browser may hang!)
if (!Date.now)
{
    Date.now = function now()
    {
        return new Date().getTime();
    };
}

var t0 = Date.now();
//var size = get_max_canvas_size('width', 200);
var size = get_max_canvas_size('height', 20);
//var size = get_max_canvas_size();
var t1 = Date.now();

var c = size.canvas;
delete size.canvas;
$('body').append('time: ' + (t1 - t0) + '<br>max size:' + JSON.stringify(size) + '<br>');
//$('body').append(c);

function get_max_canvas_size(h_or_w, _size)
{
    var c = document.createElement('canvas');
    if (h_or_w == 'height') h = _size;
    else if (h_or_w == 'width') w = _size;
    else if (h_or_w && h_or_w !== 'width' && h_or_w !== 'height' || !window.CanvasRenderingContext2D)
        return {
            width: null,
            height: null
        };

    var w, h;
    var size = 35000;
    var cnt = 0;

    if (h_or_w == 'height') w = size;
    else if (h_or_w == 'width') h = size;
    else
    {
        w = size;
        h = size;
    }

    if (!valid(w, h))
        for (; size > 10; size -= 100)
        {
            cnt++;
            if (h_or_w == 'height') w = size;
            else if (h_or_w == 'width') h = size;
            else
            {
                w = size;
                h = size;
            }
            if (valid(w, h)) break;
        }

    return {
        width: w,
        height: h,
        iterations: cnt,
        canvas: c
    };

    function valid(w, h)
    {
        var t0 = Date.now();
        var color, p, ctx;
        c.width = w;
        c.height = h;
        if (c && c.getContext)
            ctx = c.getContext("2d");
        if (ctx)
        {
            ctx.fillStyle = "#ff0000";
            try
            {
                ctx.fillRect(w - 1, h - 1, 1, 1);
                p = ctx.getImageData(w - 1, h - 1, 1, 1).data;
            }
            catch (err)
            {
                console.log('err');
            }
            if (p)
                color = p[0] + '' + p[1] + '' + p[2];
        }
        var t1 = Date.now();
        if (color == '25500')
        {
            console.log(w, h, true, t1 - t0);
            return true;
        }
        console.log(w, h, false, t1 - t0);
        return false;
    }
}
You could chunk it: in JavaScript, automatically add as many smaller canvases as needed and draw the elements on the appropriate canvas. You may still run out of memory eventually, but this would get you past the single-canvas limit.
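A rough sketch of that tiling idea (the names and the absolute-positioning approach are my own; tileSize should stay below a known-safe limit):

function makeTiles(totalW, totalH, tileSize) {
    var tiles = [];
    for (var y = 0; y < totalH; y += tileSize) {
        for (var x = 0; x < totalW; x += tileSize) {
            var c = document.createElement('canvas');
            c.width = Math.min(tileSize, totalW - x);
            c.height = Math.min(tileSize, totalH - y);
            c.style.position = 'absolute';
            c.style.left = x + 'px';
            c.style.top = y + 'px';
            tiles.push({ canvas: c, offsetX: x, offsetY: y });
        }
    }
    return tiles; // translate draw calls by -offsetX/-offsetY per tile
}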
I don't know how to detect the maximum possible size without iterating, but you can detect whether a given canvas size works by filling a pixel and then reading the colour back out. If the canvas has not rendered, the color you get back will not match.
partial code (working_context is assumed to be the 2d context of a canvas set to the size under test):

function rgbToHex(r, g, b) {
    if (r > 255 || g > 255 || b > 255)
        throw "Invalid color component";
    return ((r << 16) | (g << 8) | b).toString(16);
}

var test_colour = '8ed6ff';
working_context.fillStyle = '#' + test_colour;
working_context.fillRect(0, 0, 1, 1);
var colour_data = working_context.getImageData(0, 0, 1, 1).data;
var colour_hex = ("000000" + rgbToHex(colour_data[0], colour_data[1], colour_data[2])).slice(-6);
// if (colour_hex === test_colour) the canvas rendered at this size