I have used this code to try to get the exact RGB value of a color:
var huePixel = HueCanvas.css('background-color').match(/^rgb\((\d+),\s*(\d+),\s*(\d+)\)$/); // ["rgb(0, 70, 255)", "0", "70", "255"]
var svPixel = SVCanvas.get(0).getContext("2d").getImageData(satPos,valPos,1,1).data;
//opacity*original + (1-opacity)*background = resulting pixel
var opacity =(svPixel[3]/255);
var r =parseInt((opacity*svPixel[0])+((1-opacity)*huePixel[1]));
var g =parseInt((opacity*svPixel[1])+((1-opacity)*huePixel[2]));
var b =parseInt((opacity*svPixel[2])+((1-opacity)*huePixel[3]));
The problem is that in some pixels the RGB is not exactly the same. If I use Math.round instead of parseInt there are even more problems, and more pixels end up slightly off from the real values.
I know that the problem is in var opacity = (svPixel[3] / 255);, but I don't know how to rewrite the equation to avoid those problems.
Thanks for your attention.
I don't know the definite answer to your question (I'm not even sure I understand the question itself), but I'll take a shot.
It appears that you're trying to calculate the RGB value that you see when something else (the browser?) blends a non-opaque canvas on top of opaque background. (Are you sure this is the right thing to do at all?)
First, please don't use parseInt to round a number. It's used to parse strings and you should use it to convert huePixel[i] to an integer: parseInt(huePixel[i], 10) (note that I specify the radix explicitly to avoid numbers being parsed as octal).
To round values, you should use Math methods: Math.round (to closest integer), Math.ceil (round up) or Math.floor (round down).
Maybe the problem you're having is caused by rounding errors (hard to say without the specific inputs and expected outputs of the calculation). To minimize the rounding errors, you could try rewriting the formula like this:
(opacity * svPixel[0]) + ((1-opacity) * huePixel[1]) =
huePixel[1] + opacity * (svPixel[0]-huePixel[1]) =
huePixel[1] + svPixel[3] * (svPixel[0]-huePixel[1]) / 255
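For example, a minimal sketch of that last form for the red channel, assuming huePixel and svPixel from the question's code (the hueR name is mine):

// Sketch: blend one channel with the rewritten formula; parseInt with
// an explicit radix converts the matched string, Math.round rounds once
// at the end so intermediate results stay exact integer arithmetic.
var hueR = parseInt(huePixel[1], 10);
var r = Math.round(hueR + svPixel[3] * (svPixel[0] - hueR) / 255);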
I am trying to achieve what was asked in this question. In most scenarios it works fine with the following code:
Math.round(num * 100) / 100
But I encountered a situation where it fails due to a very odd behavior. The number I'm trying to round is 1.275. I am expecting this to be rounded to 1.28, but it is being rounded to 1.27. The reason behind this is the fact that num * 100 is resulting in 127.49999999999999 instead of 127.5.
Why is this happening and is there a way around it?
EDIT
Yes, this question is probably the root cause of the issue I'm facing, but the solutions suggested in its accepted answer are not working for me. The desired end result is a rounded number that displays correctly. So basically I'm trying to achieve what that answer instructs (quoted below), but I cannot figure out how.
If you just don’t want to see all those extra decimal places: simply format your result rounded to a fixed number of decimal places when displaying it.
You can use toFixed to ensure the precision that you're looking for:
var x = 1.275;
var y = x * 100;
y = y.toFixed(1); // y = "127.5" (note that toFixed returns a string)
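Building on that, a rough sketch of a helper (the round2 name is my own) that uses toFixed to strip the binary artifact before the final rounding:

// Hypothetical helper: toFixed(10) turns 127.49999999999999 into
// "127.5000000000", so Math.round then sees the value the decimal
// math intended, and round2(1.275) gives 1.28 instead of 1.27.
function round2(num) {
  return Math.round(Number((num * 100).toFixed(10))) / 100;
}
round2(1.275); // 1.28

This trades the binary edge case for a decimal one (a value genuinely ending in a long run of ...49999 would be nudged up), so treat it as a display-rounding convenience rather than exact arithmetic.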
I want to use toPrecision() to reduce the size of a number before I display it. However, I sometimes cannot multiply the output of the function by another number without gaining a small rounding error. See the code sample below:
var x = 0.0197992182093305;
alert(x.toPrecision(4)); // Correct: 0.01980
alert(Number(x.toPrecision(4))); // Correct: 0.0198
alert(Number(x.toPrecision(4)) * 100); // Incorrect: 1.9800000000000002
JSFiddle: http://jsfiddle.net/ueLsL460/4/ What's going on here?
Based on what I understand, Number(x.toPrecision(4)) * 100 produces a new number value, which does not carry over the display precision of its source.
If you still want a fixed precision after doing math on it, you need to apply toPrecision again.
alert((x * 100).toPrecision(4));
Technically, it's not an error. It's just the way JavaScript is supposed to work.
Using primitive wrapper constructors such as Number() is not ideal unless you are doing something trivial. Can you try the code in the following fiddle and see if it works for you?
http://jsfiddle.net/ueLsL460/5/
var x = 0.0197992182093305;
alert((x * 100).toPrecision(4));
JavaScript doesn't have a decimal type - in your case floating-point numbers are used. What this means is that Number(x.toPrecision(4)) doesn't give you exactly 0.0198, but the floating-point binary number closest to 0.0198. So any arithmetic you do will be subject to the loss of precision inherent in floating-point arithmetic. You can see the same effect if you do
alert(0.0198 * 100);
By the way
alert(0.0198 * 10 * 10);
gives you no problem (the loss of precision does not build up enough to reach the digits JavaScript deems worth displaying) - but that holds for this particular number alone.
I use KineticJS to do a lot of transformations, like the following:
var baseX = pos.x;
var baseY = pos.y;
var w = self.getWidth();
var h = self.getHeight();
var halfW = w / 2;
var halfH = h / 2;
var aspectRadian = Math.atan2(halfH, halfW);
Pretty quickly your int numbers turn into doubles. So how do I deal with that? For example, when I have to set the position of a Rect, I do
rect.x(10);
Since there are no half pixels, the following does not make much sense:
rect.x(10.3);
I guess that KineticJS rounds internally, or maybe it works with doubles.
Is it better to use int numbers or doubles when working with KineticJS? Does one or the other lead to performance problems or rounding errors? Should I use doubles all the time to be as precise as possible?
If you just plan on making calculations that use only pixels, then sticking to float (or double) is fine and shouldn't produce any issues (except for minor anti-aliasing, if that's an issue at all for you). But if, say, you're going to try to convert pixels into different measurements (e.g. inches), then you might find Javascript's floating point precision working against you.
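If the anti-aliasing does bother you, a minimal sketch (assuming a KineticJS-style rect with the x() setter and the variables from the question) is to keep the math in doubles and snap only when a value becomes a screen coordinate:

// Keep intermediate math in doubles; round at the last moment.
var newX = baseX + halfW * Math.cos(aspectRadian);
rect.x(Math.round(newX)); // whole pixels keep edges crisp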
I am trying to create a custom linear congruential generator (LCG) in JavaScript (the one used in glibc).
Its parameters, as stated on Wikipedia, are: m = 2^31, a = 1103515245, c = 12345.
Now I am getting the next seed value with
x = (1103515245 * x + 12345) % 0x80000000 ; // (The same as &0x7fffffff)
The generator seems to work, but when the numbers are tested on canvas:
cx = (x & 0x3fffffff) % canvasWidth; // Coordinate x (the same for cy)
They seem to be horribly biased: http://jsfiddle.net/7VmR9/3/show/
Why does this happen? By choosing a different modulo, the result of a visual test looks much better.
The testing JSFiddle is here: http://jsfiddle.net/7VmR9/3/
Update
At last I fixed the transformation to canvas coordinates as in this formula:
var cx = ((x & 0x3fffffff)/0x3fffffff*canvasWidth)|0
Now the pixel coordinates are not malformed the way they were with the modulo operation.
Updated fiddle: http://jsfiddle.net/7VmR9/14/
For the generator the formula is (you forgot a modulus in the first part):
current = ((multiplier * current + addend) % modulus) / modulus
I realize that you are trying to optimize it, so I updated the fiddle with this as a basis for your optimizations:
http://jsfiddle.net/AbdiasSoftware/7VmR9/12/
Yes, it looks like you solved it. I've done the same thing.
A linear congruential generator is in the form:
seed = (seed * factor + offset) % range;
But, most importantly, when obtaining an actual random number from it, the following does not work:
random = seed % random_maximum;
This won't work because the second modulus seems to counteract the effect of the generator. Instead, you need to use:
random = Math.floor(seed / range * random_maximum);
(This would be a random integer; remove the Math.floor call to obtain a random float.)
Lastly, I will warn you: in JavaScript, integer math is exact only up to 2^53, and the intermediate products of this LCG exceed that, so there is a loss of precision. Thus, the results of your LCG may be random, but they most likely won't match the results of the same LCG implemented in C++ or another low-level language that actually supports dword math.
Also due to imprecision, the cycle of the LCG is liable to be greatly reduced. So, for instance, the cycle of the glibc LCG you reference should be about 2 billion (2^31; that is, it will generate about 2 billion random numbers before starting over and re-generating the exact same sequence). This JavaScript implementation may only get 1 billion, or perhaps far less, because when multiplying by the factor, the intermediate product exceeds 2^53 and loses precision in doing so.
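As an aside, one sketch of a way around that precision loss in modern JavaScript is Math.imul, which multiplies as exact 32-bit integers; since 2^31 divides 2^32, keeping only the low 32 bits of the product and masking with 0x7fffffff still yields the correct residue mod 2^31 (the function name below is mine):

// Hypothetical sketch: glibc-parameter LCG without precision loss.
function makeGlibcLcg(seed) {
  var state = seed >>> 0;
  return function () {
    // (1103515245 * state + 12345) mod 2^31, without losing bits:
    // Math.imul returns the product modulo 2^32 as a 32-bit int.
    state = (Math.imul(1103515245, state) + 12345) & 0x7fffffff;
    return state;
  };
}

// Usage: scale by dividing by the range, not with a second modulus.
var next = makeGlibcLcg(42);
var cx = Math.floor(next() / 0x80000000 * canvasWidth);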
To calculate the value of the derivative of a given function Func at a given point x with good precision, one might think of doing this:
a = Func(x - Number.MIN_VALUE)
b = Func(x + Number.MIN_VALUE)
return (b - a) / (2 * Number.MIN_VALUE)
However, both x + Number.MIN_VALUE and x - Number.MIN_VALUE simply return x in JavaScript.
I tried different values: 1 + 1e-15 returns 1.000000000000001, but trying to get more precision, 1 + 1e-16 returns 1. So I would have to use 1e-15 instead of Number.MIN_VALUE, which is 5e-324.
Is there a way to get a better precision in this case in javascript?
This is not at all about JavaScript, actually.
Reducing the separation between the points will not increase the precision of your derivative. The values of the function at the two nearby points are computed with some error in the last digits, and that error will be much larger than the point separation. You then divide a very small difference, which has a huge relative error, by a very small number, and most likely get complete rubbish.
The best way to get a better value of the derivative is to evaluate the function at multiple points (with not too small a separation), construct an approximating polynomial, and differentiate it at the point of interest.
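As one concrete instance of that approach, here is a rough sketch: the five-point stencil below is exactly the derivative of the degree-4 polynomial interpolating Func at x - 2h, ..., x + 2h, with a step h kept well above machine epsilon (the helper name and the default step are my choices):

function derivative(Func, x, h) {
  // Moderate step: far larger than Number.EPSILON, scaled to |x|.
  h = h || 1e-5 * Math.max(1, Math.abs(x));
  // Five-point central difference = derivative of the interpolating
  // degree-4 polynomial at x.
  return (Func(x - 2 * h) - 8 * Func(x - h)
        + 8 * Func(x + h) - Func(x + 2 * h)) / (12 * h);
}

derivative(Math.sin, 0); // ≈ 1 (the true value is cos(0) = 1)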