I just stumbled across something really odd in JavaScript.
My script was multiplying 13596 by 0.1, and the output was: 1359.6000000000001
We can agree that 0.1 = 1/10, so I tried that instead:
13596/10 = 1359.6
I tested it in Firefox and Chrome; the same result occurs.
I wondered if it was related to floating point, so I tried the following:
13596 * parseFloat(0.1) = 1359.6000000000001
Nope.
By the way, these are not equal:
(13596*0.1) === (13596/10) => false
Does anyone have an idea about this result?
(Here's the JSFiddle.)
Remember that, with floating points, a number you see on-screen is not necessarily exactly the number the computer is modelling.
For example, using node:
> 1359.6
1359.6
> (1359.6).toFixed(20)
'1359.59999999999990905053'
Node shows you 1359.6, but when you ask for more precision, you can see that the real number is not exactly what you saw -- it was rounded. The same is true of 0.1:
> (0.1).toFixed(20)
'0.10000000000000000555'
Generally when you work with floats you accept this imprecision and round numbers before displaying them.
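For example, applying that to the numbers from the question (a minimal sketch, not in the original answer):
var product = 13596 * 0.1;
console.log(product);            // 1359.6000000000001
console.log(product.toFixed(1)); // "1359.6"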
Some numbers are exactly representable, but it's safest to assume that floating point is imprecise. For example, 0.5 can be represented exactly:
> (0.5).toFixed(20)
'0.50000000000000000000'
Dividing by 10 is just as imprecise; it merely rounds to a different neighbouring double, which is why the === comparison in the question is false:
> 13596/10
1359.6
> (13596/10).toFixed(20)
'1359.59999999999990905053'
In many other languages, division between two integers produces an integer; in JavaScript, however, all numbers are modelled as floating point.
Generally whenever you need to precisely represent decimal numbers you should use a decimal number type, though this is not available natively in JavaScript.
Also, there is no point in writing parseFloat(0.1): 0.1 is already a number, and parseFloat is meant for strings.
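If you need to compare results like the two expressions in the question, compare with a tolerance instead of ===. A sketch (the 1e-9 threshold is an arbitrary choice, not from the original answer):
var a = 13596 * 0.1;  // 1359.6000000000001
var b = 13596 / 10;   // displays as 1359.6, but is a different double
console.log(a === b);                // false
console.log(Math.abs(a - b) < 1e-9); // true -- equal for practical purposes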
Related
I have been trying to figure this floating-point problem out in javascript.
This is an example of what I want to do:
var x1 = 0;
for (var i = 0; i < 10; i++)
{
    x1 += 0.2;
}
However, in this form I get a rounding error: the sum goes 0.2 -> 0.4 -> 0.600...001, and so on.
I have tried parseFloat, toFixed and Math.round, as suggested in other threads, but none of them has worked for me. Can anyone make this work? I feel that I have run out of options.
You can almost always ignore the floating point "errors" while you're performing calculations - they won't make any difference to the end result unless you really care about the 17th significant digit or so.
You normally only need to worry about rounding when you display those values, for which .toFixed(1) would do perfectly well.
Whatever you do, you simply cannot coerce the number 0.6 into exactly that value. The closest IEEE 754 double-precision value is exactly 0.59999999999999997779553950749686919152736663818359375, which within JS's typical display precision appears as 0.5999999999999999778.
Indeed, JS can't even tell that 0.5999999999999999778 !== (e.g.) 0.5999999999999999300, since both literals map to the same binary representation.
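A quick illustration of both points (a sketch, not from the original answer; the printed values assume a standard IEEE 754 engine):
var sum = 0;
for (var i = 0; i < 10; i++) {
    sum += 0.2;
}
console.log(sum);            // 1.9999999999999998
console.log(sum.toFixed(1)); // "2.0" -- round only when displaying
console.log(0.5999999999999999778 === 0.5999999999999999300); // true -- same double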
To better understand how the rounding errors accumulate, and to get more insight into what is happening at a lower level, here is a small explanation.
I will assume that the IEEE 754 double-precision standard is used by the underlying software/hardware, with the default rounding mode (round to nearest even).
1/5 can be written in base 2 only with an infinitely repeating pattern:
0.00110011001100110011001100110011001100110011001100110011...
But in floating point, the significand - starting at the most significant 1 bit - has to be rounded to a finite number of bits (53).
So there is a small rounding error when representing 0.2 in binary:
0.0011001100110011001100110011001100110011001100110011010
Back to decimal representation, this rounding error corresponds to a small excess of 0.000000000000000011102230246251565404236316680908203125 above 1/5.
The first operation is then exact, because 0.2+0.2 is like 2*0.2 and thus does not introduce any additional error; it is just a shift of the fraction point:
0.0011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
---------------------------------------------------------
0.0110011001100110011001100110011001100110011001100110100
But of course, the excess above 2/5 is doubled: 0.00000000000000002220446049250313080847263336181640625
The third operation 0.2+0.2+0.2 will result in this binary number
0.011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
---------------------------------------------------------
0.1001100110011001100110011001100110011001100110011001110
But unfortunately, it requires 54 bits of significand (the span between leading 1 and trailing 1), so another rounding error is necessary to represent the result as a double:
0.10011001100110011001100110011001100110011001100110100
Notice that the number was rounded upward: in case of a perfect tie, floats are by default rounded to the nearest even. We already had an error by excess, so, bad luck, the successive errors accumulated rather than cancelled out...
So the excess above 3/5 is now 0.000000000000000088817841970012523233890533447265625
You could reduce this accumulation of errors a bit by using
x1 = i / 5.0
Since 5 is represented exactly in floating point (101.0 in binary; 3 significand bits are enough), and since that will also be the case for i (up to 2^53), there is a single rounding error when performing the division, and IEEE 754 then guarantees that you get the nearest possible representation.
For example 3/5.0 is represented as:
0.10011001100110011001100110011001100110011001100110011
Back to decimal, this value falls short of 3/5 by 0.00000000000000002220446049250313080847263336181640625.
Note that both errors are very tiny, but in the second case, 3/5.0, the error is four times smaller in magnitude than for 0.2+0.2+0.2.
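Here is a quick comparison of the two approaches (a sketch, not part of the original answer; printed values assume IEEE 754 doubles):
var acc = 0;
for (var i = 1; i <= 3; i++) {
    acc += 0.2;               // errors accumulate at each addition
}
console.log(acc);             // 0.6000000000000001
console.log(3 / 5);           // 0.6 -- a single rounding to the nearest double
console.log(acc === 3 / 5);   // false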
Depending on what you're doing, you may want to do fixed-point arithmetic instead of floating point. For example, if you are doing financial calculations in dollars with amounts that are always multiples of $0.01, you can switch to using cents internally, and then convert to (and from) dollars only when displaying values to the user (or reading input from the user). For more complicated scenarios, you can use a fixed-point arithmetic library.
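For instance, a minimal sketch of the cents approach (the names and amounts are illustrative):
var priceCents = 1999;                      // $19.99, held as integer cents
var totalCents = priceCents * 3;            // 5997 -- integer math is exact below 2^53
console.log((totalCents / 100).toFixed(2)); // "59.97" -- convert only for display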
I have what may be an edge case scenario. When trying to round the value 4.015 to 2 decimal places, I always end up with 4.01 instead of the expected 4.02. This happens consistently for all numbers with .015 as the decimal portion.
I round using a fairly common method in JS:
val = Math.round(val * 100) / 100;
I think the problem starts when multiplying by 100. The floating point inaccuracy causes this value to be rounded down rather than up.
var a = 4.015, // 4.015
mult = a * 100, // 401.49999999999994 (the issue)
round = Math.round(mult), // 401
result = round / 100; // 4.01 (expected 4.02)
Fiddle: http://jsfiddle.net/eVXRL/
This problem does not happen if I try to round 4.025. The expected value of 4.03 does return; it's only an issue with .015 (so far).
Is there a way to elegantly resolve this? There is of course the hack of just looking for .015 and handling that case one-off, but that just seems wrong!
I ended up using math.js to do mathematical operations and that solved all my floating point issues.
The advantage of this lib was that there was no need to instantiate any sort of Big Decimal object (even though the lib does support BigDecimal). It was just as simple as replacing Math with math and passing the precision.
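If a library feels like overkill, another well-known workaround (a sketch, not from this answer) is to shift the decimal point in string form, so the problematic 4.015 * 100 multiplication never happens:
function round2(val) {
    // "4.015" + "e2" parses to exactly 401.5, avoiding the
    // 401.49999999999994 that 4.015 * 100 produces.
    return Number(Math.round(Number(val + "e2")) + "e-2");
}
console.log(round2(4.015)); // 4.02
console.log(round2(4.025)); // 4.03
// Caveat: Math.round rounds halves toward +Infinity, so a negative
// input like -4.015 comes out as -4.01 rather than -4.02.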
Floating point numbers are not real numbers, they are floating point numbers.
There is an infinite number of real numbers, but only a finite number of bits to represent them; thus there must sometimes be a rounding error, whenever the exact number you want cannot be represented in the floating point system.
Thus, when dealing with floating point numbers, you must take into consideration that you won't have the exact same number you had in mind.
If you need an exact number, you should use a library that gives you better precision; usually it will use a fixed-point and/or symbolic representation.
More information can be found on the Wikipedia page, and in this (a bit complex, but important) article: What Every Computer Scientist Should Know About Floating-Point Arithmetic
If you are going to work with numbers as decimals, then use a decimal library, like big.js.
Floating point values in most languages (including JavaScript) are stored in a binary representation. Mostly, that does what you expect. In circumstances like this, your 4.015 is converted to binary and happens to get encoded as the 4.014999999... value you saw, which is the closest representation available in a double-precision (8-byte) IEEE 754 value.
If you are doing financial math, or math for human consumption (i.e. as decimals), then you will want 4.015 to round to 4.02, and you need a decimal library.
There are plans to include decimal representations of floating point values in JavaScript (e.g. here), since the newer IEEE 754-2008 standard includes decimal32 etc. as decimal floating-point representations. For more, read here: http://speleotrove.com/decimal/
Finally, if you are doing accounting maths in javascript (i.e. financial calculations which should not accidentally create or disappear money), then please do all calculations in whole cents/pence.
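For the decimal-library route, here is a minimal big.js sketch (an illustration, assuming big.js's documented Big(value) constructor and round(dp, roundingMode) method):
var Big = require('big.js');
var x = new Big('4.015');  // stored as exact decimal digits, so it really is 4.015
console.log(x.round(2, 1).toString()); // "4.02" -- rounding mode 1 is round-half-up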
You can use a regexp to extract and replace the digits to get what you want:
val = (val + "").replace(/^([0-9]+\.[0-9])([0-9])([0-9]).*$/,
    function (whole, head, lastdigit, followup) {
        // Bump the second decimal up when the third digit is 5 or more.
        // Note: there is no carry handling, so an input like 4.095
        // produces "4.0" + "10" = "4.010".
        if (followup >= 5) {
            return head + (parseInt(lastdigit, 10) + 1);
        }
        return head + lastdigit;
    });
Otherwise you can use val = val.toFixed(2), but for the specific value 4.015 it gives 4.01 (while 4.0151 gives 4.02, as "expected").
There is some problem that I can't understand.
Look at this code, please:
<script type="text/javascript">
    function math(x)
    {
        var y;
        y = x * 10;
        alert(y);
    }
</script>
<input type="button" onclick="math(0.011)">
What should be alerted after I click on the button?
I think 0.11, but no, it alerts
0.10999999999999999
Please explain this behavior.
Thanks in advance.
Ok. Let me try to explain it.
The basic thing to remember with floating point numbers is this: They occupy a limited amount of bits and try to represent the original number using base-2 arithmetic.
As you know, in base-2 arithmetic integers are represented by the powers of 2 that they contain. Thus, 6 would be represented as 4 + 2, ie. in binary as 110.
In order to understand how fractional numbers are represented, you have to think about how we represent fractional numbers in our decimal system. The fractional part of numbers (for example 0.11) is represented as multiples of inverse powers of 10 (since the base is 10). Thus 0.11 is actually 1/10 + 1/100. As you can appreciate, this is not powerful enough to represent all fractional numbers in a limited number of digits. For example, 1/3 would be 0.333333.... in a never ending fashion. If we had only 32 digits of space to write the number down, we would end up having only an approximation to the original number, 0.33333333333333333333333333333333. This number, for example, would give 0.99999999999999999999999999999999 if it was multiplied by 3 and not 1 as you would have expected.
The situation is similar in base-2. Each fractional number would be represented as multiples of inverse powers of 2. Thus 0.75 (in decimal) (ie 3/4) would be represented as 1/2 + 1/4, which would mean 0.11 (in base-2). Just as base 10 is not capable enough to represent every fractional number in a finite manner, base-2 cannot represent all fractional numbers given a limited amount of space.
Now, try to represent 0.11 in base-2; you start with 11/100 and try to find an inverse power of 2 that is just less than this number. 1/2 doesn't work, 1/4 neither, nor does 1/8. 1/16 fits the bill, so you mark a 1 in the 4th place after the decimal point and subtract 1/16 from 11/100. You are left with 19/400. Now try to find the next power of 2 that fits the description. 1/32 seems to be that one, mark the 5th place after the point and subtract 1/32 from 19/400, you get 13/800. Next one is 1/64 and you are left with 1/1600 thus the next one is all the way up at 1/2048, etc. etc. Thus we got as far as 0.00011100001 but it goes on and on; and you will see that there always is a fraction remaining. Now, I didn't go through the whole calculation, but after you have put in 32 binary digits after the dot you will still probably have some fraction left (and this is assuming that all of the 32 bits of space is spent representing the decimal part, which it is not). Thus, I am sure you can appreciate that the resulting number might differ from its actual value by some amount.
In your case, the difference is 0.00000000000000001 which is 1/100000000000000000 = 1/10^17 and I am sure that you can see why you might have that.
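You can see this directly in JavaScript (a sketch added for illustration; the exact digits assume a standard IEEE 754 engine):
// The binary expansion of the double actually stored for 0.11
// begins 0.000111000010100011110... and is cut off after 53 significant bits.
console.log((0.11).toString(2));
// Asking for more decimal digits than the default display exposes the error:
console.log((0.11).toFixed(20)); // "0.11000000000000000056"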
This is because you are dealing with floating point, and this is the expected behavior of floating point math.
What you need to do is format that number.
See this Java explanation, which also applies here, if you want to know why this is happening.
In JavaScript all numbers are represented as 64-bit floats, so you will run into this sort of thing often.
The quick overview of that article is that floating point tries to represent a range of values larger than would fit in 64 bits, so there is necessarily some imprecision in the representation, and this is what you are seeing.
With floating point numbers you get a representation of the number you try to encode. Usually it is a number that is very close to the original number. More information on encoding/storing floating point numbers can be found here.
Note:
If you show the value of x, it still displays as 0.011, because that is the shortest decimal string that maps back to the stored double; the rounding error only becomes visible after the multiplication by 10.
You can fix the number of decimals with this one:
// fl is a float with some number of decimals
// d is how many decimals you want to keep
function dec(fl, d) {
    var p = Math.pow(10, d);
    return Math.round(fl * p) / p;
}
Ex:
var n = 0.0012345;
console.log(dec(n,6)); // 0.001235
console.log(dec(n,5)); // 0.00123
console.log(dec(n,4)); // 0.0012
console.log(dec(n,3)); // 0.001
It works by first multiplying the float by 10^d (e.g. 1000 for three decimals, or 100 for two decimals), then rounding that, and finally dividing it back down to the original scale.
Math.pow(10, d) computes 10^d (so d = 3 gives us 1000).
In your case, use alert(dec(y, 2)); it should work.
I heard that you could right-shift a number by .5 instead of using Math.floor(). I decided to check its limits to make sure that it was a suitable replacement, so I checked the following values and got the following results in Google Chrome:
2.5 >> .5 == 2;
2.9999 >> .5 == 2;
2.999999999999999 >> .5 == 2; // 15 9s
2.9999999999999999 >> .5 == 3; // 16 9s
After some fiddling, I found out that the highest possible value starting with 2 which, when right-shifted by .5, would yield 2 is 2.9999999999999997779553950749686919152736663818359374999999¯ (with the 9 repeating) in Chrome and Firefox. The number is 2.9999999999999997779¯ in IE.
My question is: what is the significance of the number .0000000000000007779553950749686919152736663818359374? It's a very strange number and it really piqued my curiosity.
I've been trying to find an answer or at least some kind of pattern, but I think my problem lies in the fact that I really don't understand the bitwise operation. I understand the idea in principle, but shifting a bit sequence by .5 doesn't make any sense at all to me. Any help is appreciated.
For the record, the weird digit sequence changes with 2^x. Here are the highest possible values, in each range, that still truncate properly:
for 0: 0.9999999999999999444888487687421729788184165954589843749¯
for 1: 1.9999999999999999888977697537484345957636833190917968749¯
for 2-3: x+.99999999999999977795539507496869191527366638183593749¯
for 4-7: x+.9999999999999995559107901499373838305473327636718749¯
for 8-15: x+.999999999999999111821580299874767661094665527343749¯
...and so forth
Actually, you're simply ending up doing a floor() on the first operand, without any floating point operations going on. Since the left shift and right shift bitwise operations only make sense with integer operands, the JavaScript engine is converting the two operands to integers first:
2.999999 >> 0.5
Becomes:
Math.floor(2.999999) >> Math.floor(0.5)
Which in turn is:
2 >> 0
Shifting by 0 bits means "don't do a shift" and therefore you end up with the first operand, simply truncated to an integer.
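You can check this equivalence yourself (a sketch added for illustration; the negative-number caveat is mine, not the answer's):
console.log(2.999999 >> 0.5);       // 2 -- the 0.5 became 0, so the shift is a no-op
console.log(2.999999 >> 0);         // 2 -- identical
console.log(Math.floor(2.999999));  // 2
// Caveat: ToInt32 truncates toward zero, so for negative numbers
// the shift trick and Math.floor disagree:
console.log(-2.5 >> 0);             // -2
console.log(Math.floor(-2.5));      // -3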
The SpiderMonkey source code has:
switch (op) {
  case JSOP_LSH:
  case JSOP_RSH:
    if (!js_DoubleToECMAInt32(cx, d, &i))   // Same as Math.floor()
        return JS_FALSE;
    if (!js_DoubleToECMAInt32(cx, d2, &j))  // Same as Math.floor()
        return JS_FALSE;
    j &= 31;
    d = (op == JSOP_LSH) ? i << j : i >> j;
    break;
The "rounding up" you see with certain numbers is due to the fact that the JavaScript engine can't handle decimal digits beyond a certain precision, and therefore your number ends up getting rounded up to the next integer. Try this in your browser:
alert(2.999999999999999);
You'll get 2.999999999999999. Now try adding one more 9:
alert(2.9999999999999999);
You'll get a 3.
This is possibly the single worst idea I have ever seen. Its only possible purpose for existing is winning an obfuscated code contest. There's no significance to the long numbers you posted -- they're an artifact of the underlying floating-point implementation, filtered through god-knows how many intermediate layers. Bit-shifting by a fractional number of bits is insane, and I'm surprised it doesn't raise an exception -- but that's Javascript, always willing to redefine "insane".
If I were you, I'd avoid ever using this "feature". Its only value is as a possible root cause for an unusual error condition. Use Math.floor() and take pity on the next programmer who will maintain the code.
Confirming a couple of suspicions I had when reading the question:
Right-shifting any fractional number x by any fractional number y will simply truncate x, giving the same result as Math.floor() while thoroughly confusing the reader.
2.999999999999999777955395074968691915... is simply the largest number that can be differentiated from "3". Try evaluating it by itself -- if you add anything to it, it will evaluate to 3. This is an artifact of the browser and local system's floating-point implementation.
If you wanna go deeper, read "What Every Computer Scientist Should Know About Floating-Point Arithmetic": https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Try this javascript out:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359374999999"));
Then try this:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359375"));
What you are seeing is simple floating point inaccuracy. For more information about that, see this for example: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
The basic issue is that the closest a floating point value can get to representing the second number is greater than or equal to 3, whereas the closest a float can get to the first number is strictly less than three.
As for why right shifting by 0.5 does anything sane at all, it seems that 0.5 is just itself getting converted to an int (0) beforehand. Then the original float (2.999...) is getting converted to an int by truncation, as usual.
I don't think your right shift is relevant. You are simply beyond the resolution of a double precision floating point constant.
In Chrome:
var x = 2.999999999999999777955395074968691915273666381835937499999;
var y = 2.9999999999999997779553950749686919152736663818359375;
document.write("x=" + x);
document.write(" y=" + y);
Prints out: x = 2.9999999999999996 y=3
The shift right operator only operates on integers (both sides). So, shifting right by .5 bits should be exactly equivalent to shifting right by 0 bits. And, the left hand side is converted to an integer before the shift operation, which does the same thing as Math.floor().
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
Good guess, but no cigar.
As a double-precision FP number has 53 significand bits, the last FP number before 3 is exactly: 2.999999999999999555910790149937383830547332763671875
But why is the threshold 2.9999999999999997779553950749686919152736663818359375 (and this is exact, not 49999...!), which is higher than that last representable value? Rounding. The conversion routine (string to number) is simply, and correctly, programmed to round the input to the nearest floating point number:
2.999999999999999555910790149937383830547332763671875
.......(values between, increasing) -> round down
2.9999999999999997779553950749686919152736663818359375
....... (values between, increasing) -> round up to 3
3
The conversion input must use full precision. If the number is exactly halfway between those two fp numbers (which is the case for 2.9999999999999997779553950749686919152736663818359375), the rounding depends on the flags that are set. The default rounding is round-to-nearest-even, meaning the tie is broken toward the neighbour whose last significand bit is even (zero).
Now
3 = 11. (binary)
2.999... = 10.11111111111...... (binary)
All bits are set, so the number is always odd. That means the exact halfway value will be rounded up (to the even 3), which is why the threshold you found ends in the strange .....49999 period: it must be smaller than the exact half to be distinguishable from 3.
And to add to John's answer, the odds of this being more performant than Math.floor are vanishingly small.
I don't know if JavaScript uses floating-point numbers or some kind of infinite-precision library, but either way, you're going to get rounding errors on an operation like this -- even if it's pretty well defined.
It should be noted that the number ".0000000000000007779553950749686919152736663818359374" is quite possibly the Epsilon, defined as "the smallest number E such that (1+E) > 1."
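For reference (an addition, not part of the original answer), modern JavaScript exposes the machine epsilon directly, so you can compare it against the tail digits above:
console.log(Number.EPSILON);          // 2.220446049250313e-16 (2^-52)
// One ULP in the interval [2, 4) is twice that:
console.log(3 - 2.9999999999999996);  // 4.440892098500626e-16 (2^-51)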