Floating-point error mess - JavaScript

I have been trying to figure this floating-point problem out in JavaScript.
This is an example of what I want to do:
var x1 = 0;
for (var i = 0; i < 10; i++) {
    x1 += 0.2;
}
However, in this form I get a rounding error: 0.2 -> 0.4 -> 0.600...001 and so on.
I have tried parseFloat, toFixed and Math.round suggested in other threads, but none of them has worked for me. Is there anyone who could make this work? I feel that I have run out of options.

You can almost always ignore the floating point "errors" while you're performing calculations - they won't make any difference to the end result unless you really care about the 17th significant digit or so.
You normally only need to worry about rounding when you display those values, for which .toFixed(1) would do perfectly well.
Whatever happens, you simply cannot coerce the number 0.6 into exactly that value. The closest IEEE 754 double precision value is exactly 0.59999999999999997779553950749686919152736663818359375, which JS displays as 0.5999999999999999778 when asked for that many digits.
Indeed, JS can't even tell that 0.5999999999999999778 differs from, e.g., 0.5999999999999999300, since both literals parse to the same binary representation.
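For instance, the loop from the question only needs rounding at output (a quick sketch; console output shown in the comments):
var x1 = 0;
for (var i = 0; i < 10; i++) {
    x1 += 0.2;
}
console.log(x1);            // 1.9999999999999998 - the accumulated error
console.log(x1.toFixed(1)); // "2.0" - rounded only for display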

To better understand how the rounding errors accumulate, and to get more insight into what is happening at a lower level, here is a small explanation:
I will assume that the IEEE 754 double precision standard is used by the underlying software/hardware, with the default rounding mode (round to nearest, ties to even).
1/5 can be written in base 2 only with an infinitely repeating pattern:
0.00110011001100110011001100110011001100110011001100110011...
But in floating point, the significand - starting at the most significant 1 bit - has to be rounded to a finite number of bits (53)
So there is a small rounding error when representing 0.2 in binary:
0.0011001100110011001100110011001100110011001100110011010
Back in decimal representation, this rounding error corresponds to a small excess of 0.000000000000000011102230246251565404236316680908203125 above 1/5
The first addition is then exact, because 0.2+0.2 is the same as 2*0.2 and thus introduces no additional error; it just shifts the fraction point:
0.0011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
---------------------------------------------------------
0.0110011001100110011001100110011001100110011001100110100
But of course, the excess above 2/5 is doubled: 0.00000000000000002220446049250313080847263336181640625
The next addition, 0.2+0.2+0.2, will result in this binary number
0.011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
---------------------------------------------------------
0.1001100110011001100110011001100110011001100110011001110
But unfortunately, it requires 54 bits of significand (the span from the leading 1 to the trailing 1), so another rounding error is needed to represent the result as a double:
0.10011001100110011001100110011001100110011001100110100
Notice that the number was rounded upward, because by default floats are rounded to nearest, ties to even. We already had an error by excess, so bad luck: the successive errors accumulated rather than cancelled...
So the excess above 3/5 is now 0.000000000000000088817841970012523233890533447265625
You could reduce this accumulation of errors a bit by using
x1 = i / 5.0
Since 5 is represented exactly in floating point (101.0 in binary; 3 significand bits are enough), and since the same is true of i (up to 2^53), there is a single rounding error when performing the division, and IEEE 754 guarantees that you get the nearest possible representation.
For example 3/5.0 is represented as:
0.10011001100110011001100110011001100110011001100110011
Back in decimal, this value falls 0.00000000000000002220446049250313080847263336181640625 below 3/5.
Note that both errors are very tiny, but the error of 3/5.0 is four times smaller in magnitude than that of 0.2+0.2+0.2.
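For instance, the original loop could be rewritten so each value is derived from the integer counter (a small sketch of that idea):
var x1 = 0;
for (var i = 1; i <= 10; i++) {
    x1 = i / 5; // one correctly-rounded operation per value
}
console.log(x1); // 2 - exact, because 10/5 divides evenly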

Depending on what you're doing, you may want to do fixed-point arithmetic instead of floating point. For example, if you are doing financial calculations in dollars with amounts that are always multiples of $0.01, you can switch to using cents internally, and then convert to (and from) dollars only when displaying values to the user (or reading input from the user). For more complicated scenarios, you can use a fixed-point arithmetic library.
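A minimal sketch of the cents idea (toCents and toDollars are illustrative helper names, not a library API):
function toCents(dollars) {
    return Math.round(dollars * 100); // rounding removes the tiny FP error
}
function toDollars(cents) {
    return (cents / 100).toFixed(2);  // convert back only for display
}

var total = toCents(0.10) + toCents(0.20); // 30 - exact integer arithmetic
console.log(toDollars(total));             // "0.30"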

Related

JavaScript rounding error

I have a problem with rounding numbers.
x = 0.175;
console.log(x.toFixed(2));
// RESULT: 0.17
x = 1.175;
console.log(x.toFixed(2));
// RESULT: 1.18
x = 2.175;
console.log(x.toFixed(2));
// RESULT: 2.17
Why isn't X.175 (for X != 1) rounded to X.18?
The problem here is that 0.175 is a repeating decimal in binary (specifically, after a short prefix, it settles down to a repeating 0011 pattern). When represented in a finite floating point representation, this repeating pattern gets truncated. When you change the integer part from 0 to 1 to 2, you are adding one additional bit each time to the integer part of the number, which pushes off one trailing bit. Depending on what bit value gets pushed off, that can change the rounded value enough to affect the visible result. Note that after 2.175, the next change in rounding behavior doesn't occur until 8.175 (after two more low-order bits have been pushed off the representation).
This is the reason behind it:
Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation.
Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits.
In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation.
x = 0.175;
console.log(x.toFixed(20));
// RESULT: 0.17499999999999999334
x = 1.175;
console.log(x.toFixed(20));
// RESULT: 1.17500000000000004441
x = 2.175;
console.log(x.toFixed(20));
// RESULT: 2.17499999999999982236
This rounding error is the characteristic feature of floating-point computation.
Source : http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
JavaScript has plenty of rounding problems; they are the result of binary machines trying to represent decimal fractions in a binary number system. There are always inaccuracies. Sometimes a 5 is rounded up and other times it is rounded down. It's talked about in these articles or topics:
http://www.jacklmoore.com/notes/rounding-in-javascript/
Avoiding problems with JavaScript's weird decimal calculations
How to deal with floating point number precision in JavaScript?
Even more precise control over the floating point representation in JavaScript doesn't fix the issue:
> x=2175e-3; x.toFixed(2);
"2.17"
> x=1175e-3; x.toFixed(2);
"1.18"
In cases where it's super important to get predictable results, at least one of these articles suggests using a technique called "epsilon estimation", which is actually at the heart of several definitions in calculus. To learn that fix is probably to learn a lot more than you bargained for.
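A minimal sketch of such an epsilon comparison (nearlyEqual is an illustrative name; Number.EPSILON is the built-in gap between 1 and the next representable double):
function nearlyEqual(a, b) {
    // equal if the difference is negligible relative to the magnitudes
    return Math.abs(a - b) <= Number.EPSILON * Math.max(Math.abs(a), Math.abs(b), 1);
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true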
JavaScript Numbers are Always 64-bit Floating Point.
Unlike many other programming languages, JavaScript does not define different types of numbers, like integers, short, long, floating-point etc.
JavaScript numbers are always stored as double precision floating point numbers, following the international IEEE 754 standard.
Numbers are accurate to about 17 significant decimal digits, but floating point arithmetic is not always 100% accurate:
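The classic illustration (a quick console check):
var x = 0.2 + 0.1;
console.log(x); // 0.30000000000000004
// Scaling to integers first happens to sidestep the error here:
var y = (0.2 * 10 + 0.1 * 10) / 10;
console.log(y); // 0.3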

Why does 230/100*100 not return 230? [duplicate]

Possible Duplicate:
Is JavaScript’s Math broken?
In Javascript, I cannot figure out why 230/100*100 returns 229.99999999999997, while 240/100*100 returns 240.
This also applies to 460, 920 and so on...
Is there any solution?
The Issue:
230/100*100
= (230 / 100) * 100
= 2.3 * 100 in binary
2.3 in binary is the recurring binary fraction: 10.01001100110011001100110011001100...
This recurring fraction cannot be represented exactly; due to the limited precision, what is actually stored is something like 2.29999999999999982236....
Interestingly, if you choose a division whose result is exactly representable in binary (not a recurring fraction, with all digits fitting within the significand the FP standard provides), you won't see any such discrepancy.
E.g. 225/100*100 = 225
2.25 in binary is 10.01
Test Conversion: Binary to/from Decimal
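A quick console check of both cases, plus the multiply-first workaround:
console.log(230 / 100 * 100); // 229.99999999999997 - 2.3 is inexact
console.log(225 / 100 * 100); // 225 - 2.25 is exactly representable
console.log(230 * 100 / 100); // 230 - multiplying first stays on exact integers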
Dealing with it:
Always be wary of precision when checking for equality between floating point values. Rounding up/down to a certain number of significant digits is good practice.
In JavaScript all numeric values are stored as IEEE 754 64-bit floating-point values (also known as double in many languages). This representation has only finite precision (so not all numbers can be accurately represented) and it is binary, so values that seem to be easy to represent in decimal can turn out to be problematic to handle.
There is no fire-and-forget solution suitable for everyone. If you need an integer then simply round using Math.round.
This problem relates to floating point inaccuracy. See this question for more details:
Is floating point math broken?
For the same reason that, if you were forced to stick to a given precision and to carry out every step, you'd give 10/3*3 as 9.99999....
Say the precision you had to keep to was 10 digits. After 10/3 you'd have 3.333333333. Then when you multiplied that by 3, you'd have 9.999999999.
Now, since we know that the 3s will go on forever, we know that the 9s will go on forever, and so we know that the answer is really 10. But that's not the deal here, the deal is you apply each step as best you can, and then go on to the next.
As well as numbers that produce recurring representations, there are also numbers that could be represented precisely, just not within the number of digits you are using.
Just as 10/3 cannot be represented perfectly in decimal, so 230/100 cannot be represented perfectly in binary.
The division in JavaScript is not integer division, but floating point.
Neither 2.3 nor 2.4 can be exactly represented in floating point. The closest double to 2.4 is about 2.3999999999999999, and the closest to 2.3 is about 2.2999999999999998 - both slightly below. The difference is that 2.3999999999999999 * 100 rounds back to exactly 240, while 2.2999999999999998 * 100 rounds to 229.99999999999997.
One can use Math.round(x), or one can use a JavaScript trick:
(x|0) converts x to an integer, because the '|' operator forces its operands to 32-bit integers.
Even in this case, 299.9943 is not rounded but truncated.
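A short sketch contrasting the two (note that | works on 32-bit integers, so it is only safe for values below 2^31):
var x = 230 / 100 * 100;    // 229.99999999999997
console.log(Math.round(x)); // 230 - rounds to the nearest integer
console.log(x | 0);         // 229 - truncates toward zero
console.log(299.9943 | 0);  // 299 - truncated, not rounded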

Unexpected value on multiplication in javascript [duplicate]

There is a problem I can't understand. Please look at this code:
<script type="text/javascript">
function math(x) {
    var y;
    y = x * 10;
    alert(y);
}
</script>
<input type="button" onclick="math(0.011)">
What should be alerted after I click the button?
I expected 0.11, but no, it alerts
0.10999999999999999
Please explain this behavior.
Thanks in advance
Ok. Let me try to explain it.
The basic thing to remember with floating point numbers is this: They occupy a limited amount of bits and try to represent the original number using base-2 arithmetic.
As you know, in base-2 arithmetic integers are represented by the powers of 2 that they contain. Thus, 6 would be represented as 4 + 2, ie. in binary as 110.
In order to understand how fractional numbers are represented, you have to think about how we represent fractional numbers in our decimal system. The fractional part of numbers (for example 0.11) is represented as multiples of inverse powers of 10 (since the base is 10). Thus 0.11 is actually 1/10 + 1/100. As you can appreciate, this is not powerful enough to represent all fractional numbers in a limited number of digits. For example, 1/3 would be 0.333333.... in a never ending fashion. If we had only 32 digits of space to write the number down, we would end up having only an approximation to the original number, 0.33333333333333333333333333333333. This number, for example, would give 0.99999999999999999999999999999999 if it was multiplied by 3 and not 1 as you would have expected.
The situation is similar in base-2. Each fractional number would be represented as multiples of inverse powers of 2. Thus 0.75 (in decimal) (ie 3/4) would be represented as 1/2 + 1/4, which would mean 0.11 (in base-2). Just as base 10 is not capable enough to represent every fractional number in a finite manner, base-2 cannot represent all fractional numbers given a limited amount of space.
Now, try to represent 0.11 in base-2; you start with 11/100 and try to find an inverse power of 2 that is just less than this number. 1/2 doesn't work, 1/4 neither, nor does 1/8. 1/16 fits the bill, so you mark a 1 in the 4th place after the decimal point and subtract 1/16 from 11/100. You are left with 19/400. Now try to find the next power of 2 that fits the description. 1/32 seems to be that one, mark the 5th place after the point and subtract 1/32 from 19/400, you get 13/800. Next one is 1/64 and you are left with 1/1600 thus the next one is all the way up at 1/2048, etc. etc. Thus we got as far as 0.00011100001 but it goes on and on; and you will see that there always is a fraction remaining. Now, I didn't go through the whole calculation, but after you have put in 32 binary digits after the dot you will still probably have some fraction left (and this is assuming that all of the 32 bits of space is spent representing the decimal part, which it is not). Thus, I am sure you can appreciate that the resulting number might differ from its actual value by some amount.
In your case, the difference is 0.00000000000000001 which is 1/100000000000000000 = 1/10^17 and I am sure that you can see why you might have that.
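The hand calculation above can be reproduced with a small sketch (toBinaryFraction is an illustrative helper, not a built-in):
function toBinaryFraction(x, bits) {
    var out = "0.";
    for (var i = 0; i < bits; i++) {
        x *= 2; // shift the next binary digit to the left of the point
        if (x >= 1) { out += "1"; x -= 1; }
        else        { out += "0"; }
    }
    return out;
}
console.log(toBinaryFraction(0.11, 12)); // "0.000111000010" - the digits derived above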
This is because you are dealing with floating point, and this is the expected behavior of floating point math.
What you need to do is format that number.
See this Java explanation, which also applies here, if you want to know why this is happening.
In JavaScript all numbers are represented as 64-bit floats, so you will run into this sort of thing often.
The quick overview of that article is that floating point tries to represent a range of values larger than would fit in 64 bits, so there is going to be some imprecise representation, and this is what you are seeing.
With floating point numbers you get a representation of the number you try to encode. Mostly it is a number that is very close to the original number. More information on encoding/storing floating point numbers can be found here.
Note:
If you display x itself, it still shows 0.011 - not because the stored value is exact, but because JavaScript prints the shortest decimal string that reads back as the same double. The stored value of 0.011 is already inexact; multiplying by 10 produces a different double whose shortest printable form is 0.10999999999999999, which is why the rounding error becomes visible.
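You can see this by asking for more digits than the default shortest form (toPrecision accepts up to 21 significant digits):
console.log((0.011).toPrecision(21));      // shows trailing digits beyond 0.011 -
                                           // the stored value is already inexact
console.log((0.011 * 10).toPrecision(21)); // a different double, slightly below 0.11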
You can try to fix the number of decimals with this one:
// fl is a float number with some number of decimals
// d is how many decimals you want
function dec(fl, d) {
    var p = Math.pow(10, d);
    return Math.round(fl * p) / p;
}
Ex:
var n = 0.0012345;
console.log(dec(n,6)); // 0.001235
console.log(dec(n,5)); // 0.00123
console.log(dec(n,4)); // 0.0012
console.log(dec(n,3)); // 0.001
It works by first multiplying the float by 10^d, e.g. 10^3 (1000) for three decimals or 10^2 (100) for two. Then it rounds that and divides back to the original scale.
Math.pow(10, d) computes 10^d (so d = 3 gives us 1000).
In your case, use alert(dec(y, 2)); it should work.

Javascript Math Error: Inexact Floats [duplicate]

Possible Duplicates:
Is JavaScript’s Math broken?
How is floating point stored? When does it matter?
Code:
var tax = 14900 * (0.108);
alert(tax);
The above gives an answer of 1609.2
var tax1 = 14900 * (10.8 / 100);
alert(tax1);
The above gives an answer of 1609.2000000000003
Why? I guess I can round the values, but why is this happening?
UPDATE:
Found a temp solution for the problem.
Multiply first:
(14900*10.8)/100 = 1609.2
However
(14898*10.8)/100 = 1608.9840000000002
For this, multiply the 10.8 by a factor (100 in this case) and adjust the denominator:
(14898*(10.8*100))/10000 = 1608.984
I guess if one can do a preg_match for the extra 000s and then adjust the factor accordingly, the float error can be avoided.
The final solution, however, would be a math library.
Floating point value is inexact.
This is pretty much the answer to the question. There is finite precision, which means that some numbers can not be represented exactly.
Some languages support arbitrary-precision numeric types, rationals, complex numbers, etc. at the language level, but not JavaScript. Neither do C nor Java.
The IEEE 754 standard floating point value can not represent e.g. 0.1 exactly. This is why numerical calculations with cents etc must be done very carefully. Sometimes the solution is to store values in cents as integers instead of in dollars as floating point values.
"Floating" point concept, analog in base 10
To see why floating point values are imprecise, consider the following analog:
You only have enough memory to remember 5 digits
You want to be able to represent values in as wide range as practically possible
In representing integers, you can cover the range -99999 to +99999. Values outside that range would require you to remember more than 5 digits, which (for the sake of this example) you can't do.
Now you may consider a fixed-point representation, something like abc.de. Now you can represent values in the range of -999.99 to +999.99, up to 2 digits of precision, e.g. 3.14, -456.78, etc.
Now consider a floating point version. In your resourcefulness, you came up with the following scheme:
n = abc × 10^de
Now you can still remember only 5 digits a, b, c, d, e, but you can represent a much wider range of numbers, even non-integers. For example:
123 × 10^0 = 123.0
123 × 10^3 = 123,000.0
123 × 10^6 = 123,000,000.0
123 × 10^-3 = 0.123
123 × 10^-6 = 0.000123
This is how the name "floating point" came into being: the decimal point "floats around" in the above examples.
Now you can represent a wide range of numbers, but note that you can't represent 0.1234. Neither can you represent 123,001.0. In fact, there's a lot of values that you can't represent.
This is pretty much why floating point values are inexact. They can represent a wide range of values, but since you are limited to a fixed amount of memory, you must sacrifice precision for magnitude.
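You can mimic the 3-digit significand of this toy format with toPrecision (this only models the significand limit, not the exponent range):
console.log(Number((0.1234).toPrecision(3)));  // 0.123  - precision lost
console.log(Number((123001).toPrecision(3)));  // 123000 - precision lost
console.log(Number((123000).toPrecision(3)));  // 123000 - fits exactly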
More technicalities
The abc is called the significand, aka coefficient or mantissa. The de is the exponent, aka scale or characteristic. As usual, the computer uses base 2 instead of 10. In addition to remembering the "digits" (bits, really), it must also remember the signs of the significand and the exponent.
A single precision floating point type usually uses 32 bits. A double precision usually uses 64 bits.
See also
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Wikipedia/IEEE 754
That behavior is inherent to floating point arithmetic. That is why floating point arithmetic is not suitable for dealing with money, which needs to be exact.
There exist libraries, like this one, which help you limit rounding errors to the point where you actually need them (to represent as text). Those libraries don't really deal with floating point values, but with fractions (of integer values). So no 0.25, but 1/4 and so on.
Floating point values can efficiently represent values in a much wider range than integer values can. However, this comes at a price: some values cannot be represented exactly, because they are stored in binary: every negative power of 10, for example (0.1, 0.01, etc.).
If you want exact results, try not to use floating point arithmetic.
Of course sometimes you can't avoid them. In that case, a few simple guidelines may help you minimize roundoff errors (see the sketch after this list):
Don't subtract nearly equal values. (0.1-0.0999)
Add or multiply the biggest values first. (100*10)* 0.1 instead of 100*(10*0.1)
Multiply first, then divide. (14900*10.8)/100 instead of 14900*(10.8/100)
If exact values are available, use them instead of calculating them to get 'prettier' code
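A quick check of the first guidelines, using the numbers from this question:
console.log(14900 * (10.8 / 100)); // 1609.2000000000003 - divides first
console.log(14900 * 10.8 / 100);   // 1609.2 - multiplies first
console.log(0.1 - 0.0999);         // not exactly 0.0001 - subtracting nearly
                                   // equal values exposes the stored errors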
Also,
let JavaScript figure out operator precedence; there is no reason to use the parentheses:
var tax1 = 14900 * 10.8 / 100
1609.2
It's not magic: evaluation runs left to right, so the multiplication happens first and the inexact intermediate result of 10.8/100 never occurs. Just remember to avoid useless parentheses.

