Javascript rounding issue [duplicate] - javascript

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 11 months ago.
I've got a weird maths/rounding problem in Javascript.
The snippet below is a very basic example of the code I'm running. Obviously it's not this exact code, but I'm taking a decimal value from a text box, working out a percentage and taking this away from one.
var ten = "10";
var eight = "8";
alert(1 - (eight/ten));
The answer should be 0.2, but the calculation returns 0.19999999999999996. Yet if I do 1 + (eight/ten), 1.8 is returned. What is going on?

Welcome to the world of floating-point numbers!
Computers don't actually work with decimal numbers--i.e. numbers in base ten, the way we use them in everyday life. They work in binary, and that includes floating-point numbers: they are represented in binary too, which means that "number of decimal places" is not always a meaningful property of a floating-point number.
For instance, a floating-point number cannot exactly represent 0.1; the stored value is something very slightly different. The error is not correctable by subtracting the difference, because as far as the computer is concerned there is no difference--that's as close as it can get to 0.1.
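You can see this directly in the console. A minimal check (the classic 0.1 + 0.2 example, plus toFixed(20) to reveal more of the digits actually stored; the exact tail may vary in how it's printed):
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log((0.1).toFixed(20)); // e.g. "0.10000000000000000555" -- the stored approximation of 0.1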
(Stole most of this answer from a previous answer of mine.)

It's because of the way floating point numbers are represented.

I get the same result on my Android device, which means your device or computer uses 64-bit floating-point representation. To display a correct result, you must limit the output to 15 significant digits. I found this workaround; running:
var res = 1 - 0.8;
var roundedRes = parseFloat(res.toPrecision(15));
alert("res=" + res + "\n" + "roundedRes=" + roundedRes);
outputs:
res=0.19999999999999996
roundedRes=0.2
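Applying the same workaround to the snippet from the question (a sketch reusing the original variable names):
var ten = "10";
var eight = "8";
var result = 1 - (eight / ten);            // 0.19999999999999996
alert(parseFloat(result.toPrecision(15))); // 0.2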

JavaScript uses binary floating point, which can exactly represent fractions built from powers of two such as 1/2, 1/8, ..., 1/1024. But the decimal fractions we mostly use are of the form 1/10, 1/100, ..., and most of those have no exact binary representation.
This is not a problem with JavaScript alone; it affects every language that uses binary floating-point representation.

Related

Why does JavaScript return a wrong result in this subtraction of decimal values? [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
I am pretty new to JavaScript and I have found a very strange situation while doing an extremely simple mathematical operation: a subtraction.
So in a jQuery function I have this code:
saldoRicalcolato = finanziamento - variazioneAnticipoNumber;
Where finanziamento and variazioneAnticipoNumber are 2 numbers having decimal digits.
It almost always works correctly, except for some specific values.
You can replicate the strange behavior by running this statement in the Firebug console:
2205.88 - 1103.01
1102.8700000000001
So in this case the result is substantially wrong because I obtain 1102.8700000000001 and not 1102.87 as I expected.
Why? What am I missing? Is it a JavaScript engine bug or something like that? Is that even possible?
It's not a JavaScript problem but a more general computer problem. Floating-point numbers can't store all decimal numbers exactly, because they store values in binary. For example:
0.5 is stored as binary 0.1
but 0.1 = 1/10, so it's 1/16 + (1/10 - 1/16) = 1/16 + 0.0375
0.0375 = 1/32 + (0.0375 - 1/32) = 1/32 + 0.00625 ... etc.
so in binary 0.1 is 0.00011...
but that's endless, except that the computer has to stop at some point. So if in our example we stop at 0.00011, we store 0.09375 instead of 0.1.
It doesn't depend on the language but on the computer; what depends on the language is how the numbers are displayed. Usually a language rounds numbers to an acceptable representation when printing, but here JavaScript prints enough digits to expose the error.
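You can inspect the binary expansion JavaScript actually stores; a quick sketch (the exact tail depends on how the last bit is rounded):
console.log((0.1).toString(2));
// roughly "0.000110011001100110011...1101" -- the 0011 pattern repeats until the 53-bit significand runs out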
If you're looking to get 1102.87, you'll need to limit the result to 2 decimal places using toFixed().
This is just one way of getting the number you want:
alert((2205.88 - 1103.01).toFixed(2));
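Keep in mind that toFixed() returns a string; if you need a number again, convert it back. A small sketch:
var result = Number((2205.88 - 1103.01).toFixed(2));
console.log(result); // 1102.87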

JS floating point causing incorrect rounding

I have what may be an edge case scenario. When trying to round the value 4.015 to 2 decimal places, I always end up with 4.01 instead of the expected 4.02. This happens consistently for all numbers with .015 as the decimal portion.
I round using a fairly common method in JS:
val = Math.round(val * 100) / 100;
I think the problem starts when multiplying by 100. The floating point inaccuracy causes this value to be rounded down rather than up.
var a = 4.015,                // 4.015
    mult = a * 100,           // 401.49999999999994 (the issue)
    round = Math.round(mult), // 401
    result = round / 100;     // 4.01 (expected 4.02)
Fiddle: http://jsfiddle.net/eVXRL/
This problem does not happen if I try to round 4.025. The expected value of 4.03 does return; it's only an issue with .015 (so far).
Is there a way to elegantly resolve this? There is of course the hack of just looking for .015 and handling that case one-off, but that just seems wrong!
I ended up using math.js to do mathematical operations and that solved all my floating point issues.
The advantage of this lib was that there was no need to instantiate any sort of Big Decimal object (even though the lib does support BigDecimal). It was just as simple as replacing Math with math and passing the precision.
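A minimal sketch of what that can look like, assuming math.js is loaded as the global math object; parsing the value as a BigNumber string avoids the binary approximation of 4.015 entirely:
// math.round(value, decimals) rounds to the given number of decimals.
var rounded = math.round(math.bignumber('4.015'), 2); // BigNumber 4.02
console.log(math.number(rounded));                    // 4.02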
Floating-point numbers are not real numbers; they are floating-point numbers.
There are infinitely many real numbers, but only a finite number of bits to represent them. Thus there must sometimes be some rounding error, when the exact number you want cannot be represented in the floating-point system.
Thus, when dealing with floating-point numbers, you must accept that you may not get exactly the number you had in mind.
If you need an exact result, you should use a library that gives you better precision; usually it will use a fixed-point and/or symbolic representation.
More information can be found in the wikipedia page, and in this (a bit complex, but important) article: What Every Computer Scientist Should Know About Floating-Point Arithmetic
If you are going to work with numbers as decimals, then use a decimal library, like big.js.
Floating-point values in most languages (including JavaScript) are stored in a binary representation. Mostly, that does what you expect. In circumstances like this, your 4.015 is converted to binary and happens to be encoded as the 4.014999999... value you saw, which is the closest representation available in a double-precision (8-byte) IEEE 754 value.
If you are doing financial math, or math for human consumption (i.e. as decimals), then you will want 4.015 to round to 4.02, and you need a decimal library.
There are plans to include decimal representation of floating point values in javascript (e.g. here), since the new IEEE754-2008 standard includes decimal32 etc as decimal floating point value representations. For more read here: http://speleotrove.com/decimal/
Finally, if you are doing accounting maths in javascript (i.e. financial calculations which should not accidentally create or disappear money), then please do all calculations in whole cents/pence.
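For example, with big.js (a sketch, assuming the Big constructor from big.js is available; Big parses the decimal string exactly, and round() defaults to round-half-up):
var x = new Big('4.015');
console.log(x.round(2).toString());   // "4.02"
console.log(x.times(100).toString()); // "401.5", not 401.49999999999994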
You can use a regexp to extract and replace the digits to get what you want:
val = (val + "").replace(/^([0-9]+\.[0-9])([0-9])([0-9]).*$/, function(whole, head, lastdigit, followup) {
    if (followup >= 5) {
        return head + ("" + (parseInt(lastdigit) + 1));
    } else {
        return head + lastdigit;
    }
});
Otherwise you can use val = val.toFixed(2), but for the specific value 4.015 it gives 4.01 (while 4.0151 gives 4.02 as "expected").

Why does 230/100*100 not return 230? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Is JavaScript’s Math broken?
In Javascript, I cannot figure out why 230/100*100 returns 229.99999999999997, while 240/100*100 returns 240.
This also applies to 460, 920 and so on...
Is there any solution?
The Issue:
230/100*100
= (230 / 100) * 100
= 2.3 * 100 in binary
2.3 in binary is the recurring fraction 10.01001100110011001100110011001100...
This recurring fraction cannot be represented exactly; due to limited precision we end up with something slightly below 2.3 (about 2.2999999999999998...).
Interestingly, if you chose a division whose result is exactly representable in binary (not a recurring fraction, and with all digits fitting within the significand of the floating-point format), you wouldn't see any such discrepancy.
E.g. 225/100*100 = 225
2.25 in binary is 10.01
Test Conversion: Binary to/from Decimal
Dealing with it:
Always be wary of precision when checking for equality between floating point values. Rounding up/down to a certain number of significant digits is good practice.
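A minimal sketch of that advice: compare floating-point values with a tolerance instead of strict equality (the epsilon value here is an arbitrary choice for illustration):
function nearlyEqual(a, b, epsilon) {
    return Math.abs(a - b) < (epsilon || 1e-9);
}

console.log(230 / 100 * 100 === 230);           // false
console.log(nearlyEqual(230 / 100 * 100, 230)); // true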
In JavaScript all numeric values are stored as IEEE 754 64-bit floating-point values (also known as double in many languages). This representation has only finite precision (so not all numbers can be accurately represented) and it is binary, so values that seem to be easy to represent in decimal can turn out to be problematic to handle.
There is no fire-and-forget solution suitable for everyone. If you need an integer then simply round using Math.round.
This problem relates to floating point inaccuracy. See this question for more details:
Is floating point math broken?
For the same reason that, if you were forced to keep to a certain precision and carry out every step at that precision, you'd give 10/3*3 as 9.99999....
Say the precision you had to keep to was 10 digits. After 10/3 you'd have 3.333333333. Then when you multiplied that by 3, you'd have 9.999999999.
Now, since we know that the 3s will go on forever, we know that the 9s will go on forever, and so we know that the answer is really 10. But that's not the deal here, the deal is you apply each step as best you can, and then go on to the next.
As well as numbers that produce recurring representations, there are numbers that could be represented precisely, just not with the number of digits you are using.
Just as 10/3 cannot be represented perfectly in decimal, so 230/100 cannot be represented perfectly in binary.
The division in JavaScript is not integer division, but floating point.
Neither 2.3 nor 2.4 can be represented exactly; both are stored as values slightly below the true value (about 2.2999999999999998 and 2.3999999999999999). The difference is that when the stored 2.4 is multiplied by 100, the result rounds exactly to 240, while the stored 2.3 multiplied by 100 rounds to 229.99999999999997.
One can use Math.round(x), or one can use a JavaScript trick:
(x|0) converts x to an integer, because the '|' operator coerces its operands to 32-bit integers.
Note that (x|0) truncates rather than rounds, so 229.99999999999997|0 gives 229, not 230.
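A short sketch contrasting the two approaches on the value from the question:
var x = 230 / 100 * 100;    // 229.99999999999997
console.log(Math.round(x)); // 230
console.log(x | 0);         // 229 -- bitwise OR truncates toward zero instead of rounding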

Unexpected value on multiplication in javascript [duplicate]

There is a problem that I just can't understand.
Please look at this code:
<script type="text/javascript">
function math(x)
{
    var y;
    y = x * 10;
    alert(y);
}
</script>
<input type="button" onclick="math(0.011)">
What should be alerted after I click the button?
I expected 0.11, but no, it alerts
0.10999999999999999
Please explain this behavior.
Thanks in advance.
Ok. Let me try to explain it.
The basic thing to remember with floating-point numbers is this: they occupy a limited number of bits and try to represent the original number using base-2 arithmetic.
As you know, in base-2 arithmetic integers are represented by the powers of 2 that they contain. Thus, 6 would be represented as 4 + 2, ie. in binary as 110.
In order to understand how fractional numbers are represented, you have to think about how we represent fractional numbers in our decimal system. The fractional part of numbers (for example 0.11) is represented as multiples of inverse powers of 10 (since the base is 10). Thus 0.11 is actually 1/10 + 1/100. As you can appreciate, this is not powerful enough to represent all fractional numbers in a limited number of digits. For example, 1/3 would be 0.333333.... in a never ending fashion. If we had only 32 digits of space to write the number down, we would end up having only an approximation to the original number, 0.33333333333333333333333333333333. This number, for example, would give 0.99999999999999999999999999999999 if it was multiplied by 3 and not 1 as you would have expected.
The situation is similar in base-2. Each fractional number would be represented as multiples of inverse powers of 2. Thus 0.75 (in decimal) (ie 3/4) would be represented as 1/2 + 1/4, which would mean 0.11 (in base-2). Just as base 10 is not capable enough to represent every fractional number in a finite manner, base-2 cannot represent all fractional numbers given a limited amount of space.
Now, try to represent 0.11 in base-2; you start with 11/100 and try to find an inverse power of 2 that is just less than this number. 1/2 doesn't work, 1/4 neither, nor does 1/8. 1/16 fits the bill, so you mark a 1 in the 4th place after the decimal point and subtract 1/16 from 11/100. You are left with 19/400. Now try to find the next power of 2 that fits the description. 1/32 seems to be that one, mark the 5th place after the point and subtract 1/32 from 19/400, you get 13/800. Next one is 1/64 and you are left with 1/1600 thus the next one is all the way up at 1/2048, etc. etc. Thus we got as far as 0.00011100001 but it goes on and on; and you will see that there always is a fraction remaining. Now, I didn't go through the whole calculation, but after you have put in 32 binary digits after the dot you will still probably have some fraction left (and this is assuming that all of the 32 bits of space is spent representing the decimal part, which it is not). Thus, I am sure you can appreciate that the resulting number might differ from its actual value by some amount.
In your case, the difference is 0.00000000000000001 which is 1/100000000000000000 = 1/10^17 and I am sure that you can see why you might have that.
This is because you are dealing with floating point, and this is the expected behavior of floating-point math.
What you need to do is format that number.
See this Java explanation, which also applies here, if you want to know why this is happening.
In JavaScript all numbers are represented as 64-bit floats, so you will run into this sort of thing often.
The quick overview of that article is that floating point tries to represent a range of values larger than would fit in 64 bits, therefore there is going to be some imprecision in the representation, and this is what you are seeing.
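For instance, formatting with toFixed() hides the representation error (a sketch; note that it returns a string):
var y = 0.011 * 10;        // 0.10999999999999999
console.log(y.toFixed(2)); // "0.11"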
With floating point you get a representation of the number you try to encode. Usually it is a number that is very close to the original number. More information on encoding/storing floating-point numbers can be found here.
Note:
If you display the value of x, it still shows 0.011, but not because JavaScript has not yet decided on a type: x is already a 64-bit float, and 0.011 happens to be the shortest decimal string that maps back to that stored value. After multiplying by 10 you get a different float, whose shortest representation (0.10999999999999999) exposes the rounding error.
You can fix the number of decimals with this function:
// fl is a float number with some nr of decimals
// d is how many decimals you want
function dec(fl, d) {
    var p = Math.pow(10, d);
    return Math.round(fl * p) / p;
}
Ex:
var n = 0.0012345;
console.log(dec(n,6)); // 0.001235
console.log(dec(n,5)); // 0.00123
console.log(dec(n,4)); // 0.0012
console.log(dec(n,3)); // 0.001
It works by first multiplying the float by 10^d, e.g. 10^3 (1000) for three decimals or 10^2 (100) for two decimals, then rounding that, and dividing it back to the original scale.
Math.pow(10, d) computes 10^d (so d = 3 gives us 1000).
In your case, do alert(dec(y,2));, it should work.

Javascript Math Error: Inexact Floats [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicates:
Is JavaScript’s Math broken?
How is floating point stored? When does it matter?
Code:
var tax= 14900*(0.108);
alert(tax);
The above gives an answer of 1609.2
var tax1= 14900*(10.8/100);
alert(tax1);
The above gives an answer of 1609.200000000003
Why? I guess I can round the values, but why is this happening?
UPDATE:
Found a temp solution for the problem.
Multiply first:
(14900*10.8)/100 = 1609.2
However
(14898*10.8)/100 = 1608.9840000000002
For this, multiply the 10.8 by a factor (100 in this case) and adjust the denominator:
(14898*(10.8*100))/10000 = 1608.984
I guess if one can do a preg_match for the extra 000s and then adjust the factor accordingly, the float error can be avoided.
The final solution would however be a math library.
Floating point value is inexact.
This is pretty much the answer to the question. There is finite precision, which means that some numbers can not be represented exactly.
Some languages support arbitrary-precision numeric types, rationals, complex numbers, etc. at the language level, but JavaScript does not; neither do C or Java.
The IEEE 754 standard floating point value can not represent e.g. 0.1 exactly. This is why numerical calculations with cents etc must be done very carefully. Sometimes the solution is to store values in cents as integers instead of in dollars as floating point values.
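A small sketch of that integer-cents approach; the variable names are made up for illustration:
// Work in whole cents so every intermediate value is an integer (exact in a double up to 2^53).
var priceCents = 1490000;                           // $14,900.00
var taxCents = Math.round(priceCents * 108 / 1000); // 10.8% expressed as the ratio 108/1000
console.log(taxCents);                              // 160920 cents
console.log(taxCents / 100);                        // 1609.2 -- only the final display step leaves integer land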
"Floating" point concept, analog in base 10
To see why floating point values are imprecise, consider the following analog:
You only have enough memory to remember 5 digits
You want to be able to represent values in as wide range as practically possible
In representing integers, you can represent values in the range of -99999 to +99999. Values outside of those range would require you to remember more than 5 digits, which (for the sake of this example) you can't do.
Now you may consider a fixed-point representation, something like abc.de. Now you can represent values in the range of -999.99 to +999.99, up to 2 digits of precision, e.g. 3.14, -456.78, etc.
Now consider a floating point version. In your resourcefulness, you came up with the following scheme:
n = abc x 10^de
Now you can still remember only 5 digits a, b, c, d, e, but you can now represent much wider range of numbers, even non-integers. For example:
123 x 10^0 = 123.0
123 x 10^3 = 123,000.0
123 x 10^6 = 123,000,000.0
123 x 10^-3 = 0.123
123 x 10^-6 = 0.000123
This is how the name "floating point" came into being: the decimal point "floats around" in the above examples.
Now you can represent a wide range of numbers, but note that you can't represent 0.1234. Neither can you represent 123,001.0. In fact, there's a lot of values that you can't represent.
This is pretty much why floating point values are inexact. They can represent a wide range of values, but since you are limited to a fixed amount of memory, you must sacrifice precision for magnitude.
More technicalities
The abc is called the significand, aka coefficient/mantissa. The de is the exponent, aka scale/characteristic. As usual, the computer uses base 2 instead of 10. In addition to remembering the "digits" (bits, really), it must also remember the signs of the significand and the exponent.
A single precision floating point type usually uses 32 bits. A double precision usually uses 64 bits.
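As a side note, you can see the 64-bit double's integer limit directly in JavaScript (a quick sketch):
console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991 (2^53 - 1)
console.log(9007199254740992 === 9007199254740993); // true -- beyond 2^53 adjacent integers collapse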
See also
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Wikipedia/IEEE 754
That behavior is inherent to floating-point arithmetic. That is why floating-point arithmetic is not suitable for dealing with money, which needs to be exact.
There exist libraries, like this one, which help you limit rounding errors to the point where you actually need them (to represent as text). Those libraries don't really deal with floating point values, but with fractions (of integer values). So no 0.25, but 1/4 and so on.
Floating-point values can efficiently represent values over a much wider range than integer values could. However, that comes at a price: some values cannot be represented exactly (because they are stored in binary), every negative power of 10 for example (0.1, 0.01, etc.).
If you want exact results, try not to use floating point arithmetic.
Of course sometimes you can't avoid them. In that case, a few simple guidelines may help you minimize roundoff errors:
Don't subtract nearly equal values. (0.1-0.0999)
Add or multiply the biggest values first. (100*10)* 0.1 instead of 100*(10*0.1)
Multiply first, then divide. (14900*10.8)/100 instead of 14900*(10.8/100)
If exact values are available, use them instead of calculating them to get 'prettier' code
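A minimal sketch of the "multiply first, then divide" guideline, using the numbers from the question above (the exact stray digits may vary, but the pattern holds):
var divideFirst   = 14900 * (10.8 / 100); // e.g. 1609.2000000000003 -- the error in 10.8/100 gets scaled up
var multiplyFirst = (14900 * 10.8) / 100; // 1609.2
console.log(divideFirst, multiplyFirst);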
Also,
let JavaScript figure out operator precedence; there is no reason to use parentheses here:
var tax1 = 14900 * 10.8 / 100
1609.2
Multiplication and division have equal precedence and evaluate left to right, so the multiplication happens first. It's magic. Just remember to avoid useless parentheses.
