JS floating point causing incorrect rounding - javascript

I have what may be an edge case scenario. When trying to round the value 4.015 to 2 decimal places, I always end up with 4.01 instead of the expected 4.02. This happens consistently for all numbers with .015 as the decimal portion.
I round using a fairly common method in JS:
val = Math.round(val * 100) / 100;
I think the problem starts when multiplying by 100. The floating point inaccuracy causes this value to be rounded down rather than up.
var a = 4.015,                // 4.015
    mult = a * 100,           // 401.49999999999994 (the issue)
    round = Math.round(mult), // 401
    result = round / 100;     // 4.01 (expected 4.02)
Fiddle: http://jsfiddle.net/eVXRL/
This problem does not happen if I try to round 4.025. The expected value of 4.03 does return; it's only an issue with .015 (so far).
Is there a way to elegantly resolve this? There is of course the hack of just looking for .015 and handling that case one-off, but that just seems wrong!

I ended up using math.js to do mathematical operations and that solved all my floating point issues.
The advantage of this lib was that there was no need to instantiate any sort of big-decimal object (even though the lib does support BigNumbers). It was as simple as replacing Math with math and passing the precision.
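For illustration, a minimal sketch of that replacement, assuming math.js is loaded (e.g. via require('mathjs') in Node or a script tag in the browser); math.round(value, n) is math.js's rounding call:
var math = require('mathjs');

// instead of Math.round(val * 100) / 100, pass the precision directly
var val = math.round(4.015, 2);
console.log(val); // 4.02, per the answer above, rather than 4.01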

Floating point numbers are not real numbers, they are floating point numbers.
There are infinitely many real numbers, but only a finite number of bits to represent them; thus there must sometimes be a rounding error when the exact number you want cannot be represented in the floating point system.
Thus, when dealing with floating point numbers, you must take into consideration that you won't have exactly the number you had in mind.
If you need an exact number, you should use a library that gives you better precision; usually it will be using a fixed point and/or symbolic representation.
More information can be found on the Wikipedia page, and in this (a bit complex, but important) article: What Every Computer Scientist Should Know About Floating-Point Arithmetic

If you are going to work with numbers as decimals, then use a decimal library, like big.js.
Floating point values in most languages (including javascript) are stored in a binary representation. Mostly, that does what you expect. In circumstances like this, your 4.015 is converted to binary, and happens to get encoded as the 4.014999999... value you saw, which is the closest representation available in a double precision (8-byte) IEEE 754 value.
If you are doing financial math, or math for human consumption (i.e. as decimals), then you will want 4.015 to round to 4.02, and you need a decimal library.
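As a sketch of that approach with big.js (Big(value).round(dp) rounds to dp decimal places, half-up by default; constructing from a string avoids the binary conversion entirely):
var Big = require('big.js'); // or include big.js via a script tag

var price = new Big('4.015');           // exact decimal; no binary rounding
console.log(price.round(2).toString()); // "4.02", rounded half-up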
There are plans to include a decimal representation of floating point values in javascript (e.g. here), since the newer IEEE 754-2008 standard includes decimal32 etc. as decimal floating point representations. For more, read here: http://speleotrove.com/decimal/
Finally, if you are doing accounting maths in javascript (i.e. financial calculations which should not accidentally create or disappear money), then please do all calculations in whole cents/pence.
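For example, a sketch of the whole-cents approach (the amounts here are made up for illustration):
// hold money as an integer number of cents; integers below 2^53
// are represented exactly by a double
var priceCents = 402;                       // $4.02
var quantity = 3;
var totalCents = priceCents * quantity;     // 1206, exact integer arithmetic
console.log((totalCents / 100).toFixed(2)); // "12.06", converted only for display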

You can use a regexp to extract and replace the digits to get what you want:
val = (val + "").replace(/^([0-9]+\.[0-9])([0-9])([0-9]).*$/, function (match, head, secondDigit, thirdDigit) {
    // if the third decimal digit is 5 or more, bump the second digit up;
    // this works on the string representation, so the *100 issue never occurs
    if (thirdDigit >= 5) {
        // caution: no carry past 9 (e.g. "4.095" would become "4.010")
        return head + (parseInt(secondDigit, 10) + 1);
    }
    return head + secondDigit;
});
Otherwise you can use val = val.toFixed(2) (note that toFixed returns a string), but for the specific value 4.015 it gives "4.01" (while 4.0151 gives "4.02", as expected).

Related

Why is 13596*0.1 different from 13596/10?

I just stumbled across something really odd in JavaScript.
My script was multiplying 13596 by 0.1, and the output was: 1359.6000000000001
We can agree that 0.1 = 1/10, so I tried it:
13596/10 = 1359.6
I tested it with Firefox and Chrome; the same result occurs.
I wondered if it was related to floats, so I tried the following:
13596 * parseFloat(0.1) = 1359.6000000000001
Nope.
By the way, these are not equal:
(13596*0.1) === (13596/10) => false
Does anyone have an idea about this result?
(Here's the JSFiddle.)
Remember that, with floating points, a number you see on-screen is not necessarily exactly the number the computer is modelling.
For example, using node:
> 1359.6
1359.6
> (1359.6).toFixed(20)
'1359.59999999999990905053'
Node shows you 1359.6, but when you ask for more precision, you can see that the real number is not exactly what you saw -- it was rounded. The same is true of 0.1:
> (0.1).toFixed(20)
'0.10000000000000000555'
Generally when you work with floats you accept this imprecision and round numbers before displaying them.
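For example, carrying the imprecise value through the calculation and rounding only at the output stage:
var result = 13596 * 0.1;       // 1359.6000000000001 internally
console.log(result.toFixed(1)); // "1359.6", rounded only for display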
Some numbers are exactly representable, but it's safest to assume that floating point is imprecise. For example, 0.5 can be represented exactly:
> (0.5).toFixed(20)
'0.50000000000000000000'
Dividing by 10 produces the correctly rounded result, i.e. the same double as the literal 1359.6, which is still not exact (and is a slightly different double from the one 13596*0.1 produced):
> 13596/10
1359.6
> (13596/10).toFixed(20)
'1359.59999999999990905053'
In some other languages, division between integers results in an integer; in JavaScript, however, all numbers are modelled as floating point.
Generally, whenever you need to represent decimal numbers precisely, you should use a decimal number type, though this is not available natively in JavaScript.
Also, there is no point in writing parseFloat(0.1), as 0.1 is already a number; parseFloat is for parsing strings.

Floating-point error mess

I have been trying to figure this floating-point problem out in javascript.
This is an example of what I want to do:
var x1 = 0;
for (var i = 0; i < 10; i++) {
    x1 += 0.2;
}
However, in this form I get a rounding error; the running total goes 0.2 -> 0.4 -> 0.6000000000000001, and so on.
I have tried parseFloat, toFixed and Math.round suggested in other threads, but none of them worked for me. Is there anyone who could make this work? I feel I have run out of options.
You can almost always ignore the floating point "errors" while you're performing calculations - they won't make any difference to the end result unless you really care about the 17th significant digit or so.
You normally only need to worry about rounding when you display those values, for which .toFixed(1) would do perfectly well.
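Applied to the loop from the question, that looks like:
var x1 = 0;
for (var i = 0; i < 10; i++) {
    x1 += 0.2;
}
console.log(x1);            // 1.9999999999999998, not exactly 2
console.log(x1.toFixed(1)); // "2.0", rounded only for display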
Whatever happens, you simply cannot coerce the number 0.6 into exactly that value. The closest IEEE 754 double precision value is exactly 0.59999999999999997779553950749686919152736663818359375, which, when displayed within typical precision limits in JS, appears as 0.5999999999999999778.
Indeed, JS can't even tell that 0.5999999999999999778 !== (e.g.) 0.5999999999999999300, since their binary representations are the same.
To better understand how the rounding errors accumulate, and to get more insight into what is happening at a lower level, here is a small explanation.
I will assume that the IEEE 754 double precision standard is used by the underlying software/hardware, with the default rounding mode (round to nearest, ties to even).
1/5 can only be written in base 2 with a pattern that repeats infinitely:
0.00110011001100110011001100110011001100110011001100110011...
But in floating point, the significand - starting at the most significant 1 bit - has to be rounded to a finite number of bits (53).
So there is a small rounding error when representing 0.2 in binary:
0.0011001100110011001100110011001100110011001100110011010
Back in decimal representation, this rounding error corresponds to a small excess of 0.000000000000000011102230246251565404236316680908203125 above 1/5.
The first operation, 0.2+0.2, is then exact, because it is like 2*0.2 and thus does not introduce any additional error; it's like shifting the fraction point:
0.0011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
---------------------------------------------------------
0.0110011001100110011001100110011001100110011001100110100
But of course, the excess above 2/5 is doubled: 0.00000000000000002220446049250313080847263336181640625.
The third operation, 0.2+0.2+0.2, will result in this binary number:
0.011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
---------------------------------------------------------
0.1001100110011001100110011001100110011001100110011001110
But unfortunately, it requires 54 bits of significand (the span between the leading 1 and the trailing 1), so another rounding error is necessary to represent the result as a double:
0.10011001100110011001100110011001100110011001100110100
Notice that the number was rounded upward: in case of a perfect tie, the default mode rounds to the nearest even significand. We already had an error by excess, so, bad luck, the successive errors accumulated rather than cancelled each other out...
So the excess above 3/5 is now 0.000000000000000088817841970012523233890533447265625
You could reduce this accumulation of errors a bit by using:
x1 = i / 5.0
Since 5 is represented exactly in binary floating point (101.0 in binary; 3 significand bits are enough), and since that will also be the case for i (up to 2^53), there is a single rounding error when performing the division, and IEEE 754 guarantees that you get the nearest possible representation.
For example 3/5.0 is represented as:
0.10011001100110011001100110011001100110011001100110011
Back in decimal, this value falls 0.00000000000000002220446049250313080847263336181640625 below 3/5.
Note that both errors are very tiny, but in the second case, 3/5.0, the error is four times smaller in magnitude than the error of 0.2+0.2+0.2.
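A sketch of this suggestion applied to the loop from the question:
var x1 = 0;
for (var i = 1; i <= 10; i++) {
    // recompute from the integer loop index: a single correctly rounded
    // division per value, instead of a chain of accumulating errors
    x1 = i / 5;
}
console.log(x1); // 2, exact, since 10/5 has no rounding error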
Depending on what you're doing, you may want to do fixed-point arithmetic instead of floating point. For example, if you are doing financial calculations in dollars with amounts that are always multiples of $0.01, you can switch to using cents internally, and then convert to (and from) dollars only when displaying values to the user (or reading input from the user). For more complicated scenarios, you can use a fixed-point arithmetic library.
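A sketch of the fixed-point idea applied to the same loop, counting in integer tenths (the choice of unit is illustrative):
var tenths = 0;           // fixed point: one unit = 0.1
for (var i = 0; i < 10; i++) {
    tenths += 2;          // 0.2 is exactly 2 tenths; integer math is exact
}
console.log(tenths / 10); // 2, converted back only once at the end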

Javascript .toFixed() more precise rounding alternative?

For rounding decimals (prices), I've been using .toFixed(2) for quite some time now. But I just recently discovered that Javascript can't "precisely" round decimals. I was a bit shocked that even 10.005 couldn't be rounded correctly to 10.01. It just got rounded down to 10.00. And other times it did round correctly. I like to have control over my code, so this is a big no-no for me.
And since I'm calculating prices, I think I need something more (100%) accurate for rounding only 2- or 3-decimal numbers, maybe a 4-decimal one.
Is there no straightforward way of doing basic rounding in javascript, the correct way?
UPDATE: As Felix Kling has suggested, I could process my prices as integer numbers of cents. Are there drawbacks to this (besides more code)?
The reason that a number like 10.005 can't be rounded correctly is that you don't really have the number 10.005; you only have the closest possible number that can be represented using a double precision floating point variable.
The actual number that you have might be something like 10.00499999999276253, and that would naturally round to 10.00 rather than 10.01.
To handle monetary values you should use a data type that can represent the value exactly. As numbers in Javascript are always floating point numbers, what you are left with is representing the numbers as text, and writing your own functions to do the math (or find someone who has done that already).
This should work for you; here's a fiddle to play around with it: http://jsfiddle.net/5ffyC/1/
function moneyRound(flt) {
    var splitStr = flt.toString().split('.'),
        // truncate to a whole number of cents; note that | 0 works on
        // 32-bit integers, so this breaks above roughly 21,474,836.47
        whole = (flt * 100) | 0;
    if (splitStr.length > 1 && splitStr[1].length > 2) {
        // inspect the third decimal digit of the decimal string and
        // round the cent value up when it is 5 or more
        return splitStr[1][2] > 4 ? (whole + 1) / 100 : whole / 100;
    } else {
        return flt; // already has two or fewer decimals
    }
}
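For example:
console.log(moneyRound(10.005)); // 10.01, the case toFixed(2) got wrong
console.log(moneyRound(10.994)); // 10.99
console.log(moneyRound(10.4));   // 10.4, returned unchanged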

Javascript rounding issue [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 11 months ago.
I've got a weird maths/rounding problem in Javascript.
The snippet below is a very basic example of the code I'm running. Obviously it's not this exact code, but I'm taking a decimal value from a text box, working out a percentage and taking this away from one.
var ten = "10";
var eight = "8";
alert(1 - (eight/ten));
The problem is that the answer should be 0.2, but the calculation returns 0.19999999999999996. Yet if I do 1 + (eight/ten), 1.8 is returned. What is going on?
Welcome to the world of floating-point numbers!
Computers don't actually work with decimal numbers--i.e. numbers in base ten, the way we use them in normal life. They work with binary numbers, and that includes floating-point numbers. They're represented in binary as well, and that means that "number of decimal places" is not always a meaningful property of a floating-point number.
For instance, a floating-point number cannot exactly represent 0.1, and you'll get something like 0.1000000001 if you try to use it in your code. The exact value you get varies by implementation, and is not correctable by subtracting the difference, as the computer can't tell that there is a difference--that's as close as it can get to 0.1.
(Stole most of this answer from a previous answer of mine.)
It's because of the way floating point numbers are represented.
I get the same result on my Android device, which means your device or computer uses a 64-bit floating point representation. For a correct result, you must limit the displayed result to 15 significant digits. I found this workaround; running:
var res = 1 - 0.8;
var roundedRes = parseFloat(res.toPrecision(15));
alert("res=" + res + "\n" + "roundedRes=" + roundedRes);
outputs:
res=0.19999999999999996
roundedRes=0.2
JavaScript uses binary floating point, which builds numbers from fractions like 1/2, 1/8, ..., 1/1024. But the decimal fractions we mostly use are of the form 1/10, 1/100, ...
This is not a problem with JavaScript only, but with all languages that use a binary floating point representation.

Javascript Math Error: Inexact Floats [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicates:
Is JavaScript’s Math broken?
How is floating point stored? When does it matter?
Code:
var tax = 14900 * (0.108);
alert(tax);
The above gives an answer of 1609.2
var tax1 = 14900 * (10.8 / 100);
alert(tax1);
The above gives an answer of 1609.200000000003
Why? I guess I can round off the values, but why is this happening?
UPDATE:
Found a temp solution for the problem.
Multiply first:
(14900*10.8)/100 = 1609.2
However
(14898*10.8)/100 = 1608.9840000000002
For this, multiply the 10.8 by a factor (100 in this case) and adjust the denominator accordingly:
(14898*(10.8*100))/10000 = 1608.984
I guess if one can do a preg_match for the extra 000s and then adjust the factor accordingly, the float error can be avoided.
The final solution would however be a math library.
Floating point values are inexact.
This is pretty much the answer to the question. There is finite precision, which means that some numbers cannot be represented exactly.
Some languages support arbitrary-precision numeric types, rationals, complex numbers, etc. at the language level, but not JavaScript; nor do C or Java.
An IEEE 754 standard floating point value cannot represent e.g. 0.1 exactly. This is why numerical calculations with cents etc. must be done very carefully. Sometimes the solution is to store values in cents as integers instead of in dollars as floating point values.
"Floating" point concept, analog in base 10
To see why floating point values are imprecise, consider the following analog:
You only have enough memory to remember 5 digits
You want to be able to represent values in as wide a range as practically possible
When representing integers, you can cover the range of -99999 to +99999. Values outside that range would require you to remember more than 5 digits, which (for the sake of this example) you can't do.
Now you may consider a fixed-point representation, something like abc.de. You can now represent values in the range of -999.99 to +999.99, with up to 2 digits after the point, e.g. 3.14, -456.78, etc.
Now consider a floating point version. In your resourcefulness, you came up with the following scheme:
n = abc x 10^de
You can still remember only 5 digits, a, b, c, d, e, but you can now represent a much wider range of numbers, even non-integers. For example:
123 x 10^0 = 123.0
123 x 10^3 = 123,000.0
123 x 10^6 = 123,000,000.0
123 x 10^-3 = 0.123
123 x 10^-6 = 0.000123
This is how the name "floating point" came into being: the decimal point "floats around" in the above examples.
Now you can represent a wide range of numbers, but note that you can't represent 0.1234 (it needs four significand digits). Neither can you represent 123,001.0 (six significand digits). In fact, there are a lot of values that you can't represent.
This is pretty much why floating point values are inexact. They can represent a wide range of values, but since you are limited to a fixed amount of memory, you must sacrifice precision for magnitude.
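A loose sketch of this five-digit analogy in JavaScript, using toPrecision to keep only five significant digits (treating all five remembered digits as significand, for simplicity):
// simulate a toy decimal float that can only remember 5 significant digits
function fiveDigits(n) {
    return parseFloat(n.toPrecision(5));
}
console.log(fiveDigits(123001));  // 123000: the trailing 1 is lost
console.log(fiveDigits(0.12345)); // 0.12345: fits in 5 digits, so it survives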
More technicalities
The abc is called the significand, a.k.a. the coefficient or mantissa. The de is the exponent, a.k.a. the scale or characteristic. As usual, the computer uses base 2 instead of 10. In addition to remembering the "digits" (bits, really), it must also remember the signs of the significand and exponent.
A single precision floating point type usually uses 32 bits. A double precision usually uses 64 bits.
See also
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Wikipedia/IEEE 754
That behavior is inherent to floating point arithmetic. That is why floating point arithmetic is not suitable for dealing with money, which needs to be exact.
There exist libraries, like this one, which help you limit rounding errors to the point where you actually need them (to represent as text). Those libraries don't really deal with floating point values, but with fractions (of integer values). So no 0.25, but 1/4 and so on.
Floating point values can efficiently represent values in a much wider range than integer values could. However, it comes at a price: some values cannot be represented exactly (because they are stored in binary); every negative power of 10, for example (0.1, 0.01, etc.).
If you want exact results, try not to use floating point arithmetic.
Of course sometimes you can't avoid them. In that case, a few simple guidelines may help you minimize roundoff errors:
Don't subtract nearly equal values. (0.1-0.0999)
Add or multiply the biggest values first. (100*10)* 0.1 instead of 100*(10*0.1)
Multiply first, then divide. (14900*10.8)/100 instead of 14900*(10.8/100) (see the sketch after this list)
If exact values are available, use them instead of calculating them to get 'prettier' code
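Illustrating the "multiply first, then divide" guideline with the numbers from this question:
console.log(14900 * (10.8 / 100)); // 1609.200000000003, divide first
console.log((14900 * 10.8) / 100); // 1609.2, multiply first, then divide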
Also, you can let JavaScript's left-to-right evaluation of * and / do the work; there is no need for parentheses here:
var tax1 = 14900 * 10.8 / 100
1609.2
It's magic. Just remember to avoid useless parentheses.
