Avoiding problems with JavaScript's weird decimal calculations - javascript

I just read on MDN that one of the quirks of JS's handling of numbers, due to everything being "double-precision 64-bit format IEEE 754 values", is that when you do something like .2 + .1 you get 0.30000000000000004 (that's what the article says, though I get 0.29999999999999993 in Firefox). Therefore:
(.2 + .1) * 10 == 3
evaluates to false.
This seems like it would be very problematic. So what can be done to avoid bugs due to the imprecise decimal calculations in JS?
I've noticed that if you do 1.2 + 1.1 you get the right answer. So should you just avoid any kind of math that involves values less than 1? Because that seems very impractical. Are there any other dangers to doing math in JS?
Edit:
I understand that many decimal fractions can't be stored as binary, but the way most other languages I've encountered appear to deal with the error (like JS handles numbers greater than 1) seems more intuitive, so I'm not used to this, which is why I want to see how other programmers deal with these calculations.

1.2 + 1.1 may be ok but 0.2 + 0.1 may not be ok.
This is a problem in virtually every language that is in use today. The problem is that 1/10 cannot be accurately represented as a binary fraction just like 1/3 cannot be represented as a decimal fraction.
The workaround is to round to only the number of decimal places that you need. You can either work with strings, which are accurate:
(0.2 + 0.1).toFixed(4) === 0.3.toFixed(4) // true
or you can convert it to numbers after that:
+(0.2 + 0.1).toFixed(4) === 0.3 // true
or using Math.round:
Math.round(0.2 * X + 0.1 * X) / X === 0.3 // true
where X is some power of 10 e.g. 100 or 10000 - depending on what precision you need.
Or you can use cents instead of dollars when counting money:
cents = 1499; // $14.99
That way you only work with integers and you don't have to worry about decimal and binary fractions at all.
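For example, here is a minimal sketch of keeping money in integer cents (the helper names are just illustrative):

// Hypothetical helpers: keep all arithmetic in integer cents
function toCents(dollars) { return Math.round(dollars * 100); }
function toDollars(cents) { return cents / 100; }

var price = toCents(14.99); // 1499
var tax = toCents(1.20);    // 120
toDollars(price + tax);     // 16.19 - the addition itself is exact integer math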
2017 Update
The situation of representing numbers in JavaScript may be a little bit more complicated than it used to be. It used to be the case that we had only one numeric type in JavaScript:
64-bit floating point (the IEEE 754 double precision floating-point number - see: ECMA-262 Edition 5.1, Section 8.5 and ECMA-262 Edition 6.0, Section 6.1.6)
This is no longer the case - not only are there more numeric types in JavaScript today, but more are on the way, including a proposal to add arbitrary-precision integers to ECMAScript, and hopefully, arbitrary-precision decimals will follow - see this answer for details:
Difference between floats and ints in Javascript?
See also
Another relevant answer with some examples of how to handle the calculations:
Node giving strange output on sum of particular float digits

In situations like these you would typically make use of an epsilon comparison.
Something like (pseudo code)
if (abs(((.2 + .1) * 10) - 3) > epsilon)
where epsilon is something like 0.00000001, or whatever precision you require.
Have a quick read of Comparing floating point numbers.
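A minimal JavaScript version of that pseudo code might look like this (the epsilon value is just an illustrative default; pick whatever tolerance matches your precision needs):

function nearlyEqual(a, b, epsilon) {
    epsilon = epsilon || 1e-8; // illustrative tolerance
    return Math.abs(a - b) < epsilon;
}

nearlyEqual((0.2 + 0.1) * 10, 3); // true, even though === says false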

Math.floor((0.1 + 0.2) * 1000) / 1000
This will reduce the precision of float numbers but solves the problem if you are not working with very small values.
For example:
.1+.2 =
0.30000000000000004
After the proposed operation you will get 0.3. But any value between:
0.30000000000000000
0.30000000000000999
will also be considered 0.3.
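Wrapped in a small helper (the function name and the places parameter are just illustrative):

function floorTo(value, places) {
    var factor = Math.pow(10, places);
    return Math.floor(value * factor) / factor;
}

floorTo(0.1 + 0.2, 3); // 0.3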

There are libraries that seek to solve this problem, but if you don't want to include one of those (or can't for some reason, like working inside a GTM variable), then you can use this little function I wrote:
Usage:
var a = 194.1193;
var b = 159;
a - b; // returns 35.11930000000001
doDecimalSafeMath(a, '-', b); // returns 35.1193
Here's the function:
function doDecimalSafeMath(a, operation, b, precision) {
    function decimalLength(numStr) {
        var pieces = numStr.toString().split(".");
        if (!pieces[1]) return 0;
        return pieces[1].length;
    }

    // Figure out what we need to multiply by to make everything a whole number
    precision = precision || Math.pow(10, Math.max(decimalLength(a), decimalLength(b)));
    a = a * precision;
    b = b * precision;

    // Figure out which operation to perform.
    // (Guard with typeof so a custom function can be passed as the operation.)
    var operator;
    switch (typeof operation === 'string' ? operation.toLowerCase() : '') {
        case '-':
            operator = function(a, b) { return a - b; };
            break;
        case '+':
            operator = function(a, b) { return a + b; };
            break;
        case '*':
        case 'x':
            precision = precision * precision;
            operator = function(a, b) { return a * b; };
            break;
        case '÷':
        case '/':
            precision = 1;
            operator = function(a, b) { return a / b; };
            break;
        // Let us pass in a function to perform other operations.
        default:
            operator = operation;
    }

    var result = operator(a, b);
    // Remove our multiplier to put the decimal back.
    return result / precision;
}

Understanding rounding errors in floating point arithmetic is not for the faint-hearted! Basically, calculations are done as though there were infinitely many bits of precision available. The result is then rounded according to rules laid down in the relevant IEEE specifications.
This rounding can throw up some funky answers:
Math.floor(Math.log(1000000000) / Math.LN10) == 8 // true
This is an entire order of magnitude out: Math.log(1000000000) / Math.LN10 comes out just under 9 (about 8.999999999999998), so the floor drops it to 8. That's some rounding error!
For any floating point architecture, there is a number that represents the smallest interval between distinguishable numbers. It is called EPSILON.
It is now part of the ECMAScript standard as Number.EPSILON (since ES2015). On older engines, you can calculate it as follows:
function epsilon() {
    if ("EPSILON" in Number) {
        return Number.EPSILON;
    }
    var eps = 1.0;
    // Halve epsilon until we can no longer distinguish
    // 1 + (eps / 2) from 1
    do {
        eps /= 2.0;
    } while (1.0 + (eps / 2.0) != 1.0);
    return eps;
}
You can then use it, something like this:
function numericallyEquivalent(n, m) {
    var delta = Math.abs(n - m);
    return (delta < epsilon());
}
Or, since rounding errors can accumulate alarmingly, you may want to use delta / 2 or delta * delta rather than delta.

You need a bit of error control.
Make a little double comparing method:
int CompareDouble(Double a, Double b) {
    Double epsilon = 0.00000001; // maximum error allowed
    if ((a < b + epsilon) && (a > b - epsilon)) {
        return 0;
    } else if (a < b + epsilon) {
        return -1;
    } else {
        return 1;
    }
}

Since I ran into this while working with monetary values, I found a solution by simply changing the values to cents, so I did the following:
result = ((value1*100) + (value2*100))/100;
Working with monetary values, we have only two decimal places; that's why I multiplied and divided by 100.
If you're going to work with more decimal places, multiply and divide by the corresponding power of 10:
.0 -> 10
.00 -> 100
.000 -> 1000
.0000 -> 10000
...
With this, you'll always dodge working with decimal values.
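A small sketch of that idea, parameterized by the number of decimal places (the helper name and the extra Math.round on the scaled values are my additions; the rounding guards against the scaling itself introducing error):

function addFixed(value1, value2, places) {
    var scale = Math.pow(10, places); // 10, 100, 1000, ... as in the table above
    return (Math.round(value1 * scale) + Math.round(value2 * scale)) / scale;
}

addFixed(0.1, 0.2, 2); // 0.3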

Convert the decimals into integers by multiplication, then at the end convert the result back by dividing by the same number.
Example in your case:
(0.2 * 100 + 0.1 * 100) / 100 * 10 === 3

Related

How to calculate the exact sum of two floats in javascript [duplicate]

I have the following dummy test script:
function test() {
    var x = 0.1 * 0.2;
    document.write(x);
}
test();
This will print the result 0.020000000000000004 while it should just print 0.02 (if you use your calculator). As far as I understood this is due to errors in the floating point multiplication precision.
Does anyone have a good solution so that in such case I get the correct result 0.02? I know there are functions like toFixed or rounding would be another possibility, but I'd like to really have the whole number printed without any cutting and rounding. Just wanted to know if one of you has some nice, elegant solution.
Of course, otherwise I'll round to some 10 digits or so.
From the Floating-Point Guide:
What can I do to avoid this problem?
That depends on what kind of calculations you're doing.
If you really need your results to add up exactly, especially when you work with money: use a special decimal datatype.
If you just don't want to see all those extra decimal places: simply format your result rounded to a fixed number of decimal places when displaying it.
If you have no decimal datatype available, an alternative is to work with integers, e.g. do money calculations entirely in cents. But this is more work and has some drawbacks.
Note that the first point only applies if you really need specific precise decimal behaviour. Most people don't need that, they're just irritated that their programs don't work correctly with numbers like 1/10 without realizing that they wouldn't even blink at the same error if it occurred with 1/3.
If the first point really applies to you, use BigDecimal for JavaScript or DecimalJS, which actually solves the problem rather than providing an imperfect workaround.
I like Pedro Ladaria's solution and use something similar.
function strip(number) {
    return (parseFloat(number).toPrecision(12));
}
Unlike Pedro's solution, this will round up 0.999... repeating and is accurate to plus or minus one on the least significant digit.
Note: When dealing with 32 or 64 bit floats, you should use toPrecision(7) and toPrecision(15) for best results. See this question for info as to why.
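For example (note that toPrecision returns a string, so convert back to a number if you need one; the exact trailing zeros shown are illustrative):

strip(0.1 * 0.2);                  // a string such as "0.0200000000000"
Number(strip(0.1 * 0.2)) === 0.02; // true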
For the mathematically inclined: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
The recommended approach is to use correction factors (multiply by a suitable power of 10 so that the arithmetic happens between integers). For example, in the case of 0.1 * 0.2, the correction factor is 10, and you are performing the calculation:
> var x = 0.1
> var y = 0.2
> var cf = 10
> x * y
0.020000000000000004
> (x * cf) * (y * cf) / (cf * cf)
0.02
A (very quick) solution looks something like:
var _cf = (function() {
    function _shift(x) {
        var parts = x.toString().split('.');
        return (parts.length < 2) ? 1 : Math.pow(10, parts[1].length);
    }
    return function() {
        return Array.prototype.reduce.call(arguments, function (prev, next) {
            return prev === undefined || next === undefined ? undefined : Math.max(prev, _shift(next));
        }, -Infinity);
    };
})();

Math.a = function () {
    var f = _cf.apply(null, arguments);
    if (f === undefined) return undefined;
    function cb(x, y, i, o) { return x + f * y; }
    return Array.prototype.reduce.call(arguments, cb, 0) / f;
};

Math.s = function (l, r) { var f = _cf(l, r); return (l * f - r * f) / f; };

Math.m = function () {
    var f = _cf.apply(null, arguments);
    function cb(x, y, i, o) { return (x * f) * (y * f) / (f * f); }
    return Array.prototype.reduce.call(arguments, cb, 1);
};

Math.d = function (l, r) { var f = _cf(l, r); return (l * f) / (r * f); };
In this case:
> Math.m(0.1, 0.2)
0.02
I definitely recommend using a tested library like SinfulJS
Are you only performing multiplication? If so then you can use to your advantage a neat secret about decimal arithmetic. That is that NumberOfDecimals(X) + NumberOfDecimals(Y) = ExpectedNumberOfDecimals. That is to say that if we have 0.123 * 0.12 then we know that there will be 5 decimal places because 0.123 has 3 decimal places and 0.12 has two. Thus if JavaScript gave us a number like 0.014760000002 we can safely round to the 5th decimal place without fear of losing precision.
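A sketch of that idea (the function names here are just illustrative):

function decimalsOf(n) {
    var parts = String(n).split('.');
    return parts.length > 1 ? parts[1].length : 0;
}

function multiplyExact(x, y) {
    var places = decimalsOf(x) + decimalsOf(y); // expected number of decimals
    return Number((x * y).toFixed(places));
}

multiplyExact(0.123, 0.12); // 0.01476
multiplyExact(0.1, 0.2);    // 0.02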
Surprisingly, this function has not been posted yet although others have similar variations of it. It is from the MDN web docs for Math.round().
It's concise and allows for varying precision.
function precisionRound(number, precision) {
    var factor = Math.pow(10, precision);
    return Math.round(number * factor) / factor;
}
console.log(precisionRound(1234.5678, 1));
// expected output: 1234.6
console.log(precisionRound(1234.5678, -1));
// expected output: 1230
var inp = document.querySelectorAll('input');
var btn = document.querySelector('button');
btn.onclick = function() {
    inp[2].value = precisionRound(parseFloat(inp[0].value) * parseFloat(inp[1].value), 5);
};

// MDN function
function precisionRound(number, precision) {
    var factor = Math.pow(10, precision);
    return Math.round(number * factor) / factor;
}

button {
    display: block;
}
<input type='text' value='0.1'>
<input type='text' value='0.2'>
<button>Get Product</button>
<input type='text'>
UPDATE: Aug/20/2019
Just noticed this error. I believe it's due to a floating point precision error with Math.round().
precisionRound(1.005, 2) // produces 1, incorrect, should be 1.01
These conditions work correctly:
precisionRound(0.005, 2) // produces 0.01
precisionRound(1.0005, 3) // produces 1.001
precisionRound(1234.5, 0) // produces 1235
precisionRound(1234.5, -1) // produces 1230
Fix:
function precisionRoundMod(number, precision) {
    var factor = Math.pow(10, precision);
    var n = precision < 0 ? number : 0.01 / factor + number;
    return Math.round(n * factor) / factor;
}
This just adds a digit to the right when rounding decimals.
MDN has updated the Math.round() page so maybe someone could provide a better solution.
I'm finding BigNumber.js meets my needs.
A JavaScript library for arbitrary-precision decimal and non-decimal arithmetic.
It has good documentation and the author is very diligent responding to feedback.
The same author has 2 other similar libraries:
Big.js
A small, fast JavaScript library for arbitrary-precision decimal arithmetic. The little sister to bignumber.js.
and Decimal.js
An arbitrary-precision Decimal type for JavaScript.
Here's some code using BigNumber:
$(function() {
    var product = BigNumber(.1).times(.2);
    $('#product').text(product);

    var sum = BigNumber(.1).plus(.2);
    $('#sum').text(sum);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<!-- 1.4.1 is not the current version, but works for this example. -->
<script src="http://cdn.bootcss.com/bignumber.js/1.4.1/bignumber.min.js"></script>
.1 × .2 = <span id="product"></span><br>
.1 &plus; .2 = <span id="sum"></span><br>
You are looking for an sprintf implementation for JavaScript, so that you can write out floats with small errors in them (since they are stored in binary format) in a format that you expect.
Try javascript-sprintf, you would call it like this:
var yourString = sprintf("%.2f", yourNumber);
to print out your number as a float with two decimal places.
You may also use Number.toFixed() for display purposes, if you'd rather not include more files merely for floating point rounding to a given precision.
var times = function (a, b) {
    return Math.round((a * b) * 100) / 100;
};
---or---
var fpFix = function (n) {
    return Math.round(n * 100) / 100;
};
fpFix(0.1 * 0.2); // -> 0.02
---also---
var fpArithmetic = function (op, x, y) {
    var n = {
        '*': x * y,
        '-': x - y,
        '+': x + y,
        '/': x / y
    }[op];
    return Math.round(n * 100) / 100;
};
--- as in ---
fpArithmetic('*', 0.1, 0.2);
// 0.02
fpArithmetic('+', 0.1, 0.2);
// 0.3
fpArithmetic('-', 0.1, 0.2);
// -0.1
fpArithmetic('/', 0.2, 0.1);
// 2
You can use parseFloat() and toFixed() if you want to bypass this issue for a small operation:
a = 0.1;
b = 0.2;
a + b = 0.30000000000000004;
c = parseFloat((a+b).toFixed(2));
c = 0.3;
a = 0.3;
b = 0.2;
a - b = 0.09999999999999998;
c = parseFloat((a-b).toFixed(2));
c = 0.1;
You just have to make up your mind on how many decimal digits you actually want - can't have the cake and eat it too :-)
Numerical errors accumulate with every further operation and if you don't cut them off early they are just going to grow. Numerical libraries which present results that look clean simply cut off the last 2 digits at every step; numerical co-processors also have a "normal" and "full" length for the same reason. Cut-offs are cheap for a processor but very expensive for you in a script (multiplying, dividing and using pow(...)). A good math library would provide floor(x,n) to do the cut-off for you.
So at the very least you should make a global var/constant with pow(10,n) - meaning that you have decided on the precision you need :-) Then do:
Math.floor(x*PREC_LIM)/PREC_LIM // floor - you are cutting off, not rounding
You could also keep doing math and only cut off at the end - assuming that you are only displaying and not doing if-s with the results. If you can do that, then .toFixed(...) might be more efficient.
If you are doing if-s/comparisons and don't want to cut off, then you also need a small constant, usually called eps, which is one decimal place higher than the max expected error. Say that your cut-off is the last two decimals - then your eps has a 1 at the 3rd place from the last (3rd least significant) and you can use it to check whether the result is within the eps range of the expected value (0.02 - eps < 0.1*0.2 < 0.02 + eps).
Notice that for general purpose use, this behavior is likely to be acceptable.
The problem arises when comparing those floating points values to determine an appropriate action.
With the advent of ES6, a new constant Number.EPSILON is defined to determine the acceptable error margin:
So instead of performing the comparison like this
0.1 + 0.2 === 0.3 // which returns false
you can define a custom compare function, like this:
function epsEqu(x, y) {
    return Math.abs(x - y) < Number.EPSILON;
}
console.log(epsEqu(0.1+0.2, 0.3)); // true
Source : http://2ality.com/2015/04/numbers-math-es6.html#numberepsilon
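Note that Number.EPSILON is the gap between 1 and the next representable number above 1, so the comparison above can become too strict for operands much larger than 1. A hedged sketch of scaling the tolerance by magnitude (the default tolerance is just an illustrative choice):

function nearlyEqualRelative(x, y, relTol) {
    relTol = relTol || 1e-9; // pick what your application actually needs
    var scale = Math.max(1, Math.abs(x), Math.abs(y));
    return Math.abs(x - y) <= relTol * scale;
}

nearlyEqualRelative(1000.1 + 1000.2, 2000.3); // true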
The result you've got is correct and fairly consistent across floating point implementations in different languages, processors and operating systems - the only thing that changes is the level of the inaccuracy when the float is actually a double (or higher).
0.1 in binary floating points is like 1/3 in decimal (i.e. 0.3333333333333... forever), there's just no accurate way to handle it.
If you're dealing with floats always expect small rounding errors, so you'll also always have to round the displayed result to something sensible. In return you get very very fast and powerful arithmetic because all the computations are in the native binary of the processor.
Most of the time the solution is not to switch to fixed-point arithmetic, mainly because it's much slower and 99% of the time you just don't need the accuracy. If you're dealing with stuff that does need that level of accuracy (for instance financial transactions), JavaScript probably isn't the best tool to use anyway (since you'd want to enforce fixed-point types, a statically typed language is probably better).
If you're looking for the elegant solution then I'm afraid this is it: floats are quick but have small rounding errors - always round to something sensible when displaying their results.
The round() function at phpjs.org works nicely: http://phpjs.org/functions/round
num = .01 + .06; // yields 0.06999999999999999
rnum = round(num,12); // yields 0.07
decimal.js, big.js or bignumber.js can be used to avoid floating-point manipulation problems in Javascript:
0.1 * 0.2 // 0.020000000000000004
x = new Decimal(0.1)
y = x.times(0.2) // '0.02'
x.times(0.2).equals(0.02) // true
big.js: minimalist; easy-to-use; precision specified in decimal places; precision applied to division only.
bignumber.js: bases 2-64; configuration options; NaN; Infinity; precision specified in decimal places; precision applied to division only; base prefixes.
decimal.js: bases 2-64; configuration options; NaN; Infinity; non-integer powers, exp, ln, log; precision specified in significant digits; precision always applied; random numbers.
link to detailed comparisons
For things like 0.6 * 3 it's awesome!))
For me this works fine:
function dec(num) {
    var p = 100;
    return Math.round(num * p) / p;
}
Very very simple))
To avoid this you should work with integer values instead of floating points. So when you want to have 2 positions precision work with the values * 100, for 3 positions use 1000. When displaying you use a formatter to put in the separator.
Many systems omit working with decimals this way. That is the reason why many systems work with cents (as integer) instead of dollars/euro's (as floating point).
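For the display step, a built-in formatter such as Intl.NumberFormat can render the scaled integer back with the proper separators; a small sketch (the locale and currency here are just examples):

var cents = 1499 + 120; // all arithmetic stays in integers
var formatter = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' });
formatter.format(cents / 100); // "$16.19"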
not elegant but does the job (removes trailing zeros)
var num = 0.1*0.2;
alert(parseFloat(num.toFixed(10))); // shows 0.02
Problem
Floating point can't store all decimal values exactly. So when using floating point formats there will always be rounding errors on the input values.
The errors on the inputs of course result in errors in the output.
In the case of a discrete function or operator, there can be big differences in the output around the point where the function or operator is discontinuous.
Input and output for floating point values
So, when using floating point variables, you should always be aware of this. And whatever output you want from a calculation with floating points should always be formatted/conditioned before displaying with this in mind.
When only continuous functions and operators are used, rounding to the desired precision often will do (don't truncate). Standard formatting features used to convert floats to string will usually do this for you.
Because the rounding adds an error which can cause the total error to be more than half of the desired precision, the output should be corrected based on the expected precision of the inputs and the desired precision of the output. You should:
Round inputs to the expected precision or make sure no values can be entered with higher precision.
Add a small value to the outputs before rounding/formatting them, one which is smaller than or equal to 1/4 of the desired precision and bigger than the maximum expected error caused by rounding errors on input and during calculation. If that is not possible, the precision of the data type used isn't enough to deliver the desired output precision for your calculation.
These 2 things are usually not done and in most cases the differences caused by not doing them are too small to be important for most users, but I already had a project where output wasn't accepted by the users without those corrections.
Discrete functions or operators (like modulo)
When discrete operators or functions are involved, extra corrections might be required to make sure the output is as expected. Rounding and adding small corrections before rounding can't solve the problem.
A special check/correction on intermediate calculation results, immediately after applying the discrete function or operator might be required.
For a specific case (the modulo operator), see my answer to the question: Why does modulus operator return fractional number in javascript?
Better avoid having the problem
It is often more efficient to avoid these problems by using data types (integer or fixed point formats) for calculations like this which can store the expected input without rounding errors.
An example of that is that you should never use floating point values for financial calculations.
Elegant, Predictable, and Reusable
Let's deal with the problem in an elegant, reusable way. The following seven lines will let you access the floating point precision you desire on any number simply by appending .decimal to the end of the number, formula, or built-in Math function.
// First extend the native Number object to handle precision. This populates
// the functionality to all math operations.
Object.defineProperty(Number.prototype, "decimal", {
    get: function decimal() {
        Number.precision = "precision" in Number ? Number.precision : 3;
        var f = Math.pow(10, Number.precision);
        return Math.round(this * f) / f;
    }
});
// Now lets see how it works by adjusting our global precision level and
// checking our results.
console.log("'1/3 + 1/3 + 1/3 = 1' Right?");
console.log((0.3333 + 0.3333 + 0.3333).decimal == 1); // true
console.log(0.3333.decimal); // 0.333 - A raw 4 digit decimal, trimmed to 3...
Number.precision = 3;
console.log("Precision: 3");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0.01
console.log((0.0008 + 0.0002).decimal); // 0.001
Number.precision = 2;
console.log("Precision: 2");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0.01
console.log((0.0008 + 0.0002).decimal); // 0
Number.precision = 1;
console.log("Precision: 1");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0
console.log((0.0008 + 0.0002).decimal); // 0
Number.precision = 0;
console.log("Precision: 0");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0
console.log((0.008 + 0.002).decimal); // 0
console.log((0.0008 + 0.0002).decimal); // 0
Cheers!
Solved it by first making both numbers integers, executing the expression and afterwards dividing the result to get the decimal places back:
function evalMathematicalExpression(a, b, op) {
    const smallest = String(a < b ? a : b);
    const factor = smallest.length - smallest.indexOf('.');
    for (let i = 0; i < factor; i++) {
        b *= 10;
        a *= 10;
    }
    a = Math.round(a);
    b = Math.round(b);
    const m = 10 ** factor;
    switch (op) {
        case '+':
            return (a + b) / m;
        case '-':
            return (a - b) / m;
        case '*':
            return (a * b) / (m ** 2);
        case '/':
            return a / b;
    }
    throw `Unknown operator ${op}`;
}
Results for several operations (the numbers in parentheses are the raw results of plain evaluation):
0.1 + 0.002 = 0.102 (0.10200000000000001)
53 + 1000 = 1053 (1053)
0.1 - 0.3 = -0.2 (-0.19999999999999998)
53 - -1000 = 1053 (1053)
0.3 * 0.0003 = 0.00009 (0.00008999999999999999)
100 * 25 = 2500 (2500)
0.9 / 0.03 = 30 (30.000000000000004)
100 / 50 = 2 (2)
From my point of view, the idea here is to round the fp number in order to have a nice/short default string representation.
The 53-bit significand precision gives from 15 to 17 significant decimal digits of precision (2^-53 ≈ 1.11 × 10^-16).
If a decimal string with at most 15 significant digits is converted to IEEE 754 double-precision representation and then converted back to a decimal string with the same number of digits, the final result should match the original string. If an IEEE 754 double-precision number is converted to a decimal string with at least 17 significant digits and then converted back to double-precision representation, the final result must match the original number.
...
With the 52 bits of the fraction (F) significand appearing in the memory format, the total precision is therefore 53 bits (approximately 16 decimal digits, 53 × log10(2) ≈ 15.955). The bits are laid out as follows ... (Wikipedia)
(0.1).toPrecision(100) ->
0.1000000000000000055511151231257827021181583404541015625000000000000000000000000000000000000000000000
(0.1+0.2).toPrecision(100) ->
0.3000000000000000444089209850062616169452667236328125000000000000000000000000000000000000000000000000
Then, as far as I understand, we can round the value to 15 significant digits to keep a nice string representation.
10**Math.floor(53 * Math.log10(2)) // 1e15
eg.
Math.round((0.2+0.1) * 1e15 ) / 1e15
0.3
(Math.round((0.2+0.1) * 1e15 ) / 1e15).toPrecision(100)
0.2999999999999999888977697537484345957636833190917968750000000000000000000000000000000000000000000000
The function would be:
function roundNumberToHaveANiceDefaultStringRepresentation(num) {
    const integerDigits = Math.floor(Math.log10(Math.abs(num)) + 1);
    const mult = 10 ** (15 - integerDigits); // also consider integer digits
    return Math.round(num * mult) / mult;
}
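For example (note that this sketch assumes a non-zero input, since Math.log10(0) is -Infinity):

roundNumberToHaveANiceDefaultStringRepresentation(0.1 + 0.2); // 0.3
roundNumberToHaveANiceDefaultStringRepresentation(0.1 * 0.2); // 0.02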
Have a look at Fixed-point arithmetic. It will probably solve your problem, if the range of numbers you want to operate on is small (eg, currency). I would round it off to a few decimal values, which is the simplest solution.
You can't represent most decimal fractions exactly with binary floating point types (which is what ECMAScript uses to represent floating point values). So there isn't an elegant solution unless you use arbitrary precision arithmetic types or a decimal based floating point type. For example, the Calculator app that ships with Windows now uses arbitrary precision arithmetic to solve this problem.
You are right; the reason for that is the limited precision of floating point numbers. Store your rational numbers as a ratio of two integers and in most situations you'll be able to store numbers without any precision loss. When it comes to printing, you may want to display the result as a fraction. With the representation I proposed, it becomes trivial.
Of course that won't help much with irrational numbers. But you may want to optimize your computations in the way they will cause the least problem (e.g. detecting situations like sqrt(3)^2).
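A very small sketch of that representation (hypothetical helpers, with no reduction to lowest terms):

// Represent a rational as { n: numerator, d: denominator }, both integers.
function addRational(a, b)      { return { n: a.n * b.d + b.n * a.d, d: a.d * b.d }; }
function multiplyRational(a, b) { return { n: a.n * b.n, d: a.d * b.d }; }
function toNumber(r)            { return r.n / r.d; }

var tenth = { n: 1, d: 10 };
var fifth = { n: 1, d: 5 };
toNumber(addRational(tenth, fifth)); // 0.3 (held exactly as 15/50 until the final division)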
I had a nasty rounding error problem with mod 3. Sometimes when I should get 0 I would get .000...01. That's easy enough to handle, just test for <= .01. But then sometimes I would get 2.99999999999998. OUCH!
BigNumbers solved the problem, but introduced another, somewhat ironic, problem. When trying to load 8.5 into BigNumbers I was informed that it was really 8.4999… and had more than 15 significant digits. This meant BigNumbers could not accept it (I believe I mentioned this problem was somewhat ironic).
Simple solution to ironic problem:
x = Math.round(x*100);
// I only need 2 decimal places; if I needed 3 I would use 1000, etc.
x = x / 100;
xB = new BigNumber(x);
You can use the library https://github.com/MikeMcl/decimal.js/.
It will help a lot in getting a proper solution.
JavaScript console output: 95 * 722228.630 / 100 = 686117.1984999999
decimal.js implementation:
var firstNumber = new Decimal(95);
var secondNumber = new Decimal(722228.630);
var thirdNumber = new Decimal(100);
var partialOutput = firstNumber.times(secondNumber);
console.log(partialOutput);
var output = new Decimal(partialOutput).div(thirdNumber);
alert(output.valueOf());
console.log(output.valueOf()); // 686117.1985
Avoid dealing with floating points during the operation by using integers
As stated in the most-voted answer, you can work with integers: multiply all your factors by 10 for each decimal place you are working with, and divide the result by the same number afterwards.
For example, if you are working with 2 decimals, you multiply all your factors by 100 before doing the operation, and then divide the result by 100.
Here's an example, Result1 is the usual result, Result2 uses the solution:
var Factor1="1110.7";
var Factor2="2220.2";
var Result1=Number(Factor1)+Number(Factor2);
var Result2=((Number(Factor1)*100)+(Number(Factor2)*100))/100;
var Result3=(Number(parseFloat(Number(Factor1))+parseFloat(Number(Factor2))).toPrecision(2));
document.write("Result1: "+Result1+"<br>Result2: "+Result2+"<br>Result3: "+Result3);
The third result is to show what happens when using parseFloat instead, which created a conflict in our case.
I could not find a solution using the built in Number.EPSILON that's meant to help with this kind of problem, so here is my solution:
function round(value, precision) {
    const power = Math.pow(10, precision)
    return Math.round((value * power) + (Number.EPSILON * power)) / power
}
This uses Number.EPSILON, the smallest difference between 1 and the next floating point number greater than 1, to nudge values whose floating point representation ends up just below the rounding-up threshold.
Maximum precision is 15 for 64-bit floating point and 6 for 32-bit floating point. JavaScript numbers are 64-bit.
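For example, with the round function above:

round(1.005, 2);     // 1.01 (a plain Math.round(1.005 * 100) / 100 gives 1 here)
round(0.1 + 0.2, 1); // 0.3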
Try my chiliadic arithmetic library, which you can see here.
If you want a later version, I can get you one.
Use Number(1.234443).toFixed(2); it will print 1.23
function test() {
    var x = 0.1 * 0.2;
    document.write(Number(x).toFixed(2));
}
test();

Why .toFixed not same working if difference value? [duplicate]

I have the following dummy test script:
function test() {
var x = 0.1 * 0.2;
document.write(x);
}
test();
This will print the result 0.020000000000000004 while it should just print 0.02 (if you use your calculator). As far as I understood this is due to errors in the floating point multiplication precision.
Does anyone have a good solution so that in such case I get the correct result 0.02? I know there are functions like toFixed or rounding would be another possibility, but I'd like to really have the whole number printed without any cutting and rounding. Just wanted to know if one of you has some nice, elegant solution.
Of course, otherwise I'll round to some 10 digits or so.
From the Floating-Point Guide:
What can I do to avoid this problem?
That depends on what kind of
calculations you’re doing.
If you really need your results to add up exactly, especially when you
work with money: use a special decimal
datatype.
If you just don’t want to see all those extra decimal places: simply
format your result rounded to a fixed
number of decimal places when
displaying it.
If you have no decimal datatype available, an alternative is to work
with integers, e.g. do money
calculations entirely in cents. But
this is more work and has some
drawbacks.
Note that the first point only applies if you really need specific precise decimal behaviour. Most people don't need that, they're just irritated that their programs don't work correctly with numbers like 1/10 without realizing that they wouldn't even blink at the same error if it occurred with 1/3.
If the first point really applies to you, use BigDecimal for JavaScript or DecimalJS, which actually solves the problem rather than providing an imperfect workaround.
I like Pedro Ladaria's solution and use something similar.
function strip(number) {
return (parseFloat(number).toPrecision(12));
}
Unlike Pedros solution this will round up 0.999...repeating and is accurate to plus/minus one on the least significant digit.
Note: When dealing with 32 or 64 bit floats, you should use toPrecision(7) and toPrecision(15) for best results. See this question for info as to why.
For the mathematically inclined: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
The recommended approach is to use correction factors (multiply by a suitable power of 10 so that the arithmetic happens between integers). For example, in the case of 0.1 * 0.2, the correction factor is 10, and you are performing the calculation:
> var x = 0.1
> var y = 0.2
> var cf = 10
> x * y
0.020000000000000004
> (x * cf) * (y * cf) / (cf * cf)
0.02
A (very quick) solution looks something like:
var _cf = (function() {
function _shift(x) {
var parts = x.toString().split('.');
return (parts.length < 2) ? 1 : Math.pow(10, parts[1].length);
}
return function() {
return Array.prototype.reduce.call(arguments, function (prev, next) { return prev === undefined || next === undefined ? undefined : Math.max(prev, _shift (next)); }, -Infinity);
};
})();
Math.a = function () {
var f = _cf.apply(null, arguments); if(f === undefined) return undefined;
function cb(x, y, i, o) { return x + f * y; }
return Array.prototype.reduce.call(arguments, cb, 0) / f;
};
Math.s = function (l,r) { var f = _cf(l,r); return (l * f - r * f) / f; };
Math.m = function () {
var f = _cf.apply(null, arguments);
function cb(x, y, i, o) { return (x*f) * (y*f) / (f * f); }
return Array.prototype.reduce.call(arguments, cb, 1);
};
Math.d = function (l,r) { var f = _cf(l,r); return (l * f) / (r * f); };
In this case:
> Math.m(0.1, 0.2)
0.02
I definitely recommend using a tested library like SinfulJS
Are you only performing multiplication? If so then you can use to your advantage a neat secret about decimal arithmetic. That is that NumberOfDecimals(X) + NumberOfDecimals(Y) = ExpectedNumberOfDecimals. That is to say that if we have 0.123 * 0.12 then we know that there will be 5 decimal places because 0.123 has 3 decimal places and 0.12 has two. Thus if JavaScript gave us a number like 0.014760000002 we can safely round to the 5th decimal place without fear of losing precision.
Surprisingly, this function has not been posted yet although others have similar variations of it. It is from the MDN web docs for Math.round().
It's concise and allows for varying precision.
function precisionRound(number, precision) {
var factor = Math.pow(10, precision);
return Math.round(number * factor) / factor;
}
console.log(precisionRound(1234.5678, 1));
// expected output: 1234.6
console.log(precisionRound(1234.5678, -1));
// expected output: 1230
var inp = document.querySelectorAll('input');
var btn = document.querySelector('button');
btn.onclick = function(){
inp[2].value = precisionRound( parseFloat(inp[0].value) * parseFloat(inp[1].value) , 5 );
};
//MDN function
function precisionRound(number, precision) {
var factor = Math.pow(10, precision);
return Math.round(number * factor) / factor;
}
button{
display: block;
}
<input type='text' value='0.1'>
<input type='text' value='0.2'>
<button>Get Product</button>
<input type='text'>
UPDATE: Aug/20/2019
Just noticed this error. I believe it's due to a floating point precision error with Math.round().
precisionRound(1.005, 2) // produces 1, incorrect, should be 1.01
These conditions work correctly:
precisionRound(0.005, 2) // produces 0.01
precisionRound(1.0005, 3) // produces 1.001
precisionRound(1234.5, 0) // produces 1235
precisionRound(1234.5, -1) // produces 1230
Fix:
function precisionRoundMod(number, precision) {
var factor = Math.pow(10, precision);
var n = precision < 0 ? number : 0.01 / factor + number;
return Math.round( n * factor) / factor;
}
This just adds a digit to the right when rounding decimals.
MDN has updated the Math.round() page so maybe someone could provide a better solution.
I'm finding BigNumber.js meets my needs.
A JavaScript library for arbitrary-precision decimal and non-decimal arithmetic.
It has good documentation and the author is very diligent responding to feedback.
The same author has 2 other similar libraries:
Big.js
A small, fast JavaScript library for arbitrary-precision decimal arithmetic. The little sister to bignumber.js.
and Decimal.js
An arbitrary-precision Decimal type for JavaScript.
Here's some code using BigNumber:
$(function(){
var product = BigNumber(.1).times(.2);
$('#product').text(product);
var sum = BigNumber(.1).plus(.2);
$('#sum').text(sum);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<!-- 1.4.1 is not the current version, but works for this example. -->
<script src="http://cdn.bootcss.com/bignumber.js/1.4.1/bignumber.min.js"></script>
.1 × .2 = <span id="product"></span><br>
.1 &plus; .2 = <span id="sum"></span><br>
You are looking for an sprintf implementation for JavaScript, so that you can write out floats with small errors in them (since they are stored in binary format) in a format that you expect.
Try javascript-sprintf, you would call it like this:
var yourString = sprintf("%.2f", yourNumber);
to print out your number as a float with two decimal places.
You may also use Number.toFixed() for display purposes, if you'd rather not include more files merely for floating point rounding to a given precision.
var times = function (a, b) {
return Math.round((a * b) * 100)/100;
};
---or---
var fpFix = function (n) {
return Math.round(n * 100)/100;
};
fpFix(0.1*0.2); // -> 0.02
---also---
var fpArithmetic = function (op, x, y) {
var n = {
'*': x * y,
'-': x - y,
'+': x + y,
'/': x / y
}[op];
return Math.round(n * 100)/100;
};
--- as in ---
fpArithmetic('*', 0.1, 0.2);
// 0.02
fpArithmetic('+', 0.1, 0.2);
// 0.3
fpArithmetic('-', 0.1, 0.2);
// -0.1
fpArithmetic('/', 0.2, 0.1);
// 2
You can use parseFloat() and toFixed() if you want to bypass this issue for a small operation:
a = 0.1;
b = 0.2;
a + b = 0.30000000000000004;
c = parseFloat((a+b).toFixed(2));
c = 0.3;
a = 0.3;
b = 0.2;
a - b = 0.09999999999999998;
c = parseFloat((a-b).toFixed(2));
c = 0.1;
You just have to make up your mind on how many decimal digits you actually want - can't have the cake and eat it too :-)
Numerical errors accumulate with every further operation and if you don't cut it off early it's just going to grow. Numerical libraries which present results that look clean simply cut off the last 2 digits at every step, numerical co-processors also have a "normal" and "full" lenght for the same reason. Cuf-offs are cheap for a processor but very expensive for you in a script (multiplying and dividing and using pov(...)). Good math lib would provide floor(x,n) to do the cut-off for you.
So at the very least you should make global var/constant with pov(10,n) - meaning that you decided on the precision you need :-) Then do:
Math.floor(x*PREC_LIM)/PREC_LIM // floor - you are cutting off, not rounding
You could also keep doing math and only cut-off at the end - assuming that you are only displaying and not doing if-s with results. If you can do that, then .toFixed(...) might be more efficient.
If you are doing if-s/comparisons and don't want to cut of then you also need a small constant, usually called eps, which is one decimal place higher than max expected error. Say that your cut-off is last two decimals - then your eps has 1 at the 3rd place from the last (3rd least significant) and you can use it to compare whether the result is within eps range of expected (0.02 -eps < 0.1*0.2 < 0.02 +eps).
Notice that for the general purpose use, this behavior is likely to be acceptable.
The problem arises when comparing those floating points values to determine an appropriate action.
With the advent of ES6, a new constant Number.EPSILON is defined to determine the acceptable error margin :
So instead of performing the comparison like this
0.1 + 0.2 === 0.3 // which returns false
you can define a custom compare function, like this :
function epsEqu(x, y) {
return Math.abs(x - y) < Number.EPSILON;
}
console.log(epsEqu(0.1+0.2, 0.3)); // true
Source : http://2ality.com/2015/04/numbers-math-es6.html#numberepsilon
The result you've got is correct and fairly consistent across floating point implementations in different languages, processors and operating systems - the only thing that changes is the level of the inaccuracy when the float is actually a double (or higher).
0.1 in binary floating points is like 1/3 in decimal (i.e. 0.3333333333333... forever), there's just no accurate way to handle it.
If you're dealing with floats always expect small rounding errors, so you'll also always have to round the displayed result to something sensible. In return you get very very fast and powerful arithmetic because all the computations are in the native binary of the processor.
Most of the time the solution is not to switch to fixed-point arithmetic, mainly because it's much slower and 99% of the time you just don't need the accuracy. If you're dealing with stuff that does need that level of accuracy (for instance financial transactions) Javascript probably isn't the best tool to use anyway (as you've want to enforce the fixed-point types a static language is probably better).
You're looking for the elegant solution then I'm afraid this is it: floats are quick but have small rounding errors - always round to something sensible when displaying their results.
The round() function at phpjs.org works nicely: http://phpjs.org/functions/round
num = .01 + .06; // yields 0.0699999999999
rnum = round(num,12); // yields 0.07
decimal.js, big.js or bignumber.js can be used to avoid floating-point manipulation problems in Javascript:
0.1 * 0.2 // 0.020000000000000004
x = new Decimal(0.1)
y = x.times(0.2) // '0.2'
x.times(0.2).equals(0.2) // true
big.js: minimalist; easy-to-use; precision specified in decimal places; precision applied to division only.
bignumber.js: bases 2-64; configuration options; NaN; Infinity; precision specified in decimal places; precision applied to division only; base prefixes.
decimal.js: bases 2-64; configuration options; NaN; Infinity; non-integer powers, exp, ln, log; precision specified in significant digits; precision always applied; random numbers.
link to detailed comparisons
0.6 * 3 it's awesome!))
For me this works fine:
function dec( num )
{
var p = 100;
return Math.round( num * p ) / p;
}
Very very simple))
To avoid this you should work with integer values instead of floating points. So when you want to have 2 positions precision work with the values * 100, for 3 positions use 1000. When displaying you use a formatter to put in the separator.
Many systems omit working with decimals this way. That is the reason why many systems work with cents (as integer) instead of dollars/euro's (as floating point).
not elegant but does the job (removes trailing zeros)
var num = 0.1*0.2;
alert(parseFloat(num.toFixed(10))); // shows 0.02
Problem
Floating point can't store all decimal values exactly. So when using floating point formats there will always be rounding errors on the input values.
The errors on the inputs of course results on errors on the output.
In case of a discrete function or operator there can be big differences on the output around the point where the function or operator is discrete.
Input and output for floating point values
So, when using floating point variables, you should always be aware of this. And whatever output you want from a calculation with floating points should always be formatted/conditioned before displaying with this in mind.
When only continuous functions and operators are used, rounding to the desired precision often will do (don't truncate). Standard formatting features used to convert floats to string will usually do this for you.
Because the rounding adds an error which can cause the total error to be more then half of the desired precision, the output should be corrected based on expected precision of inputs and desired precision of output. You should
Round inputs to the expected precision or make sure no values can be entered with higher precision.
Add a small value to the outputs before rounding/formatting them which is smaller than or equal to 1/4 of the desired precision and bigger than the maximum expected error caused by rounding errors on input and during calculation. If that is not possible the combination of the precision of the used data type isn't enough to deliver the desired output precision for your calculation.
These 2 things are usually not done and in most cases the differences caused by not doing them are too small to be important for most users, but I already had a project where output wasn't accepted by the users without those corrections.
Discrete functions or operators (like modula)
When discrete operators or functions are involved, extra corrections might be required to make sure the output is as expected. Rounding and adding small corrections before rounding can't solve the problem.
A special check/correction on intermediate calculation results, immediately after applying the discrete function or operator might be required.
For a specific case (modula operator), see my answer on question: Why does modulus operator return fractional number in javascript?
Better avoid having the problem
It is often more efficient to avoid these problems by using data types (integer or fixed point formats) for calculations like this which can store the expected input without rounding errors.
An example of that is that you should never use floating point values for financial calculations.
Elegant, Predictable, and Reusable
Let's deal with the problem in an elegant way reusable way. The following seven lines will let you access the floating point precision you desire on any number simply by appending .decimal to the end of the number, formula, or built in Math function.
// First extend the native Number object to handle precision. This populates
// the functionality to all math operations.
Object.defineProperty(Number.prototype, "decimal", {
get: function decimal() {
Number.precision = "precision" in Number ? Number.precision : 3;
var f = Math.pow(10, Number.precision);
return Math.round( this * f ) / f;
}
});
// Now lets see how it works by adjusting our global precision level and
// checking our results.
console.log("'1/3 + 1/3 + 1/3 = 1' Right?");
console.log((0.3333 + 0.3333 + 0.3333).decimal == 1); // true
console.log(0.3333.decimal); // 0.333 - A raw 4 digit decimal, trimmed to 3...
Number.precision = 3;
console.log("Precision: 3");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0.01
console.log((0.0008 + 0.0002).decimal); // 0.001
Number.precision = 2;
console.log("Precision: 2");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0.01
console.log((0.0008 + 0.0002).decimal); // 0
Number.precision = 1;
console.log("Precision: 1");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0
console.log((0.0008 + 0.0002).decimal); // 0
Number.precision = 0;
console.log("Precision: 0");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0
console.log((0.008 + 0.002).decimal); // 0
console.log((0.0008 + 0.0002).decimal); // 0
Cheers!
Solved it by first making both numbers integers, executing the expression and afterwards dividing the result to get the decimal places back:
function evalMathematicalExpression(a, b, op) {
const smallest = String(a < b ? a : b);
const factor = smallest.length - smallest.indexOf('.');
for (let i = 0; i < factor; i++) {
b *= 10;
a *= 10;
}
a = Math.round(a);
b = Math.round(b);
const m = 10 ** factor;
switch (op) {
case '+':
return (a + b) / m;
case '-':
return (a - b) / m;
case '*':
return (a * b) / (m ** 2);
case '/':
return a / b;
}
throw `Unknown operator ${op}`;
}
Results for several operations (the excluded numbers are results from eval):
0.1 + 0.002 = 0.102 (0.10200000000000001)
53 + 1000 = 1053 (1053)
0.1 - 0.3 = -0.2 (-0.19999999999999998)
53 - -1000 = 1053 (1053)
0.3 * 0.0003 = 0.00009 (0.00008999999999999999)
100 * 25 = 2500 (2500)
0.9 / 0.03 = 30 (30.000000000000004)
100 / 50 = 2 (2)
From my point of view, the idea here is to round the fp number in order to have a nice/short default string representation.
The 53-bit significand precision gives from 15 to 17 significant decimal digits precision (2−53 ≈ 1.11 × 10−16).
If a decimal string with at most 15 significant digits is converted to IEEE 754 double-precision representation,
and then converted back to a decimal string with the same number of digits, the final result should match the original string.
If an IEEE 754 double-precision number is converted to a decimal string with at least 17 significant digits,
and then converted back to double-precision representation, the final result must match the original number.
...
With the 52 bits of the fraction (F) significand appearing in the memory format, the total precision is therefore 53 bits (approximately 16 decimal digits, 53 log10(2) ≈ 15.955). The bits are laid out as follows ... wikipedia
(0.1).toPrecision(100) ->
0.1000000000000000055511151231257827021181583404541015625000000000000000000000000000000000000000000000
(0.1+0.2).toPrecision(100) ->
0.3000000000000000444089209850062616169452667236328125000000000000000000000000000000000000000000000000
Then, as far as I understand, we can round the value up to 15 digits to keep a nice string representation.
10**Math.floor(53 * Math.log10(2)) // 1e15
eg.
Math.round((0.2+0.1) * 1e15 ) / 1e15
0.3
(Math.round((0.2+0.1) * 1e15 ) / 1e15).toPrecision(100)
0.2999999999999999888977697537484345957636833190917968750000000000000000000000000000000000000000000000
The function would be:
function roundNumberToHaveANiceDefaultStringRepresentation(num) {
const integerDigits = Math.floor(Math.log10(Math.abs(num))+1);
const mult = 10**(15-integerDigits); // also consider integer digits
return Math.round(num * mult) / mult;
}
Have a look at Fixed-point arithmetic. It will probably solve your problem, if the range of numbers you want to operate on is small (eg, currency). I would round it off to a few decimal values, which is the simplest solution.
You can't represent most decimal fractions exactly with binary floating point types (which is what ECMAScript uses to represent floating point values). So there isn't an elegant solution unless you use arbitrary precision arithmetic types or a decimal based floating point type. For example, the Calculator app that ships with Windows now uses arbitrary precision arithmetic to solve this problem.
You are right, the reason for that is limited precision of floating point numbers. Store your rational numbers as a division of two integer numbers and in most situations you'll be able to store numbers without any precision loss. When it comes to printing, you may want to display the result as fraction. With representation I proposed, it becomes trivial.
Of course that won't help much with irrational numbers. But you may want to optimize your computations in the way they will cause the least problem (e.g. detecting situations like sqrt(3)^2).
I had a nasty rounding error problem with mod 3. Sometimes when I should get 0 I would get .000...01. That's easy enough to handle, just test for <= .01. But then sometimes I would get 2.99999999999998. OUCH!
BigNumbers solved the problem, but introduced another, somewhat ironic, problem. When trying to load 8.5 into BigNumbers I was informed that it was really 8.4999… and had more than 15 significant digits. This meant BigNumbers could not accept it (I believe I mentioned this problem was somewhat ironic).
Simple solution to ironic problem:
x = Math.round(x*100);
// I only need 2 decimal places, if i needed 3 I would use 1,000, etc.
x = x / 100;
xB = new BigNumber(x);
You can use library https://github.com/MikeMcl/decimal.js/.
it will help lot to give proper solution.
javascript console output 95 *722228.630 /100 = 686117.1984999999
decimal library implementation
var firstNumber = new Decimal(95);
var secondNumber = new Decimal(722228.630);
var thirdNumber = new Decimal(100);
var partialOutput = firstNumber.times(secondNumber);
console.log(partialOutput);
var output = new Decimal(partialOutput).div(thirdNumber);
alert(output.valueOf());
console.log(output.valueOf())== 686117.1985
Avoid dealing with floating points during the operation using Integers
As stated on the most voted answer until now, you can work with integers, that would mean to multiply all your factors by 10 for each decimal you are working with, and divide the result by the same number used.
For example, if you are working with 2 decimals, you multiply all your factors by 100 before doing the operation, and then divide the result by 100.
Here's an example, Result1 is the usual result, Result2 uses the solution:
var Factor1="1110.7";
var Factor2="2220.2";
var Result1=Number(Factor1)+Number(Factor2);
var Result2=((Number(Factor1)*100)+(Number(Factor2)*100))/100;
var Result3=(Number(parseFloat(Number(Factor1))+parseFloat(Number(Factor2))).toPrecision(2));
document.write("Result1: "+Result1+"<br>Result2: "+Result2+"<br>Result3: "+Result3);
The third result is to show what happens when using parseFloat instead, which created a conflict in our case.
I could not find a solution using the built in Number.EPSILON that's meant to help with this kind of problem, so here is my solution:
function round(value, precision) {
const power = Math.pow(10, precision)
return Math.round((value*power)+(Number.EPSILON*power)) / power
}
This uses the known smallest difference between 1 and the smallest floating point number greater than one to fix the EPSILON rounding error ending up just one EPSILON below the rounding up threshold.
Maximum precision is 15 for 64bit floating point and 6 for 32bit floating point. Your javascript is likely 64bit.
Try my chiliadic arithmetic library, which you can see here.
If you want a later version, I can get you one.
Use Number(1.234443).toFixed(2); it will print 1.23
function test(){
var x = 0.1 * 0.2;
document.write(Number(x).toFixed(2));
}
test();

Cash register program doesnt work correctly [duplicate]

I have the following dummy test script:
function test() {
var x = 0.1 * 0.2;
document.write(x);
}
test();
This will print the result 0.020000000000000004 while it should just print 0.02 (if you use your calculator). As far as I understood this is due to errors in the floating point multiplication precision.
Does anyone have a good solution so that in such case I get the correct result 0.02? I know there are functions like toFixed or rounding would be another possibility, but I'd like to really have the whole number printed without any cutting and rounding. Just wanted to know if one of you has some nice, elegant solution.
Of course, otherwise I'll round to some 10 digits or so.
From the Floating-Point Guide:
What can I do to avoid this problem?
That depends on what kind of
calculations you’re doing.
If you really need your results to add up exactly, especially when you
work with money: use a special decimal
datatype.
If you just don’t want to see all those extra decimal places: simply
format your result rounded to a fixed
number of decimal places when
displaying it.
If you have no decimal datatype available, an alternative is to work
with integers, e.g. do money
calculations entirely in cents. But
this is more work and has some
drawbacks.
Note that the first point only applies if you really need specific precise decimal behaviour. Most people don't need that, they're just irritated that their programs don't work correctly with numbers like 1/10 without realizing that they wouldn't even blink at the same error if it occurred with 1/3.
If the first point really applies to you, use BigDecimal for JavaScript or DecimalJS, which actually solves the problem rather than providing an imperfect workaround.
I like Pedro Ladaria's solution and use something similar.
function strip(number) {
  return parseFloat(number).toPrecision(12);
}
Unlike Pedro's solution, this will round up 0.999...repeating and is accurate to plus or minus one on the least significant digit.
Note: When dealing with 32 or 64 bit floats, you should use toPrecision(7) and toPrecision(15) for best results. See this question for info as to why.
For the mathematically inclined: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
The recommended approach is to use correction factors (multiply by a suitable power of 10 so that the arithmetic happens between integers). For example, in the case of 0.1 * 0.2, the correction factor is 10, and you are performing the calculation:
> var x = 0.1
> var y = 0.2
> var cf = 10
> x * y
0.020000000000000004
> (x * cf) * (y * cf) / (cf * cf)
0.02
A (very quick) solution looks something like:
var _cf = (function() {
  function _shift(x) {
    var parts = x.toString().split('.');
    return (parts.length < 2) ? 1 : Math.pow(10, parts[1].length);
  }
  return function() {
    return Array.prototype.reduce.call(arguments, function (prev, next) {
      return prev === undefined || next === undefined ? undefined : Math.max(prev, _shift(next));
    }, -Infinity);
  };
})();
Math.a = function () {
  var f = _cf.apply(null, arguments);
  if (f === undefined) return undefined;
  function cb(x, y, i, o) { return x + f * y; }
  return Array.prototype.reduce.call(arguments, cb, 0) / f;
};
Math.s = function (l, r) { var f = _cf(l, r); return (l * f - r * f) / f; };
Math.m = function () {
  var f = _cf.apply(null, arguments);
  function cb(x, y, i, o) { return (x * f) * (y * f) / (f * f); }
  return Array.prototype.reduce.call(arguments, cb, 1);
};
Math.d = function (l, r) { var f = _cf(l, r); return (l * f) / (r * f); };
In this case:
> Math.m(0.1, 0.2)
0.02
I definitely recommend using a tested library like SinfulJS
Are you only performing multiplication? If so then you can use to your advantage a neat secret about decimal arithmetic. That is that NumberOfDecimals(X) + NumberOfDecimals(Y) = ExpectedNumberOfDecimals. That is to say that if we have 0.123 * 0.12 then we know that there will be 5 decimal places because 0.123 has 3 decimal places and 0.12 has two. Thus if JavaScript gave us a number like 0.014760000002 we can safely round to the 5th decimal place without fear of losing precision.
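As a minimal sketch of that observation (decimalsOf and multiplyExact are hypothetical helper names, not part of the original answer):
function decimalsOf(x) {
  var s = String(x);
  var dot = s.indexOf('.');
  return dot === -1 ? 0 : s.length - dot - 1;
}
function multiplyExact(x, y) {
  var places = decimalsOf(x) + decimalsOf(y); // e.g. 0.123 * 0.12 -> 5 places
  return Number((x * y).toFixed(places));
}
multiplyExact(0.123, 0.12); // 0.01476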
Surprisingly, this function has not been posted yet although others have similar variations of it. It is from the MDN web docs for Math.round().
It's concise and allows for varying precision.
function precisionRound(number, precision) {
  var factor = Math.pow(10, precision);
  return Math.round(number * factor) / factor;
}
console.log(precisionRound(1234.5678, 1));
// expected output: 1234.6
console.log(precisionRound(1234.5678, -1));
// expected output: 1230
var inp = document.querySelectorAll('input');
var btn = document.querySelector('button');
btn.onclick = function() {
  inp[2].value = precisionRound(parseFloat(inp[0].value) * parseFloat(inp[1].value), 5);
};
// MDN function
function precisionRound(number, precision) {
  var factor = Math.pow(10, precision);
  return Math.round(number * factor) / factor;
}
button {
  display: block;
}
<input type='text' value='0.1'>
<input type='text' value='0.2'>
<button>Get Product</button>
<input type='text'>
UPDATE: Aug/20/2019
Just noticed this error. I believe it's due to floating-point error in the intermediate value: 1.005 * 100 evaluates to 100.49999999999999, which Math.round() then rounds down.
precisionRound(1.005, 2) // produces 1, incorrect, should be 1.01
These conditions work correctly:
precisionRound(0.005, 2) // produces 0.01
precisionRound(1.0005, 3) // produces 1.001
precisionRound(1234.5, 0) // produces 1235
precisionRound(1234.5, -1) // produces 1230
Fix:
function precisionRoundMod(number, precision) {
  var factor = Math.pow(10, precision);
  var n = precision < 0 ? number : 0.01 / factor + number;
  return Math.round(n * factor) / factor;
}
This just adds a tiny positive offset (0.01 / factor) a couple of digits to the right of the kept decimals before rounding, which nudges values like 1.005 over the round-up threshold.
MDN has updated the Math.round() page so maybe someone could provide a better solution.
I'm finding BigNumber.js meets my needs.
A JavaScript library for arbitrary-precision decimal and non-decimal arithmetic.
It has good documentation and the author is very diligent responding to feedback.
The same author has 2 other similar libraries:
Big.js
A small, fast JavaScript library for arbitrary-precision decimal arithmetic. The little sister to bignumber.js.
and Decimal.js
An arbitrary-precision Decimal type for JavaScript.
Here's some code using BigNumber:
$(function() {
  var product = BigNumber(.1).times(.2);
  $('#product').text(product);
  var sum = BigNumber(.1).plus(.2);
  $('#sum').text(sum);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<!-- 1.4.1 is not the current version, but works for this example. -->
<script src="http://cdn.bootcss.com/bignumber.js/1.4.1/bignumber.min.js"></script>
.1 × .2 = <span id="product"></span><br>
.1 &plus; .2 = <span id="sum"></span><br>
You are looking for an sprintf implementation for JavaScript, so that you can write out floats with small errors in them (since they are stored in binary format) in a format that you expect.
Try javascript-sprintf, you would call it like this:
var yourString = sprintf("%.2f", yourNumber);
to print out your number as a float with two decimal places.
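For instance (assuming the javascript-sprintf library is loaded on the page):
var rounded = sprintf("%.2f", 0.1 * 0.2); // "0.02"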
You may also use Number.toFixed() for display purposes, if you'd rather not include more files merely for floating point rounding to a given precision.
var times = function (a, b) {
  return Math.round((a * b) * 100) / 100;
};
---or---
var fpFix = function (n) {
  return Math.round(n * 100) / 100;
};
fpFix(0.1 * 0.2); // -> 0.02
---also---
var fpArithmetic = function (op, x, y) {
  var n = {
    '*': x * y,
    '-': x - y,
    '+': x + y,
    '/': x / y
  }[op];
  return Math.round(n * 100) / 100;
};
--- as in ---
fpArithmetic('*', 0.1, 0.2);
// 0.02
fpArithmetic('+', 0.1, 0.2);
// 0.3
fpArithmetic('-', 0.1, 0.2);
// -0.1
fpArithmetic('/', 0.2, 0.1);
// 2
You can use parseFloat() and toFixed() if you want to bypass this issue for a small operation:
a = 0.1;
b = 0.2;
// a + b gives 0.30000000000000004
c = parseFloat((a + b).toFixed(2)); // c is 0.3

a = 0.3;
b = 0.2;
// a - b gives 0.09999999999999998
c = parseFloat((a - b).toFixed(2)); // c is 0.1
You just have to make up your mind on how many decimal digits you actually want - can't have the cake and eat it too :-)
Numerical errors accumulate with every further operation, and if you don't cut them off early they just keep growing. Numerical libraries which present results that look clean simply cut off the last 2 digits at every step; numerical co-processors also have a "normal" and "full" length for the same reason. Cut-offs are cheap for a processor but very expensive for you in a script (multiplying and dividing and using pow(...)). A good math library would provide floor(x, n) to do the cut-off for you.
So at the very least you should make a global variable/constant with pow(10, n) - meaning that you have decided on the precision you need :-) Then do:
Math.floor(x*PREC_LIM)/PREC_LIM // floor - you are cutting off, not rounding
You could also keep doing the math and only cut off at the end - assuming that you are only displaying results and not doing if-s with them. If you can do that, then .toFixed(...) might be more efficient.
If you are doing if-s/comparisons and don't want to cut off, then you also need a small constant, usually called eps, which is one decimal place finer than the maximum expected error. Say that your cut-off is the last two decimals - then your eps has a 1 at the 3rd place from the last (3rd least significant), and you can use it to check whether the result is within the eps range of what you expect (0.02 - eps < 0.1 * 0.2 < 0.02 + eps).
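As a rough sketch of that eps comparison (the concrete values here are just an assumption for a two-decimal cut-off):
var PREC_LIM = Math.pow(10, 2);                      // keep two decimals
var eps = 0.001;                                     // one place finer than the cut-off
var result = 0.1 * 0.2;                              // 0.020000000000000004
var cut = Math.floor(result * PREC_LIM) / PREC_LIM;  // 0.02
var closeEnough = (0.02 - eps < result) && (result < 0.02 + eps); // true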
Notice that for general-purpose use, this behavior is likely to be acceptable.
The problem arises when comparing those floating-point values to determine an appropriate action.
With the advent of ES6, a new constant Number.EPSILON is defined to determine the acceptable error margin:
So instead of performing the comparison like this
0.1 + 0.2 === 0.3 // which returns false
you can define a custom compare function, like this:
function epsEqu(x, y) {
  return Math.abs(x - y) < Number.EPSILON;
}
console.log(epsEqu(0.1+0.2, 0.3)); // true
Source : http://2ality.com/2015/04/numbers-math-es6.html#numberepsilon
The result you've got is correct and fairly consistent across floating point implementations in different languages, processors and operating systems - the only thing that changes is the level of the inaccuracy when the float is actually a double (or higher).
0.1 in binary floating points is like 1/3 in decimal (i.e. 0.3333333333333... forever), there's just no accurate way to handle it.
If you're dealing with floats always expect small rounding errors, so you'll also always have to round the displayed result to something sensible. In return you get very very fast and powerful arithmetic because all the computations are in the native binary of the processor.
Most of the time the solution is not to switch to fixed-point arithmetic, mainly because it's much slower and 99% of the time you just don't need the accuracy. If you're dealing with stuff that does need that level of accuracy (for instance financial transactions), JavaScript probably isn't the best tool to use anyway (as you'll want to enforce fixed-point types, a statically typed language is probably better).
If you're looking for the elegant solution then I'm afraid this is it: floats are quick but have small rounding errors - always round to something sensible when displaying their results.
The round() function at phpjs.org works nicely: http://phpjs.org/functions/round
num = .01 + .06; // yields 0.0699999999999
rnum = round(num,12); // yields 0.07
decimal.js, big.js or bignumber.js can be used to avoid floating-point manipulation problems in Javascript:
0.1 * 0.2                 // 0.020000000000000004
x = new Decimal(0.1)
y = x.times(0.2)          // '0.02'
x.times(0.2).equals(0.02) // true
big.js: minimalist; easy-to-use; precision specified in decimal places; precision applied to division only.
bignumber.js: bases 2-64; configuration options; NaN; Infinity; precision specified in decimal places; precision applied to division only; base prefixes.
decimal.js: bases 2-64; configuration options; NaN; Infinity; non-integer powers, exp, ln, log; precision specified in significant digits; precision always applied; random numbers.
link to detailed comparisons
Even a case like 0.6 * 3 comes out right. For me this works fine:
function dec(num) {
  var p = 100;
  return Math.round(num * p) / p;
}
Very, very simple.
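For instance, with the dec() function above:
dec(0.6 * 3);   // 1.8
dec(0.1 * 0.2); // 0.02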
To avoid this you should work with integer values instead of floating points. So when you want 2 decimal places of precision, work with the values * 100; for 3 places use 1000. When displaying, you use a formatter to put in the separator.
Many systems avoid working with decimals this way. That is the reason why many systems work with cents (as integers) instead of dollars/euros (as floating point).
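A minimal sketch of that idea (the price and tax rate are made-up values):
var priceCents = 1099;                          // $10.99 kept as integer cents
var taxCents = Math.round(priceCents * 0.0825); // made-up 8.25% tax rate -> 91
var totalCents = priceCents + taxCents;         // 1190
var display = (totalCents / 100).toFixed(2);    // "11.90" - format only for display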
not elegant but does the job (removes trailing zeros)
var num = 0.1*0.2;
alert(parseFloat(num.toFixed(10))); // shows 0.02
Problem
Floating point can't store all decimal values exactly. So when using floating point formats there will always be rounding errors on the input values.
The errors on the inputs of course result in errors on the output.
In the case of a discrete function or operator, there can be big differences in the output around the point where the function or operator is discrete.
Input and output for floating point values
So, when using floating point variables, you should always be aware of this. And whatever output you want from a calculation with floating points should always be formatted/conditioned before displaying with this in mind.
When only continuous functions and operators are used, rounding to the desired precision often will do (don't truncate). Standard formatting features used to convert floats to string will usually do this for you.
Because the rounding adds an error which can cause the total error to be more than half of the desired precision, the output should be corrected based on the expected precision of the inputs and the desired precision of the output. You should:
Round inputs to the expected precision, or make sure no values can be entered with higher precision.
Add a small value to the outputs before rounding/formatting them which is smaller than or equal to 1/4 of the desired precision and bigger than the maximum expected error caused by rounding errors on input and during calculation. If that is not possible, the precision of the data type used isn't enough to deliver the desired output precision for your calculation.
These 2 things are usually not done and in most cases the differences caused by not doing them are too small to be important for most users, but I already had a project where output wasn't accepted by the users without those corrections.
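As a rough sketch of the second correction above (assuming 2-decimal output and a correction of 1/4 of that precision):
function formatOutput(x) {
  var correction = 0.0025; // <= 1/4 of the 0.01 output precision, > the expected float error
  return (x + correction).toFixed(2);
}
(0.145).toFixed(2);  // "0.14" - the stored value is just below 0.145
formatOutput(0.145); // "0.15" - the correction pushes it over the threshold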
Discrete functions or operators (like modulo)
When discrete operators or functions are involved, extra corrections might be required to make sure the output is as expected. Rounding and adding small corrections before rounding can't solve the problem.
A special check/correction on intermediate calculation results, immediately after applying the discrete function or operator might be required.
For a specific case (the modulo operator), see my answer on the question: Why does modulus operator return fractional number in javascript?
Better avoid having the problem
It is often more efficient to avoid these problems by using data types (integer or fixed point formats) for calculations like this which can store the expected input without rounding errors.
An example of that is that you should never use floating point values for financial calculations.
Elegant, Predictable, and Reusable
Let's deal with the problem in an elegant, reusable way. The following seven lines will let you access the floating point precision you desire on any number simply by appending .decimal to the end of the number, formula, or built-in Math function.
// First extend the native Number object to handle precision. This populates
// the functionality to all math operations.
Object.defineProperty(Number.prototype, "decimal", {
  get: function decimal() {
    Number.precision = "precision" in Number ? Number.precision : 3;
    var f = Math.pow(10, Number.precision);
    return Math.round(this * f) / f;
  }
});
// Now lets see how it works by adjusting our global precision level and
// checking our results.
console.log("'1/3 + 1/3 + 1/3 = 1' Right?");
console.log((0.3333 + 0.3333 + 0.3333).decimal == 1); // true
console.log(0.3333.decimal); // 0.333 - A raw 4 digit decimal, trimmed to 3...
Number.precision = 3;
console.log("Precision: 3");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0.01
console.log((0.0008 + 0.0002).decimal); // 0.001
Number.precision = 2;
console.log("Precision: 2");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0.01
console.log((0.0008 + 0.0002).decimal); // 0
Number.precision = 1;
console.log("Precision: 1");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0
console.log((0.0008 + 0.0002).decimal); // 0
Number.precision = 0;
console.log("Precision: 0");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0
console.log((0.008 + 0.002).decimal); // 0
console.log((0.0008 + 0.0002).decimal); // 0
Cheers!
Solved it by first making both numbers integers, executing the expression and afterwards dividing the result to get the decimal places back:
function evalMathematicalExpression(a, b, op) {
  const smallest = String(a < b ? a : b);
  const factor = smallest.length - smallest.indexOf('.');
  for (let i = 0; i < factor; i++) {
    b *= 10;
    a *= 10;
  }
  a = Math.round(a);
  b = Math.round(b);
  const m = 10 ** factor;
  switch (op) {
    case '+':
      return (a + b) / m;
    case '-':
      return (a - b) / m;
    case '*':
      return (a * b) / (m ** 2);
    case '/':
      return a / b;
  }
  throw `Unknown operator ${op}`;
}
Results for several operations (the numbers in parentheses are the raw results of plain evaluation):
0.1 + 0.002 = 0.102 (0.10200000000000001)
53 + 1000 = 1053 (1053)
0.1 - 0.3 = -0.2 (-0.19999999999999998)
53 - -1000 = 1053 (1053)
0.3 * 0.0003 = 0.00009 (0.00008999999999999999)
100 * 25 = 2500 (2500)
0.9 / 0.03 = 30 (30.000000000000004)
100 / 50 = 2 (2)
From my point of view, the idea here is to round the fp number in order to have a nice/short default string representation.
The 53-bit significand precision gives from 15 to 17 significant decimal digits of precision (2^-53 ≈ 1.11 × 10^-16).
If a decimal string with at most 15 significant digits is converted to IEEE 754 double-precision representation,
and then converted back to a decimal string with the same number of digits, the final result should match the original string.
If an IEEE 754 double-precision number is converted to a decimal string with at least 17 significant digits,
and then converted back to double-precision representation, the final result must match the original number.
...
With the 52 bits of the fraction (F) significand appearing in the memory format, the total precision is therefore 53 bits (approximately 16 decimal digits, 53 × log10(2) ≈ 15.955). The bits are laid out as follows ... wikipedia
(0.1).toPrecision(100) ->
0.1000000000000000055511151231257827021181583404541015625000000000000000000000000000000000000000000000
(0.1+0.2).toPrecision(100) ->
0.3000000000000000444089209850062616169452667236328125000000000000000000000000000000000000000000000000
Then, as far as I understand, we can round the value up to 15 digits to keep a nice string representation.
10**Math.floor(53 * Math.log10(2)) // 1e15
eg.
Math.round((0.2+0.1) * 1e15 ) / 1e15
0.3
(Math.round((0.2+0.1) * 1e15 ) / 1e15).toPrecision(100)
0.2999999999999999888977697537484345957636833190917968750000000000000000000000000000000000000000000000
The function would be:
function roundNumberToHaveANiceDefaultStringRepresentation(num) {
  const integerDigits = Math.floor(Math.log10(Math.abs(num)) + 1);
  const mult = 10 ** (15 - integerDigits); // also consider integer digits
  return Math.round(num * mult) / mult;
}
Have a look at Fixed-point arithmetic. It will probably solve your problem, if the range of numbers you want to operate on is small (eg, currency). I would round it off to a few decimal values, which is the simplest solution.
You can't represent most decimal fractions exactly with binary floating point types (which is what ECMAScript uses to represent floating point values). So there isn't an elegant solution unless you use arbitrary precision arithmetic types or a decimal based floating point type. For example, the Calculator app that ships with Windows now uses arbitrary precision arithmetic to solve this problem.
You are right, the reason for that is the limited precision of floating-point numbers. Store your rational numbers as a division of two integers, and in most situations you'll be able to store numbers without any precision loss. When it comes to printing, you may want to display the result as a fraction. With the representation I proposed, it becomes trivial.
Of course that won't help much with irrational numbers. But you may want to optimize your computations in the way they will cause the least problem (e.g. detecting situations like sqrt(3)^2).
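A minimal sketch of storing rationals as integer numerator/denominator pairs (addFractions is a hypothetical helper, not from this answer):
function addFractions(a, b) { // a and b are [numerator, denominator] pairs
  return [a[0] * b[1] + b[0] * a[1], a[1] * b[1]];
}
addFractions([1, 10], [2, 10]); // [30, 100], i.e. exactly 3/10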
I had a nasty rounding error problem with mod 3. Sometimes when I should get 0 I would get .000...01. That's easy enough to handle, just test for <= .01. But then sometimes I would get 2.99999999999998. OUCH!
BigNumbers solved the problem, but introduced another, somewhat ironic, problem. When trying to load 8.5 into BigNumbers I was informed that it was really 8.4999… and had more than 15 significant digits. This meant BigNumbers could not accept it (I believe I mentioned this problem was somewhat ironic).
Simple solution to ironic problem:
x = Math.round(x * 100);
// I only need 2 decimal places; if I needed 3 I would use 1,000, etc.
x = x / 100;
xB = new BigNumber(x);

How to do a addition for the number which is in string format [duplicate]

I have the following dummy test script:
function test() {
var x = 0.1 * 0.2;
document.write(x);
}
test();
This will print the result 0.020000000000000004 while it should just print 0.02 (if you use your calculator). As far as I understood this is due to errors in the floating point multiplication precision.
Does anyone have a good solution so that in such case I get the correct result 0.02? I know there are functions like toFixed or rounding would be another possibility, but I'd like to really have the whole number printed without any cutting and rounding. Just wanted to know if one of you has some nice, elegant solution.
Of course, otherwise I'll round to some 10 digits or so.
From the Floating-Point Guide:
What can I do to avoid this problem?
That depends on what kind of
calculations you’re doing.
If you really need your results to add up exactly, especially when you
work with money: use a special decimal
datatype.
If you just don’t want to see all those extra decimal places: simply
format your result rounded to a fixed
number of decimal places when
displaying it.
If you have no decimal datatype available, an alternative is to work
with integers, e.g. do money
calculations entirely in cents. But
this is more work and has some
drawbacks.
Note that the first point only applies if you really need specific precise decimal behaviour. Most people don't need that, they're just irritated that their programs don't work correctly with numbers like 1/10 without realizing that they wouldn't even blink at the same error if it occurred with 1/3.
If the first point really applies to you, use BigDecimal for JavaScript or DecimalJS, which actually solves the problem rather than providing an imperfect workaround.
I like Pedro Ladaria's solution and use something similar.
function strip(number) {
return (parseFloat(number).toPrecision(12));
}
Unlike Pedros solution this will round up 0.999...repeating and is accurate to plus/minus one on the least significant digit.
Note: When dealing with 32 or 64 bit floats, you should use toPrecision(7) and toPrecision(15) for best results. See this question for info as to why.
For the mathematically inclined: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
The recommended approach is to use correction factors (multiply by a suitable power of 10 so that the arithmetic happens between integers). For example, in the case of 0.1 * 0.2, the correction factor is 10, and you are performing the calculation:
> var x = 0.1
> var y = 0.2
> var cf = 10
> x * y
0.020000000000000004
> (x * cf) * (y * cf) / (cf * cf)
0.02
A (very quick) solution looks something like:
var _cf = (function() {
function _shift(x) {
var parts = x.toString().split('.');
return (parts.length < 2) ? 1 : Math.pow(10, parts[1].length);
}
return function() {
return Array.prototype.reduce.call(arguments, function (prev, next) { return prev === undefined || next === undefined ? undefined : Math.max(prev, _shift (next)); }, -Infinity);
};
})();
Math.a = function () {
var f = _cf.apply(null, arguments); if(f === undefined) return undefined;
function cb(x, y, i, o) { return x + f * y; }
return Array.prototype.reduce.call(arguments, cb, 0) / f;
};
Math.s = function (l,r) { var f = _cf(l,r); return (l * f - r * f) / f; };
Math.m = function () {
var f = _cf.apply(null, arguments);
function cb(x, y, i, o) { return (x*f) * (y*f) / (f * f); }
return Array.prototype.reduce.call(arguments, cb, 1);
};
Math.d = function (l,r) { var f = _cf(l,r); return (l * f) / (r * f); };
In this case:
> Math.m(0.1, 0.2)
0.02
I definitely recommend using a tested library like SinfulJS
Are you only performing multiplication? If so then you can use to your advantage a neat secret about decimal arithmetic. That is that NumberOfDecimals(X) + NumberOfDecimals(Y) = ExpectedNumberOfDecimals. That is to say that if we have 0.123 * 0.12 then we know that there will be 5 decimal places because 0.123 has 3 decimal places and 0.12 has two. Thus if JavaScript gave us a number like 0.014760000002 we can safely round to the 5th decimal place without fear of losing precision.
Surprisingly, this function has not been posted yet although others have similar variations of it. It is from the MDN web docs for Math.round().
It's concise and allows for varying precision.
function precisionRound(number, precision) {
var factor = Math.pow(10, precision);
return Math.round(number * factor) / factor;
}
console.log(precisionRound(1234.5678, 1));
// expected output: 1234.6
console.log(precisionRound(1234.5678, -1));
// expected output: 1230
var inp = document.querySelectorAll('input');
var btn = document.querySelector('button');
btn.onclick = function(){
inp[2].value = precisionRound( parseFloat(inp[0].value) * parseFloat(inp[1].value) , 5 );
};
//MDN function
function precisionRound(number, precision) {
var factor = Math.pow(10, precision);
return Math.round(number * factor) / factor;
}
button{
display: block;
}
<input type='text' value='0.1'>
<input type='text' value='0.2'>
<button>Get Product</button>
<input type='text'>
UPDATE: Aug/20/2019
Just noticed this error. I believe it's due to a floating point precision error with Math.round().
precisionRound(1.005, 2) // produces 1, incorrect, should be 1.01
These conditions work correctly:
precisionRound(0.005, 2) // produces 0.01
precisionRound(1.0005, 3) // produces 1.001
precisionRound(1234.5, 0) // produces 1235
precisionRound(1234.5, -1) // produces 1230
Fix:
function precisionRoundMod(number, precision) {
var factor = Math.pow(10, precision);
var n = precision < 0 ? number : 0.01 / factor + number;
return Math.round( n * factor) / factor;
}
This just adds a digit to the right when rounding decimals.
MDN has updated the Math.round() page so maybe someone could provide a better solution.
I'm finding BigNumber.js meets my needs.
A JavaScript library for arbitrary-precision decimal and non-decimal arithmetic.
It has good documentation and the author is very diligent responding to feedback.
The same author has 2 other similar libraries:
Big.js
A small, fast JavaScript library for arbitrary-precision decimal arithmetic. The little sister to bignumber.js.
and Decimal.js
An arbitrary-precision Decimal type for JavaScript.
Here's some code using BigNumber:
$(function(){
var product = BigNumber(.1).times(.2);
$('#product').text(product);
var sum = BigNumber(.1).plus(.2);
$('#sum').text(sum);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<!-- 1.4.1 is not the current version, but works for this example. -->
<script src="http://cdn.bootcss.com/bignumber.js/1.4.1/bignumber.min.js"></script>
.1 × .2 = <span id="product"></span><br>
.1 &plus; .2 = <span id="sum"></span><br>
You are looking for an sprintf implementation for JavaScript, so that you can write out floats with small errors in them (since they are stored in binary format) in a format that you expect.
Try javascript-sprintf, you would call it like this:
var yourString = sprintf("%.2f", yourNumber);
to print out your number as a float with two decimal places.
You may also use Number.toFixed() for display purposes, if you'd rather not include more files merely for floating point rounding to a given precision.
var times = function (a, b) {
return Math.round((a * b) * 100)/100;
};
---or---
var fpFix = function (n) {
return Math.round(n * 100)/100;
};
fpFix(0.1*0.2); // -> 0.02
---also---
var fpArithmetic = function (op, x, y) {
var n = {
'*': x * y,
'-': x - y,
'+': x + y,
'/': x / y
}[op];
return Math.round(n * 100)/100;
};
--- as in ---
fpArithmetic('*', 0.1, 0.2);
// 0.02
fpArithmetic('+', 0.1, 0.2);
// 0.3
fpArithmetic('-', 0.1, 0.2);
// -0.1
fpArithmetic('/', 0.2, 0.1);
// 2
You can use parseFloat() and toFixed() if you want to bypass this issue for a small operation:
a = 0.1;
b = 0.2;
a + b = 0.30000000000000004;
c = parseFloat((a+b).toFixed(2));
c = 0.3;
a = 0.3;
b = 0.2;
a - b = 0.09999999999999998;
c = parseFloat((a-b).toFixed(2));
c = 0.1;
You just have to make up your mind on how many decimal digits you actually want - can't have the cake and eat it too :-)
Numerical errors accumulate with every further operation and if you don't cut it off early it's just going to grow. Numerical libraries which present results that look clean simply cut off the last 2 digits at every step, numerical co-processors also have a "normal" and "full" lenght for the same reason. Cuf-offs are cheap for a processor but very expensive for you in a script (multiplying and dividing and using pov(...)). Good math lib would provide floor(x,n) to do the cut-off for you.
So at the very least you should make global var/constant with pov(10,n) - meaning that you decided on the precision you need :-) Then do:
Math.floor(x*PREC_LIM)/PREC_LIM // floor - you are cutting off, not rounding
You could also keep doing math and only cut-off at the end - assuming that you are only displaying and not doing if-s with results. If you can do that, then .toFixed(...) might be more efficient.
If you are doing if-s/comparisons and don't want to cut of then you also need a small constant, usually called eps, which is one decimal place higher than max expected error. Say that your cut-off is last two decimals - then your eps has 1 at the 3rd place from the last (3rd least significant) and you can use it to compare whether the result is within eps range of expected (0.02 -eps < 0.1*0.2 < 0.02 +eps).
Notice that for the general purpose use, this behavior is likely to be acceptable.
The problem arises when comparing those floating points values to determine an appropriate action.
With the advent of ES6, a new constant Number.EPSILON is defined to determine the acceptable error margin :
So instead of performing the comparison like this
0.1 + 0.2 === 0.3 // which returns false
you can define a custom compare function, like this :
function epsEqu(x, y) {
return Math.abs(x - y) < Number.EPSILON;
}
console.log(epsEqu(0.1+0.2, 0.3)); // true
Source : http://2ality.com/2015/04/numbers-math-es6.html#numberepsilon
The result you've got is correct and fairly consistent across floating point implementations in different languages, processors and operating systems - the only thing that changes is the level of the inaccuracy when the float is actually a double (or higher).
0.1 in binary floating point is like 1/3 in decimal (i.e. 0.3333333333333... forever); there's just no accurate way to handle it.
If you're dealing with floats always expect small rounding errors, so you'll also always have to round the displayed result to something sensible. In return you get very very fast and powerful arithmetic because all the computations are in the native binary of the processor.
Most of the time the solution is not to switch to fixed-point arithmetic, mainly because it's much slower and 99% of the time you just don't need the accuracy. If you're dealing with stuff that does need that level of accuracy (for instance financial transactions), JavaScript probably isn't the best tool to use anyway (since you'll want to enforce the fixed-point types, a statically typed language is probably better).
If you're looking for the elegant solution then I'm afraid this is it: floats are quick but have small rounding errors - always round to something sensible when displaying their results.
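One way to do that display-time rounding, for example, is with the built-in Intl.NumberFormat (the locale and digit count below are arbitrary choices):
var total = 0.1 + 0.2;   // 0.30000000000000004 internally
var fmt = new Intl.NumberFormat('en-US', { maximumFractionDigits: 2 });
console.log(fmt.format(total));   // "0.3" - rounded only for display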
The round() function at phpjs.org works nicely: http://phpjs.org/functions/round
num = .01 + .06; // yields 0.06999999999999999
rnum = round(num,12); // yields 0.07
decimal.js, big.js or bignumber.js can be used to avoid floating-point manipulation problems in Javascript:
0.1 * 0.2 // 0.020000000000000004
x = new Decimal(0.1)
y = x.times(0.2) // '0.02'
x.times(0.2).equals(0.02) // true
big.js: minimalist; easy-to-use; precision specified in decimal places; precision applied to division only.
bignumber.js: bases 2-64; configuration options; NaN; Infinity; precision specified in decimal places; precision applied to division only; base prefixes.
decimal.js: bases 2-64; configuration options; NaN; Infinity; non-integer powers, exp, ln, log; precision specified in significant digits; precision always applied; random numbers.
link to detailed comparisons
Even 0.6 * 3 comes out right with this - it's awesome!
For me this works fine:
function dec( num )
{
var p = 100;
return Math.round( num * p ) / p;
}
Very, very simple!
To avoid this you should work with integer values instead of floating point. So when you want two decimal places of precision, work with the values * 100; for three places use 1000. When displaying, you use a formatter to put in the separator.
Many systems avoid working with decimals this way. That is the reason why many systems work with cents (as integers) instead of dollars/euros (as floating point).
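A small sketch of that idea (the names and the two-decimal currency are just an example):
// Store money as integer cents, not floating point dollars
var priceCents = 1499;                         // $14.99
var taxCents = Math.round(priceCents * 0.08);  // 120 (8% tax, rounded once)
var totalCents = priceCents + taxCents;        // 1619 - pure integer math
var display = (totalCents / 100).toFixed(2);   // "16.19" - formatted only for display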
Not elegant, but it does the job (and removes trailing zeros):
var num = 0.1*0.2;
alert(parseFloat(num.toFixed(10))); // shows 0.02
Problem
Floating point can't store all decimal values exactly. So when using floating point formats there will always be rounding errors on the input values.
The errors on the inputs of course result in errors on the output.
In the case of a discrete function or operator, there can be big differences in the output around the point where the function or operator is discrete.
Input and output for floating point values
So, when using floating point variables, you should always be aware of this. And whatever output you want from a calculation with floating points should always be formatted/conditioned before displaying with this in mind.
When only continuous functions and operators are used, rounding to the desired precision often will do (don't truncate). Standard formatting features used to convert floats to string will usually do this for you.
Because the rounding adds an error which can cause the total error to be more than half of the desired precision, the output should be corrected based on the expected precision of the inputs and the desired precision of the output. You should:
Round inputs to the expected precision or make sure no values can be entered with higher precision.
Add a small value to the outputs before rounding/formatting them, one that is smaller than or equal to 1/4 of the desired precision and bigger than the maximum expected error caused by rounding errors on input and during calculation. If that is not possible, the precision of the data type used simply isn't enough to deliver the desired output precision for your calculation.
These two things are usually not done, and in most cases the differences caused by not doing them are too small to matter for most users, but I have already had a project where the output wasn't accepted by the users without those corrections.
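A rough sketch of those two corrections, assuming inputs entered with at most two decimals and two decimals of desired output precision (the constant below is illustrative):
// 1. Round inputs to the expected precision (two decimals here)
function roundInput(x) {
    return Math.round(x * 100) / 100;
}

// 2. Before formatting, add a correction that is <= 1/4 of the desired
//    precision (0.0025 here) and larger than the expected accumulated error
var CORRECTION = 0.000001;
function formatResult(x) {
    return (x + CORRECTION).toFixed(2);
}

formatResult(roundInput(0.1) + roundInput(0.2));   // "0.30"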
Discrete functions or operators (like modulo)
When discrete operators or functions are involved, extra corrections might be required to make sure the output is as expected. Rounding and adding small corrections before rounding can't solve the problem.
A special check/correction on intermediate calculation results, immediately after applying the discrete function or operator might be required.
For a specific case (the modulo operator), see my answer on the question: Why does modulus operator return fractional number in javascript?
Better avoid having the problem
It is often more efficient to avoid these problems by doing calculations like this with data types (integer or fixed-point formats) that can store the expected input without rounding errors.
An example of that is that you should never use floating point values for financial calculations.
Elegant, Predictable, and Reusable
Let's deal with the problem in an elegant, reusable way. The following seven lines will let you access the floating point precision you desire on any number simply by appending .decimal to the end of the number, formula, or built-in Math function.
// First extend the native Number object to handle precision. This populates
// the functionality to all math operations.
Object.defineProperty(Number.prototype, "decimal", {
get: function decimal() {
Number.precision = "precision" in Number ? Number.precision : 3;
var f = Math.pow(10, Number.precision);
return Math.round( this * f ) / f;
}
});
// Now lets see how it works by adjusting our global precision level and
// checking our results.
console.log("'1/3 + 1/3 + 1/3 = 1' Right?");
console.log((0.3333 + 0.3333 + 0.3333).decimal == 1); // true
console.log(0.3333.decimal); // 0.333 - A raw 4 digit decimal, trimmed to 3...
Number.precision = 3;
console.log("Precision: 3");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0.01
console.log((0.0008 + 0.0002).decimal); // 0.001
Number.precision = 2;
console.log("Precision: 2");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0.01
console.log((0.0008 + 0.0002).decimal); // 0
Number.precision = 1;
console.log("Precision: 1");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0
console.log((0.0008 + 0.0002).decimal); // 0
Number.precision = 0;
console.log("Precision: 0");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0
console.log((0.008 + 0.002).decimal); // 0
console.log((0.0008 + 0.0002).decimal); // 0
Cheers!
I solved it by first making both numbers integers, executing the expression, and afterwards dividing the result to get the decimal places back:
function evalMathematicalExpression(a, b, op) {
    // Scale by the operand with the most decimal places (not simply the
    // smaller operand) so both values become integers; assumes the operands
    // stringify in plain decimal notation.
    const decimalsOf = (n) => {
        const s = String(n);
        const i = s.indexOf('.');
        return i === -1 ? 0 : s.length - i - 1;
    };
    const factor = Math.max(decimalsOf(a), decimalsOf(b));
    for (let i = 0; i < factor; i++) {
        b *= 10;
        a *= 10;
    }
    a = Math.round(a);
    b = Math.round(b);
    const m = 10 ** factor;
    switch (op) {
        case '+':
            return (a + b) / m;
        case '-':
            return (a - b) / m;
        case '*':
            return (a * b) / (m ** 2);
        case '/':
            return a / b;
    }
    throw new Error(`Unknown operator ${op}`);
}
Results for several operations (the numbers in parentheses are the results of plain evaluation):
0.1 + 0.002 = 0.102 (0.10200000000000001)
53 + 1000 = 1053 (1053)
0.1 - 0.3 = -0.2 (-0.19999999999999998)
53 - -1000 = 1053 (1053)
0.3 * 0.0003 = 0.00009 (0.00008999999999999999)
100 * 25 = 2500 (2500)
0.9 / 0.03 = 30 (30.000000000000004)
100 / 50 = 2 (2)
From my point of view, the idea here is to round the fp number in order to have a nice/short default string representation.
The 53-bit significand precision gives from 15 to 17 significant decimal digits of precision (2^-53 ≈ 1.11 × 10^-16).
If a decimal string with at most 15 significant digits is converted to IEEE 754 double-precision representation,
and then converted back to a decimal string with the same number of digits, the final result should match the original string.
If an IEEE 754 double-precision number is converted to a decimal string with at least 17 significant digits,
and then converted back to double-precision representation, the final result must match the original number.
...
With the 52 bits of the fraction (F) significand appearing in the memory format, the total precision is therefore 53 bits (approximately 16 decimal digits, 53 × log10(2) ≈ 15.955). The bits are laid out as follows ... wikipedia
(0.1).toPrecision(100) ->
0.1000000000000000055511151231257827021181583404541015625000000000000000000000000000000000000000000000
(0.1+0.2).toPrecision(100) ->
0.3000000000000000444089209850062616169452667236328125000000000000000000000000000000000000000000000000
Then, as far as I understand, we can round the value to 15 significant digits to keep a nice string representation.
10**Math.floor(53 * Math.log10(2)) // 1e15
eg.
Math.round((0.2+0.1) * 1e15 ) / 1e15
0.3
(Math.round((0.2+0.1) * 1e15 ) / 1e15).toPrecision(100)
0.2999999999999999888977697537484345957636833190917968750000000000000000000000000000000000000000000000
The function would be:
function roundNumberToHaveANiceDefaultStringRepresentation(num) {
    if (num === 0) return 0; // Math.log10(0) is -Infinity
    const integerDigits = Math.floor(Math.log10(Math.abs(num)) + 1);
    const mult = 10 ** (15 - integerDigits); // also consider integer digits
    return Math.round(num * mult) / mult;
}
Have a look at Fixed-point arithmetic. It will probably solve your problem if the range of numbers you want to operate on is small (e.g., currency). I would round it off to a few decimal values, which is the simplest solution.
You can't represent most decimal fractions exactly with binary floating point types (which is what ECMAScript uses to represent floating point values). So there isn't an elegant solution unless you use arbitrary precision arithmetic types or a decimal based floating point type. For example, the Calculator app that ships with Windows now uses arbitrary precision arithmetic to solve this problem.
You are right; the reason for that is the limited precision of floating point numbers. Store your rational numbers as a ratio of two integers and in most situations you'll be able to store numbers without any precision loss. When it comes to printing, you may want to display the result as a fraction. With the representation I proposed, that becomes trivial.
Of course that won't help much with irrational numbers. But you may want to structure your computations in a way that causes the least trouble (e.g. detecting situations like sqrt(3)^2).
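A minimal sketch of that rational-number idea (the frac shape and helper names are made up for illustration, and reduction via gcd is omitted):
// Represent a rational number as integer numerator/denominator
function frac(n, d) { return { n: n, d: d }; }

function addFrac(a, b) {
    return frac(a.n * b.d + b.n * a.d, a.d * b.d);
}

function fracToString(f) {
    return f.n + "/" + f.d;
}

var sum = addFrac(frac(1, 10), frac(2, 10));   // 0.1 + 0.2, exactly
fracToString(sum);                             // "30/100"
sum.n / sum.d === 0.3;                         // true - converted to float only at the end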
I had a nasty rounding error problem with mod 3. Sometimes when I should get 0 I would get .000...01. That's easy enough to handle, just test for <= .01. But then sometimes I would get 2.99999999999998. OUCH!
BigNumbers solved the problem, but introduced another, somewhat ironic, problem. When trying to load 8.5 into BigNumbers I was informed that it was really 8.4999… and had more than 15 significant digits. This meant BigNumbers could not accept it (I believe I mentioned this problem was somewhat ironic).
Simple solution to ironic problem:
x = Math.round(x*100);
// I only need 2 decimal places, if i needed 3 I would use 1,000, etc.
x = x / 100;
xB = new BigNumber(x);
You can use the decimal.js library (https://github.com/MikeMcl/decimal.js/); it helps a lot in getting a proper result.
JavaScript console output: 95 * 722228.630 / 100 = 686117.1984999999
decimal.js implementation:
var firstNumber = new Decimal(95);
var secondNumber = new Decimal(722228.630);
var thirdNumber = new Decimal(100);
var partialOutput = firstNumber.times(secondNumber);
console.log(partialOutput);
var output = new Decimal(partialOutput).div(thirdNumber);
alert(output.valueOf());
console.log(output.valueOf()); // 686117.1985
Avoid dealing with floating points during the operation by using integers
As stated in the most-voted answer so far, you can work with integers. That means multiplying all your factors by 10 for each decimal place you are working with, and dividing the result by the same number afterwards.
For example, if you are working with 2 decimals, you multiply all your factors by 100 before doing the operation, and then divide the result by 100.
Here's an example: Result1 is the usual result and Result2 uses the solution:
var Factor1="1110.7";
var Factor2="2220.2";
var Result1=Number(Factor1)+Number(Factor2);
var Result2=((Number(Factor1)*100)+(Number(Factor2)*100))/100;
var Result3=(Number(parseFloat(Number(Factor1))+parseFloat(Number(Factor2))).toPrecision(2));
document.write("Result1: "+Result1+"<br>Result2: "+Result2+"<br>Result3: "+Result3);
The third result shows what happens when using parseFloat with toPrecision instead, which created a conflict in our case.
I could not find a solution using the built-in Number.EPSILON that's meant to help with this kind of problem, so here is my solution:
function round(value, precision) {
const power = Math.pow(10, precision)
return Math.round((value*power)+(Number.EPSILON*power)) / power
}
This uses Number.EPSILON (the difference between 1 and the smallest floating point number greater than 1) to fix values that end up just one EPSILON below the rounding-up threshold.
Maximum precision is about 15 significant digits for 64-bit floating point (and 6 for 32-bit); JavaScript numbers are 64-bit doubles.
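For example, with the round function above:
round(0.1 + 0.2, 2);   // 0.3
round(1.005, 2);       // 1.01 (plain Math.round(1.005 * 100) / 100 gives 1 instead)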
Try my chiliadic arithmetic library, which you can see here.
If you want a later version, I can get you one.
Use Number(1.234443).toFixed(2); it will print 1.23
function test(){
var x = 0.1 * 0.2;
document.write(Number(x).toFixed(2));
}
test();
