JavaScript: Decimal Values

What can I use as a decimal type in JavaScript? It's not supported (0.1 + 0.2 !== 0.3), and I need it for representing exact values in a banking/financial application. See The State and Future of JavaScript for a good read and the dirty details behind JavaScript and its (lack of) support for decimal arithmetic.
By "decimal", I mean either:
infinite-range and arbitrary-precision (like BigDecimal in Java), or
limited range and precision, but suitable for financial calculations (like decimal in C#).
So, what library or solution exists for working with decimal values? Thanks!

It is often recommended1 to handle money as an integer representing the number of cents: 2572 cents instead of 25.72 dollars. This is to avoid the problems with floating-point arithmetic that you mention. Fortunately, integer arithmetic in floating point is exact (up to 2^53), so decimal representation errors can be avoided by scaling.
1Douglas Crockford: JavaScript: The Good Parts: Appendix A - Awful Parts (page 105).
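A minimal sketch of the integer-cents approach (variable names are illustrative):
// Store money as an integer number of cents; convert to dollars only for display.
const itemA = 1999;                     // $19.99
const itemB = 573;                      // $5.73
const total = itemA + itemB;            // exact: 2572
console.log((total / 100).toFixed(2));  // "25.72"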

JavaScript does have floating-point support, but for financial records the simplest implementation is simply to store your values as standard integers. You may either declare one integer to represent the amount in cents, or two integers, one for dollars and one for cents.
So $18.57 would become 1857 cents in the first technique, or 18 dollars and 57 cents in the second.
This has the added advantage of being completely accurate: integers (up to 2^53) have a unique, exact binary representation, so there are no rounding errors.
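A quick sketch of converting between the two representations (names are illustrative):
// Dollars-and-cents pair vs. a single cents integer
const amount = { dollars: 18, cents: 57 };
const totalCents = amount.dollars * 100 + amount.cents;   // 1857
console.log(totalCents);                                   // 1857
console.log(`$${Math.floor(totalCents / 100)}.${String(totalCents % 100).padStart(2, '0')}`); // "$18.57"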

Take a look at BigNumber and that post too.

It seems the following library implements decimal in js (node and browser):
https://npmjs.org/package/jsdecimal

I think using integers is fine until you have to do calculations with them that involve percentages etc.; then you're back to floats.
I'd personally just use decimals, with a library specifically designed to handle them, like
https://github.com/MikeMcl/decimal.js
Then you can represent your data and operate on it in a clear way, with your expectations met as to how it's represented, and without messing around converting integers back to decimals for display purposes etc.
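A minimal sketch with decimal.js (the tax rate is made up; the methods shown are standard decimal.js calls):
const Decimal = require('decimal.js');

const price = new Decimal('25.72');
const rate = new Decimal('0.0825');                        // illustrative tax percentage
const total = price.plus(price.times(rate)).toDecimalPlaces(2);
console.log(total.toString());                             // "27.84"
console.log(new Decimal('0.1').plus('0.2').eq('0.3'));     // true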

Related

What's the maximum precision (after the decimal point) of a float in Javascript

An algorithm I'm using needs to squeeze as many levels of precision as possible from a float number in Javascript. I don't mind whether the precision comes from a number that is very large or with a lot of numbers after the decimal point, I just literally need as many numerals in it as possible.
(If you care why, it is for a drag n' drop ranking algorithm which has to deal with a lot of halvings before rebalancing itself. I do also know there are better string-based algorithms but the numerical approach suits my purposes)
The MDN Docs say that:
The JavaScript Number type is a double-precision 64-bit binary format IEEE 754 value, like double in Java or C#. This means it can represent fractional values, but there are some limits to what it can store. A Number only keeps about 17 decimal places of precision; arithmetic is subject to rounding.
How should I best use the "17 decimal places of precision"?
Does the "17 decimal places" mean "17 numerals in total, inclusive of those before and after the decimal point"?
e.g. (adding underscores to represent thousand-separators for readability)
# 17 numerals: safe
111_222_333_444_555_66
# 17 numerals + decimal point: safe
111_222_333_444_555_6.6
1.11_222_333_444_555_66
# 18 numerals: unsafe
111_222_333_444_555_666
# 18 numerals + decimal point: unsafe
1.11_222_333_444_555_666
111_222_333_444_555_66.6
I assume that the precision of the number determines the number of numerals that you can use and that the position of the decimal point in those numerals is effectively academic.
Am I thinking about the problem correctly?
Does the presence of the decimal point have any bearing on the calculation, or is it simply a matter of the number of numerals present?
Should I assume that 17 numerals is safe / 18 is unsafe?
Does this vary by browser (not just today but over say, a 10 year window, should one assume that browser precision may increase)?
Short answer: you can probably squeeze out 15 "safe" digits, and it doesn't matter where you place your decimal point.
It's anyone's guess how the JavaScript standard is going to evolve and use other number representations.
Notice how the MDN doc says "about 17 decimals"? That's because sometimes you can represent that many digits and sometimes fewer, since the floating-point representation doesn't map one-to-one onto our decimal system.
Even numbers with seemingly fewer digits can give rounding errors.
For example
0.1 + 0.2 => 0.30000000000000004
console.log(0.1 + 0.2);
However, in this case there is plenty of margin in the precision, so you can simply ask for the precision you want in order to get rid of the rounding error:
console.log((0.1 + 0.2).toPrecision(1));
For a larger illustration of this, consider the following snippet:
for (let i = 0; i < 22; i++) {
  console.log(Number.MAX_SAFE_INTEGER / (10 ** i));
}
You will see a lot of rounding errors on the 17th digit, and in some cases even the 16th digit shows a rounding error. If you look here
https://en.wikipedia.org/wiki/IEEE_754
it states that binary64 has about 15.95 significant decimal digits. That's why I'd guess that 15 digits is the maximum precision you will reliably get out of this.
You'd have to do your operations, and before you save back the number to any representational form, you'd have to do .toPrecision(15).
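For example:
console.log((0.1 + 0.2).toPrecision(15));          // "0.300000000000000"
console.log(Number((0.1 + 0.2).toPrecision(15)));  // 0.3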
Finally, this page has some good explanations: https://floating-point-gui.de/formats/fp/
BTW, I got curious by reading this question so I read up as I wrote this answer. There are many people with better knowledge of this than me.
Does the presence of the decimal point have any bearing on the calculation, or is it simply a matter of the number of numerals present?
Kinda. To answer that, you'll need to look into how 64bit "double precision" floating point numbers are represented in memory. The "number of numerals" roughly translates into "length of the mantissa", which is indeed fixed and independent from the position of the point. However: it's binary digits and a binary point, not decimal digits and the decimal point. They do not correspond to each other directly. And then there's stuff like subnormal numbers.
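To peek at those binary digits directly (a quick illustration, not part of the original answer):
console.log((0.1).toString(2));                           // the finite binary expansion actually stored for 0.1
console.log(Number.MAX_SAFE_INTEGER.toString(2).length);  // 53 — the mantissa width in bits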
Should I assume that 17 numerals is safe / 18 is unsafe?
No. In fact, only 15 decimal numerals would be "safe" if that's the representation you're starting with and want to exactly represent as a double.
Does this vary by browser (not just today but over say, a 10 year window, should one assume that browser precision may increase)?
No, it doesn't vary. The JavaScript number type is specified as a 64-bit double, and that is not going to change.
Am I thinking about the problem correctly?
No.
You say you're considering this in the context of a drag'n'drop ranking algorithm, and you don't want to do this string-based. However, thinking about decimal places in numbers is essentially thinking about the string representation of numbers. Don't do that - either go all the way to strings, or treat numbers as binary.
Since you also mention "rebalancing", I assume you want to use numbers to encode the position of each item in a binary tree. That's a reasonable approach, but you really need to consider the binary representation of the number for that. And you really should use integers there, not floating-point numbers, as the logic would be much more complex otherwise. Start by deciding how many bits you want to use. There are some limitations for each, so choose wisely:
31/32 bit are what JS bitwise operators for numbers work on. Supported by all browsers easily.
53 bit are the range of integers you can exactly represent with floating-point numbers. Integer arithmetic will work as expected up to that size. Bitwise operations require extra code.
Fixed multiples of 8 (say, 64 bit) are what you can represent with typed arrays. Bitwise operations can be done part-wise, arithmetic operations require extra code. Or use a BigUint64Array that gives you 64 bits as a bigint to calculate with/operate on, but is not supported in old browsers.
Arbitrary precision can be achieved with bigint numbers, which support both bitwise and arithmetic operations, but again don't work in old browsers. Polyfills and bigint libraries are available though.
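As a rough sketch of the bigint option (the path-encoding scheme here is hypothetical, just to show the bitwise operations):
// Pack a path in a binary tree into a bigint: 1 bit per level (0 = left, 1 = right).
let path = 1n;                      // sentinel bit marking the root
path = (path << 1n) | 0n;           // go left
path = (path << 1n) | 1n;           // then right
console.log(path.toString(2));      // "101"
console.log((path >> 1n) & 1n);     // 0n — the bit for the first step after the root ("left")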

Javascript: string representation of numbers

How does javascript convert numbers to strings? I expect it to round the number to some precision but it doesn't look like this is the case. I did the following tests:
> 0.1 + 0.2
0.30000000000000004
> (0.1 + 0.2).toFixed(20)
'0.30000000000000004441'
> 0.2
0.2
> (0.2).toFixed(20)
'0.20000000000000001110'
This is the behavior in Safari 6.1.1, Firefox 25.0.1 and node.js 0.10.21.
It looks like javascript displays the 17th digit after the decimal point for (0.1 + 0.2) but hides it for 0.2 (and so the number is rounded to 0.2).
How exactly does number to string conversion work in javascript?
From the question's author:
I found the answer in the ECMA script specification: http://www.ecma-international.org/ecma-262/5.1/#sec-9.8.1
When printing a number, javascript calls toString(). The specification of toString() explains how javascript decides what to print. This note below
The least significant digit of s is not always uniquely determined by the requirements listed in step 5.
as well as the one here: http://www.ecma-international.org/ecma-262/5.1/#sec-15.7.4.5
The output of toFixed may be more precise than toString for some values because toString only prints enough significant digits to distinguish the number from adjacent number values.
explain the basic idea behind the behavior of toString().
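To see the difference with the question's values:
console.log((0.2).toString());       // "0.2" — the shortest string that round-trips to the same double
console.log((0.2).toPrecision(21));  // "0.200000000000000011102" — more digits of the stored value
console.log(Number('0.2') === 0.2);  // true — "0.2" parses back to exactly the same double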
This isn't about how JavaScript works, but about how floating-point arithmetic works in general. Computers work in binary, but people mostly work in base 10, and most decimal fractions have no exact binary representation. How large the resulting imprecision is depends on the hardware and (sometimes) software in question, but the key point is that you should expect small rounding errors rather than exact decimal results.
JavaScript doesn't have a rule like "display so many numbers after the decimal point for certain numbers but not for others." Instead, the computer is giving you its best estimate of the number requested. 0.2 is not something that can be represented exactly in binary, so if you ask for more digits than it would print by default, you see the rounding error (the 1110 at the end, in this case).
This is actually the same question as this old one. From the excellent community wiki answer there:
All floating point math is like this and is based on the IEEE 754 standard. JavaScript uses 64-bit floating point representation, which is the same as Java's double.

Make JavaScript Math.sqrt() print more digits

If I write document.write(Math.sqrt(2)) on my HTML page, I get 1.4142135623730951.
Is there any way to make the method output more than 16 decimal places?
No, there is not. Mathematical operations in Javascript are performed using 64-bit floating point values, and 16 digits of precision is right around the limit of what they can represent accurately.
To get more digits of the result, you will need to use an arbitrary-precision math library. The one I was able to find offhand (Big.js) only supports addition, subtraction, and comparisons, though libraries such as decimal.js (mentioned above) do support square roots.
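A minimal sketch with decimal.js (assuming the package is installed):
const Decimal = require('decimal.js');
Decimal.set({ precision: 50 });                 // 50 significant digits
console.log(new Decimal(2).sqrt().toString());
// "1.4142135623730950488016887242096980785696718753769"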
You can use toPrecision. However, ECMA requires a precision of only up to 21 significant digits:
console.log(Math.sqrt(2).toPrecision(21))
But keep in mind that the precision of real numbers on a computer has limits (see duskwuff's answer).
See also:
toFixed
toExponential

Why am I seeing inexact floating-point results in ECMAScript / ActionScript 3?

Hey all, let's jump straight to a code sample to show how ECMAScript/JavaScript/AS3 can't do simple math right (AS3 uses an 'IEEE-754 double-precision floating-point number' for the Number class, which is supposedly identical to that used in JavaScript)...
trace(1.1); //'1.1': Ok, fine, looks good.
trace(1.1*100); //'110.00000000000001': What!?
trace((1.1*100)/100); //'1.1': Brings it back to 1.1 (since we're apparently multiplying by *approximately* 100 and then dividing by the same *approximate* '100' amount)
trace(1.1*100-110); //'1.4210854715202004e-14': Proof that according to AS3, 1.1*100!=110 (i.e. this isn't just a bug in Number.toString())
trace(1.1*100==110); //'false': Even further proof that according to AS3, 1.1*100!=110
What gives?
Welcome to the wonderful world of floating point calculation accuracy. In general, floating point calculations will give you results that are very very nearly correct, but comparing outputs for absolute equality is unlikely to give you results you expect without the use of rounding functions.
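One common approach is to compare with a tolerance instead of exact equality; a small sketch (the helper name is just illustrative):
// Compare floating-point results within a tolerance instead of ==
function nearlyEqual(a, b, epsilon = 1e-9) {
  return Math.abs(a - b) < epsilon;
}
console.log(1.1 * 100 === 110);            // false
console.log(nearlyEqual(1.1 * 100, 110));  // true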
This is just a side effect of using floating-point numbers: they are binary approximations of decimal numbers, so there will always be some error.
Long explanation
Floating-point inconsistencies are a known problem in many languages. This is because binary floating point cannot represent most decimal fractions exactly.
Have fun
As moonshadow states, you're running into issues with floating point precision. Floating point numbers aren't suited to the task of representing and performing arithmetic upon decimal values in the manner that you would expect. These kinds of problems are seen most often when people try to use floating point numbers for financial calculations. The wikipedia entry is good, but you might get more out of this page, which steps through an error-prone financial calculation: http://c2.com/cgi/wiki?FloatingPointCurrency
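The classic illustration of how these errors creep into money-style sums:
let sum = 0;
for (let i = 0; i < 10; i++) {
  sum += 0.1;                 // ten 10-cent increments
}
console.log(sum);             // 0.9999999999999999
console.log(sum === 1);       // false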
To accurately deal with decimal numbers you need a decimal library. I've outlined two BigDecimal-style libraries written in javascript that may suit your needs in another SO post, hopefully you'll find them useful:
https://stackoverflow.com/questions/744099/javascript-bigdecimal-library/1575569

JavaScript 64 bit numeric precision

Is there a way to represent a number with higher than 53-bit precision in JavaScript? In other words, is there a way to represent 64-bit precision number?
I am trying to implement some logic in which each bit of a 64-bit number represents something. I lose the lower significant bits when I try to set bits higher than 2^53.
Math.pow(2,53) + Math.pow(2,0) == Math.pow(2,53)
Is there a way to implement a custom library or something to achieve this?
Google's Closure library has goog.math.Long for this purpose.
The GWT team has added long emulation support so Java longs really hold 64 bits. Do you want 64-bit floats or whole numbers?
I'd just use either an array of integers or a string.
The numbers in JavaScript are doubles; I think there is a rounding error involved in your equation.
Perhaps I should have added some technical detail. Basically the GWT long emulation uses a tuple of two numbers, the first holding the high 32 bits and the second the low 32 bits of the 64 bit long.
The library of course contains methods for things like adding two "longs" and getting a "long" result. Within your GWT Java code it just looks like two regular longs - one doesn't need to fiddle with or be aware of the tuple. By using this approach GWT avoids the problem you're probably alluding to, namely "longs" dropping the lower bits of precision, which isn't acceptable in many cases.
Whilst floats are by definition imprecise approximations of a value, a whole number like a long isn't. GWT always holds a full 64-bit long - maths using such longs never loses precision. The exception is overflow, but that accurately matches what occurs in Java etc. when you add two very large long values that require more than 64 bits - e.g. 2^63-1 + 2^63-1.
To do the same for floating-point numbers would require a similar approach: you would need a library that uses a tuple.
The following code might work for you; I haven't tested it yet, however: BigDecimal for JavaScript
Yes: 11 bits are reserved for the exponent and 1 for the sign; only 52 bits contain the value, also called the fraction (or mantissa).
JavaScript allows bitwise operations on numbers, but only the first 32 bits are used in those operations, according to the JavaScript standard specification.
I do not understand the misleading GWT/Java long answers to a JavaScript/double question, though - JavaScript is not Java.
Why would anyone need 64-bit precision in JavaScript?
Longs sometimes hold the ID of something in a DB, so it's important not to lose any of the lower bits... but floating-point numbers are mostly used for calculations. Using floats to hold monetary or similarly exact values is plain wrong. If you truly need 64-bit precision, do the maths on the server, where it's faster, and so on.
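For completeness, a modern sketch using bigint, which postdates these answers (the flag names are illustrative):
// Each bit of a 64-bit value can be set and tested without losing precision.
const BIT_53 = 1n << 53n;
const BIT_63 = 1n << 63n;
let flags = 0n;
flags |= BIT_53 | BIT_63;
console.log((flags & BIT_53) !== 0n);   // true — bit 53 survives, unlike with Number
console.log(flags.toString(2).length);  // 64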
