Is there a definitive solution to javascript floating-point errors? - javascript

I write line-of-business applications. I'd like to build a front end using JavaScript and am trying to figure out how to deal with what, for a business user, are floating-point errors (I understand that from a computer science perspective they might not be considered errors). I've read plenty on this and seen all kinds of rounding hacks that work on the examples given but seem prone to break down unexpectedly. Is there a definitive way to do decimal math in JavaScript?

According to Douglas Crockford, the only way around this problem is to scale your values to integers. Make sure the result really is an integer by using Math.round on the scaled value (Crockford does not mention the rounding step, but I discovered it was necessary, e.g. Math.round(1.1 * 100)). Do your calculation(s). When you are done with the math, scale back to the original precision. See the "Floating Point" section of JavaScript: The Good Parts.
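Here is a minimal sketch of that scale-to-integer approach for two decimal places; the toCents/fromCents helpers are just illustrative names, not from any library:

// Work in integer cents; convert back only for display.
function toCents(amount) {
  // Math.round guards against artifacts like 1.1 * 100 === 110.00000000000001
  return Math.round(amount * 100);
}
function fromCents(cents) {
  return cents / 100;
}

var price = 1.1;
var quantity = 3;
var totalCents = toCents(price) * quantity;    // 110 * 3 = 330, exact integer math
console.log(fromCents(totalCents).toFixed(2)); // "3.30"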

One answer is to do the math in decimal instead of binary. Then you never have to worry about decimal <=> binary conversion errors. You'd represent the numbers as decimal digits in an array or a string and write the math routines yourself (a rough sketch of the idea follows the library links below).
Here are some bignumber libraries you can look into if you don't want to go to that trouble:
http://jsfromhell.com/classes/bignumber
http://stz-ida.de/html/oss/js_bigdecimal.html.en
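If you do want to roll your own, here is a rough sketch of adding two non-negative integer strings digit by digit; it is illustrative only, with no sign, decimal point, or validation handling:

// Adds two non-negative integer strings digit by digit, right to left.
function addDecimalStrings(a, b) {
  var result = '';
  var carry = 0;
  var i = a.length - 1;
  var j = b.length - 1;
  while (i >= 0 || j >= 0 || carry > 0) {
    var digitA = i >= 0 ? a.charCodeAt(i) - 48 : 0;  // '0' is char code 48
    var digitB = j >= 0 ? b.charCodeAt(j) - 48 : 0;
    var sum = digitA + digitB + carry;
    result = (sum % 10) + result;
    carry = Math.floor(sum / 10);
    i--;
    j--;
  }
  return result;
}

addDecimalStrings('999999999999999999', '1'); // "1000000000000000000"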

The only definitive solution seems to be writing your own arbitrary-precision number type that works on strings internally -- which will be complicated and horribly slow.

Related

Working with accurate currency values in Javascript

I'm working on a system that uses financial data. I'm getting subtle rounding errors due to the use of floating point numbers. I'm wondering if there's a better way to deal with this.
One of the issues is that I'm working with a mixture of different currencies, which might have up to 12 decimals, and large numbers for other currencies.
This means that the smallest number I need to represent is 0.000000000001 (1*10^-12) and the largest 100,000,000,000 (1*10^11).
Are there any recommended ways to work with numbers of this size and not lose precision?
If you're really trying to stay in the JS realm you might consider Decimal.js which should cover your precision range.
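For example, something along these lines with Decimal.js (an untested sketch; check the library's documentation for the exact configuration you need):

// Assumes decimal.js is available, e.g. via require or a script tag.
var Decimal = require('decimal.js');

// Plenty of significant digits to span 1e-12 through 1e11.
Decimal.set({ precision: 30 });

var small = new Decimal('0.000000000001');  // 1e-12
var large = new Decimal('100000000000');    // 1e11

console.log(large.plus(small).toString());               // "100000000000.000000000001"
console.log(new Decimal('0.1').plus('0.2').toString());  // "0.3", not 0.30000000000000004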
If I were writing this and needed to make sure there were no rounding errors, I would probably try to use a GMP extension for another language inside a microservice tasked only with the financial math. GMPY2 for Python 3 is probably a good bet for something quick and easy.

Can javascript be trusted when making calculations?

I am implementing an invoice system where everything is added dynamically to the DOM through JavaScript, and I am doing some calculations in the browser itself with JavaScript.
For example, I am calculating each invoice line from the quantity and the unit price, and generating a total sum.
The price can be a floating-point number,
but I am not sure whether this should be trusted or not. If someone has had the same thoughts about JavaScript, please comment :)
I don't know why, but JavaScript doesn't seem to me to be as trustworthy as other programming languages such as PHP. That is just my opinion, but if you can convince me otherwise, please do.
Thanks
Javascript uses the same data type that almost all languages use for floating point calculations. The double precision floating point data type is very common, because processors have built in support for it.
Floating point numbers have a limited precision, and most numbers with a fractional part can't be represented exactly. However, for what you are going to use it for, the precision is more than enough to show a correct result.
You should just be aware of the limited precision. When displaying the result, make sure that it's formatted (and rounded) to the precision that you want to show; otherwise the limited precision might show up as, for example, a price of 14.9500000000000001 instead of 14.95.
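For display, something like this is usually enough (the values below are made up):

var quantity = 3;
var unitPrice = 4.9833333;             // hypothetical unit price
var lineTotal = quantity * unitPrice;  // roughly 14.9499999, with floating-point noise
console.log(lineTotal.toFixed(2));     // "14.95", rounded for display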
According to the JavaScript specification, all numbers are 64-bit floating-point values (IEEE-754 double precision).
From this post, you have 3 solutions:
use some implementation of Decimal for JavaScript, such as BigDecimal.js
just choose a fixed number of digits to keep, like this: (Math.floor(y/x) * x).toFixed(2)
switch to pure integers, treating prices as numbers of cents (see the sketch just below this list). This could require big changes across the whole project
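A quick sketch of that third option in the invoice context; all the identifiers below are made up:

// Each line keeps its price in integer cents; only the final display divides by 100.
var lines = [
  { description: 'Widget', unitPriceCents: 1095, quantity: 3 },
  { description: 'Gadget', unitPriceCents: 249,  quantity: 2 }
];

var totalCents = lines.reduce(function (sum, line) {
  return sum + line.unitPriceCents * line.quantity;
}, 0);

console.log((totalCents / 100).toFixed(2)); // "37.83"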
Financial calculations usually require specific fixed rules about (for example) when and how to round (in which direction), etc.
That means you'll often maintain an internal sub-total precision until you move to a next section of your calculation (like adding the tax, as per rules set).
IEEE-754 floating point (as used in JavaScript) gives you exact integer accuracy up to 2^53 (if you think of it as an integer).
Now your 'job' is to pretend JavaScript doesn't support floating point and substitute it yourself in the simplest possible way: shrink your maximum integer range to obtain the required floating-point precision and see whether the resulting range suits your needs. If not, you might need an external high-precision math library (although most basic operations are pretty easy to implement).
First determine your desired internal precision (including an overflow digit for your expected rounding behaviour), for example 3 digits:
FLOOR((2^53) / (10^3)) = FLOOR(9,007,199,254,740,992 / 1,000) = 9,007,199,254,740
If this range is sufficient, then you need no other library: just multiply your input by 10^float_digits and maintain that internal precision per calculation section, rounding each step according to the rules your calculation requires (you'd still need to do that even when using a high-precision external math library).
For (visual) output, again apply proper rounding, then divide your remaining value by 10^(floatDigits - roundingDigits) and pass it through Number.prototype.toFixed() (which just pads zeros when required).
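A small sketch of that scaled-integer idea with 3 internal digits (the tax rate and amounts are invented for illustration):

// Scale = 10^3 to keep 3 internal float digits as integers.
var SCALE = 1000;

var net = Math.round(19.99 * SCALE);      // 19990
var taxRate = 0.21;                       // hypothetical VAT rate
var tax = Math.round(net * taxRate);      // 4198, rounded at this calculation step
var gross = net + tax;                    // 24188

// Divide back out and let toFixed round/pad to 2 display digits.
console.log((gross / SCALE).toFixed(2));  // "24.19"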
As to your other question regarding the trustworthiness of JavaScript versus other programming languages: one can even boot, run and use Linux in JavaScript inside the browser: http://bellard.org/jslinux/
Let that sink in for a moment...
Now what if I told you this even works in IE6... Pretty humbling. Even servers can run on javascript (node.js)..
Hope this helps (it didn't fit in a comment).
Other answers have addressed issues that JavaScript has with using floating point numbers to represent money.
There's a separate issue with using JavaScript for calculations involving financial transactions that comes to mind.
Because the code is executed in a browser on the client machine, you can only trust the result to the extent that you can trust the client.
Therefore you should really only rely on JavaScript to calculate something that you could take for granted if the client told you.
For instance, if you were writing an e-commerce site, you could trust code that told you what the client wanted to buy and what the client's shipping address was, but you would need to calculate the price of the goods yourself to prevent the client from telling you a lower price.
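As a rough illustration of that principle (the endpoint, field names and price table below are all invented):

// Node/Express sketch: never trust a client-supplied total; recompute it server-side.
var express = require('express');
var app = express();
app.use(express.json());

// Hypothetical server-side price list keyed by product id.
var PRICES_IN_CENTS = { widget: 1095, gadget: 249 };

app.post('/orders', function (req, res) {
  var totalCents = 0;
  for (var i = 0; i < req.body.items.length; i++) {
    var item = req.body.items[i];
    var unitPrice = PRICES_IN_CENTS[item.productId];
    if (unitPrice === undefined) {
      return res.status(400).send('Unknown product: ' + item.productId);
    }
    totalCents += unitPrice * item.quantity;
  }
  // Ignore any total the client may have sent; only the recomputed value counts.
  res.json({ totalCents: totalCents });
});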
It's entirely possible that the invoicing system you're working on will only be used internally to your organisation.
If this is the case, you can disregard this entire answer.
But if your application is going to be used by customers to access and manipulate their invoices and orders, then this is something you'd have to consider.

JavaScript and Dealing with Floating Point Determinism

I'm looking to build a browser multiplayer game using rollback netcode that runs a deterministic simulation on the clients. I prototyped the netcode in Flash already before I ran into the floating point roadblock.
Basically, from what I understand, integer math in Flash is done by casting ints to Numbers, doing the math, then casting back to int. It's faster apparently, but it means that there's no chance of deterministic math across different computer architectures.
Before I dump all my eggs into the JavaScript basket then, I'd like to ask a few questions.
Is there true integer arithmetic on all major browsers in JavaScript? Or do some browsers do the Flash thing and cast to floats/doubles to do the math before casting back to int?
Does something like BigDecimal or BigNum work for deterministic math across different computer architectures? I don't mind some performance loss as long as it's within reason. If not, is there some JavaScript fixed point library out there that solves my problem?
This is a long shot, but is there a HTML5 2D game engine that has deterministic math for stuff like x/y positions and collisions? The list of game engines is overwhelming to say the least. I'm uneasy about building a deterministic cross browser compatible engine from scratch, but that might be what I have to do.
NOTE: Edited from HTML5 to JS as per responses. Apologies for my lack of knowledge.
This is a Javascript issue - not an HTML5 one.
All Javascript math is done using IEEE754 floating point double values - there are no "ints".
Although IEEE754 requires (AFAIK) a specific answer for each operation for any given input, you should be aware that JS interpreters are potentially free to optimise expressions, loops, etc, such that the floating point operations don't actually execute in the order you expect.
Over the course of a program this may result in different answers being produced on different browsers.
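As an aside (not from the answer above, just a common observation): arithmetic on whole numbers is exact as long as the values stay below 2^53, and bitwise operators always coerce their operands to 32-bit integers, so you can force deterministic integer semantics yourself:

// Integer arithmetic on doubles is exact below 2^53 (Number.MAX_SAFE_INTEGER).
var a = 1073741824;        // 2^30
var b = 3;
console.log(a * b);        // 3221225472, exact

// Bitwise operators work on 32-bit ints, so | 0 truncates deterministically.
console.log((a * b) | 0);  // -1073741824: the product wrapped to a signed 32-bit value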

Why am I seeing inexact floating-point results in ECMAScript / ActionScript 3?

Hey all, let's jump straight to a code sample to show how ECMAScript/JavaScript/AS3 can't do simple math right (AS3 uses an 'IEEE-754 double-precision floating-point number' for the Number class, which is supposedly identical to the one used in JavaScript)...
trace(1.1); //'1.1': Ok, fine, looks good.
trace(1.1*100); //'110.00000000000001': What!?
trace((1.1*100)/100); //'1.1': Brings it back to 1.1 (since we're apparently multiplying by *approximately* 100 and then dividing by the same *approximate* '100' amount)
trace(1.1*100-110); //'1.4210854715202004e-14': Proof that according to AS3, 1.1*100!=110 (i.e. this isn't just a bug in Number.toString())
trace(1.1*100==110); //'false': Even further proof that according to AS3, 1.1*100!=110
What gives?
Welcome to the wonderful world of floating point calculation accuracy. In general, floating point calculations will give you results that are very very nearly correct, but comparing outputs for absolute equality is unlikely to give you results you expect without the use of rounding functions.
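For example, in JavaScript terms, instead of comparing for exact equality you can compare within a small tolerance, or round both sides first (a sketch; pick an epsilon that suits your data):

// Compare within a tolerance rather than with ==.
function nearlyEqual(a, b, epsilon) {
  return Math.abs(a - b) < (epsilon || 1e-9);
}

console.log(1.1 * 100 == 110);             // false
console.log(nearlyEqual(1.1 * 100, 110));  // true

// Or round to the precision you actually care about before comparing.
console.log((1.1 * 100).toFixed(2) === (110).toFixed(2)); // true: "110.00" === "110.00"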
This is just a side effect of using floating point numbers: they are binary approximations of decimal values, so there will always be some error in the representation.
Long explanation
Floating point inconsistencies are a known problem in many languages. This is because binary floating-point hardware cannot represent most decimal fractions exactly.
Have fun
As moonshadow states, you're running into issues with floating point precision. Floating point numbers aren't suited to the task of representing and performing arithmetic upon decimal values in the manner that you would expect. These kinds of problems are seen most often when people try to use floating point numbers for financial calculations. The Wikipedia entry is good, but you might get more out of this page, which steps through an error-prone financial calculation: http://c2.com/cgi/wiki?FloatingPointCurrency
To accurately deal with decimal numbers you need a decimal library. I've outlined two BigDecimal-style libraries written in javascript that may suit your needs in another SO post, hopefully you'll find them useful:
https://stackoverflow.com/questions/744099/javascript-bigdecimal-library/1575569

JavaScript 64 bit numeric precision

Is there a way to represent a number with higher than 53-bit precision in JavaScript? In other words, is there a way to represent 64-bit precision number?
I am trying to implement some logic in which each bit of a 64-bit number represents something. I lose the lower significant bits when I try to set bits higher than 2^53.
Math.pow(2,53) + Math.pow(2,0) == Math.pow(2,53)
Is there a way to implement a custom library or something to achieve this?
Google's Closure library has goog.math.Long for this purpose.
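Roughly like this, assuming the Closure Library is loaded (I haven't run this exact snippet):

goog.require('goog.math.Long');

// 2^53 + 1 is representable as a 64-bit integer even though it isn't as a double.
var big = goog.math.Long.fromString('9007199254740993');
var one = goog.math.Long.fromInt(1);
console.log(big.add(one).toString());  // "9007199254740994"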
The GWT team have added long emulation support, so Java longs really do hold 64 bits. Do you want 64-bit floats or whole numbers?
I'd just use either an array of integers or a string.
Numbers in JavaScript are doubles, so there is a rounding error involved in your equation: 2^53 + 1 cannot be represented exactly and rounds back down to 2^53.
Perhaps I should have added some technical detail. Basically the GWT long emulation uses a tuple of two numbers, the first holding the high 32 bits and the second the low 32 bits of the 64 bit long.
The library of course contains methods for things like adding two "longs" and getting a "long" result. Within your GWT Java code it just looks like a regular long - one doesn't need to fiddle with or be aware of the tuple. By using this approach GWT avoids the problem you're probably alluding to, namely "longs" dropping the lower bits of precision, which isn't acceptable in many cases.
Whilst floats are by definition imprecise approximations of a value, a whole number like a long isn't. GWT always holds a full 64-bit long - maths using such longs never loses precision. The exception to this is overflow, but that accurately matches what occurs in Java etc. when you add two very large long values that require more than 64 bits, e.g. (2^63 - 1) + (2^63 - 1).
To do the same for floating point numbers would require a similar approach: you will need a library that uses a tuple.
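A bare-bones sketch of that two-halves idea, unsigned addition only, just to show the shape; a real library like the GWT emulation also handles sign, multiplication and so on:

// Represent a 64-bit unsigned value as { high, low }, each an unsigned 32-bit number.
function makeUint64(high, low) {
  return { high: high >>> 0, low: low >>> 0 };
}

function addUint64(a, b) {
  var lowSum = a.low + b.low;                         // at most ~2^33, still exact in a double
  var low = lowSum % 0x100000000;                     // low 32 bits of the sum
  var carry = lowSum >= 0x100000000 ? 1 : 0;          // did the low half overflow?
  var high = (a.high + b.high + carry) % 0x100000000; // high 32 bits, wrapping on overflow
  return makeUint64(high, low);
}

// (2^32 - 1) + 1 carries into the high word: result is { high: 1, low: 0 }.
addUint64(makeUint64(0, 0xFFFFFFFF), makeUint64(0, 1));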
The following library might work for you; I haven't tested it yet, though:
BigDecimal for JavaScript
Yes, 11 bits are reserved for the exponent; only 52 bits contain the value, also called the fraction (or mantissa).
JavaScript allows bitwise operations on numbers, but only the first 32 bits are used in those operations, according to the JavaScript standard specification.
I do not understand the misleading GWT/Java/long answers to a JavaScript/double question, though. JavaScript is not Java.
Why would anyone need 64-bit precision in JavaScript?
Longs sometimes hold the ID of something in a DB, so it's important not to lose any of the lower bits... but floating point numbers are most of the time used for calculations. Using floats to hold monetary or similarly exact values is plain wrong. If you truly need 64-bit precision, do the maths on the server, where it's faster and so on.
