Hey all, let's jump straight to a code sample to show how ECMAScript/JavaScript/AS3 can't do simple math right (AS3 uses an 'IEEE-754 double-precision floating-point number' for the Number class, which is supposedly identical to the one used in JavaScript)...
trace(1.1); //'1.1': Ok, fine, looks good.
trace(1.1*100); //'110.00000000000001': What!?
trace((1.1*100)/100); //'1.1': Brings it back to 1.1 (since we're apparently multiplying by *approximately* 100 and then dividing by the same *approximate* '100' amount)
trace(1.1*100-110); //'1.4210854715202004e-14': Proof that according to AS3, 1.1*100!=110 (i.e. this isn't just a bug in Number.toString())
trace(1.1*100==110); //'false': Even further proof that according to AS3, 1.1*100!=110
What gives?
Welcome to the wonderful world of floating point calculation accuracy. In general, floating point calculations give you results that are very nearly correct, but comparing outputs for exact equality is unlikely to behave the way you expect unless you round first.
This is just a side effect of using floating point numbers: they are binary representations of decimal numbers, so there will always be some approximation.
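If you do have to compare computed results, a common workaround is an epsilon comparison; here's a minimal JavaScript sketch (the tolerance value is an assumption you'd tune for your magnitudes):

function nearlyEqual(a, b, epsilon = 1e-9) { // epsilon is an arbitrary choice
  return Math.abs(a - b) < epsilon;
}
console.log(1.1 * 100 === 110);           // false
console.log(nearlyEqual(1.1 * 100, 110)); // true: off by only ~1.4e-14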
Long explanation
Floating point inconsistencies are a known problem in many languages. The root cause is that hardware floating point is binary, and most decimal fractions (0.1, 1.1, and so on) have no exact binary representation.
Have fun
As moonshadow states, you're running into issues with floating point precision. Floating point numbers aren't suited to representing and performing arithmetic upon decimal values in the manner that you would expect. These kinds of problems are seen most often when people try to use floating point numbers for financial calculations. The Wikipedia entry is good, but you might get more out of this page, which steps through an error-prone financial calculation: http://c2.com/cgi/wiki?FloatingPointCurrency
To deal with decimal numbers accurately you need a decimal library. I've outlined two BigDecimal-style libraries written in JavaScript that may suit your needs in another SO post; hopefully you'll find them useful:
https://stackoverflow.com/questions/744099/javascript-bigdecimal-library/1575569
Related
I'm working on a system that uses financial data. I'm getting subtle rounding errors due to the use of floating point numbers. I'm wondering if there's a better way to deal with this.
One of the issues is that I'm working with a mixture of different currencies: some might have up to 12 decimal places, while others involve very large amounts.
This means that the smallest number I need to represent is 0.000000000001 (1e-12) and the largest 100,000,000,000 (1e11).
Are there any recommended ways to work with numbers of this size and not lose precision?
If you're really trying to stay in the JS realm you might consider Decimal.js, which should cover your precision range.
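For example, a minimal sketch with decimal.js (the require call assumes Node; the precision setting of 30 significant digits is an arbitrary but ample choice for your range):

const Decimal = require('decimal.js'); // https://github.com/MikeMcl/decimal.js
Decimal.set({ precision: 30 });        // 30 significant digits: an assumption, tune as needed

// Construct from strings so the values never pass through binary doubles
const smallest = new Decimal('0.000000000001'); // 1e-12
const largest  = new Decimal('100000000000');   // 1e11

console.log(largest.times(smallest).toString()); // '0.1', exact
console.log(largest.plus(smallest).toString());  // '100000000000.000000000001'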
If I were writing this and needed to make sure there were no rounding errors, I would likely use a GMP binding in another language inside a microservice tasked only with the financial math. GMPY2 for Python 3 is probably a good bet for something quick and easy.
I read this on W3Schools:
All numbers in JavaScript are stored as 64-bit (8-bytes) base 10, floating point numbers.
This sounds quite strange. Now, it's either wrong or there should be a good reason not to use base 2 like the IEEE standard.
I tried to find a real JavaScript definition, but I couldn't find one. In neither the V8 nor the WebKit documentation (the two JavaScript implementations on Wikipedia that sounded most familiar to me) could I find how the JavaScript Number type is stored.
So, does JavaScript use base 10? If so, why? The only reason I could come up with is that base 10 might have an advantage when you want to store both integers and floating point numbers accurately, but I don't see how that would actually help.
That's not the World Wide Web Consortium (W3C); that's W3Schools, a website that is not an authority on any web standard.
Numbers in JavaScript are double precision floating point numbers, following the IEEE 754 standard.
The site got the part about every number being a 64-bit floating point number right. But base 10 has nothing to do with the internal representation; that probably comes from the fact that floating point numbers are always parsed and formatted in base 10.
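You can watch the binary value leak through the base-10 formatting by asking for more digits than the default display shows:

console.log((0.1).toString());  // '0.1' - the shortest string that round-trips
console.log((0.1).toFixed(20)); // '0.10000000000000000555' - the underlying binary value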
Numbers in JavaScript are, according to the ECMA-262 Standard (ECMAScript 5.1) section 4.3.19:
Primitive values corresponding to a double-precision 64-bit binary format IEEE 754 value.
Thus, any implementation using base 10 floating point numbers is not ECMA-262 conformant.
JavaScript, like most modern languages, uses IEEE 754, which is not stored in base 10 at all.
The peculiarity of JavaScript is that there is only one number type, the double precision float. A side effect is that you're somewhat more limited than in other languages when you want to deal with integers: you can only store integers exactly up to 2^53 in magnitude (the 52-bit fraction plus the implicit leading bit).
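You can check that limit in any console (Number.MAX_SAFE_INTEGER and Number.isSafeInteger were added in ES2015):

console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991, i.e. 2^53 - 1
console.log(Math.pow(2, 53) === Math.pow(2, 53) + 1); // true: integers collide past 2^53
console.log(Number.isSafeInteger(Math.pow(2, 53)));   // false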
Possible Duplicate:
Is JavaScript's Math broken?
Any ideas as to why:
(872.23 + 2315.66 + 4361.16) == 7549.05
returns false in a JavaScript console (e.g. the Chrome Developer console)?
If I try it with my calculator, the left side comes out to exactly 7549.05. JavaScript, however, displays it as 7549.049999999999. I could "fix" it or round it, or... but WHY should I be doing that for simple additions?
Thanks
Marco Mariani answered a similar question a short time ago:
What Every Computer Scientist Should Know About Floating Point Arithmetic
and the shorter, more to the point:
http://floating-point-gui.de/
That is not JavaScript adding extra precision; that is your computer's floating-point representation being unable to represent your number exactly. Get around it by using rational numbers or fixed-point (instead of floating-point) arithmetic.
By writing decimal values, you are using floating point numbers. You would have to know a bit about how floating point is represented in binary form, and how binary floating point addition works, to understand why adding floating point numbers does not always give what you want.
Here is a quick google result that you might want to glance at: The Complete Javascript Number Reference
Also, if you want to learn how floating point is represented in binary, look at IEEE floating point on Wikipedia.
Your best bet in this case would be to round.
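For the numbers in the question, rounding for display looks like this (a sketch, assuming two decimal places is what you want):

const sum = 872.23 + 2315.66 + 4361.16;
console.log(sum);                                // 7549.049999999999
console.log(sum.toFixed(2));                     // '7549.05' (note: a string)
console.log(Number(sum.toFixed(2)) === 7549.05); // true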
This is because of how floats are represented in hardware (64 bits in JavaScript's case, but the issue exists at any width). Basically, you can't represent a value like 7549.05 exactly (more on this issue on Wikipedia).
So, for practical purposes, if the numbers are currency, a good option is multiplying by 100 so they are always integers, and operating on ints (which gives exact results when adding, subtracting, or multiplying).
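A minimal sketch of that multiply-by-100 idea (assuming a two-decimal currency; the Math.round guards against the scaling itself being off by a hair):

function toCents(amount) { return Math.round(amount * 100); } // amount * 100 may not be a whole number
function fromCents(cents) { return cents / 100; }

const cents = toCents(872.23) + toCents(2315.66) + toCents(4361.16); // exact integer math
console.log(cents);                        // 754905
console.log(fromCents(cents) === 7549.05); // true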
I write line-of-business applications. I'd like to build a front end using JavaScript and am trying to figure out how to deal with what are, for a business user, floating point errors (I understand that from a computer science perspective they might not be considered errors). I've read plenty on this and seen all kinds of rounding hacks that work on the examples given but seem prone to break down unexpectedly. Is there a definitive way to do decimal math in JavaScript?
According to Douglas Crockford, the only way around this problem is to scale your values to integers. Make sure each scaled value really is an integer by using Math.round on it (Crockford does not mention the rounding step, but I discovered it was necessary, e.g. Math.round(1.1 * 100)). Do your calculations, and when you are done with the math, scale back to the original precision. See the "Floating Point" section of JavaScript: The Good Parts.
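To illustrate why that Math.round step matters:

console.log(1.1 * 100);             // 110.00000000000001 - scaling alone isn't enough
console.log(Math.round(1.1 * 100)); // 110 - now a true integer

const total = Math.round(1.1 * 100) + Math.round(2.2 * 100); // integer math: 110 + 220
console.log(total / 100);           // 3.3 - scale back when the math is done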
One answer is to do the math in decimal instead of binary; then you never have to worry about decimal <=> binary conversion errors. You'd represent the numbers as decimal digits in an array or a string and write the math routines yourself.
Here are some bignumber libraries you can look into if you don't want to go to that trouble:
http://jsfromhell.com/classes/bignumber
http://stz-ida.de/html/oss/js_bigdecimal.html.en
The only definite solution seems to be writing your own arbitrary-precision number type that works on strings internally, which will be complicated and horribly slow.
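As a taste of what that involves, here is a toy sketch that adds two non-negative integer strings digit by digit (a real implementation would also need signs, decimal points, and the other operations):

function addDigitStrings(a, b) {
  let result = '', carry = 0;
  let i = a.length - 1, j = b.length - 1;
  while (i >= 0 || j >= 0 || carry) {
    const sum = (i >= 0 ? +a[i--] : 0) + (j >= 0 ? +b[j--] : 0) + carry; // one column at a time
    result = (sum % 10) + result;
    carry = sum >= 10 ? 1 : 0;
  }
  return result;
}
console.log(addDigitStrings('99999999999999999999', '1')); // '100000000000000000000' - no precision loss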
Is there a way to represent a number with higher than 53-bit precision in JavaScript? In other words, is there a way to represent a 64-bit precision number?
I am trying to implement some logic in which each bit of a 64-bit number represents something. I lose the lower significant bits when I try to set bits higher than 2^53.
Math.pow(2,53) + Math.pow(2,0) == Math.pow(2,53)
Is there a way to implement a custom library or something to achieve this?
Google's Closure library has goog.math.Long for this purpose.
The GWT team have added long emulation support, so Java longs really hold 64 bits. Do you want 64-bit floats or whole numbers?
I'd just use either an array of integers or a string.
Numbers in JavaScript are doubles; I think there is a rounding error involved in your equation.
Perhaps I should have added some technical detail. Basically the GWT long emulation uses a tuple of two numbers, the first holding the high 32 bits and the second the low 32 bits of the 64 bit long.
The library of course contains methods for things like adding two "longs" and getting a "long" result. Within your GWT Java code it just looks like a regular long - one doesn't need to fiddle with or be aware of the tuple. By using this approach GWT avoids the problem you're probably alluding to, namely "longs" dropping the lower bits of precision, which isn't acceptable in many cases.
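A minimal sketch of that tuple idea in plain JavaScript (not GWT's actual code, just the shape of the approach):

// represent a 64-bit value as two unsigned 32-bit halves
function add64(a, b) {
  const lo = (a.lo >>> 0) + (b.lo >>> 0);  // at most ~2^33, still exact in a double
  const carry = lo > 0xFFFFFFFF ? 1 : 0;
  const hi = ((a.hi >>> 0) + (b.hi >>> 0) + carry) >>> 0; // wraps on overflow, like Java
  return { hi: hi, lo: lo >>> 0 };
}

const x = { hi: 0, lo: 0xFFFFFFFF };
console.log(add64(x, { hi: 0, lo: 1 })); // { hi: 1, lo: 0 } - the carry is preserved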
Whilst floats are by definition imprecise approximations of a value, a whole number like a long isn't. GWT always holds a full 64-bit long, and maths on such longs never loses precision. The exception is overflow, but that accurately matches what occurs in Java etc. when you add two very large long values that require more than 64 bits, e.g. (2^63 - 1) + (2^63 - 1).
To do the same for floating point numbers will require a similar approach. You will need to have a library that uses a tuple.
The following code might work for you; however, I haven't tested it yet:
BigDecimal for JavaScript
Yes, 11 bits are reserved for the exponent, and only 52 bits contain the value, also called the fraction (the remaining bit is the sign).
JavaScript allows bitwise operations on numbers, but only the first 32 bits are used in those operations, according to the JavaScript standard specification.
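You can see the 32-bit truncation directly:

console.log(Math.pow(2, 32) | 0); // 0 - the 33rd bit falls off in bitwise ops
console.log(Math.pow(2, 31) | 0); // -2147483648 - wraps into a signed 32-bit integer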
I do not understand the misleading GWT/Java long answers to a JavaScript/double question, though. JavaScript is not Java.
Why would anyone need 64-bit precision in JavaScript?
Longs sometimes hold IDs of things in a DB, so it's important not to lose any of the lower bits... but floating point numbers are most of the time used for calculations. Using floats to hold monetary or similarly exacting values is plain wrong. If you truly need 64-bit precision, do the maths on the server, where it's faster and so on.