Javascript adding extra precision when doing additions [duplicate] - javascript

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Is JavaScript's Math broken?
Any ideas as to why:
(872.23 + 2315.66 + 4361.16) == 7549.05
returns false in a JavaScript console (e.g. the Chrome Developer console)?
If I try it with my calculator, the left side comes out to exactly 7549.05. However, JavaScript displays it as 7549.049999999999. I could "fix" it or round it, but WHY should I have to do that for simple additions?
Thanks

Marco Mariani answered a similar question a short time ago:
What Every Computer Scientist Should Know About Floating Point Arithmetic
and the shorter, more to the point:
http://floating-point-gui.de/

That is not Javascript adding extra precision, that is your computer's floating-point representation not being able to accurately represent your number. Get around it by using rational numbers or fixed-point (instead of floating-point) arithmetic.

By using decimal values, you are using floating point numbers. You would have to know a bit about how floating point numbers are represented in binary, and how binary floating point addition works, to understand why adding them does not always give the result you expect.
Here is a quick Google result that you might want to glance at: The Complete Javascript Number Reference
Also, if you want to learn how floating point is represented in binary, look at IEEE floating point on Wikipedia.
Your best bet in this case would be to round.
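For example, a minimal sketch of the rounding approach for the comparison in the question (note that toFixed returns a string, so either compare strings or convert back to a number):
var sum = 872.23 + 2315.66 + 4361.16;     // 7549.049999999999
sum.toFixed(2) === "7549.05";             // true
Math.round(sum * 100) / 100 === 7549.05;  // true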

This is because of how floating point numbers are represented in hardware (64-bit IEEE 754 doubles in JavaScript, but the issue is the same for any width). Basically, you can't represent a value like 7549.05 exactly in binary (more on this issue on Wikipedia).
So, for practical uses, if the numbers are currency, a good option is multiplying by 100 so they are always integers, and operating with those integers (which gives exact results when adding, subtracting or multiplying, as long as they stay below 2^53).
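A quick sketch of that approach for the numbers in the question (assuming the inputs are already rounded to cents, so Math.round is enough to recover exact integers after the multiplication):
// Work in integer cents so the addition itself is exact
var cents = Math.round(872.23 * 100) + Math.round(2315.66 * 100) + Math.round(4361.16 * 100);
cents === 754905;   // true
cents / 100;        // 7549.05, converted back only for display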

Related

How do I deal with numbers greater than 2^53 in Javascript? [duplicate]

This question already has an answer here:
What is the standard solution in JavaScript for handling big numbers (BigNum)?
(1 answer)
Closed 7 years ago.
I have looked around at questions asking about the maximum and/or minimum limit to numbers in JavaScript. They all say the limits are -2^53 to 2^53 inclusive, but none of them say whether there is any way to deal with numbers outside of that range if you need to, except one answer that said you can change the number into a string; it wasn't very specific, though, and I didn't understand it.
If anyone can either expand on the idea of changing it into a string or offer a new one, that would be very helpful.
Also, as a side note that is probably much simpler: how do you make sure that numbers are displayed only in standard form and not in scientific notation?
Javascript numbers are represented internally as IEEE 754 double-precision floating point numbers.
On an absolute scale, they can represent numbers from -2^1023 to 2^1023. For small numbers like 1, they have a very high precision, down to steps of 2^-52. However, as the magnitude of a number increases, the precision of the representation decreases. The ±2^53 range you've read about is the maximum range of exact integer representation: once a number exceeds 2^53, the minimum "step" increases from 1 to 2.
If you need to exactly represent integers greater than 2^53, you will need to use a JavaScript bignum library. However, if you just need to represent floating-point values, you're OK; the default number type will do just fine.
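If the environment is recent enough, BigInt (an arbitrary-precision integer type available in modern JavaScript engines) covers the integer case without a separate library. A minimal sketch, assuming a modern engine:
2 ** 53 === 2 ** 53 + 1;       // true (!) because Numbers can no longer step by 1 here
2n ** 53n + 1n;                // 9007199254740993n, exact as a BigInt (note the trailing n)
(1e21).toString();             // "1e+21", a plain Number falls back to scientific notation here
BigInt(1e21).toString();       // "1000000000000000000000", a BigInt always prints its full digits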

Unexpected ParseFloat result [duplicate]

This question already has answers here:
Floating point inaccuracy examples
(7 answers)
Closed 7 years ago.
Today, I have seen a strange issue with parseFloat in JavaScript. When I do an addition operation on the value, I get some strange results; below is an example:
parseFloat("31.099")+1;
Output:
32.099000000000004
Why do I have a problem only with 31.099?
Floating point numbers are always a bit strange. Check out the description in the PHP Manual as well; there is a warning about precision.
Most programming languages behave this way: when you add an integer to a floating point number, the result is an approximation. One workaround would be to create your own numeric type that keeps the integer part and the fractional part separate.
Yes, floating point numbers are not always precise. That's because most decimal fractions cannot be represented exactly in binary.
To fix it, use toFixed() to round floating point numbers to the amount of precision you want. For example, for two decimal places:
num.toFixed(2)
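For the example in the question, a quick sketch (toFixed returns a string, so wrap it in Number() if you need a number back):
var result = parseFloat("31.099") + 1;    // 32.099000000000004
result.toFixed(3);                        // "32.099"
Number(result.toFixed(3));                // 32.099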

why does JavaScript mess up 0.1 + 0.2 when C++ doesn't? [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 8 years ago.
I understand that with the IEEE representation (or any binary representation) for double one can't represent 0.1 with a finite amount of bits.
I have two questions with this in mind:
If C++ also uses the same standard for double, why doesn't it mess up 0.1 + 0.2 like JavaScript does?
Why does JavaScript print console.log(0.1) correctly when it can't accurately hold it in memory?
There are at least three reasonable choices for conversion of floating point numbers to strings:
Print the exact value. This seems like an obvious choice, but has downsides. The exact decimal value of a finite floating point number always exists, but may have hundreds of significant digits, most of which are of no practical use. java.util.BigDecimal's toString does this.
Print just enough digits to uniquely identify the floating point number. This is the choice made, for example, in Java for default conversion of double or float.
Print few enough digits to ensure that most output digits will be unaffected by rounding error on most simple calculations. That was the choice made in C.
Each of these has advantages and disadvantages. A choice 3 conversion will get "0.3", the "right" answer, for the result of adding 0.1 and 0.2. On the other hand, reading in a value printed this way cannot be depended on to recover the original float, because multiple floating point values map to the same string.
I don't think any of these options is "right" or "wrong". Languages typically have ways of forcing one of the non-default options, and that should be done if the default is not the best choice for a particular output.
Because it prints x to a fixed number of decimal places, and that rounded output happens to look correct.
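A quick console illustration of the "just enough digits" behaviour JavaScript uses (choice 2 above) compared with printing more or fewer digits explicitly:
console.log(0.1);                          // 0.1, the shortest string that maps back to this exact double
console.log(0.1 + 0.2);                    // 0.30000000000000004, the sum is a different double than 0.3
console.log((0.1 + 0.2).toPrecision(21));  // 0.300000000000000044409, closer to the exact stored value
console.log((0.1 + 0.2).toFixed(1));       // 0.3, rounded for display, similar in spirit to choice 3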

Why does JavaScript use base 10 floating point numbers (according to w3schools)?

I read this on W3Schools:
All numbers in JavaScript are stored as 64-bit (8-bytes) base 10,
floating point numbers.
This sounds quite strange. Now, it's either wrong or there should be a good reason not to use base 2 like the IEEE standard.
I tried to find a real JavaScript definition, but I couldn't find any. Neither in the V8 nor the WebKit documentation (the two JavaScript implementations I could find on Wikipedia that sounded most familiar to me) could I find how they store the JavaScript Number type.
So, does JavaScript use base 10? If so, why? The only reason I could come up with was that maybe using base 10 has an advantage when you want to be able to accurately store integers as well as floating point numbers, but I don't know how using base 10 would have an advantage for that myself.
That's not the World Wide Web Consortium (W3C), that's w3schools, a website that is not an authority on any web standard.
Numbers in JavaScript are double precision floating point numbers, following the IEEE standard.
The site got the part about every number being a 64-bit floating point number right. Base 10 has nothing to do with the internal representation; that claim probably comes from the fact that floating point numbers are always parsed and formatted in base 10.
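One way to see that the stored form is binary rather than base 10 is to ask a Number for its binary digits directly:
(0.1).toString(2);   // "0.0001100110011001100110011..." the bits repeat, 0.1 has no exact binary form
(0.5).toString(2);   // "0.1" exact, because 0.5 is a power of two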
Numbers in JavaScript are, according to the ECMA-262 Standard (ECMAScript 5.1) section 4.3.19:
Primitive values corresponding to a double-precision 64-bit binary format IEEE 754 value.
Thus, any implementation using base 10 floating point numbers is not ECMA-262 conformant.
JavaScript uses IEEE 754, like most modern languages, and IEEE 754 doubles are not stored in base 10 at all.
What is specific to JavaScript is that there is only one number type, the double precision float. A side effect is that, compared with other languages, you are somewhat limited if you want to deal with integers: you can't store arbitrarily large integers exactly, only those that fit in the significand (52 explicit bits, 53 counting the implicit leading bit).
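A short illustration of that integer limit in a modern engine (Number.MAX_SAFE_INTEGER and Number.isSafeInteger were added in ES2015; the underlying limit is the same in older engines):
Number.MAX_SAFE_INTEGER;            // 9007199254740991, i.e. 2^53 - 1
Number.isSafeInteger(2 ** 53 - 1);  // true
Number.isSafeInteger(2 ** 53);      // false, from here on not every integer has its own representation
9007199254740992 + 1;               // 9007199254740992, the +1 is lost to rounding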

Why am I seeing inexact floating-point results in ECMAScript / ActionScript 3?

Hey all, let's jump straight to a code sample to show how ECMAScript/JavaScript/AS3 can't do simple math right (AS3 uses an 'IEEE-754 double-precision floating-point number' for the Number class, which is supposedly identical to the one used in JavaScript)...
trace(1.1); //'1.1': Ok, fine, looks good.
trace(1.1*100); //'110.00000000000001': What!?
trace((1.1*100)/100); //'1.1': Brings it back to 1.1 (since we're apparently multiplying by *approximately* 100 and then dividing by the same *approximate* '100' amount)
trace(1.1*100-110); //'1.4210854715202004e-14': Proof that according to AS3, 1.1*100!=110 (i.e. this isn't just a bug in Number.toString())
trace(1.1*100==110); //'false': Even further proof that according to AS3, 1.1*100!=110
What gives?
Welcome to the wonderful world of floating point calculation accuracy. In general, floating point calculations will give you results that are very very nearly correct, but comparing outputs for absolute equality is unlikely to give you results you expect without the use of rounding functions.
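For example, a minimal sketch of an equality check with a tolerance instead of == (Number.EPSILON exists in modern JavaScript; in AS3 you would hard-code a small value such as 1e-9 instead):
// True when a and b differ by less than a tolerance scaled to their magnitude
function nearlyEqual(a, b, eps) {
  eps = eps || Number.EPSILON;
  return Math.abs(a - b) <= eps * Math.max(Math.abs(a), Math.abs(b));
}
nearlyEqual(1.1 * 100, 110);   // true
1.1 * 100 == 110;              // false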
This is just a side effect of using floating point numbers: they are binary approximations of decimal values, so there will always be some rounding.
Long explanation
Floating point inconsistencies are a known problem in many languages. This is because computers work in binary, and most decimal fractions cannot be represented exactly in binary.
Have fun
As moonshadow states, you're running into issues with floating point precision. Floating point numbers aren't suited to the task of representing and performing arithmetic on decimal values in the manner that you would expect. These kinds of problems are seen most often when people try to use floating point numbers for financial calculations. The Wikipedia entry is good, but you might get more out of this page, which steps through an error-prone financial calculation: http://c2.com/cgi/wiki?FloatingPointCurrency
To accurately deal with decimal numbers you need a decimal library. I've outlined two BigDecimal-style libraries written in javascript that may suit your needs in another SO post, hopefully you'll find them useful:
https://stackoverflow.com/questions/744099/javascript-bigdecimal-library/1575569
