JavaScript: string representation of numbers

How does javascript convert numbers to strings? I expect it to round the number to some precision but it doesn't look like this is the case. I did the following tests:
> 0.1 + 0.2
0.30000000000000004
> (0.1 + 0.2).toFixed(20)
'0.30000000000000004441'
> 0.2
0.2
> (0.2).toFixed(20)
'0.20000000000000001110'
This is the behavior in Safari 6.1.1, Firefox 25.0.1 and node.js 0.10.21.
It looks like javascript displays the 17th digit after the decimal point for (0.1 + 0.2) but hides it for 0.2 (and so the number is rounded to 0.2).
How exactly does number to string conversion work in javascript?

From the question's author:
I found the answer in the ECMA script specification: http://www.ecma-international.org/ecma-262/5.1/#sec-9.8.1
When printing a number, javascript calls toString(). The specification of toString() explains how javascript decides what to print. This note below
The least significant digit of s is not always uniquely determined by the requirements listed in step 5.
as well as the one here: http://www.ecma-international.org/ecma-262/5.1/#sec-15.7.4.5
The output of toFixed may be more precise than toString for some values because toString only prints enough significant digits to distinguish the number from adjacent number values.
explain the basic idea behind the behavior of toString().
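A quick way to see this in any of those consoles is to check which strings round-trip back to the same double (a sketch; the exact digits assume IEEE 754 doubles, which all the engines above use):
> (0.2).toString()
'0.2'
> Number('0.2') === 0.2
true
> (0.1 + 0.2).toString()
'0.30000000000000004'
> Number('0.3') === 0.1 + 0.2
false
For 0.2, the short string "0.2" already parses back to the exact same double, so toString() stops there; for 0.1 + 0.2 the nearest double is not the one "0.3" parses to, so toString() must emit all 17 digits to distinguish them.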

This isn't about how javascript works, but about how floating-point operations work in general. Computers work in binary, but people mostly work in base 10. This introduces some imprecision here and there; how bad the imprecision is depends on how the hardware and (sometimes) software in question works. But the key is that you can't predict exactly what the errors will be, only that there will be errors.
Javascript doesn't have a rule like "display so many numbers after the decimal point for certain numbers but not for others." Instead, the computer is giving you its best estimate of the number requested. 0.2 is not something that can be easily represented in binary, so if you tell the computer to use more precision than it would otherwise, you get rounding errors (the 1110 at the end, in this case).
This is actually the same question as this old one. From the excellent community wiki answer there:
All floating point math is like this and is based on the IEEE 754 standard. JavaScript uses 64-bit floating point representation, which is the same as Java's double.

Related

why does JavaScript mess up 0.1 + 0.2 when C++ doesn't? [duplicate]

This question was closed as a duplicate of "Is floating point math broken?"
I understand that with the IEEE representation (or any binary representation) for double one can't represent 0.1 with a finite amount of bits.
I have two questions with this in mind:
When C++ also uses the same standard for double why doesn't it mess up 0.1 + 0.2 like JavaScript does?
Why does JavaScript print console.log(0.1) correctly when it can't accurately hold it in memory?
There are at least three reasonable choices for conversion of floating point numbers to strings:
1. Print the exact value. This seems like an obvious choice, but has downsides. The exact decimal value of a finite floating point number always exists, but may have hundreds of significant digits, most of which are of no practical use. java.util.BigDecimal's toString does this.
2. Print just enough digits to uniquely identify the floating point number. This is the choice made, for example, in Java for default conversion of double or float.
3. Print few enough digits to ensure that most output digits will be unaffected by rounding error on most simple calculations. That was the choice made in C.
Each of these has advantages and disadvantages. A choice 3 conversion will get "0.3", the "right" answer, for the result of adding 0.1 and 0.2. On the other hand, reading in a value printed this way cannot be depended on to recover the original float, because multiple floating point values map to the same string.
I don't think any of these options is "right" or "wrong". Languages typically have ways of forcing one of the non-default options, and that should be done if the default is not the best choice for a particular output.
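For concreteness, here is roughly how the three choices look for 0.1 + 0.2 in a JavaScript console (a sketch; choice 1 is only approximated with a long toFixed, and choice 3 is imitated with the 6 fractional digits that C's printf("%f") uses by default):
var x = 0.1 + 0.2;
x.toFixed(20);  // ~choice 1: '0.30000000000000004441' (more digits of the exact stored value)
x.toString();   //  choice 2: '0.30000000000000004' (just enough digits to round-trip)
x.toFixed(6);   // ~choice 3: '0.300000' (few digits, so the rounding error stays hidden)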
Because it prints x to only a limited number of decimal digits - just enough to identify the stored value - and for the double closest to 0.1 that short form happens to read as the correct answer, "0.1".

Javascript Number Representation

It's a famous example that in javascript console logging 0.1 + 0.2 yields
0.1 + 0.2 = 0.30000000000000004
The typical explanation for this is that it happens because of the way javascript represents numbers.
I have 2 questions on that:
1) Why does javascript decide how to represent numbers - isn't it the "environment's" job (whatever compiles or runs the code, be it the browser or something else) to decide how it wants to represent numbers?
2) Why is it impossible to fix this behavior to match most programming languages (Java, C++, etc.)? I mean, if this behavior isn't really good (and most would agree it isn't), why is it impossible to fix? (Douglas Crockford showed other javascript flaws, for example weird behavior with 'this', and they've been that way for 20 years.) What is preventing javascript from fixing these mistakes?
Why does javascript decide how to represent numbers - isn't it the "environment"
That would be chaos. By having JavaScript define the behavior of its fundamental types, we can rely on them behaving in that way across environments.
Okay, "chaos" is rather strong. I believe C never defined what float and double actually were other than some range limits, and it would be fair to say that C was and arguably is wildly successful, "chaos" and all. Still, the modern trend is to nail things down a bit more.
Why is it impossible to fix this behavior to match most programming languages (Java, C++, etc.)?
This is the behavior of most modern programming languages. Most modern programming languages use IEEE-754 single- (often "float") and double- (often "double") precision floating point numbers:
JavaScript: http://www.ecma-international.org/ecma-262/5.1/#sec-4.3.19
Number value
primitive value corresponding to a double-precision 64-bit binary format IEEE 754 value
Java: http://docs.oracle.com/javase/specs/jls/se7/html/jls-4.html#jls-4.2.3
The floating-point types are float and double, which are conceptually associated with the single-precision 32-bit and double-precision 64-bit format IEEE 754 values and operations as specified in IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985 (IEEE, New York).
C#: http://msdn.microsoft.com/en-us/library/aa691146(v=vs.71).aspx
C# supports two floating point types: float and double. The float and double types are represented using the 32-bit single-precision and 64-bit double-precision IEEE 754 formats
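Since the representation itself is pinned down by the standard, the practical "fix" happens at comparison or display time rather than in the number type. A minimal sketch (Number.EPSILON requires ES2015):
var sum = 0.1 + 0.2;
sum === 0.3;                           // false: the two doubles differ by one ulp
Math.abs(sum - 0.3) < Number.EPSILON;  // true: compare with a tolerance instead
sum.toFixed(2);                        // '0.30': round only when formatting for display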

JavaScript floating point

I was wondering what the floating point limitations of JavaScript are. The following code doesn't return the result written out in plain decimal form, the way 2e-2 is the same as 0.02:
var numero = 1E-12;
document.write(numero);
returns 1e-12.
What is the max exponential result that JavaScript can handle?
JavaScript is an implementation of ECMAScript, specified in Ecma-262 and ISO/IEC 16262. Ecma-262 specifies that IEEE 754 64-bit binary floating point is used.
In this format, the smallest positive number is 2^-1074 (slightly over 4.94e-324), and the largest finite number is 2^1024 - 2^971 (slightly under 1.798e308). Infinity can be represented, so, in this sense, there is no upper limit to the value of a number in JavaScript.
Numbers in this format have at most 53 bits in their significands (fraction parts). (Numbers under 2^-1022 are subnormal and have fewer bits.) The limited number of bits means that many numbers are not exactly representable, including the .02 in your example. Consequently, the results of arithmetic operations are rounded to the nearest representable values, and errors in chains of calculations may cancel or may accumulate, even catastrophically.
The format also includes some special entities called NaNs (for Not a Number). NaNs may be used to indicate that a number has not been initialized, for special debugging purposes, or to represent the result of an operation for which no number is suitable (such as the square root of –1).
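The limits are easy to probe from a console (a sketch; the constants used are standard ECMAScript properties):
Number.MAX_VALUE;      // 1.7976931348623157e+308, the largest finite number
Number.MAX_VALUE * 2;  // Infinity: overflow has no finite result
Number.MIN_VALUE;      // 5e-324, the smallest positive (subnormal) number
1e-12;                 // 1e-12: tiny exponents like the question's example are no problem
Math.sqrt(-1);         // NaN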
The maximum that is less than zero..?
There is a detailed discussion here. Basically, numbers smaller than 0.000001 (six or more zeroes between the decimal point and the first significant digit) are displayed in exponential notation.
Of course, you could test this for yourself ;) particularly to see if this behaviour is reliable cross-browser.
Personally, I don't think you should be concerned, or rely, on this behaviour. When you come to display a number just format it appropriately.
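A quick console check of that threshold (a sketch; the ECMAScript ToString rules make this consistent across conforming engines):
> 0.000001
0.000001
> 0.0000001
1e-7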

Why does JavaScript use base 10 floating point numbers (according to w3schools)?

I read this on W3Schools:
All numbers in JavaScript are stored as 64-bit (8-bytes) base 10, floating point numbers.
This sounds quite strange. Now, it's either wrong or there should be a good reason not to use base 2 like the IEEE standard.
I tried to find a real JavaScript definition, but I couldn't find one. In the documentation for V8 and WebKit, the two JavaScript implementations I could find on Wikipedia that sounded the most familiar to me, I couldn't find how they store the JavaScript Number type.
So, does JavaScript use base 10? If so, why? The only reason I could come up with was that maybe using base 10 has an advantage when you want to be able to accurately store integers as well as floating point numbers, but I don't know how using base 10 would have an advantage for that myself.
That's not the World Wide Web Consortium (W3C), that's w3schools, a website that isn't any authority for any web standards.
Numbers in Javascript are double precision floating point numbers, following the IEEE standards.
The site got the part about every number being a 64-bit floating point number right. But base 10 has nothing to do with the numerical representation; that probably comes from the fact that floating point numbers are always parsed and formatted in base 10.
Numbers in JavaScript are, according to the ECMA-262 Standard (ECMAScript 5.1) section 4.3.19:
Primitive values corresponding to a double-precision 64-bit binary format IEEE 754 value.
Thus, any implementation using base 10 floating point numbers is not ECMA-262 conformant.
JavaScript uses, like most modern languages, IEEE754. Which isn't at all stored in base 10.
The specificity of JavaScript is that there is only one number type, the double-precision float. This has the side effect that, contrary to other languages, you are somewhat limited if you want to deal with integers: you can't store every 64-bit integer, only the ones fitting in the significand (a 52-bit fraction plus an implicit leading bit, so integers are exact only up to 2^53).
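A short sketch of that integer limit (Number.MAX_SAFE_INTEGER requires ES2015):
Number.MAX_SAFE_INTEGER;                // 9007199254740991, i.e. 2^53 - 1
9007199254740992 === 9007199254740993;  // true: above 2^53, distinct integers start to collapse
Math.pow(2, 53) + 2;                    // 9007199254740994: only every second integer is representable here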

Another floating point question

I have read most of the posts on here regarding floating point, and I understand the basic underlying issue that using IEEE 754 (and just by the nature of storing numbers in binary) certain fractions cannot be represented. I am trying to figure out the following: If both Python and JavaScript use the IEEE 754 standard, why is it that executing the following in Python
.1 + .1
Results in
0.20000000000000001 (which is to be expected)
Whereas in JavaScript (in at least Chrome and Firefox) the answer is .2
However performing
.1 + .2
In both languages results in
0.30000000000000004
In addition, executing
var a = 0.3; in JavaScript and printing a results in
0.3
Whereas doing a = 0.3 in Python results in 0.29999999999999999
I would like to understand the reason for this difference in behavior.
In addition, many of the posts on SO link to a JavaScript port of Java's BigDecimal, but the link is dead. Does anyone have a copy?
doing a = 0.3 in Python results in
0.29999999999999999
Not quite -- watch:
>>> a = 0.3
>>> print a
0.3
>>> a
0.29999999999999999
As you see, printing a does show 0.3 -- because by default print uses str(), which rounds to 12 significant digits, while typing an expression (here a is a single-variable expression) at the prompt shows its repr(), with 17 significant digits (thus revealing floating point's intrinsic limitations).
JavaScript may have slightly different rounding rules about how to display numbers, and the exact details of that rounding are enough to explain the differences you observe. Note, for example (on a Chrome javascript console):
> (1 + .1) * 1000000000
1100000000
> (1 + .1) * 100000000000000
110000000000000.02
see? if you manage to see more digits, the anomalies (which inevitably are there) become visible too.
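The same hidden digits can be coaxed out directly, without multiplying, by asking JavaScript for more precision than its default shortest form (a sketch using the standard toPrecision method):
> (0.1 + 0.1).toString()
'0.2'
> (0.1 + 0.1).toPrecision(17)
'0.20000000000000001'
> (0.3).toPrecision(17)
'0.29999999999999999'
which is exactly the 17-significant-digit form Python's prompt was showing for 0.3.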
They might both have the same IEEE 754 underlying representation, but that doesn't mean they're forced to print the same way. It looks like Javascript is rounding the output when the difference is small enough.
With floating point numbers, the important part is how the binary data is structured, not what it shows on the screen.
I would like to understand the reason for this difference in behavior.
They're different languages.
They use different underlying packages.
They have different implementations.
When you say "Python" -- which implementation are you talking about? CPython, Jython, IronPython? Did you compare each of those?
The Javascript folks seem to handle repeating binary fractions differently from the way the Python folks handle repeating binary fractions.
Sometimes Javascript quietly suppresses the error bits at the end. Sometimes it doesn't.
That's the reason.
You have the source code for both. If you want to know more, you can. Knowing the source code doesn't change much, however.
