I have read most of the posts on here regarding floating point, and I understand the basic underlying issue: with IEEE 754 (and just by the nature of storing numbers in binary) certain fractions cannot be represented exactly. I am trying to figure out the following: if both Python and JavaScript use the IEEE 754 standard, why does executing the following in Python
.1 + .1
Results in
0.20000000000000001 (which is to be expected)
Whereas in JavaScript (at least in Chrome and Firefox) the answer is 0.2.
However performing
.1 + .2
In both languages results in
0.30000000000000004
In addition, executing
var a = 0.3; in JavaScript and printing a results in
0.3
Whereas doing a = 0.3 in Python results in 0.29999999999999999
I would like to understand the reason for this difference in behavior.
In addition, many of the posts on SO link to a JavaScript port of Java's BigDecimal, but the link is dead. Does anyone have a copy?
doing a = 0.3 in Python results in
0.29999999999999999
Not quite -- watch:
>>> a = 0.3
>>> print a
0.3
>>> a
0.29999999999999999
As you see, printing a does show 0.3 -- because print goes through str, which rounds to 12 significant digits, while typing an expression (here a is a single-variable expression) at the prompt uses repr, which shows 17 significant digits (thus revealing floating point's intrinsic limitations).
JavaScript may have slightly different rules about how it rounds numbers for display, and the exact details of that rounding are quite enough to explain the differences you observe. Note, for example (on a Chrome JavaScript console):
> (1 + .1) * 1000000000
1100000000
> (1 + .1) * 100000000000000
110000000000000.02
See? If you manage to see more digits, the anomalies (which are inevitably there) become visible too.
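Another way to surface the hidden digits, without multiplying anything, is to ask for a fixed number of decimals. This is only a sketch, assuming a spec-conformant console (the toFixed output below is simply what IEEE 754 doubles give); it also shows why 0.1 + 0.1 prints as 0.2 in JavaScript -- the sum is exactly the same double as the literal 0.2:
> (0.1 + 0.1) === 0.2
true
> (0.1 + 0.1).toFixed(20)
'0.20000000000000001110'
> (0.3).toFixed(20)
'0.29999999999999998890'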
The difference is in the printing. They might both have the same IEEE 754 underlying representation, but that doesn't mean they're forced to print it the same way. It looks like JavaScript rounds the output when the difference is small enough.
With floating point numbers, the important part is how the binary data is structured, not what it shows on the screen.
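One way to look at that binary structure from JavaScript itself is to dump the raw 64 bits of each value. A sketch, assuming an engine with BigInt support (DataView.getBigUint64 is ES2020); bitsOf is just a throwaway helper for illustration:

// Dump the IEEE 754 bit pattern of a double as 16 hex digits.
function bitsOf(x) {
  var view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  return view.getBigUint64(0).toString(16);
}

bitsOf(0.1);        // "3fb999999999999a"
bitsOf(0.2);        // "3fc999999999999a"
bitsOf(0.3);        // "3fd3333333333333"
bitsOf(0.1 + 0.2);  // "3fd3333333333334" -- one bit above 0.3, hence 0.30000000000000004

On any platform where Python's float is an IEEE 754 double (practically everywhere), Python stores exactly the same bit patterns; only the printing differs.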
I would like to understand the reason for this difference in behavior.
They're different languages.
They use different underlying packages.
They have different implementations.
When you say "Python" -- which implementation are you talking about? C, Jython, IronPython? Did you compare each of those?
The Javascript folks seem to handle repeating binary fractions differently from the way the Python folks handle repeating binary fractions.
Sometimes Javascript quietly suppresses the error bits at the end. Sometimes it doesn't.
That's the reason.
You have the source code for both. If you want to know more, you can read it. Knowing the source code doesn't change much, however.
Related
How does JavaScript convert numbers to strings? I expected it to round the number to some precision, but that doesn't seem to be the case. I did the following tests:
> 0.1 + 0.2
0.30000000000000004
> (0.1 + 0.2).toFixed(20)
'0.30000000000000004441'
> 0.2
0.2
> (0.2).toFixed(20)
'0.20000000000000001110'
This is the behavior in Safari 6.1.1, Firefox 25.0.1 and node.js 0.10.21.
It looks like JavaScript displays the 17th digit after the decimal point for (0.1 + 0.2) but hides it for 0.2 (and so the number is displayed as 0.2).
How exactly does number to string conversion work in javascript?
From the question's author:
I found the answer in the ECMA script specification: http://www.ecma-international.org/ecma-262/5.1/#sec-9.8.1
When printing a number, JavaScript calls toString(). The specification of toString() explains how JavaScript decides what to print. The note below
The least significant digit of s is not always uniquely determined by the requirements listed in step 5.
as well as the one here: http://www.ecma-international.org/ecma-262/5.1/#sec-15.7.4.5
The output of toFixed may be more precise than toString for some values because toString only prints enough significant digits to distinguish the number from adjacent number values.
explain the basic idea behind the behavior of toString().
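In other words, toString() picks the shortest decimal string that parses back to exactly the same double, while toFixed() shows however many digits you ask for. A quick console check (a sketch; any spec-conformant engine assumed):
> (0.2).toString()
'0.2'
> Number('0.2') === 0.2
true
> (0.1 + 0.2).toString()
'0.30000000000000004'
> Number('0.30000000000000004') === 0.1 + 0.2
true
> Number('0.3') === 0.1 + 0.2
false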
This isn't about how JavaScript works, but about how floating-point operations work in general. Computers work in binary, but people mostly work in base 10. This introduces some imprecision here and there; how bad the imprecision is depends on how the hardware and (sometimes) the software in question work. But the key is that you can't easily predict by eye what the errors will be, only that there will be errors.
JavaScript doesn't have a rule like "display this many digits after the decimal point for some numbers but not for others." Instead, the computer is giving you its best approximation of the number requested. 0.2 is not something that can be represented exactly in binary, so if you ask for more precision than the default display would use, you see the rounding error (the 1110 at the end, in this case).
This is actually the same question as this old one. From the excellent community wiki answer there:
All floating point math is like this and is based on the IEEE 754 standard. JavaScript uses 64-bit floating point representation, which is the same as Java's double.
It's a famous example that in JavaScript, console-logging 0.1 + 0.2 yields
0.1 + 0.2 = 0.30000000000000004
The typical explanation for this is that it happens because of the way javascript represents numbers.
I have two questions about that:
1) Why does JavaScript decide how to represent numbers -- isn't it the "environment's" job (whatever compiles or runs the code, be it the browser or something else) to decide how it wants to represent numbers?
2) Why is it impossible to fix this behavior to match most programming languages (Java, C++, etc.)? I mean, if this behavior isn't really good (and most would agree it isn't), why is it impossible to fix? (Douglas Crockford has pointed out other JavaScript flaws, for example the weird behavior of 'this', and they have stayed that way for 20 years.) What is preventing JavaScript from fixing these mistakes?
Why does JavaScript decide how to represent numbers -- isn't it the "environment's" job
That would be chaos. By having JavaScript define the behavior of its fundamental types, we can rely on them behaving in that way across environments.
Okay, "chaos" is rather strong. I believe C never defined what float and double actually were other than some range limits, and it would be fair to say that C was and arguably is wildly successful, "chaos" and all. Still, the modern trend is to nail things down a bit more.
Why is it impossible to fix this behavior to match most programming languages (Java, C++, etc.)?
This is the behavior of most modern programming languages. Most modern programming languages use IEEE-754 single- (often "float") and double- (often "double") precision floating point numbers:
JavaScript: http://www.ecma-international.org/ecma-262/5.1/#sec-4.3.19
Number value
primitive value corresponding to a double-precision 64-bit binary format IEEE 754 value
Java: http://docs.oracle.com/javase/specs/jls/se7/html/jls-4.html#jls-4.2.3
The floating-point types are float and double, which are conceptually associated with the single-precision 32-bit and double-precision 64-bit format IEEE 754 values and operations as specified in IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985 (IEEE, New York).
C#: http://msdn.microsoft.com/en-us/library/aa691146(v=vs.71).aspx
C# supports two floating point types: float and double. The float and double types are represented using the 32-bit single-precision and 64-bit double-precision IEEE 754 formats
I read this on W3Schools:
All numbers in JavaScript are stored as 64-bit (8-bytes) base 10,
floating point numbers.
This sounds quite strange. Either it's wrong, or there should be a good reason not to use base 2 as the IEEE standard does.
I tried to find a real definition of how JavaScript stores numbers, but I couldn't find one. Neither in the V8 nor in the WebKit documentation (the two JavaScript implementations I could find on Wikipedia that sounded most familiar to me) could I find how the JavaScript Number type is stored.
So, does JavaScript use base 10? If so, why? The only reason I could come up with was that maybe base 10 has an advantage when you want to store integers accurately as well as floating point numbers, but I don't actually see how base 10 would help with that.
That's not the World Wide Web Consortium (W3C); that's w3schools, a website that isn't an authority on any web standard.
Numbers in JavaScript are double-precision floating point numbers, following the IEEE 754 standard.
The site got the part about every number being a 64-bit floating point number right. Base 10 has nothing to do with the numerical representation; that probably comes from the fact that floating point numbers are always parsed and formatted in base 10.
Numbers in JavaScript are, according to the ECMA-262 Standard (ECMAScript 5.1) section 4.3.19:
Primitive values corresponding to a double-precision 64-bit binary format IEEE 754 value.
Thus, any implementation using base 10 floating point numbers is not ECMA-262 conformant.
JavaScript, like most modern languages, uses IEEE 754, which is not stored in base 10 at all.
The peculiarity of JavaScript is that there is only one number type, the double-precision float. This has the side effect that, compared to other languages, you're somewhat limited when dealing with integers: you can't store arbitrary 64-bit integers, only those that fit in the 53 bits of precision (the 52-bit fraction plus the implicit leading bit).
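A quick way to see that integer limit in a console (a sketch, assuming an ES2015+ engine for Number.MAX_SAFE_INTEGER and Number.isSafeInteger):
> Number.MAX_SAFE_INTEGER
9007199254740991
> Math.pow(2, 53) === Math.pow(2, 53) + 1
true
> Number.isSafeInteger(Math.pow(2, 53) - 1)
true
> Number.isSafeInteger(Math.pow(2, 53))
false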
I love JavaScript, don't get me wrong, but my problem is that I want to develop open source web applications for scientific computation, and JavaScript's arithmetic isn't exactly the most precise. I've scripted server-side, but I prefer client-side for the obvious reasons that the experience for the user is generally smoother and there is less load on the server.
What are my options as far as working around this issue? I've read somewhere that you can implement languages on top of JavaScript -- would this be worth it, and what does that look like? If I, say, implement Python on top of JavaScript, does that mean the client needs a Python interpreter to use the site?
I just can't handle
0.1 + 0.2 == 0.3 // is False
Floating-point operations are approximate calculations.
This is not wrong; see more about it here:
Weird programming behavior
This is not specific to JavaScript but common to programming as a whole.
Here is what I get on Chrome:
0.1 + 0.2 = 0.30000000000000004;
And here is a simple but excellent read on the subject:
What Every Programmer Should Know About Floating-Point Arithmetic
Why don’t my numbers, like 0.1 + 0.2 add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?
Because internally, computers use a format (binary floating-point)
that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already
rounded to the nearest number in that format, which results in a small
rounding error even before the calculation happens.
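To see that the rounding really happens when the literal is parsed, not during the addition, here is a small sketch (the long literal below is just (0.1).toFixed(20), so it denotes the very same double as 0.1):
> (0.1).toFixed(20)
'0.10000000000000000555'
> 0.1 === 0.10000000000000000555
true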
There are a few libraries that allow for much better mathematical operations:
Big Number
Big Js (Different library)
However, I would very strongly suggest doing this server side. You could easily do it via AJAX and not have to worry about responsiveness. Javascript just wasn't really built for numbers.
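If pulling in a library is overkill, the same idea can be sketched by hand for simple cases: scale to integers, do exact integer arithmetic, then scale back. addFixed here is a hypothetical helper, not an API of the libraries above, and the result is still an ordinary double -- just the one closest to the intended value:

// Work in scaled integers (here: a fixed number of decimal places),
// then divide back down. addFixed is a hypothetical helper for illustration.
function addFixed(a, b, decimals) {
  var scale = Math.pow(10, decimals);
  return (Math.round(a * scale) + Math.round(b * scale)) / scale;
}

addFixed(0.1, 0.2, 2);  // 0.3 -- the double nearest 0.3, so it prints as 0.3
0.1 + 0.2;              // 0.30000000000000004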
Can the same mathematical operation return different results in different architectures or browsers?
The other answers are incorrect. According to the ECMAScript 5.1 spec (section 15.8.2):
NOTE The behaviour of the functions acos, asin, atan, atan2, cos, exp,
log, pow, sin, sqrt, and tan is not precisely specified here except
to require specific results for certain argument values that represent
boundary cases of interest.
...
Although the choice of algorithms is
left to the implementation, it is recommended (but not specified by
this standard) that implementations use the approximation algorithms
for IEEE 754 arithmetic contained in fdlibm, the freely distributable
mathematical library from Sun Microsystems
However, even if the implementations were specified, the exact results of all floating-point operations would still be dependent on browser/architecture. That includes simple operations like multiplication and division!!
The reason is that IEEE-754 allows systems to do 64-bit floating-point calculations at a higher-precision than the result, leading to different rounding results than systems which use the same precision as the result. This is exactly what the x86 (Intel) architecture does, which is why in C (and javascript) we can sometimes have cos(x) != cos(y) even though x == y, even on the same machine!
This is a big issue for networked peer-to-peer games, since this means, if the higher-precision calculations can't be disabled (as is the case for C#), those games pretty much can't use floating-point calculations at all. However, this is typically not an issue for Javascript games, since they are usually client-server.
If we assume that every browser vendor follows the IEEE standard plus the ECMA spec, and that there is no human error in the implementation, then no, there can't be any difference.
The ECMAScript language specification, 5.1 edition, states that numbers are primitive values corresponding to IEEE 754 doubles, which implies that calculations should be consistent:
http://www.ecma-international.org/publications/files/ecma-st/ECMA-262.pdf
4.3.19 Number value
primitive value corresponding to a double-precision 64-bit binary format IEEE 754 value
NOTE
A Number value is a member of the Number type and is a direct representation of a number.
However, as BlueRaja points out, there is a sort of caveat in section 15.8.2:
The behaviour of the functions acos, asin, atan, atan2, cos, exp, log,
pow, sin, sqrt, and tan is not precisely specified here...
Meaning, there are at least some cases where the outcome of operations on numbers is implementation-dependent and may therefore be inconsistent across browsers or architectures.
My two cents: as @goldilocks and others note, you shouldn't use == or != on floating point numbers. So what do you mean by "deterministic"? That the behavior is always the same on different machines? Obviously this depends on what you mean by "the same behavior."
Well, at one silly, literal level of "the same," of course not: the physical bits will be different on, e.g., 32-bit versus 64-bit machines. So that interpretation is out.
OK, so will any program produce the same output on two different machines? For languages in general, no, because a C program can do something with undefined behavior, like reading uninitialized memory.
OK, so will any valid program do the same thing on different machines? Well, I would say a program that uses == or != on floating point numbers is about as invalid as a program that reads uninitialized memory. I personally don't know whether the JavaScript standard pins down the behavior of == and != on floats to the point that it's well-defined, if kooky, so if that is your precise question you'll have to see the other answers. Can you write JavaScript code whose output is undefined with respect to the standard? I've never read the standard (the other answers cover this somewhat), but for my purposes this is moot, because the programs that would produce what you call non-deterministic behavior are invalid to begin with.
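For what it's worth, the usual alternative to == on floats is a tolerance-based comparison. A minimal sketch, assuming an ES2015+ engine for Number.EPSILON; the particular tolerance here is my own choice, not something the answers above prescribe:

// Compare two doubles with a relative tolerance instead of ===.
// The tolerance below (a few multiples of Number.EPSILON) is a common
// default, but the right value depends on the computation at hand.
function approxEqual(a, b, tolerance) {
  tolerance = tolerance || Number.EPSILON * 4;
  return Math.abs(a - b) <= tolerance * Math.max(Math.abs(a), Math.abs(b));
}

approxEqual(0.1 + 0.2, 0.3);  // true
0.1 + 0.2 === 0.3;            // false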