Is there any well-known implementation of Javascript that doesn't use double precision floating point numbers according to IEEE 754? Was it in the standard from the beginning or added later?
All ECMAScript standard versions, including the very first, specified the number type with the double-precision 64-bit format according to IEEE 754.
It's a famous example that in JavaScript, logging 0.1 + 0.2 to the console yields
0.1 + 0.2 = 0.30000000000000004
The typical explanation is that this happens because of the way JavaScript represents numbers.
I have two questions on that:
1) Why does JavaScript decide how to represent numbers - isn't it the "environment"'s job (whatever is compiling the code, be it the browser or something else) to decide how it wants to represent numbers?
2) Why is it impossible to fix this behavior to match most programming languages (Java, C++, etc.)? I mean, if this behavior isn't really good (and most would agree it isn't), why is it impossible to fix? (Douglas Crockford showed other JavaScript flaws, for example the weird behavior of 'this', and it's been that way for 20 years.) What is preventing JavaScript from fixing these mistakes?
Why does JavaScript decide how to represent numbers - isn't it the "environment"
That would be chaos. By having JavaScript define the behavior of its fundamental types, we can rely on them behaving in that way across environments.
Okay, "chaos" is rather strong. I believe C never defined what float and double actually were other than some range limits, and it would be fair to say that C was and arguably is wildly successful, "chaos" and all. Still, the modern trend is to nail things down a bit more.
Why is it impossible to fix this behavior to match most programming languages (Java, C++, etc.)
This is the behavior of most modern programming languages. Most of them use IEEE 754 single-precision (often "float") and double-precision (often "double") floating point numbers:
JavaScript: http://www.ecma-international.org/ecma-262/5.1/#sec-4.3.19
Number value
primitive value corresponding to a double-precision 64-bit binary format IEEE 754 value
Java: http://docs.oracle.com/javase/specs/jls/se7/html/jls-4.html#jls-4.2.3
The floating-point types are float and double, which are conceptually associated with the single-precision 32-bit and double-precision 64-bit format IEEE 754 values and operations as specified in IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985 (IEEE, New York).
C#: http://msdn.microsoft.com/en-us/library/aa691146(v=vs.71).aspx
C# supports two floating point types: float and double. The float and double types are represented using the 32-bit single-precision and 64-bit double-precision IEEE 754 formats
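For what it's worth, you can see that this is a property of the binary64 format rather than of JavaScript by printing more digits of the stored values. A quick console sketch (toPrecision just exposes the doubles that 0.1 and 0.2 actually become):
(0.1).toPrecision(20)          // "0.10000000000000000555"
(0.2).toPrecision(20)          // "0.20000000000000001110"
(0.1 + 0.2).toPrecision(20)    // "0.30000000000000004441"
0.1 + 0.2 === 0.3              // false, in any language that uses IEEE 754 doubles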
JavaScript stores all numbers in double-precision floating-point format, with a 52-bit mantissa and an 11-bit exponent (the IEEE 754 Standard for storing numeric values), and therefore its Number-to-String conversions are often inaccurate. For instance,
111111111*111111111===12345678987654321
is correct, but
(111111111*111111111).toString()
returns "12345678987654320" instead of "12345678987654321". Likewise, 0.362*100 yields 36.199999999999996.
Is there a simple way to accurately convert numbers to strings?
It is NOT the number-to-string conversion that is inaccurate. It is the storage of the number itself in floating point: not all values can be represented precisely in floating point, and there is a limit to the number of significant digits that can be stored (about 16). You simply can't use floating point if you need perfect precision, and you certainly can't use a number that has this many significant digits. Alternatives are integers, binary-coded decimals, big-decimal libraries, etc.
Here's a simple demo of the problem representing certain values in floating point: http://jsfiddle.net/jfriend00/nv4MJ/
Depending upon what problem you are actually trying to solve, decimal precision issues can often be worked around by using toFixed(n) when converting to display.
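For example (a small sketch of the toFixed(n) approach; the first line shows the storage limit itself, the rest show display rounding):
(111111111 * 111111111).toString()   // "12345678987654320": the exact product
                                     // exceeds 2^53, so the nearest double is stored
(0.362 * 100).toFixed(2)             // "36.20"
(0.1 + 0.2).toFixed(2)               // "0.30"
Number((0.1 + 0.2).toFixed(2))       // 0.3, if you need a number again after display rounding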
Other references to read for more explanation:
How to deal with floating point number precision in JavaScript?
Why can't decimal numbers be represented exactly in binary?
What range of numbers can be represented in a 16-, 32- and 64-bit IEEE-754 systems?
Why Floating-Point Numbers May Lose Precision - MSDN - Microsoft
Is floating point math broken?
Accurate floating point arithmetic in JavaScript
https://stackoverflow.com/questions/744099/is-there-a-good-javascript-bigdecimal-library
How many bits does JavaScript use to represent a number?
Generally JS implementations use 64-bit double-precision floating-point numbers. Bitwise operations are performed on 32-bit integers.
That depends on the specific implementation, not the language itself.
If you want to know what range of numbers is supported, then see section 8.5 (The Number Type) of the specification.
From the referenced spec:
The Number type has exactly 18437736874454810627 (that is, 2^64 − 2^53 + 3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic, except that the 9007199254740990 (that is, 2^53 − 2) distinct "Not-a-Number" values of the IEEE Standard are represented in ECMAScript as a single special NaN value. (Note that the NaN value is produced by the program expression NaN.) In some implementations, external code might be able to detect a difference between various Not-a-Number values, but such behaviour is implementation-dependent; to ECMAScript code, all NaN values are indistinguishable from each other.
That said, be aware that when using the bitwise operators &, ^, >>, <<, etc., only the least significant 32 bits are used and the result is converted back to a signed 32-bit value.
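A few console examples of that 32-bit behavior (just a sketch):
(Math.pow(2, 32) + 5) | 0      // 5: only the low 32 bits survive
Math.pow(2, 31) | 0            // -2147483648: reinterpreted as a signed 32-bit value
1 << 31                        // -2147483648
~0                             // -1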
Does anyone know of a JavaScript library that accurately implements the IEEE 754 specification for 32-bit floating-point values? I'm asking because I'm trying to write a cross-compiler in JavaScript, and since the source language has strict requirements that floating-point values adhere to IEEE 754, the generated JavaScript code must do so as well. This means that I must be able to get exactly the correct IEEE 754 values for addition, subtraction, multiplication, and division of 32-bit floats. Unfortunately, the standard JavaScript Number type is a 64-bit double, which will give different results than what I'm expecting. The project really does have to be in JavaScript, and this is the only major stumbling block I have yet to get past.
I'm also running into this problem with 64-bit longs.
The Closure library has a 64-bit long implementation, at least.
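For the 32-bit float arithmetic, one approach worth considering (a sketch, assuming an engine that has Math.fround, which is ES2015+) is to round every intermediate result back to single precision. For addition, subtraction, multiplication, and division, performing the operation on doubles and then rounding with Math.fround produces the correctly rounded binary32 result, because the double result is close enough that the second rounding does not change the outcome.
// Sketch: emulate binary32 arithmetic with Math.fround (ES2015+).
// The helper names f32add and f32mul are made up for this example.
function f32add(a, b) { return Math.fround(Math.fround(a) + Math.fround(b)); }
function f32mul(a, b) { return Math.fround(Math.fround(a) * Math.fround(b)); }

f32add(0.1, 0.2)    // 0.30000001192092896, the single-precision result
0.1 + 0.2           // 0.30000000000000004, the double-precision result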
I know it's an odd question, but does JavaScript have the capacity to work with doubles as opposed to single floats? (64-bit floats vs. 32-bit.)
All numbers in JavaScript are 64-bit floating point numbers.
Ref:
http://www.hunlock.com/blogs/The_Complete_Javascript_Number_Reference
http://www.crockford.com/javascript/survey.html
According to the ECMA-262 specification (ECMAScript is the specification for JavaScript), section 8.5:
The Number type has exactly 18437736874454810627 (that is, 2^64 − 2^53 + 3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic
Source: http://www.ecma-international.org/publications/files/ecma-st/ECMA-262.pdf (PDF)
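You can see the 53-bit significand this implies directly from the console (a quick illustration):
Math.pow(2, 53)         // 9007199254740992
Math.pow(2, 53) + 1     // 9007199254740992: 2^53 + 1 has no exact 64-bit representation
Math.pow(2, 53) - 1     // 9007199254740991: the largest odd integer stored exactly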
In JavaScript, the number type is a 64-bit floating-point number following the IEEE 754 standard, like double in C. You can also create 32-bit and 64-bit typed arrays with the constructors below and control each byte of each element by binding a view to the corresponding buffer.
let length = 16;                    // number of elements, not bytes
let a = new Float32Array(length);   // single-precision (32-bit) floats
let b = new Float64Array(length);   // double-precision (64-bit) floats
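As a sketch of inspecting the individual bytes through the shared buffer (assuming typed array support; the byte order shown is for a little-endian platform):
let f64 = new Float64Array(1);
let bytes = new Uint8Array(f64.buffer);   // both views share the same 8 bytes
f64[0] = 0.1;
bytes   // 154, 153, 153, 153, 153, 153, 185, 63
        // i.e. 0x3FB999999999999A, the IEEE 754 bit pattern of 0.1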
Note that typed arrays are not supported in IE9 and earlier; see a browser compatibility table for details.
If you want extended precision, like long double, you can use the double.js or decimal.js library.
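For example, with decimal.js (treat this as a sketch; the Decimal constructor, plus, and toString used here are part of that library's documented API, but check the current documentation):
// Assumes decimal.js has been installed, e.g. via npm install decimal.js
const Decimal = require('decimal.js');

new Decimal('0.1').plus('0.2').toString()   // "0.3"
0.1 + 0.2                                   // 0.30000000000000004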