Does JavaScript have double floating point number precision? - javascript

I know it's an odd question, but does JavaScript have the capacity to work with double's as opposed to single floats? (64 bit floats vs. 32 bits.)

All numbers in JavaScript are 64-bit floating point numbers.
Ref:
http://www.hunlock.com/blogs/The_Complete_Javascript_Number_Reference
http://www.crockford.com/javascript/survey.html
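One way to confirm the 64-bit IEEE 754 representation from JavaScript itself is to inspect a number's bit pattern through a DataView (a sketch; the hex constant assumes the standard binary64 encoding):

```javascript
// Write a number into an 8-byte buffer and read its raw bit pattern back.
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, 1.0);                         // big-endian by default
// 1.0 is sign 0, biased exponent 0x3FF, mantissa 0:
console.log(view.getBigUint64(0).toString(16));  // "3ff0000000000000"
```

The bit pattern matches the IEEE 754 binary64 encoding exactly, which is only possible because the Number type really is a 64-bit double.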

According to the ECMA-262 specification (ECMAScript is the specification for Javascript), section 8.5:
The Number type has exactly 18437736874454810627 (that is, 2^64 − 2^53 + 3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic
Source: http://www.ecma-international.org/publications/files/ecma-st/ECMA-262.pdf (PDF)

In JavaScript, the number type is a 64-bit float following the IEEE 754 standard, like double in C. You can also create 32-bit (and 64-bit) typed arrays with the commands below, and control each byte of each element through the underlying buffer.
let a = new Float32Array(length); // 32-bit single-precision elements
let b = new Float64Array(length); // 64-bit double-precision elements
But note that typed arrays are not supported in IE9 and older; see the browser compatibility table.
If you want extended precision, like long double, you can use the double.js or decimal.js libraries.
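As a quick sketch of the precision difference, storing 0.1 in each kind of array shows the extra rounding the 32-bit format introduces:

```javascript
const f32 = new Float32Array(1);
const f64 = new Float64Array(1);
f32[0] = 0.1;
f64[0] = 0.1;

console.log(f64[0] === 0.1);  // true  - reads back unchanged
console.log(f32[0] === 0.1);  // false - rounded to the nearest 32-bit float
console.log(f32[0]);          // 0.10000000149011612
```

Reading `f32[0]` back converts the stored 32-bit value to a double, so the comparison against the original double-precision 0.1 fails.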

Related

Do javascript numbers follow IEEE 754 double precision?

Is there any well-known implementation of Javascript that doesn't use double precision floating point numbers according to IEEE 754? Was it in the standard from the beginning or added later?
All ECMAScript standard versions, including the very first, specified the number type with the double-precision 64-bit format according to IEEE 754.

Javascript Number Representation

It's a famous example that in JavaScript, logging 0.1 + 0.2 to the console yields
0.1 + 0.2 = 0.30000000000000004
The typical explanation for this is that it happens because of the way javascript represents numbers.
I have two questions about that:
1) Why does JavaScript decide how to represent numbers? Isn't it the "environment's" job (whatever compiles and runs the code, be it the browser or something else) to decide how it wants to represent numbers?
2) Why is it impossible to fix this behavior to match most programming languages (Java, C++, etc.)? If this behavior isn't really good (and most would agree it isn't), why can't it be fixed? (Douglas Crockford showed other JavaScript flaws, for example the weird behavior of 'this', and they have been that way for 20 years.) What is preventing JavaScript from fixing these mistakes?
Why does javascript decide how to represent numbers - isn't it the "environment"
That would be chaos. By having JavaScript define the behavior of its fundamental types, we can rely on them behaving in that way across environments.
Okay, "chaos" is rather strong. I believe C never defined what float and double actually were other than some range limits, and it would be fair to say that C was and arguably is wildly successful, "chaos" and all. Still, the modern trend is to nail things down a bit more.
Why is it impossible to fix this behavior to match most programming languages (Java, C++, etc.)?
This is the behavior of most modern programming languages. Most modern programming languages use IEEE-754 single- (often "float") and double- (often "double") precision floating point numbers:
JavaScript: http://www.ecma-international.org/ecma-262/5.1/#sec-4.3.19
Number value
primitive value corresponding to a double-precision 64-bit binary format IEEE 754 value
Java: http://docs.oracle.com/javase/specs/jls/se7/html/jls-4.html#jls-4.2.3
The floating-point types are float and double, which are conceptually associated with the single-precision 32-bit and double-precision 64-bit format IEEE 754 values and operations as specified in IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985 (IEEE, New York).
C#: http://msdn.microsoft.com/en-us/library/aa691146(v=vs.71).aspx
C# supports two floating point types: float and double. The float and double types are represented using the 32-bit single-precision and 64-bit double-precision IEEE 754 formats
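The practical consequence is the same in all of these languages: 0.1 and 0.2 have no exact binary representation, so exact equality checks on the sum fail, and the usual fix is a tolerance-based comparison (nearlyEqual below is a hypothetical helper, not a built-in):

```javascript
const sum = 0.1 + 0.2;
console.log(sum);          // 0.30000000000000004
console.log(sum === 0.3);  // false

// Hypothetical almost-equal helper; for values far from 1.0 the
// tolerance should be scaled to the operands' magnitude.
const nearlyEqual = (a, b) => Math.abs(a - b) < Number.EPSILON;
console.log(nearlyEqual(sum, 0.3));  // true
```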

Why does JavaScript use base 10 floating point numbers (according to w3schools)?

I read this on W3Schools:
All numbers in JavaScript are stored as 64-bit (8-bytes) base 10,
floating point numbers.
This sounds quite strange. Now, it's either wrong or there should be a good reason not to use base 2 like the IEEE standard.
I tried to find a real JavaScript definition, but I couldn't find one. In neither the V8 nor the WebKit documentation (the two JavaScript implementations on Wikipedia that sounded most familiar to me) could I find how they store the JavaScript Number type.
So, does JavaScript use base 10? If so, why? The only reason I could come up with was that maybe using base 10 has an advantage when you want to be able to accurately store integers as well as floating point numbers, but I don't know how using base 10 would have an advantage for that myself.
That's not the World Wide Web Consortium (W3C), that's w3schools, a website that isn't any authority for any web standards.
Numbers in Javascript are double precision floating point numbers, following the IEEE standards.
The site got the part about every number being a 64-bit floating point number right. Base 10 has nothing to do with the numerical representation; that claim probably comes from the fact that floating point numbers are always parsed and formatted in base 10.
Numbers in JavaScript are, according to the ECMA-262 Standard (ECMAScript 5.1) section 4.3.19:
Primitive values corresponding to a double-precision 64-bit binary format IEEE 754 value.
Thus, any implementation using base 10 floating point numbers is not ECMA-262 conformant.
JavaScript uses IEEE 754, like most modern languages, and IEEE 754 binary doubles are not stored in base 10.
The peculiarity of JavaScript is that there is only one number type, the double-precision float. A side effect is that you are somewhat limited compared to other languages when dealing with integers: you can't store every 64-bit integer exactly, only those fitting in the significand (53 bits including the implicit leading bit, i.e. integers up to 2^53).
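A short sketch of that integer limit: with a 53-bit significand (52 stored bits plus an implicit leading 1), integers are exact only up to 2^53:

```javascript
console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1);  // true
console.log(2 ** 53 + 1 === 2 ** 53);                  // true - 2^53 + 1 rounds back to 2^53
console.log(Number.isSafeInteger(2 ** 53 - 1));        // true
console.log(Number.isSafeInteger(2 ** 53));            // false - spacing between doubles is now 2
```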

How many bits does JavaScript use to represent a number?

How many bits does JavaScript use to represent a number?
Generally JS implementations use 64-bit double-precision floating-point numbers. Bitwise operations are performed on 32-bit integers.
That depends on the specific implementation, not the language itself.
If you want to know what range of numbers is supported, then see section 8.5 (The Number Type) of the specification.
From the referenced spec:
The Number type has exactly 18437736874454810627 (that is, 2^64 − 2^53 + 3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic, except that the 9007199254740990 (that is, 2^53 − 2) distinct "Not-a-Number" values of the IEEE Standard are represented in ECMAScript as a single special NaN value. (Note that the NaN value is produced by the program expression NaN.) In some implementations, external code might be able to detect a difference between various Not-a-Number values, but such behaviour is implementation-dependent; to ECMAScript code, all NaN values are indistinguishable from each other.
That said, be aware that when using the bit operators &, ^, >>, <<, etc., only the least significant 32 bits are used and the result is converted to a signed value.
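A few examples of that 32-bit truncation (the spec's ToInt32 conversion) in action:

```javascript
console.log(2 ** 32 | 0);     // 0  - bit 32 and above are discarded
console.log(2 ** 31 | 0);     // -2147483648 - bit 31 is reinterpreted as the sign bit
console.log(0xFFFFFFFF | 0);  // -1
console.log(1.9 | 0);         // 1  - fractional part dropped, a common truncation idiom
```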

Implementation of 32-bit floats or 64-bit longs in JavaScript?

Does anyone know of a JavaScript library that accurately implements the IEEE 754 specification for 32-bit floating-point values? I'm asking because I'm trying to write a cross-compiler in JavaScript, and since the source language has strict requirements that floating-point values adhere to IEEE 754, the generated JavaScript code must do so as well. This means that I must be able to get exactly the correct IEEE 754 values for addition, subtraction, multiplication, and division of 32-bit floats. Unfortunately, the standard JavaScript Number type is a 64-bit double, which will give different results than what I'm expecting. The project really does have to be in JavaScript, and this is the only major stumbling block I have yet to get past.
I'm also running into this problem with 64-bit longs.
The Closure library has a 64-bit long implementation, at least.
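For the 32-bit float half of the question, modern engines also expose Math.fround (ES2015), which rounds a double to the nearest binary32 value; rounding each operand and each result emulates single-precision arithmetic for +, -, *, and / (a sketch, not a drop-in for the Closure long type; addFloat32 is a hypothetical helper):

```javascript
const f = Math.fround;

// Emulated IEEE 754 single-precision addition. Rounding the double sum
// of two binary32 values to binary32 gives the correctly rounded
// single-precision result, because binary64 is wide enough (2p + 2 <= 53).
function addFloat32(a, b) {
  return f(f(a) + f(b));
}

console.log(0.1 + 0.2);            // 0.30000000000000004 (double)
console.log(addFloat32(0.1, 0.2)); // 0.30000001192092896 (single)
```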
