Implementation of 32-bit floats or 64-bit longs in JavaScript?

Does anyone know of a JavaScript library that accurately implements the IEEE 754 specification for 32-bit floating-point values? I'm asking because I'm trying to write a cross-compiler in JavaScript, and since the source language has strict requirements that floating-point values adhere to IEEE 754, the generated JavaScript code must do so as well. This means that I must be able to get exactly the correct IEEE 754 values for addition, subtraction, multiplication, and division of 32-bit floats. Unfortunately, the standard JavaScript Number type is a 64-bit double, which will give different results than what I'm expecting. The project really does have to be in JavaScript, and this is the only major stumbling block I have yet to get past.
I'm also running into this problem with 64-bit longs.
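One way to get exactly the single-precision results (assuming an engine with ES2015's Math.fround, which postdates this question) is to round every operand and every intermediate result back to 32 bits. A minimal sketch, with illustrative helper names:
// Emulate IEEE 754 single-precision +, -, *, / by rounding through Math.fround.
// Rounding the double result back to single precision yields the correctly
// rounded float32 result for these basic operations.
function f32add(a, b) { return Math.fround(Math.fround(a) + Math.fround(b)); }
function f32sub(a, b) { return Math.fround(Math.fround(a) - Math.fround(b)); }
function f32mul(a, b) { return Math.fround(Math.fround(a) * Math.fround(b)); }
function f32div(a, b) { return Math.fround(Math.fround(a) / Math.fround(b)); }

console.log(f32add(0.1, 0.2)); // 0.30000001192092896 (the float32 sum)
console.log(0.1 + 0.2);        // 0.30000000000000004 (the float64 sum)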

The Closure library has a 64-bit long implementation, at least.
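For the 64-bit long part, the Closure implementation mentioned above is one route; on modern engines, native BigInt with BigInt.asIntN (which also postdates this question) gives two's-complement 64-bit wrap-around. A small sketch with illustrative helper names:
// Keep signed 64-bit semantics by truncating every result to 64 bits.
const I64 = x => BigInt.asIntN(64, x);
const i64add = (a, b) => I64(a + b);
const i64mul = (a, b) => I64(a * b);

console.log(i64add(9223372036854775807n, 1n)); // -9223372036854775808n (wraps around)
console.log(i64mul(2n ** 40n, 2n ** 40n));     // 0n (2^80 truncated to 64 bits)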

Related

Javascript Number Representation

It's a famous example that in JavaScript, logging 0.1 + 0.2 to the console yields
0.1 + 0.2 = 0.30000000000000004
The typical explanation is that this happens because of the way JavaScript represents numbers.
I have 2 questions on that :
1) Why does JavaScript decide how to represent numbers? Isn't it the job of the "environment" (whatever is compiling the code, be it the browser or something else) to decide how it wants to represent numbers?
2) Why is it impossible to fix this behavior to match most programming languages (Java, C++, etc.)? I mean, if this behavior isn't really good (and most would agree it isn't), why is it impossible to fix? (Douglas Crockford has shown other JavaScript flaws, for example the weird behavior of 'this', and it's been that way for 20 years.) What is preventing JavaScript from fixing these mistakes?
Why does JavaScript decide how to represent numbers - isn't it the "environment"
That would be chaos. By having JavaScript define the behavior of its fundamental types, we can rely on them behaving in that way across environments.
Okay, "chaos" is rather strong. I believe C never defined what float and double actually were other than some range limits, and it would be fair to say that C was and arguably is wildly successful, "chaos" and all. Still, the modern trend is to nail things down a bit more.
Why is it impossible to fix this behavior to match most programming languages (Java, C++, etc.)?
This is the behavior of most modern programming languages. Most modern programming languages use IEEE-754 single- (often "float") and double- (often "double") precision floating point numbers:
JavaScript: http://www.ecma-international.org/ecma-262/5.1/#sec-4.3.19
Number value
primitive value corresponding to a double-precision 64-bit binary format IEEE 754 value
Java: http://docs.oracle.com/javase/specs/jls/se7/html/jls-4.html#jls-4.2.3
The floating-point types are float and double, which are conceptually associated with the single-precision 32-bit and double-precision 64-bit format IEEE 754 values and operations as specified in IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985 (IEEE, New York).
C#: http://msdn.microsoft.com/en-us/library/aa691146(v=vs.71).aspx
C# supports two floating point types: float and double. The float and double types are represented using the 32-bit single-precision and 64-bit double-precision IEEE 754 formats
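All three specifications above point at the same IEEE 754 formats, which is why 0.1 + 0.2 prints the same surprising value in each of these languages. A common workaround, sketched here in JavaScript (and not mandated by any of these specs), is to compare with a small tolerance instead of exact equality; Number.EPSILON is an ES2015 addition:
// Compare floating-point results with a relative tolerance rather than ===.
function nearlyEqual(a, b, tolerance = Number.EPSILON) {
  return Math.abs(a - b) <= tolerance * Math.max(Math.abs(a), Math.abs(b), 1);
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true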

Why does JavaScript use base 10 floating point numbers (according to w3schools)?

I read this on W3Schools:
All numbers in JavaScript are stored as 64-bit (8-bytes) base 10,
floating point numbers.
This sounds quite strange. Now, it's either wrong or there should be a good reason not to use base 2 like the IEEE standard.
I tried to find a real JavaScript definition, but I couldn't find any. Neither in the V8 nor the WebKit documentation (the two JavaScript implementations I found on Wikipedia that sounded the most familiar to me) could I find how they store the JavaScript Number type.
So, does JavaScript use base 10? If so, why? The only reason I could come up with was that maybe using base 10 has an advantage when you want to be able to accurately store integers as well as floating point numbers, but I don't know how using base 10 would have an advantage for that myself.
That's not the World Wide Web Consortium (W3C), that's w3schools, a website that is not an authority on any web standard.
Numbers in JavaScript are double-precision floating point numbers, following the IEEE 754 standard.
The site got the part about every number being a 64-bit floating point number right. Base 10 has nothing to do with the internal representation; that probably comes from the fact that floating point numbers are always parsed and formatted in base 10.
Numbers in JavaScript are, according to the ECMA-262 Standard (ECMAScript 5.1) section 4.3.19:
Primitive values corresponding to a double-precision 64-bit binary format IEEE 754 value.
Thus, any implementation using base 10 floating point numbers is not ECMA-262 conformant.
JavaScript uses IEEE 754, like most modern languages; it is not stored in base 10 at all.
The peculiarity of JavaScript is that there is only one number type, the double-precision float. This has the side effect that, compared to other languages, you are somewhat limited when dealing with integers: you cannot store every 64-bit integer exactly, only those that fit in the significand (the 52-bit fraction plus the implicit leading bit, i.e. integers up to 2^53).
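To make that limitation concrete, here is a small sketch (Number.MAX_SAFE_INTEGER and Number.isSafeInteger are ES2015 additions):
// Integers are exact only up to 2^53 (52-bit fraction plus the implicit bit).
console.log(Number.MAX_SAFE_INTEGER);       // 9007199254740991, i.e. 2^53 - 1
console.log(2 ** 53 === 2 ** 53 + 1);       // true: 2^53 + 1 rounds back to 2^53
console.log(Number.isSafeInteger(2 ** 53)); // false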

How many bits does JavaScript use to represent a number?

Generally JS implementations use 64-bit double-precision floating-point numbers. Bitwise operations are performed on 32-bit integers.
That depends on the specific implementation, not the language itself.
If you want to know what range of numbers is supported, then see section 8.5 (The Number Type) of the specification.
From the referenced spec:
The Number type has exactly 18437736874454810627 (that is, 2^64 − 2^53 + 3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic, except that the 9007199254740990 (that is, 2^53 − 2) distinct "Not-a-Number" values of the IEEE Standard are represented in ECMAScript as a single special NaN value. (Note that the NaN value is produced by the program expression NaN.) In some implementations, external code might be able to detect a difference between various Not-a-Number values, but such behaviour is implementation-dependent; to ECMAScript code, all NaN values are indistinguishable from each other.
That said, be aware that when using the bitwise operators &, ^, >>, <<, etc., only the least significant 32 bits are used and the result is converted to a signed 32-bit value.
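A quick sketch of that 32-bit truncation:
// Bitwise operators convert their operands to signed 32-bit integers first.
console.log((2 ** 32 + 5) | 0); // 5: bits above the 32nd are discarded
console.log((2 ** 31) | 0);     // -2147483648: reinterpreted as signed
console.log(2 ** 31 >>> 0);     // 2147483648: >>> produces an unsigned 32-bit result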

Can precision of floating point numbers in Javascript be a source of non determinism?

Can the same mathematical operation return different results on different architectures or browsers?
The other answers are incorrect. According to the ECMAScript 5.1 specs (section 15.8.2)
NOTE The behaviour of the functions acos, asin, atan, atan2, cos, exp,
log, pow, sin, sqrt, and tan is not precisely specified here except
to require specific results for certain argument values that represent
boundary cases of interest.
...
Although the choice of algorithms is
left to the implementation, it is recommended (but not specified by
this standard) that implementations use the approximation algorithms
for IEEE 754 arithmetic contained in fdlibm, the freely distributable
mathematical library from Sun Microsystems
However, even if the implementations were specified, the exact results of all floating-point operations would still be dependent on browser/architecture. That includes simple operations like multiplication and division!!
The reason is that IEEE-754 allows systems to do 64-bit floating-point calculations at a higher-precision than the result, leading to different rounding results than systems which use the same precision as the result. This is exactly what the x86 (Intel) architecture does, which is why in C (and javascript) we can sometimes have cos(x) != cos(y) even though x == y, even on the same machine!
This is a big issue for networked peer-to-peer games, since this means, if the higher-precision calculations can't be disabled (as is the case for C#), those games pretty much can't use floating-point calculations at all. However, this is typically not an issue for Javascript games, since they are usually client-server.
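One common workaround for lockstep games (an illustration, not something this answer prescribes) is to keep all simulation state in integers, i.e. fixed-point arithmetic, since integer and bitwise operations are exactly specified by ECMAScript:
// Minimal fixed-point sketch: positions stored as 32-bit integers scaled by 1024.
// The scale factor and helper names are arbitrary choices for this example.
const SCALE = 1024;
const toFx  = x => Math.round(x * SCALE) | 0;  // float -> fixed, at input time only
const fxAdd = (a, b) => (a + b) | 0;
const fxMul = (a, b) => ((a * b) / SCALE) | 0; // exact while |a * b| < 2^53
const toNum = a => a / SCALE;                  // back to float, for rendering only

let pos = toFx(1.5);
pos = fxAdd(pos, toFx(0.25));
console.log(toNum(pos)); // 1.75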
If we assume that every browser vendor follows the IEEE standard plus the ECMA specs and that there is no human error in the implementation, then no, there can't be any difference.
The ECMAScript Language Specification, 5.1 edition, states that numbers are primitive values corresponding to IEEE 754 values, which implies calculations should be consistent:
http://www.ecma-international.org/publications/files/ecma-st/ECMA-262.pdf
4.3.19 Number value
primitive value corresponding to a double-precision 64-bit binary format IEEE 754 value
NOTE
A Number value is a member of the Number type and is a direct representation of a number.
However, as BlueRaja points out, there is a sort of caveat in section 15.8.2:
The behaviour of the functions acos, asin, atan, atan2, cos, exp, log,
pow, sin, sqrt, and tan is not precisely specified here...
Meaning, there are at least some cases where the outcome of operations on numbers is implementation-dependent and may therefore be inconsistent.
My two cents: @goldilocks notes, and others allude to, that you shouldn't use == or != on floating point numbers. So what do you mean by "deterministic"? That the behavior is always the same on different machines? Obviously this depends on what you mean by "the same behavior."
Well, at one silly literal level of "the same," of course not, physical bits will be different on e.g. 32 bit versus 64 bit machines. So that interpretation is out.
Ok, so will any program run with the same output on two different machines? For languages in general, no, because a C program can do something with undefined memory, like read from an uninitialized byte.
Ok, so will any valid program do the same thing on different machines? Well, I would say a program that uses == and != on floating point numbers is as invalid as a program that reads uninitialized memory. I personally don't know whether the JavaScript standard hammers out the behavior of == and != on floats to the point that it's well-defined, if kooky, so if that is your precise question you'll have to see the other answers. Can you write JavaScript code whose output is undefined with respect to the standard? I've never read the standard (the other answers cover this somewhat), but for my purposes this is moot, because the programs that would produce what you call nondeterministic behavior are invalid to begin with.

Does JavaScript have double floating point number precision?

I know it's an odd question, but does JavaScript have the capacity to work with doubles as opposed to single floats? (64-bit floats vs. 32 bits.)
All numbers in JavaScript are 64-bit floating point numbers.
Ref:
http://www.hunlock.com/blogs/The_Complete_Javascript_Number_Reference
http://www.crockford.com/javascript/survey.html
According to the ECMA-262 specification (ECMAScript is the specification for Javascript), section 8.5:
The Number type has exactly 18437736874454810627 (that is, 2^64 − 2^53 + 3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic
Source: http://www.ecma-international.org/publications/files/ecma-st/ECMA-262.pdf (PDF)
In JavaScript, the Number type is a 64-bit float that follows the IEEE 754 standard; it is like double in C. You can also create 32-bit and 64-bit typed arrays with the commands below, and control each byte of each element by binding a view to the corresponding buffer.
const length = 4;                  // element count, chosen for this example
let a = new Float32Array(length);  // 32-bit (single-precision) floats
let b = new Float64Array(length);  // 64-bit (double-precision) floats, same precision as Number
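For instance, a Uint8Array view over the same buffer exposes the raw bytes of each element (a small sketch):
// Inspect the IEEE 754 byte encoding of a double through its ArrayBuffer.
let f = new Float64Array(1);
f[0] = 0.1;
let bytes = new Uint8Array(f.buffer);
console.log(Array.from(bytes)); // the 8 bytes of 0.1; byte order follows platform endianness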
But note that typed arrays are not supported in IE9; see the browser compatibility table.
If you want extended precision, like long double, you can use a library such as double.js or decimal.js.
