Why are the same 32-bit floats different in JavaScript and Rust?

In JavaScript, 38_579_240_960 doesn't change when converted to a 32-bit float:
console.log(new Float32Array([38_579_240_960])[0]); // 38579240960
But in Rust, it gets rounded to 38579240000. How come?
fn main() {
    println!("{}", 38_579_240_960_f32); // 38579240000
}

While 38,579,240,960 can be represented exactly as an IEEE-754 32-bit floating point number, the trailing 960 is not significant. The 24-bit mantissa can only express about 7 meaningful decimal digits. The next representable values above and below are 38,579,245,056 and 38,579,236,864, so 38,579,240,960 is the closest representable value for a range of real numbers spanning several thousand.
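You can see the same spacing from JavaScript itself: Math.fround rounds its argument to the nearest 32-bit float, so a quick console sketch (the 2049 offsets assume the 4096-wide spacing in this range) shows the neighbouring representable values:
Math.fround(38579240960);        // 38579240960 (exactly representable)
Math.fround(38579240960 + 2049); // 38579245056 (next representable value up)
Math.fround(38579240960 - 2049); // 38579236864 (next representable value down)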
So even if you add 1000 to the value, neither language changes its output:
38579240960
38579240000
So the difference is that JavaScript prints out the exact value being represented, while Rust prints only the minimum number of digits needed to uniquely identify it.
If you want the Rust output to look like JavaScript's, you can specify the precision like so (playground):
println!("{:.0}", 38579240960f32); // display all digits up until the decimal
38579240960
I wouldn't necessarily call either one right or wrong; however, one advantage of Rust's default formatting is that you don't get a false sense of precision.
See also:
How do I print a Rust floating-point number with all available precision?
Rust: Formatting a Float with a Minimum Number of Decimal Points

Your code snippets are not equivalent. JS prints f64, and Rust prints f32.
JavaScript does not have a 32-bit float type. When you read an element out of a Float32Array, it is implicitly converted to a 64-bit double, because this is the only way JS can see the value.
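For instance, a quick console sketch (plain JavaScript, nothing beyond the standard typed arrays) shows that the element read back is an ordinary 64-bit Number:
const a = new Float32Array([38579240960]);
typeof a[0];  // "number" -- i.e. a 64-bit double
a[0];         // 38579240960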
If you do the same in Rust, it prints the same value:
println!("{}", 38_579_240_960_f32 as f64);
// 38579240960

Related

.toFixed() returns a string? how can I convert that to floating number

I tried with the example value x = 123. I want two decimal places of precision, so I use x.toFixed(2). Then I get the output "123.00",
but I want the output to be 123.00, which is a floating-point number with decimals.
var x = 123
x.toFixed(2);
output: "123.00"
expected: 123.00
A floating-point or decimal number is not something you explicitly declare in JavaScript. When there is nothing but zeros after the decimal point (i.e., .0), the value is displayed as a plain integer.
The .toFixed() method is for presentation purposes only; it also rounds the value off to the requested number of decimal places.
2.50000 and 2.5 are the exact same number. If you want to keep trailing zeroes, you'll have to use a string.
You can verify this in the Chrome console: even with strict comparison, the trailing decimal zeros are simply dropped by the JavaScript parser, so 123.00 and 123 are the same value. This can be a bit unclear for developers coming from statically typed or strongly typed languages like Java and C#, where you can have separate float and double types.
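A quick sketch of such a console session (plain JavaScript, only standard Number and toFixed):
123.00 === 123;            // true  -- the trailing zeros are not part of the value
var s = (123).toFixed(2);  // "123.00", a string suitable for display
Number(s);                 // 123   -- converting back drops the zeros again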
Related: JavaScript - Keep trailing zeroes.

js parseInt() one off (large number)

When I run the following:
parseInt('96589218797811259658921879781126'.slice(0, 16), 10);
the output is
9658921879781124
whereas the expected is, as you can see:
9658921879781125
What is the cause? I understand vaguely in googling around that there are issues with large numbers in JS and that this indeed appears to be above that threshold, but I don't know what to do about it, or what is happening in this specific case. Thank you.
Edit: Thanks for the answers, I see that depending on an integer this size is bad news. The reason for using an integer is because I need to increment by one, then compare to the rest of the string. In this case, the first half of the string should equal the second half after incrementing the first half by one. What is an alternative approach?
It is not a JavaScript issue.
JavaScript uses the double-precision 64-bit floating point format (IEEE 754) to store both integer and rational numbers.
This format allows the exact representation of integer numbers between -2^53 and 2^53 (-9007199254740992 to 9007199254740992). Outside this interval, some integer values cannot be represented by this format; they are rounded to the nearest integer value that can be represented. For example, from 2^53 to 2^54, everything is multiplied by 2, so the representable numbers are the even ones.
The JavaScript global object Number provides the constant Number.MAX_SAFE_INTEGER, which represents the maximum integer value that can be safely stored by the IEEE 754 format (2^53 - 1).
It also provides the method Number.isSafeInteger() you can use to find out if an integer number can be safely stored by the Number JavaScript type.
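For example, a quick console check (the second value is the integer parsed from the string in the question):
Number.MAX_SAFE_INTEGER;                 // 9007199254740991
Number.isSafeInteger(9658921879781125);  // false -- beyond the safe range
Number.isSafeInteger(9007199254740991);  // true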
The number you are using (9658921879781125) is too large to be stored exactly in JavaScript or in any other language that uses the double-precision floating point format to store its numbers.
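As for the alternative approach asked about in the edit: one sketch, assuming an engine with BigInt support (BigInt handles arbitrarily large integers exactly), is to do the increment and comparison on BigInt values rather than Numbers:
const s = '96589218797811259658921879781126';
const firstHalf = BigInt(s.slice(0, 16));    // 9658921879781125n
const secondHalf = BigInt(s.slice(16));      // 9658921879781126n
console.log(firstHalf + 1n === secondHalf);  // true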

Midpoint 'rounding' when dealing with large numbers?

So I was trying to understand JavaScript's behavior when dealing with large numbers. Consider the following (tested in Firefox and Chrome):
console.log(9007199254740993) // 9007199254740992
console.log(9007199254740994) // 9007199254740994
console.log(9007199254740995) // 9007199254740996
console.log(9007199254740996) // 9007199254740996
console.log(9007199254740997) // 9007199254740996
console.log(9007199254740998) // 9007199254740998
console.log(9007199254740999) // 9007199254741000
Now, I'm aware of why it's outputting the 'wrong' numbers—it's trying to convert them to floating point representations and it's rounding off to the nearest possible floating point value—but I'm not entirely sure about why it picks these particular numbers. My guess is that it's trying to round to the nearest 'even' number, and since 9007199254740996 is divisible by 4 while 9007199254740994 is not, it considers 9007199254740996 to be more 'even'.
What algorithm is it using to determine the internal representation? My guess is that it's an extension of regular midpoint rounding (round to even is the default rounding mode in IEEE 754 functions).
Is this behavior specified as part of the ECMAScript standard, or is it implementation dependent?
As pointed out by Mark Dickinson in a comment on the question, the ECMA-262 ECMAScript Language Specification requires the use of IEEE 754 64-bit binary floating point to represent the Number Type. The relevant rounding rules are "Choose the member of this set that is closest in value to x. If two values of the set are equally close, then the one with an even significand is chosen...".
These rules are general, applying to rounding results of arithmetic as well as the values of literals.
The following are all the numbers in the relevant range for the question that are exactly representable in IEEE 754 64-bit binary floating point. Each is shown as its decimal value, and also as a hexadecimal representation of its bit pattern. A number with an even significand has an even rightmost hexadecimal digit in its bit pattern.
9007199254740992 bit pattern 0x4340000000000000
9007199254740994 bit pattern 0x4340000000000001
9007199254740996 bit pattern 0x4340000000000002
9007199254740998 bit pattern 0x4340000000000003
9007199254741000 bit pattern 0x4340000000000004
Each of the even inputs is one of these numbers, and rounds to that number. Each of the odd inputs is exactly half way between two of them, and rounds to the one with the even significand. This results in rounding the odd inputs to 9007199254740992, 9007199254740996, and 9007199254741000.
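If you want to inspect these bit patterns from JavaScript, here is a small sketch; it assumes BigUint64Array is available and relies on the two typed-array views sharing the same underlying bytes in platform byte order:
const buf = new ArrayBuffer(8);
new Float64Array(buf)[0] = 9007199254740995;   // an odd input, exactly halfway between neighbours
new BigUint64Array(buf)[0].toString(16);       // "4340000000000002", i.e. 9007199254740996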
Patricia Shanahan's answer helped a lot and explained my primary question. However, as to the second part of the question (whether or not this behavior is implementation dependent), it turns out that it is, but in a slightly different way than I originally thought. Quoting from ECMA-262 5.1 § 7.8.3:
… the rounded value must be the Number value for the MV (as specified in 8.5), unless the literal is a DecimalLiteral and the literal has more than 20 significant digits, in which case the Number value may be either the Number value for the MV of a literal produced by replacing each significant digit after the 20th with a 0 digit or the Number value for the MV of a literal produced by replacing each significant digit after the 20th with a 0 digit and then incrementing the literal at the 20th significant digit position.
In other words, an implementation may choose to ignore everything after the 20th digit. Consider this:
console.log(9007199254740993.00001)
Both Chrome and Firefox will output 9007199254740994; however, Internet Explorer will output 9007199254740992 because it chooses to ignore the digits after the 20th. Interestingly, this doesn't appear to be standards-compliant behavior (at least as I read this standard); it should interpret this the same as 9007199254740993.0001, but it does not.
JavaScript represents numbers as 64-bit floating point values. This is defined in the standard.
http://en.wikipedia.org/wiki/Double-precision_floating-point_format
So there's nothing related to midpoint rounding going on there.
As a hint, every 32-bit integer has an exact representation in the double-precision floating-point format.
Ok, since you're asking for the exact algorithm, I checked how Chrome's V8 engine does it.
V8 defines a StringToDouble function, which calls InternalStringToDouble in the following file:
https://github.com/v8/v8/blob/master/src/conversions-inl.h#L415
And this, in turn, calls the Strtod function defined there:
https://github.com/v8/v8/blob/master/src/strtod.cc

Floating-point error mess

I have been trying to figure this floating-point problem out in JavaScript.
This is an example of what I want to do:
var x1 = 0;
for (var i = 0; i < 10; i++) {
    x1 += 0.2;
}
However, doing that I get rounding errors: 0.2 -> 0.4 -> 0.6000000000000001, and so on.
I have tried parseFloat, toFixed and Math.round suggested in other threads, but none of them have worked for me. Is there anyone who could make this work? I feel that I have run out of options.
You can almost always ignore the floating point "errors" while you're performing calculations - they won't make any difference to the end result unless you really care about the 17th significant digit or so.
You normally only need to worry about rounding when you display those values, for which .toFixed(1) would do perfectly well.
Whatever happens, you simply cannot coerce the number 0.6 into exactly that value. The closest IEEE 754 double-precision value is exactly 0.59999999999999997779553950749686919152736663818359375, which, within typical precision limits in JS, is displayed as 0.5999999999999999778.
Indeed, JS can't even tell that 0.5999999999999999778 !== (e.g.) 0.5999999999999999300, since their binary representations are the same.
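As a sketch of that approach applied to the loop in the question, accumulate first and round only for display:
var x1 = 0;
for (var i = 0; i < 10; i++) {
    x1 += 0.2;                 // each step may add a tiny representation error
}
console.log(x1);               // very close to 2, e.g. 1.9999999999999998
console.log(x1.toFixed(1));    // "2.0" -- rounded only at display time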
To better understand how the rounding errors accumulate, and to get more insight into what is happening at a lower level, here is a small explanation.
I will assume that the IEEE 754 double-precision standard is used by the underlying software/hardware, with the default rounding mode (round to nearest, ties to even).
1/5 can only be written in base 2 with a pattern that repeats infinitely:
0.00110011001100110011001100110011001100110011001100110011...
But in floating point, the significand (starting at the most significant 1 bit) has to be rounded to a finite number of bits (53).
So there is a small rounding error when representing 0.2 in binary:
0.0011001100110011001100110011001100110011001100110011010
Back to decimal representation, this rounding error corresponds to a small excess 0.000000000000000011102230246251565404236316680908203125 above 1/5
The first operation is then exact, because 0.2+0.2 is the same as 2*0.2 and thus does not introduce any additional error; it just shifts the fraction point:
  0.0011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
---------------------------------------------------------
  0.0110011001100110011001100110011001100110011001100110100
But of course, the excess above 2/5 is doubled: 0.00000000000000002220446049250313080847263336181640625
The third operation 0.2+0.2+0.2 will result in this binary number:
  0.011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
---------------------------------------------------------
  0.1001100110011001100110011001100110011001100110011001110
But unfortunately, it requires 54 bits of significand (the span between leading 1 and trailing 1), so another rounding error is necessary to represent the result as a double:
0.10011001100110011001100110011001100110011001100110100
Notice that the number was rounded upward, because by default floats are rounded to nearest with ties to even. We already had an error by excess, so bad luck: the successive errors accumulated rather than cancelled out...
So the excess above 3/5 is now 0.000000000000000088817841970012523233890533447265625
You could reduce this accumulation of errors a bit by using
x1 = i / 5.0
Since 5 is represented exactly in floating point (101.0 in binary, 3 significand bits are enough), and since that will also be the case for i (up to 2^53), there is a single rounding error when performing the division, and IEEE 754 then guarantees that you get the nearest possible representation.
For example 3/5.0 is represented as:
0.10011001100110011001100110011001100110011001100110011
Back to decimal, this value falls 0.00000000000000002220446049250313080847263336181640625 below 3/5.
Note that both errors are very tiny, but in the second case (3/5.0) the error is four times smaller in magnitude than with 0.2+0.2+0.2.
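A small sketch contrasting the two approaches (accumulating 0.2 three times versus dividing by 5 once):
var acc = 0;
for (var i = 1; i <= 3; i++) {
    acc += 0.2;
}
console.log(acc);      // 0.6000000000000001 -- the accumulated rounding errors
console.log(3 / 5.0);  // 0.6 -- a single rounding to the nearest double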
Depending on what you're doing, you may want to do fixed-point arithmetic instead of floating point. For example, if you are doing financial calculations in dollars with amounts that are always multiples of $0.01, you can switch to using cents internally, and then convert to (and from) dollars only when displaying values to the user (or reading input from the user). For more complicated scenarios, you can use a fixed-point arithmetic library.
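For instance, a minimal sketch of the cents idea (the price and the 8% tax rate here are made up for illustration):
var priceCents = 1999;                          // $19.99 stored as an integer number of cents
var taxCents = Math.round(priceCents * 0.08);   // hypothetical 8% tax, rounded to a whole cent
var totalCents = priceCents + taxCents;
console.log((totalCents / 100).toFixed(2));     // "21.59" -- convert to dollars only for display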

Why does JavaScript use the term "Number" as opposed to "Integer"?

Is "Number" in JavaScript entirely synonymous with "Integer"?
What piqued my curiosity:
- PHP, Python, Java and others use the term "Integer"
- JavaScript has the function parseInt() rather than parseNumber()
Are there any details of interest?
Is "Number" in JavaScript entirely synonymous with "Integer"?
No. All numbers in JavaScript are actually 64-bit floating point values.
parseInt() and parseFloat() both return this same data type - the only difference is whether or not any fractional part is truncated.
52 bits of the 64 are for the precision, so this gives you exact signed 53-bit integer values. Outside of this range integers are approximated.
In a bit more detail, all integers from -9007199254740992 to +9007199254740992 are represented exactly (-2^53 to +2^53). The smallest positive integer that JavaScript cannot represent exactly is 9007199254740993. Try pasting that number into a JavaScript console and it will round it down to 9007199254740992. 9007199254740994, 9007199254740996, 9007199254740998, etc. are all represented exactly but not the odd integers in between. The integers that can be represented exactly become more sparse the higher (or more negative) you go until you get to the largest value Number.MAX_VALUE == 1.7976931348623157e+308.
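A few quick console checks illustrating those boundaries:
console.log(9007199254740993);         // 9007199254740992 -- the first integer that cannot be represented
console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991
console.log(Number.MAX_VALUE);         // 1.7976931348623157e+308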
In JavaScript there is a single number type: an IEEE 754 double-precision floating point (this is what is called Number).
This article by D. Crockford is interesting:
http://yuiblog.com/blog/2009/03/10/when-you-cant-count-on-your-numbers/
