Why does JavaScript use the term "Number" as opposed to "Integer"? - javascript

Is "Number" in JavaScript entirely synonymous with "Integer"?
What piqued my curiosity:
- PHP, Python, Java and others use the term "Integer"
- JavaScript has the function parseInt() rather than parseNumber()
Are there any details of interest?

Is "Number" in JavaScript entirely synonymous with "Integer"?
No. All numbers in JavaScript are actually 64-bit floating point values.
parseInt() and parseFloat() both return this same data type - the only difference is whether or not any fractional part is truncated.
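For example, both parsers hand back the same primitive Number type (a quick sketch, not from the original answer):

console.log(typeof parseInt('42', 10)); // "number"
console.log(typeof parseFloat('42.5')); // "number"
console.log(parseInt('42.9', 10));      // 42 (the fractional part is dropped)
console.log(parseFloat('42.9'));        // 42.9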
52 bits of the 64 are for the precision, so this gives you exact signed 53-bit integer values. Outside of this range integers are approximated.
In a bit more detail, all integers from -9007199254740992 to +9007199254740992 are represented exactly (-2^53 to +2^53). The smallest positive integer that JavaScript cannot represent exactly is 9007199254740993. Try pasting that number into a JavaScript console and it will round it down to 9007199254740992. 9007199254740994, 9007199254740996, 9007199254740998, etc. are all represented exactly but not the odd integers in between. The integers that can be represented exactly become more sparse the higher (or more negative) you go until you get to the largest value Number.MAX_VALUE == 1.7976931348623157e+308.
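You can check the boundary yourself in a console (a short sketch of the values mentioned above):

console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991 (2^53 - 1)
console.log(9007199254740993);                      // 9007199254740992 (rounded down)
console.log(9007199254740993 === 9007199254740992); // true, both literals map to the same double
console.log(Number.MAX_VALUE);                      // 1.7976931348623157e+308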

In JavaScript there is a single number type: an IEEE 754 double-precision floating-point value (which the language simply calls "number").
This article by D. Crockford is interesting:
http://yuiblog.com/blog/2009/03/10/when-you-cant-count-on-your-numbers/
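The kind of surprise the article covers is the classic binary-fraction example (a well-known illustration, not taken from the article itself):

console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false, none of 0.1, 0.2 and 0.3 have an exact binary representation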

Related

Why are the same 32-bit floats different in JavaScript and Rust?

In JavaScript, 38_579_240_960 doesn't change when converted to a 32-bit float:
console.log(new Float32Array([38_579_240_960])[0]); // 38579240960
But in Rust, it gets rounded to 38579240000. How come?
fn main() {
println!("{}", 38_579_240_960_f32); // 38579240000
}
While 38,579,240,960 can be represented exactly as an IEEE-754 32-bit floating-point number, the trailing 960 is not significant. The 24-bit mantissa can only express about 7 meaningful decimal digits. The next representable values above and below are 38,579,245,056 and 38,579,236,864, so 38,579,240,960 is the closest representable value for a range of inputs spanning several thousand integers.
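You can reproduce those neighbouring values from JavaScript itself with Math.fround, which rounds a Number to the nearest 32-bit float (a small sketch, not part of the original answer):

const x = 38_579_240_960;
console.log(Math.fround(x));        // 38579240960, exactly representable as float32
console.log(Math.fround(x + 1000)); // 38579240960, rounds back down to the same float32
console.log(Math.fround(x + 3000)); // 38579245056, the next representable float32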
So even if you add 1000 to the value, neither language changes its output:
38579240960
38579240000
So the difference is that JavaScript is printing out the exact value that is represented while Rust is only printing out the minimum digits to uniquely express it.
If you want the Rust output to look like JavaScript's, you can specify the precision like so (playground):
println!("{:.0}", 38579240960f32); // display all digits up until the decimal
38579240960
I wouldn't necessarily call either one right or wrong; however, one advantage of Rust's default formatting is that you don't get a false sense of precision.
See also:
How do I print a Rust floating-point number with all available precision?
Rust: Formatting a Float with a Minimum Number of Decimal Points
Your code snippets are not equivalent. JS prints f64, and Rust prints f32.
JavaScript does not have a 32-bit float type. When you read an element out of Float32Array it is implicitly converted to 64-bit double, because this is the only way JS can see the value.
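A quick way to see this from the JavaScript side (a sketch using the value from the question):

const arr = new Float32Array([38_579_240_960]);
console.log(typeof arr[0]);                          // "number", i.e. a regular 64-bit double
console.log(arr[0]);                                 // 38579240960
console.log(arr[0] === Math.fround(38_579_240_960)); // true, the same float32 value widened to f64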
If you do the same in Rust, it prints the same value:
println!("{}", 38_579_240_960_f32 as f64);
// 38579240960

What kinds of values can a Number type hold?

I've read somewhere that a JavaScript number can hold both a 64-bit floating-point number AND a 64-bit integer. Is this true? I'm still a little confused about this stuff.
A Number can contain an integer or a floating-point value. Here are the max/min values in JS:
console.log('float min:',Number.MIN_VALUE);
console.log('float max:',Number.MAX_VALUE);
console.log('int min:',Number.MIN_SAFE_INTEGER);
console.log('int max:',Number.MAX_SAFE_INTEGER);
More details are in the ES2018 specification.
JavaScript only has a single numeric type for primitives - it's an IEEE 754 standard 64-bit double-precision floating-point number.
Link to the specifications
Link to MDN
So, there are no integers in JavaScript - everything is a float.
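A couple of quick console checks that illustrate this (a sketch, not from the original answer):

console.log(typeof 1, typeof 1.5);  // "number" "number", one type for both
console.log(1 === 1.0);             // true
console.log(Number.isInteger(1.0)); // true, "integer" just means the float has no fractional part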

js parseInt() one off (large number)

When I run the following:
parseInt('96589218797811259658921879781126'.slice(0, 16), 10);
the output is
9658921879781124
whereas the expected is, as you can see:
9658921879781125
What is the cause? I understand vaguely in googling around that there are issues with large numbers in JS and that this indeed appears to be above that threshold, but I don't know what to do about it, or what is happening in this specific case. Thank you.
Edit: Thanks for the answers, I see that depending on an integer this size is bad news. The reason for using an integer is because I need to increment by one, then compare to the rest of the string. In this case, the first half of the string should equal the second half after incrementing the first half by one. What is an alternative approach?
It is not a JavaScript issue.
JavaScript uses the double-precision 64-bit floating point format (IEEE 754) to store both integer and rational numbers.
This format allows the exact representation of integers between -2^53 and 2^53 (-9007199254740992 to 9007199254740992). Outside this interval, some integer values cannot be represented by this format; they are rounded to the nearest integer value that can be represented. For example, from 2^53 to 2^54, everything is multiplied by 2, so the representable numbers are the even ones.
The JavaScript global object Number provides the constant Number.MAX_SAFE_INTEGER, which represents the maximum integer that can be safely stored in the IEEE 754 format (2^53 - 1).
It also provides the method Number.isSafeInteger() you can use to find out if an integer number can be safely stored by the Number JavaScript type.
The number you use (9658921879781125) is too large to be stored exactly in JavaScript, or in any other language that uses the double-precision floating-point format to store numbers.
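Regarding the edit's question about an alternative approach: one option (not from the original answer) is BigInt, which represents integers of arbitrary size exactly, so the increment-and-compare works without any rounding:

const s = '96589218797811259658921879781126';
console.log(Number.isSafeInteger(9658921879781125)); // false, above 2^53 - 1
const firstHalf = BigInt(s.slice(0, 16));            // 9658921879781125n
const secondHalf = BigInt(s.slice(16));              // 9658921879781126n
console.log(firstHalf + 1n === secondHalf);          // true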

Why does this happen to numbers in JavaScript

Found this here:
How does this even work? What is happening here? Why does the number change in the first line?
JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
The number 111111111111111111 (18 digits) is above that range.
Reference: Number.MAX_SAFE_INTEGER
As mentioned above, JavaScript uses the double-precision 64-bit floating point format for numbers: 52 bits are reserved for the mantissa (the significant digits), 11 bits for the exponent, and 1 bit for the sign.
The whole deal with the numbers is beautifully explained in this video. Essentially, the exponent acts as a pointer that moves the binary point along the 52 mantissa bits. Naturally, you need more bits to express larger numbers such as your 111111111111111111.
Converted to binary, your number looks like this:
sign - 0
exponent - 10000110111
mantissa - 1000101010111110111101111000010001100000011100011100
The more bits the integer part of the value takes up, the fewer are left for the low-order digits.
Eventually, simple calculations such as incrementing by 1 become inaccurate because there are no bits left on the far right, and the smallest possible increment depends on where the binary point currently sits.
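A quick sketch of that loss of precision, using the number from the question:

console.log(111111111111111111 === 111111111111111110);     // true, both literals round to the same double
console.log(111111111111111111 + 1 === 111111111111111111); // true, adding 1 has no effect at this magnitude
console.log(2 ** 53 + 1 === 2 ** 53);                       // true, the first integer whose successor is lost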

parseInt rounds incorrectly

I stumbled upon this issue with parseInt and I'm not sure why this is happening.
console.log(parseInt("16980884512690999")); // gives 16980884512691000
console.log(parseInt("169808845126909101"));​ // gives 169808845126909100
I'm clearly not hitting any of JavaScript's number limits
(Number.MAX_VALUE = 1.7976931348623157e+308)
Running Win 7 64 bit if that matters.
What am I overlooking?
Fiddle
Don't confuse Number.MAX_VALUE with the maximum accurate value. All numbers in JavaScript are stored as 64-bit floating point, which means you can get very high (and very low) numbers, but they'll only be accurate to a certain point.
Double-precision floats (i.e. JavaScript's) have 53 bits of significand precision, which means the highest/lowest "certainly accurate" integer in JavaScript is +/-9007199254740992 (2^53). Numbers above/below that may still turn out to be accurate (the ones that simply add 0s on the end, because the exponent bits can be used to represent that).
Or, in the words of ECMAScript: "Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and −0)."
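parseInt is not doing the rounding; the nearest representable double to 16980884512690999 is simply 16980884512691000, so a plain literal or Number() gives exactly the same result (a quick check, not from the original answer):

console.log(16980884512690999);                                       // 16980884512691000, the literal itself is already rounded
console.log(Number("16980884512690999"));                             // 16980884512691000
console.log(parseInt("16980884512690999", 10) === 16980884512690999); // true, both sides are the same double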
Update
Just to add a bit to the existing question, the ECMAScript spec requires that if an integral Number has less than 22 digits, .toString() will output it in standard decimal notation (e.g. 169808845126909100000 as in your example). If it has 22 or more digits, it will be output in normalized scientific notation (e.g. 1698088451269091000000 - an additional 0 - is output as 1.698088451269091e+21).
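Using the spec's boundary with the answer's own example values:

console.log((169808845126909100000).toString());  // "169808845126909100000" (21 digits, plain decimal notation)
console.log((1698088451269091000000).toString()); // "1.698088451269091e+21" (22 digits, scientific notation)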
From this answer
All numbers in JavaScript are 64-bit "double" precision IEEE 754 floating point.
The largest positive whole number that can therefore be accurately
represented is 2^53. The remaining bits are reserved for the exponent.
2^53 = 9007199254740992
