Is `10e1` an Integer literal or Floating point literal? - javascript

I was looking into this article and wondering if 10e1 is an Integer Literal or a Floating Point Literal.
I know that 100 is an Integer literal. Does it make any difference if I write 10e1 instead?
When I check in the spec here(7) or here(5.1), there is nothing called "Floating Point Literal". Is this just another incorrect doc in MDN? Any idea what Floating Point Literal refers to?
To summarise:
Do 100 and 10e1 fall into the same category of literals? If yes, which?
Is there something called "Floating Point Literal"?

Does it make any difference if I write 10e1 instead of 100?
Not for the result, no. But it's one character more to transfer and parse (so better use 1e2 :-D), and it will affect readability. Not everyone is familiar with exponents.
In the spec there is nothing called "Floating Point Literal". Is this just another incorrect doc in MDN?
The MDN guide is dubious for sure; naming a section "Integers" in an article about JS types is confusing at the least.
Do 100 and 10e1 fall into the same category of literals? If yes, which?
Yes, they're both numeric literals. JS does not distinguish between numbers with and without a fractional part, they all have the same floating-point type. There is only one grammar for decimal number literals, with fractional digits and exponents being optional.
Any idea what Floating Point Literal refers to?
It's meant as "(number) literal for a floating-point number", just as "Integer literal" means "(number) literal for a floating-point number representing an integer".
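A quick console check (just a sketch; nothing here is engine-specific) confirms that both literals produce the very same value of the single Number type:
100 === 10e1             // true (same Number value)
100 === 1e2              // true
typeof 10e1              // "number" (there is no separate integer or float type)
Number.isInteger(10e1)   // true (the stored value is a whole number either way)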

DecimalLiteral ::
    DecimalIntegerLiteral . DecimalDigits(opt) ExponentPart(opt)
    . DecimalDigits ExponentPart(opt)
    DecimalIntegerLiteral ExponentPart(opt)
DecimalIntegerLiteral ::
    0
    NonZeroDigit DecimalDigits(opt)
As per the spec, 100 and 10e1 are both a "DecimalLiteral", while 100 also qualifies as a "DecimalIntegerLiteral".
None of these distinctions should make any real difference to the developer, as mentioned in the comments by @Thilo.
The internal representation as per IEEE-754 is the same as well:
Status   Sign [1]   Exponent [11]      Significand [52]
Normal   0 (+)      10000000101 (+6)   1.1001000000000000000000000000000000000000000000000000 (1.5625)
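If you want to inspect that bit pattern yourself, one possible sketch (using standard typed arrays, nothing engine-specific) is:
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, 100);                 // 10e1 stores exactly the same bits
const bits = view.getBigUint64(0).toString(2).padStart(64, "0");
console.log(bits.slice(0, 1));   // "0"           (sign)
console.log(bits.slice(1, 12));  // "10000000101" (biased exponent, 1029 - 1023 = +6)
console.log(bits.slice(12));     // "1001" followed by 48 zeroes (the leading 1 of the significand is implicit)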

Related

What does character 'n' after numeric literal mean in JavaScript?

I've seen
const num = 123456789000000000000n;
and I don't know what the n at the end of the numeric literal does.
At the time of writing, when searching online for "What does character 'n' after numeric literal mean in JavaScript" nothing comes up.
From BigInt on MDN:
A BigInt is created by appending n to the end of an integer literal —
10n — or by calling the function BigInt().
In essence, BigInt allows for storing arbitrarily large integers; otherwise a large numeric literal would be converted to a floating point Number and lose precision in its least significant digits.
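A short illustration (assuming an engine with BigInt support, which is anything reasonably current):
console.log(9007199254740993);    // 9007199254740992  (Number already loses the last digit beyond 2^53)
console.log(9007199254740993n);   // 9007199254740993n (BigInt keeps every digit)
console.log(typeof 10n);          // "bigint"
console.log(10n === BigInt(10));  // true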

Why BigInt demand explicit conversion from Number?

BigInt and Number conversions
When working with numbers in JavaScript there are two primitive types to choose from: BigInt and Number. One could expect implicit conversion from the "smaller" type to the "bigger" type, but that isn't the case in JavaScript.
Expected
When computing some combination of BigInt and Number, one could expect an implicit cast from Number to BigInt, as in the example below:
const number = 16n + 32; // DOESN'T WORK
// Expected: Evaluates to 48n
Actual behavior
Expressions operating on both BigInt and Number throw an error:
const number = 16n + 32;
// Throws "TypeError: Cannot mix BigInt and other types, use explicit conversions"
Why is explicit conversion needed in the above cases?
Or, in other words, what is the reason behind this design?
This is documented in the original BigInt proposal: https://github.com/tc39/proposal-bigint/blob/master/README.md#design-goals-or-why-is-this-like-this
When a messy situation comes up, this proposal errs on the side of throwing an exception rather than rely on type coercion and risk giving an imprecise answer.
It's a design choice. In statically typed languages, coercion may lose information: going from float to int, for example, the fractional part simply gets truncated. JavaScript does type coercion elsewhere, so you might expect 16n + 32 to just treat 32 as if it were a BigInt instead of a Number, and there wouldn't be a problem.
This was purely a design choice, which is motivated here in this part of the documentation.
They are not "smaller" and "bigger". One has real but potentially imprecise numbers, the other has integral but precise ones. What do you think should be the result of 16n + 32.5? (note that type-wise, there is no difference between 32 and 32.5). Automatically converting to BigInt will lose any fractional value; automatically converting to Number will risk loss of precision, and potential overflow. The requirement for explicit conversion forces the programmer to choose which behaviour they desire, without leaving it to chance, as a potential (very likely) source of bugs.
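A hedged sketch of the two explicit choices the error message pushes you towards:
const a = 16n + BigInt(32);    // 48n  (you decide that 32 is a safe integer to promote)
const b = Number(16n) + 32.5;  // 48.5 (you accept possible precision loss for very large BigInts)
// BigInt(32.5);               // would throw a RangeError, since 32.5 is not an integer
console.log(a, b);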
You probably missed an important point:
BigInt is about integers
Number is about real numbers
Implicit conversion from 32 to 32n might make sense, but implicit conversion from a floating point number, e.g. 1.555, to BigInt would be misleading.

.toFixed() returns a string? How can I convert that to a floating point number

I tried with the example value x = 123; I want two digits of precision, so I use x.toFixed(2). Then I get the output "123.00",
but I want the output to be 123.00, which is a floating point number with decimals.
var x = 123
x.toFixed(2);
output: "123.00"
expected: 123.00
A floating point or decimal number is not something you explicitly declare in JavaScript. When there is nothing but zeroes after the decimal point, i.e. .0, the value is displayed as a plain integer.
The .toFixed() method is only for presentation purposes. It also rounds off to the requested number of decimals.
2.50000 and 2.5 are the exact same number. If you want to keep trailing zeroes, you'll have to use a string.
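A couple of quick examples of both points (console expressions; the quoted values are the returned strings):
(3.14159).toFixed(2)   // "3.14" (rounded and padded, but returned as a string)
(2.5).toFixed(0)       // "3"
2.50000 === 2.5        // true (same Number; trailing zeroes exist only in strings)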
When I try to do this on my Chrome Console:
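(The original screenshot is not reproduced here; the check presumably looked roughly like this.)
var x = 123;
x.toFixed(2) === "123.00"   // true (toFixed returns a string)
123.00 === 123              // true (the trailing zeroes do not survive parsing)
typeof 123.00               // "number" (the same single Number type either way)
parseFloat(x.toFixed(2))    // 123 (converting back simply drops the zeroes again)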
You can see that even with strict equality checking, 123.00 and 123 are considered the same value: the trailing decimal zeroes are simply dropped by the JavaScript parser. It might be a bit unclear for developers coming from statically or strongly typed languages like Java and C#, where you have separate float and double types.
Related: JavaScript - Keep trailing zeroes.

Midpoint 'rounding' when dealing with large numbers?

So I was trying to understand JavaScript's behavior when dealing with large numbers. Consider the following (tested in Firefox and Chrome):
console.log(9007199254740993) // 9007199254740992
console.log(9007199254740994) // 9007199254740994
console.log(9007199254740995) // 9007199254740996
console.log(9007199254740996) // 9007199254740996
console.log(9007199254740997) // 9007199254740996
console.log(9007199254740998) // 9007199254740998
console.log(9007199254740999) // 9007199254741000
Now, I'm aware of why it's outputting the 'wrong' numbers—it's trying to convert them to floating point representations and it's rounding off to the nearest possible floating point value—but I'm not entirely sure about why it picks these particular numbers. My guess is that it's trying to round to the nearest 'even' number, and since 9007199254740996 is divisible by 4 while 9007199254740994 is not, it considers 9007199254740996 to be more 'even'.
What algorithm is it using to determine the internal representation? My guess is that it's an extension of regular midpoint rounding (round to even is the default rounding mode in IEEE 754 functions).
Is this behavior specified as part of the ECMAScript standard, or is it implementation dependent?
As pointed out by Mark Dickinson in a comment on the question, the ECMA-262 ECMAScript Language Specification requires the use of IEEE 754 64-bit binary floating point to represent the Number Type. The relevant rounding rules are "Choose the member of this set that is closest in value to x. If two values of the set are equally close, then the one with an even significand is chosen...".
These rules are general, applying to rounding results of arithmetic as well as the values of literals.
The following are all the numbers in the relevant range for the question that are exactly representable in IEEE 754 64-bit binary floating point. Each is shown as its decimal value, and also as a hexadecimal representation of its bit pattern. A number with an even significand has an even rightmost hexadecimal digit in its bit pattern.
9007199254740992 bit pattern 0x4340000000000000
9007199254740994 bit pattern 0x4340000000000001
9007199254740996 bit pattern 0x4340000000000002
9007199254740998 bit pattern 0x4340000000000003
9007199254741000 bit pattern 0x4340000000000004
Each of the even inputs is one of these numbers, and rounds to that number. Each of the odd inputs is exactly half way between two of them, and rounds to the one with the even significand. This results in rounding the odd inputs to 9007199254740992, 9007199254740996, and 9007199254741000.
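This rounding can be checked directly in the console; the comparisons below compare the stored values of the literals, so no output formatting is involved:
9007199254740993 === 9007199254740992   // true (the odd literal rounds down to the even significand)
9007199254740995 === 9007199254740996   // true (this one rounds up, again to the even significand)
9007199254740999 === 9007199254741000   // true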
Patricia Shanahan's answer helped a lot and explained my primary question. However, as to the second part of the question (whether or not this behavior is implementation dependent), it turns out that yes, it is, but in a slightly different way than I originally thought. Quoting from ECMA-262 5.1 § 7.8.3:
… the rounded value must be the Number value for the MV (as specified in 8.5), unless the literal is a DecimalLiteral and the literal has more than 20 significant digits, in which case the Number value may be either the Number value for the MV of a literal produced by replacing each significant digit after the 20th with a 0 digit or the Number value for the MV of a literal produced by replacing each significant digit after the 20th with a 0 digit and then incrementing the literal at the 20th significant digit position.
In other words, an implementation may choose to ignore everything after the 20th digit. Consider this:
console.log(9007199254740993.00001)
Both Chrome and Firefox will output 9007199254740994; Internet Explorer, however, will output 9007199254740992 because it chooses to ignore the digits after the 20th. Interestingly, this doesn't appear to be standards-compliant behavior (at least as I read the standard): it should interpret this the same as 9007199254740993.0001, but it does not.
JavaScript represents numbers as 64-bit floating point values. This is defined in the standard.
http://en.wikipedia.org/wiki/Double-precision_floating-point_format
So there's nothing related to midpoint rounding going on there.
As a hint: every 32-bit integer has an exact representation in double-precision floating-point format.
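A brief sketch illustrating the safe-integer range this hint refers to:
console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991 (2^53 - 1)
console.log(Number.isSafeInteger(2 ** 31 - 1));       // true  (every 32-bit integer is exact)
console.log(Number.isSafeInteger(9007199254740993));  // false (the literal already rounded to 2^53, outside the safe range)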
Ok, since you're asking for the exact algorithm, I checked how Chrome's V8 engine does it.
V8 defines a StringToDouble function, which calls InternalStringToDouble in the following file:
https://github.com/v8/v8/blob/master/src/conversions-inl.h#L415
And this, in turn, calls the Strtod function defined there:
https://github.com/v8/v8/blob/master/src/strtod.cc

Why is an integer literal followed by a dot a valid numeric literal in JavaScript?

In JavaScript it is valid to end an integer numeric literal with a dot, like so...
x = 5.;
What's the point of having this notation? Is there any reason to put the dot at the end, and if not, why is that notation allowed in the first place?
Update: OK guys, since you mention floats and integers... we are talking about JavaScript here. There is only one number type in JavaScript, which is IEEE-754 double precision.
5 and 5. have the same value, there is no difference between those two values.
I guess it is just compatibility with other C-like languages where the dot does matter.
You DO need the decimal point if you call a method on an integer:
5.toFixed(n) // throws an error
5..toFixed(n) // returns the string '5.' followed by n zeroes
If that doesn't look right, (5).toFixed(n), or 5.0.toFixed(n), will work, too.
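All of the following forms parse and give the same result (a quick sketch; the returned value is a string, as discussed in the previous question):
5..toFixed(2)    // "5.00" (the first dot ends the literal, the second starts the member access)
5.0.toFixed(2)   // "5.00"
(5).toFixed(2)   // "5.00" (the parentheses end the literal before the dot)
5 .toFixed(2)    // "5.00" (a space works too)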
That's a floating point number. Unlike any other language I've ever encountered, all numbers in JavaScript are actually 64-bit floating point numbers. Technically, there are no native integers in JavaScript. See The Complete JavaScript Number Reference for the full ugly story.
The correct answer in this case is, that it makes absolutely no difference.
Every number in JavaScript is already a 64bit floating point number.
The ". syntax" is only useful in cases where you can ommit the fixed part because it's 0:
.2 // Will end up as 0.2
-.5 // Will end up as -0.5
So overall it's just saving a byte, but it makes the code less readable at the same time.
What if it weren't an integer, but a floating point literal?
