JavaScript Number constructor strange behaviour [duplicate]

This question already has answers here:
Why is 9999999999999999 converted to 10000000000000000 in JavaScript?
(6 answers)
Closed 8 years ago.
Converting a string to a number produces an incremented value:
var n = '9999999999999999';
console.log(n); // -> 9999999999999999
var nn = Number(n)
console.log(nn); // -> 10000000000000000
How to avoid this?

9999999999999999 is treated internally in JavaScript as a floating-point number. It cannot be accurately represented in IEEE 754 double precision because it would require 54 bits of precision (log2(9999999999999999) ≈ 53.150849512, and since fractional bits do not exist, the result must be rounded up to 54), while IEEE 754 provides only 53 bits (1 implicit bit + 52 explicitly stored bits of the mantissa), one bit less. Hence the number simply gets rounded.
Since only one bit is lost in this case, even 54-bit numbers are still exactly representable: the bit that gets dropped is a 0 for them anyway. Odd 54-bit numbers are rounded to whichever of their two even neighbours (each twice a 53-bit number) has an even mantissa, per the default round-to-nearest-even mode of IEEE 754.
[Source]
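As for how to avoid it: one option, assuming the environment supports BigInt (ES2020) and the value is an integer, is to keep the digits as a string and convert with BigInt instead of Number. A minimal sketch (not part of the answer above):
var s = '9999999999999999';
// Sketch: Number() rounds past 2**53 - 1, BigInt keeps every digit (ES2020+).
if (Number.isSafeInteger(Number(s))) {
  console.log(Number(s));            // small enough - a plain Number is exact
} else {
  console.log(BigInt(s).toString()); // -> 9999999999999999
}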

Related

JavaScript Number method does not return the expected value [duplicate]

This question already has answers here:
What is JavaScript's highest integer value that a number can go to without losing precision?
(21 answers)
Closed 4 years ago.
I have tried the JavaScript Number method and it returns unexpected results.
If I pass the value 192000000000000005 into Number(), it returns 192000000000000000, without the 5.
If I pass the value 19200000000000005, it returns 19200000000000004, which is unexpected.
What is the meaning of these results? You can find a screenshot below.
Chrome Console:
192000000000000005 cannot be represented by a Number in JavaScript, due to the limitations of 64-bit floating point representation.
The maximum integer that is safe (i.e. all integers up to that integer can be represented without gaps) is Number.MAX_SAFE_INTEGER === 9,007,199,254,740,991.
You can see that's 16 digits, so your 18-digit value is not safe, and your 17-digit one isn't either.
Note, the actual definition of MAX_SAFE_INTEGER is
the largest integer n, where n and n + 1 are both exactly representable as a Number value
console.log(9007199254740991) //=> 9007199254740991
console.log(9007199254740992) //=> 9007199254740992
console.log(9007199254740993) //=> 9007199254740992
So, while 9007199254740992 is representable as a Number, it is not "safe", because 9007199254740993 maps to the same value when stored in a 64-bit float.
Note that 9007199254740991 is 2**53 - 1, which makes sense when you know that a 64-bit float has a 53-bit mantissa.
Mozilla Developer Network Documentation
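A hedged addition (not part of the answer above): Number.isSafeInteger, available in modern engines, performs exactly this check:
console.log(Number.isSafeInteger(9007199254740991));   // true  - 2**53 - 1
console.log(Number.isSafeInteger(9007199254740992));   // false - representable, but 2**53 + 1 maps to the same double
console.log(Number.isSafeInteger(192000000000000005)); // false - the value from the question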

parseInt returning values that differ by 1 [duplicate]

This question already has answers here:
What is JavaScript's highest integer value that a number can go to without losing precision?
(21 answers)
Closed 7 years ago.
I have data like this:
var currentValue="12345678901234561";
and I'm trying to parse it:
var number = parseInt(currentValue, 10) || 0;
and my result is:
number = 12345678901234560
now let's try:
currentValue="12345678901234567"
in this case parseInt(currentValue,10) will result in 12345678901234568
Can anyone explain to me why parseInt is adding/subtracting 1 to/from the values I provided?
Can anyone explain to me why parseInt is adding/subtracting 1 to/from the values I provided?
It's not, quite, but JavaScript numbers are IEEE-754 double-precision binary floating point (even when you're using parseInt), which have only about 15 digits of precision. Your number is 17 digits long, so precision suffers, and the lowest-order digits get spongy.
The maximum reliable integer value is 9,007,199,254,740,991, which is available from the property Number.MAX_SAFE_INTEGER on modern JavaScript engines. (Similarly, there's Number.MIN_SAFE_INTEGER, which is -9,007,199,254,740,991.)
Some integer-specific operations, such as the bitwise operators ~, &, |, <<, and >>, convert their floating-point operands to signed 32-bit integers, which gives us a much smaller range: -2^31 (-2,147,483,648) through 2^31 - 1 (2,147,483,647). The unsigned right shift >>> converts its operand to an unsigned 32-bit integer, giving us the range 0 through 4,294,967,295. Finally, just to round out our integer discussion, the length of an array is always a number within the unsigned 32-bit integer range.
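A few lines illustrating those conversions (a sketch added for clarity, not from the original answer):
console.log(2 ** 31 | 0);       // -2147483648 - wraps into the signed 32-bit range
console.log((2 ** 32 + 5) | 0); // 5 - bits above the 32nd are simply discarded
console.log(-1 >>> 0);          // 4294967295 - >>> reinterprets the value as unsigned 32-bit
console.log([].length);         // 0 - array lengths live in the unsigned 32-bit range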

Why does this happen to numbers in JavaScript

Found this here:
How does this even work? What is happening here? Why does the number change in the first line?
JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
The number 111111111111111111 (18 digits) is above that range.
Reference: Number.MAX_SAFE_INTEGER
As mentioned above, JavaScript uses the double-precision 64-bit floating point format for the numbers. 52 bits are reserved for the values, 11 bits for the exponent and 1 bit for the plus/minus sign.
The whole deal with the numbers is beautifully explained in this video. Essentially, JavaScript uses a pointer that moves along the 52 bits to mark the floating point. Naturally, you need more bits to express larger numbers such as your 111111111111111111.
To convert your number into binary, it would be:
sign - 0
exponent - 10000110111
mantissa - 1000101010111110111101111000010001100000011100011100
The more bits the integer part of the value needs, the fewer remain for the fractional digits.
Eventually, simple calculations such as incrementing by 1 become inaccurate, because there are no bits left on the far right, and the smallest possible increment depends on the position of that pointer (the exponent).
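To make that concrete for the number in the question (a sketch, not part of the original answer): at this magnitude the gap between adjacent doubles is 2**4 = 16, so adding 1 cannot change the value.
var big = 111111111111111111;            // rounds to the nearest double, 111111111111111104
console.log(big === 111111111111111104); // true - both literals map to the same bits
console.log(big + 1 === big);            // true - 1 is smaller than the local gap of 16
console.log(Number.isSafeInteger(big));  // false - well past 2**53 - 1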

Why is 9999999999999999 converted to 10000000000000000 in JavaScript?

Can anyone explain to me why 9999999999999999 is converted to 10000000000000000?
alert(9999999999999999); //10000000000000000
http://jsfiddle.net/Y2Vb2/
JavaScript doesn't have integers, only 64-bit floats, and you've run out of floating-point precision.
See similar issue in Java: why is the Double.parseDouble making 9999999999999999 to 10000000000000000?
JavaScript only has floating point numbers, no integers.
Read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Summary: floating-point numbers have only limited precision; go beyond about 15 significant digits and you'll get rounding.
9999999999999999 is treated internally in JavaScript as a floating-point number. It cannot be accurately represented in IEEE 754 double precision because it would require 54 bits of precision (log2(9999999999999999) ≈ 53.150849512, and since fractional bits do not exist, the result must be rounded up to 54), while IEEE 754 provides only 53 bits (1 implicit bit + 52 explicitly stored bits of the mantissa), one bit less. Hence the number simply gets rounded.
Since only one bit is lost in this case, even 54-bit numbers are still exactly representable: the bit that gets dropped is a 0 for them anyway. Odd 54-bit numbers are rounded to whichever of their two even neighbours (each twice a 53-bit number) has an even mantissa, per the default round-to-nearest-even mode of IEEE 754.
Why is 9999999999999999 converted to 10000000000000000?
All numbers in JavaScript are stored in the 64-bit IEEE-754 format, also known as “double precision”, so there are exactly 64 bits to store a number: 52 of them are used to store the digits, 11 of them store the position of the decimal point (the exponent), and 1 bit is for the sign.
If a number is too big, it overflows the 64-bit storage and becomes infinity:
alert( 1e500 );
// Result => Infinity
// "e" multiplies the number by 1 with the given zeroes count.
If we check whether the sum of 0.1 and 0.2 is 0.3, we get false.
alert( 0.1 + 0.2 == 0.3 )
Strange! What is it then, if not 0.3? This happens because a number is stored in memory in its binary form, as a sequence of ones and zeroes. But fractions like 0.1 and 0.2 that look simple in the decimal numeral system are actually unending fractions in their binary form.
In other words, what is 0.1? It is one divided by ten, 1/10, one-tenth. In the decimal numeral system such numbers are easily representable. Compare it to one-third: 1/3. It becomes an endless fraction, 0.33333(3).
There’s just no way to store exactly 0.1 or exactly 0.2 using the binary system, just like there is no way to store one-third as a decimal fraction.
The numeric format IEEE-754 solves this by rounding to the nearest possible number. These rounding rules normally don’t allow us to see that “tiny precision loss”, so the number shows up as 0.3. But beware, the loss still exists.
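To see that hidden loss, and to compare such results without being bitten by it, a common pattern (a sketch, not from the article quoted here) is to print more digits and compare with a tolerance instead of ==:
alert( (0.1 + 0.2).toFixed(20) );                      // 0.30000000000000004441
alert( 0.1 + 0.2 === 0.3 );                            // false
alert( Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON ); // true - equal within one epsilon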
As you can see:
alert( 9999999999999999 ); // shows 10000000000000000
This suffers from the same issue: a loss of precision. There are 64 bits for the number, 52 of them can be used to store digits, but that’s not enough. So the least significant digits disappear.
What is really happening when 9999999999999999 becomes 10000000000000000 is this:
JavaScript doesn't trigger an error in such cases. It does its best to fit the number into the desired format, but unfortunately this format is not big enough.
Reference: https://javascript.info/number
You can also refer to this SO question; it covers JavaScript numbers in detail.
Question: Sometimes JavaScript computations seem to yield "inaccurate" results, e.g. 0.362*100 yields 36.199999999999996. How can I avoid this?
Answer: Internally, JavaScript stores all numbers in double-precision floating-point format, with a 52-bit mantissa and an 11-bit exponent (the IEEE 754 standard for storing numeric values). This internal representation may cause unexpected results like the above. Most integers greater than 2^53 = 9007199254740992 cannot be represented exactly in this format. Likewise, many decimals/fractions, such as 0.362, cannot be represented exactly, leading to the perceived "inaccuracy" in the above example. To avoid these "inaccurate" results, you might want to round the results to the precision of the data you used.
http://www.javascripter.net/faq/accuracy.htm
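A small sketch of that rounding advice; the roundTo helper below is made up for illustration, it is not part of the FAQ or of any library:
// Hypothetical helper: round a result to a chosen number of decimal places.
function roundTo(value, decimals) {
  var factor = Math.pow(10, decimals);
  return Math.round(value * factor) / factor;
}
console.log(0.362 * 100);             // 36.199999999999996
console.log(roundTo(0.362 * 100, 3)); // 36.2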
9999999999999999 in binary form is 100011100001101111001001101111110000001111111111111111
which has 54 digits.
Below we will convert this figure to the JavaScript IEEE-754 format, which has 1 bit for the sign, 11 bits for the exponent in binary offset (biased) form, and 52 bits for the mantissa itself.
In normalized binary form the first digit is always 1, so the IEEE-754 format omits that leading digit of the number when saving the mantissa.
So we are left with 00011100001101111001001101111110000001111111111111111 for the mantissa, which is 53 digits; since only 52 can be kept, we have to round. The dropped bit is a 1 with nothing after it, so the value is exactly halfway between two representable numbers and is rounded to the one with the even mantissa: 0001110000110111100100110111111000001000000000000000
The final number in binary form will be 1 0001110000110111100100110111111000001000000000000000 0, which in decimal form is 10000000000000000:
1 is the implicit leading digit that is not written to the 52 mantissa bits, then come the 52 bits of the mantissa, and one trailing 0 brings it back to 54 digits, which is 10000000000000000 in decimal.
That might be hard to understand unless you read this beautiful article
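If you want to check the bit-by-bit walkthrough above yourself, the raw bits of a Number can be read with a DataView; a minimal sketch (assumes an ES2017+ environment for padStart):
var view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, 9999999999999999);   // stored big-endian by default
var bits = '';
for (var i = 0; i < 8; i++) {
  bits += view.getUint8(i).toString(2).padStart(8, '0');
}
console.log(bits.slice(0, 1));  // sign:     0
console.log(bits.slice(1, 12)); // exponent: 10000110100 (biased 1076, i.e. 2**53)
console.log(bits.slice(12));    // mantissa: 0001110000110111100100110111111000001000000000000000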

What is the maximum integer allowed in a javascript variable? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
What is JavaScript's Max Int? What's the highest Integer value a Number can go to without losing precision?
What is the maximum integer in JavaScript? I have a variable that starts at 0 and adds 100 every 0.1 seconds. What is the maximum number it can reach?
BTW, I thought this question had been answered before but I couldn't find it. If it is answered, please send me a link to it =) thanks!
JavaScript numbers are IEEE 754 double-precision floating-point values. There's a 53-bit mantissa (from memory), so that's pretty much the limit.
Now there are times when JavaScript semantics call for numbers to be cast to a 32-bit integer value, like array indexing and bitwise operators.
A JavaScript variable can have any value you like. If native support isn't sufficient, there are various libraries that provide support for unlimited-precision arithmetic (e.g. BigInt.js).
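For what it's worth, native BigInt (ES2020) now covers much of what such libraries were needed for; a minimal sketch:
var counter = 0n;                 // BigInt literal - arbitrary-precision integer
for (var i = 0; i < 3; i++) {
  counter += 9007199254740993n;   // no rounding here, unlike Number arithmetic
}
console.log(counter.toString());  // 27021597764222979 - every digit is exact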
The largest value for the ECMAScript Number Type is +ve infinity (but infinity isn't a number, it's a concept). The largest numeric value is given by Number.MAX_VALUE, which is just the maximum value representable by an IEEE 754 64-bit double-precision number.
Some quirks:
var x = Number.MAX_VALUE;
var y = x - 1;
var z = x - 2;
x == y; // true
x == z; // also true - the gap between adjacent doubles near MAX_VALUE is about 2e292, so subtracting 2 changes nothing
The range of contiguous integers representable in ECMAScript is from -2^53 to +2^53. The largest exponent is 2^1023.
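To see the two different limits side by side (a sketch using the modern Number constants, not part of the original answer):
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 - last integer before gaps appear
console.log(Number.MAX_VALUE);        // 1.7976931348623157e+308 - largest finite double
console.log(Number.MAX_VALUE * 2);    // Infinity - overflows the exponent range
console.log(Number.MAX_VALUE + 1);    // still 1.7976931348623157e+308 - adding 1 is far below the gap here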
It is 1.7976931348623157e+308.
To try it yourself, run:
alert(Number.MAX_VALUE);
http://jsfiddle.net/XHcZx/
