I stumbled upon this issue with parseInt and I'm not sure why this is happening.
console.log(parseInt("16980884512690999")); // gives 16980884512691000
console.log(parseInt("169808845126909101")); // gives 169808845126909100
I'm clearly not hitting any number limits in JavaScript
(Number.MAX_VALUE = 1.7976931348623157e+308)
Running Win 7 64 bit if that matters.
What am I overlooking?
Don't confuse Number.MAX_VALUE with the maximum accurately representable value. All numbers in JavaScript are stored as 64-bit floating point, which means you can get very high (and very low) numbers, but they'll only be accurate to a certain point.
Double-precision floats (i.e. JavaScript's) have 53 bits of significand precision, which means the largest "certainly accurate" integers in JavaScript are +/-9007199254740992 (2^53). Numbers above/below that may still turn out to be exact (the ones that simply add 0's on the end, because the exponent bits can be used to represent those).
Or, in the words of ECMAScript: "Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and −0)."
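A quick demonstration of that boundary (the parseInt line reuses the value from the question):
console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991 (2^53 - 1)
console.log(9007199254740992 === 9007199254740993); // true - precision is lost above 2^53
console.log(parseInt("16980884512690999"));         // 16980884512691000 - the same loss as in the question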
Update
Just to add a bit to the existing question, the ECMAScript spec requires that if an integral Number has less than 22 digits, .toString() will output it in standard decimal notation (e.g. 169808845126909100000 as in your example). If it has 22 or more digits, it will be output in normalized scientific notation (e.g. 1698088451269091000000 - an additional 0 - is output as 1.698088451269091e+21).
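Spelled out as a quick check, with the outputs as given above (the two values are the ones from the example):
console.log(String(169808845126909100000));  // "169808845126909100000"  - 21 digits, plain decimal notation
console.log(String(1698088451269091000000)); // "1.698088451269091e+21"  - 22 digits, exponential notation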
From this answer
All numbers in JavaScript are 64-bit "double" precision IEEE 754 floating point.
The largest positive whole number that can therefore be accurately represented is 2^53. The remaining bits are reserved for the exponent.
2^53 = 9007199254740992
Related
Found this here:
How does this even work? What is happening here? Why does the number change in the first line?
JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
The number 111111111111111111 (18 digits) is above that range.
Reference: Number.MAX_SAFE_INTEGER
As mentioned above, JavaScript uses the double-precision 64-bit floating point format for its numbers. 52 bits are reserved for the mantissa (the significant digits), 11 bits for the exponent and 1 bit for the plus/minus sign.
The whole deal with the numbers is beautifully explained in this video. Essentially, JavaScript uses a "pointer" (the exponent) that moves along the 52 bits to mark where the binary point sits. Naturally, you need more bits to express larger numbers such as your 111111111111111111.
To convert your number into binary, it would be:
sign - 0
exponent - 10000110111
mantissa - 1000101010111110111101111000010001100000011100011100
The more bits the magnitude of the value takes up, the fewer are left for the less significant digits.
Eventually, simple calculations such as incrementing by 1 will become inaccurate due to the lack of bits on the far right, and the lowest possible increment will depend on the position of your pointer.
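As a small, hedged illustration of that increment point, using the 111111111111111111 value from above (at this magnitude the spacing between representable doubles is 16):
const x = 111111111111111111;  // 18 digits - needs more than 53 bits of significand
console.log(x + 1 === x);      // true  - 1 is smaller than the local spacing of 16
console.log(x + 16 === x);     // false - 16 matches the spacing, so the sum is representable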
I know there's
Math.floor
parseInt
But what about this case:
Math.floor(1.99999999999999999999999999)
returning 2, how could I obtain only its integer part, equals to 1?
1.99999999999999999999999999 is not the actual value of the number. It has a value of 2 after the literal is parsed, because that is the best JavaScript can do to represent such a value.
JavaScript numbers are IEEE-754 binary64 or "double precision" which allow for 15 to 17 significant [decimal] digits of precision - the literal shown requires 27 significant digits which results in information loss.
Test: 1.99999999999999999999999999 === 2 (which results in true).
Here is another answer of mine that describes the issue; the finite relative precision is the problem.
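The test above, spelled out as runnable lines:
console.log(1.99999999999999999999999999 === 2);       // true - the literal already parses to 2
console.log(Math.floor(1.99999999999999999999999999)); // 2 - the input Math.floor sees is exactly 2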
I know JavaScript numbers are just "double" numbers and have only 52 bits of precision for the fraction part. However, real JavaScript numbers seem to carry more usable precision for huge numbers than I expected.
For example, the predefined constant Number.MAX_VALUE represents the largest positive finite value of the Number type, which is approximately 1.7976931348623157e+308. Here I can access trailing digits of this value using a modulus operator.
> Number.MAX_VALUE
1.7976931348623157e+308
> Number.MAX_VALUE % 10000000000
4124858368
From this result I can assume that this number is 7fef ffff ffff ffff which represents (1 + (1 − 2 ** −52)) × 2 ** 1023 (Wikipedia) and can be transcribed in an exact form as following:
179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368
...and we only saw the trailing 10 digits of these 309 digits. So I think each JavaScript number must have exact digits in decimal form.
My question is: how can I get this 309-digit string in JavaScript? Attempts like Number.MAX_VALUE / 10000000000 % 10000000000 simply fail because of the sheer magnitude involved.
Furthermore, how about tiny numbers such as Number.MIN_VALUE? This must be the following fraction in the decimal form.
0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004940656458412465441765687928682213723650598026143247644255856825006755072702087518652998363616359923797965646954457177309266567103559397963987747960107818781263007131903114045278458171678489821036887186360569987307230500063874091535649843873124733972731696151400317153853980741262385655911710266585566867681870395603106249319452715914924553293054565444011274801297099995419319894090804165633245247571478690147267801593552386115501348035264934720193790268107107491703332226844753335720832431936092382893458368060106011506169809753078342277318329247904982524730776375927247874656084778203734469699533647017972677717585125660551199131504891101451037862738167250955837389733598993664809941164205702637090279242767544565229087538682506419718265533447265625
All the digits of MAX_VALUE, as reported by the code below, are (note this is 1.7976931348623157e+308 written out in full, so the trailing digits differ from the exact expansion quoted in the question):
179769313486231570000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Check out the code below:
http://howjs.com/?%3Aload%20http%3A%2F%2Fwww.javascriptoo.com%2Fapplication%2Fhtml%2Fjs%2FMikeMcl%2Fbig.js%2Fbig.min.js%0A%0Avar%20max%20%3D%20new%20Big(%20Number.MAX_VALUE%20)%3B%0Amax.toFixed()%3B
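For completeness, here is a dependency-free sketch that produces the same 309-digit string using BigInt (assuming a BigInt-capable engine); it relies on the fact that Number.MAX_VALUE is the integer (2^53 - 1) * 2^971:
const maxValue = (2n ** 53n - 1n) * 2n ** 971n;     // exact integer value of Number.MAX_VALUE
console.log(maxValue === BigInt(Number.MAX_VALUE)); // true - MAX_VALUE is an integer, so the conversion is exact
console.log(maxValue.toString().length);            // 309
console.log(maxValue.toString());                   // the full 309-digit string from the question
(Number.MIN_VALUE is not an integer, so a decimal big-number library like big.js is still the simpler route for that one.)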
The actual implementation of IEEE floating point numbers is a little (little!!!) confusing.
I find it helps if you think of a simpler form; it behaves the same everywhere except near overflow and underflow, where the IEEE format is just better.
This is the form:
A floating point number consists of:
A sign for the number (+/-)
An unsigned integer value called the "mantissa" -- make this 'v'
An unsigned integer value called the "exponent" -- make this 'n'
A "sign" for the exponent.
The sign of the number is simple -- does it have a minus in front.
The value is calculated as:
v*2ⁿ
If the sign of the exponent is positive, the exponent is basically 2*2*2*...*2 for as many twos as you have specified. If a large number is represented in decimal it will have lots of digits all the way down to the decimal point, BUT they are meaningless: if you display the number in binary, after about 53 binary digits all the rest will be zeros and you can't change them.
Notice that with a positive exponent this is all integers; floating point numbers (including IEEE ones) will calculate exactly with integers as long as you don't overflow the mantissa. When you do overflow it they are still well behaved, they just have zeros in the lower bits.
Only when the exponent is negative do you have strangeness
v/(2ⁿ)
The value you get for a negative exponent is still based on the 2*2*2*...*2 value, but you divide by it instead. So you're trying to represent, say, a tenth with a sum of halves, quarters, eighths and so forth... but this doesn't work exactly, so you get rounding errors and all the lovely floating point problems.
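A small illustration of that (toString(2) shows the binary expansion the engine actually stored):
console.log((0.75).toString(2)); // "0.11" - 1/2 + 1/4, exactly representable
console.log((0.1).toString(2));  // "0.000110011001100110011..." - the repeating pattern gets cut off and rounded
console.log(0.1 + 0.2 === 0.3);  // false - both operands were already rounded before the addition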
Your example value:
179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368
In binary it is
1111111111111111111111111111111111111111111111111111100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
There's lots of zeros on the end.
What every computer scientist should know about floating point
I know a little bit about how floating-point numbers are represented, but not enough, I'm afraid.
The general question is:
For a given precision (for my purposes, the number of accurate decimal places in base 10), what range of numbers can be represented for 16-, 32- and 64-bit IEEE-754 systems?
Specifically, I'm only interested in the range of 16-bit and 32-bit numbers accurate to +/-0.5 (the ones place) or +/- 0.0005 (the thousandths place).
For a given IEEE-754 floating point number X, if
2^E <= abs(X) < 2^(E+1)
then the distance from X to the next largest representable floating point number (epsilon) is:
epsilon = 2^(E-52) % For a 64-bit float (double precision)
epsilon = 2^(E-23) % For a 32-bit float (single precision)
epsilon = 2^(E-10) % For a 16-bit float (half precision)
The above equations allow us to compute the following:
For half precision...
If you want an accuracy of +/-0.5 (or 2^-1), the maximum size that the number can be is 2^10. Any X larger than this limit leads to the distance between floating point numbers being greater than 0.5.
If you want an accuracy of +/-0.0005 (about 2^-11), the maximum size that the number can be is 1. Any X larger than this limit leads to the distance between floating point numbers being greater than 0.0005.
For single precision...
If you want an accuracy of +/-0.5 (or 2^-1), the maximum size that the number can be is 2^23. Any X larger than this limit leads to the distance between floating point numbers being greater than 0.5.
If you want an accuracy of +/-0.0005 (about 2^-11), the maximum size that the number can be is 2^13. Any X larger than this limit leads to the distance between floating point numbers being greater than 0.0005.
For double precision...
If you want an accuracy of +/-0.5 (or 2^-1), the maximum size that the number can be is 2^52. Any X larger than this limit leads to the distance between floating point numbers being greater than 0.5.
If you want an accuracy of +/-0.0005 (about 2^-11), the maximum size that the number can be is 2^42. Any X larger than this limit leads to the distance between floating point numbers being greater than 0.0005.
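Here is a rough sketch of that epsilon formula for JavaScript's own doubles (the helper name ulp is mine; it ignores subnormals and assumes Math.log2 is exact at powers of two):
function ulp(x) {
  const E = Math.floor(Math.log2(Math.abs(x))); // exponent E with 2^E <= |x| < 2^(E+1)
  return 2 ** (E - 52);                         // spacing to the next representable double
}
console.log(ulp(1) === Number.EPSILON); // true - 2^-52
console.log(ulp(2 ** 52));              // 1 - integer spacing reaches 1 at 2^52
console.log(ulp(2 ** 53));              // 2 - integers above 2^53 are 2 apart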
For floating-point integers (I'll give my answer in terms of IEEE double-precision), every integer between 1 and 2^53 is exactly representable. Beyond 2^53, integers that are exactly representable are spaced apart by increasing powers of two. For example:
Every 2nd integer between 2^53 + 2 and 2^54 can be represented exactly.
Every 4th integer between 2^54 + 4 and 2^55 can be represented exactly.
Every 8th integer between 2^55 + 8 and 2^56 can be represented exactly.
Every 16th integer between 2^56 + 16 and 2^57 can be represented exactly.
Every 32nd integer between 2^57 + 32 and 2^58 can be represented exactly.
Every 64th integer between 2^58 + 64 and 2^59 can be represented exactly.
Every 128th integer between 2^59 + 128 and 2^60 can be represented exactly.
Every 256th integer between 2^60 + 256 and 2^61 can be represented exactly.
Every 512th integer between 2^61 + 512 and 2^62 can be represented exactly.
.
.
.
Integers that are not exactly representable are rounded to the nearest representable integer, so the worst case rounding is 1/2 the spacing between representable integers.
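A couple of quick checks of that spacing, runnable in any JavaScript console:
console.log(2 ** 53 + 1 === 2 ** 53); // true - odd integers just above 2^53 are not representable
console.log(2 ** 53 + 2);             // 9007199254740994 - every 2nd integer is exact here
console.log(2 ** 54 + 2 === 2 ** 54); // true - the spacing has grown to 4
console.log(2 ** 54 + 4);             // 18014398509481988 - every 4th integer is exact here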
The precision quoted from Peter R's link to the MSDN ref is probably a good rule of thumb, but of course reality is more complicated.
The fact that the "point" in "floating point" is a binary point and not decimal point has a way of defeating our intuitions. The classic example is 0.1, which needs a precision of only one digit in decimal but isn't representable exactly in binary at all.
If you have a weekend to kill, have a look at What Every Computer Scientist Should Know About Floating-Point Arithmetic. You'll probably be particularly interested in the sections on Precision and Binary to Decimal Conversion.
First off, IEEE 754-1985 has no 16-bit floats; the half-precision (binary16) format was added in the 2008 revision, with a 5-bit exponent and 10-bit fraction. IEEE 754 uses a dedicated sign bit, so the positive and negative range is the same. Also, the fraction has an implied 1 in front, so you get an extra bit.
If you want accuracy to the ones place, as in you can represent each integer, the answer is fairly simple: the exponent shifts the radix point to the right end of the fraction. So, a 10-bit fraction gets you ±2^11.
If you want one bit after the binary point, you give up one bit before it, so you have ±2^10.
Single-precision has a 23-bit fraction, so you'd have ±2^24 integers.
How many bits of precision you need after the decimal point depends entirely on the calculations you're doing, and how many you're doing.
2^10 = 1,024
2^11 = 2,048
2^23 = 8,388,608
2^24 = 16,777,216
2^53 = 9,007,199,254,740,992 (double-precision)
2^113 = 10,384,593,717,069,655,257,060,992,658,440,192 (quad-precision)
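JavaScript itself only has doubles, but Math.fround rounds a value to single precision, so the 2^24 claim can be sanity-checked directly:
console.log(Math.fround(16777216) === 16777216); // true  - 2^24 fits in a 32-bit float
console.log(Math.fround(16777217) === 16777217); // false - rounded to 16777216; the spacing is now 2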
See also
Double-precision
Half-precision
See IEEE 754-1985:
Note the (1 + fraction). As @bendin points out, using binary floating point you cannot express simple decimal values such as 0.1. The implication is that you can introduce rounding errors by doing simple additions many, many times, or by calling things like truncation. If you are interested in any sort of precision whatsoever, the only way to achieve it is to use a fixed-point decimal, which is basically a scaled integer.
If I understand your question correctly, it depends on your language.
For C#, check out the MSDN ref. Float has 7 digits of precision and double 15-16 digits.
It took me quite a while to figure out that when using doubles in Java, I wasn't losing significant precision in calculations. Floating point actually has a very good ability to represent numbers to quite reasonable precision. The precision I was losing was immediately upon converting decimal numbers typed by users to the binary floating-point representation that is natively supported. I've recently started converting all my numbers to BigDecimal. BigDecimal is much more work to deal with in the code than floats or doubles, since it's not one of the primitive types, but on the other hand, I'll be able to exactly represent the numbers that users type in.
Can anyone explain to me why 9999999999999999 is converted to 10000000000000000?
alert(9999999999999999); //10000000000000000
http://jsfiddle.net/Y2Vb2/
JavaScript doesn't have integers, only 64-bit floats - and you've run out of floating-point precision.
See similar issue in Java: why is the Double.parseDouble making 9999999999999999 to 10000000000000000?
JavaScript only has floating point numbers, no integers.
Read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Summary: floating point numbers have only limited precision; go beyond 15 digits (or so) and you'll get rounding.
9999999999999999 is treated internally in JavaScript as a floating-point number. It cannot be accurately represented in IEEE 754 double precision, as it would require 54 bits of precision (the number of bits is log2(9999999999999999) = 53.150849512..., and since fractional bits do not exist, the result must be rounded up) while IEEE 754 provides only 53 bits (1 implicit bit + 52 explicitly stored bits of the mantissa) - one bit less. Hence the number simply gets rounded.
Since only one bit is lost in this case, even 54-bit numbers are still exactly representable: the bit that gets dropped is a 0 anyway. Odd 54-bit numbers are rounded to the nearest representable value, which is twice an even 53-bit number, under the default round-to-nearest (ties to even) mode of IEEE 754.
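A short demonstration of that rounding, in plain JavaScript:
console.log(9999999999999998); // 9999999999999998  - even 54-bit integer, exactly representable
console.log(9999999999999999); // 10000000000000000 - odd, the tie is broken toward the even significand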
Why is 9999999999999999 converted to 10000000000000000?
All numbers in JavaScript are stored in 64-bit format IEEE-754, also known as “double precision”, So there are exactly 64 bits to store a number: 52 of them are used to store the digits, 11 of them store the position of the decimal point (they are zero for integer numbers), and 1 bit is for the sign.
If a number is too big, it would overflow the 64-bit storage, potentially giving an infinity:
alert( 1e500 );
// Result => Infinity
// "e" multiplies the number by 1 with the given zeroes count.
If we check whether the sum of 0.1 and 0.2 is 0.3, we get false.
alert( 0.1 + 0.2 == 0.3 ); // false
Strange! What is it then, if not 0.3? This happens because a number is stored in memory in its binary form, a sequence of ones and zeroes, but fractions like 0.1 and 0.2 that look simple in the decimal numeric system are actually unending fractions in their binary form.
In other words, what is 0.1? It is one divided by ten, 1/10, one tenth. In the decimal numeral system such numbers are easily representable. Compare it to one third: 1/3. It becomes an endless fraction 0.33333(3).
There’s just no way to store exactly 0.1 or exactly 0.2 using the binary system, just like there is no way to store one-third as a decimal fraction.
The numeric format IEEE-754 solves this by rounding to the nearest possible number. These rounding rules normally don’t allow us to see that “tiny precision loss”, so the number shows up as 0.3. But beware, the loss still exists.
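A hedged way to make that hidden loss visible (and to compare safely) is to print more digits than the default formatting shows:
alert( (0.1).toFixed(20) );               // 0.10000000000000000555
alert( (0.1 + 0.2).toFixed(20) );         // 0.30000000000000004441
alert( +(0.1 + 0.2).toFixed(2) === 0.3 ); // true - round to the precision you need before comparing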
As you see:
alert( 9999999999999999 ); // shows 10000000000000000
This suffers from the same issue: a loss of precision. There are 64 bits for the number, 52 of them can be used to store digits, but that’s not enough. So the least significant digits disappear.
What is really happening behind 9999999999999999 becoming 10000000000000000 is:
JavaScript doesn’t trigger an error in such events. It does its best to fit the number into the desired format, but unfortunately, this format is not big enough.
Reference: https://javascript.info/number
You can also refer this SO Question, it includes the very detail about the JavaScript Numbers.
Question: Sometimes JavaScript computations seem to yield "inaccurate" results, e.g. 0.362*100 yields 36.199999999999996. How can I avoid this?
Answer: Internally JavaScript stores all numbers in double-precision floating-point format, with a 52-bit mantissa and an 11-bit exponent (the IEEE 754 Standard for storing numeric values). This internal representation of numbers may cause unexpected results like the above. Most integers greater than 2^53 = 9007199254740992 cannot be represented exactly in this format. Likewise, many decimals/fractions, such as 0.362, cannot be represented exactly, leading to the perceived "inaccuracy" in the above example. To avoid these "inaccurate" results, you might want to round the results to the precision of the data you used.
http://www.javascripter.net/faq/accuracy.htm
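A minimal sketch of that "round to the precision of your data" advice (the helper name roundTo is my own):
function roundTo(value, decimals) {
  const factor = 10 ** decimals;              // e.g. 1000 for 3 decimal places
  return Math.round(value * factor) / factor; // round in scaled-integer space
}
console.log(0.362 * 100);             // 36.199999999999996
console.log(roundTo(0.362 * 100, 3)); // 36.2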
9999999999999999 in binary form is 100011100001101111001001101111110000001111111111111111
which has 54 digits.
Below we will convert this figure to the JavaScript IEEE-754 format, which has 1 bit for the sign, 11 bits for the exponent (stored in binary offset format) and 52 bits for the mantissa, the number itself.
In binary form the first digit of a normalized number is always 1, so JavaScript omits that first digit when saving the mantissa in IEEE-754 format.
So, we will have 00011100001101111001001101111110000001111111111111111 for the mantissa, which is 53 digits; since the mantissa can keep only 52 digits, we have to round, and dropping the last digit and rounding up yields 0001110000110111100100110111111000001000000000000000
the final number in binary form will be 1 0001110000110111100100110111111000001000000000000000 0 which in decimal form is 10000000000000000
1 is the implicit first digit that is not written into the 52-bit mantissa, then the 52 bits of the mantissa, and one trailing 0 to bring it back to 54 digits, which is 10000000000000000 in decimal.
That might be hard to understand unless you read this beautiful article.
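If you want to verify the bit-by-bit walkthrough yourself, here is a hedged sketch that reads the raw IEEE-754 bits with a DataView (any modern engine):
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, 9999999999999999);             // stored as the nearest representable double
const bits = view.getBigUint64(0).toString(2).padStart(64, "0");
console.log(bits.slice(0, 1));   // "0" - sign bit
console.log(bits.slice(1, 12));  // "10000110100" - exponent 1076, i.e. 1023 + 53
console.log(bits.slice(12));     // "0001110000110111100100110111111000001000000000000000" - the rounded 52-bit mantissa from above
console.log(view.getFloat64(0)); // 10000000000000000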