Number to string conversion error in JavaScript

Have you ever tried to convert a big number to a string in JavaScript?
Please try this:
var n = 10152557636804775;
console.log(n); // outputs 10152557636804776
Can you help me understand why?

10152557636804775 is larger than the maximum integer that can be safely represented in JavaScript (that maximum is Number.MAX_SAFE_INTEGER). See also this post for more details.
From MDN (emphasis is mine):
The MAX_SAFE_INTEGER constant has a value of 9007199254740991. The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
To check if a given variable can be safely represented as an integer (without representation errors) you can use Number.isSafeInteger():
var n = 10152557636804775;
console.assert(Number.isSafeInteger(n) == false);
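If you need to keep every digit, a minimal sketch (assuming your environment supports BigInt, ES2020+) is to use a BigInt literal or keep the value as a string:
var n = 10152557636804775n; // BigInt literal: the trailing n keeps every digit
console.log(n.toString()); // "10152557636804775"
var s = "10152557636804775"; // or never leave string form in the first place
console.log(BigInt(s) + 1n); // 10152557636804776n, exact arithmetic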

Related

Multiplication of same number in Java and Javascript giving different values

I have an expression which I am evaluating both in Java and JavaScript.
Java
System.out.println(582344008L * 719476260);
Output: 418982688909250080
Javascript
document.write(582344008 * 719476260)
Output: 418982688909250050
Why is there a difference in both values?
JavaScript numbers are all IEEE 754 doubles, so it looks like there is some floating-point error going on here. See Difference between floats and ints in JavaScript? for some more details.
Preamble
As @Charlie Wallace says:
582344008 * 719476260 is greater than Number.MAX_SAFE_INTEGER
What is JavaScript's highest integer value that a number can go to without losing precision?
As @baseballlover723 and @RealSkeptic say, it's a floating-point precision error: the two languages use different storage types, which round differently.
See JavaScript types:
What are the 7 primitive data types in JavaScript? There are 7 basic types in JavaScript:
number for numbers of any kind, integer or floating-point, with integer/decimal values up to 16 digits of precision. JavaScript numbers are all floating point, stored according to the IEEE 754 standard. That standard has several formats; JavaScript uses binary64, or double precision. As the former name indicates, numbers are stored in a binary format, in 64 bits.
string for strings. A string may have one or more characters, there’s no separate single-character type.
boolean for true/false.
null for unknown values – a standalone type that has a single value null.
undefined for unassigned values – a standalone type that has a single value undefined.
object for more complex data structures.
symbol for unique identifiers.
The typeof operator allows us to see which type is stored in a variable.
Two forms: typeof x or typeof(x).
Returns a string with the name of the type, like "string".
For null returns "object" – this is an error in the language, it’s not actually an object.
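For example, a quick sketch you can paste into a console:
typeof 42; // "number"
typeof "hello"; // "string"
typeof true; // "boolean"
typeof undefined; // "undefined"
typeof Symbol("id"); // "symbol"
typeof {}; // "object"
typeof null; // "object" (the historical quirk mentioned above)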
What are the 8 primitive data types in Java?
byte for 8-bit signed integer
short for 16-bit signed integer
int for 32-bit signed integer
long for 64-bit signed integer
char for a 16-bit Unicode character
float for *decimal* values up to 7 digits of precision
double for *decimal* values up to 16 digits of precision (64 bits)
boolean for true/false
Decimal values with a fractional component are called floating point; they can be expressed in either standard or scientific notation.
Testing
A quick way to test is to use an online Java compiler (Compile and Execute Java Online (JDK 1.8.0)):
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Long Output: " + 582344008L * 719476260L);
        System.out.println("Float Output: " + 582344008.0 * 719476260.0);
    }
}
The command:
$javac HelloWorld.java
$java -Xmx128M -Xms16M HelloWorld
The Output:
Long Output: 418982688909250080
Float Output: 4.1898268890925005E17
My desktop calculator goes to this:
4.189826889E17
and for JavaScript online compiler:
//JavaScript-C24.2.0 (SpiderMonkey)
print("Output: "+582344008 * 719476260)
The Output:
Output: 418982688909250050
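To reproduce the exact Java long result in JavaScript, a sketch using BigInt (assuming BigInt support):
console.log((582344008n * 719476260n).toString());
// "418982688909250080", matching the Java long output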

I can't get a proper uint32 number in JavaScript

I am trying to convert a long number to a uint in JavaScript, but the result I get is different from the one I already have in C#.
c#:
var tt=431430059159441001;
var t=(UInt32)tt;//1570754153
js:
var arr =new Uint32Array(1);
arr[0]=431430059159441001;//1570754176
Could anybody explain why there is a difference?
That's because your number literal is really a 64-bit integer, and that cannot be represented in JavaScript's regular Number type. The Number type is a 64-bit floating-point number, which can only represent integer values exactly up to around 2**53. So I would recommend not using such a huge number literal.
A recent development in the JavaScript world is BigInts. If you can afford to use them, then your code is easy to fix:
var t = Number(BigInt.asUintN(32, 431430059159441001n));
console.log(t); // 1570754153
This is not about uints, but about floats. JavaScript uses floating point numbers, and your number exceeds the maximum range of integers that can safely be represented:
console.log(431430059159441001)
You cannot convert 431430059159441001 to an unsigned integer in C#: the maximum value of UInt32 is 4294967295, so the var t = (UInt32)431430059159441001; assignment gives a compiler error.
Also, 431430059159441001 is larger than the largest integer JavaScript's Number type can represent exactly (JavaScript holds numbers in floating-point format).
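To see where the C# value 1570754153 comes from, you can redo the cast's truncation by hand with BigInt (a sketch: the cast keeps only the low 32 bits of the value):
var big = 431430059159441001n;
console.log(Number(big % 4294967296n)); // 1570754153, the low 32 bits (4294967296 is 2^32)
console.log(Number(BigInt.asUintN(32, big))); // 1570754153, same result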

Conversion issue for a long string of integers in JavaScript

I'm trying to convert a long string that contains only digits into a number.
var strOne = '123456789123456789122';
parseInt(strOne, 10);
// => 123456789123456800000
var strTwo = '1234567891234567891232';
parseInt(strTwo, 10);
// => 1.234567891234568e+21
The expected output should match strOne and strTwo, but that isn't happening here: converting the string to a number changes the value.
What's the best way to fix this issue?
BigInt is now available in browsers.
BigInt is a built-in object that provides a way to represent whole
numbers larger than 2^53 - 1, which is the largest number JavaScript can
reliably represent with the Number primitive.
value: the numeric value of the object being created; it may be a string or an integer.
var strOne = '123456789123456789122';
var intOne = BigInt(strOne);
var strTwo = '1234567891234567891232';
var intTwo = BigInt(strTwo);
console.log(intOne, intTwo);
Your number is unfortunately too large and gets rounded when the conversion is done.
The largest integer you can express in JavaScript is 2^53-1, it is given by Number.MAX_SAFE_INTEGER, see the MDN doc here.
The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER);
If you want to work with numbers bigger than this limit, you'll have to use a different representation than Number such as String and use a library to handle operations (see the BigInteger library for example).
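For instance, a short sketch contrasting BigInt with a plain Number conversion (assuming BigInt support):
var strOne = '123456789123456789122';
console.log((BigInt(strOne) + 1n).toString()); // "123456789123456789123", exact
console.log(Number(strOne) + 1); // 123456789123456800000, precision already lost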

Facing an issue with parseFloat when input is more than 16 digits

I am facing a weird issue.
parseFloat(11111111111111111) converts it to 11111111111111112.
I noticed that it works fine while the length is up to 16 digits, but it rounds off when the input length is greater than 16.
I want to retain the original value passed in parseFloat after it is executed.
Any help?
Integers (numbers without a period or exponent notation) are considered accurate up to 15 digits.
More information here
Numbers in JavaScript are represented using 64-bit floating-point values (so-called doubles in other languages).
Doubles can hold at most 15-16 significant digits (depending on the number's magnitude). Since the range of a double is about 1.7E+/-308, some numbers can only be approximated by a double; in your case 11111111111111111 cannot be represented exactly and is approximated by the value 11111111111111112. If this sounds strange, remember that 0.3 cannot be represented exactly as a double either.
A double can hold exact integer values in the range +/-2^53; when you are operating in this range you may expect exact values.
Javascript has a constant, Number.MAX_SAFE_INTEGER which is the highest integer that can be exactly represented.
Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect.
The value is 9007199254740991 (2^53 - 1) which makes a maximum of 15 digits safe.
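You can check both claims directly in the console:
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2); // true: precision is gone past the limit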
JavaScript now has BigInt
BigInt is a built-in object that provides a way to represent whole numbers larger than 2^53 - 1, which is the largest number JavaScript can reliably represent with the Number primitive.
BigInt can be used for arbitrarily large integers.
As you can see in the following blog post, JavaScript only supports 53-bit integers.
if you type in the console
var x = 11111111111111111
and then type
x
you'll get
11111111111111112
This has nothing to do with the parseFloat method.
There's also a related question here about working with big numbers in JavaScript.
Try using the unary + operator.
Like this: +("1111111111111111") + 1 === 1111111111111112 (note this only stays exact while the result is within the safe integer range).
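If the goal is to retain every digit past the safe range, a hedged sketch using BigInt (keeping the input as a string, since a number literal is already rounded before parseFloat ever runs):
var input = "11111111111111111"; // keep it as a string
console.log(BigInt(input).toString()); // "11111111111111111", all digits retained
console.log(parseFloat(input)); // 11111111111111112, rounded by the Number type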

Why does JavaScript use the term "Number" as opposed to "Integer"?

Is "Number" in JavaScript entirely synonymous with "Integer"?
What piqued my curiosity:
- PHP, Python, Java and others use the term "Integer"
- JavaScript has the function parseInt() rather than parseNumber()
Are there any details of interest?
Is "Number" in JavaScript entirely synonymous with "Integer"?
No. All numbers in JavaScript are actually 64-bit floating point values.
parseInt() and parseFloat() both return this same data type - the only difference is whether or not any fractional part is truncated.
52 bits of the 64 are for the precision, so this gives you exact signed 53-bit integer values. Outside of this range integers are approximated.
In a bit more detail, all integers from -9007199254740992 to +9007199254740992 are represented exactly (-2^53 to +2^53). The smallest positive integer that JavaScript cannot represent exactly is 9007199254740993. Try pasting that number into a JavaScript console and it will round it down to 9007199254740992. 9007199254740994, 9007199254740996, 9007199254740998, etc. are all represented exactly but not the odd integers in between. The integers that can be represented exactly become more sparse the higher (or more negative) you go until you get to the largest value Number.MAX_VALUE == 1.7976931348623157e+308.
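You can see that sparseness directly in a console:
console.log(9007199254740993 === 9007199254740992); // true: the odd literal rounds down to the nearest representable value
console.log(9007199254740994); // 9007199254740994, exactly representable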
In JavaScript there is a single number type: an IEEE 754 double-precision floating point (what is called number).
This article by D. Crockford is interesting:
http://yuiblog.com/blog/2009/03/10/when-you-cant-count-on-your-numbers/
