Multiplication of the same numbers in Java and JavaScript giving different values

I have an expression which I am evaluating both in Java and JavaScript.
Java
System.out.println(582344008L * 719476260);
Output: 418982688909250080
JavaScript
document.write(582344008 * 719476260)
Output: 418982688909250050
Why is there a difference in both values?

JavaScript numbers are all IEEE 754 doubles, so it looks like there is some floating-point rounding error going on here. See "Difference between floats and ints in JavaScript?" for some more details.

Preamble
As @Charlie Wallace says,
582344008 * 719476260 is greater than Number.MAX_SAFE_INTEGER
What is JavaScript's highest integer value that a number can go to without losing precision?
As @baseballlover723 and @RealSkeptic say, it is a floating-point precision error: a rounding error caused by the different sizes of the storage types.
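A quick way to check that claim from any JavaScript console (a sketch; nothing here is engine-specific):
// The exact product needs 18 significant digits, far more than a double can hold exactly.
console.log(Number.MAX_SAFE_INTEGER);                          // 9007199254740991
console.log(582344008 * 719476260 > Number.MAX_SAFE_INTEGER);  // true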
See JavaScript types:
What are the 7 primitive data types in JavaScript? There are 7 basic types in JavaScript.
number for numbers of any kind: integer or floating-point, with roughly 15–17 significant decimal digits of precision. JavaScript numbers are all floating point, stored according to the IEEE 754 standard. That standard has several formats; JavaScript uses binary64, or double precision. As the former name indicates, numbers are stored in a binary format, in 64 bits.
string for strings. A string may have one or more characters, there’s no separate single-character type.
boolean for true/false.
null for unknown values – a standalone type that has a single value null.
undefined for unassigned values – a standalone type that has a single value undefined.
object for more complex data structures.
symbol for unique identifiers.
The typeof operator allows us to see which type is stored in a variable.
Two forms: typeof x or typeof(x).
Returns a string with the name of the type, like "string".
For null returns "object" – this is an error in the language, it’s not actually an object.
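A small sketch of typeof applied to each of the types listed above (same result in any standards-compliant engine):
typeof 42           // "number"
typeof "hello"      // "string"
typeof true         // "boolean"
typeof undefined    // "undefined"
typeof Symbol("id") // "symbol"
typeof {}           // "object"
typeof null         // "object"  (the language quirk noted above)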
What are the 8 primitive data types in Java?
byte for 8-bit signed integer
short for 16-bit signed integer
int for 32-bit signed integer
long for 64-bit signed integer
char for a single 16-bit Unicode character (two bytes)
float for floating-point values with about 7 decimal digits of precision (32 bits)
double for floating-point values with about 16 decimal digits of precision (64 bits)
boolean for true/false
Values with a fractional component are called floating-point values. They can be expressed in either standard or scientific notation.
Testing
A quick way to test is to use an online Java compiler (Compile and Execute Java Online (JDK 1.8.0)):
public class HelloWorld {
    public static void main(String[] args) {
        // * binds tighter than +, so each product is computed before
        // being concatenated to the label string.
        System.out.println("Long Output: " + 582344008L * 719476260L);    // exact 64-bit long arithmetic
        System.out.println("Float Output: " + 582344008.0 * 719476260.0); // double (binary64) arithmetic
    }
}
The command:
$javac HelloWorld.java
$java -Xmx128M -Xms16M HelloWorld
The Output:
Long Output: 418982688909250080
Float Output: 4.1898268890925005E17
My desktop calculator shows:
4.189826889E17
and for JavaScript online compiler:
//JavaScript-C24.2.0 (SpiderMonkey)
print("Output: "+582344008 * 719476260)
The Output:
Output: 418982688909250050
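If the goal is to reproduce Java's exact 64-bit integer result in JavaScript, one option (assuming an engine with BigInt support, ES2020+) is to use BigInt literals, which are not limited to double precision:
// BigInt arithmetic is exact, arbitrary-precision integer arithmetic.
console.log((582344008n * 719476260n).toString()); // 418982688909250080
// Plain Number arithmetic rounds the product to the nearest double.
console.log(582344008 * 719476260);                // 418982688909250050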

Related

Why are the same 32-bit floats different in JavaScript and Rust?

In JavaScript, 38_579_240_960 doesn't change when converted to a 32-bit float:
console.log(new Float32Array([38_579_240_960])[0]); // 38579240960
But in Rust, it gets rounded to 38579240000. How come?
fn main() {
    println!("{}", 38_579_240_960_f32); // 38579240000
}
While 38,579,240,960 can be represented exactly as an IEEE-754 32-bit floating-point number, the trailing 960 is not significant. The 24-bit mantissa can only express about 7 meaningful decimal digits. The next representable values above and below are 38,579,245,056 and 38,579,236,864, so 38,579,240,960 is the closest representable value for any number within a few thousand of it.
So even if you add 1000 to the value, neither language changes its output:
38579240960
38579240000
So the difference is that JavaScript is printing out the exact value that is represented while Rust is only printing out the minimum digits to uniquely express it.
If you want the Rust output to look like JavaScript's, you can specify the precision like so (playground):
println!("{:.0}", 38579240960f32); // display all digits up until the decimal
38579240960
I wouldn't necessarily call either one right or wrong; however, one advantage of Rust's default formatting is that you don't get a false sense of precision.
See also:
How do I print a Rust floating-point number with all available precision?
Rust: Formatting a Float with a Minimum Number of Decimal Points
Your code snippets are not equivalent. JS prints f64, and Rust prints f32.
JavaScript does not have a 32-bit float type. When you read an element out of a Float32Array it is implicitly converted to a 64-bit double, because that is the only way JS can see the value.
If you do the same in Rust, it prints the same value:
println!("{}", 38_579_240_960_f32 as f64);
// 38579240960
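On the JavaScript side, the same round-to-nearest-f32 behaviour can be observed with Math.fround, which rounds a double to the nearest 32-bit float and widens it back (a sketch; the values follow from IEEE-754 rounding):
console.log(Math.fround(38_579_240_960));        // 38579240960 (exactly representable as f32)
console.log(Math.fround(38_579_240_960 + 1000)); // 38579240960 (rounded back down)
console.log(Math.fround(38_579_245_056));        // 38579245056 (the next representable f32)
This is the same rounding Rust applies when it parses the f32 literal; the visible difference is only in how many digits each language chooses to print.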

What kinds of values can a Number type hold?

I've read somewhere that a JavaScript number can hold both a 64-bit number AND a 64-bit Integer, is this true? I'm still a little confused on this stuff.
A Number can contain an integer or a floating-point number. Here are the max/min values for JS:
console.log('float min:',Number.MIN_VALUE);
console.log('float max:',Number.MAX_VALUE);
console.log('int min:',Number.MIN_SAFE_INTEGER);
console.log('int max:',Number.MAX_SAFE_INTEGER);
More details in the ES2018 specification.
JavaScript only has a single numeric type for primitives - it's an IEEE 754 standard 64-bit double-precision floating-point number.
Link to the specifications
Link to MDN
So, there are no integers in JavaScript - everything is a float.
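A brief sketch of what that means in practice:
// All arithmetic is done on doubles; integer values stay exact only up to 2**53.
console.log(Number.isInteger(42));      // true (42 is simply a double with no fractional part)
console.log(2 ** 53);                   // 9007199254740992
console.log(2 ** 53 === 2 ** 53 + 1);   // true (the +1 is lost to rounding)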

How to obtain only the integer part of a long floating precision number with JS?

I know there's
Math.floor
parseInt
But what about this case:
Math.floor(1.99999999999999999999999999)
returning 2, how could I obtain only its integer part, equal to 1?
1.99999999999999999999999999 is not the actual value of the number. It has a value of 2 after the literal is parsed, because that is the best JavaScript can do to represent such a value.
JavaScript numbers are IEEE-754 binary64 or "double precision" which allow for 15 to 17 significant [decimal] digits of precision - the literal shown requires 27 significant digits which results in information loss.
Test: 1.99999999999999999999999999 === 2 (which results in true).
Here is another answer of mine that describes the issue; the finite relative precision is the problem.
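A short demonstration that the information is already gone by the time the literal has been parsed, so no rounding function can recover it (same behaviour in any engine):
const x = 1.99999999999999999999999999; // parsed to exactly 2.0
console.log(x === 2);                   // true
console.log(Math.floor(x));             // 2
// The closest double strictly below 2 does floor to 1:
console.log(Math.floor(1.9999999999999998)); // 1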

number to string conversion error in javascript

Did you ever try to convert a big number to a string in JavaScript?
Please try this:
var n = 10152557636804775;
console.log(n); // outputs 10152557636804776
Can you help me understand why?
10152557636804775 is higher than the maximum integer that can be safely represented in JavaScript (Number.MAX_SAFE_INTEGER). See also this post for more details.
From MDN (emphasis is mine):
The MAX_SAFE_INTEGER constant has a value of 9007199254740991. The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
To check whether a given variable can be safely represented as an integer (without representation errors) you can use Number.isSafeInteger():
var n = 10152557636804775;
console.assert(Number.isSafeInteger(n) == false);
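If the exact digits matter, one workaround (assuming BigInt support, ES2020+) is to keep the value as a BigInt or a string rather than a Number:
const big = 10152557636804775n;  // BigInt literal keeps every digit
console.log(big.toString());     // "10152557636804775"
console.log(Number(big));        // 10152557636804776 (rounded once converted to a double)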

Why does JavaScript use the term "Number" as opposed to "Integer"?

Is "Number" in JavaScript entirely synonymous with "Integer"?
What piqued my curiosity:
- PHP, Python, Java and others use the term "Integer"
- JavaScript has the function parseInt() rather than parseNumber()
Are there any details of interest?
Is "Number" in JavaScript entirely synonymous with "Integer"?
No. All numbers in JavaScript are actually 64-bit floating point values.
parseInt() and parseFloat() both return this same data type - the only difference is whether or not any fractional part is truncated.
52 bits of the 64 are for the precision, so this gives you exact signed 53-bit integer values. Outside of this range integers are approximated.
In a bit more detail, all integers from -9007199254740992 to +9007199254740992 are represented exactly (-2^53 to +2^53). The smallest positive integer that JavaScript cannot represent exactly is 9007199254740993. Try pasting that number into a JavaScript console and it will round it down to 9007199254740992. 9007199254740994, 9007199254740996, 9007199254740998, etc. are all represented exactly but not the odd integers in between. The integers that can be represented exactly become more sparse the higher (or more negative) you go until you get to the largest value Number.MAX_VALUE == 1.7976931348623157e+308.
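A sketch of both points, the 2^53 boundary and the parseInt()/parseFloat() distinction:
console.log(9007199254740993);   // 9007199254740992 (odd integers just above 2**53 are not representable)
console.log(9007199254740994);   // 9007199254740994 (even ones in this range still are)
console.log(parseInt("3.75"));   // 3 (same Number type, fraction discarded)
console.log(parseFloat("3.75")); // 3.75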
In JavaScript there is a single number type: an IEEE 754 double-precision floating point (what is called number).
This article by D. Crockford is interesting:
http://yuiblog.com/blog/2009/03/10/when-you-cant-count-on-your-numbers/
