How can I handle numbers bigger than 17 digits in Firefox/IE7? - javascript

For a web application I want to be able to handle numbers up to 64 bits in size.
During testing, I found that javascript (or the browser as a whole) seems to handle at most 17 digits. A 64-bit number has a maximum of 20 digits, but after javascript has handled the number, the three least significant digits are rounded and set to 0....
Any ideas where this comes from?
More importantly, any idea how to work around it?

In Javascript, all numbers are IEEE double precision floating point numbers, which means that you only get about 15-17 significant decimal digits; of the 64 bits, 52 hold the mantissa, 11 the exponent, and 1 the sign. As Fabien notes, you will need to work some tricks to get more precision if you need all 64 bits.
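A quick way to see where exact integer arithmetic breaks down (the cutoff is a property of IEEE 754, not of any particular browser):
// 2^53 is the point past which not every integer is representable
var limit = Math.pow(2, 53);          // 9007199254740992
console.log(limit + 1 === limit);     // true - the +1 is lost to rounding
console.log(limit - 1 === limit - 2); // false - below 2^53, integers are exact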

I think you need to treat them as strings once you have reached the Javascript limit (see here)

As others note, JS implements doubles, so you'll have to look elsewhere to handle bigger numbers. BigInt is a library for arbitrary-precision integer math.
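As an aside, modern engines now ship a native bigint type (no help in IE7, which the question targets, but it shows the idea); a quick sketch:
// The n suffix marks a bigint literal; all 20 digits are kept exactly
var big = 12345678901234567890n;
console.log((big + 1n).toString()); // "12345678901234567891" - no rounding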

You could try to split them into two or more numbers (in a class maybe), but you might need some arithmetic helper functions to work with them.
Cheers
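A minimal sketch of that splitting idea, assuming unsigned 64-bit values stored as two 32-bit halves (the {hi, lo} layout and the helper name are made up for illustration):
// Add two 64-bit values represented as {hi, lo} pairs of 32-bit halves
function add64(a, b) {
  var lo = a.lo + b.lo;                  // exact: well below 2^53
  var carry = lo >= 0x100000000 ? 1 : 0; // did the low half overflow?
  return {
    hi: (a.hi + b.hi + carry) >>> 0,     // high 32 bits, wrapped
    lo: lo >>> 0                         // low 32 bits, wrapped
  };
}
console.log(add64({hi: 1, lo: 0xFFFFFFFF}, {hi: 0, lo: 1})); // {hi: 2, lo: 0}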

Related

What's the maximum precision (after the decimal point) of a float in Javascript

An algorithm I'm using needs to squeeze as many levels of precision as possible from a float number in Javascript. I don't mind whether the precision comes from a number that is very large or with a lot of numbers after the decimal point, I just literally need as many numerals in it as possible.
(If you care why, it is for a drag n' drop ranking algorithm which has to deal with a lot of halvings before rebalancing itself. I do also know there are better string-based algorithms but the numerical approach suits my purposes)
The MDN Docs say that:
The JavaScript Number type is a double-precision 64-bit binary format IEEE 754 value, like double in Java or C#. This means it can represent fractional values, but there are some limits to what it can store. A Number only keeps about 17 decimal places of precision; arithmetic is subject to rounding.
How should I best use the "17 decimal places of precision"?
Does the 17 decimal places mean "17 numerals in total, inclusive of those before and after the decimal place"
e.g. (adding underscores to represent thousand-separators for readability)
# 17 numerals: safe
111_222_333_444_555_66
# 17 numerals + decimal point: safe
111_222_333_444_555_6.6
1.11_222_333_444_555_66
# 18 numerals: unsafe
111_222_333_444_555_666
# 18 numerals + decimal point: unsafe
1.11_222_333_444_555_666
111_222_333_444_555_66.6
I assume that the precision of the number determines the number of numerals that you can use and that the position of the decimal point in those numerals is effectively academic.
Am I thinking about the problem correctly?
Does the presence of the decimal point have any bearing on the calculation, or is it simply a matter of the number of numerals present?
Should I assume that 17 numerals is safe / 18 is unsafe?
Does this vary by browser (not just today but over say, a 10 year window, should one assume that browser precision may increase)?
Short answer: you can probably squeeze out 15 "safe" digits, and it doesn't matter where you place your decimal point.
It's anyone's guess how the JavaScript standard is going to evolve and use other number representations.
Notice how the MDN doc says "about 17 decimals"? Right - it's because sometimes you can represent that many digits and sometimes fewer, since the floating point representation doesn't map 1-to-1 to our decimal system.
Even numbers with seemingly less information will give rounding errors.
For example
0.1 + 0.2 => 0.30000000000000004
console.log(0.1 + 0.2);
However, in this case we have a lot of margin in the precision, so you can simply ask for the precision you want in order to get rid of the rounding error:
console.log((0.1 + 0.2).toPrecision(1));
For a larger illustration of this, consider the following snippet:
for (let i = 0; i < 22; i++) {
  console.log(Number.MAX_SAFE_INTEGER / (10 ** i));
}
You will see a lot of rounding errors from digit 16 onward; in some cases even the 16th digit shows a rounding error. If you look here
https://en.wikipedia.org/wiki/IEEE_754
it states that binary64 carries about 15.95 decimal digits. That's why I'd guess 15 digits is the maximum precision you will get out of this.
You'd do your operations and, before saving the number back to any representational form, apply .toPrecision(15).
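For example (a sketch - note that toPrecision returns a string, so convert back if you need a number):
var x = 0.1 + 0.2;                     // 0.30000000000000004
var clean = Number(x.toPrecision(15)); // 0.3 - the rounding noise is gone
console.log(clean);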
Finally, this page has some good explanations: https://floating-point-gui.de/formats/fp/
BTW, I got curious by reading this question so I read up as I wrote this answer. There are many people with better knowledge of this than me.
Does the presence of the decimal point have any bearing on the calculation, or is it simply a matter of the number of numerals present?
Kinda. To answer that, you'll need to look into how 64-bit "double precision" floating point numbers are represented in memory. The "number of numerals" roughly translates into "length of the mantissa", which is indeed fixed and independent of the position of the point. However: these are binary digits and a binary point, not decimal digits and the decimal point. They do not correspond to each other directly. And then there's stuff like subnormal numbers.
Should I assume that 17 numerals is safe / 18 is unsafe?
No. In fact, only 15 decimal numerals would be "safe" if that's the representation you're starting with and want to exactly represent as a double.
Does this vary by browser (not just today but over say, a 10 year window, should one assume that browser precision may increase)?
No, it doesn't vary. The JavaScript number type will always be a 64-bit double.
Am I thinking about the problem correctly?
No.
You say you're considering this in the context of a drag'n'drop ranking algorithm, and you don't want to do this string-based. However, thinking about decimal places in numbers is essentially thinking about the string representation of numbers. Don't do that - either go all the way to strings, or treat numbers as binary.
Since you also mention "rebalancing", I assume you want to use numbers to encode the position of each item in a binary tree. That's a reasonable approach, but you really need to consider the binary representation of the number for that. And you really should use integers there, not floating-point numbers, as the logic would be much more complex otherwise. Start by deciding how many bits you want to use. There are some limitations for each, so choose wisely:
31/32 bits are what the JS bitwise operators for numbers work on. Supported by all browsers.
53 bits are the range of integers you can exactly represent with floating-point numbers. Integer arithmetic will work as expected up to that size; bitwise operations require extra code.
Fixed multiples of 8 (say, 64 bits) are what you can represent with typed arrays. Bitwise operations can be done part-wise, while arithmetic operations require extra code. Or use a BigUint64Array, which gives you the 64 bits as a bigint to calculate with/operate on, but is not supported in old browsers.
Arbitrary precision can be achieved with bigint numbers, which support both bitwise and arithmetic operations, but again don't work in old browsers. Polyfills and bigint libraries are available, though (see the sketch below).
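A rough sketch of the bigint option for this use case (the specific bits set here are only illustrative):
// bigint supports arithmetic and bitwise operators at arbitrary width
var path = 0n;
path |= 1n << 63n;                    // set bit 63 - impossible with plain numbers
path |= 1n << 5n;
console.log(path.toString(2).length); // 64 - no bits were dropped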

In JavaScript, how do I ensure floating point numbers stay under 32bits?

Obviously numbers in JavaScript aren't explicitly typed, but are represented as types by the interpreter. I just saw a thing about Google's V8 JS engine that said it's greatly optimized for 32-bit numbers, but I found it odd that many JS programmers would have a need for doubles even with floating point. The only example I could think of personally is if I'm dividing two integers, which I do often in order to normalize screen coordinates between 0 and 1, and the interpreter is truncating the result at 64 bits instead of 32. This also seems unlikely to me, but then again I don't know how else someone needing such precision would specify it. So now I'm wondering... is there a way to ensure the quotient of two (not gigantic) integers is under 32 bits in length?
I just saw a thing about Google's V8 JS engine that said it's greatly optimized for 32 bit numbers
This only means that V8 does internally store those numbers as integers when it can deduce that they will stay in the respective range. This is common for counters or array indices, for example.
Is there a way to ensure the quotient of two (not gigantic) integers is under 32 bits in length?
No - all arithmetic operations are carried out as if on 64-bit floating point numbers (like all numbers in JS). The only thing you can do is truncate the result back to a 32-bit integer. You can use the unsigned right-shift operator for that, which internally casts its operand to a 32-bit integer:
var q = (a / b) >>> 0;
See What is the JavaScript >>> operator and how do you use it? for details.
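For comparison, | 0 truncates to a signed 32-bit integer while >>> 0 gives an unsigned one; a quick demonstration:
var a = 7, b = 2;
console.log((a / b) >>> 0);  // 3 - unsigned 32-bit truncation
console.log((a / b) | 0);    // 3 - signed 32-bit truncation
console.log((-a / b) | 0);   // -3 - | 0 keeps the sign
console.log((-a / b) >>> 0); // 4294967293 - >>> 0 wraps negatives around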

Make JavaScript Math.sqrt() print more digits

If I write document.write(Math.sqrt(2)) on my HTML page, I get 1.4142135623730951.
Is there any way to make the method output more than 16 decimal places?
No, there is not. Mathematical operations in Javascript are performed using 64-bit floating point values, and 16 digits of precision is right around the limit of what they can represent accurately.
To get more digits of the result, you will need to use an arbitrary-precision math library. I'm not aware offhand of any of these for Javascript that support square roots, though -- the one I was able to find (Big.js) only supports addition, subtraction, and comparisons.
You can use toPrecision. However, the ECMAScript spec only requires support for a precision of up to 21 significant digits:
console.log(Math.sqrt(2).toPrecision(21))
But keep in mind that the precision of real values on computer has some limits (see duskwuff's answer).
See also:
toFixed
toExponential
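A quick comparison of the three formatting methods (each returns a string):
var r = Math.sqrt(2);
console.log(r.toPrecision(21));  // 21 significant digits (those past ~17 are noise)
console.log(r.toFixed(10));      // "1.4142135624" - 10 digits after the point
console.log(r.toExponential(5)); // "1.41421e+0" - exponential notation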

Math.sqrt() returns infinity?

Math.sqrt() seems to work fine with any number less than 310 characters long.
However, any number 310 characters or over returns Infinity...
If you want to test it out yourself, here it is on jsfiddle http://jsfiddle.net/gqhk9/2
Anyway, I need to get the square root of numbers, including some that are 310 characters and longer.
How can I do that in js?
It's not an issue with Math.sqrt - get rid of the Math.sqrt call and you'll still see infinity. Basically, Javascript can't cope with numbers that big - it runs out of the range of 64-bit floating point IEEE 754 values. You'll need to find some sort of library for handling arbitrary-sized integers.
Note that even for numbers smaller than 10^309, you're still going to be losing information after the first ~15 digits. If you care about all of those digits, again you should be looking at specialist maths libraries.
A quick look around the web found BigInt.js referenced a few times, but I don't know how good it is.
Look at Number.MAX_VALUE.
The MAX_VALUE property has a value of approximately 1.79E+308.
Values larger than MAX_VALUE are represented as "Infinity".
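A quick illustration:
console.log(Number.MAX_VALUE); // 1.7976931348623157e+308
console.log(1e309);            // Infinity - past MAX_VALUE
console.log(Math.sqrt(1e309)); // Infinity - sqrt of Infinity is Infinity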
Javascript numbers cannot be that big.
If you type
javascript:123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
in your address bar, you'll also get Infinity.
You need to use a bignum library.
The number that you are starting with, 1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890, is Infinity, and Math.sqrt(Infinity) is Infinity.
What you need is a big integer library to simulate it, for example, http://www.leemon.com/crypto/BigInt.html; then with that you can take your big integer to the power of 0.5 to calculate the square root.
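If a native bigint type is available (it wasn't when this was asked), an integer square root can also be computed directly with Newton's method; an untested sketch:
// Integer square root of a non-negative bigint via Newton's method
function isqrt(n) {
  if (n < 2n) return n;
  var x = n;
  var y = (x + 1n) / 2n; // bigint division truncates, which is what we want
  while (y < x) {        // iterate until the estimate stops shrinking
    x = y;
    y = (x + n / x) / 2n;
  }
  return x;              // floor(sqrt(n))
}
console.log(isqrt(10n ** 310n).toString().length); // 156 digits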

JavaScript 64 bit numeric precision

Is there a way to represent a number with higher than 53-bit precision in JavaScript? In other words, is there a way to represent 64-bit precision number?
I am trying to implement some logic in which each bit of a 64-bit number represents something. I lose the least significant bits when I try to set bits higher than 2^53.
Math.pow(2,53) + Math.pow(2,0) == Math.pow(2,53)
Is there a way to implement a custom library or something to achieve this?
Google's Closure library has goog.math.Long for this purpose.
The GWT team has added long emulation support, so Java longs really hold 64 bits. Do you want 64-bit floats or whole numbers?
I'd just use either an array of integers or a string.
Numbers in javascript are doubles; I think there is a rounding error involved in your equation.
Perhaps I should have added some technical detail. Basically, the GWT long emulation uses a tuple of two numbers, the first holding the high 32 bits and the second the low 32 bits of the 64-bit long.
The library of course contains methods for things like adding two "longs" and getting a "long" result. Within your GWT Java code it just looks like two regular longs - one doesn't need to fiddle with or be aware of the tuple. By using this approach GWT avoids the problem you're probably alluding to, namely "longs" dropping the lower bits of precision, which isn't acceptable in many cases.
Whilst floats are by definition imprecise approximations of a value, a whole number like a long isn't. GWT always holds a 64-bit long - maths using such longs never loses precision. The exception to this is overflow, but that accurately matches what occurs in Java etc. when you add two very large long values whose sum requires more than 64 bits - e.g. 2^63-1 + 2^63-1.
Doing the same for floating point numbers would require a similar approach: you would need a library that uses a tuple.
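A bare-bones sketch of that tuple idea, applied to the bit-flag use case from the question (these helper names are illustrative, not GWT's API):
// A 64-bit flag set stored as two 32-bit halves
function makeFlags() { return { hi: 0, lo: 0 }; }
function setBit(f, i) { // i in 0..63
  if (i < 32) f.lo = (f.lo | (1 << i)) >>> 0;
  else f.hi = (f.hi | (1 << (i - 32))) >>> 0;
}
function testBit(f, i) {
  var half = i < 32 ? f.lo : f.hi;
  return ((half >>> (i % 32)) & 1) === 1;
}
var f = makeFlags();
setBit(f, 63);               // would be silently lost with a plain number
console.log(testBit(f, 63)); // true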
The following library might work for you; I haven't tested it yet, however:
BigDecimal for JavaScript
Yes - 11 bits are reserved for the exponent, and only 52 bits contain the value, also called the fraction (the remaining bit is the sign).
Javascript allows bitwise operations on numbers, but only the first 32 bits are used in those operations, according to the Javascript standard specification.
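For example:
console.log(Math.pow(2, 32) | 0); // 0 - bit 32 falls off in the 32-bit conversion
console.log(Math.pow(2, 31) | 0); // -2147483648 - wraps to signed 32-bit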
I do not understand the misleading GWT/Java long answers to a Javascript double question, though - Javascript is not Java.
Why would anyone need 64-bit precision in javascript?
Longs sometimes hold the IDs of things in a DB, so it's important not to lose any of the lower bits... but floating point numbers are most of the time used for calculations. Using floats to hold monetary or similarly exacting values is plain wrong. If you truly need 64-bit precision, do the maths on the server, where it's faster, and so on.
