JavaScript 64-bit numeric precision

Is there a way to represent a number with more than 53 bits of precision in JavaScript? In other words, is there a way to represent a 64-bit precision number?
I am trying to implement some logic in which each bit of a 64-bit number represents something. I lose the lower significant bits when I try to set bits higher than 2^53.
Math.pow(2,53) + Math.pow(2,0) == Math.pow(2,53)
Is there a way to implement a custom library or something to achieve this?

Google's Closure library has goog.math.Long for this purpose.
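A rough usage sketch, assuming a Closure Library setup (the exact method names should be checked against the current goog.math.Long docs):

goog.require('goog.math.Long');

// 2^53 + 1 cannot be stored exactly in a Number, but a Long keeps all 64 bits.
var big = goog.math.Long.fromString('9007199254740993');
var one = goog.math.Long.fromNumber(1);
console.log(big.add(one).toString()); // "9007199254740994"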

The GWT team has added long emulation support, so Java longs really hold 64 bits. Do you want 64-bit floats or whole numbers?

I'd just use either an array of integers or a string.
Numbers in JavaScript are doubles; I think there is a rounding error involved in your equation.

Perhaps I should have added some technical detail. Basically the GWT long emulation uses a tuple of two numbers, the first holding the high 32 bits and the second the low 32 bits of the 64 bit long.
The library of course contains methods for things like adding two "longs" and getting a "long" result. Within your GWT Java code it just looks like two regular longs - one doesn't need to fiddle with or be aware of the tuple. By using this approach GWT avoids the problem you're probably alluding to, namely "longs" dropping the lower bits of precision, which isn't acceptable in many cases.
Whilst floats are by definition imprecise approximations of a value, a whole number like a long isn't. GWT always holds a 64-bit long - maths using such longs never loses precision. The exception to this is overflow, but that accurately matches what occurs in Java etc. when you add two very large long values whose sum requires more than 64 bits - e.g. (2^63 - 1) + (2^63 - 1).
Doing the same for floating point numbers would require a similar approach: you would need a library that uses a tuple.
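A minimal sketch of the same high/low idea in plain JavaScript, good enough for setting and testing individual bits; the Int64 name and its methods are invented for illustration:

// Hold a 64-bit value as two unsigned 32-bit halves, like the GWT tuple.
function Int64(hi, lo) {
  this.hi = hi >>> 0; // upper 32 bits
  this.lo = lo >>> 0; // lower 32 bits
}
Int64.prototype.setBit = function (n) {
  if (n < 32) this.lo = (this.lo | (1 << n)) >>> 0;
  else this.hi = (this.hi | (1 << (n - 32))) >>> 0;
  return this;
};
Int64.prototype.testBit = function (n) {
  return n < 32 ? ((this.lo >>> n) & 1) === 1 : ((this.hi >>> (n - 32)) & 1) === 1;
};
var flags = new Int64(0, 0).setBit(0).setBit(53);
console.log(flags.testBit(0), flags.testBit(53)); // true true -- no bits lost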

The following might work for you; I haven't tested it yet, however:
BigDecimal for JavaScript

Yes - 11 bits are reserved for the exponent and only 52 bits contain the value, also called the fraction (plus one sign bit).
JavaScript allows bitwise operations on numbers, but only the first 32 bits are used in those operations, according to the JavaScript standard specification.
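Both limits are easy to see in a console (plain JavaScript, nothing assumed beyond the language itself):

// 53-bit integer limit of the double format:
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53)); // true -- the +1 is lost
// Bitwise operators first convert the operand to a 32-bit integer, discarding higher bits:
console.log((Math.pow(2, 32) + 5) | 0); // 5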
I do not understand the misleading GWT/Java/long answers to a JavaScript/double question, though. JavaScript is not Java.

Why would anyone need 64-bit precision in JavaScript?
Longs sometimes hold IDs of things in a DB, so it's important not to lose any of the lower bits... but floating point numbers are most of the time used for calculations. Using floats to hold monetary or similarly exact values is plain wrong. If you truly need 64-bit precision, do the maths on the server, where it's faster and so on.

Related

NaN payload storing 64 bit ints

I recently found out about NaN boxing. I'm trying to build a dynamically typed programming language, and NaN boxing seemed like the right choice for representing the types, but I'm still quite confused about it. One thing I don't understand is how I would store data like a 64-bit integer. One way I think would work is heap-allocating them, since that decreases the size (the boxed value then points to the memory rather than holding the whole integer), but wouldn't that be slow and inefficient? Part of this question comes from my other confusion: how can JavaScript represent numbers as high as Number.MAX_SAFE_INTEGER if it uses IEEE 754 doubles (wouldn't it be limited to 32-bit ints then)? Sorry if I sound stupid, I'm just new to this kind of bitwise thing.
I think I got it - I was really being stupid. Since NaNs are doubles anyway and doubles are 64 bits, I could represent all numbers as doubles like JavaScript does, and store a value as a 32-bit int when possible, because ints are faster. I'd like further feedback if possible.
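To illustrate the Number.MAX_SAFE_INTEGER point: a double's 52-bit fraction plus the implicit leading bit means every integer up to 2^53 is represented exactly, which is what that constant reports. A quick check (plain JavaScript, no assumptions):

console.log(Number.MAX_SAFE_INTEGER === Math.pow(2, 53) - 1); // true
console.log(Math.pow(2, 53) - 1); // 9007199254740991 -- exact
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53)); // true -- the first gap appears here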

In JavaScript, how do I ensure floating point numbers stay under 32 bits?

Obviously numbers in JavaScript aren't explicitly typed, but are represented as types by the interpreter. I just saw a thing about Google's V8 JS engine that said it's greatly optimized for 32-bit numbers, but found it odd that many JS programmers would have a need for doubles even with floating point. The only example I could think of personally is when I'm dividing two integers, which I do often in order to normalize screen coordinates between 0 and 1, and the interpreter is truncating the result at 64 bits instead of 32. This also seems unlikely to me, but then again I don't know how else someone needing such precision would specify it. So now I'm wondering... is there a way to ensure the quotient of two (not gigantic) integers is under 32 bits in length?
I just saw a thing about Google's V8 JS engine that said it's greatly optimized for 32 bit numbers
This only means that V8 does internally store those numbers as integers when it can deduce that they will stay in the respective range. This is common for counters or array indices, for example.
Is there a way to ensure the quotient of two (not gigantic) integers is under 32 bits in length?
No - all arithmetic operations are carried out as if on 64-bit floating point numbers (like all numbers in JS). The only thing you can do is truncate the result back to a 32-bit integer. You can use the unsigned right shift operator for that, which internally casts its operands to 32-bit integers:
var q = (a / b) >>> 0;
See What is the JavaScript >>> operator and how do you use it? for details.
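For example (values chosen only for illustration), >>> 0 keeps the result in the unsigned 32-bit range, while | 0 gives a signed 32-bit integer:

var a = 10, b = 3;
console.log(a / b);         // 3.3333333333333335 (a regular double)
console.log((a / b) >>> 0); // 3 -- unsigned 32-bit truncation
console.log((a / b) | 0);   // 3 -- signed 32-bit truncation
console.log(-1 >>> 0);      // 4294967295 -- note the unsigned reinterpretation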

What is the best way to store very large binary numbers in JavaScript?

I'm currently learning JavaScript and I'm very surprised there is no built-in way to work with binary numbers. I have to work with integers up to 2^4096, so I just can't use JS numbers for my calculations.
My first thought was to use arrays of 1s and 0s. But that's not satisfying, since a simple multiplication by 2 requires shifting the whole array.
So how can I work efficiently with binary numbers in JavaScript?
I'd like not to use any library. I'm interested in how it works, not in using other's abstraction.
JavaScript doesn't have any big integer type, so you would need to use an array to hold that much information.
An array of 0 and 1 values would make it easy to implement functions for it, but it would not be very efficient.
A regular number in JavaScript is a double precision floating point number, so it can hold 52 bits of numeric information (ref), but you would use slightly less to stay far away from any rounding errors, for example 48 bits.
The bitwise operators in JavaScript work with 32-bit integers, i.e. a double is converted to a 32-bit integer when used with a bitwise operator. If you want to use the bitwise operators on the data, you could choose to store 32 bits per item in the array.
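A hedged sketch of that last approach: keep the number as an array of 32-bit limbs, least significant limb first, and implement doubling by carrying the top bit of each limb into the next, instead of shifting the whole array. The helper name is invented for illustration.

// Big unsigned integer as an array of 32-bit limbs, least significant first.
function doubleBig(limbs) {
  var carry = 0;
  for (var i = 0; i < limbs.length; i++) {
    var top = limbs[i] >>> 31;                  // bit that overflows this limb
    limbs[i] = ((limbs[i] << 1) | carry) >>> 0;
    carry = top;
  }
  if (carry) limbs.push(1);                     // grow by one limb on overflow
  return limbs;
}
var n = [0x80000000];      // 2^31
console.log(doubleBig(n)); // [0, 1] -- i.e. 2^32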
JavaScript only supports integers up to 53 bits exactly.
The best way to store "big integers" is to convert them to strings on the server side. And in case you want to manipulate them, I would suggest looking at this library: https://github.com/rauschma/strint

Make JavaScript Math.sqrt() print more digits

If I write document.write(Math.sqrt(2)) on my HTML page, I get 1.4142135623730951.
Is there any way to make the method output more than 16 decimal places?
No, there is not. Mathematical operations in JavaScript are performed using 64-bit floating point values, and 16 digits of precision is right around the limit of what they can represent accurately.
To get more digits of the result, you will need to use an arbitrary-precision math library. I'm not aware offhand of any of these for JavaScript that support square roots, though -- the one I was able to find (Big.js) only supports addition, subtraction, and comparisons.
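If it really is a square root you need more digits of, one hedged workaround (this assumes a modern engine with native BigInt support, which postdates most of those libraries) is to compute an integer square root of a scaled value with Newton's method:

// Sketch: floor(sqrt(n)) for a BigInt n via Newton's method.
function isqrt(n) {
  if (n < 2n) return n;
  let x = n, y = (x + 1n) / 2n;
  while (y < x) {
    x = y;
    y = (x + n / x) / 2n;
  }
  return x;
}
// sqrt(2) to 30 decimal places: take isqrt(2 * 10^60) and place the decimal point.
const s = isqrt(2n * 10n ** 60n).toString();
console.log(s[0] + '.' + s.slice(1)); // 1.414213562373095048801688724209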
You can use toPrecision. However, ECMAScript only requires support for a precision of up to 21 significant digits:
console.log(Math.sqrt(2).toPrecision(21))
But keep in mind that the precision of real values on computers has limits (see duskwuff's answer).
See also:
toFixed
toExponential

How can I handle numbers bigger than 17 digits in Firefox/IE7?

For a web application I want to be able to handle numbers up to 64 bits in size.
During testing, I found that JavaScript (or the browser as a whole) seems to handle at most 17 digits. A 64-bit number has a maximum of 20 decimal digits, but after JavaScript has handled the number, the three least significant digits are rounded and set to 0....
Any ideas where this comes from?
More importantly, any idea how to work around it?
In JavaScript, all numbers are IEEE double precision floating point numbers, which means that you only have about 16 digits of precision; the remaining bits of the 64 are used for the exponent and sign. As Fabien notes, you will need to work some tricks to get more precision if you need all 64 bits.
I think you need to treat them as strings if you have reached the JavaScript limit (see here).
As others note, JS implements doubles, so you'll have to look elsewhere to handle bigger numbers. BigInt is a library for arbitrary-precision math for integers.
You could try to split them into two or more numbers (in a class, maybe), but you might need some arithmetic helper functions to work with them.
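A hedged sketch of that split, covering unsigned 64-bit values and addition only; the UInt64 name and its methods are invented for illustration:

// Keep a 64-bit unsigned value as two numbers, each well inside a double's exact range.
function UInt64(hi, lo) {
  this.hi = hi >>> 0;
  this.lo = lo >>> 0;
}
UInt64.prototype.add = function (other) {
  var lo = this.lo + other.lo;
  var carry = lo > 0xFFFFFFFF ? 1 : 0;
  var hi = (this.hi + other.hi + carry) >>> 0;   // wraps on 64-bit overflow
  return new UInt64(hi, lo >>> 0);
};
UInt64.prototype.toHex = function () {
  return '0x' + this.hi.toString(16) + ('00000000' + this.lo.toString(16)).slice(-8);
};
var x = new UInt64(0x1, 0xFFFFFFFF);
console.log(x.add(new UInt64(0, 1)).toHex()); // 0x200000000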
Cheers
