What's the biggest BigInt value in js as per spec - javascript

It turns out (after a bit of thought it's more obvious, but whatever) that BigInt, recently introduced to JavaScript, has a limit:
My question would be - is there a constant similar to Number.MAX_SAFE_INTEGER but for BigInt?
This snippet of code:
let a = 2n, step = 1;
try {
  while (true) {
    console.log(step);
    a = a ** 2n;
    step++;
  }
} catch (e) { console.log(e); }
This shows that the limit is hit at about step = 32 - at least in Chrome. But I wonder what this value is as per the spec.

It seems like there is no maximum limit to a BigInt as per spec, which makes sense considering BigInts are supposed to be arbitrary-precision integers, whose "digits of precision are limited only by the available memory of the host system".
As for V8 specifically, according to this article on the V8 blog, the precision of BigInts is "arbitrary up to an implementation-defined limit". Unfortunately, I couldn't find any further information on how the limit is determined. Maybe someone else would be able to shed light on this based on these V8 BigInt implementation notes?
That said, based on the aforementioned articles, there doesn't seem to be a specific maximum value/size for a BigInt. Rather, it is likely determined based on the available memory on the system in some way.
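One quick, if crude, way to see the implementation-defined limit in practice is to force an oversized BigInt and catch the resulting error. This is only a sketch based on observed V8 (Chrome/Node) behavior - the error message and threshold are engine-specific assumptions, and other engines may behave differently:
// Sketch: in V8, exceeding the internal BigInt size cap throws a RangeError
// instead of exhausting memory. The exact message/threshold is engine-specific.
try {
  // Needs roughly 2^40 bits of storage, far beyond what current engines allow.
  const huge = 1n << (1n << 40n);
  console.log(huge > 0n);
} catch (e) {
  console.log(e instanceof RangeError, e.message); // e.g. "Maximum BigInt size exceeded" in V8
}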

The maximum size of a BigInt in WebKit is defined as follows:
// The maximum length that the current implementation supports would be
// maxInt / digitBits. However, we use a lower limit for now, because
// raising it later is easier than lowering it.
// Support up to 1 million bits.
static constexpr unsigned maxLength = 1024 * 1024 / (sizeof(void*) * bitsPerByte);
The size of void* is platform dependent: 8 bytes on 64-bit systems, which makes maxLength 16384. Note that maxLength counts digits (machine words), not bits, so that is 16384 × 64 bits, i.e. the one million bits the comment mentions.
So there's your answer, right? But I can't create anywhere near that large a number in the console.

The size has arbitrary precision, per 4.3.25 BigInt value, though oddly this is not mentioned in the 20.2 BigInt Objects section.
Here is a quick test program:
/* global BigInt */
let b = BigInt(10);
let exp = 1;
while (true) {
  console.log(`BigInt of 10^${exp}`);
  b = b * b;
  exp *= 2;
}
output with Node v13.2:
BigInt of 10^1
BigInt of 10^2
BigInt of 10^4
...
BigInt of 10^4194304
BigInt of 10^8388608
BigInt of 10^16777216
Performance really drags after about 10 to the millionth.
While there may be a platform specific maximum in a specific browser, the size requirement to show it would be large. Even 10^10^6 takes over 300K to store. You could extend the spec to add a limit with Tetration, e.g., "limit is about 10^10^10^... 8 times", but, seriously, that would be silly.

It turns out the cap is 2^30 bits, so the largest value is 2^(2^30) - 1. Any more bits and it wouldn't work. (I was unable to construct the exact value, since a single bit shift that large already exceeds the cap.) To put that in perspective, that number has roughly as many decimal digits (about 3.2 × 10^8) as there are people in the US!
(Wolfram Alpha link for the number of digits)
Tested on a modern Chrome browser.
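If you want to measure the cap on whatever engine you happen to be running, a binary search over the shift amount narrows it down in a few dozen iterations. This is just a probe sketch: the 2^40 upper bound is an arbitrary assumption, and iterations near the limit can allocate very large BigInts, so expect it to be slow:
// Probe: find the largest n for which `1n << BigInt(n)` still succeeds,
// i.e. the largest power of two the engine can represent.
function fits(bits) {
  try {
    1n << BigInt(bits); // the result needs bits + 1 bits of storage
    return true;
  } catch (e) {
    return false;
  }
}

let lo = 0, hi = 2 ** 40; // assume the real limit is well below 2^40 bits
while (lo < hi) {
  const mid = Math.ceil((lo + hi) / 2);
  if (fits(mid)) lo = mid; else hi = mid - 1;
}
console.log(`largest representable power of two: 2 ** ${lo}`);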

Related

What's the point of having values above Number.MAX_SAFE_INTEGER?

JavaScript has a limitation on integer precision defined by Number.MAX_SAFE_INTEGER. Any number that goes beyond this runs the risk of being inaccurate, because it cannot be represented exactly in a 64-bit float, as outlined in this answer:
var max = Number.MAX_SAFE_INTEGER;
max + 0; // 9,007,199,254,740,991
max + 1; // 9,007,199,254,740,992
max + 2; // 9,007,199,254,740,992 (!)
max + 3; // 9,007,199,254,740,994
max + 4; // 9,007,199,254,740,996 (!)
// ...
Any result you get above max is very unreliable, making it almost entirely useless, and prone to leading to bugs if the developer isn't expecting it or aware of this limitation.
My question is, what's the point of having numbers beyond this value? Is there any practical use for JavaScript to allow a Number beyond this? It can reach Number.MAX_VALUE (1.79e+308), which is substantially larger than MAX_SAFE_INTEGER, but everything in between the two values is not very useful, and it forces the developer to switch to BigInt.
everything in between the two values is not very useful, and it forces the developer to switch to BigInt.
It can be useful if absolute precision isn't required. When dealing with huge numbers, it's not very usual to require that much.
Take incremental games for an example. They often need to represent huge numbers, but the insignificant tail end of the values (for example, the thousands, when the numbers being dealt with are on the order of 1e20) isn't important and can be safely discarded without impacting functionality.
BigInts are a somewhat new thing too. Before they existed, it was still useful to be able to represent huge numbers even if their least significant digits were slightly off: better to represent huge numbers with a little imprecision than not to be able to represent them at all.
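To make the "insignificant tail" point concrete, here is a small illustration (mine, not from the answer above) of how additions far below a double's resolution at 1e20 simply vanish - usually harmless for display-oriented values like game scores:
// At 1e20 the spacing between adjacent doubles is 16384, so adding a few
// thousand is below the resolution and gets rounded away entirely.
const score = 1e20;
console.log(score + 1000 === score);  // true: the thousands are lost
console.log(score + 20000 - score);   // 16384: rounded to the nearest representable step
console.log(Number.EPSILON * score);  // ~22204.46: roughly the resolution at this magnitude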

(Novice Programmer) mod(3^146, 293) among others returning the same incorrect values in Matlab and JS

First note that mod(3^146,293)=292. For some reason, inputting mod(3^146,293) in Matlab returns 275. Inputting Math.pow(3,146) % 293 in JS returns 275. This same error occurs (as far as I can tell) every time. This leads me to believe I am missing something obvious but cannot seem to tell what.
Any help is much appreciated.
As discussed in the answers to this related question, MATLAB uses double-precision floating point numbers by default, which have limits on their resolution (i.e. the floating point relative accuracy, eps). For example:
>> a = 3^146
a =
4.567759074507741e+69
>> eps(a)
ans =
7.662477704329444e+53
In this case, 3^146 is on the order of 10^69 and the relative accuracy is on the order of 10^53. With only 16 digits of precision, a double can't store the exact integer representation of an arbitrary 70-digit integer.
An alternative in MATLAB is to use the Symbolic Toolbox to create symbolic numbers with a greater resolution. This gives you the answer you expect:
>> a = sym('3^146')
a =
4567759074507740406477787437675267212178680251724974985372646979033929
>> mod(a, 293)
ans =
292
Math.pow(3, 146) is larger than the constant Number.MAX_SAFE_INTEGER in JavaScript, which represents the upper limit of integers that can be represented without losing accuracy. Therefore JavaScript cannot accurately represent Math.pow(3, 146) within the 64-bit limit.
MATLAB also has limits on its integer size but can represent a large number exactly with the Symbolic Math Toolbox.
There are also algorithms that you can implement to accomplish this without overflowing.
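One such algorithm is modular exponentiation by repeated squaring, which reduces modulo 293 after every multiplication so intermediate values never exceed 292^2 and stay well within double precision. The sketch below is mine (the helper name is made up), shown next to the BigInt one-liner for comparison:
// Square-and-multiply modular exponentiation: intermediate products never
// exceed (mod - 1)^2, so plain Numbers remain exact for small moduli like 293.
function powMod(base, exp, mod) {
  let result = 1;
  base %= mod;
  while (exp > 0) {
    if (exp % 2 === 1) result = (result * base) % mod;
    base = (base * base) % mod;
    exp = Math.floor(exp / 2);
  }
  return result;
}

console.log(powMod(3, 146, 293)); // 292

// Alternatively, BigInt performs the exact arithmetic directly (at some cost):
console.log(3n ** 146n % 293n);   // 292n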

Masking a 16-bit value with 0xFFFF

I am following a guide on Gameboy emulation and, in a snippet of code I saw the following:
while (true)
{
  var op = MMU.rb(Z80._r.pc++);  // Fetch instruction
  Z80._map[op]();                // Dispatch
  Z80._r.pc &= 65535;            // Mask PC to 16 bits
  Z80._clock.m += Z80._r.m;      // Add time to CPU clock
  Z80._clock.t += Z80._r.t;
}
Here pc is a 16-bit program counter register, and 65535 is 0xFFFF in hexadecimal. What is the purpose of masking a 16-bit value with 0xFFFF? As far as I know this does nothing. Or is it something to do with the sign bit?
I think the important part is that you are using JavaScript - it has only one numeric type, floating point. But the underlying engine can recognize when it should use integers instead, and a bit mask is a strong hint that we want the value treated as an integer, since bit operations usually don't make sense for floats. The mask also trims the variable down to its lowest 16 bits - what guarantee do you have that it wasn't carrying bits above those 16 beforehand? If all later operations assume the number is 16-bit, then without the mask that assumption is prone to break.
what is the purpose of masking a 16-bit value
None. But there is no 16-bit value - it's just a (floating-point) number in JavaScript. To make it emulate a 16-bit register, the number is cut down to 16 bits after the program counter is incremented - the ++ did not overflow, it simply produced the value 65536, which the mask wraps back to 0.
That's also what the comment says: // Mask PC to 16 bits.
The short answer: it throws away all bits except the lower 16. That way, even though the JavaScript number could hold more on a 32/64-bit machine, everything above bit 15 is discarded.
JS uses >16 bits and to ensure you're working with 16 bits only you discard the rest by AND-ing with 0xFFFF (or 65535). In this particular example the program-counter, which is 16 bits on a gameboy (apparently :P ), is 'wrapped around' to 0 when the value reaches 65536. An if (Z80._r.pc > 65535) Z80._r.pc = 0 would do the same but would probably perform worse. This kind of "trick" is used very often in bit manipulation code.
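A standalone illustration of the wrap-around (nothing Gameboy-specific, just the masking idiom in isolation):
// Simulate a 16-bit program counter in an ordinary JS number: AND-ing with
// 0xFFFF after each increment drops everything above bit 15, so 65535 wraps to 0.
let pc = 0xFFFE;
for (let i = 0; i < 4; i++) {
  pc = (pc + 1) & 0xFFFF;
  console.log(pc); // 65535, 0, 1, 2
}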

Diffie-Hellman implementation doesn't work for bigger numbers

Context
I was looking at this video DHE explained
It talks about how two people can exchange a key without eavesdroppers learning much.
The implementation according to the video
// INITIALIZERS (video's values)-------------------------
var prefx = 3
var modulo = 17
// SECRET NUMBERS ---------------------------------------
var alice_secret_number = 19 // replaced 54 since there is a precision loss with it.
var bob_secret_number = 24
// PUBLIC KEYS ------------------------------------------
var public_alice = Math.pow(prefx , alice_secret_number)
var public_bob = Math.pow(prefx , bob_secret_number)
// Check potential overflow -----------------------------
console.log(public_alice , public_bob)
// Apply the modulo -------------------------------------
public_alice %= modulo
public_bob %= modulo
// Check the value again --------------------------------
console.log( public_alice , public_bob )
// Calculate the good number ------------------------------------------
var final_alice = Math.pow( public_bob , alice_secret_number ) % modulo
var final_bob = Math.pow( public_alice , bob_secret_number ) % modulo
console.log( final_alice , final_bob )
Problem
That doesn't always work. For one thing, JavaScript loses precision, so you can only try it with small numbers. The speaker talks about big moduli - but even small ones won't work here.
I gave you the code above, which is written for readability rather than performance.
Could someone give me his/her opinion on what I am doing wrong?
All numbers in JavaScript are floats (actually doubles). The corresponding specification is IEEE 754. To represent an integer without loss of precision, it must fit into the mantissa, which is 53 bits for 64-bit floats. You can check the maximum integer with Number.MAX_SAFE_INTEGER, which is 9007199254740991. Everything beyond that loses precision.
Why is this a problem? (Most of) cryptography must be exact, otherwise the secret cannot be learned. What you are trying to do is exponentiate and then apply the modulus, but since you do these separately, you get a very big number after exponentiation, before it can ever be reduced by the modulus operation.
The solution is to use some kind of BigNumber library (like BigInteger) which handles all those big numbers for you. Note that there is specifically a modPow(exp, mod) function which combines those two steps and calculates the result efficiently.
Note that user secrets should be smaller than the modulus.
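For completeness, here is a hedged sketch of the same exchange using native BigInt with a hand-rolled modPow (the function and variable names are mine, and the toy parameters from the video are nowhere near cryptographically safe):
// Modular exponentiation for BigInt: reduce after every multiplication so the
// intermediate values stay the size of the modulus instead of blowing up.
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

const g = 3n, p = 17n;             // toy generator and modulus from the video
const aliceSecret = 54n;           // the original 54 is fine with BigInt
const bobSecret = 24n;

const alicePublic = modPow(g, aliceSecret, p);
const bobPublic = modPow(g, bobSecret, p);

const aliceShared = modPow(bobPublic, aliceSecret, p);
const bobShared = modPow(alicePublic, bobSecret, p);
console.log(aliceShared === bobShared); // true: both sides derive the same key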

Odds to get the usually excluded upper-bound with Math.random()

This may look more like a math question but as it is exclusively linked to Javascript's pseudo-random number generator I guess it is a good fit for SO. If not, feel free to move it elsewhere.
First off, I'm aware that ES does not specify the algorithm to be used in the pseudo-random number generator - Math.random() -, but it does specify that the range should have an approximate uniform distribution:
15.8.2.14 random ( )
Returns a Number value with positive sign, greater than or equal to 0 but less than 1, chosen randomly or pseudo randomly with approximately uniform distribution over that range, using an implementation-dependent algorithm or strategy. This function takes no arguments.
So far, so good. Now I've recently stumbled upon this piece of data from MDN:
Note that as numbers in JavaScript are IEEE 754 floating point numbers with round-to-nearest-even behavior, these ranges, excluding the one for Math.random() itself, aren't exact, and depending on the bounds it's possible in extremely rare cases (on the order of 1 in 2^62) to calculate the usually-excluded upper bound.
Okay. It led me to some testing, the results are (obviously) the same on Chrome console and Firefox's Firebug:
>> 0.99999999999999995
1
>> 0.999999999999999945
1
>> 0.999999999999999944
0.9999999999999999
Let's put it in a simple practical example to make my question more clear:
Math.floor(Math.random() * 1)
Considering the code above, with IEEE 754 floating-point numbers and round-to-nearest-even behavior, and assuming Math.random()'s range is evenly distributed, I concluded that the odds of it returning the usually excluded upper bound (1 in my code above) would be 0.000000000000000055555..., that is, approximately 1 in 18,000,000,000,000,000.
Looking at the MDN number now, 1/2^62 evaluates to 1/4,611,686,018,427,387,904, that is, over 200 times smaller than the result from my calc.
Am I doing the wrong math? Is Firefox's pseudo-random number generator just not evenly distributed enough as to generate this 200 times difference?
I know how to work around this and I'm aware that such small odds shouldn't even be considered for every day's uses, but I'd love to understand what is going on here and if my math is broken or Mozilla's (I hope it is former). =] Any input is appreciated.
You should not be worried about rounding the number from Math.random() up to 1.
When I was looking at the implementations (inferred from the results I am getting) in the current versions of IE, Chrome, and FF, there are several observations that almost certainly mean you should always get a number in the interval from 0 to 0.11111111111111111111111111111111111111111111111111111 in binary (which is what 0.999999999999999944.toString(2) gives - and a few smaller decimal literals map to the same double, by the way).
Chrome: Here it is simple. It generates a random 32-bit integer and divides it by 2^32. (You can see that (1 << 30) * 4 * Math.random() always returns a whole number.)
FF: Here the number seems to be generated with at most 53 binary digits after the point (i.e. at most 0.11...1 with 53 ones), and it really uses just those 53 binary places. (You can see that Math.random().toString(2).length - 2 does not return more than 53.)
IE: Here it is very similar to FF, except that the number of places can be higher when the first digits after the binary point are 0 - and those values will certainly not round up to 1. (You can see that Math.random().toString(2).match(/1[01]*$/)[0].length does not return more than 53.)
I think (although I cannot provide any proof right now) that any implementation should fall into one of these groups and have no problem with rounding up to 1.
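If you want to reproduce the FF-style check mentioned above, a rough probe looks like this (purely an illustration - what it reports is engine-dependent, and it counts digits after the binary point the same way that test does):
// Rough probe: track the largest number of binary digits after the point that
// Math.random() returns. On the engines discussed here this stays at or below 53.
let maxBits = 0;
for (let i = 0; i < 1e6; i++) {
  const bits = Math.random().toString(2).length - 2; // drop the leading "0."
  if (bits > maxBits) maxBits = bits;
}
console.log(maxBits);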
