JavaScript: is Number.MAX_SAFE_INTEGER in fact NOT a safe integer?

While performing some tasks with bit manipulations on HackerRank, I noticed a strange thing: despite the limit of numbers up to 10 ** 15 in a task (which is about 9 times smaller than Number.MAX_SAFE_INTEGER), some test cases fail unless BigInt is used – although neither the source numbers nor the results of such operations exceed the max safe integer. Then I tried the following by hand in a browser console, and here is the result (it really surprised me):
507199254740991 >> 1 // -1011589121 (wrong, although 507199254740991 is about 18 times less than max safe integer)
Number(507199254740991n >> 1n) // 253599627370495 (correct)
Math.floor(507199254740991 / 2) //253599627370495 (correct)
638621066001121 ^ 907368627742749 // -1250667780 (wrong)
Number(638621066001121n ^ 907368627742749n) // 419934881731324 (correct)
Why does this happen? Is Number.MAX_SAFE_INTEGER in fact not a safe integer? If it still is, then why do some operations on numbers within this range fail? Is this a bug in JavaScript, or am I missing something?

Bitwise operators (>> and ^ among others) cast Number operands to 32-bit integers first, then perform the operation on those.
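For example, a quick sketch of the truncation (and two safe alternatives that keep all the bits):
507199254740991 | 0 // -2023178241 (only the low 32 bits survive; -2023178241 >> 1 is the -1011589121 seen above)
Math.floor(507199254740991 / 2) // 253599627370495 (plain division stays within the 53-bit safe integer range)
Number(507199254740991n >> 1n) // 253599627370495 (BigInt shifts operate on the full value)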

Related

I need an explanation (and possible workaround) for ((2^32-1) << 0) resulting -1 in Javascript

EDIT: Explanation at the end.
I was trying to implement a 64-bit integer class using a Uint32Array, with bitwise operations performed under the hood on its two uint32 members. I quickly found out that, per my understanding of the specification, bitwise operations return a signed 32-bit integer. Initially I was hoping that the Uint32Array would just take care of the sign bit, but it doesn't.
I tried coding around the sign issue, but I am stuck at something I simply can't make sense of at all.
var a = (Math.pow(2, 32)-1); //set a to uint32 max value
So far, so good.
a.toString(2);// gives "11111111111111111111111111111111", as expected
However:
(a << 0); // gives "-1"
(a >> 1); // gives "-1"
(a << 0) == (a >> 1); // evaluates to true
Even if JS bitwise operations turn numbers into signed 32bit integers, 32 set bits shifted to the right by 1 should never be -1. Or should they? Should a non-zero number shifted by 0 bits equal itself shifted 1 bit? Is this a bug? Am I running into undefined behaviour?
Usually the answer to similar questions has to do with the signed 32bit conversion but I can't see how that should cause this behaviour.
EDIT2, explanation: The cause of my confusion was a fundamental misunderstanding of how negative numbers are represented in binary. While the first bit is in fact the sign bit, 1 indicating a negative, 0 a positive number, the remaining bits aren't just used to store the abs(), as I assumed.
Signed 4bit example:
0111 equals +7. 1111 does not equal -7, it equals -1. How do we end up with negative one? Because the two's complement of 1111 is 0001. To get a number's two's complement, flip all bits and add one:
1111 -> 0000 -> 0001.
Now that I know that, making sense of 11..11 << 0 being -1 is easy. It's perfectly similar to my 4bit example. 11..11 >> 1 being -1 is also completely expected now. The signed right shift >> is 1 filling, so 11..11 >> 1 is still 11..11 which is still -1.
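A quick way to check this in JavaScript itself (a sketch): shift the low 4 bits up to the top of the 32-bit word, then sign-extend them back down.
(0b1111 << 28) >> 28 // -1, the low 4 bits reinterpreted as a signed 4-bit value
(0b0111 << 28) >> 28 // 7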
I will leave this as is for now, because I'm certainly not the only one misunderstanding binary signed integer representation. Thanks for everyone's time.
Even if JS bitwise operations turn numbers into signed 32bit integers, 32 set bits shifted to the right by 1 should never be -1. Or should they? Should a non-zero number shifted by 0 bits equal itself shifted 1 bit? Is this a bug? Am I running into undefined behaviour?
That's normal, expected and defined. And yes, they should.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Right_shift is what you use, and its description is this:
The right shift operator (>>) shifts the first operand the specified number of bits to the right. Excess bits shifted off to the right are discarded. Copies of the leftmost bit are shifted in from the left. Since the new leftmost bit has the same value as the previous leftmost bit, the sign bit (the leftmost bit) does not change. Hence the name "sign-propagating".
So if you have 32 bits of 1, after applying right shift by 1 you will have 32 bits of 1.
The fact that it's 32 bits wide is in the specs, https://tc39.es/ecma262/
6.1.6.1.10 Number::signedRightShift ( x, y )
[...]
4. Return the result of performing a sign-extending right shift of lnum by shiftCount bits. The most significant bit is propagated. The result is a signed 32-bit integer.
(Similarly, << produces a signed 32-bit integer.)
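For completeness, a quick sketch: the unsigned right shift >>> is the one operator that interprets the 32-bit pattern as unsigned, which helps when emulating uint32 behaviour:
var a = Math.pow(2, 32) - 1; // 0xFFFFFFFF
a >> 0 // -1 (signed 32-bit interpretation)
a >>> 0 // 4294967295 (unsigned interpretation)
a >>> 1 // 2147483647 (zero-filling shift)
(a >> 1) >>> 0 // 4294967295 (the sign-propagating shift kept all bits set)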

Faster way to multiply in javascript?

I'm looking at ways to multiply faster in JavaScript, and I found this
https://medium.com/#p_arithmetic/4-more-javascript-hacks-to-make-your-javascript-faster-1f5fd88a219e#.306ophng2
which has this text
Instead of multiplying, use the bit shift operation. it looks a little more complex, but once you get the hang of it, it’s pretty simple. The formula to multiply x * y is simply x << (y-1)
As a bonus, everyone else will think you’re really smart!
// multiply by 1
[1,2,3,4].forEach(function(n){ return n<<0; }) // 1,2,3,4
// multiply by 2
[1,2,3,4].forEach(function(n){ return n<<1; }) // 2,4,6,8
// multiply by 3
[1,2,3,4].forEach(function(n){ return n<<2; }) // 3,6,9,12
// etc
However this doesn't seem to work for me. Does anyone know what's wrong?
Thanks
Ignore that article. It's intended as humor -- the advice it's giving is intentionally terrible and wrong.
Multiplication does not involve "expensive logarithmic look up tables and extremely lucky guesses" with "dozens of guesses per second". It is a highly optimized hardware operation, and any modern CPU can perform hundreds of millions (or more!) of these operations per second.
Bitwise operations are not faster than multiplication in Javascript. In fact, they're much slower -- numbers are generally stored as double-precision floating point by default, so performing a bitwise operation requires them to be converted to integers, then back.
Bit-shifting is not equivalent to multiplication in the way that the article implies. While left-shifting by 0 and 1 are equivalent to multiplication by 1 and 2, the pattern continues with <<2 and <<3 being equivalent to a multiplication by 4 and 8, not 3 and 4.
Array.forEach does not return a value. The appropriate function to use here would be Array.map.
The other "Javascript hacks" described in the article are even worse. I won't bother going into details.
Bit shifting is not the same as multiplying by an arbitrary number. This is how a digit shift works in decimal, for example:
1234 >> 0 = 1234
1234 >> 1 = 0123 #discards the last digit.
0123 >> 2 = 0001 #discards the last two digits
So in decimal, shifting digits to the right is like integer division by 10. When shifted by two digits it's the same as dividing by 10^2.
Now computers work with binary words. So instead of a digit shift, we have bit shifts.
10101011 >> 1 = 01010101
This is basically dividing by 2^1.
10101011 >> n is simply dividing by 2^n.
Now in the same manner,
10101011 << n is multiplying by 2^n. This appends n zeros to the binary word.
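In JavaScript terms (a quick sketch with small non-negative integers):
0b10101011 >> 1 // 85, i.e. 0b01010101, same as Math.floor(171 / 2)
0b10101011 << 3 // 1368, i.e. 171 * 2**3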
Hope it's clearer now :)
Cheers
Incidentally, if you want to know why it isn't working for you: use map instead of forEach, since forEach ignores the return value.
So do this instead:
// multiply by 1
[1,2,3,4].map(function(n){ return n<<0; }) // 1,2,3,4
// multiply by 2
[1,2,3,4].map(function(n){ return n<<1; }) // 2,4,6,8
// multiply by 4
[1,2,3,4].map(function(n){ return n<<2; }) // 4,8,12,16
// multiply by 8
[1,2,3,4].map(function(n){ return n<<3; }) // 8, 16, 24, 32
And as others have mentioned - don't use this for multiplication. Just take this as a clarification of forEach vs map...

Finding JS max integer value in a funny way fails

Today I tried to find a funny and mysterious way to determine JavaScript's maximal integer value. One of the approaches was the following:
~(+!!![]) >>> (+!![]);
which evaluates actually to
~0 >>> 1
but it returns 2147483647 and not 4294967295 as it should. Why? Of course, the latter would be the result of this operation for an unsigned integer, while my result is correct for a signed one. But how do I force the unsigned interpretation?
You're producing the maximum unsigned 32-bit integer (all bits set), and then shifting it to the right 1 bit, which divides it by 2. Use:
~0 >>> 0
to get the maximum integer.
Converting that to the "funny" way I'll leave as an exercise for the reader.
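For reference, a quick sketch of the difference:
~0 >>> 1 // 2147483647 (0x7FFFFFFF, the maximum signed 32-bit integer)
~0 >>> 0 // 4294967295 (0xFFFFFFFF, the maximum unsigned 32-bit integer)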

Hash 32bit int to 16bit int?

What are some simple ways to hash a 32-bit integer (e.g. IP address, e.g. Unix time_t, etc.) down to a 16-bit integer?
E.g. hash_32b_to_16b(0x12345678) might return 0xABCD.
Let's start with this as a horrible but functional example solution:
function hash_32b_to_16b(val32b) {
  return val32b % 0xffff;
}
Question is specifically about JavaScript, but feel free to add any language-neutral solutions, preferably without using library functions.
The context for this question is generating unique IDs (e.g. a 64-bit ID might be composed of several 16-bit hashes of various 32-bit values). Avoiding collisions is important.
Simple = good. Wacky+obfuscated = amusing.
The key to maximizing the preservation of entropy of some original 32-bit 'signal' is to ensure that each of the 32 input bits has an independent and equal ability to alter the value of the 16-bit output word.
Since the OP is requesting a bit size which is exactly half of the original, the simplest way to satisfy this criterion is to xor the upper and lower halves, as others have mentioned. Using xor is optimal because—as is obvious from the definition of xor—independently flipping any one of the 32 input bits is guaranteed to change the value of the 16-bit output.
The problem becomes more interesting when you need further reduction beyond just half-the-size, say from a 32-bit input to, let's say, a 2-bit output. Remember, the goal is to preserve as much entropy from the source as possible, so solutions which involve naively masking off the two lowest bits with (i & 3) are generally heading in the wrong direction; doing that guarantees that there's no way for any bits except the unmasked bits to affect the result, and that generally means there's an arbitrary, possibly valuable part of the runtime signal which is being summarily discarded without principle.
Following from the earlier paragraph, you could of course iterate with xor three additional times to produce a 2-bit output with the desired property of being equally-influenced by each/any of the input bits. That solution is still optimally correct of course, but involves looping or multiple unrolled operations which, as it turns out, aren't necessary!
Fortunately, there is a nice technique of only two operations which gives the same optimal result for this situation. As with xor, it not only ensures that, for any given 32-bit value, twiddling any input bit will result in a change to the 2-bit output, but also that, given a uniform distribution of input values, the distribution of 2-bit output values will also be perfectly uniform. In the current example, the method divides the 4,294,967,296 possible input values into exactly 1,073,741,824 each of the four possible 2-bit hash results { 0, 1, 2, 3 }.
The method I mention here uses specific magic values that I discovered via exhaustive search, and which don't seem to be discussed very much elsewhere on the internet, at least for the particular use under discussion here (i.e., ensuring a uniform hash distribution that's maximally entropy-preserving). Curiously, according to this same exhaustive search, the magic values are in fact unique, meaning that for each of target bit-widths { 16, 8, 4, 2 }, the magic value I show below is the only value that, when used as I show here, satisfies the perfect hashing criteria outlined above.
Without further ado, the unique and mathematically optimal procedure for hashing 32-bits to n = { 16, 8, 4, 2 } is to multiply by the magic value corresponding to n (unsigned, discarding overflow), and then take the n highest bits of the result. To isolate those result bits as a hash value in the range [0 ... (2ⁿ - 1)], simply right-shift (unsigned!) the multiplication result by 32 - n bits.
The "magic" values, and C-like expression syntax are as follows:
Method
Maximum-entropy-preserving hash for reducing 32 bits to. . .
Target Bits   Multiplier   Right Shift   Expression [1, 2]
-----------   ----------   -----------   ------------------------
     16       0x80008001        16       (i * 0x80008001) >> 16
      8       0x80808081        24       (i * 0x80808081) >> 24
      4       0x88888889        28       (i * 0x88888889) >> 28
      2       0xAAAAAAAB        30       (i * 0xAAAAAAAB) >> 30
Maximum-entropy-preserving hash for reducing 64 bits to. . .
Target Bits   Multiplier           Right Shift   Expression [1, 2]
-----------   ------------------   -----------   --------------------------------
     32       0x8000000080000001        32       (i * 0x8000000080000001) >> 32
     16       0x8000800080008001        48       (i * 0x8000800080008001) >> 48
      8       0x8080808080808081        56       (i * 0x8080808080808081) >> 56
      4       0x8888888888888889        60       (i * 0x8888888888888889) >> 60
      2       0xAAAAAAAAAAAAAAAB        62       (i * 0xAAAAAAAAAAAAAAAB) >> 62
Notes:
Use unsigned multiply and discard any overflow (64-bit multiply is not needed).
If isolating the result using right-shift (as shown), be sure to use an unsigned shift operation.
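In JavaScript, the 32-bit to 16-bit case of this method might look like the sketch below: Math.imul performs the 32-bit multiply that discards overflow, and >>> supplies the unsigned shift called for in the notes above.
function hash_32b_to_16b(val32b) {
  // low 32 bits of the product (overflow discarded), then keep the 16 highest bits
  return Math.imul(val32b, 0x80008001) >>> 16;
}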
Further discussion
I find all this quite cool. In practical terms, the key information-theoretical requirement is the guarantee that, for any m-bit input value and its corresponding n-bit hash value result, flipping any one of the m source bits always causes some change in the n-bit result value. Now although there are 2ⁿ possible result values in total, one of them is already "in-use" (by the result itself) since "switching" to that one from any other result would be no change at all. This leaves 2ⁿ - 1 result values that are eligible to be used by the entire set of m input values flipped by a single bit.
Let's consider an example; in fact, to show how this technique might seem to border on spooky or downright magical, we'll consider the more extreme case where m = 64 and n = 2. With 2 output bits there are four possible result values, { 0, 1, 2, 3 }. Assuming an arbitrary 64-bit input value 0x7521d9318fbdf523, we obtain its 2-bit hash value of 1:
(0x7521d9318fbdf523 * 0xAAAAAAAAAAAAAAAB) >> 62 // result --> '1'
So the result is 1 and the claim is that no value in the set of 64 values where a single-bit of 0x7521d9318fbdf523 is toggled may have that same result value. That is, none of those 64 other results can use value 1 and all must instead use either 0, 2, or 3. So in this example it seems like every one of the 2⁶⁴ input values—to the exclusion of 64 other input values—will selfishly hog one-quarter of the output space for itself. When you consider the sheer magnitude of these interacting constraints, can a simultaneously satisfying solution overall even exist?
Well sure enough, to show that (exactly?) one does, here are the hash result values, listed in order, for the inputs obtained by flipping a single bit of 0x7521d9318fbdf523 (one at a time), from MSB (position 63) down to LSB (0).
3 2 0 3 3 3 3 3 3 0 0 0 3 0 3 3 0 3 3 3 0 0 3 3 3 0 0 3 3 0 3 3 // continued…
0 0 3 0 0 3 0 3 0 0 0 3 0 3 3 3 0 3 0 3 3 3 3 3 3 0 0 0 3 0 0 3 // notice: no '1' values
As you can see, there are no 1 values, which entails that every bit in the source "as-is" must be contributing to influence the result (or, if you prefer, the de facto state of each-and-every bit in 0x7521d9318fbdf523 is essential to keeping the entire overall result from being "not-1"). Because no matter what single-bit change you make to the 64-bit input, the 2-bit result value will no longer be 1.
Keep in mind that the "missing-value" table shown above was dumped from the analysis of just the one randomly-chosen example value 0x7521d9318fbdf523; every other possible input value has a similar table of its own, each one eerily missing its owner's actual result value while yet somehow being globally consistent across its set-membership. This property essentially corresponds to maximally preserving the available entropy during the (inherently lossy) bit-width reduction task.
So we see that every one of the 2⁶⁴ possible source values independently imposes, on exactly 64 other source values, the constraint of excluding one of the possible result values. What defies my intuition about this is that there are untold quadrillions of these 64-member sets, each of whose members also belongs to 63 other, seemingly unrelated bit-twiddling sets. Yet somehow despite this most confounding puzzle of interwoven constraints, it is nevertheless trivial to exploit the one (I surmise) resolution which simultaneously satisfies them all exactly.
All this seems related to something you may have noticed in the tables above: namely, I don't see any obvious way to extend the technique to the case of compressing down to a 1-bit result. In this case, there are only two possible result values { 0, 1 }, so if any/every given (e.g.) 64-bit input value still summarily excludes its own result from being the result for all 64 of its single-bit-flip neighbors, then that now essentially imposes the other, only remaining value on those 64. The math breakdown we see in the table seems to be signalling that a simultaneous result under such conditions is a bridge too far.
In other words, the special 'information-preserving' characteristic of xor (that is, its luxuriously reliable guarantee that, as opposed to and, or, etc., it can and will always change a bit) not surprisingly exacts a certain cost, namely, a fiercely non-negotiable demand for a certain amount of elbow room—at least 2 bits—to work with.
I think this is the best you're going to get. You could compress the code to a single line but the vars are there for now as documentation:
function hash_32b_to_16b(val32b) {
  var rightBits = val32b & 0xffff;      // right-most (low) 16 bits
  var leftBits = val32b & 0xffff0000;   // left-most (high) 16 bits
  leftBits = leftBits >>> 16;           // shift the left-most 16 bits down to a 16-bit value
  return rightBits ^ leftBits;          // XOR the left-most and right-most bits
}
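For example (a quick check of the function above):
hash_32b_to_16b(0x12345678).toString(16) // "444c", i.e. 0x1234 ^ 0x5678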
Given the parameters of the problem, the best solution would have each 16-bit hash correspond to exactly 2^16 32-bit numbers. It would also IMO hash sequential 32-bit numbers differently. Unless I'm missing something, I believe this solution does those two things.
I would argue that security cannot be a consideration in this problem, as the hashed value is just too few bits. I believe that the solution I gave provides an even distribution of 32-bit numbers to 16-bit hashes.
This depends on the nature of the integers.
If they can contain some bit-masks, or can differ by powers of two, then simple XORs will have high probability of collisions.
You can try something like (i>>16) ^ ((i&0xffff) * p) with p being a prime number.
Security hashes like MD5 are all good, but they are obviously overkill here. Anything more complex than CRC16 is overkill.
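A sketch of that suggestion (65521, the largest prime below 2^16, is just an arbitrary choice of p for illustration):
function hash_32b_to_16b(i) {
  var p = 65521; // arbitrary prime chosen for this example
  return ((i >> 16) ^ ((i & 0xffff) * p)) & 0xffff; // final mask keeps the result in 16 bits
}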
I would say just apply a standard hash like sha1 or md5 and then grab the last 16 bits of that.
Assuming that you expect the least significant bits to 'vary' the most, I think you're probably going to get a good enough distribution by just using the lower 16-bits of the value as a hash.
If the numbers you're going to hash won't have that kind of distribution, then the additional step of xor-ing in the upper 16 bits might be helpful.
Of course this suggestion is if you're intending to use the hash merely for some sort of lookup/storage scheme and aren't looking for the crypto-related properties of non-guessability and non-reversibility (which the xor-ing suggestions don't really buy you either).
Something simple like this....
function hash_32b_to_16b(val32b) {
  // Pseudocode: hmac, sha512 and secretKey are assumed to come from some crypto library.
  var h = hmac(secretKey, sha512);
  var v = val32b;
  for (var i = 0; i < 4096; ++i)
    v = h(v);
  return v % 0xffff;
}

2.9999999999999999 >> .5?

I heard that you could right-shift a number by .5 instead of using Math.floor(). I decided to check its limits to make sure that it was a suitable replacement, so I checked the following values and got the following results in Google Chrome:
2.5 >> .5 == 2;
2.9999 >> .5 == 2;
2.999999999999999 >> .5 == 2; // 15 9s
2.9999999999999999 >> .5 == 3; // 16 9s
After some fiddling, I found out that the highest possible value that, when right-shifted by .5, still yields 2 is 2.9999999999999997779553950749686919152736663818359374999999¯ (with the 9 repeating) in Chrome and Firefox. The number is 2.9999999999999997779¯ in IE.
My question is: what is the significance of the number .0000000000000007779553950749686919152736663818359374? It's a very strange number and it really piqued my curiosity.
I've been trying to find an answer or at least some kind of pattern, but I think my problem lies in the fact that I really don't understand the bitwise operation. I understand the idea in principle, but shifting a bit sequence by .5 doesn't make any sense at all to me. Any help is appreciated.
For the record, the weird digit sequence changes with powers of two. For each range below, these are the highest possible values that still truncate properly:
for 0: 0.9999999999999999444888487687421729788184165954589843749¯
for 1: 1.9999999999999999888977697537484345957636833190917968749¯
for 2-3: x+.99999999999999977795539507496869191527366638183593749¯
for 4-7: x+.9999999999999995559107901499373838305473327636718749¯
for 8-15: x+.999999999999999111821580299874767661094665527343749¯
...and so forth
Actually, you're simply ending up doing a floor() on the first operand, without any floating point operations going on. Since the left shift and right shift bitwise operations only make sense with integer operands, the JavaScript engine is converting the two operands to integers first:
2.999999 >> 0.5
Becomes:
Math.floor(2.999999) >> Math.floor(0.5)
Which in turn is:
2 >> 0
Shifting by 0 bits means "don't do a shift" and therefore you end up with the first operand, simply truncated to an integer.
The SpiderMonkey source code has:
switch (op) {
  case JSOP_LSH:
  case JSOP_RSH:
    if (!js_DoubleToECMAInt32(cx, d, &i))   // Same as Math.floor()
        return JS_FALSE;
    if (!js_DoubleToECMAInt32(cx, d2, &j))  // Same as Math.floor()
        return JS_FALSE;
    j &= 31;
    d = (op == JSOP_LSH) ? i << j : i >> j;
    break;
Your seeing a "rounding up" with certain numbers is due to the fact that the JavaScript engine can't represent decimal digits beyond a certain precision, and therefore your number ends up getting rounded to the nearest representable value, which here is 3. Try this in your browser:
alert(2.999999999999999);
You'll get 2.999999999999999. Now try adding one more 9:
alert(2.9999999999999999);
You'll get a 3.
This is possibly the single worst idea I have ever seen. Its only possible purpose for existing is for winning an obfuscated code contest. There's no significance to the long numbers you posted -- they're an artifact of the underlying floating-point implementation, filtered through god-knows how many intermediate layers. Bit-shifting by a fractional number of bits is insane and I'm surprised it doesn't raise an exception -- but that's Javascript, always willing to redefine "insane".
If I were you, I'd avoid ever using this "feature". Its only value is as a possible root cause for an unusual error condition. Use Math.floor() and take pity on the next programmer who will maintain the code.
Confirming a couple suspicions I had when reading the question:
Right-shifting any fractional number x by a fractional amount y that truncates to 0 (such as .5) will simply truncate x, giving the same result as Math.floor() on non-negative values while thoroughly confusing the reader.
2.999999999999999777955395074968691915... is simply the largest number that can be differentiated from "3". Try evaluating it by itself -- if you add anything to it, it will evaluate to 3. This is an artifact of the browser and local system's floating-point implementation.
If you wanna go deeper, read "What Every Computer Scientist Should Know About Floating-Point Arithmetic": https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Try this javascript out:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359374999999"));
Then try this:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359375"));
What you are seeing is simple floating point inaccuracy. For more information about that, see this for example: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
The basic issue is that the closest a floating point value can get to representing the second number is greater than or equal to 3, whereas the closest a float can get to the first number is strictly less than three.
As for why right shifting by 0.5 does anything sane at all, it seems that 0.5 is just itself getting converted to an int (0) beforehand. Then the original float (2.999...) is getting converted to an int by truncation, as usual.
I don't think your right shift is relevant. You are simply beyond the resolution of a double precision floating point constant.
In Chrome:
var x = 2.999999999999999777955395074968691915273666381835937499999;
var y = 2.9999999999999997779553950749686919152736663818359375;
document.write("x=" + x);
document.write(" y=" + y);
Prints out: x = 2.9999999999999996 y=3
The shift right operator only operates on integers (on both sides). So, shifting right by .5 bits is exactly equivalent to shifting right by 0 bits. And the left hand side is converted to an integer before the shift operation, which does the same thing as Math.floor() for non-negative numbers (negative numbers are truncated toward zero instead), as the sketch below shows.
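A small sketch of that conversion:
2.9999 >> 0.5 // 2 (both operands converted to integers first: 2 >> 0)
-2.5 >> 0.5 // -2 (truncated toward zero; Math.floor(-2.5) would give -3)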
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
Good guess, but no cigar.
As the double precision FP number has 53 bits, the last FP number before 3 is actually
(exact): 2.999999999999999555910790149937383830547332763671875
But why is the boundary
2.9999999999999997779553950749686919152736663818359375
(and this is exact, not 49999... !)
which is higher than the last representable number below 3? Rounding. The conversion routine (string to number) is simply correctly programmed to round the input to the nearest floating-point number.
2.999999999999999555910790149937383830547332763671875
.......(values between, increasing) -> round down
2.9999999999999997779553950749686919152736663818359375
....... (values between, increasing) -> round up to 3
3
The conversion input must use full precision. If the number is exactly halfway between those two fp numbers (which is 2.9999999999999997779553950749686919152736663818359375), the rounding depends on the rounding-mode flags that are set. The default is round to even, meaning that the tie is resolved toward the neighbour whose last mantissa bit is even (zero).
Now
3 = 11. (binary)
2.999... = 10.11111111111...... (binary)
All mantissa bits of 2.999... are set, so its last bit is odd, while 3's mantissa ends in an even (zero) bit. That means the exact halfway value will be rounded up to 3, so you are getting the strange .....49999 period: the input must be smaller than the exact half to be distinguishable from 3.
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
And to add to John's answer, the odds of this being more performant than Math.floor are vanishingly small.
I don't know if JavaScript uses floating-point numbers or some kind of infinite-precision library, but either way, you're going to get rounding errors on an operation like this -- even if it's pretty well defined.
It should be noted that the relevant quantity is 2^-52 (about .000000000000000222): the boundary 2.9999999999999997779553950749686919152736663818359375 is exactly 3 - 2^-52, and 2^-52 is the machine epsilon (Number.EPSILON), often described as "the smallest number E such that (1+E) > 1."
