Finding JS max integer value in a funny way fails - javascript

Today I tried to find a funny and mysterious way to determine JavaScript's maximum integer value. One of the approaches was the following:
~(+!!![]) >>> (+!![]);
which actually evaluates to
~0 >>> 1
but it returns 2147483647 instead of the expected 4294967295. Why? Of course, the latter would be the result of this operation for an unsigned integer, while my result is correct for a signed one. But how do I force the unsigned interpretation?

You're finding the maximum integer, and then shifting it to the right 1 bit, which divides it by 2. Use:
~0 >>> 0
to get the maximum integer.
Converting that to the "funny" way I'll leave as an exercise for the reader.
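For reference, a quick console check (the last line is one possible "funny" spelling, since +!!![] and +![] both evaluate to 0):
~0 >>> 1;             // 2147483647, the maximum signed 32-bit integer
~0 >>> 0;             // 4294967295, the maximum unsigned 32-bit integer
~(+!!![]) >>> (+![]); // 4294967295 as well: ~0 >>> 0 in disguise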

Related

JavaScript: is Number.MAX_SAFE_INTEGER in fact NOT a safe integer?

While performing some tasks with bit manipulations on HackerRank, I noticed a strange thing: despite the limit of numbers up to 10 ** 15 in a task (which is about 9 times smaller than Number.MAX_SAFE_INTEGER), some test cases fail unless I use BigInt, although neither the source numbers nor the results of the operations exceed the max safe integer. Then I tried the following by hand in a browser console, and the result really surprised me:
507199254740991 >> 1 // -1011589121 (wrong, although 507199254740991 is about 18 times less than max safe integer)
Number(507199254740991n >> 1n) // 253599627370495 (correct)
Math.floor(507199254740991 / 2) //253599627370495 (correct)
638621066001121 ^ 907368627742749 // -1250667780 (wrong)
Number(638621066001121n ^ 907368627742749n) // 419934881731324 (correct)
Why does this happen? Is Number.MAX_SAFE_INTEGER in fact not a safe integer? If it is, why do some operations on numbers within this range fail? Is this a bug in JavaScript, or am I missing something?
Bitwise operators (>> and ^ among others) convert their Number operands to 32-bit integers first, then perform the operation on those.
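To make the conversion visible (the intermediate values below were worked out by hand for this example; x | 0 applies the same ToInt32 conversion that >> does):
507199254740991 | 0;              // -2023178241: only the low 32 bits survive, read as signed
(507199254740991 | 0) >> 1;       // -1011589121: exactly the "wrong" result above
Math.floor(507199254740991 / 2);  // 253599627370495: division keeps the full 53-bit precision
Number(507199254740991n >> 1n);   // 253599627370495: BigInt shifts are not limited to 32 bits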

I need an explanation (and possible workaround) for ((2^32-1) << 0) resulting in -1 in JavaScript

EDIT: Explanation at the end.
I was trying to implement a 64-bit integer class using a Uint32Array, with bitwise operations performed under the hood on two uint32 members. I quickly found out that, as far as I understand the specification, bitwise operations return a signed 32-bit integer. Initially I was hoping that the Uint32Array would just take care of the sign bit, but it doesn't.
I tried coding around the sign issue, but I am stuck at something I simply can't make sense of at all.
var a = (Math.pow(2, 32)-1); //set a to uint32 max value
So far, so good.
a.toString(2);// gives "11111111111111111111111111111111", as expected
However:
(a << 0); // gives "-1"
(a >> 1); // gives "-1"
(a << 0) == (a >> 1); // evaluates to true
Even if JS bitwise operations turn numbers into signed 32bit integers, 32 set bits shifted to the right by 1 should never be -1. Or should they? Should a non-zero number shifted by 0 bits equal itself shifted 1 bit? Is this a bug? Am I running into undefined behaviour?
Usually the answer to similar questions has to do with the signed 32bit conversion but I can't see how that should cause this behaviour.
EDIT2, explanation: The cause of my confusion was a fundamental misunderstanding of how negative numbers are represented in binary. While the first bit is in fact the sign bit, 1 indicating a negative, 0 a positive number, the remaining bits aren't just used to store the abs(), as I assumed.
Signed 4bit example:
0111 equals +7. 1111 does not equal -7, it equals -1. How do we end up with negative one? Because the two's complement of 1111 is 0001. To get a number's two's complement, flip all bits and add one:
1111 -> 0000 -> 0001.
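If you want to check this in a modern console (assuming BigInt support), BigInt.asIntN reinterprets the low n bits of a value as signed:
BigInt.asIntN(4, 0b0111n); //  7n
BigInt.asIntN(4, 0b1111n); // -1n, not -7n
BigInt.asIntN(4, 0b1001n); // -7n: flip 1001 -> 0110, add one -> 0111 = 7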
Now that I know that, making sense of 11..11 << 0 being -1 is easy. It's perfectly analogous to my 4-bit example. 11..11 >> 1 being -1 is also completely expected now. The signed right shift >> fills with the sign bit (here 1), so 11..11 >> 1 is still 11..11, which is still -1.
I will leave this as is for now, because I'm certainly not the only one misunderstanding binary signed integer representation. Thanks for everyone's time.
Even if JS bitwise operations turn numbers into signed 32bit integers, 32 set bits shifted to the right by 1 should never be -1. Or should they? Should a non-zero number shifted by 0 bits equal itself shifted 1 bit? Is this a bug? Am I running into undefined behaviour?
That's normal, expected and defined. And yes, they should.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Right_shift is what you use, and its description is this:
The right shift operator (>>) shifts the first operand the specified number of bits to the right. Excess bits shifted off to the right are discarded. Copies of the leftmost bit are shifted in from the left. Since the new leftmost bit has the same value as the previous leftmost bit, the sign bit (the leftmost bit) does not change. Hence the name "sign-propagating".
So if you have 32 bits of 1, after applying right shift by 1 you will have 32 bits of 1.
The fact that it's 32 bits wide is in the specs, https://tc39.es/ecma262/
6.1.6.1.10 Number::signedRightShift ( x, y )
[...]
4. Return the result of performing a sign-extending right shift of lnum by shiftCount bits. The most significant bit is propagated. The result is a signed 32-bit integer.
(Similarly, << produces 32-bit signed integer)
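For the 64-bit-class use case in the question, the usual workaround is the zero-filling shift >>> (and the >>> 0 idiom to reinterpret a signed result as unsigned); a minimal sketch:
var a = Math.pow(2, 32) - 1; // all 32 bits set
a << 0;         // -1: ToInt32 reads 32 set bits as the signed value -1
a >> 1;         // -1: the sign bit is copied in from the left
a >>> 1;        // 2147483647: zeros are shifted in instead
(a >> 1) >>> 0; // 4294967295: reinterpret the signed -1 as unsigned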

JavaScript bit-shifting

I would like to shift this unsigned number, 1479636484000, 7 bits to the right. Is this possible in JavaScript?
Both
1479636484000 >> 7
and
1479636484000 >>> 7
return an incorrect answer (for me). The correct answer should be 11559660031.
I guess there's some sign bit involved here, and maybe the number is too large to be supported. But is there any clever way of getting around it?
Bitwise operations in JavaScript start by truncating the number to a 32-bit integer. Your numbers are too big. The "clever way" to get around that limitation is to implement your own numeric library.
Note that floating-point division by 128 gets you the right answer (if you drop the fraction).
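A sketch of both full-precision options (BigInt assumed available in your environment):
Math.floor(1479636484000 / 128); // 11559660031: divide by 2^7, drop the fraction
Number(1479636484000n >> 7n);    // 11559660031: BigInt shifts use the full value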
You could convert the number to a binary string, remove the last 7 characters, and convert it back to a number.
console.log((1479636484000).toString(2));                           // binary digits of the original number
console.log((11559660031).toString(2));                             // binary digits of the expected result
console.log((1479636484000).toString(2).slice(0, -7));              // dropping the last 7 binary digits divides by 128...
console.log(parseInt((1479636484000).toString(2).slice(0, -7), 2)); // ...and parsing back gives 11559660031

Why does Math.round(-1.5) return -1 in Java and JavaScript?

Today, I saw this behaviour of Java and JavaScript's Math.round function.
It rounds 1.40 to 1, as well as -1.40 to -1.
It rounds 1.60 to 2, as well as -1.60 to -2.
Now, it rounds 1.5 to 2.
But it rounds -1.5 to -1.
I checked this behaviour in the round equivalents of PHP and MySQL as well.
Both gave the result I expected, i.e. round(-1.5) gives -2.
Even the Math.round documentation says it should round to the nearest integer.
So why is it like this?
The problem is that the distance between 1 and 1.5, as well as between 1.5 and 2, is exactly the same (0.5). There are several different ways you could round such ties:
always towards positive infinity
always towards negative infinity
always towards zero
always away from zero
towards nearest odd or even number
... (see Wikipedia)
Obviously, both Java and JS opted for the first one (which is not uncommon), while PHP and MySQL round ties away from zero.
Java's RoundingMode.FLOOR, for example, documents the "towards negative infinity" option: "Rounding mode to round towards negative infinity. If the result is positive, behave as for RoundingMode.DOWN; if negative, behave as for RoundingMode.UP. Note that this rounding mode never increases the calculated value."
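You can see the tie-breaking rule in any JS console, and half-away-from-zero (the PHP/MySQL behaviour) is easy to emulate if you need it:
Math.round(1.5);  //  2: a tie, rounds towards +Infinity
Math.round(-1.5); // -1: also a tie, also towards +Infinity
Math.round(-1.6); // -2: not a tie, the nearest integer wins
Math.sign(-1.5) * Math.round(Math.abs(-1.5)); // -2: half away from zero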
It is just a matter of the whole number and its position on the number line. The Javadoc spells it out:
public static int round(float a)
Returns the closest int to the argument, with ties rounding up.
Special cases:
If the argument is NaN, the result is 0.
If the argument is negative infinity or any value less than or equal to the value of Integer.MIN_VALUE, the result is equal to the value of Integer.MIN_VALUE.
If the argument is positive infinity or any value greater than or equal to the value of Integer.MAX_VALUE, the result is equal to the value of Integer.MAX_VALUE.
Parameters:
a - a floating-point value to be rounded to an integer.
Returns:
the value of the argument rounded to the nearest int value.
From the ECMAScript documentation:
Returns the Number value that is closest to x and is equal to a mathematical integer. If two integer Number values are equally close to x, then the result is the Number value that is closer to +∞. If x is already an integer, the result is x.
where x is the number passed to Math.round().
So Math.round(1.5) returns 2, since 2 is closer to +∞ than 1 is; similarly, Math.round(-1.5) returns -1, since -1 is closer to +∞ than -2 is.
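In other words, for ties the spec's rule behaves like flooring after adding a half (a mental model only; it is not bit-exact at extreme double-precision edge cases):
Math.floor(1.5 + 0.5);  //  2, same as Math.round(1.5)
Math.floor(-1.5 + 0.5); // -1, same as Math.round(-1.5)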

2.9999999999999999 >> .5?

I heard that you could right-shift a number by .5 instead of using Math.floor(). I decided to check its limits to make sure that it was a suitable replacement, so I checked the following values and got the following results in Google Chrome:
2.5 >> .5 == 2;
2.9999 >> .5 == 2;
2.999999999999999 >> .5 == 2; // 15 9s
2.9999999999999999 >> .5 == 3; // 16 9s
After some fiddling, I found out that the highest possible value that, when right-shifted by .5, still yields 2 is 2.9999999999999997779553950749686919152736663818359374999999¯ (with the 9 repeating) in Chrome and Firefox. The number is 2.9999999999999997779¯ in IE.
My question is: what is the significance of the number .0000000000000007779553950749686919152736663818359374? It's a very strange number and it really piqued my curiosity.
I've been trying to find an answer or at least some kind of pattern, but I think my problem lies in the fact that I really don't understand the bitwise operation. I understand the idea in principle, but shifting a bit sequence by .5 doesn't make any sense at all to me. Any help is appreciated.
For the record, the weird digit sequence changes with 2^x. Here are the highest possible values that still truncate properly:
for 0: 0.9999999999999999444888487687421729788184165954589843749¯
for 1: 1.9999999999999999888977697537484345957636833190917968749¯
for 2-3: x+.99999999999999977795539507496869191527366638183593749¯
for 4-7: x+.9999999999999995559107901499373838305473327636718749¯
for 8-15: x+.999999999999999111821580299874767661094665527343749¯
...and so forth
Actually, you're simply ending up truncating the first operand to an integer (the same as floor() for positive numbers), without any floating point operations going on. Since the left shift and right shift bitwise operations only make sense with integer operands, the JavaScript engine converts both operands to integers first:
2.999999 >> 0.5
Becomes:
Math.floor(2.999999) >> Math.floor(0.5)
Which in turn is:
2 >> 0
Shifting by 0 bits means "don't do a shift" and therefore you end up with the first operand, simply truncated to an integer.
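A couple of console checks confirm this, and also show that the conversion truncates toward zero rather than flooring, which matters for negative operands:
2.999999 >> 0.5;  //  2: both operands become integers first, so this is 2 >> 0
-2.5 >> 0.5;      // -2: ToInt32 truncates toward zero...
Math.floor(-2.5); // -3: ...which is not the same as Math.floor for negatives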
The SpiderMonkey source code has:
switch (op) {
  case JSOP_LSH:
  case JSOP_RSH:
    if (!js_DoubleToECMAInt32(cx, d, &i))  // truncates d toward zero to a 32-bit int
      return JS_FALSE;
    if (!js_DoubleToECMAInt32(cx, d2, &j)) // truncates d2 the same way
      return JS_FALSE;
    j &= 31;
    d = (op == JSOP_LSH) ? i << j : i >> j;
    break;
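Note the j &= 31 line: the shift count is masked to its low 5 bits, which you can observe directly:
1 << 32; // 1: the count 32 is masked to 0
1 << 33; // 2: 33 & 31 === 1, so this is 1 << 1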
You're seeing a "rounding up" with certain numbers because the JavaScript engine can't represent decimal digits beyond a certain precision, and your number therefore gets rounded up to the next integer before the shift even happens. Try this in your browser:
alert(2.999999999999999);
You'll get 2.999999999999999. Now try adding one more 9:
alert(2.9999999999999999);
You'll get a 3.
This is possibly the single worst idea I have ever seen. Its only possible reason for existing is winning an obfuscated code contest. There's no significance to the long numbers you posted; they're an artifact of the underlying floating-point implementation, filtered through who knows how many intermediate layers. Bit-shifting by a fractional number of bits is insane, and I'm surprised it doesn't raise an exception; but that's JavaScript, always willing to redefine "insane".
If I were you, I'd avoid ever using this "feature". Its only value is as a possible root cause for an unusual error condition. Use Math.floor() and take pity on the next programmer who will maintain the code.
Confirming a couple suspicions I had when reading the question:
Right-shifting any fractional number x by any fractional number y will simply truncate x toward zero (the same as Math.floor() for non-negative x), giving an integer result while thoroughly confusing the reader.
2.999999999999999777955395074968691915... is simply the largest number that can be differentiated from "3". Try evaluating it by itself -- if you add anything to it, it will evaluate to 3. This is an artifact of the browser and local system's floating-point implementation.
If you wanna go deeper, read "What Every Computer Scientist Should Know About Floating-Point Arithmetic": https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Try this javascript out:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359374999999"));
Then try this:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359375"));
What you are seeing is simple floating point inaccuracy. For more information about that, see this for example: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
The basic issue is that the closest a floating point value can get to representing the second number is 3 itself, whereas the closest a float can get to the first number is strictly less than three.
As for why right shifting by 0.5 does anything sane at all, it seems that 0.5 is just itself getting converted to an int (0) beforehand. Then the original float (2.999...) is getting converted to an int by truncation, as usual.
I don't think your right shift is relevant. You are simply beyond the resolution of a double precision floating point constant.
In Chrome:
var x = 2.999999999999999777955395074968691915273666381835937499999;
var y = 2.9999999999999997779553950749686919152736663818359375;
document.write("x=" + x);
document.write(" y=" + y);
Prints out: x = 2.9999999999999996 y=3
The shift right operator only operates on integers (both sides). So, shifting right by .5 bits should be exactly equivalent to shifting right by 0 bits. And, the left hand side is converted to an integer before the shift operation, which does the same thing as Math.floor().
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
Good guess, but no cigar.
As a double precision FP number has a 53-bit significand, the last FP number before 3 is exactly:
2.999999999999999555910790149937383830547332763671875
But why is the threshold
2.9999999999999997779553950749686919152736663818359375
(and this is exact, not 49999...!), which is higher than that last representable value? Rounding. The string-to-number conversion routine is simply, and correctly, programmed to round its input to the nearest floating point number.
2.999999999999999555910790149937383830547332763671875 (largest double below 3)
....... (values between, increasing) -> round down
2.9999999999999997779553950749686919152736663818359375 (the exact midpoint)
....... (values between, increasing) -> round up to 3
3
The conversion must consider the input at full precision. If the number is exactly halfway between those two FP numbers (which is 2.9999999999999997779553950749686919152736663818359375), the rounding depends on the rounding mode in effect. The default is round-to-even, meaning the number is rounded to the neighbour whose last significand bit is even.
Now
3 = 11. (binary)
2.999... = 10.11111111111...... (binary)
All significand bits of the lower neighbour are set, so its last bit is 1 and it is the odd neighbour. That means the exact halfway value is rounded up to the even neighbour, 3, and that is why you get the strange .....49999 period: the input must be smaller than the exact midpoint to be distinguishable from 3.
And to add to John's answer, the odds of this being more performant than Math.floor are vanishingly small.
JavaScript uses IEEE 754 double-precision floating-point numbers, not an infinite-precision library, so you're going to get rounding effects on an operation like this, even though the operation itself is well defined.
It should be noted that the distance between that threshold value and 3 is exactly Number.EPSILON, the difference between 1 and the smallest float greater than 1 (2^-52, about 2.22e-16); the long digit string in the question is the decimal tail of the threshold itself, not the gap.
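You can check that relationship in a console (2.9999999999999996 is the shortest decimal form of the largest double below 3):
3 - 2.9999999999999996; // 4.440892098500626e-16: one ulp at 3 (2^-51)
Number.EPSILON;         // 2.220446049250313e-16: half that gap (2^-52)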
