Reverse Bits: why is it giving me a negative value? - javascript

var reverseBits = function(n) {
    var re = 0;
    for (var i = 0; i < 32; i++) {
        re = (re << 1) | (n & 1);
        n >>>= 1;
    }
    return re;
};
This is my code to reverse bits in JavaScript, but when n = 1, it gives -2147483648 (-10000000000000000000000000000000). Shouldn't it be a positive number? Where am I wrong?

The reason you are getting a negative number is because of how computers store negative and positive numbers. The most significant bit (the bit with the greatest value) is used in signed numbers to determine if a number should be negative or positive. If this bit is a 0, then it's positive. If it's 1, then it's negative. Computers use a technique called 2's complement to represent negative numbers. Here is how it works:
In your example, you assigned the number 1 to n. In a 32-bit computer, the binary would look like this:
0000 0000 0000 0000 0000 0000 0000 0001
After you reverse your bits, your binary looks like this:
1000 0000 0000 0000 0000 0000 0000 0000
If you pull out your binary calculator and punch in this number and convert it to decimal, you'll see it's the value 2147483648. Because the leftmost bit is a 1, it's interpreted as a negative number. Because JavaScript only has generic var variables, it assumes you want a signed result. The >>> operator in JavaScript is called a zero-fill right shift, and using it with an operand of 0 (>>> 0) tells JavaScript that you want an unsigned result.
In case you're curious (or some other reader of this post is curious), here is how a binary-based computer deals with negative numbers. Suppose you want to store the value -96. How does a computer store this? Well first, just ignore the sign. 96 in binary looks like this:
0000 0000 0000 0000 0000 0000 0110 0000
Next, the computer performs a 2's complement. This is accomplished by first inverting each bit (1's become 0's and 0's become 1's):
1111 1111 1111 1111 1111 1111 1001 1111
Finally, you simply add 1, which looks like this:
1111 1111 1111 1111 1111 1111 1010 0000
Internally, this is how it's stored in your computer's memory. This bit pattern equates to 4,294,967,200 when read as unsigned, or -96 when read as signed.
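You can verify this from the console: the >>> 0 trick mentioned above reinterprets a signed value as its unsigned 32-bit pattern (a quick sketch):
console.log(-96 >>> 0);               // 4294967200
console.log((-96 >>> 0).toString(2)); // 11111111111111111111111110100000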

Shouldn't it be a positive number? Where am I wrong?
No, it should not. All bitwise operators except >>> work with signed 32-bit integers, where a leading 1 (in two's complement) signifies a negative number.
As suggested by @Icemanind, you can use the unsignedness of >>> to fix that and "cast" the result to an unsigned integer:
return re >>> 0;
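Putting the two answers together, a sketch of the corrected function (the only change is the final unsigned shift):
var reverseBits = function(n) {
    var re = 0;
    for (var i = 0; i < 32; i++) {
        re = (re << 1) | (n & 1); // shift the result left and append the lowest bit of n
        n >>>= 1;                 // unsigned-shift n right to expose the next bit
    }
    return re >>> 0; // reinterpret the 32-bit pattern as an unsigned number
};

console.log(reverseBits(1)); // 2147483648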

Related

JavaScript bitwise & operator appends 1 on unintended position

When I run 0x00000000C17E000F & 0x00000000C17E0000 in JavaScript, it returns -1048707072 instead of 3246260224.
Binary of each is
0x00000000C17E000F : (32 zeros omitted) 1100 0001 0111 1110 0000 0000 0000 1111
0x00000000C17E0000 : (32 zeros omitted) 1100 0001 0111 1110 0000 0000 0000 0000
so I expected the result of 0x00000000C17E000F & 0x00000000C17E0000 to be 3246260224, but it results in -1048707072.
3246260224 : (32 zeros omitted) 1100 0001 0111 1110 0000 0000 0000 0000
-1048707072 : (32 ones omitted) 1100 0001 0111 1110 0000 0000 0000 0000
Why does JavaScript calculate 0 & 0 as 1 there (in the 33rd to 64th bits)?
(I heard that a JavaScript Number consists of 64 bits.)
From the MDN on bitwise operators:
The operands are converted to 32-bit integers and expressed by a series of bits (zeroes and ones). Numbers with more than 32 bits get their most significant bits discarded.
Your numbers aren't between -2^31 and 2^31, so they're changed by the conversion.
Note that this isn't really overflowing nor a limitation of the storage: Numbers in JavaScript are 64 bits (IEEE 754 doubles), which means they can exactly represent all integers between -2^53 and 2^53. The limitation to 32 bits applies only to the bitwise operators, by design.
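You can see both effects in the console: the operands are converted to signed 32-bit integers (so values above 2^31 become negative), and the >>> 0 trick from the first question reinterprets the signed result as unsigned (a quick sketch):
const a = 0x00000000C17E000F; // 3246260239
const b = 0x00000000C17E0000; // 3246260224
console.log(a & b);           // -1048707072: 0xC17E0000 read as a signed 32-bit integer
console.log((a & b) >>> 0);   // 3246260224: the same 32 bits read as unsigned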
A consequence of this 32-bit limitation is that you may very well design your own functions to do bitwise operations on bigger integers:
function and(a, b){
    // Pad both binary strings to the same length so that corresponding bits
    // line up even when a and b have different magnitudes.
    let len = Math.max(a.toString(2).length, b.toString(2).length);
    let abits = Array.from(a.toString(2).padStart(len, "0"));
    let bbits = Array.from(b.toString(2).padStart(len, "0"));
    // AND each pair of bits, then parse the result back from binary.
    return parseInt(abits.map((abit, i) => abit & bbits[i]).join(""), 2);
}
console.log(and(0x00000000C17E000F, 0x00000000C17E0000)); // 3246260224
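Note that this sketch assumes non-negative integer inputs (toString(2) produces a leading minus sign for negative numbers). Because the operands are padded to the same length, numbers with different binary lengths also line up correctly:
console.log(and(12, 5)); // 4, the same result as the built-in 12 & 5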

~ bitwise operator in JavaScript

I have the following code :
var a = parseInt('010001',2);
console.log(a.toString(2));
// 10001
var b = ~a;
console.log(b.toString(2));
// -10010
The MSDN says:
~ Performs the NOT operator on each bit. NOT a yields the inverted
value (a.k.a. one's complement) of a.
~010001 should thus return 101110.
This topic seems to confirm that.
So I can't understand how we can get -10010 instead. The only potential explanation is that:
010001 is negated to 101110, but this is written as -10001,
and then, for some obscure reason, the two's complement is taken
and -10001 becomes -10010.
But all this is very unclear in my mind; would you have an idea of what happens precisely?
JavaScript's bitwise operators convert their operands to 32-bit signed integers (the usual 2's complement), perform their operation, then return the result as the most appropriate value in JavaScript's number type (double-precision floating point; JavaScript doesn't have an integer type). More in §12.5.11 (Bitwise NOT Operator ~) and §7.1.5 (ToInt32).
So your 10001 is:
00000000 00000000 00000000 00010001
which when ~ is:
11111111 11111111 11111111 11101110
...which is indeed negative in two's complement representation.
You may be wondering: If the bit pattern is as above, then why did b.toString(2) give you -10010 instead? Because it's showing you signed binary, not the actual bit pattern: the - means negative, and 10010 means 18 in decimal. The bit pattern above is how that's represented in two's complement bits. (And yes, I did have to go check myself on that!)
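If you want to see the actual 32-bit pattern rather than the signed rendering, the unsigned right shift by 0 works here too (a small sketch):
var b = ~17;                        // -18
console.log(b.toString(2));         // -10010 (a minus sign plus the magnitude 18)
console.log((b >>> 0).toString(2)); // 11111111111111111111111111101110 (the real bit pattern)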
Under the covers, when JavaScript does bitwise operations, it converts the operands to a 32-bit signed integer representation, uses that, and then converts the result back into its standard Number representation.
As such, your input value, 010001, becomes 00000000 00000000 00000000 00010001.
This is then inverted:
~00000000 00000000 00000000 00010001 => 11111111 11111111 11111111 11101110
Converted into hex, the inverted value is 0xFFFFFFEE, which is equivalent to the decimal value of -18.
Since this is a signed integer with the value -18, JavaScript converts it back to the Number -18.
When JavaScript is asked to print it as a base-2 string, it sees the negative sign and the magnitude 18, and prints it as -10010, since 10010 is the binary representation of positive 18.
JavaScript's bitwise operators work on 32-bit signed numbers, so
a (010001) (17) is 0000 0000 0000 0000 0000 0000 0001 0001
b = ~a (?) (-18) is 1111 1111 1111 1111 1111 1111 1110 1110
The reason -18 prints as -10010, and methods to get the actual bit pattern, are explained well here: Link
As per the documentation on the Mozilla developer website here: Bitwise NOTing any number x yields -(x + 1). For example, ~5 yields -6. That is why you are getting the negative sign in front of the number.

Why are two different numbers equal in JavaScript?

I've been messing around with a JavaScript console, when I suddenly decided to try this:
0x100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 == 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
Surprisingly, they're equal:
Why does it happen? They're clearly different numbers (the 0xFFFF...FFFF one is even a digit shorter).
If I add an F to the 0xFFFF...FF, they're not equal anymore:
0x100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 == 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
Is this expected behaviour?
All numbers in JavaScript are internally represented by 64-bit floating point numbers (see §4.3.19 of the specification). That means the Number type can exactly represent every integer from 0 up to 9007199254740992 (hex value 0x20000000000000). Any integer greater than that (or less than its negative counterpart) may need to be rounded to the closest representable value.
Observe:
9007199254740992 === 9007199254740993
> true
However, two numbers that are rounded to sufficiently different approximate values still evaluate to different values when you compare them. For example:
9007199254740992 === 9007199254740994
> false
This is what you're seeing in the second snippet where you add another F digit.
Note: The ECMAScript specification now defines Number.MAX_SAFE_INTEGER as a global constant equal to 9007199254740991.
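A quick way to check whether a particular integer survives the round trip through the double representation (a sketch using that constant's companion helper):
console.log(Number.MAX_SAFE_INTEGER);                // 9007199254740991
console.log(Number.isSafeInteger(9007199254740991)); // true
console.log(Number.isSafeInteger(9007199254740993)); // false: the literal is parsed as 9007199254740992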
0x100000000000000 == 0xFFFFFFFFFFFFFF gives true while
0x10000000000000 == 0xFFFFFFFFFFFFF gives false. So the former is the "limit", say.
Let's analyze the numbers: 52 bits for 0xFFFFFFFFFFFFF and one additional bit for 0x10000000000000 in the internal representation.
EDIT: Numbers of such magnitude are not represented by long ints but by double precision floats, since they exceed the range of a 32-bit integer; every number in JavaScript is represented as an IEEE 754 double precision floating point value.
When you represent any IEEE 754 double precision FP number internally, you get:
0111 1111 1111 2222
2222 2222 2222 2222
2222 2222 2222 2222
2222 2222 2222 2222
Where (0) is the sign bit, (1) the exponent bits, and (2) the mantissa bits.
If you compare 0.5 == 5 * 0.1 in JavaScript you will get true, even though 0.1 itself cannot be represented exactly: the product 5 * 0.1 happens to round to exactly 0.5, so both sides of the comparison end up being the same double.
Edit - A correction about the mantissa: every normalized mantissa conceptually starts with 1, BUT that leading 1 is not stored (every nonzero exponent denotes a normalized number; mantissas for numbers with exponent 000 0000 0000 do not follow this rule). This means that every normalized mantissa has 52 explicit bits plus an implicit leading 1.
Now: what about the 52 bits? Notice that 0xFF... is 52 bits long. This means that it will be stored as: 0 for the sign (it is positive), 52 for the exponent, and 52 "1" digits in the mantissa (see a final note at the foot of this answer). Since one "1" is implicit, we'll store 51 "1" and one "0".
0100 0011 0010 1111
1111 1111 1111 1111
1111 1111 1111 1111
1111 1111 1111 1110
(exponent 1075 corresponds to actual exponent 52)
AND the other number has 53 bits: one "1" and 52 "0". Since the first "1" is implicit, it will be stored like:
0100 0011 0100 0000
0000 0000 0000 0000
0000 0000 0000 0000
0000 0000 0000 0000
(exponent 1076 corresponds to actual exponent 53)
Now it's time to compare values. The comparison works field by field: first the sign and exponent are compared, and if those are equal, the mantissa is considered.
Note that the comparison itself is exact: two doubles compare equal only when their bit patterns are identical (NaN aside). The two long literals in the question compare equal because both map to the same nearest double while being parsed; the rounding error (relatively at most about 2^-53) is introduced before the comparison, not tolerated by it. This is also why 0.3 == 0.2 + 0.1 is false: 0.3, 0.2 and 0.1 are each binary-non-representable and their rounding errors do not cancel, in contrast to 0.5, which is exactly representable, so 0.1 + 0.4 does round to exactly 0.5.
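Both cases are easy to check in the console (a quick sketch):
console.log(5 * 0.1 === 0.5);   // true:  the product rounds to exactly 0.5
console.log(0.1 + 0.4 === 0.5); // true:  the rounding errors happen to cancel
console.log(0.2 + 0.1 === 0.3); // false: the sum is 0.30000000000000004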
Note About the mantissa and the FP representation: The Mantissa is always, conceptually, lower than 1. If you want to represent a higher number, you must conceive it using an exponent. Examples:
0.5 is represented as 0.5 * 2 ^ 0 (consider the right operator precedence in math).
1 is represented not as 1 * 2 ^ 0 since the mantissa is always lower than 1, so the representation will be 0.5 * 2 ^ 1.
65, which has binary representation as 1000001, will be stored as (65/128) * 2 ^ 7.
These numbers are represented as (remember: the first "1" is implicit since these exponents are for normalized numbers):
0011 1111 1111 0000
... more 0 digits
(exponent 1023 stands for actual exponent 0, mantissa in binary repr. is 0.1, and the first "1" is implicit).
0100 0000 0000 0000
... more 0 digits
(exponent 1024 stands for actual exponent 1, mantissa in binary repr. is 0.1, and the first "1" is implicit).
and
0100 0000 0110 0000
0100 0000 0000 0000
(exponent 1030 stands for actual exponent 7, mantissa in binary repr. is 0.1000001, and since the first "1" is implicit, it is stored as 0000 0100 0000...)
Note About the exponent: Smaller magnitudes can be represented by allowing negative exponents as well: the exponent field looks positive (unsigned), but in reality you must subtract 1023 (called the "bias") from that value to get the actual exponent (this means that an exponent field of "1" actually corresponds to 2^(-1022)). Translating this to a power of 10, the lowest exponent is about -308 for decimal numbers (considering also the mantissa position, as I will show later). The lowest positive number is:
0000 0000 0000 0000
0000 0000 0000 0000
0000 0000 0000 0000
0000 0000 0000 0001
which is: (2^-52) * 2^(-1022) = 2^(-1074), the -52 coming from the mantissa position and the -1022 from the exponent (the all-zero exponent field denotes subnormal numbers and has the same effective exponent as field "1", but with no implicit leading "1"). This is about 5 * 10^-324; the smallest normal number is roughly 2.2 * 10^-308, which is where the usual 10^-308 figure comes from.
The lowest exponent field is 000 0000 0000. There's a rule: every mantissa must begin with (an implicit) "1" or have this exponent. On the other side, the highest exponent could be 111 1111 1111, but this exponent is reserved for special pseudo-numbers:
0111 1111 1111 0000
0000 0000 0000 0000
0000 0000 0000 0000
0000 0000 0000 0000
corresponds to +Infinity, while:
1111 1111 1111 0000
0000 0000 0000 0000
0000 0000 0000 0000
0000 0000 0000 0000
corresponds to -Infinity, and any pattern with a nonzero mantissa, like:
?111 1111 1111 0000
0000 0000 0000 0000
0000 0000 0000 0000
0000 0000 0000 0001
corresponds to NaN (not a number; ideal for representing things like log(-1) or 0/0). Actually I'm not sure which mantissa values are used for quiet versus signaling NaN. The question mark stands for any bit.
The following hexadecimal number:
0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
is stored as the IEEE 754 standard float value:
1.3407807929942597e+154
You add 1 to this number and it becomes:
0x100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
which is also stored as:
1.3407807929942597e+154
Both numbers lie outside the range of integers that can be accurately represented by the JavaScript Number type (ref). In the above example, both numbers end up having the same internal representation, hence they are, sort of, equal.
Reminder: one should not compare floating point numbers using the equality operator (ref).
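A shorter way to see the same rounding at work, without typing out the long literals (the two hex numbers above are 2^512 - 1 and 2^512):
console.log(Math.pow(2, 512));                          // 1.3407807929942597e+154
console.log(Math.pow(2, 512) === Math.pow(2, 512) - 1); // true: 2^512 - 1 is not representable and rounds up to 2^512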
This is rounding, not overflow: work out the magnitude of the numbers and check it against the largest integer that can be represented exactly.

Why does ~-1 equal 0 and ~1 equal -2?

According to subsection 11.4.8 of the ECMAScript 5.1 standard:
The production UnaryExpression : ~ UnaryExpression is evaluated as follows:
Let expr be the result of evaluating UnaryExpression.
Let oldValue be ToInt32(GetValue(expr)).
Return the result of applying bitwise complement to oldValue. The result is a signed 32-bit integer.
The ~ operator will invoke the internal method ToInt32. In my understanding, ToInt32(1) and ToInt32(-1) will return the same value, 1, but why does ~-1 equal 0 and ~1 equal -2?
Now my question is: why does ToInt32(-1) equal -1?
subsection 9.5 of the ECMAScript 5.1 standard:
The abstract operation ToInt32 converts its argument to one of 2^32 integer values in the range -2^31 through 2^31-1, inclusive. This abstract operation functions as follows:
1. Let number be the result of calling ToNumber on the input argument.
2. If number is NaN, +0, -0, +Infinity, or -Infinity, return +0.
3. Let posInt be sign(number) * floor(abs(number)).
4. Let int32bit be posInt modulo 2^32; that is, a finite integer value k of Number type with positive sign and less than 2^32 in magnitude such that the mathematical difference of posInt and k is mathematically an integer multiple of 2^32.
5. If int32bit is greater than or equal to 2^31, return int32bit - 2^32, otherwise return int32bit.
When the argument is -1, according to 9.5:
in step 1, number will still be -1;
step 2 is skipped;
in step 3, posInt will be -1;
in step 4, int32bit will be 1;
in step 5, it will return 1.
Which step is wrong?
-1 as a 32-bit integer is
1111 1111 1111 1111 1111 1111 1111 1111
So ~-1 will be
0000 0000 0000 0000 0000 0000 0000 0000
Which is zero.
1 as a 32-bit integer is
0000 0000 0000 0000 0000 0000 0000 0001
So ~1 will be
1111 1111 1111 1111 1111 1111 1111 1110
Which is -2.
You should read about two's complement to understand how negative integers are represented in binary.
Where did you get the idea that ToInt32(-1) evaluates to 1? It evaluates to -1, which in 32-bit, two's complement binary representation, is all bits set to 1. When you apply the ~ operator, every bit then becomes 0, which is the representation of 0 in 32-bit, two's complement binary.
The representation of 1 is all bits 0 except for bit 0. When the bits are inverted, the result is all bits 1 except for bit 0. This happens to be the two's complement representation of -2. (To see this, just subtract 1 from the two's complement representation of -1.)
ToInt32(1) will return 1
ToInt32(-1) will return -1
-1 is represented as a signed 32-bit integer by having all 32 bits set. The bitwise complement of that is all bits clear, thus ~-1 yields 0
1 is represented as a signed 32-bit integer by having all bits clear except the bottom bit. The bitwise complement of that has all bits set except the bottom bit, which is the signed 32-bit integer representation of the value -2. Thus, ~1 yields -2
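For what it's worth, here is a small sketch of the ToInt32 steps quoted in the question (a hypothetical toInt32 helper, not how an engine actually implements it). It shows that the mistake is in step 4: for -1, int32bit is 2^32 - 1 = 4294967295, not 1, so step 5 returns -1:
function toInt32(x) {
    var number = Number(x);                   // step 1: ToNumber
    if (!isFinite(number) || number === 0) {  // step 2: NaN, +0, -0, +Infinity, -Infinity -> +0
        return 0;
    }
    var posInt = (number < 0 ? -1 : 1) * Math.floor(Math.abs(number));  // step 3
    var twoTo32 = Math.pow(2, 32);
    var int32bit = ((posInt % twoTo32) + twoTo32) % twoTo32;            // step 4: modulo 2^32, always non-negative
    return int32bit >= Math.pow(2, 31) ? int32bit - twoTo32 : int32bit; // step 5
}

console.log(toInt32(-1)); // -1
console.log(toInt32(1));  //  1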

javascript bitwise operator question

In Javascript when I do this
var num = 1;
~ num == -2
why does ~num not equal 0?
In binary, 1 is stored as 1 ... thus NOT 1 should be 0,
or it is stored like 0001, thus NOT 0001 would be 1110.
I think I am missing something... can someone clear this up?
Look up Two's complement for signed binary numbers
Let's assume that a JavaScript Number is 8 bits wide (which it's not):
then
1 = 0000 0001b
and
~1 = 1111 1110b
Which is the binary representation of -2
0000 0010b = 2
0000 0001b = 1
0000 0000b = 0
1111 1111b = -1
1111 1110b = -2
~ toggles the bits of the operand so
00000001
becomes
11111110
which is -2
Note: In JavaScript, bitwise operations use 32-bit integers, but I shortened it to illustrate the point.
From the documentation:
Bitwise NOTing any number x yields -(x + 1). For example, ~5 yields -6.
The reason for this is that using a bitwise NOT reverses all the bits of a value. If you are storing the value of 1 in a signed 8-bit integer, you're storing the binary value 00000001. If you apply a bitwise NOT, you get 11111110, which for a signed 8-bit integer is the binary value for -2.
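A quick check of the -(x + 1) identity from the documentation (a sketch):
// ~x flips every bit of the 32-bit representation, which is the same as -(x + 1):
[0, 1, 5, -1, -96].forEach(function (x) {
    console.log(x, ~x, ~x === -(x + 1)); // the last column is always true
});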
