How to find the 32-bit version of a number - javascript

How can I get the 32-bit version of a number in JavaScript? I want to work with numbers using the bitwise AND operator, and I've read that bitwise operations are performed on the 32-bit versions of numbers.
My second question: with the JavaScript bitwise AND operator (&), the operation on the numbers is performed in 32 bits, right? Does the result then get converted back to a 64-bit value at the end?

According to the ECMAScript specification, the return values from bitwise operations must be 32-bit integers. A relevant quote:
The production A : A # B, where # is one of the bitwise operators in the productions above, is evaluated as follows:
Let lref be the result of evaluating A.
Let lval be GetValue(lref).
Let rref be the result of evaluating B.
Let rval be GetValue(rref).
Let lnum be ToInt32(lval).
Let rnum be ToInt32(rval).
Return the result of applying the bitwise operator # to lnum and rnum.
The result is a signed 32 bit integer.
Therefore, to convert any number to a 32-bit integer, you can simply perform a bitwise operation that has no effect on the value. For example, here I convert a float to an integer using a no-op bitwise OR (| 0):
var x = 1.2, y = 1;
x = x | 0;
alert(x == y); // prints "true"
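Any bitwise no-op triggers the same ToInt32 conversion, so there are several equivalent idioms; a minimal sketch (the caveat at the end applies to all of them):
var f = 3.75;
console.log(f | 0);          // 3
console.log(f >> 0);         // 3
console.log(~~f);            // 3 (double bitwise NOT)
// Caveat: values outside the signed 32-bit range wrap around rather than truncate:
console.log(4294967296 | 0); // 0 (2^32 wraps to 0)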

Related

right shift >> turns value into zero javascript

Trying some bit manipulation in javascript.
Consider the following:
const n = 4393751543811;
console.log(n.toString(2)) // '111111111100000000000000000000000000000011'
console.log(n & 0b11) // last two bits equal 3
const m = n >> 2; // right shift 2
// The unexpected.
console.log(m.toString(2)) // '0'
The result is 0? The expected output I am looking for after the right shift is:
111111111100000000000000000000000000000011 // pre
001111111111000000000000000000000000000000 // post >>
How is this accomplished?
JavaScript bitwise operators on numbers work "as if" on 32-bit integers.
>> (the sign-propagating right shift for numbers) will first convert its operand to a 32-bit integer. If you read the linked spec, note specifically:
Let int32bit be int modulo 2^32.
In other words, all bits above 32 will simply be ignored. For your number, this results in the following:
111111111100000000000000000000000000000011
┗removed━┛┗━━━━━━━━━━━━━━32bit━━━━━━━━━━━━━┛
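You can watch this truncation happen by forcing the ToInt32 conversion with a no-op bitwise OR; a minimal sketch:
const n = 4393751543811;
console.log((n | 0).toString(2)); // '11' — only the low 32 bits (value 3) survive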
If you want, you can use BigInt:
const n = 4393751543811n; // note the n-suffix
console.log(n.toString(2))
console.log(n & 0b11n) // for BigInt, all operands must be BigInt
const m = n >> 2n;
// The expected.
console.log(m.toString(2))
The spec for >> on BigInt uses BigInt::leftShift(x, -y), which in turn states:
Semantics here should be equivalent to a bitwise shift, treating the BigInt as an infinite length string of binary two's complement digits.
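As an aside, if you want to reproduce the Number behavior while staying in BigInt, BigInt.asIntN can wrap a value to a signed 32-bit range; a minimal sketch:
const big = 4393751543811n;
console.log(BigInt.asIntN(32, big));       // 3n — the same low 32 bits Number keeps
console.log(BigInt.asIntN(32, big) >> 2n); // 0n, matching the Number result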

Bitwise operations on strings in javascript

In JavaScript, the following test of character-to-character bitwise operations prints 0 676 times:
var s = 'abcdefghijklmnopqrstuvwxyz';
var i, j;
for (i = 0; i < s.length; i++) {
  for (j = 0; j < s.length; j++) {
    console.log(s[i] | s[j]);
  }
}
If JS were using the actual binary representation of the strings, I would expect some non-zero values here.
Similarly, testing bitwise operations on strings and integers, the following snippets print 255 and 0, respectively, 26 times each. (255 was chosen because it is 11111111 in binary.)
var s = 'abcdefghijklmnopqrstuvwxyz';
var i;
for (i = 0; i < s.length; i++) { console.log(s[i] | 255); }
for (i = 0; i < s.length; i++) { console.log(s[i] & 255); }
What is JavaScript doing here? It seems as though JavaScript is casting any string to false before the bitwise operations.
Notes
If you try this in python, it throws an error:
>>> s = 'abcdefghijklmnopqrstuvwxyz'
>>> [c1 | c2 for c2 in s for c1 in s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for |: 'str' and 'str'
But stuff like this seems to work in Perl.
In JavaScript, when a string is used with a bitwise operator, it is first converted to a number. The relevant portions of the ECMAScript spec are shown below to explain how this works.
Bitwise operators:
The production A : A # B, where # is one of the bitwise operators in the productions above, is evaluated as follows:
Let lref be the result of evaluating A.
Let lval be GetValue(lref).
Let rref be the result of evaluating B.
Let rval be GetValue(rref).
Let lnum be ToInt32(lval).
Let rnum be ToInt32(rval).
Return the result of applying the bitwise operator # to lnum and rnum. The result is a signed 32 bit integer.
ToInt32:
The abstract operation ToInt32 converts its argument to one of 2^32 integer values in the range −2^31 through 2^31−1, inclusive. This abstract operation functions as follows:
Let number be the result of calling ToNumber on the input argument.
If number is NaN, +0, −0, +∞, or −∞, return +0.
Let posInt be sign(number) * floor(abs(number)).
Let int32bit be posInt modulo 2^32; that is, a finite integer value k of Number type with positive sign and less than 2^32 in magnitude such that the mathematical difference of posInt and k is mathematically an integer multiple of 2^32.
If int32bit is greater than or equal to 2^31, return int32bit − 2^32, otherwise return int32bit.
The internal ToNumber function will return NaN for any string that cannot be parsed as a number, and ToInt32(NaN) will give 0. So in your code example all of the bitwise operators with letters as the operands will evaluate to 0 | 0, which explains why only 0 is printed.
Note that something like '7' | '8' will evaluate to 7 | 8, because in this case the strings used as operands can be successfully converted to numbers.
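A quick sketch of the conversions described above, runnable in any console:
console.log(Number('a')); // NaN
console.log('a' | 'b');   // 0  — ToInt32(NaN) is 0 on both sides
console.log('7' | '8');   // 15 — both strings parse as numbers, so this is 7 | 8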
As for why the behavior in Python is different: there isn't really any implicit type conversion in Python, so an error is expected for any type that doesn't implement the bitwise operators (via __or__, __and__, etc.), and strings do not implement them.
Perl does something completely different: bitwise operators are implemented for strings, and Perl will essentially perform the bitwise operation on the corresponding bytes from each string.
If you want to use JavaScript and get the same result as Perl, you will need to first convert the characters to their code points using str.charCodeAt, apply the bitwise operator to the resulting integers, and then use String.fromCodePoint to convert the resulting numeric values back into characters.
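For example, here is a minimal sketch of a Perl-like byte-wise OR on two strings; bitwiseOrStrings is a hypothetical helper name, and it assumes the inputs have the same length:
function bitwiseOrStrings(a, b) {
  var out = '';
  for (var i = 0; i < Math.min(a.length, b.length); i++) {
    // OR the code points of the characters at position i, then convert back
    out += String.fromCodePoint(a.charCodeAt(i) | b.charCodeAt(i));
  }
  return out;
}
console.log(bitwiseOrStrings('abc', 'xyz')); // 'y{{'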
I'd be surprised if JavaScript worked at all with bitwise operations on non-numerical strings and produced anything meaningful. Because any bitwise operator in JavaScript converts its operand into a 32-bit integer, I'd imagine it simply turns all non-numerical strings into 0.
I'd use...
"a".charCodeAt(0) & 0xFF
That produces 97, the ASCII code for "a", which is correct, given it's masked off with a byte with all bits set.
Try to remember that because things work nicely in other languages, it isn't always the case in JavaScript. We're talking about a language conceived and implemented in a very short amount of time.
JavaScript uses type coercion, which lets it attempt to parse the strings as numbers automatically when you try to perform a numeric operation on them. For non-numeric strings the parsed value is NaN, which the bitwise operator then converts to 0. This obviously won't get you the information you're trying to get.
I think what you're looking for is charCodeAt, which will let you get the numeric Unicode value for a character in a string, and possibly the complementary String.fromCodePoint, which converts the numeric value back to a character.

Right shift operator - Javascript

I'm trying to understand why e.g. Math.random()*255>>0; will skip/remove all the decimals. The same thing happens if I write >>1 or >>2 instead of >>0.
I came across another SO post that said the x >> n operator can be viewed as x / 2^n. That still doesn't explain why the decimals go away.
Any help would be appreciated!
According to the spec, certain numerical operations are required to convert their arguments to 32-bit integers first. (http://www.ecma-international.org/ecma-262/5.1/#sec-11.7.2)
The production ShiftExpression : ShiftExpression >> AdditiveExpression is evaluated as follows:
Let lref be the result of evaluating ShiftExpression.
Let lval be GetValue(lref).
Let rref be the result of evaluating AdditiveExpression.
Let rval be GetValue(rref).
Let lnum be ToInt32(lval). ← The number is converted to a 32 bit integer here
Let rnum be ToUint32(rval).
Let shiftCount be the result of masking out all but the least significant 5 bits of rnum, that is, compute rnum & 0x1F.
Return the result of performing a sign-extending right shift of lnum by shiftCount bits. The most significant bit is propagated. The result is a signed 32-bit integer.
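In other words, the fractional part is discarded before the shift even happens; a minimal sketch:
var x = Math.random() * 255;
console.log(x >> 0);     // integer part only — same as Math.trunc(x) for in-range values
console.log(200.9 >> 1); // 100: 200.9 is first truncated to 200, then shifted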

Why do Ruby and JavaScript bitwise operators yield different results with the same operands?

Why do Ruby and JavaScript bitwise operators yield different results with the same operands?
For example:
256 >> -4 # => 4096 (Ruby)
256 >> -4 # => 0 (Javascript)
Any tips/pointers are appreciated.
For the Ruby version, it looks like 256 >> -4 is equivalent to 256 << 4, so the negative operand essentially just switches the direction of the shift.
From looking at the ECMAScript specification for the right-shift operator: in JavaScript, the right operand is converted to an unsigned 32-bit integer before the shift, so -4 becomes 4294967292. After this conversion, only the 5 least significant bits are used for the shift; in other words, we end up shifting by 4294967292 & 0x1f bits (which comes out to 28). It probably shouldn't surprise you at all to see that 256 >> 28 gives 0.
For convenience, here is the text from the spec (steps 6 and 7 are most relevant to your confusion here):
The Signed Right Shift Operator ( >> )
Performs a sign-filling bitwise right shift operation on the left operand by the amount specified by the right operand.
The production ShiftExpression : ShiftExpression >> AdditiveExpression is evaluated as follows:
Let lref be the result of evaluating ShiftExpression.
Let lval be GetValue(lref).
Let rref be the result of evaluating AdditiveExpression.
Let rval be GetValue(rref).
Let lnum be ToInt32(lval).
Let rnum be ToUint32(rval).
Let shiftCount be the result of masking out all but the least significant 5 bits of rnum, that is, compute rnum & 0x1F.
Return the result of performing a sign-extending right shift of lnum by shiftCount bits. The most significant bit is propagated. The result is a signed 32-bit integer.
As a side note, if you want to play around with this by converting a value to an unsigned 32-bit integer you can use val >>> 0 as seen in touint32.js from V8 JavaScript Engine.
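Reproducing the spec steps for 256 >> -4 by hand, as a sketch:
var rnum = -4 >>> 0;            // ToUint32(-4) === 4294967292
var shiftCount = rnum & 0x1f;   // mask to the 5 least significant bits
console.log(shiftCount);        // 28
console.log(256 >> shiftCount); // 0
console.log(256 >> -4);         // 0, the same thing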

Why does a shift by 0 truncate the decimal?

I recently found this piece of JavaScript code:
Math.random() * 0x1000000 << 0
I understood that the first part was just generating a random number between 0 and 0x1000000 (== 16777216).
But the second part seemed odd. What's the point of performing a bit-shift by 0? I didn't think that it would do anything. Upon further investigation, however, I noticed that the shift by 0 seemed to truncate the decimal part of the number. Furthermore, it didn't matter if it was a right shift, or a left shift, or even an unsigned right shift.
> 10.12345 << 0
10
> 10.12345 >> 0
10
> 10.12345 >>> 0
10
I tested both with Firefox and Chrome, and the behavior is the same. So, what is the reason for this observation? And is it just a nuance of JavaScript, or does it occur in other languages as well? I thought I understood bit-shifting, but this has me puzzled.
You're correct; it is used to truncate the value.
The reason >> works is that it operates only on 32-bit integers, so the value is truncated. (It's also commonly used in cases like these instead of Math.floor because bitwise operators have low operator precedence, so you can avoid a mess of parentheses.)
And since it operates only on 32-bit integers, it's also equivalent to a mask with 0xffffffff after truncation. So:
0x110000000 // 4563402752
0x110000000 >> 0 // 268435456
0x010000000 // 268435456
But that's not part of the intended behaviour since Math.random() will return a value between 0 and 1.
Also, it does the same thing as | 0, which is more common.
Math.random() returns a number between 0 (inclusive) and 1 (exclusive). Multiplying this number by a whole number results in a number that has a decimal portion. The << operator is a shortcut for eliminating the decimal portion:
The operands of all bitwise operators are converted to signed 32-bit
integers in big-endian order and in two's complement format.
The above statement means that the JavaScript engine will implicitly convert both operands of the << operator to 32-bit integers; for numbers, it does so by chopping off the fractional portion (numbers that do not fit in the 32-bit integer range lose more than just the decimal portion).
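A minimal sketch of both effects, the truncation and the wrap-around:
console.log(1234.5678 << 0);   // 1234 — fraction dropped
console.log(2147483648.5 | 0); // -2147483648 — 2^31 wraps to the negative end
console.log(4294967296 << 0);  // 0 — 2^32 wraps all the way around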
And is it just a nuance of JavaScript, or does it occur in other
languages as well?
You'll notice similar behavior in loosely typed languages. PHP for example:
var_dump(1234.56789 << 0);
// int(1234)
For strongly typed languages, the program will usually refuse to compile. C# complains like this:
Console.Write(1234.56789 << 0);
// error CS0019: Operator '<<' cannot be applied to operands of type 'double' and 'int'
For these languages, you already have type-casting operators:
Console.Write((int)1234.56789);
// 1234
From the Mozilla documentation of bitwise operators (which includes the shift operators)
The operands of all bitwise operators are converted to signed 32-bit integers in big-endian order and in two's complement format.
So basically the code is using that somewhat-incidental aspect of the shift operator as the only significant thing it does due to shifting by 0 bits. Ick.
And is it just a nuance of JavaScript, or does it occur in other languages as well?
I can't speak for all languages, of course, but neither Java nor C# permits double values to be the left operand of a shift operator.
According to ECMAScript Language Specification:
http://ecma-international.org/ecma-262/5.1/#sec-11.7.1
The production ShiftExpression : ShiftExpression >> AdditiveExpression is evaluated as follows:
Let lref be the result of evaluating ShiftExpression.
Let lval be GetValue(lref).
Let rref be the result of evaluating AdditiveExpression.
Let rval be GetValue(rref).
Let lnum be ToInt32(lval).
Let rnum be ToUint32(rval).
Let shiftCount be the result of masking out all but the least significant 5 bits of rnum, that is, compute rnum & 0x1F.
Return the result of performing a sign-extending right shift of lnum by shiftCount bits. The most significant bit is propagated. The result is a signed 32-bit integer.
The behavior you're observing is defined in the ECMA-262 standard.
Here's an excerpt from the specification of the << left shift operator:
The production ShiftExpression : ShiftExpression << AdditiveExpression is evaluated as follows:
Let lref be the result of evaluating ShiftExpression.
Let lval be GetValue(lref).
Let rref be the result of evaluating AdditiveExpression.
Let rval be GetValue(rref).
Let lnum be ToInt32(lval).
Let rnum be ToUint32(rval).
Let shiftCount be the result of masking out all but the least significant 5 bits of rnum, that is, compute rnum & 0x1F.
Return the result of left shifting lnum by shiftCount bits. The result is a signed 32-bit integer.
As you can see, both operands are cast to 32-bit integers. Hence the disappearance of the decimal parts.
The same applies for the other bit shift operators. You can find their respective descriptions in section 11.7 Bitwise Shift Operators of the document I linked to.
In this case, the only effect of performing the shift is type conversion. Math.random() returns a floating point value.
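For non-negative inputs like this one, the shift and Math.floor agree; a minimal sketch:
var r = Math.random() * 0x1000000;
console.log(r << 0);        // fraction removed
console.log(Math.floor(r)); // same result, since r is non-negative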
