XOR operator in JavaScript different from XOR operator in Python

I am trying to replicate some JavaScript code in Python, and for some reason the XOR operator (^) in JavaScript gives me a different value than the XOR operator (^) in Python. I have an example below. I know the values should be different because of Math.random(), but why is the Python result several digits longer?
Javascript:
console.log(Math.floor(2147483648 * Math.random()) ^ 1560268851466)
= 1596700165
Python:
import math
import random
math.floor(2147483648 * random.random()) ^ 1560268851466
= 1559124407072

Your Python result is correct: XOR sees all of the input bits. Your longer operand is on the order of 2^40, and so is your final result.
The JavaScript result has been truncated to 32 bits, since that is the width to which both operands are converted.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Bitwise_Operators:
Bitwise operators treat their operands as a sequence of 32 bits (zeroes and ones), rather than as decimal, hexadecimal, or octal numbers. For example, the decimal number nine has a binary representation of 1001. Bitwise operators perform their operations on such binary representations, but they return standard JavaScript numerical values.
However the particular code you are using can be "fixed" via XOR-ing the 32-bit part of your number, and simply adding the rest:
// 1560268851466 = 0x16B_4745490A
console.log( (Math.floor(2147483648 * Math.random()) ^ 0x4745490A) + 0x16B00000000);
(As 2147483648 is 0x80000000, the random part is "fine": it is always below 2^31 and does not get truncated.)
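You can see the truncation directly: | 0 applies the same ToInt32 conversion that ^ does, so the large operand collapses to its low 32 bits (a quick check you can run in any JS console):
// ToInt32 keeps only the low 32 bits of 0x16B4745490A
console.log(1560268851466 | 0);           // 1195723018 (= 0x4745490A)
console.log(1560268851466 % 0x100000000); // 1195723018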

Related

JavaScript XOR returning incorrect value of 0 [duplicate]

I am performing the following operation
let a = 596873718249029632;
a ^= 454825669;
console.log(a);
Output is 454825669 but the output should have been 596873718703855301. Where am I going wrong? What should I do to get 596873718703855301 as the output?
EDIT: I am using the Node.js big-integer library; my Node version is 8.12.0.
var bigInt = require("big-integer");
let xor = bigInt(596873718249029632).xor(454825669);
console.log(xor)
Output is
{ [Number: 596873717794203900]
value: [ 4203941, 7371779, 5968 ],
sign: false,
isSmall: false }
This is wrong; it should have been 596873718703855301.
From MDN documentation about XOR:
The operands are converted to 32-bit integers and expressed by a series of bits (zeroes and ones). Numbers with more than 32 bits get their most significant bits discarded.
Since the 32 least significant bits of 596873718249029632 are all 0, then the value of a is effectively 0 ^ 454825669, which is 454825669.
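A quick sanity check of that claim (any bitwise operator triggers the same ToInt32 conversion):
console.log(596873718249029632 | 0); // 0, so a ^= 454825669 leaves just 454825669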
To get the intended value of 596873718703855301, BigInts can be used, which allow you to perform operations outside of the range of the Number primitive, so now your code would become:
let a = 596873718249029632n;
a ^= 454825669n;
console.log(a.toString());
In response to your edit, when working with integers and Number, you need to ensure that your values do not exceed Number.MAX_SAFE_INTEGER (equal to 2^53 − 1; beyond that point, double-precision floating point numbers lose the precision needed to represent every integer). The following snippet worked for me:
var bigInt = require("big-integer");
let xor = bigInt("596873718249029632").xor("454825669");
console.log(xor.toString());

How do | and + do the trick to turn a string into a number?

When I was reading a doc about Symbol on MDN, I noticed these operators can turn a string into a number, which I've never seen before.
Quote:
When trying to convert a symbol to a number, a TypeError will be
thrown (e.g. +sym or sym | 0).
For example:
+"15"
will return
15
which is number type.
Also
"15" | 0
can do the same thing.
I am wondering how this trick works.
Can you help?
+"15" is casting the "15" to a number type, the same way -15 works.
eg.
>> -"15" === -15
>> true
The second case, "15" | 0, does the same thing, casting the string to an integer in order to perform a bitwise OR.
That means taking the bits of 15 and ORing them with the bits of zero.
15 in binary is 00001111 (using 8 bits here) and zero is 00000000, so each pair of bits is OR'd, producing 15 again, which is returned.
Unary Plus Operator
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Arithmetic_Operators#Unary_plus_()
The unary plus operator precedes its operand and evaluates to its operand but attempts to convert it into a number, if it isn't already. Although unary negation (-) also can convert non-numbers, unary plus is the fastest and preferred way of converting something into a number, because it does not perform any other operations on the number. It can convert string representations of integers and floats, as well as the non-string values true, false, and null. Integers in both decimal and hexadecimal ("0x"-prefixed) formats are supported. Negative numbers are supported (though not for hex). If it cannot parse a particular value, it will evaluate to NaN.
+"6";//6
+6;//6
+-6;//-6
+undefined;//NaN
+false;//0
+true;//1
+null;//0
+{};//NaN
+[];//0
+function(){};//NaN
Bitwise OR Operator
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Bitwise_Operators#.7c_%28Bitwise_OR%29
The operands are converted to 32-bit integers and expressed by a series of bits (zeroes and ones). Numbers with more than 32 bits get their most significant bits discarded.
Each bit in the first operand is paired with the corresponding bit in the second operand: first bit to first bit, second bit to second bit, and so on.
The operator is applied to each pair of bits, and the result is constructed bitwise.
The Bitwise OR Operator first converts both operands to 32-bit integers, then compares them bit by bit: if either bit is 1, the result bit is 1; if both bits are 0, the result bit is 0.
Example:
2|1;//produces 3
--------
00000010 //2 in binary
00000001 //1 in binary
--------
00000011 //3 in binary
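One practical difference worth noting: unary plus keeps the fractional part, while | 0 runs the full ToInt32 conversion and so truncates toward zero (and turns NaN into 0):
+"15.7";    // 15.7
"15.7" | 0; // 15
+"abc";     // NaN
"abc" | 0;  // 0 (ToInt32(NaN) is 0)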

Is it correct to set bit 31 in JavaScript?

When I try to set bit 31 with 0 | 1 << 31, I get the following result:
console.log(0 | 1 << 31); // -2147483648
Which is actually:
console.log((-2147483648).toString(2)) // -10000000000000000000000000000000
Is it correct to set bit 31, or should I restrict myself to bit 30 to prevent negative values?
Per ECMAScript 5, the bitwise and shift operators operate on 32-bit ints, so the largest positive value you can get is 2^31 − 1, or 2147483647.
Here is one explanation.
The << operator is defined as working on signed 32-bit integers (converted from the native Number storage of double-precision float). So 1<<31 must result in a negative number.
The only JavaScript operator that works using unsigned 32-bit integers is >>>. You can exploit this to convert a signed-integer-in-Number you've been working on with the other bitwise operators to an unsigned-integer-in-Number:
(1<<31)>>>0
Most bitwise operations are specified as converting their operands to signed 32-bit integers. It is perfectly correct to use bit 31, but yes, you'll get negative values. Usually it doesn't matter if you're doing bitwise operations anyway, since all you (should) care about is the bit pattern, not the decimal value of the number.
If you do want a positive value back, you can convert it back with >>> 0, because >>> is specified to convert its operands to unsigned 32-bit integers.
console.log((0 | 1 << 31) >>> 0); // 2147483648
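If you do use bit 31 as a flag, test it with & rather than comparing signed decimal values; a minimal sketch (FLAG is a name made up for this example):
var FLAG = 1 << 31;                // -2147483648 as a signed Int32
var value = 0 | FLAG;
console.log((value & FLAG) !== 0); // true: only the bit pattern matters
console.log(value >>> 0);          // 2147483648, if you need it unsigned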

Bitwise operations on strings in JavaScript

In JavaScript, the following test of character-to-character binary operations prints 0 676 times (26 × 26):
var s = 'abcdefghijklmnopqrstuvwxyz';
var i, j;
for(i=0; i<s.length;i++){ for(j=0; j<s.length;j++){ console.log(s[i] | s[j]) }};
If JS were using the actual binary representation of the strings, I would expect some non-zero values here.
Similarly, testing binary operations on strings and integers, the following print 255 and 0 twenty-six times each, respectively. (255 was chosen because it is 11111111 in binary.)
var s = 'abcdefghijklmnopqrstuvwxyz';
var i; for(i=0; i<s.length;i++){ console.log(s[i] | 255) }
var i; for(i=0; i<s.length;i++){ console.log(s[i] & 255) }
What is JavaScript doing here? It seems like JavaScript is casting any string to false before binary operations.
Notes
If you try this in python, it throws an error:
>>> s = 'abcdefghijklmnopqrstuvwxyz'
>>> [c1 | c2 for c2 in s for c1 in s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for |: 'str' and 'str'
But stuff like this seems to work in php.
In JavaScript, when a string is used with a binary operator it is first converted to a number. Relevant portions of the ECMAScript spec are shown below to explain how this works.
Bitwise operators:
The production A : A # B, where # is one of the bitwise operators in the productions above, is evaluated as follows:
Let lref be the result of evaluating A.
Let lval be GetValue(lref).
Let rref be the result of evaluating B.
Let rval be GetValue(rref).
Let lnum be ToInt32(lval).
Let rnum be ToInt32(rval).
Return the result of applying the bitwise operator # to lnum and rnum. The result is a signed 32 bit integer.
ToInt32:
The abstract operation ToInt32 converts its argument to one of 2^32 integer values in the range −2^31 through 2^31 − 1, inclusive. This abstract operation functions as follows:
Let number be the result of calling ToNumber on the input argument.
If number is NaN, +0, −0, +∞, or −∞, return +0.
Let posInt be sign(number) * floor(abs(number)).
Let int32bit be posInt modulo 2^32; that is, a finite integer value k of Number type with positive sign and less than 2^32 in magnitude such that the mathematical difference of posInt and k is mathematically an integer multiple of 2^32.
If int32bit is greater than or equal to 2^31, return int32bit − 2^32; otherwise return int32bit.
The internal ToNumber function will return NaN for any string that cannot be parsed as a number, and ToInt32(NaN) will give 0. So in your code example all of the bitwise operators with letters as the operands will evaluate to 0 | 0, which explains why only 0 is printed.
Note that something like '7' | '8' will evaluate to 7 | 8, because in this case the strings used as the operands can be successfully converted to numbers.
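You can confirm both cases in the console:
console.log('7' | '8'); // 15, i.e. 0111 | 1000
console.log('a' | 'b'); // 0, since both sides become ToInt32(NaN) = 0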
As for why the behavior in Python is different, there isn't really any implicit type conversion in Python so an error is expected for any type that doesn't implement the binary operators (by using __or__, __and__, etc.), and strings do not implement those binary operators.
Perl does something completely different: bitwise operators are implemented for strings, and they essentially apply the operator to the corresponding bytes from each string.
If you want to use JavaScript and get the same result as Perl, you will need to first convert the characters to their code points using str.charCodeAt, perform the bitwise operator on the resulting integers, and then use String.fromCodePoint to convert the resulting numeric values into characters.
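A minimal sketch of that Perl-style behavior for two equal-length strings, using the round trip described above:
// Hypothetical helper: OR the code points of two same-length strings.
function stringOr(a, b) {
    var out = "";
    for (var i = 0; i < a.length; i++) {
        out += String.fromCodePoint(a.charCodeAt(i) | b.charCodeAt(i));
    }
    return out;
}
console.log(stringOr("abc", "ABC")); // "abc" (OR sets the lowercase bit, 0x20)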
I'd be surprised if JavaScript worked at all with bitwise operations on non-numerical strings and produced anything meaningful. Because any bitwise operator in JavaScript converts its operands to 32-bit integers, I'd imagine it simply turns all non-numerical strings into 0.
I'd use...
"a".charCodeAt(0) & 0xFF
That produces 97, the ASCII code for "a", which is correct, given that it's masked with a byte that has all bits set.
Remember that just because things work nicely in other languages doesn't mean they do in JavaScript. We're talking about a language conceived and implemented in a very short amount of time.
JavaScript uses type coercion, which means it attempts to parse the strings as numbers automatically when you perform a numeric operation on them. Here the parse yields NaN, which the bitwise operators then treat as 0. This obviously won't get you the information you're trying to get.
I think what you're looking for is charCodeAt, which will give you the numeric Unicode value for a character in a string, and possibly the complementary fromCodePoint, which converts the numeric value back to a character.

Dealing With Binary / Bitshifts in JavaScript

I am trying to perform some bitshift operations and dealing with binary numbers in JavaScript.
Here's what I'm trying to do. A user inputs a value and I do the following with it:
// Square Input and mod with 65536 to keep it below that value
var squaredInput = (inputVal * inputVal) % 65536;
// Figure out how many bits the squared input number has
var bits = Math.floor(Math.log(squaredInput) / Math.log(2)) + 1;
// Convert that number to a 16-bit number using bitshift.
var squaredShifted = squaredInput >>> (16 - bits);
As long as the number is larger than 46, it works. Once it is less than 46, it does not work.
I know the problem is in the bitshift. Coming from a C background, I know this would be done differently, since all numbers would be stored in 32-bit format (given it is an int). Does JavaScript do the same (since its vars are not typed)?
If so, is it possible to store a 16-bit number? If not, can I treat it as 32-bits and do the required calculations to assume it is 16-bits?
Note: I am trying to extract the middle 4 bits of the 16-bit value in squaredInput.
Another note: when printing out the var, it just prints the value without padding, so I couldn't figure it out. I tried using parseInt and toString.
Thanks
Are you looking for this?
function get16bitnumber( inputVal ){
    return ("0000000000000000" + (inputVal * inputVal).toString(2)).substr(-16);
}
This function returns the last 16 bits of the (inputVal * inputVal) value as a binary string. With a binary string you can work with any range of bits.
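And if the goal is the asker's stated one, extracting the middle 4 bits of the 16-bit value, a shift-and-mask sketch (assuming "middle" means bits 6–9, counting from bit 0):
var inputVal = 200;                               // example input
var squaredInput = (inputVal * inputVal) % 65536; // 40000
var middle4 = (squaredInput >>> 6) & 0x0F;        // drop the low 6 bits, keep 4
console.log(middle4);                             // 1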
Don't use bitshifting in JS if you don't absolutely have to. The specs mention at least four number formats
IEEE 754
Int32
UInt32
UInt16
It's really confusing to know which is used when.
For example, ~ applies a bitwise inversion while converting to Int32. UInt16 seems to be used only in String.fromCharCode. Using bitshift operators converts the operands to either UInt32 or to Int32.
In your case, the unsigned right shift operator >>> forces conversion to UInt32.
When you type
a >>> b
this is what you get:
ToUInt32(a) >>> (ToUInt32(b) & 0x1f)
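A quick illustration of that UInt32 conversion, and of how it differs from the signed >>:
console.log(-1 >>> 0);         // 4294967295, i.e. ToUInt32(-1) = 2^32 - 1
console.log(2147483648 >> 31); // -1: >> converts to Int32 first, so bit 31 is the sign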
