I am performing the following operation:
let a = 596873718249029632;
a ^= 454825669;
console.log(a);
The output is 454825669, but it should have been 596873718703855301. Where am I going wrong? What should I do to get 596873718703855301 as the output?
EDIT: I am using the Node.js big-integer library; my Node version is 8.12.0.
var bigInt = require("big-integer");
let xor = bigInt(596873718249029632).xor(454825669);
console.log(xor)
Output is
{ [Number: 596873717794203900]
value: [ 4203941, 7371779, 5968 ],
sign: false,
isSmall: false }
It is wrong; it should have been 596873718703855301.
From the MDN documentation about XOR:
The operands are converted to 32-bit integers and expressed by a series of bits (zeroes and ones). Numbers with more than 32 bits get their most significant bits discarded.
Since the 32 least significant bits of 596873718249029632 are all 0, the value of a is effectively 0 ^ 454825669, which is 454825669.
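You can verify this directly in the console; | 0 applies the same ToInt32 conversion that the XOR performs:
console.log(596873718249029632 | 0); // 0 (the low 32 bits are all zero)
console.log(596873718249029632 % 4294967296); // 0 (the value is an exact multiple of 2^32)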
To get the intended value of 596873718703855301, BigInts can be used, which allow you to perform operations outside of the range of the Number primitive, so now your code would become:
let a = 596873718249029632n;
a ^= 454825669n;
console.log(a.toString());
In response to your edit, when working with integers and Number, you need to ensure that your values do not exceed Number.MAX_SAFE_INTEGER (equal to 2^53 - 1; beyond that point, double-precision floating-point numbers lose the precision needed to represent every integer), which is why the values are passed to bigInt as strings below. The following snippet worked for me:
var bigInt = require("big-integer");
let xor = bigInt("596873718249029632").xor("454825669");
console.log(xor.toString());
Related
Trying some bit manipulation in JavaScript.
Consider the following:
const n = 4393751543811;
console.log(n.toString(2)) // '111111111100000000000000000000000000000011'
console.log(n & 0b11) // last two bits equal 3
const m = n >> 2; // right shift 2
// The unexpected.
console.log(m.toString(2)) // '0'
The result is 0? The expected output I am looking for after the right shift is:
111111111100000000000000000000000000000011 // pre
001111111111000000000000000000000000000000 // post >>
How is this accomplished?
JavaScript bitwise operators on numbers work "as if" on 32-bit integers.
>> (the sign-propagating right shift for Numbers) will first convert its operand to a 32-bit integer. If you read the linked spec, note specifically:
Let int32bit be int modulo 2^32.
In other words, all bits above 32 will simply be ignored. For your number, this results in the following:
111111111100000000000000000000000000000011
┗removed━┛┗━━━━━━━━━━━━━━32bit━━━━━━━━━━━━━┛
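The same truncation can be seen without the shift; | 0 applies the identical ToInt32 conversion:
console.log((4393751543811 | 0).toString(2)); // '11': only the low 32 bits (value 3) survive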
If you want, you can use BigInt:
const n = 4393751543811n; // note the n-suffix
console.log(n.toString(2))
console.log(n & 0b11n) // for BigInt, all operands must be BigInt
const m = n >> 2n;
// The expected.
console.log(m.toString(2))
The spec for >> on BigInt uses BigInt::leftShift(x, -y), which in turn states:
Semantics here should be equivalent to a bitwise shift, treating the BigInt as an infinite length string of binary two's complement digits.
I am trying to replicate some JavaScript code in Python, and for some reason the XOR operator (^) in JavaScript gives me a different value than the XOR operator (^) in Python. I have an example below. I know the values should be different because of Math.random(), but why is the Python result about 4 digits longer?
Javascript:
console.log(Math.floor(2147483648 * Math.random()) ^ 1560268851466)
= 1596700165
Python:
import math
import random
math.floor(2147483648 * random.random()) ^ 1560268851466
= 1559124407072
Your Python result is correct, given XOR's input bits. Your longer operand is on the order of 2^40, and so is your final result.
The JavaScript result has been truncated to 32 bits, like the shorter operand.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Bitwise_Operators:
Bitwise operators treat their operands as a sequence of 32 bits (zeroes and ones), rather than as decimal, hexadecimal, or octal numbers. For example, the decimal number nine has a binary representation of 1001. Bitwise operators perform their operations on such binary representations, but they return standard JavaScript numerical values.
However, the particular code you are using can be "fixed" by XOR-ing the 32-bit part of your number and simply adding the rest:
// 1560268851466 = 0x16B_4745490A
console.log( (Math.floor(2147483648 * Math.random()) ^ 0x4745490A) + 0x16B00000000);
(As 2147483648 is 0x80000000, the random part is "fine"; it does not get truncated.)
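To see the truncation without the randomness, XOR-ing the large constant with 0 already shows that only its low 32 bits take part:
console.log(0 ^ 1560268851466);          // 1195723018, i.e. 0x4745490A
console.log(1560268851466 % 4294967296); // 1195723018 (the high part, 0x16B, is discarded)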
I am trying to understand JavaScript bitwise operators and came across 2 statements with seemingly similar functionality, and I am trying to understand the difference. So, what is the difference between these 2 lines of code in JavaScript?
For a number x,
x >>>= 0;
x &= 0x7fffffff;
If I understand it correctly, they both should give an unsigned 32-bit output. However, for the same negative value of x (i.e. the most significant bit is 1 in both cases), I get different outputs. What am I missing?
Thanks
To truncate a number to 32 bits, the simplest and most common method is to use the "|" bitwise operator:
x |= 0;
JavaScript always considers the result of any 32-bit computation to be negative if the highest bit (bit 31) is set. Don't let that bother you. And don't clear bit 31 in an attempt to make it positive; that incorrectly alters the value.
To convert a negative 32-bit number to a positive value (a value in the range 0 to 4294967295), you can do this:
x = x < 0 ? x + 0x100000000 : x;
By adding a 33-bit value, automatic sign-extension of bit 31 is inhibited. However, the result is now outside the signed 32-bit range.
Another (tidier) solution is to use the unsigned right-shift operator with a zero shift count:
x >>>= 0;
Technically, all JavaScript numbers are 64-bit floating-point values, but in reality, as long as you keep numbers within the signed 32-bit range, you make it possible for JavaScript runtimes to optimize your code using 32-bit integer operations.
Be aware that when you convert a negative 32-bit value to a positive value using either of above methods, you have essentially produced a 33-bit value, which may defeat any 32-bit optimizations your JavaScript engine uses.
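To see why the two lines in the question differ for a negative input, here is a minimal comparison, taking x = -5 as an example:
let x = -5;                  // 32-bit pattern 0xFFFFFFFB
console.log(x >>> 0);        // 4294967291: all 32 bits are kept and read as unsigned
console.log(x & 0x7fffffff); // 2147483643: bit 31 is cleared, which changes the value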
In JavaScript, the following test of character-to-character binary operations prints 0 676 times:
var s = 'abcdefghijklmnopqrstuvwxyz';
var i, j;
for (i = 0; i < s.length; i++) {
  for (j = 0; j < s.length; j++) {
    console.log(s[i] | s[j]);
  }
}
If js was using the actual binary representation of the strings I would expect some non-zero values here.
Similarly, testing binary operations on strings and integers, the following loops print 255 and 0, respectively, 26 times each (255 was chosen because it is 11111111 in binary):
var s = 'abcdefghijklmnopqrstuvwxyz';
var i; for(i=0; i<s.length;i++){ console.log(s[i] | 255) }
var i; for(i=0; i<s.length;i++){ console.log(s[i] & 255) }
What is JavaScript doing here? It seems like JavaScript is casting any string to false before binary operations.
Notes
If you try this in Python, it throws an error:
>>> s = 'abcdefghijklmnopqrstuvwxyz'
>>> [c1 | c2 for c2 in s for c1 in s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for |: 'str' and 'str'
But stuff like this seems to work in PHP.
In JavaScript, when a string is used with a bitwise operator it is first converted to a number. The relevant portions of the ECMAScript spec are shown below to explain how this works.
Bitwise operators:
The production A : A # B, where # is one of the bitwise operators in the productions above, is evaluated as follows:
Let lref be the result of evaluating A.
Let lval be GetValue(lref).
Let rref be the result of evaluating B.
Let rval be GetValue(rref).
Let lnum be ToInt32(lval).
Let rnum be ToInt32(rval).
Return the result of applying the bitwise operator # to lnum and rnum. The result is a signed 32 bit integer.
ToInt32:
The abstract operation ToInt32 converts its argument to one of 2^32 integer values in the range −2^31 through 2^31 − 1, inclusive. This abstract operation functions as follows:
Let number be the result of calling ToNumber on the input argument.
If number is NaN, +0, −0, +∞, or −∞, return +0.
Let posInt be sign(number) * floor(abs(number)).
Let int32bit be posInt modulo 2^32; that is, a finite integer value k of Number type with positive sign and less than 2^32 in magnitude such that the mathematical difference of posInt and k is mathematically an integer multiple of 2^32.
If int32bit is greater than or equal to 2^31, return int32bit − 2^32, otherwise return int32bit.
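As a rough sketch (the helper name toInt32 is purely illustrative, not a real API), those steps can be written out in plain JavaScript:
function toInt32(value) {
  var number = Number(value);                       // ToNumber
  if (!isFinite(number) || number === 0) return 0;  // NaN, +0, -0, +Infinity, -Infinity give +0
  var posInt = (number < 0 ? -1 : 1) * Math.floor(Math.abs(number));
  var int32bit = posInt % 4294967296;               // posInt modulo 2^32
  if (int32bit < 0) int32bit += 4294967296;         // keep the modulo result non-negative
  return int32bit >= 2147483648 ? int32bit - 4294967296 : int32bit;
}
console.log(toInt32('a')); // 0 (ToNumber('a') is NaN)
console.log(toInt32('7')); // 7 (the string parses as a number)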
The internal ToNumber function will return NaN for any string that cannot be parsed as a number, and ToInt32(NaN) will give 0. So in your code example all of the bitwise operators with letters as the operands will evaluate to 0 | 0, which explains why only 0 is printed.
Note that something like '7' | '8' will evaluate to 7 | 8, because in this case the strings used as the operands can be successfully converted to numbers.
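A quick console check shows both cases:
console.log('a' | 'b'); // 0 (both operands become NaN, and ToInt32(NaN) is 0)
console.log('7' | '8'); // 15 (the numeric strings become 7 and 8)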
As for why the behavior in Python is different, there isn't really any implicit type conversion in Python so an error is expected for any type that doesn't implement the binary operators (by using __or__, __and__, etc.), and strings do not implement those binary operators.
Perl does something completely different: bitwise operators are implemented for strings, and it will essentially perform the bitwise operator on the corresponding bytes of each string.
If you want to use JavaScript and get the same result as Perl, you will need to first convert the characters to their code points using str.charCodeAt, perform the bitwise operator on the resulting integers, and then use String.fromCodePoint to convert the resulting numeric values into characters.
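A minimal sketch of that approach, assuming both strings have the same length (the helper name orStrings is purely illustrative):
function orStrings(a, b) {
  var out = '';
  for (var i = 0; i < a.length; i++) {
    // OR the code points of the corresponding characters, then convert back to a character
    out += String.fromCodePoint(a.charCodeAt(i) | b.charCodeAt(i));
  }
  return out;
}
console.log(orStrings('abc', 'ABC')); // 'abc' (the lowercase bit, 0x20, is already set)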
I'd be surprised if JavaScript worked at all with bitwise operations on non-numerical strings and produced anything meaningful. I'd imagine that, because any bitwise operator in JavaScript converts its operands into 32-bit integers, it would simply turn all non-numerical strings into 0.
I'd use...
"a".charCodeAt(0) & 0xFF
That produces 97, the ASCII code for "a", which is correct, given it's masked with a byte that has all bits set.
Try to remember that just because things work nicely in other languages, that isn't always the case in JavaScript. We're talking about a language conceived and implemented in a very short amount of time.
JavaScript uses type coercion, which allows it to attempt to parse the strings as numbers automatically when you try to perform a numeric operation on them. The parsed value is either 0 or, more likely, NaN. This obviously won't get you the information you're trying to get.
I think what you're looking for is charCodeAt, which gives you the numeric Unicode value of a character in a string, and possibly the complementary String.fromCodePoint, which converts a numeric value back into a character.