Why this bitwise operation is failing in JavaScript

Someone please shed light on this: when doing a bitwise operation in JavaScript I get:
65527 | 34359738368 => 65527
Is it possible to handle this in JavaScript?
From mysql command line:
select 65527|34359738368 ;
+-------------------+
| 65527|34359738368 |
+-------------------+
|       34359803895 |
+-------------------+
And more importantly, it's less than 2^36:
select (65527|34359738368)< pow(2,36);
+--------------------------------+
| (65527|34359738368)< pow(2,36) |
+--------------------------------+
|                              1 |
+--------------------------------+
What I read from this SO question is that integers in JavaScript support values up to 2^53. I might be missing something.

You linked to the answer yourself:
Note that the bitwise operators and shift operators operate on 32-bit ints.
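A quick console check makes the truncation visible: 34359738368 is 2^35, and converting it to a 32-bit integer keeps only the low 32 bits, which are all zero here:
console.log(34359738368 | 0);     // 0 -- the operand is truncated to 32 bits first
console.log(65527 | 34359738368); // 65527 -- effectively 65527 | 0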

As Tim has already pointed out, bitwise operations in JavaScript use 32-bit numbers. One solution (and probably the easiest) is to use a bignum library that supports bitwise operations, such as this one: https://www.npmjs.org/package/bignum.
Another way to do it would be to break the number into words, and do the operations on the words separately, old-school style:
var a = 65527;
var b = 34359738368;
var wordSize = 4294967296; // 2^32
var ah = Math.floor(a/wordSize);
var al = a - ah*wordSize;
var bh = Math.floor(b/wordSize);
var bl = b - bh*wordSize;
var xh = ah | bh;
var xl = al | bl;
var x = xh*wordSize + xl;
All we're doing is breaking the two operands into two words (high and low), doing the operations on the words, yielding our result (x) as a high and a low word, then recombining them to get the full number.
You could, of course, bundle this into a neat function:
function or64(a, b) {
  var w64 = 18446744073709552000; // 2^64
  var w32 = 4294967296; // 2^32
  if (a > w64 || b > w64)
    throw new Error('operands cannot exceed 64 bits');
  var ah = Math.floor(a / w32);
  var al = a - ah * w32;
  var bh = Math.floor(b / w32);
  var bl = b - bh * w32;
  return (ah | bh) * w32 + (al | bl);
}
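For instance, with the numbers from the question this matches the MySQL result:
or64(65527, 34359738368); // 34359803895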

Now (in year 2020) you can also use BigInt which since ES2020 is a standard built-in object. BigInt can be used for arbitrarily large integers.
The easiest way to convert a standard Number to BigInt in JavaScript is to append "n" to it. Like this:
65527n|34359738368n => 34359803895n
See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt for more information
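If your inputs start out as ordinary Numbers, a small helper (a sketch, not part of the original answer) can do the conversion both ways; note that converting back with Number() is only exact while the result stays within 2^53:
// Hypothetical helper: OR two non-negative integers via BigInt.
function orBig(a, b) {
  return Number(BigInt(a) | BigInt(b)); // exact only while the result fits in 53 bits
}
orBig(65527, 34359738368); // 34359803895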

Related

How do bitwise AND, OR and XOR work on negative signed integers?

I was just solving random problems on bitwise operators and trying various combinations for my personal notes, and somehow I just cannot figure out the solution.
Say I want to check the bitwise AND of two integers, or of a NOT-ed number and a negative number (~num1 & -num2), and various other combos. I can see the answer, but I haven't been able to establish how it happened.
Console:
console.log(25 & 3); outputs 1 (I can solve this easily).
console.log(-25 & -3); outputs -27.
Similarly
console.log(~25 & ~3); outputs 28.
console.log(25 & ~3); outputs -24.
console.log(~25 & 3); outputs -2.
console.log(~25 & -3); outputs -28.
console.log(-25 & ~3); outputs -28.
I know the logic behind console.log(25 & -3).
25 is 11001
-3 is 11101 (3 = 00011; the minus works like two's complement: invert the bits and add 1)
AND: 11001 = 25.
But I cannot make it work the same way when both numbers are negative, or in the other cases mentioned above. I have tried various combinations of numbers too, not just these two, but I cannot solve the problem. Can somebody explain the binary logic used in the problems I cannot solve?
(I've spent about 2 hrs here on SO to find the answer and another 1 hr+ on Google, but I still haven't found the answer.)
Thanks and Regards.
JavaScript specifies that bitwise operations on integers are performed as though they were stored in two's-complement notation. Fortunately, most computer hardware nowadays uses this notation natively anyway.
For brevity's sake I'm going to show the following numbers as 8-bit binary. They're actually 32-bit in JavaScript, but for the numbers in the original question, this doesn't change the outcome. It does, however, let us drop a whole lot of leading bits.
console.log(-25 & -3); //outputs -27. How?
If we write the integers in binary, we get (11100111 & 11111101) respectively. AND those together and you get 11100101, which is -27.
In your later examples, you seem to be using the NOT operator (~) and negation (-) interchangeably. You can't do that in two's complement: ~ and - are not the same thing. ~25 is 11100110, which is -26, not -25. Similarly, ~3 is 11111100, which is -4, not -3.
But when we put these together, we can work out the examples you gave.
console.log(~25 & ~3); // outputs 28. How?
11100110 & 11111100 = 11100100, which is -28 (not 28, as you wrote)
console.log(25 & ~3); // outputs -24. How?
00011001 & 11111100 = 00011000, which is 24
console.log(~25 & 3); // outputs -2. How?
11100110 & 00000011 = 00000010, which is 2
console.log(~25 & -3); // outputs -28. How?
11100110 & 11111101 = 11100100, which is -28
console.log(-25 & ~3); // outputs -28. How?
11100111 & 11111100 = 11100100, which is -28
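If you want to check these bit patterns yourself, an unsigned right shift by zero (>>> 0) reinterprets a value as unsigned 32-bit, so toString(2) then shows the full two's-complement pattern:
(-25 >>> 0).toString(2);        // "...11100111" (32 bits; the low byte matches -25 above)
(~3 >>> 0).toString(2);         // "...11111100"
((-25 & ~3) >>> 0).toString(2); // "...11100100", i.e. -28 when read back as signed 32-bit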
The real key to understanding this is that you don't really use bitwise operations on integers. You use them on bags of bits of a certain size, and these bags of bits happen to be conveniently representable as integers. This is key to understanding what's going on here, because you've stumbled across a case where the difference matters.
There are specific circumstances in computer science where you can manipulate bags of bits in ways that, by coincidence, give the same results as if you'd done particular mathematical operations on numbers. But this only works in specific circumstances, and they require you to assume certain things about the numbers you're working on, and if your numbers don't fit those assumptions, things break down.
This is one of the reasons Donald Knuth said "premature optimization is the root of all evil". If you want to use bitwise operations in place of actual integer math, you have to be absolutely certain that your inputs will actually follow the assumptions required for that trick to work. Otherwise, the results will start looking strange when you start using inputs outside of those assumptions.
25 = 16+8+1 = 0b011001. I've added another 0 digit as the sign digit; in practice you'll have at least 8 binary digits,
but the two's complement math is the same. To get -25 in 6-bit two's complement, you'd do -25 = ~25 + 1 = 0b100111.
3 = 2+1 = 0b000011; -3 = ~3 + 1 = 0b111101
When you & the two, you get:
-25 = ~25 + 1 = 0b100111
&
-3 = ~3 + 1 = 0b111101
______________________
0b100101
The leftmost bit (sign bit) is set, so it's a negative number. To find what it's the negative of, you reverse the process: first subtract 1 and then apply ~.
~(0b100101 - 1) = 0b011011
That's 1+2+8+16 = 27, so -25 & -3 = -27.
For 25 & ~3, it's:
25 = 16+8+1 = 0b011001
& ~3 = 0b111100
______________________
0b011000 = 24
For ~25 & 3, it's:
~25 = 0b100110
& 3 = 0b000011
______________________
0b000010 = 2
For ~25 & -3, it's:
~25 = 0b100110
& -3 = ~3 + 1 = 0b111101
______________________
0b100100  # negative
# find what it's the negative of:
~(0b100100 - 1) = ~0b100011 = 0b011100 = 4+8+16 = 28
so 0b100100 = -28
-27 has 6 binary digits in it, so you should be using numbers with at least that many digits. With 8-bit numbers we have:
00011001 = 25
00000011 = 3
00011011 = 27
and:
11100111 = -25
11111101 = -3
11100101 = -27
Now -25 & -3 = -27 because 11100111 & 11111101 = 11100101
The binary string representation of a 32 bit integer can be found with:
(i >>> 0).toString(2).padStart(32, '0')
The bitwise anding of two binary strings is straightforward
The integer value of a signed, 32 bit binary string is either
parseInt(bitwiseAndString, 2)
if the string starts with a '0', or
-~parseInt(bitwiseAndString, 2) - 1
if it starts with a '1'
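The second expression works because ~ coerces its operand to a signed 32-bit integer before inverting it, which wraps the large unsigned value returned by parseInt back into the negative range. A quick check:
const s = (-3 >>> 0).toString(2);  // the 32-bit two's-complement pattern of -3
parseInt(s, 2);                    // 4294967293 -- the unsigned interpretation
-~parseInt(s, 2) - 1;              // -3 -- ~ wraps it back into the signed 32-bit range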
Putting all that together:
const tests = [
  ['-25', '-3'],
  ['~25', '-3'],
  ['25', '~3'],
  ['~25', '3'],
  ['~25', '~3'],
  ['-25', '~3']
];

const output = (s, t) => { console.log(`${`${s}:`.padEnd(20, ' ')}${t}`); };

const bitwiseAnd = (i, j) => {
  console.log(`Calculating ${i} & ${j}`);
  const bitStringI = (eval(i) >>> 0).toString(2).padStart(32, '0');
  const bitStringJ = (eval(j) >>> 0).toString(2).padStart(32, '0');
  output(`bit string for ${i}`, bitStringI);
  output(`bit string for ${j}`, bitStringJ);
  const bitArrayI = bitStringI.split('');
  const bitArrayJ = bitStringJ.split('');
  const bitwiseAndString = bitArrayI.map((s, idx) => s === '1' && bitArrayJ[idx] === '1' ? '1' : '0').join('');
  output('bitwise and string', bitwiseAndString);
  const intValue = bitwiseAndString[0] === '1' ? -~parseInt(bitwiseAndString, 2) - 1 : parseInt(bitwiseAndString, 2);
  if (intValue === (eval(i) & eval(j))) {
    console.log(`integer value: ${intValue} ✓`);
  } else {
    console.error(`calculation failed: ${intValue} !== ${eval(i) & eval(j)}`);
  }
};

tests.forEach(([i, j]) => { bitwiseAnd(i, j); });

There is no real float type in Node.js/V8?

I'm trying to store a float value in a Buffer in Node.js:
> f = 3.3
3.3
> var buf = new Buffer(32)
> buf.writeFloatBE(f);
4
> g = buf.readFloatBE();
3.299999952316284
Then I found the stored value g read back with readFloatBE() is NOT equal to the original f.
After further investigation, the buffers produced by writing f and g are the same:
> var buf1 = new Buffer(4); buf1.writeFloatBE(f); buf1
<Buffer 40 53 33 33>
> var buf2 = new Buffer(4); buf2.writeFloatBE(g); buf2
<Buffer 40 53 33 33>
According to this (Buffer reading and writing floats), we know writeDoubleBE should be used here:
> var buf3 = new Buffer(8);
> buf3.writeDoubleBE(f);
8
> h = buf3.readDoubleBE();
3.3
> h === f
true
I want to know why a float type is not used in Node.js or V8. Refer to this code from V8:
// Fast primitive setters
V8_INLINE void Set(bool value);
V8_INLINE void Set(double i);
V8_INLINE void Set(int32_t i);
V8_INLINE void Set(uint32_t i);
It seems there is NO float type in V8. Is there any reason for this design, or am I missing something? And in which cases should the function writeFloatBE() be used?
It seems there is NO float type in V8
Yes, and this is by design: There is no float type in JavaScript either. All numbers are doubles as specified by the ECMAScript standard.
Your number f is therefore 3.3 with double precision, while g only has float precision. As you can see, that's not the same double as f. The same would happen if you used one of the buf.writeInt… methods; the result after reading it back would be only 3, not 3.3.
In which case, should the function writeFloatBE() be used?
Yes, of course it should be used, whenever you want to store numbers with float precision in a buffer. If you want to store f with full precision, use writeDoubleBE instead.
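As a side note (not part of the original answer): Math.fround rounds a Number to the nearest 32-bit float, so you can see the same precision loss without going through a Buffer:
Math.fround(3.3);          // 3.299999952316284 -- the value readFloatBE() returned
Math.fround(3.3) === 3.3;  // false: 3.3 has no exact 32-bit float representation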

JavaScript - Convert a four-character array to an integer

How can I convert a four-character array to an integer?
You're trying to turn those characters into ASCII character codes and use the codes as byte values. This can be done using charCodeAt. For instance:
var str = "x7={";
var result = ( str.charCodeAt(0) << 24 )
+ ( str.charCodeAt(1) << 16 )
+ ( str.charCodeAt(2) << 8 )
+ ( str.charCodeAt(3) );
This returns 2016886139 as expected.
However, bear in mind that unlike C++, JavaScript will not necessarily use a one-byte, 256-character set. For instance, '€'.charCodeAt(0) returns 8364, well beyond the maximum of 255 that your equivalent C++ program would allow. As such, any character outside the 0-255 range will cause the above code to behave erratically.
Using Unicode, you can represent the above as "砷㵻" instead.
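If you want the byte handling to be explicit, a DataView over a 4-byte buffer does the same packing; this is a sketch of an alternative, assuming each character code fits in one byte (larger codes are silently reduced by setUint8):
function charsToInt(str) {
  const view = new DataView(new ArrayBuffer(4));
  for (let i = 0; i < 4; i++) {
    view.setUint8(i, str.charCodeAt(i)); // codes above 255 are reduced modulo 256 here
  }
  return view.getUint32(0); // read back as a big-endian 32-bit unsigned integer
}
charsToInt("x7={"); // 2016886139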
var arr = [5,2,4,0],
foo = +arr.join('');
console.log(foo, typeof foo);
I guess that depends on how you want to map the character values to the integer's bits.
One straightforward solution would be:
var myArray = ['1', '2', '3', '4']
var myInt = (myArray[0].charCodeAt(0) << 24) | (myArray[1].charCodeAt(0) << 16) | (myArray[2].charCodeAt(0) << 8) | myArray[3].charCodeAt(0);
With an array of the plain integers 1, 2, 3, 4 (and no charCodeAt calls) this would produce the integer 0x01020304; with character input the result depends on the code points, so the ['1', '2', '3', '4'] shown above gives 0x31323334.
Update: use charCodeAt() to convert characters to code points.
var chArr = ['1','2','3','4'];
var num = parseInt( chArr.join(''), 10);
or
var num = parseInt( chArr.reverse().join(''), 10);
depending on the order your array is filled.

Is there "0b" or something similar to represent a binary number in Javascript

I know that 0x is a prefix for hexadecimal numbers in Javascript. For example, 0xFF stands for the number 255.
Is there something similar for binary numbers? I would expect 0b1111 to represent the number 15, but this doesn't work for me.
Update:
Newer versions of JavaScript -- specifically ECMAScript 6 -- have added support for binary (prefix 0b), octal (prefix 0o) and hexadecimal (prefix: 0x) numeric literals:
var bin = 0b1111; // bin will be set to 15
var oct = 0o17; // oct will be set to 15
var oxx = 017; // oxx will be set to 15
var hex = 0xF; // hex will be set to 15
// note: bB oO xX are all valid
This feature is already available in Firefox and Chrome. It's not currently supported in IE, but apparently will be when Spartan arrives.
(Thanks to Semicolon's comment and urish's answer for pointing this out.)
Original Answer:
No, there isn't an equivalent for binary numbers. JavaScript only supports numeric literals in decimal (no prefix), hexadecimal (prefix 0x) and octal (prefix 0) formats.
One possible alternative is to pass a binary string to the parseInt method along with the radix:
var foo = parseInt('1111', 2); // foo will be set to 15
In ECMAScript 6 this will be supported as a part of the language, i.e. 0b1111 === 15 is true. You can also use an uppercase B (e.g. 0B1111).
Look for NumericLiterals in the ES6 Spec.
I know people say that extending prototypes is not a good idea, but since it's your script...
I do it this way:
Object.defineProperty(Number.prototype, 'b', {
  set: function() {
    return false;
  },
  get: function() {
    return parseInt(this, 2);
  }
});

100..b // returns 4
11111111..b // returns 255
10..b+1 // returns 3
// and so on
If your primary concern is display rather than coding, there's a built-in conversion system you can use:
var num = 255;
document.writeln(num.toString(16)); // Outputs: "ff"
document.writeln(num.toString(8)); // Outputs: "377"
document.writeln(num.toString(2)); // Outputs: "11111111"
Ref: MDN on Number.prototype.toString
As far as I know it is not possible to use a binary denoter in JavaScript. I have three solutions for you, all of which have their issues. I think alternative 3 is the most "good looking" for readability, and it is possibly much faster than the rest, except for its initial run-time cost. The problem is that it only supports values up to 255.
Alternative 1: "00001111".b()
String.prototype.b = function() { return parseInt(this,2); }
Alternative 2: b("00001111")
function b(i) { if(typeof i=='string') return parseInt(i,2); throw "Expects string"; }
Alternative 3: b00001111
This version allows you to type either 8-digit binary (b00000000), 4-digit (b0000), or variable-length (b0) values. That is, b01 is illegal; you have to use b0001 or b1.
String.prototype.lpad = function(padString, length) {
  var str = this;
  while (str.length < length)
    str = padString + str;
  return str;
}

for (var i = 0; i < 256; i++)
  window['b' + i.toString(2)] =
    window['b' + i.toString(2).lpad('0', 8)] =
    window['b' + i.toString(2).lpad('0', 4)] = i;
Maybe this will be useful:
var bin = 1111;
var dec = parseInt(bin, 2);
// 15
No, but you can use parseInt and optionally omit the quotes.
parseInt(110, 2); // this is 6
parseInt("110", 2); // this is also 6
The only disadvantage of omitting the quotes is that, for very large numbers, you will overflow faster:
parseInt(10000000000000000000000, 2); // this gives 1
parseInt("10000000000000000000000", 2); // this gives 4194304
I know this does not actually answer the question asked (which was already answered several times) as is; however, I suggest that you (or others interested in this subject) consider the fact that the most readable and backwards/future/cross-browser-compatible way would be to just use the hex representation.
From the phrasing of the question it would seem that you are only talking about using binary literals in your code, and not about processing binary representations of numeric values (for which parseInt is the way to go).
I doubt that there are many programmers who need to handle binary numbers and are not familiar with the mapping of 0-F to 0000-1111.
So basically, make groups of four and use hex notation.
So instead of writing 101000000010 you would use 0xA02, which has exactly the same meaning and is far more readable and less likely to have errors.
Just consider readability. Try comparing which of these is bigger:
10001000000010010 or 1001000000010010
and what if I write them like this:
0x11012 or 0x9012
Convert binary strings to numbers and vice versa.
var b = function(n) {
  if (typeof n === 'string')
    return parseInt(n, 2);
  else if (typeof n === 'number')
    return n.toString(2);
  throw "unknown input";
};
Using the Number() function works...
// using Number()
var bin = Number('0b1111'); // bin will be set to 15
var oct = Number('0o17'); // oct will be set to 15
var oxx = Number('0xF'); // oxx will be set to 15
// making function convTo
const convTo = (prefix, n) => {
  return Number(`${prefix}${n}`) // prefix is '0b', '0o' or '0x'; n holds the digits
}
console.log(bin)
console.log(oct)
console.log(oxx)
// Using convTo function
console.log(convTo('0b',1111))

bitwise AND in Javascript with a 64 bit integer

I am looking for a way of performing a bitwise AND on a 64 bit integer in JavaScript.
JavaScript will cast all of its double values into signed 32-bit integers to do the bitwise operations (details here).
Javascript represents all numbers as 64-bit double precision IEEE 754 floating point numbers (see the ECMAscript spec, section 8.5.) All positive integers up to 2^53 can be encoded precisely. Larger integers get their least significant bits clipped. This leaves the question of how can you even represent a 64-bit integer in Javascript -- the native number data type clearly can't precisely represent a 64-bit int.
The following illustrates this. Although javascript appears to be able to parse hexadecimal numbers representing 64-bit numbers, the underlying numeric representation does not hold 64 bits. Try the following in your browser:
<html>
<head>
  <script language="javascript">
    function showPrecisionLimits() {
      document.getElementById("r50").innerHTML = 0x0004000000000001 - 0x0004000000000000;
      document.getElementById("r51").innerHTML = 0x0008000000000001 - 0x0008000000000000;
      document.getElementById("r52").innerHTML = 0x0010000000000001 - 0x0010000000000000;
      document.getElementById("r53").innerHTML = 0x0020000000000001 - 0x0020000000000000;
      document.getElementById("r54").innerHTML = 0x0040000000000001 - 0x0040000000000000;
    }
  </script>
</head>
<body onload="showPrecisionLimits()">
  <p>(2^50+1) - (2^50) = <span id="r50"></span></p>
  <p>(2^51+1) - (2^51) = <span id="r51"></span></p>
  <p>(2^52+1) - (2^52) = <span id="r52"></span></p>
  <p>(2^53+1) - (2^53) = <span id="r53"></span></p>
  <p>(2^54+1) - (2^54) = <span id="r54"></span></p>
</body>
</html>
In Firefox, Chrome and IE I'm getting the following. If the numbers were stored in their full 64-bit glory, the result should have been 1 for all the subtractions. Instead, you can see how the difference between 2^53+1 and 2^53 is lost.
(2^50+1) - (2^50) = 1
(2^51+1) - (2^51) = 1
(2^52+1) - (2^52) = 1
(2^53+1) - (2^53) = 0
(2^54+1) - (2^54) = 0
So what can you do?
If you choose to represent a 64-bit integer as two 32-bit numbers, then applying a bitwise AND is as simple as applying 2 bitwise AND's, to the low and high 32-bit 'words'.
For example:
var a = [ 0x0000ffff, 0xffff0000 ];
var b = [ 0x00ffff00, 0x00ffff00 ];
var c = [ a[0] & b[0], a[1] & b[1] ];
document.body.innerHTML = c[0].toString(16) + ":" + c[1].toString(16);
gets you:
ff00:ff0000
Here is code to AND int64 numbers; you can replace AND with any other bitwise operation:
function and(v1, v2) {
  var hi = 0x80000000;
  var low = 0x7fffffff;
  var hi1 = ~~(v1 / hi);
  var hi2 = ~~(v2 / hi);
  var low1 = v1 & low;
  var low2 = v2 & low;
  var h = hi1 & hi2;
  var l = low1 & low2;
  return h * hi + l;
}
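For example, reusing the numbers from the first question on this page (my own check, not part of the original answer):
and(34359803895, 34359738368); // 34359738368 -- only the 2^35 bit is set in both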
This can now be done with the new BigInt built-in numeric type. BigInt is currently (July 2019) only available in certain browsers, see the following link for details:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt
I have tested bitwise operations using BigInts in Chrome 67 and can confirm that they work as expected with up to 64 bit values.
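A minimal sketch of a 64-bit AND with BigInt (my own example, not from the answer); BigInt.asUintN(64, ...) is only needed if you want the result clamped to unsigned 64-bit wrap-around semantics:
const a = 0xffffffff00000001n;
const b = 0x00000000ffffffffn;
const anded = BigInt.asUintN(64, a & b); // 1n -- only the lowest bit is set in both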
Javascript doesn't support 64 bit integers out of the box. This is what I ended up doing:
Found long.js, a self contained Long implementation on github.
Convert the string value representing the 64 bit number to a Long.
Extract the high and low 32 bit values
Do a 32 bit bitwise and between the high and low bits, separately
Initialise a new 64 bit Long from the low and high bit
If the number is > 0 then there is correlation between the two numbers
Note: for the code example below to work you need to load long.js.
// Handy to output leading zeros to make it easier to compare the bits when outputting to the console
function zeroPad(num, places) {
  var zero = places - num.length + 1;
  return Array(+(zero > 0 && zero)).join('0') + num;
}

// 2^3 = 8
var val1 = Long.fromString('8', 10);
var val1High = val1.getHighBitsUnsigned();
var val1Low = val1.getLowBitsUnsigned();

// 2^61 + 2^3 = 2305843009213693960
var val2 = Long.fromString('2305843009213693960', 10);
var val2High = val2.getHighBitsUnsigned();
var val2Low = val2.getLowBitsUnsigned();

console.log('2^3 & (2^3 + 2^61)');
console.log(zeroPad(val1.toString(2), 64));
console.log(zeroPad(val2.toString(2), 64));

var bitwiseAndResult = Long.fromBits(val1Low & val2Low, val1High & val2High, true);
console.log(bitwiseAndResult);
console.log(zeroPad(bitwiseAndResult.toString(2), 64));

console.log('Correlation between val1 and val2 ?');
console.log(bitwiseAndResult > 0);
Console output:
2^3
0000000000000000000000000000000000000000000000000000000000001000
2^3 + 2^61
0010000000000000000000000000000000000000000000000000000000001000
2^3 & (2^3 + 2^61)
0000000000000000000000000000000000000000000000000000000000001000
Correlation between val1 and val2?
true
The Closure library has goog.math.Long with a bitwise and() method.
Unfortunately, the accepted answer (and others) appears not to have been adequately tested. Confronted by this problem recently, I initially tried to split my 64-bit numbers into two 32-bit numbers as suggested, but there's another little wrinkle.
Open your JavaScript console and enter:
0x80000001
When you press Enter, you'll obtain 2147483649, the decimal equivalent. Next try:
0x80000001 & 0x80000003
This gives you -2147483647, not quite what you expected. It's clear that in performing the bitwise AND, the numbers are treated as signed 32-bit integers. And the result is wrong. Even if you negate it.
My solution was to apply ~~ to the 32-bit numbers after they were split off, check for a negative sign, and then deal with this appropriately.
This is clumsy. There may be a more elegant 'fix', but I can't see it on quick examination. There's a certain irony that something that can be accomplished by a couple of lines of assembly should require so much more labour in JavaScript.
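One commonly used fix for that sign problem (my own note, not part of the answer above) is to reinterpret the result as unsigned with a zero-fill right shift:
(0x80000001 & 0x80000003) >>> 0; // 2147483649, i.e. 0x80000001, as expected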
