How to represent an integer as a chain of 8-bit numbers - javascript

I am wondering how you can represent an integer as a chain of 8-bit numbers.
That is (I don't know how to actually calculate this, so the example is wrong, but it should demonstrate the basic idea), say you have an integer of arbitrary size, meaning it can be extremely large, like Math.pow(100, 100) ≈ 1e+200, or small, like 1 or 0. You want to be able to compute that number using a chain of 8-bit values. The question is how to do that.
For example, I can represent 255 like this:
[ 255, 1 ]
That is, the first value is 8 bits and the second value is also 8 bits, and we multiply them together: 255 * 1 = 255. Another case would be 256. We could represent that like this:
[ 255, 1, 1 ]
Meaning (255 * 1) + 1 = 256, but now we don't necessarily know what to add vs. what to multiply. Imagine a large number like 1589158915. I have no idea what its array of values would be, but say they ended up being like this:
[ 15, 12, 1, 141, 12, 250 ]
And say that from those numbers we could compute the value like this:
15 * 12 + 1 * 141 / 12 - 250
That wouldn't be ideal, unless perhaps the system also stored the operation alongside each value, like this:
[ 15, 2, 12, 0, 1, 2, 141, 3, 12, 1, 250 ]
But I have no idea where to begin figuring out how to calculate the number like this, or how to say "given some integer, figure out how to break it apart into some equation that will generate its value". That is the basis of the question. I guess that (storing the operation alongside each value) would probably be the best approach.
As a sidenote, you can't take the integer 1589158915 and just represent it like this:
[ 1589158914, 1 ] == (1589158914 + 1)
because 1589158914 is not an 8-bit value. This is why it's tricky. In addition, we don't want to do it like this:
[ 255, 255, 255, 255, ... ]
==
255 + 255 + 255 + 255 + ...
Because that would be very inefficient. The solution should be a compact representation.
So the question is, how to take an integer and create an equation from it that will generate its value, where each component of the equation can be represented as an 8-bit integer.

So, here's the conversion to and from the 256 system:
const To256 = x => {
  const res = [];
  while (x > 0) {
    res.push(x % 256);        // least-significant byte first
    x = Math.floor(x / 256);
  }
  return res;
};
const From256 = x => {
  let res = 0;
  while (x.length) res = 256 * res + x.pop(); // consume from the most-significant end
  return res;
};
var t = 9483593483;
console.log(t); // 9483593483
t = To256(t);
console.log(t); // [ 11, 39, 68, 53, 2 ]
t = From256(t);
console.log(t); // 9483593483
Note that the array is in reverse order (least-significant byte first).

It sounds like you're essentially describing base-256 arithmetic, although in a kind of circuitous way. Given some list of integers [a, b, c, d] you could compute the number they represent as (parentheses added for clarity):
a*(256**3) + b*(256**2) + c*(256**1) + d*(256**0)
The power to use for the 256 base is determined by the position of the integer in the list. Using this scheme, a number like 15809158915 could be represented as:
[3, 174, 76, 159, 3]
# 3*256**4 + 174*256**3 + 76*256**2 + 159*256**1 + 3*256**0
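If it helps to see that mechanically, here is a minimal JavaScript sketch (the function name is just illustrative) that folds such a big-endian digit list back into a number:

// fold a big-endian base-256 digit list into a single number
const fromBase256 = digits => digits.reduce((acc, d) => acc * 256 + d, 0);
console.log(fromBase256([3, 174, 76, 159, 3])); // 15809158915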
This extends to any arbitrarily-sized list. Of course, you could always write out the list of integers in their hexadecimal form instead of base-10:
[0x3, 0xae, 0x4c, 0x9f, 0x03]
Taking things further, you could write the list out as a string with a single 0x prefix to identify the entire string as a hexadecimal number:
0x3ae4c9f03
And voila: you've now duplicated python's hex() function!
>>> hex(15809158915)
'0x3ae4c9f03'
And since python already supports arbitrarily-sized integers, you can represent really, really big numbers this way:
>>> hex(861324987629387561923756912874369128734619287346921367419273)
'0x89378c76db3e9b3e808eea75a42d7ff7d43840e0f41cf43989L'
The "L" at the end of the string indicates that this is a "long" integer, meaning python has engaged its arbitrarily-sized integer support behind the scenes.
In summary, all modern PCs represent numbers internally as a chain of 8-bit integers: 32-bit ints are a chain of four 8-bit integers, 64-bit ints are a chain of eight 8-bit integers, and so on.
If you get a tiny bit fancier, you can represent negative numbers using two's complement, or floating point numbers using IEEE 754. (Note that I'm ignoring endianness for now.)
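You can see those byte chains directly from JavaScript with typed arrays; here is a small sketch (it also makes the endianness caveat concrete, since DataView lets you choose the byte order):

// view the 4 bytes underlying a 32-bit integer
const buf = new ArrayBuffer(4);
new DataView(buf).setInt32(0, 1589158915); // big-endian unless told otherwise
console.log([...new Uint8Array(buf)]);     // [94, 184, 164, 3], i.e. 0x5e, 0xb8, 0xa4, 0x03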

Related

How do bitwise AND, OR and XOR work on negative signed integers?

I was just solving random problems on bitwise operators and trying various combinations for making personal notes, and somehow I just cannot figure out the solution.
Say I wanted to check the bitwise AND between two integers, or between a ~number and a negative number (~num1 & -num2), and various other combos. I can see the answer, but I haven't been able to establish how it happened.
Console:
console.log(25 & 3); outputs 1 (I can solve this easily).
console.log(-25 & -3); outputs -27.
Similarly:
console.log(~25 & ~3); outputs -28.
console.log(25 & ~3); outputs -24.
console.log(~25 & 3); outputs -2.
console.log(~25 & -3); outputs -28.
console.log(-25 & ~3); outputs -28.
I know the logic behind "console.log(25 & -3)".
25 is 11001
-3 is 11101 (3 = 00011; negating is two's complement: invert the bits and add 1)
11001 AND 11101 = 11001 = 25.
But I cannot make it work the same way when both the numbers are negative or with the other cases mentioned above. I have tried various combinations of numbers too, not just these two. But I cannot solve the problem. Can somebody explain the binary logic used in the problems I cannot solve.
(I've spent about 2 hrs here on SO to find the answer and another 1 hr+ on Google, but I still haven't found the answer.)
Thanks and Regards.
JavaScript specifies that bitwise operations on integers are performed as though they were stored in two's-complement notation. Fortunately, most computer hardware nowadays uses this notation natively anyway.
For brevity's sake I'm going to show the following numbers as 8-bit binary. They're actually 32-bit in JavaScript, but for the numbers in the original question, this doesn't change the outcome. It does, however, let us drop a whole lot of leading bits.
console.log(-25 & -3); // outputs -27. How?
If we write the integers in binary, we get 11100111 and 11111101 respectively. AND those together and you get 11100101, which is -27.
In your later examples, you seem to be using the NOT operator (~) and negation (-) interchangeably. You can't do that in two's complement: ~ and - are not the same thing. ~25 is 11100110, which is -26, not -25. Similarly, ~3 is 11111100, which is -4, not -3.
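A quick way to inspect these patterns yourself in a JavaScript console (>>> 0 reinterprets the number as its unsigned 32-bit pattern, so toString(2) shows all the bits):

const bits = n => (n >>> 0).toString(2).padStart(32, '0');
console.log(bits(25));  // 00000000000000000000000000011001
console.log(bits(-25)); // 11111111111111111111111111100111
console.log(bits(~25)); // 11111111111111111111111111100110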
But when we put these together, we can work out the examples you gave.
console.log(~25 & ~3); // outputs -28. How?
11100110 & 11111100 = 11100100, which is -28
console.log(25 & ~3); // outputs -24. How?
00011001 & 11111100 = 00011000, which is 24 (not -24, as you wrote)
console.log(~25 & 3); // outputs -2. How?
11100110 & 00000011 = 00000010, which is 2 (not -2, as you wrote)
console.log(~25 & -3); // outputs -28. How?
11100110 & 11111101 = 11100100, which is -28
console.log(-25 & ~3); // outputs -28. How?
11100111 & 11111100 = 11100100, which is -28
The real key to understanding this is that you don't really use bitwise operations on integers. You use them on bags of bits of a certain size, and these bags of bits happen to be conveniently representable as integers. This is key to understanding what's going on here, because you've stumbled across a case where the difference matters.
There are specific circumstances in computer science where you can manipulate bags of bits in ways that, by coincidence, give the same results as if you'd done particular mathematical operations on numbers. But this only works in specific circumstances, and they require you to assume certain things about the numbers you're working on, and if your numbers don't fit those assumptions, things break down.
This is one of the reasons Donald Knuth said "premature optimization is the root of all evil". If you want to use bitwise operations in place of actual integer math, you have to be absolutely certain that your inputs will actually follow the assumptions required for that trick to work. Otherwise, the results will start looking strange when you start using inputs outside of those assumptions.
25 = 16+8+1 = 0b011001. I've added another 0 digit as the sign digit; in practice you'll have at least 8 binary digits, but the two's complement math is the same. To get -25 in 6-bit two's complement, you do -25 = ~25 + 1 = 0b100111.
Likewise 3 = 2+1 = 0b000011, and -3 = ~3 + 1 = 0b111101.
When you & the two, you get:
  -25 = ~25 + 1 = 0b100111
&  -3 =  ~3 + 1 = 0b111101
______________________
                 0b100101
The leftmost bit (the sign bit) is set, so it's a negative number. To find what it's the negative of, you reverse the process: first subtract 1 and then do ~.
~(0b100101 - 1) = ~0b100100 = 0b011011
That's 1 + 2 + 8 + 16 = 27, so -25 & -3 = -27.
For 25 & ~3, it's:
   25 = 16+8+1 = 0b011001
&  ~3 =          0b111100
______________________
                 0b011000 = 24
For ~25 & 3, it's:
  ~25 = 0b100110
&   3 = 0b000011
______________________
        0b000010 = 2
For ~25 & -3, it's:
  ~25 =      0b100110
&  -3 = ~3+1=0b111101
______________________
             0b100100  # negative
# find what it's the negative of:
~(0b100100 - 1) = ~0b100011 = 0b011100 = 4 + 8 + 16 = 28
so 0b100100 = -28
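If you want to experiment with these fixed-width patterns, here's a minimal sketch (assuming the value fits in the given width, and width ≤ 31):

// render n as a two's-complement bit string of the given width
const toTwosComplement = (n, width) =>
  (n & (2 ** width - 1)).toString(2).padStart(width, '0');
console.log(toTwosComplement(-25, 6));      // 100111
console.log(toTwosComplement(-3, 6));       // 111101
console.log(toTwosComplement(-25 & -3, 6)); // 100101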
-27 needs 6 binary digits, so you should be using numbers with at least that many digits. With 8-bit numbers we have:
00011001 = 25
00000011 = 3
00011011 = 27
and:
11100111 = -25
11111101 = -3
11100101 = -27
Now -25 & -3 = -27 because 11100111 & 11111101 = 11100101
The binary string representation of a 32-bit integer can be found with:
(i >>> 0).toString(2).padStart(32, '0')
The bitwise ANDing of two binary strings is then straightforward.
The integer value of a signed, 32-bit binary string is either
parseInt(bitwiseAndString, 2)
if the string starts with a '0', or
-~parseInt(bitwiseAndString, 2) - 1
if it starts with a '1'.
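Incidentally, that second expression works because ~ first coerces its operand to a signed 32-bit integer; parseInt(bitwiseAndString, 2) | 0 relies on the same coercion and handles both cases in one go.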
Putting all that together:
const tests = [
  ['-25', '-3'],
  ['~25', '-3'],
  ['25', '~3'],
  ['~25', '3'],
  ['~25', '~3'],
  ['-25', '~3']
];
const output = (s, t) => { console.log(`${`${s}:`.padEnd(20, ' ')}${t}`); };
const bitwiseAnd = (i, j) => {
  console.log(`Calculating ${i} & ${j}`);
  // >>> 0 reinterprets the signed 32-bit value as unsigned so toString(2) yields all 32 bits
  const bitStringI = (eval(i) >>> 0).toString(2).padStart(32, '0');
  const bitStringJ = (eval(j) >>> 0).toString(2).padStart(32, '0');
  output(`bit string for ${i}`, bitStringI);
  output(`bit string for ${j}`, bitStringJ);
  const bitArrayI = bitStringI.split('');
  const bitArrayJ = bitStringJ.split('');
  // AND the two bit strings position by position
  const bitwiseAndString = bitArrayI.map((s, idx) => s === '1' && bitArrayJ[idx] === '1' ? '1' : '0').join('');
  output('bitwise and string', bitwiseAndString);
  const intValue = bitwiseAndString[0] === '1' ? -~parseInt(bitwiseAndString, 2) - 1 : parseInt(bitwiseAndString, 2);
  if (intValue === (eval(i) & eval(j))) {
    console.log(`integer value: ${intValue} ✓`);
  } else {
    console.error(`calculation failed: ${intValue} !== ${eval(i) & eval(j)}`);
  }
};
tests.forEach(([i, j]) => { bitwiseAnd(i, j); });

Generate big numbers with Math.random in Javascript

I need to generate 26-digit numbers with Math.random, but when I use this:
Math.floor(Math.random() * 100000000000000000000000000) + 900000000000000000000000000
I get 9.544695043285823e+26
Modern browsers support BigInt and the bigint primitive type, and we can combine it with a randomly generated array containing 8 bytes (64 bits).
1. Generating Random BigInt Performance Wise
We can generate a random hex string of 16 characters length and apply it directly to BigInt:
const hexString = Array(16)
  .fill()
  .map(() => Math.round(Math.random() * 0xF).toString(16))
  .join('');
// note: Math.round makes 0x0 and 0xF half as likely as the other hex digits;
// Math.floor(Math.random() * 0x10) would be uniform
const randomBigInt = BigInt(`0x${hexString}`);
// randomBigInt will contain a random BigInt
document.querySelector('#generate').addEventListener('click', () => {
  const output = [];
  let lines = 10;
  do {
    const hexString = Array(16)
      .fill()
      .map(() => Math.round(Math.random() * 0xF).toString(16))
      .join('');
    const number = BigInt(`0x${hexString}`);
    output.push(`${
      number.toString().padStart(24)
    } : 0x${
      hexString.padStart(16, '0')
    }`);
  } while (--lines > 0);
  document.querySelector('#numbers').textContent = output.join('\n');
});
<button id="generate">Generate</button>
<pre id="numbers"></pre>
2. Generating Random BigInt from a random bytes array
If we want to use a Uint8Array, or if we want more control over the bit manipulation, we can combine Array.prototype.fill with Array.prototype.map to generate an array containing 8 random byte values (beware, this is around 50% slower than the above method):
const randomBytes = Array(8)
  .fill()
  .map(() => Math.round(Math.random() * 0xFF));
// randomBytes will contain something similar to this:
// [129, 59, 98, 222, 20, 7, 196, 244]
Then we use Array.prototype.reduce to start from a zero-valued BigInt, left-shift each random byte by its position × 8 bits, and OR it into the accumulated value on each iteration:
const randomBigInt = randomBytes
  .reduce((n, c, i) => n | BigInt(c) << BigInt(i) * 8n, 0n);
// randomBigInt will contain a random BigInt
Working example generating 10 random BigInt values:
document.querySelector('#generate').addEventListener('click', () => {
  const output = [];
  let lines = 10;
  do {
    const number = Array(8)
      .fill()
      .map(() => Math.round(Math.random() * 0xFF))
      .reduce((n, c, i) => n | BigInt(c) << BigInt(i) * 8n, 0n);
    output.push(`${
      number.toString().padStart(24)
    } : 0x${
      number.toString(16).padStart(16, '0')
    }`);
  } while (--lines > 0);
  document.querySelector('#numbers').textContent = output.join('\n');
});
<button id="generate">Generate</button>
<pre id="numbers"></pre>
Floating point numbers in JavaScript (and a lot of other languages) can hold only about 15.955 significant decimal digits without losing precision. For bigger numbers you can look into JS libraries, or concatenate a few numbers as strings. For example:
console.log( Math.random().toString().slice(2, 15) + Math.random().toString().slice(2, 15) )
Your question seems to be an XY problem where what you really want to do is generate a sequence of 26 random digits. You don't necessarily have to use Math.random. Whenever one mentions randomness it's important to specify how that randomness is distributed, otherwise you could end up with a paradox.
I'll assume you want each of the 26 digits to be independently and uniformly chosen from the 10 digits 0 to 9, but a common alternative interpretation is that the first digit must not be 0, in which case that digit would be chosen uniformly from 1 to 9.
Other answers may tempt you to choose a random bigint value using what amounts to random bits, but their digits will not be uniformly distributed, since the maximum value is not a power of 10. For a simple example, consider a random 4-bit binary value: written in decimal it ranges from 00 to 15, so its second digit has a 12/16 (= 75%) chance of being 0 to 5, though for uniform digits it should be 60%.
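You can see that bias with a tiny enumeration:

// units digit of each value 0..15: digits 0-5 appear twice, 6-9 once
const counts = Array(10).fill(0);
for (let v = 0; v < 16; v++) counts[v % 10]++;
console.log(counts); // [2, 2, 2, 2, 2, 2, 1, 1, 1, 1] → a 12/16 chance of 0-5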
As for an implementation, there are many ways to go about it. The simplest would be to append to a string 26 times, but there are potentially more efficient approaches you could investigate for yourself if the performance isn't adequate. Math.random has a roughly uniform distribution from 0 to 1, but being double precision it only has 15 or so significant decimal digits to offer us, so each call to Math.random can safely supply up to 15 of our 26 decimal digits. Using this fact, I would suggest the following compromise between readability and efficiency:
function generate26Digits() {
  const first13 = Math.floor(Math.random() * Math.pow(10, 13)).toFixed(0).padStart(13, "0");
  const next13 = Math.floor(Math.random() * Math.pow(10, 13)).toFixed(0).padStart(13, "0");
  return first13 + next13;
}
console.log(generate26Digits())
This solution is not cryptographically secure, however, so I will direct readers to Crypto.getRandomValues if you need more security for this.
If you then want to do math on this number as well without losing precision, you will have to use bigint types as others have suggested.
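Since Crypto.getRandomValues came up, here is a rough sketch of what a cryptographically stronger variant could look like (the function name is illustrative; the rejection sampling keeps the digits uniform, since 256 is not a multiple of 10):

function secure26Digits() {
  let digits = '';
  while (digits.length < 26) {
    const bytes = crypto.getRandomValues(new Uint8Array(32));
    for (const b of bytes) {
      // bytes 250..255 are rejected so that b % 10 is uniform over 0..9
      if (b < 250 && digits.length < 26) digits += (b % 10).toString();
    }
  }
  return digits;
}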

Can I accurately calculate an average from a medium size set of 2 d.p. numbers using JavaScript?

I need to find the average of a set of values, and after doing some reading I am not sure whether JavaScript is able to produce an accurate result.
Each value has a precision of 2 d.p. and there could be up to 10000 of them between -100000.00 and 100000.00. The result also needs to be to 2 d.p.
From what I can see it is usually the figures around the 16th decimal place that are inaccurate which means that I would have to average an extremely large set before affecting my result. Is the best way of doing it to simply sum all of the values, divide by the total number and then use a toFixed(2)?
You could take advantage of your 2 d.p. precision and multiply all your numbers by 100 first, then do the mathematics using integers. E.g., a float error occurs in this simple average (I am just using 1 d.p. for this example):
(0.1 + 0.2) / 2
0.15000000000000002
But this works:
(0.1*10 + 0.2*10) / (2*10)
0.15
Some good reading here:
http://floating-point-gui.de/basic/
and here:
How to deal with floating point number precision in JavaScript?
and a really precise fix to do it using decimals is to use this:
https://github.com/dtrebbien/BigDecimal.js
Example for 2 dp:
var numbers = [0.10, 0.20, 0.30]
var simple_average = numbers.reduce(function(a, b) {
  return a + b
}) / numbers.length
console.log(simple_average)
var smart_average = numbers.map(function(a) {
  return a * 100
}).reduce(function(a, b) {
  return a + b
}) / (numbers.length * 100)
console.log(smart_average)
This demo can be run here -- http://repl.it/e1B/1
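One caveat worth noting: multiplying by 100 does not always give an exact integer (for example, 0.29 * 100 yields 28.999999999999996 in JavaScript), so a more robust variant of the same idea rounds after scaling. A small sketch, reusing the numbers array from above:

var safe_average = numbers.map(function(a) {
  return Math.round(a * 100) // snap to exact integer "cents"
}).reduce(function(a, b) {
  return a + b
}) / (numbers.length * 100)
console.log(safe_average)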

Why are items in the wrong cells in this Javascript array?

Somebody wrote this (very terrible) function to translate a numeric value from 0-999 to English words.
function getNumberWords(number) {
  var list = new Array(1000);
  list[000] = "zero";
  list[001] = "one";
  list[002] = "two";
  list[003] = "three";
  ///skip a few
  list[099] = "ninety nine";
  list[100] = "one hundred";
  list[101] = "one hundred and one";
  ///skip a few more
  list[997] = "nine hundred and ninety seven";
  list[998] = "nine hundred and ninety eight";
  list[999] = "nine hundred and ninety nine";
  return list[number];
}
There is some rather odd bug in here that I can't seem to figure out the cause of. Some, but not all of the elements are placed in the wrong cell.
I tried displaying the contents of the list and it showed a pretty funky result:
> list.toString();
"zero,one,two,three,four,five,six,seven,ten,eleven,twelve,thirteen,fourteen,
fifteen,sixteen,seventeen,twenty,twenty one,twenty two,twenty three,twenty four,
twenty five,twenty six,twenty seven,thirty,thirty one,thirty two,thirty three,
thirty four,thirty five,thirty six,thirty seven,forty,forty one,forty two,"
///(skip a few)
"sixty six,sixty seven,seventy,seventy one,seventy two,seventy three,seventy four,
seventy five,seventy six,seventy seven,,,,,sixty eight,sixty nine,,,,,,,,,
seventy eight,seventy nine,eighty,eighty one,eighty two,eighty three,eighty four,"
///(and so on)
That is, elements 0-7 have the expected value. Elements 68, 69, and 78-999 also have the expected values. Elements 64-67 and 70-77 are empty. Elements 8-63 have incorrect values.
What in the world is going on here? Why are 15 cells empty, 56 cells incorrect, and the rest correct?
Numeric literals starting with 0 are interpreted as octal values (base-8) when they can be. This is the case in Javascript, C, C++, PHP, Perl, Bash and many other languages.
Now, base-8 22 is base-10 18, so you're not accessing the elements you think you are. Your first eight array elements were fine because, naturally, base-8 0 is also base-10 0, and so on up to 7. A value like 069 did not cause confusion because it cannot represent anything in base-8, so Javascript falls back to base-10. Yuck!
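A quick console demonstration of both behaviours (in non-strict code):

console.log(022); // 18 (parsed as octal)
console.log(069); // 69 (9 is not an octal digit, so it falls back to decimal)
console.log(008); // 8  (same fallback)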
I suggest using spacing for alignment instead:
function getNumberWords(number) {
  var list = new Array(1000);
  list[  0] = "zero";
  list[  1] = "one";
  list[  2] = "two";
  list[  3] = "three";
  ///skip a few
  list[ 99] = "ninety nine";
  list[100] = "one hundred";
  list[101] = "one hundred and one";
  ///skip a few more
  list[997] = "nine hundred and ninety seven";
  list[998] = "nine hundred and ninety eight";
  list[999] = "nine hundred and ninety nine";
  return list[number];
}
I also suggest writing a new function that generates the strings on the fly; it shouldn't be taxing, and certainly no more so than creating this array on every call.
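A minimal sketch of such a generator (the wording rules are assumptions based on the strings above, e.g. "ninety nine" without a hyphen and "one hundred and one"):

// generate English words for 0..999 on the fly instead of a lookup table
function numberToWords(n) {
  const ones = ["zero", "one", "two", "three", "four", "five", "six", "seven",
                "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
                "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"];
  const tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
                "eighty", "ninety"];
  if (n < 20) return ones[n];
  if (n < 100) return tens[Math.floor(n / 10)] + (n % 10 ? " " + ones[n % 10] : "");
  return ones[Math.floor(n / 100)] + " hundred" +
         (n % 100 ? " and " + numberToWords(n % 100) : "");
}
console.log(numberToWords(101)); // "one hundred and one"
console.log(numberToWords(999)); // "nine hundred and ninety nine"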
In Javascript 022 means 22 in octal (18 in decimal).
Octal numeral system on Wikipedia
Starting a number literal with a 0 in non-strict code may cause that number to be parsed as an octal literal (base 8). For instance, 010 is parsed to a number with the value 8.
Octal literals of this form are deprecated and are a syntax error in strict mode ("use strict"):
function a() {
  alert(010);
  // -> 8
}
function b() {
  "use strict";
  alert(010);
  // -> SyntaxError: octal literals are not allowed in strict mode
}
Not all browsers support strict mode, so for now just change your code to make sure numbers do not start with a 0, or wrap them in strings:
list[ 21 ] = "something"
list["022"] = "something else";
Strings work because they are never parsed as octal literals. Note, though, that list["022"] creates a plain property named "022", which is not the same slot as list[22].
The 0-prefix means octal in JavaScript, so 022 is octal:
022 = 2*8^1 + 2*8^0 = 18
And since the first index is 0, index 18 gives the 19th element.
In JavaScript, numbers beginning with 0 are interpreted as octal (base 8).
For example, 010 === 8.
Oddly, JavaScript will interpret the number as decimal (base 10) even if it has the octal 0-prefix, whenever the number contains a digit (8 or 9) that is impossible in octal.
For example, 08 === 8.

bitwise AND in Javascript with a 64 bit integer

I am looking for a way of performing a bitwise AND on a 64-bit integer in JavaScript.
JavaScript will cast all of its double values into signed 32-bit integers to do the bitwise operations (details here).
Javascript represents all numbers as 64-bit double precision IEEE 754 floating point numbers (see the ECMAScript spec, section 8.5). All positive integers up to 2^53 can be encoded precisely. Larger integers get their least significant bits clipped. This leaves the question of how you can even represent a 64-bit integer in Javascript, since the native number data type clearly can't precisely represent a 64-bit int.
The following illustrates this. Although javascript appears to be able to parse hexadecimal numbers representing 64-bit numbers, the underlying numeric representation does not hold 64 bits. Try the following in your browser:
<html>
<head>
<script language="javascript">
  function showPrecisionLimits() {
    document.getElementById("r50").innerHTML = 0x0004000000000001 - 0x0004000000000000;
    document.getElementById("r51").innerHTML = 0x0008000000000001 - 0x0008000000000000;
    document.getElementById("r52").innerHTML = 0x0010000000000001 - 0x0010000000000000;
    document.getElementById("r53").innerHTML = 0x0020000000000001 - 0x0020000000000000;
    document.getElementById("r54").innerHTML = 0x0040000000000001 - 0x0040000000000000;
  }
</script>
</head>
<body onload="showPrecisionLimits()">
  <p>(2^50+1) - (2^50) = <span id="r50"></span></p>
  <p>(2^51+1) - (2^51) = <span id="r51"></span></p>
  <p>(2^52+1) - (2^52) = <span id="r52"></span></p>
  <p>(2^53+1) - (2^53) = <span id="r53"></span></p>
  <p>(2^54+1) - (2^54) = <span id="r54"></span></p>
</body>
</html>
In Firefox, Chrome and IE I'm getting the following. If numbers were stored in their full 64-bit glory, the result should have been 1 for all the subtractions. Instead, you can see how the difference between 2^53+1 and 2^53 is lost.
(2^50+1) - (2^50) = 1
(2^51+1) - (2^51) = 1
(2^52+1) - (2^52) = 1
(2^53+1) - (2^53) = 0
(2^54+1) - (2^54) = 0
So what can you do?
If you choose to represent a 64-bit integer as two 32-bit numbers, then applying a bitwise AND is as simple as applying two 32-bit bitwise ANDs, one to the low and one to the high 32-bit 'word'.
For example:
var a = [ 0x0000ffff, 0xffff0000 ];
var b = [ 0x00ffff00, 0x00ffff00 ];
var c = [ a[0] & b[0], a[1] & b[1] ];
document.body.innerHTML = c[0].toString(16) + ":" + c[1].toString(16);
gets you:
ff00:ff0000
Here is code for ANDing int64 numbers; you can replace the AND with another bitwise operation. (Since the inputs are doubles, this is only exact for integers up to 2^53 - 1, i.e. Number.MAX_SAFE_INTEGER.)
function and(v1, v2) {
  var hi = 0x80000000;   // 2^31: split point between the low and high halves
  var low = 0x7fffffff;  // mask for the low 31 bits
  var hi1 = ~~(v1 / hi); // bits 31 and up, via truncating division
  var hi2 = ~~(v2 / hi);
  var low1 = v1 & low;   // low 31 bits
  var low2 = v2 & low;
  var h = hi1 & hi2;
  var l = low1 & low2;
  return h * hi + l;
}
This can now be done with the new BigInt built-in numeric type. BigInt is currently (July 2019) only available in certain browsers, see the following link for details:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt
I have tested bitwise operations using BigInts in Chrome 67 and can confirm that they work as expected with up to 64 bit values.
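For example, a quick sketch using BigInt literals (the n suffix):

const a = 0xffff0000ffff0000n;
const b = 0x00ffff0000ffff00n;
console.log((a & b).toString(16)); // ff000000ff0000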
Javascript doesn't support 64-bit integers out of the box. This is what I ended up doing:
Find long.js, a self-contained Long implementation, on GitHub.
Convert the string value representing the 64-bit number to a Long.
Extract the high and low 32-bit values.
Do a 32-bit bitwise AND between the two high parts and between the two low parts, separately.
Initialise a new 64-bit Long from the resulting low and high bits.
If the number is > 0 then there is a correlation between the two numbers.
Note: for the code example below to work you need to load long.js.
// Handy to output leading zeros to make it easier to compare the bits in the console
function zeroPad(num, places) {
  var zero = places - num.length + 1;
  return Array(+(zero > 0 && zero)).join('0') + num;
}
// 2^3 = 8
var val1 = Long.fromString('8', 10);
var val1High = val1.getHighBitsUnsigned();
var val1Low = val1.getLowBitsUnsigned();
// 2^61 + 2^3 = 2305843009213693960
var val2 = Long.fromString('2305843009213693960', 10);
var val2High = val2.getHighBitsUnsigned();
var val2Low = val2.getLowBitsUnsigned();
console.log('2^3 & (2^3 + 2^61)');
console.log(zeroPad(val1.toString(2), 64));
console.log(zeroPad(val2.toString(2), 64));
var bitwiseAndResult = Long.fromBits(val1Low & val2Low, val1High & val2High, true);
console.log(bitwiseAndResult);
console.log(zeroPad(bitwiseAndResult.toString(2), 64));
console.log('Correlation between val1 and val2?');
console.log(bitwiseAndResult > 0);
Console output:
2^3
0000000000000000000000000000000000000000000000000000000000001000
2^3 + 2^61
0010000000000000000000000000000000000000000000000000000000001000
2^3 & (2^3 + 2^61)
0000000000000000000000000000000000000000000000000000000000001000
Correlation between val1 and val2?
true
The Closure library has goog.math.Long with a bitwise and() method.
Unfortunately, the accepted answer (and others) appears not to have been adequately tested. Confronted by this problem recently, I initially tried to split my 64-bit numbers into two 32-bit numbers as suggested, but there's another little wrinkle.
Open your JavaScript console and enter:
0x80000001
When you press Enter, you'll obtain 2147483649, the decimal equivalent. Next try:
0x80000001 & 0x80000003
This gives you -2147483647, not quite what you expected. It's clear that in performing the bitwise AND, the numbers are treated as signed 32-bit integers. And the result is wrong. Even if you negate it.
My solution was to apply ~~ to the 32-bit numbers after they were split off, check for a negative sign, and then deal with this appropriately.
This is clumsy. There may be a more elegant 'fix', but I can't see it on quick examination. There's a certain irony that something that can be accomplished by a couple of lines of assembly should require so much more labour in JavaScript.
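As an aside (a standard JavaScript idiom, not part of the answer above): if all you need is the unsigned interpretation of a 32-bit result, the zero-fill right shift by 0 does it in one step:

console.log(0x80000001 & 0x80000003);         // -2147483647 (signed 32-bit view)
console.log((0x80000001 & 0x80000003) >>> 0); // 2147483649 (unsigned view)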
