48-bit bitwise operations in JavaScript?

I've been given the task of porting Java's java.util.Random to JavaScript, and I've run across a huge performance hit/inaccuracy using bitwise operators in JavaScript on sufficiently large numbers. Some cursory research states that "bitwise operators in JavaScript are inherently slow," because internally it appears that JavaScript casts all of its double values to signed 32-bit integers to do the bitwise operations (see here for more on this). Because of this, I can't do a direct port of the Java random number generator, and I need to get the same numeric results as java.util.Random. Writing something like
this.next = function(bits) {
    if (!bits) {
        bits = 48;
    }
    this.seed = (this.seed * 25214903917 + 11) & ((1 << 48) - 1);
    return this.seed >>> (48 - bits);
};
(which is an almost-direct port of the java.util.Random code) won't work properly, since JavaScript can't do bitwise operations on an integer of that size.
I've figured out that I can make a seedable random number generator in 32-bit space using the Lehmer algorithm, but the trick is that I need to get the same values as I would with java.util.Random. What should I do to make a faster, functional port?

Instead of foo & ((1 << 48) - 1) you should be able to use foo % Math.pow(2,48).
All numbers in JavaScript are 64-bit floating-point numbers, which is sufficient to represent any 48-bit integer exactly.
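For instance, here is a minimal sketch of the state update done that way (hedged: this fixes only the masking step; the product seed * 25214903917 can still exceed 2^53 and silently lose precision, so an exact port also has to split the multiplication, as another answer below suggests):
var MOD48 = Math.pow(2, 48);
this.next = function(bits) {
    if (!bits) {
        bits = 48;
    }
    // Reduce modulo 2^48 instead of masking; (1 << 48) - 1 is broken
    // because << only operates on 32-bit integers.
    this.seed = (this.seed * 25214903917 + 11) % MOD48;
    // Division replaces >>>, which would also truncate to 32 bits.
    return Math.floor(this.seed / Math.pow(2, 48 - bits));
};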

An alternative is to use an array of 48 booleans and implement the shifting yourself. I don't know if this is faster, though; I doubt it, since all the booleans are stored as doubles anyway.

Bear in mind that a bit shift is directly equivalent to a multiplication or division by a power of 2.
1 << x == 1 * Math.pow(2,x)
Multiplying by Math.pow is slower than bit shifting, but it allows you to extend beyond 32 bits. It may still be the faster solution for bits > 32 once you factor in the additional code needed to support higher bit counts, but you'll have to do some profiling to find out.
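As an illustration, two helpers (names are mine) that emulate shifts arithmetically; exact as long as the result stays below 2^53:
function shiftLeft(n, bits) {
    return n * Math.pow(2, bits);  // n << bits, without the 32-bit limit
}
function shiftRight(n, bits) {
    return Math.floor(n / Math.pow(2, bits));  // n >>> bits for non-negative n
}
shiftLeft(1, 40);             // 1099511627776; plain 1 << 40 would wrap to 1 << 8
shiftRight(1099511627776, 8); // 4294967296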

48-bit bitwise operations are not possible in JavaScript. You could use two numbers to simulate them, though.
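Here is a sketch of that idea, splitting the 48-bit state of the generator above into two 24-bit halves (the split constants and helper name are mine, not from the original answer):
// value = hi * 2^24 + lo, with 0 <= hi, lo < 2^24, so every intermediate
// product below stays under 2^53 and is exact in a double.
var TWO24 = 0x1000000;  // 2^24
var M_HI = 1502;        // Math.floor(25214903917 / TWO24)
var M_LO = 15525485;    // 25214903917 % TWO24
function nextSeed(seed) {
    var loProd = seed.lo * M_LO + 11;          // < 2^48: exact
    var mid = seed.hi * M_LO + seed.lo * M_HI  // cross terms...
            + Math.floor(loProd / TWO24);      // ...plus the carry out of lo
    // The hi * M_HI term would sit at 2^48 and above, so it vanishes mod 2^48.
    return { hi: mid % TWO24, lo: loProd % TWO24 };
}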

Related

Using a float in Javascript in a hash function

I have a hash function like this.
class Hash {
    static rotate(x, b) {
        return (x << b) ^ (x >> (32 - b));
    }
    static pcg(a) {
        let b = a;
        for (let i = 0; i < 3; i++) {
            a = Hash.rotate((a ^ 0xcafebabe) + (b ^ 0xfaceb00c), 23);
            b = Hash.rotate((a ^ 0xdeadbeef) + (b ^ 0x8badf00d), 5);
        }
        return a ^ b;
    }
}
// source Adam Smith: https://groups.google.com/forum/#!msg/proceduralcontent/AuvxuA1xqmE/T8t88r2rfUcJ
I use it like this.
console.log(Hash.pcg(116)); // Output: -191955715
As long as I send an integer in, I get an integer out. Now here comes the problem: if I have a floating-point number as input, rounding happens, so Hash.pcg(1.1) and Hash.pcg(1.2) yield the same result. I want different inputs to yield different results. A possible solution would be to multiply the input so the decimals aren't rounded away, but is there a more elegant and flexible solution?
Is there a way to convert a floating-point number to a unique integer, so that each floating-point number results in a different integer?
Performance is important.
This isn't quite an answer, but I was running out of room to make it a comment. :)
You'll hit a problem with integers outside the 32-bit range, as well as with non-integer values.
JavaScript handles all numbers as 64-bit floating point. This gives you exact integers over the range -9007199254740991 to 9007199254740991 (±(2^53 - 1)), but the bitwise operators used in your hash algorithm (^, <<, >>) only work over a 32-bit range.
Since there are far more non-integer numbers possible than integers, no one-to-one mapping into ordinary numbers is possible. You could work something out with BigInts, but that will likely be comparatively much slower.
If you're willing to take the performance hit, you can use JavaScript's buffer machinery to get at the actual bits of a floating-point number. (I'd say more now about how to do that, but I've got to run!)
Edit... back from dinner...
You can convert JavaScript's standard number type, which is 64-bit floating point, to a BigInt like this:
let dv = new DataView(new ArrayBuffer(8));
dv.setFloat64(0, Math.PI);
console.log(dv.getFloat64(0), dv.getBigInt64(0), dv.getBigInt64(0).toString(16).toUpperCase());
The output from this is:
3.141592653589793 4614256656552045848n "400921FB54442D18"
The first item shows that the number was properly stored as a byte array, the second shows the BigInt created from the same bits, and the last is the same BigInt again, but in hex, to better show the floating-point data format.
Once you've converted a number like this to a BigInt (which is not the same numeric value, but it is the same string of bits) every possible value of number will be uniquely represented.
The same bit-wise operators you used in your algorithm above will work with BigInts, but without the 32-bit limitation. I'm guessing that for best results you'd want to change the 32 in your code to 64, and use 16-digit (instead of 8-digit) hex constants as hash keys.
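As a concrete starting point, here is a small helper along those lines (the name doubleToBits is mine) that maps every distinct double to a distinct BigInt:
function doubleToBits(x) {
    // Hoist the DataView out of the function if performance matters.
    const dv = new DataView(new ArrayBuffer(8));
    dv.setFloat64(0, x);
    return dv.getBigInt64(0);  // same 64 bits, reinterpreted as an integer
}
doubleToBits(1.1) === doubleToBits(1.2); // false: distinct inputs, distinct bits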

Javascript 32 bit numbers and the operators & and >>>

I am trying to understand JavaScript's bitwise operators and came across two statements with seemingly similar functionality, and I am trying to understand the difference. So, what's the difference between these two lines of code in JavaScript?
For a number x,
x >>>= 0;
x &= 0x7fffffff;
If I understand correctly, both should give an unsigned 32-bit output. However, for the same negative value of x (i.e. the most significant bit is 1 in both cases), I get different outputs. What am I missing?
Thanks
To truncate a number to 32 bits, the simplest and most common method is to use the "|" bit-wise operator:
x |= 0;
JavaScript always considers the result of any 32-bit computation to be negative if the highest bit (bit 31) is set. Don't let that bother you. And don't clear bit 31 in an attempt to make it positive; that incorrectly alters the value.
To convert a negative 32-bit number as a positive value (a value in the range 0 to 4294967295), you can do this:
x = x < 0 ? x + 0x100000000 : x;
By adding a 33-bit value, automatic sign-extension of bit 31 is inhibited. However, the result is now outside the signed 32-bit range.
Another (tidier) solution is to use the unsigned right-shift operator with a zero shift count:
x >>>= 0;
Technically, all JavaScript numbers are 64-bit floating-point values, but in reality, as long as you keep numbers within the signed 32-bit range, you make it possible for JavaScript runtimes to optimize your code using 32-bit integer operations.
Be aware that when you convert a negative 32-bit value to a positive value using either of above methods, you have essentially produced a 33-bit value, which may defeat any 32-bit optimizations your JavaScript engine uses.
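To make the difference the questioner saw concrete (my example):
let x = -1;                  // all 32 bits set
console.log(x >>> 0);        // 4294967295: same bits, read back as unsigned
console.log(x & 0x7fffffff); // 2147483647: bit 31 cleared, value altered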

How bitwise operations boost performance in Asm.js?

The first lines of the asm.js specification include an asm.js code example illustrating that bitwise operations help produce faster JS code:
HEAP32[p >> 2]|0
or
(x+y)|0
My question is: how does this operation boost performance, and what is the reason behind using this bitwise operator so many times in asm.js or Emscripten-generated JS code?
The bitwise operators force their operands to be integer values. It's a considerably faster way of doing the conversion than calling Math.floor, etc. Note that
p >> 2
is (for non-negative values of p) the same as Math.floor(p / 4).
Using a right bit shift (>>) by 1 is the same as dividing by 2, but it is faster than actually doing the division, because the computer can simply shift the binary digits in memory rather than compute anything. It's just like multiplying a number by 10 in base 10: you append a zero to the right side of the number (and dividing by 10 removes it). So shifting right by two digits divides by 2 twice (e.g. 24/2 = 12 and 12/2 = 6, i.e. 24/4 = 6).
Apparently, (x+y)|0 is just a faster way to truncate the sum to an integer (equivalent to a floor for non-negative values).
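A small illustration of what |0 actually does (my examples):
console.log(7.9 | 0);        // 7: truncates toward zero
console.log(-7.9 | 0);       // -7, whereas Math.floor(-7.9) === -8
console.log(2147483648 | 0); // -2147483648: wraps into the signed 32-bit range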

Unsigned 32 bit integers in Javascript

How can I emulate 32-bit unsigned integers without any external dependencies in JavaScript? Tricks with x >>> 0 or x | 0 don't work for multiplication (they seem to work for addition/subtraction), and doubles lose precision during multiplication.
For example, try to multiply 2654435769 * 340573321 (mod 2^32). The result should be 1.
This answer has multiplication. What about addition / subtraction / division?
Here's a link to Wolfram Alpha, presenting the equation above.
A 32-bit unsigned int fits within JavaScript's 64-bit float -- there should be no loss of precision when performing addition, subtraction, or division. Just mask with 0xffffffff to stay within a 32-bit integer. Multiplication goes beyond what fits, but you already have a solution for that.
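For instance, a sketch of those operations (helper names are mine), assuming both inputs are already unsigned 32-bit values:
const u32add = (x, y) => (x + y) >>> 0; // sum <= 2^33 - 2: exact, then wrapped mod 2^32
const u32sub = (x, y) => (x - y) >>> 0; // ToUint32 wraps negative differences mod 2^32
const u32div = (x, y) => (x / y) >>> 0; // truncated quotient already fits in 32 bits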
You can ape it with BigInts in 2020.
const u32mul = (x, y) => Number((BigInt(x) * BigInt(y)) & 0xFFFFFFFFn);
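Checked against the multiplication from the question:
u32mul(2654435769, 340573321); // 1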
A sufficiently smart compiler should work this out, but I haven't benchmarked this, so proceed with caution.
The other alternative would of course be to drop into WebAssembly. If you need to be working at this level, I'd strongly advise it.

Dealing With Binary / Bitshifts in JavaScript

I am trying to perform some bitshift operations and dealing with binary numbers in JavaScript.
Here's what I'm trying to do. A user inputs a value and I do the following with it:
// Square Input and mod with 65536 to keep it below that value
var squaredInput = (inputVal * inputVal) % 65536;
// Figure out how many bits is the squared input number
var bits = Math.floor(Math.log(squaredInput) / Math.log(2)) + 1;
// Convert that number to a 16-bit number using bitshift.
var squaredShifted = squaredInput >>> (16 - bits);
As long as the number is larger than 46, it works. Once it is less than 46, it does not work.
I know the problem is in the bitshift. Coming from a C background, I know this would be done differently, since all numbers would be stored in 32-bit format (given an int). Does JavaScript do the same (since its vars are not typed)?
If so, is it possible to store a 16-bit number? If not, can I treat it as 32-bits and do the required calculations to assume it is 16-bits?
Note: I am trying to extract the middle 4-bits of the 16-bit value in squaredInput.
Another note: when printing out the var, it just prints the value without the padding, so I couldn't figure it out. I tried using parseInt and toString.
Thanks
Are you looking for this?
function get16bitnumber(inputVal) {
    return ("0000000000000000" + (inputVal * inputVal).toString(2)).substr(-16);
}
This function returns the last 16 bits of the (inputVal * inputVal) value. By having a binary string, you can work with any range of bits.
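Given the questioner's note about wanting the middle 4 bits, a direct bitwise alternative (my sketch, reading "middle" as bits 6-9 of the 16-bit value):
function middle4(n) {
    return (n >>> 6) & 0xF; // shift the target bits to the bottom, mask them off
}
middle4(0b1010110011110000); // 0b0011 === 3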
Don't use bitshifting in JS if you don't absolutely have to. The specs mention at least four number formats
IEEE 754
Int32
UInt32
UInt16
It's really confusing to know which is used when.
For example, ~ applies a bitwise inversion while converting to Int32. UInt16 seems to be used only in String.fromCharCode. Using bitshift operators converts the operands to either UInt32 or Int32.
In your case, the unsigned right-shift operator >>> forces conversion to UInt32.
When you type
a >>> b
this is what you get:
ToUInt32(a) >>> (ToUInt32(b) & 0x1f)
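Two quick consequences of those conversions (my examples):
-1 >>> 0; // 4294967295: ToUint32 turns -1 into 2^32 - 1
1 >>> 33; // 0: the shift count is masked, 33 & 0x1f === 1, so this is 1 >>> 1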
