Why is left-shift different in JS and Dart? - javascript

In JavaScript:
255 << 24 = -16777216
In Dart:
255 << 24 = 4278190080
Is there any way to get the same answer in Dart as in JS?

To get precisely the same result in Dart as in JavaScript (whether on the web or not), do:
var jsValue = (255 << 24).toSigned(32);
JavaScript coerces the operands of all bitwise operations to 32-bit integers, and it interprets the result as a signed 32-bit integer for every operator except >>>.
So, do .toSigned(32) on the result to do precisely what JavaScript does.
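The same relationship can be seen from the JavaScript side: the unsigned right-shift operator >>> reinterprets the signed 32-bit result as unsigned, which is the value Dart produces natively. A quick sketch:
console.log(255 << 24);         // -16777216 (signed 32-bit result)
console.log((255 << 24) >>> 0); // 4278190080 (unsigned, the value Dart gives)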

Related

difference between JavaScript bit-wise operator code and Python bit-wise operator code

I have converted JavaScript code that uses bit-wise operators to Python code, but there is one problem. When I do this in JavaScript and Python
412287 << 10
I get 422181888, the same result in both languages. But when I do this in both
424970184 << 10
I get different results: 1377771520 in JavaScript and 435169468416 in Python.
Can anybody help me with this? Any help would be appreciated.
If you want the JavaScript-equivalent value, you can truncate the result to a signed 32-bit integer with ctypes:
import ctypes
print(ctypes.c_int(424970184 << 10).value)
Output:
1377771520
As stated in this SO answer, in JavaScript the bitwise and shift operators operate on 32-bit integers, and your second example overflows that 32-bit capacity, so the Python equivalent would be:
(424970184 << 10) & 0x7FFFFFFF
(you get a "modulo"/"masked" value with the signed 32-bit integer mask, not the actual value; here it matches JavaScript exactly because the truncated value is positive, so clearing bit 31 changes nothing)
In Python there is no limit on integer capacity, so you get the actual value.
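To make the difference concrete, here is roughly what a JavaScript engine does around the shift: a minimal sketch of the 32-bit truncation written out with plain arithmetic (toInt32 is just an illustrative name):
// Reduce modulo 2^32, then reinterpret the top bit as a sign bit.
function toInt32(n) {
    var m = ((n % 4294967296) + 4294967296) % 4294967296; // keep the low 32 bits
    return m >= 2147483648 ? m - 4294967296 : m;          // signed reinterpretation
}

console.log(412287 << 10);              // 422181888  - fits in 32 bits, matches Python
console.log(toInt32(412287 * 1024));    // 422181888
console.log(424970184 << 10);           // 1377771520 - truncated by the engine
console.log(toInt32(424970184 * 1024)); // 1377771520 - the same truncation done by hand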

Implementing XorShift the same in Java and Python

I would like to implement an XorShift PRNG in Java, Python, and JavaScript. The different implementations must generate the exact same sequences given the same seed. So far, I have not been able to do this.
My implementation in Java
I have the following implementation of an XorShift PRNG in Java (where x is a long field):
public long randomLong() {
    x ^= (x << 21);
    x ^= (x >>> 35);
    x ^= (x << 4);
    return x;
}
If I seed x to 1, the first four calls to randomLong() will generate:
35651601
1130297953386881
-9204155794254196429
144132848981442561
My implementation in Python
I have tried both with and without numpy. Below is the version that uses numpy.
def randomLong(self):
    self.x ^= np.left_shift(self.x, 21)
    self.x ^= np.right_shift(self.x, 35)
    self.x ^= np.left_shift(self.x, 4)
    return self.x
With the same seed, the Python function will generate:
35651601
1130297953386881
-9204155787274874573 # different
143006948545953793 # different
My JavaScript implementation
I've not attempted one yet, since JavaScript's only number type seems to be doubles based on IEEE 754, which opens up a different can of worms.
What I think the cause is
Java and Python have different number types. Java has 32- and 64-bit integers, while Python has arbitrary-precision integers.
It seems that the shift operators have different semantics. For example, in Java there are both logical and arithmetic right shifts, while in Python there is only one kind of right shift (which behaves arithmetically).
Questions
I would be happy with an answer that lets me write a PRNG in these three languages, and one that is fast. It does not have to be very good. I have considered porting a C library implementation to the other languages, but that does not seem like a very good option.
Can I fix my above implementations so they work?
Should I switch to another PRNG function that is easier to implement across prog.langs?
I have read the SO answer where someone suggested using the java.util.Random class for Python. I don't want this, since I'm also going to need the function in JavaScript, and I don't know whether such a package exists there.
I would be happy with an answer that lets me write a PRNG in these three languages, and one that is fast. It does not have to be very good.
You could implement a 32-bit linear congruential generator in 3 languages.
Python:
seed = 0
for i in range(10):
    seed = (seed * 1664525 + 1013904223) & 0xFFFFFFFF
    print(seed)
Java:
int seed = 0;
for (int i = 0; i < 10; i++) {
    seed = seed * 1664525 + 1013904223;
    System.out.println(seed & 0xFFFFFFFFL);
}
JavaScript:
var seed = 0;
for (var i = 0; i < 10; i++) {
    // The intermediate result fits in 52 bits, so no overflow
    seed = (seed * 1664525 + 1013904223) | 0;
    console.log(seed >>> 0);
}
Output:
1013904223
1196435762
3519870697
2868466484
1649599747
2670642822
1476291629
2748932008
2180890343
2498801434
Note that in all 3 languages, each iteration prints an unsigned 32-bit integer.
The tricky part is the logical right shift. The easiest approach in Python, if you have access to NumPy, is to store your x as a uint64 value, so that arithmetic and logical right shifts are the exact same operation, and to cast the output value to an int64 before returning it, e.g.:
import numpy as np

class XorShiftRng(object):
    def __init__(self, x):
        self.x = np.uint64(x)

    def random_long(self):
        self.x ^= self.x << np.uint64(21)
        self.x ^= self.x >> np.uint64(35)
        self.x ^= self.x << np.uint64(4)
        return np.int64(self.x)
Those ugly casts of the shift values are required to prevent NumPy from issuing weird casting errors. In any case, this produces the exact same result as your Java version:
>>> rng = XorShiftRng(1)
>>> for _ in range(4):
... print(rng.random_long())
...
35651601
1130297953386881
-9204155794254196429
144132848981442561
The difference in results between Java and Python is due to a difference in how the languages implement integers. A Java long is a 64-bit signed integer, with the sign in the leftmost bit. Python is... well, different.
Presumably Python encodes integers with a varying bit length depending on the magnitude of the number:
>>> n = 10
>>> n.bit_length()
4
>>> n = 1000
>>> n.bit_length()
10
>>> n = -4
>>> n.bit_length()
3
And negative integers are (presumably) encoded as sign and magnitude, though the sign does not seem to be set in any of the bits. The sign would normally be in the leftmost bit, but not here. I guess this has to do with Python's varying bit length for numbers.
>>> bin(-4)
'-0b100'
where -4 in 64 bit 2's complement would be:
0b1111111111111111111111111111111111111111111111111111111111111100
This makes a huge difference in the algorithm, since shifting 0b100 left or right yields quite different results than shifting 0b1111111111111111111111111111111111111111111111111111111111111100.
Luckily there is a way of tricking Python, but it involves switching between the two representations yourself.
First, some bit masks are needed:
word_size = 64
sign_mask = 1<<(word_size-1)
word_mask = sign_mask | (sign_mask - 1)
Now, to force Python into 2's complement, all one needs is a bitwise 'and' with the word mask:
>>> bin(4 & word_mask)
'0b100'
>>> bin(-4 & word_mask)
'0b1111111111111111111111111111111111111111111111111111111111111100'
which is what you need for the algorithm to work. Except you need to convert the numbers back when returning values, since
>>> -4 & word_mask
18446744073709551612L
So the number needs to be converted from 2's complement to signed magnitude:
>>> number = -4 & word_mask
>>> bin(~(number^word_mask))
'-0b100'
But this only works for negative integers:
>>> number = 4 & word_mask
>>> bin(~(number^word_mask))
'-0b1111111111111111111111111111111111111111111111111111111111111100'
Since positive integers should be returned as is, this would be better:
>>> number = -4 & word_mask
>>> bin(~(number^word_mask) if (number&sign_mask) else number)
'-0b100'
>>> number = 4 & word_mask
>>> bin(~(number^word_mask) if (number&sign_mask) else number)
'0b100'
So I've implemented the algorithm like this:
class XORShift:
    def __init__(self, seed=1, word_length=64):
        self.sign_mask = (1 << (word_length-1))
        self.word_mask = self.sign_mask | (self.sign_mask - 1)
        self.next = self._to2scomplement(seed)

    def _to2scomplement(self, number):
        return number & self.word_mask

    def _from2scomplement(self, number):
        return ~(number^self.word_mask) if (number & self.sign_mask) else number

    def seed(self, seed):
        self.next = self._to2scomplement(seed)

    def random(self):
        self.next ^= (self.next << 21) & self.word_mask
        self.next ^= (self.next >> 35) & self.word_mask
        self.next ^= (self.next << 4) & self.word_mask
        return self._from2scomplement(self.next)
Seeding it with 1, the algorithm returns the following as its first 4 numbers:
>>> prng = XORShift(1)
>>> for _ in range(4):
>>> print prng.random()
35651601
1130297953386881
-9204155794254196429
144132848981442561
Of course you get this for free by using numpy.int64, but this is less fun, as it hides the cause of the difference.
I have not been able to implement the same algorithm in JavaScript. It seems that JavaScript's shift operators work on 32-bit integers and the shift count wraps around, so shifting 35 positions right actually shifts by only 3. I have not investigated it further.
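That said, modern JavaScript can express this with BigInt, which has arbitrary precision: BigInt.asUintN(64, ...) emulates the 64-bit wrap-around and BigInt.asIntN(64, ...) reinterprets the result as a signed value. A minimal sketch, assuming an ES2020+ runtime with BigInt support:
class XorShift64 {
    constructor(seed) {
        this.x = BigInt.asUintN(64, BigInt(seed));
    }
    randomLong() {
        // x is kept non-negative and below 2^64, so >> acts as a logical shift here
        this.x = BigInt.asUintN(64, this.x ^ (this.x << 21n));
        this.x = BigInt.asUintN(64, this.x ^ (this.x >> 35n));
        this.x = BigInt.asUintN(64, this.x ^ (this.x << 4n));
        return BigInt.asIntN(64, this.x); // signed, like Java's long
    }
}

const rng = new XorShift64(1);
for (let i = 0; i < 4; i++) console.log(rng.randomLong().toString());
// Should print 35651601, 1130297953386881, -9204155794254196429, 144132848981442561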

Can someone translate this simple function into Javascript?

I'm reading a tutorial on Perlin Noise, and I came across this function:
function IntNoise(32-bit integer: x)
    x = (x<<13) ^ x;
    return ( 1.0 - ( (x * (x * x * 15731 + 789221) + 1376312589) & 7fffffff) / 1073741824.0);
end IntNoise function
While I do understand some parts of it, I really don't get what (x<<13) and & 7fffffff are supposed to mean (I see that the latter is a hex number, but what does it do?). Can someone help me translate this into JS? Also, normal integers are 32-bit in JS on 32-bit computers, right?
It should work in JavaScript with minimal modifications:
function IntNoise(x) {
    x = (x << 13) ^ x;
    return (1 - ((x * (x * x * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824);
}
The << operator is a bitwise left-shift, so << 13 means shift the number 13 bits to the left.
The & operator is a bitwise AND. Doing & 0x7fffffff on a signed 32-bit integer masks out the sign bit, ensuring that the result is always a positive number (or zero).
The way that JavaScript deals with numbers is a bit quirky, to say the least. All numbers are usually represented as IEEE-754 doubles, but... once you start using bitwise operators on a number then JavaScript will treat the operands as signed 32-bit integers for the duration of that calculation.
Here's a good explanation of how JavaScript deals with bitwise operations:
Bitwise Operators
x<<13 means shift x 13 steps to the left (bitwise).
Furthermore, a<<b is equivalent to a*2^b.
& 7fffffff means a bitwise AND of the left side with 7FFFFFFF.
If you take a look at the bit pattern of 7FFFFFFF you will notice that bit 31 is 0 and bits 0-30 are all 1. This means that you keep bits 0-30 and drop bit 31.
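A quick console check of both points, with values chosen here only for illustration:
// -16777216 is 0xFF000000 as a signed 32-bit integer, i.e. its sign bit (bit 31) is set.
console.log(-16777216 & 0x7fffffff); // 2130706432 (0x7F000000): bit 31 dropped, now non-negative
console.log(123 << 13);              // 1007616, the same as 123 * Math.pow(2, 13)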

48-bit bitwise operations in Javascript?

I've been given the task of porting Java's java.util.Random to JavaScript, and I've run across a huge performance hit/inaccuracy using bitwise operators in JavaScript on sufficiently large numbers. Some cursory research states that "bitwise operators in JavaScript are inherently slow," because internally it appears that JavaScript will cast all of its double values into signed 32-bit integers to do the bitwise operations (see here for more on this). Because of this, I can't do a direct port of the Java random number generator, and I need to get the same numeric results as java.util.Random. Writing something like
this.next = function(bits) {
    if (!bits) {
        bits = 48;
    }
    this.seed = (this.seed * 25214903917 + 11) & ((1 << 48) - 1);
    return this.seed >>> (48 - bits);
};
(which is an almost-direct port of the java.util.Random code) won't work properly, since JavaScript can't do bitwise operations on an integer of that size.
I've figured out that I can just make a seedable random number generator in 32-bit space using the Lehmer algorithm, but the trick is that I need to get the same values as I would with java.util.Random. What should I do to make a faster, functional port?
Instead of foo & ((1 << 48) - 1) you should be able to use foo % Math.pow(2,48).
All numbers in Javascript are 64-bit floating point numbers, which is sufficient to represent any 48-bit integer.
An alternative is to use an array of 48 booleans and implement the shifting yourself. I don't know whether this is faster, though I doubt it, since all booleans are stored as doubles.
Bear in mind that a bit shift is directly equivalent to a multiplication or division by a power of 2.
1 << x == 1 * Math.pow(2,x)
It is slower than bit shifting, but allows you to extend beyond 32 bits. It may be a faster solution for bits > 32, once you factor in the additional code you need to support higher bit counts, but you'll have to do some profiling to find out.
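A short sketch of both points above, with arbitrary values: why the 48-bit mask cannot be built with <<, and how the modulo keeps the low 48 bits of a non-negative integer below 2^53 (which is exact in a double):
// Shift counts in JavaScript are taken modulo 32, so 1 << 48 is really 1 << 16.
console.log(1 << 48);             // 65536, not 2^48

var n = Math.pow(2, 50) + 12345;  // wider than 48 bits, but still exactly representable
console.log(n % Math.pow(2, 48)); // 12345, i.e. the low 48 bits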
48-bit bitwise operations are not possible in JavaScript. You could use two numbers to simulate it though.

Convert a string with a hex representation of an IEEE-754 double into JavaScript numeric variable

Suppose I have a hex number "4072508200000000" and I want the floating point number that it represents (293.03173828125000) in IEEE-754 double format to be put into a JavaScript variable.
I can think of a way that uses some masking and a call to pow(), but is there a simpler solution?
A client-side solution is needed.
This may help. It's a website that lets you enter a hex encoding of an IEEE-754 and get an analysis of mantissa and exponent.
http://babbage.cs.qc.edu/IEEE-754/64bit.html
Because people always tend to ask "why?", here's why: I'm trying to fill out an existing but incomplete implementation of Google's Protocol Buffers (protobuf).
I don't know of a good way. It certainly can be done the hard way; here is a single-precision example worked entirely in JavaScript:
js> a = 0x41973333
1100428083
js> (a & 0x7fffff | 0x800000) * 1.0 / Math.pow(2,23) * Math.pow(2, ((a>>23 & 0xff) - 127))
18.899999618530273
A production implementation should consider that most of the fields have magic values, typically implemented by specifying a special interpretation for what would have been the largest or smallest exponent. So, detect NaNs and infinities. The above example should also be checking for negatives (a & 0x80000000).
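For concreteness, the sign, NaN/infinity, and subnormal cases can be folded into the single-precision decoding like this (a sketch only; the decodeFloat32 name is just illustrative):
function decodeFloat32(a) {
    var sign = (a & 0x80000000) ? -1 : 1;       // bit 31
    var exp  = (a >>> 23) & 0xff;               // bits 23-30
    var frac = a & 0x7fffff;                    // bits 0-22
    if (exp === 0xff) return frac ? NaN : sign * Infinity;    // NaN and infinities
    if (exp === 0) return sign * frac * Math.pow(2, -149);    // zero and subnormals
    return sign * (frac | 0x800000) * Math.pow(2, exp - 150); // normal numbers
}

console.log(decodeFloat32(0x41973333)); // 18.899999618530273, as in the example above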
Update: OK, I've got it for doubles, too. You can't directly extend the above technique because the internal JS representation is a double, and so by definition it can handle at best a bit string of length 52, and it can't shift by more than 32 at all.
OK, to do a double you first chop off, as a string, the low 8 hex digits or 32 bits, and process them with a separate object. Then:
js> a = 0x40725082
1081233538
js> (a & 0xfffff | 0x100000) * 1.0 / Math.pow(2, 52 - 32) * Math.pow(2, ((a >> 52 - 32 & 0x7ff) - 1023))
293.03173828125
js>
I kept the above example because it's from the OP. A harder case is when the low 32-bits have a value. Here is the conversion of 0x40725082deadbeef, a full-precision double:
js> a = 0x40725082
1081233538
js> b = 0xdeadbeef
3735928559
js> e = (a >> 52 - 32 & 0x7ff) - 1023
8
js> (a & 0xfffff | 0x100000) * 1.0 / Math.pow(2,52-32) * Math.pow(2, e) +
b * 1.0 / Math.pow(2, 52) * Math.pow(2, e)
293.0319506442019
js>
There are some obvious subexpressions you can factor out but I've left it this way so you can see how it relates to the format.
A quick addition to DigitalRoss' solution, for those finding this page via Google as I did.
Apart from the edge cases for +/- Infinity and NaN, which I'd love input on, you also need to take into account the sign of the result:
s = a >> 31 ? -1 : 1
You can then include s in the final multiplication to get the correct result.
I think for a little-endian solution you'll also need to reverse the bits in a and b and swap them.
The new Typed Arrays mechanism allows you to do this (and is probably an ideal mechanism for implementing protocol buffers):
var buffer = new ArrayBuffer(8);
var bytes = new Uint8Array(buffer);
var doubles = new Float64Array(buffer); // not supported in Chrome
bytes[7] = 0x40; // Load the hex string "40 72 50 82 00 00 00 00"
bytes[6] = 0x72;
bytes[5] = 0x50;
bytes[4] = 0x82;
bytes[3] = 0x00;
bytes[2] = 0x00;
bytes[1] = 0x00;
bytes[0] = 0x00;
my_double = doubles[0];
document.write(my_double); // 293.03173828125
This assumes a little-endian machine.
Unfortunately Chrome does not have Float64Array, although it does have Float32Array. The above example does work in Firefox 4.0.1.
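If the byte order is a concern, a DataView over the same ArrayBuffer lets you state the endianness explicitly instead of depending on the machine (a minimal sketch; DataView belongs to the same Typed Arrays family, so the same browser-support caveats apply):
var buffer = new ArrayBuffer(8);
var view = new DataView(buffer);
view.setUint32(0, 0x40725082);      // high 32 bits, big-endian by default
view.setUint32(4, 0x00000000);      // low 32 bits
document.write(view.getFloat64(0)); // 293.03173828125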
