Can someone translate this simple function into JavaScript?

I'm reading a tutorial on Perlin Noise, and I came across this function:
function IntNoise(32-bit integer: x)
x = (x<<13) ^ x;
return ( 1.0 - ( (x * (x * x * 15731 + 789221) + 1376312589) & 7fffffff) / 1073741824.0);
end IntNoise function
While I do understand some parts of it, I really don't get what (x<<13) and & 7fffffff are supposed to mean (I can see that it's a hex number, but what does it do?). Can someone help me translate this into JS? Also, are normal integers 32-bit in JS on 32-bit computers?

It should work in JavaScript with minimal modifications:
function IntNoise(x) {
    x = (x << 13) ^ x;
    return (1 - ((x * (x * x * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824);
}
The << operator is a bitwise left-shift, so << 13 means shift the number 13 bits to the left.
The & operator is a bitwise AND. Doing & 0x7fffffff on a signed 32-bit integer masks out the sign bit, ensuring that the result is always a positive number (or zero).
The way that JavaScript deals with numbers is a bit quirky, to say the least. All numbers are normally represented as IEEE-754 doubles, but once you start using bitwise operators on a number, JavaScript treats the operands as signed 32-bit integers for the duration of that calculation.
Here's a good explanation of how JavaScript deals with bitwise operations:
Bitwise Operators
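For instance (an illustration only, not taken from that page), the 32-bit coercion is easy to observe in the console:
console.log(3.7 | 0);           // 3 - the operand is truncated to a 32-bit integer first
console.log(2147483648 | 0);    // -2147483648 - wraps around the signed 32-bit range
console.log((3.7 | 0) + 0.5);   // 3.5 - the result is an ordinary Number again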

x<<13 means shift x 13 steps to left (bitwise).
Furthermore a<<b is equivalent to a*2^b.
& 7fffffff means a bitwise AND of the left side with 7FFFFFFF.
If you look at the bit pattern of 7FFFFFFF you will notice that bit 31 (the most significant bit) is 0 and bits 0-30 are all 1. This means the AND keeps bits 0-30 and clears bit 31.
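To make both operations concrete (illustration only; IntNoise is the JavaScript version from the answer above):
var x = 1;
console.log(x << 13);            // 8192, i.e. 1 * 2**13
console.log((-5) & 0x7fffffff);  // 2147483643 - bit 31 is cleared, so the result is never negative
console.log(IntNoise(42));       // a deterministic value in the range (-1, 1]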

Related

Javascript - will this noise function work?

I have the following deterministic noise function which I've been using in a C# and C++ terrain generator for a while:
float GridNoise(int x, int z, int seed)
{
    int n = (1619*x + 31337*z + 1013*seed) & 0x7fffffff;
    n = (n >> 13) ^ n;
    return 1 - ((n*(n*n*60493 + 19990303) + 1376312589) & 0x7fffffff)/(float)1073741824;
}
It returns a 'random' float between 1 and -1 for any integer x/z coordinates I enter (plus there's a seed so I can generate different terrains). I tried implementing the same function in Javascript, but the results aren't as expected. For small values, it seems OK but as I use larger values (of the order of ~10000) the results are less and less random and eventually all it returns is 1.
You can see it working correctly in C# here, and the incorrect JS results for the same input here.
I suspect it's something to do with JS variables not being strict integers, but can anyone shed more light? Does anyone have a similarly simple deterministic function I could use in JS if this doesn't work?
The underlying problem is that in JavaScript there are no integers - all arithmetic is done on Number (a 64-bit float with 53 bits of integer precision).
In C#, if you're using ints or longs, any overflowing bits are simply discarded (the value wraps around).
In JavaScript, you need to handle this yourself.
There's a numeric type coming to browsers that will help, but it's not here yet - BigInt ... it's in Chrome/Opera and behind a flag in Firefox (desktop, not Android).
(No word on Edge (dead anyway) or Safari (the new IE) - and of course, IE will never get it.)
The best I can come up with using BigInt is
function gridNoise(x, z, seed) {
    var n = (1619 * x + 31337 * z + 1013 * seed) & 0x7fffffff;
    n = BigInt((n >> 13) ^ n);
    n = n * (n * n * 60493n + 19990303n) + 1376312589n;
    n = parseInt(n.toString(2).slice(-31), 2);
    return 1 - n / 1073741824;
}
function test() {
    for (var i = 10000; i < 11000; i++) {
        console.log(gridNoise(0, 0, i));
    }
}
test();
Note, the n suffix (as in 60493n) is BigInt literal notation.
There are "big integer" libraries you could use in the interim though - https://github.com/peterolson/BigInteger.js
The following doesn't work and never will, because a 32-bit x 32-bit multiply needs up to 64 bits, so you lose bits already - I misread the code and thought n was only 19 bits (because of the >> 13).
If you limit the result of n * n * 60493 to 32 bits (actually, I made it 31 bits), it seems to work OK:
function gridNoise(x, z, seed) {
    var n = (1619 * x + 31337 * z + 1013 * seed) & 0x7fffffff;
    n = (n >> 13) ^ n;
    // mask the n * n * 60493 product to 31 bits before adding, so the intermediate value stays small
    return 1 - ((n * ((n * n * 60493 & 0x7fffffff) + 19990303) + 1376312589) & 0x7fffffff) / 1073741824;
}
This also works:
return 1 - ((n * ((n * n * 60493 | 0) + 19990303) + 1376312589) & 0x7fffffff) / 1073741824;
That limits the intermediate result to 32 bits, which may or may not be "accurate".
You may need to play around with it if you want to duplicate exactly what c# produces
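If you do need to follow the C# int arithmetic step for step, one option (a sketch, not from the original answers; gridNoise32 is just an illustrative name) is Math.imul, which multiplies two values as 32-bit integers with C-style wrap-around:
// Sketch: emulate C#'s 32-bit int overflow with Math.imul and ToInt32 coercion.
// gridNoise32 is a hypothetical name, not used in the question or answers.
function gridNoise32(x, z, seed) {
    var n = (Math.imul(1619, x) + Math.imul(31337, z) + Math.imul(1013, seed)) & 0x7fffffff;
    n = (n >> 13) ^ n;
    // every multiplication wraps to 32 bits, mirroring the C# code step by step
    var m = Math.imul(n, Math.imul(Math.imul(n, n), 60493) + 19990303) + 1376312589;
    return 1 - (m & 0x7fffffff) / 1073741824;
}
for (var i = 10000; i < 10005; i++) {
    console.log(gridNoise32(i, i, 1)); // stays noisy even for large coordinates
}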
I'm afraid your code is exceeding the range in which integers can be represented exactly. As soon as that happens, it returns 1, because the calculation ((n*(n*n*60493 + 19990303) + 1376312589) & 0x7fffffff)/1073741824 will always be 0 - thus 1 - 0 = 1.
To understand what's going on here, one has to examine JavaScript's number type. It is basically a 53-bit integer scaled by an 11-bit exponent, packed into a 64-bit float. Therefore, if a calculation would produce a 54-bit integer, only the upper 53 bits are kept and the value is effectively shifted left by one. Now, if you do bitwise math on numbers, only the lower 32 bits are used. So once an integer needs more than about 84 bits, its lower 32 bits fall entirely below the available precision and bitwise operations on it will always result in 0. Numbers bigger than 32 bits therefore tend toward 0 in JS when doing bitwise operations, while C# always takes the lower 32 bits, so its result stays accurate for those 32 bits (but larger numbers cannot be represented).
(2 + 2 ** 53) & (2 + 2 ** 53) // 2
(2 + 2 ** 54) & (2 + 2 ** 54) // 0
Edit (sorry for the poor previous answer):
As others stated before, the problem is related to your values exceeding what a JS Number can hold exactly.
If you have the code working in C#, it might be advisable to offload the functionality to an ASP.NET backend, which would handle the calculation and forward the result via some sort of API.

JavaScript: | operator in return statement [duplicate]

A colleague of mine stumbled upon a method to floor float numbers using a bitwise or:
var a = 13.6 | 0; //a == 13
We were talking about it and wondering a few things.
How does it work? Our theory was that using such an operator casts the number to an integer, thus removing the fractional part
Does it have any advantages over doing Math.floor? Maybe it's a bit faster? (pun not intended)
Does it have any disadvantages? Maybe it doesn't work in some cases? Clarity is an obvious one, since we had to figure it out, and well, I'm writing this question.
Thanks.
How does it work? Our theory was that using such an operator casts the
number to an integer, thus removing the fractional part
All bitwise operations except unsigned right shift, >>>, work on signed 32-bit integers. So using bitwise operations will convert a float to an integer.
Does it have any advantages over doing Math.floor? Maybe it's a bit
faster? (pun not intended)
http://jsperf.com/or-vs-floor/2 suggests it is slightly faster
Does it have any disadvantages? Maybe it doesn't work in some cases?
Clarity is an obvious one, since we had to figure it out, and well,
I'm writing this question.
Will not pass jsLint.
32-bit signed integers only
Odd comparative behavior: Math.floor(NaN) returns NaN, while (NaN | 0) === 0
This is truncation as opposed to flooring. Howard's answer is sort of correct, but I would add that Math.floor does exactly what it is supposed to with respect to negative numbers. Mathematically, that is what a floor is.
In the case you described above, the programmer was more interested in truncation or chopping the decimal completely off. Although, the syntax they used sort of obscures the fact that they are converting the float to an int.
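For example (illustration only):
console.log(13.6 | 0);           // 13
console.log(-13.6 | 0);          // -13 - truncated toward zero
console.log(Math.floor(-13.6));  // -14 - floored toward minus infinity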
In ECMAScript 6, the equivalent of |0 is Math.trunc - kind of, I should say:
Returns the integral part of a number by removing any fractional digits. It simply truncates the decimal point and the digits behind it, no matter whether the argument is a positive or a negative number.
Math.trunc(13.37) // 13
Math.trunc(42.84) // 42
Math.trunc(0.123) // 0
Math.trunc(-0.123) // -0
Math.trunc("-1.123")// -1
Math.trunc(NaN) // NaN
Math.trunc("foo") // NaN
Math.trunc() // NaN
JavaScript represents Number as a double-precision 64-bit floating-point value.
Math.floor works with this in mind.
Bitwise operations work on 32-bit signed integers. A 32-bit signed integer uses the first bit as the sign and the other 31 bits for the value. Because of this, the minimum and maximum allowed 32-bit signed numbers are -2,147,483,648 and 2,147,483,647 (0x7FFFFFFF), respectively.
So when you're doing | 0, you're essentially doing & 0xFFFFFFFF. This means any number that is represented as 0x80000000 (2,147,483,648) or greater will come back as a negative number.
For example:
// Safe
(2147483647.5918 & 0xFFFFFFFF) === 2147483647
(2147483647 & 0xFFFFFFFF) === 2147483647
(200.59082098 & 0xFFFFFFFF) === 200
(0X7FFFFFFF & 0xFFFFFFFF) === 0X7FFFFFFF
// Unsafe
(2147483648 & 0xFFFFFFFF) === -2147483648
(-2147483649 & 0xFFFFFFFF) === 2147483647
(0x80000000 & 0xFFFFFFFF) === -2147483648
(3000000000.5 & 0xFFFFFFFF) === -1294967296
Also, bitwise operations don't "floor". They truncate, which is the same as saying they round toward 0. Once you get into negative numbers, Math.floor rounds down while the bitwise operators round up (toward zero).
As I said before, Math.floor is safer because it operates on 64-bit floating-point numbers. Bitwise is faster, yes, but limited to the 32-bit signed range.
To summarize:
Bitwise works the same as Math.floor from 0 to 2147483647.
Bitwise can be 1 off (for non-integer values) from -2147483648 to 0.
Bitwise is completely different for numbers less than -2147483648 or greater than 2147483647.
If you really want to tweak performance and use both:
function floor(n) {
    if (n >= 0 && n < 0x80000000) {
        return n & 0xFFFFFFFF;
    }
    if (n > -0x80000000 && n < 0) {
        const bitFloored = n & 0xFFFFFFFF;
        if (bitFloored === n) return n;
        return bitFloored - 1;
    }
    return Math.floor(n);
}
Just to add: Math.trunc works like the bitwise operations within the 32-bit range. So you can do this:
function trunc(n) {
    if (n > -0x80000000 && n < 0x80000000) {
        return n & 0xFFFFFFFF;
    }
    return Math.trunc(n);
}
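A quick check of the two helpers above (illustration only):
console.log(floor(13.6));          // 13
console.log(floor(-13.6));         // -14, matches Math.floor
console.log(trunc(-13.6));         // -13, matches Math.trunc and | 0
console.log(floor(3000000000.5));  // 3000000000, falls through to Math.floor outside the 32-bit range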
Your first point is correct. The number is cast to an integer and thus any decimal digits are removed. Please note that Math.floor rounds to the next integer towards minus infinity and thus gives a different result when applied to negative numbers.
The specs say that it is converted to an integer:
Let lnum be ToInt32(lval).
Performance: this has been tested at jsperf before.
note: dead link to spec removed
var myNegInt = -1 * Math.pow(2, 32);
var myFloat = 0.010203040506070809;
var my64BitFloat = myNegInt - myFloat;
var trunc1 = my64BitFloat | 0;
var trunc2 = ~~my64BitFloat;
var trunc3 = my64BitFloat ^ 0;
var trunc4 = my64BitFloat - my64BitFloat % 1;
var trunc5 = parseInt(my64BitFloat);
var trunc6 = Math.floor(my64BitFloat);
console.info(my64BitFloat);
console.info(trunc1);
console.info(trunc2);
console.info(trunc3);
console.info(trunc4);
console.info(trunc5);
console.info(trunc6);
IMO: The questions "How does it work?", "Does it have any advantages over doing Math.floor?" and "Does it have any disadvantages?" pale in comparison to "Is it at all logical to use it for this purpose?"
I think, before you try to get clever with your code, you may want to run these. My advice: just move along, there is nothing to see here. Using bitwise tricks to save a few operations, and having that matter to you at all, usually means your code architecture needs work. As for why it may work sometimes: well, a stopped clock is accurate twice a day, but that does not make it useful. These operators have their uses, but not in this context.

Javascript 32 bit numbers and the operators & and >>>

I am trying to understand JavaScript bitwise operators and came across two statements with seemingly similar functionality, and I am trying to understand the difference. So, what's the difference between these two lines of code in JavaScript?
For a number x,
x >>>= 0;
x &= 0x7fffffff;
If I understand it correctly, they should both give an unsigned 32-bit output. However, for the same negative value of x (i.e. the most significant bit is 1 in both cases), I get different outputs. What am I missing?
Thanks
To truncate a number to 32 bits, the simplest and most common method is to use the "|" bit-wise operator:
x |= 0;
JavaScript always considers the result of any 32-bit computation to be negative if the highest bit (bit 31) is set. Don't let that bother you. And don't clear bit 31 in an attempt to make it positive; that incorrectly alters the value.
To convert a negative 32-bit number as a positive value (a value in the range 0 to 4294967295), you can do this:
x = x < 0? x + 0x100000000 : x;
By adding a 33-bit value, automatic sign-extension of bit 31 is inhibited. However, the result is now outside the signed 32-bit range.
Another (tidier) solution is to use the unsigned right-shift operator with a zero shift count:
x >>>= 0;
Technically, all JavaScript numbers are 64-bit floating-point values, but in reality, as long as you keep numbers within the signed 32-bit range, you make it possible for JavaScript runtimes to optimize your code using 32-bit integer operations.
Be aware that when you convert a negative 32-bit value to a positive value using either of above methods, you have essentially produced a 33-bit value, which may defeat any 32-bit optimizations your JavaScript engine uses.
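To make the difference from the original question concrete (illustration only, using -1 as the negative input):
var x = -1;                    // all 32 bits set
console.log(x >>> 0);          // 4294967295 - same bits, reinterpreted as unsigned
console.log(x & 0x7fffffff);   // 2147483647 - bit 31 cleared, so the value itself changes
console.log(x | 0);            // -1 - still a signed 32-bit value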

Javascript operators [duplicate]

This question already has answers here:
Unfamiliar characters used in JavaScript encryption script
(3 answers)
Closed 8 years ago.
I am using a "LightenDarkenColor" function in my script and never really paid much attention to it until now and I noticed some operations and I had no idea what they were doing. I had actually never seen them before.
Those operators are >> and &. I also noticed that the function doesn't work in Firefox.
The function:
function LightenDarkenColor(color, percent) {
    var num = parseInt(color,16),
        amt = Math.round(2.55 * percent),
        R = (num >> 16) + amt,
        B = (num >> 8 & 0x00FF) + amt,
        G = (num & 0x0000FF) + amt;
    return (0x1000000 + (R<255?R<1?0:R:255)*0x10000 + (B<255?B<1?0:B:255)*0x100 + (G<255?G<1?0:G:255)).toString(16).slice(1);
}
What exactly are the operators and how do they work?
Imagine num is 227733 (= some mild dark green) and take
B = (num >> 8 & 0x00FF)
num >> 8 will shift the number (move digits) to the right by 2 hex digits (4 bits per digit x 2=8) making it become:
227733 => 002277
then & 0x00FF will clear out all digits except the last two
002277 => 000077
and that is the component for green.
Hex 00FF is binary 0000000011111111 and & (binary AND) is the operation that compares all bit pairs one by one and sets a result bit to zero unless both operand bits are 1. So ANDing with zeroes always gives zeroes, and ANDing with ones gives back the corresponding bits of the other operand: 1 & 1 => 1, 0 & 1 => 0. So AnyNumber & 0000000011111111 = the low part (last 2 hex digits) of AnyNumber.
It is just the standard way of getting a number subpart. In this case the green component. Shift right to clear the lower bits, and &0000...1111 to clear the upper bits.
After it has got all the color components, it adds amt to each of them (positive amt = lighter) and at the end it clamps the values.
R<255?R<1?0:R:255 means: if R is less than 1, use 0; if it is 255 or more, use 255; otherwise keep R.
And finally it reassembles the color as a single number (instead of *0x10000 and *0x100 it could have used R<<16 and B<<8, the opposite of >>, and instead of + it could have used |, binary OR, to merge the components).
Note that the function names the second component B, assuming it is blue, but in RGB the second component is actually green. The result is correct anyway (it would work whatever color components you used and however you named them).
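For reference, here is a sketch of the same function with conventional R/G/B names, using shifts and OR instead of multiplication and addition (lightenDarkenColor and clamp are illustrative names, not from the original code):
// Sketch only; lightenDarkenColor and clamp are hypothetical names
function lightenDarkenColor(color, percent) {
    var num = parseInt(color, 16);
    var amt = Math.round(2.55 * percent);
    var clamp = function (v) { return v < 1 ? 0 : v > 255 ? 255 : v; };
    var r = clamp((num >> 16) + amt);        // top byte: red
    var g = clamp((num >> 8 & 0xFF) + amt);  // middle byte: green
    var b = clamp((num & 0xFF) + amt);       // bottom byte: blue
    // 0x1000000 forces a 7-hex-digit result; slice(1) drops that leading "1"
    return (0x1000000 | r << 16 | g << 8 | b).toString(16).slice(1);
}
console.log(lightenDarkenColor("227733", 10)); // "3c914d" - each component raised by 26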
They're bitwise operators.
>> is a bitwise (Sign-propagating) right shift,
& is a bitwise "and".
I could go into detail on what these operators do, but the MDN has a good explanation and example for each operator. It would be counter-productive to copy those.

