Calculation order rules for left shift ("Lsh") in JavaScript

I am trying to understand the mathematically correct order of operations in calculations that use shifting.
JavaScript seems to do calculations in the priority order:
^ * / + - << >>
A binary calculator, for example the Calculator of Windows 10, uses the priority order:
<< >> ^ * / + -
But what is the mathematically correct order in this case?
For example:
Calculator
1 + 3 Lsh 3 - 1 (result: 24, i.e. 1 + (3 Lsh 3) - 1)
Javascript
1 + 3 << 3 - 1 (result: 16, i.e. (1 + 3) << (3 - 1))
Try It:
http://www.w3schools.com/code/tryit.asp?filename=F0L1LGPQX9T2

There is no "mathematically correct order" for this.
Mathematics defines an order for exponentiation, multiplication/division, and addition/subtraction, but bitwise operations come with programming languages; see https://en.wikipedia.org/wiki/Order_of_operations#Programming_languages

Use the PEMDAS method: Parentheses, Exponents, Multiplication and Division, and Addition and Subtraction.
So by that rule the correct order is ^ * / + - << >>.
JS Operator precedence table
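To see the grouping explicitly (a quick sketch you can paste into any JS console; the parentheses reproduce each tool's precedence):
console.log(1 + 3 << 3 - 1);   // 16: JavaScript parses this as (1 + 3) << (3 - 1)
console.log(1 + (3 << 3) - 1); // 24: matches the calculator's Lsh-first grouping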

Related

Javascript - will this noise function work?

I have the following deterministic noise function which I've been using in a C# and C++ terrain generator for a while:
float GridNoise(int x, int z, int seed)
{
    int n = (1619*x + 31337*z + 1013*seed) & 0x7fffffff;
    n = (n >> 13) ^ n;
    return 1 - ((n*(n*n*60493 + 19990303) + 1376312589) & 0x7fffffff)/(float)1073741824;
}
It returns a 'random' float between 1 and -1 for any integer x/z coordinates I enter (plus there's a seed so I can generate different terrains). I tried implementing the same function in Javascript, but the results aren't as expected. For small values, it seems OK but as I use larger values (of the order of ~10000) the results are less and less random and eventually all it returns is 1.
You can see it working correctly in C# here, and the incorrect JS results for the same input here.
I suspect it's something to do with JS variables not being strict integers, but can anyone shed more light? Does anyone have a similarly simple deterministic function I could use in JS if this doesn't work?
The underlying problem is that in JavaScript there are no integers, so all mathematical operations are done using Number (a 64-bit float with 53 bits of integer precision).
In C#, if you're using ints or longs, any overflow is just discarded.
In JavaScript, you need to handle this yourself.
There's a numeric format coming to browsers that will help, but it's not here yet: BigInt. It's in Chrome/Opera and behind a flag in Firefox (desktop, not Android).
(No word on Edge (dead anyway) or Safari (the new IE), and of course IE will never get it.)
The best I can come up with using BigInt is
function gridNoise(x, z, seed) {
  // this part stays exact in Number arithmetic, so plain 32-bit ops are fine
  var n = (1619 * x + 31337 * z + 1013 * seed) & 0x7fffffff;
  // the cubic term below would overflow Number, so switch to BigInt here
  n = BigInt((n >> 13) ^ n);
  n = n * (n * n * 60493n + 19990303n) + 1376312589n;
  // keep only the low 31 bits (the & 0x7fffffff of the original), back as Number
  n = parseInt(n.toString(2).slice(-31), 2);
  return 1 - n / 1073741824;
}
function test() {
  for (var i = 10000; i < 11000; i++) {
    console.log(gridNoise(0, 0, i));
  }
}
test();
Note: the n suffix (as in 60493n) is BigInt literal notation.
There are "big integer" libraries you could use in the interim though: https://github.com/peterolson/BigInteger.js
The following doesn't work and never will, because 32 bits x 32 bits == 64 bits, so you'll lose bits before the mask is even applied.
(I misread the code and thought n was only 19 bits, because of the >> 13.)
If you limit the result of n * n * 60493 to 32 bits (actually, I made it 31 bits), it seems to work OK:
function gridNoise(x, z, seed) {
  var n = (1619 * x + 31337 * z + 1013 * seed) & 0x7fffffff;
  n = (n >> 13) ^ n;
  // mask the cubic term back down to 31 bits before the rest of the arithmetic
  return 1 - ((n * ((n * n * 60493 & 0x7fffffff) + 19990303) + 1376312589) & 0x7fffffff) / 1073741824;
}
This also works:
return 1 - ((n * ((n * n * 60493 | 0) + 19990303) + 1376312589) & 0x7fffffff) / 1073741824;
That limits the interim result to 32 bits, which may or may not be "accurate".
You may need to play around with it if you want to duplicate exactly what C# produces.
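A further sketch, not from the original answers: ES2015's Math.imul does a true 32-bit integer multiplication with C-style overflow wrapping, which is what C#'s unchecked int arithmetic does, so it should track the C# output more closely (gridNoiseImul is an illustrative name):
function gridNoiseImul(x, z, seed) {
  var n = (1619 * x + 31337 * z + 1013 * seed) & 0x7fffffff;
  n = (n >> 13) ^ n;
  // each Math.imul wraps to a signed 32-bit result, like C# int multiplication
  var m = Math.imul(n, Math.imul(Math.imul(n, n), 60493) + 19990303) + 1376312589;
  return 1 - (m & 0x7fffffff) / 1073741824;
}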
I'm afraid your code is exceeding the size at which JavaScript can represent integers exactly. As soon as that happens, it returns 1, because ((n*(n*n*60493 + 19990303) + 1376312589) & 0x7fffffff)/1073741824 will always be 0, and thus 1 - 0 = 1.
To understand what's going on here, one has to examine JavaScript's number type. It is basically a 53-bit integer significand that gets shifted left or right by an 11-bit exponent, giving a 64-bit float. If a calculation would produce a 54-bit integer, only the upper 53 bits are kept and shifted left by 1. Bitwise math, however, operates on the lower 32 bits. So once an integer is bigger than 84 bits, its lower 32 bits are necessarily all zero and bitwise operations on it always give 0. Numbers beyond the exact range therefore tend toward 0 in JS when doing bitwise operations, while C# always takes the lower 32 bits exactly, so its result stays accurate for those 32 bits (though larger numbers cannot be represented).
(2 + 2 ** 53) & (2 + 2 ** 53) // 2
(2 + 2 ** 54) & (2 + 2 ** 54) // 0
Edit (sorry for the poor previous answer):
As others stated before, the problem is that your values exceed what JS's Number type can represent exactly.
If you have the code working in C#, it might be advisable to offload the functionality to an ASP.NET backend which will handle the calculation and forward the result via some sort of API.

Javascript syntax: variable declaration with "<<" or ">>"

I had a look at Jason Davies's Word Cloud source on Github
and within the index.js there are some variables that are declared like this:
cw = 1 << 11 >> 5,
ch = 1 << 11;
I noticed this pattern:
the value before "<<" multiplies the result of the shift;
the value after "<<" acts as 2 to the power of that value;
the value after ">>" (following the "<<") divides the number before it, again by 2 to the power of that value;
I was curious:
in general, what are the uses for this type of declaration, and where does it come from?
how does it add value to the code in the rest of Jason Davies' layout?
See this link
Basically, << and >> do bit-wise shifts. If you do a << b, it will represent a as a number in base 2 (0s and 1s) and shift all the digits to the left by b positions. This is mathematically equivalent to
a * 2^b
The >> is the same principle, but it shifts to the right. It's almost analogous to a division by a factor of 2^b, but there's a special case when the initial number is odd: it floors the result.
⌊a / 2^b⌋
If you have 1 << 11 >> 5, the left and right shifts partially cancel each other, and we end up in reality with
1 << 6 === 64 === 1 * 2^6
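To make that concrete (a small sketch for the console):
console.log(1 << 11);      // 2048 = 1 * 2**11
console.log(1 << 11 >> 5); // 64 = 2048 / 2**5, same as 1 << 6
console.log(7 >> 1);       // 3 = floor(7 / 2): odd values round down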

JavaScript: How to interpret a variation of the `Math.floor(Math.random)` method?

It seems that Math.random()*3|0 equates to Math.floor(Math.random() * 3), which is the version I'm familiar with. While I have no problem understanding the step-by-step process of how the latter generates integers 0, 1, and 2, the structure of the former stumps me. It may well be an idiomatic variation of the more roundabout Math.floor method.
Is it possible to express the following two snippets of code in x|y style instead and produce the same results? If so, could you break down how it works?
1 + Math.floor(Math.random() * 100) // yielding 1-100 (inclusive)
190 + Math.floor(Math.random() * 66) // yielding 190-255 (inclusive)
| is the bitwise OR operator in JavaScript.
The bitwise OR operator returns a one in each bit position for which the corresponding bits of either or both operands are ones.
x | 0 is x, for any 32-bit integer x.
Bitwise operators work only on integers, so JavaScript converts the float to an integer first.
Thus 1.5 | 0 becomes 1.
Your expressions can be rewritten as
1 + (Math.random() * 100 | 0)
190 + (Math.random() * 66 | 0)
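One caveat worth noting, which isn't in the original answers: | 0 truncates toward zero and only works inside the 32-bit range, so it is not a drop-in replacement for Math.floor on negative or large values:
console.log(Math.floor(3.7), 3.7 | 0);   // 3 3
console.log(Math.floor(-3.7), -3.7 | 0); // -4 -3 (truncation, not flooring)
console.log((2 ** 32 + 1.5) | 0);        // 1 (ToInt32 keeps only the low 32 bits)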
| is a bitwise OR operator. It returns a one in each bit position for which the corresponding bits of either or both operands are ones.
You can easily change your methods into the following:
1 + Math.random()*100|0
190 + Math.random()*66|0
It appears, according to the ECMAScript specification, that when a bitwise operator is applied to a number it is converted to an Int32 value. This would explain the behavior.
Link: http://www.ecma-international.org/ecma-262/5.1/#sec-11.10
"The production A : A # B, where # is one of the bitwise operators in the productions above, is evaluated as follows:
Let lref be the result of evaluating A.
Let lval be GetValue(lref).
Let rref be the result of evaluating B.
Let rval be GetValue(rref).
Let lnum be ToInt32(lval).
Let rnum be ToInt32(rval).
Return the result of applying the bitwise operator # to lnum and rnum. The result is a signed 32 bit integer. "
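A quick illustration of that ToInt32 step (sketch):
console.log((2 ** 31) | 0); // -2147483648: wraps into the signed 32-bit range
console.log((2 ** 32) | 0); // 0: only the low 32 bits survive the conversion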

Javascript operators [duplicate]

This question already has answers here:
Unfamiliar characters used in JavaScript encryption script
(3 answers)
Closed 8 years ago.
I am using a "LightenDarkenColor" function in my script and never really paid much attention to it until now, when I noticed some operations I had no idea about; I had actually never seen them before.
Those operators are >> and &. I also noticed that the function doesn't work in Firefox.
The function:
function LightenDarkenColor(color, percent) {
  var num = parseInt(color, 16),         // e.g. "3F6D2A" -> 0x3F6D2A
      amt = Math.round(2.55 * percent),  // map percent to a 0-255 offset
      R = (num >> 16) + amt,             // top 8 bits: first component
      B = (num >> 8 & 0x00FF) + amt,     // middle 8 bits: second component
      G = (num & 0x0000FF) + amt;        // bottom 8 bits: third component
  // clamp each component to 0..255 and reassemble; adding 0x1000000 forces a
  // 7th hex digit, so slice(1) leaves a zero-padded 6-digit color string
  return (0x1000000 + (R<255?R<1?0:R:255)*0x10000 + (B<255?B<1?0:B:255)*0x100 + (G<255?G<1?0:G:255)).toString(16).slice(1);
}
What exactly are the operators and how do they work?
Imagine num is 227733 (= some mild dark green) and take
B = (num >> 8 & 0x00FF)
num >> 8 will shift the number (move digits) to the right by 2 hex digits (4 bits per digit x 2=8) making it become:
227733 => 002277
then & 0x00FF will clear out all digits except the last two
002277 => 000077
and that is the component for green.
Hex 00FF is binary 0000000011111111, and & (binary AND) compares the bit pairs one by one, setting each result bit to zero unless both operand bits are 1. ANDing with zeroes gives zeroes, while ANDing with ones reproduces the other operand's bits: 1 & 1 => 1, 0 & 1 => 0. Ones stay ones, zeroes stay zeroes. So AnyNumber & 0000000011111111 = the right part (lower 2 hex digits) of AnyNumber.
It is just the standard way of getting a number subpart. In this case the green component. Shift right to clear the lower bits, and &0000...1111 to clear the upper bits.
After it has got all the color components, it adds amt to each of them (positive amt = lighter) and at the end it clamps the values:
R<255?R<1?0:R:255 means: if less than 1 use 0, if more than 255 use 255.
Finally it restores the color as a single number (instead of *0x100 it could have used <<8, the opposite of >>8, and instead of + it could have used |, binary OR, to merge the components).
Note that the function uses B as the second component, assuming it is blue, but in RGB the second component is actually green. The result is correct anyway (it would work whatever color components you used and however you named them).
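To see the extraction on its own (a sketch using the example value from above):
var num = 0x227733;
console.log(((num >> 16) & 0xFF).toString(16)); // "22" - top component
console.log(((num >> 8) & 0xFF).toString(16));  // "77" - middle component
console.log((num & 0xFF).toString(16));         // "33" - bottom component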
They're bitwise operators.
>> is a bitwise (Sign-propagating) right shift,
& is a bitwise "and".
I could go into detail on what these operators do, but the MDN has a good explanation and example for each operator. It would be counter-productive to copy those.

Can someone translate this simple function into Javascript?

I'm reading a tutorial on Perlin Noise, and I came across this function:
function IntNoise(32-bit integer: x)
x = (x<<13) ^ x;
return ( 1.0 - ( (x * (x * x * 15731 + 789221) + 1376312589) & 7fffffff) / 1073741824.0);
end IntNoise function
While I do understand some parts of it, I really don't get what (x<<13) and & 7fffffff are supposed to mean (I see that the latter is a hex number, but what does it do?). Can someone help me translate this into JS? Also, normal integers are 32-bit in JS on 32-bit computers, right?
It should work in JavaScript with minimal modifications:
function IntNoise(x) {
  x = (x << 13) ^ x;
  return (1 - ((x * (x * x * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824);
}
The << operator is a bitwise left-shift, so << 13 means shift the number 13 bits to the left.
The & operator is a bitwise AND. Doing & 0x7fffffff on a signed 32-bit integer masks out the sign bit, ensuring that the result is always a positive number (or zero).
The way that JavaScript deals with numbers is a bit quirky, to say the least. All numbers are usually represented as IEEE-754 doubles, but... once you start using bitwise operators on a number then JavaScript will treat the operands as signed 32-bit integers for the duration of that calculation.
Here's a good explanation of how JavaScript deals with bitwise operations:
Bitwise Operators
x << 13 means shift x 13 steps to the left (bitwise).
Furthermore, a << b is equivalent to a * 2^b.
& 7fffffff means bitwise AND of the left side with 7FFFFFFF.
If you take a look at the bit pattern of 7FFFFFFF, you will notice that the top bit (bit 31) is 0 and the remaining 31 bits are 1. This means the AND keeps bits 0-30 and clears bit 31, the sign bit.
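Both operators in action (console sketch):
console.log(5 << 13);         // 40960 = 5 * 2**13
console.log(-5 & 0x7fffffff); // 2147483643: same low 31 bits, sign bit cleared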
