JavaScript bitwise to C#

I have an issue converting some JavaScript code to C#; the problem is with a bitwise operator.
JavaScript function:
return (s - (s | 0x0)) * 0x100000000 | 0x0;
C# function:
return (long)((s - ((long)s)) * 0x100000000);
If s = 1.7320508075688772,
JavaScript reports -1150833019
C# reports 3144134277
Another example: JavaScript (1779033703 << 0x1e) = -1073741824
C# (1779033703 << 0x1e) = 1910222893216694272
What I need is to translate the JavaScript function into C# with the same numeric result.
Thanks for the help.

So, there are a few things going on here.
You have a type mismatch. In hex, 3144134277 is BB67AE85 and -1150833019 is FFFFFFFFBB67AE85, so the two results share the same low 32 bits: JavaScript is giving you the signed 32-bit reading, while your C# long is holding the value as an unsigned quantity widened to 64 bits.
Shifting by 0 doesn't do anything: bit-shifting divides by 2^n, and with n = 0 that is dividing by 1, which returns the same number.
(long)((ulong)(…)) is a double cast and is considered bad form: the number is cast to an unsigned long, then cast again to a long, which just wastes cycles.
Your cast is also a C-style cast; in C#, conversions are more often written with methods such as Convert.ToInt64().
So, in review, you have a bug in your JavaScript.
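You can check the hex claim directly in a JavaScript console; >>> 0 just reads the same 32 bits back as unsigned:
console.log((3144134277).toString(16)); // "bb67ae85"
console.log((-1150833019 >>> 0).toString(16)); // "bb67ae85" - same low 32 bits, different sign interpretation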

You can't expect the same behavior in C# by default, because:
In JavaScript, a number is stored as a 64-bit floating point number, but the bitwise operations are performed on 32-bit binary numbers. So to perform a bit operation, JavaScript converts the number to a 32-bit binary number, performs the operation, and converts the result back to a 64-bit number.
So in your case you are effectively casting a 64-bit number down to a 32-bit one and getting the "faulty" result from there, which in C# wouldn't be a good thing to have by default, in my opinion.
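To see that conversion with the value from the question, here is a minimal JavaScript sketch; to match it on the C# side you would need an explicit truncation to a signed 32-bit integer rather than a widening to long.
var s = 1.7320508075688772;
var full = (s - (s | 0)) * 0x100000000; // a little over 3144134277, still a 64-bit double
console.log(full | 0); // -1150833019: "| 0" truncates to the low 32 bits and reads them as signed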

Related

Why does BigInt demand explicit conversion from Number?

BigInt and Number conversions
When working with numbers in JavaScript, there are two primitive types to choose from - BigInt and Number. One could expect implicit conversion from the "smaller" type to the "bigger" type, which isn't the case in JavaScript.
Expected
When computing some combination of BigInt and Number, a user could expect an implicit cast from Number to BigInt, as in the example below:
const number = 16n + 32; // DOESN'T WORK
// Expected: Evaluates to 48n
Actual behavior
Expressions operating on both BigInt and Number throw an error:
const number = 16n + 32;
// Throws "TypeError: Cannot mix BigInt and other types, use explicit conversions"
Why is explicit conversion needed in the above cases?
Or, in other words, what is the reason behind this design?
This is documented in the original BigInt proposal: https://github.com/tc39/proposal-bigint/blob/master/README.md#design-goals-or-why-is-this-like-this
When a messy situation comes up, this proposal errs on the side of throwing an exception rather than rely on type coercion and risk giving an imprecise answer.
It's a design choice. In statically typed languages, coercion might lose information: going from float to int, the fractional part just gets truncated. JavaScript does type coercion, so you might expect 16n + 32 to simply use 32 as if it were a BigInt instead of a Number, and then there wouldn't be a problem.
This was purely a design choice, which is motivated in this part of the documentation.
They are not "smaller" and "bigger". One has real but potentially imprecise numbers, the other has integral but precise ones. What do you think should be the result of 16n + 32.5? (note that type-wise, there is no difference between 32 and 32.5). Automatically converting to BigInt will lose any fractional value; automatically converting to Number will risk loss of precision, and potential overflow. The requirement for explicit conversion forces the programmer to choose which behaviour they desire, without leaving it to chance, as a potential (very likely) source of bugs.
You probably missed an important point:
BigInt is about integers
Number is about real numbers
Implicit conversion from 32 to 32n might make sense, but implicit conversion from a floating point number, e.g. 1.555, to BigInt would be misleading.
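For illustration, a short sketch of the explicit conversions these answers describe (plain JavaScript, with values chosen only as examples):
const a = 16n + BigInt(32);   // 48n: lift the Number up to BigInt explicitly
const b = Number(16n) + 32.5; // 48.5: drop the BigInt down to Number explicitly
// BigInt(32.5) throws a RangeError, which is exactly the ambiguity
// the explicit-conversion rule makes the programmer resolve.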

I can't get proper uint32 number in javascript

I am trying to convert a long number to a uint in JavaScript, but the result I get is different from the one I already have in C#.
c#:
var tt=431430059159441001;
var t=(UInt32)tt;//1570754153
js:
var arr =new Uint32Array(1);
arr[0]=431430059159441001;//1570754176
So could anybody explain why there is a difference?
That's because your number literal is really a 64-bit integer, and that cannot be represented exactly in JavaScript's regular Number type. The Number type is a 64-bit floating point number, which can only represent integer values exactly up to around 2**53. So I would recommend just not using such a huge number literal.
A recent development in the JavaScript world is BigInts. If you can afford to use them, then your code is easy to fix:
var t = Number(BigInt.asUintN(32, 431430059159441001n));
console.log(t); // 1570754153
This is not about uints, but about floats. JavaScript uses floating point numbers, and your number exceeds the maximum range of integers that can safely be represented:
console.log(431430059159441001)
You cannot convert 431430059159441001 to an unsigned integer in C#. The max value of UInt32 is 4294967295, so the assignment var t = (UInt32)431430059159441001; gives a compiler error.
Also, 431430059159441001 is larger than the largest integer JavaScript can represent exactly (2**53 - 1), since JavaScript holds numbers in floating-point format.
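A small sketch showing where the JavaScript difference comes from: the literal is rounded to the nearest representable double before the Uint32Array ever sees it.
console.log(Number.isSafeInteger(431430059159441001)); // false: above 2**53 - 1
console.log(431430059159441001 === 431430059159441024); // true: the literal is already rounded
console.log(431430059159441024 % 4294967296); // 1570754176, the Uint32Array result above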

output INT64 from js UDF

I'm trying to use BigQuery's INT64 type to hold bit-encoded information. I have to use a JavaScript UDF and I'd like to use all 64 bits.
My issue is that JavaScript bitwise operators only deal with int32, so 1 << 32 == 1, and I'm not sure how to use the full 64-bit range that BigQuery supports in the UDF.
It’s not possible to directly pass BigQuery’s INT64 type to a JavaScript UDF, neither as input nor output, as JavaScript does not support a 64-bit integer type [1]. You could use FLOAT64 instead, as long as the values are less than 2^53 - 1, since it follows the IEEE 754-2008 standard for double precision [2]. You can also use a string containing the number value. Here is the documentation for supported external UDF data types [3].
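A minimal sketch of the string workaround for the UDF body, assuming BigInt is available in BigQuery's JavaScript runtime; packBits is a made-up helper and the bit layout is only an illustration:
// Pack two 32-bit halves into a 64-bit value with BigInt and return it
// as a decimal string, which BigQuery can then CAST to INT64.
function packBits(hi, lo) {
  return ((BigInt(hi) << 32n) | BigInt(lo >>> 0)).toString();
}
console.log(packBits(1, 2)); // "4294967298"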

Javascript, how does it work string to int use bitshift like '2' >> 0

I used a bitshift to convert a String to an int, but I want to know how it works.
"12345" >> 0 // int 12345
I found that '>>' only deals with 32-bit values and returns an int, but I'm confused about how it actually works.
According to the specification, in "12345" >> 0, this happens:
"12345" is put through the specification's abstract ToInt32 function:
ToInt32 runs it through ToNumber to coerce the string to a standard JavaScript number (an IEEE-754 double-precision binary floating-point value); it ends up being 12345
ToInt32 then converts it to a 32-bit integer value; it ends up being...12345
That 32-bit integer is shifted...not at all because you used >> 0, so we still have 12345
and will return a int
Sort of. If you stored the result in a variable, it would end up being a standard JavaScript number again (a double). The spec seems to allow for the possibility the int value would be used as-is if you combined this operation with others, though, which makes sense.
If your goal is just to force "12345" to be a number using the default coercion rules, >> 0 is a fairly obscure and somewhat inefficient way to do it (not that the latter is likely to matter). The unary + is rather more direct and applies the same rules: +"12345"
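A quick way to see those steps and the unary + alternative side by side:
console.log("12345" >> 0); // 12345: ToNumber, then ToInt32, then a shift by 0 positions
console.log("12345" | 0);  // 12345: the same coercion through a different no-op operator
console.log(+"12345");     // 12345: unary plus applies only the ToNumber step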
According to MDN,
The operands of all bitwise operators are converted to signed 32-bit integers in two's complement format.
JS simply converts your string to a Number and then shifts it by 0 positions (read: "does nothing").
This JS type conversion is pretty fun. You can also simply use unary + to convert your string to a number. This approach is a common practice, and looks much better than bitwise shifting.
var str = "12345";
console.log(+str);
console.log(+str + 1);

Dealing With Binary / Bitshifts in JavaScript

I am trying to perform some bitshift operations and dealing with binary numbers in JavaScript.
Here's what I'm trying to do. A user inputs a value and I do the following with it:
// Square Input and mod with 65536 to keep it below that value
var squaredInput = (inputVal * inputVal) % 65536;
// Figure out how many bits is the squared input number
var bits = Math.floor(Math.log(squaredInput) / Math.log(2)) + 1;
// Convert that number to a 16-bit number using bitshift.
var squaredShifted = squaredInput >>> (16 - bits);
As long as the number is larger than 46, it works. Once it is less than 46, it does not work.
I know the problem is in the bitshift. Now, coming from a C background, I know this would be done differently, since all numbers would be stored in 32-bit format (given it is an int). Does JavaScript do the same (since its vars are not typed)?
If so, is it possible to store a 16-bit number? If not, can I treat it as 32-bits and do the required calculations to assume it is 16-bits?
Note: I am trying to extract the middle 4-bits of the 16-bit value in squaredInput.
Another note: When printing out the var, it just prints the value without the padding, so I couldn't figure it out. I tried using parseInt and toString.
Thanks
Are you looking for this?
function get16bitnumber( inputVal ){
  return ("0000000000000000" + (inputVal * inputVal).toString(2)).substr(-16);
}
This function returns the last 16 bits of the (inputVal * inputVal) value. By having a binary string, you can work with any range of bits.
Don't use bitshifting in JS if you don't absolutely have to. The specs mention at least four number formats
IEEE 754
Int32
UInt32
UInt16
It's really confusing to know which is used when.
For example, ~ applies a bitwise inversion while converting to Int32. UInt16 seems to be used only in String.fromCharCode. Using the bitshift operators converts the operands to either UInt32 or Int32.
In your case, the right shift operator >>> forces conversion to UInt32.
When you type
a >>> b
this is what you get:
ToUInt32(a) >>> (ToUInt32(b) & 0x1f)
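For the original goal of pulling out the middle 4 bits of the 16-bit value, a mask-and-shift sketch (assuming "middle" means bits 6 through 9; middleFourBits is just an illustrative name):
function middleFourBits(n) {
  var low16 = n & 0xFFFF;      // keep only the low 16 bits
  return (low16 >>> 6) & 0x0F; // move bits 6..9 down and mask off the rest
}
console.log(middleFourBits(45 * 45)); // 2025 is 0000011111101001 in 16 bits, so this prints 15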
