I can't get a proper uint32 number in JavaScript

I am trying to convert a long number to a uint in JavaScript, but the result I get is different from the one I already have in C#.
C#:
var tt = 431430059159441001;
var t = (UInt32)tt; // 1570754153
JS:
var arr = new Uint32Array(1);
arr[0] = 431430059159441001; // 1570754176
So could anybody explain why there is a difference?

That's because your number literal is a 64-bit integer, and that cannot be represented exactly in JavaScript's regular Number type. The Number type is a 64-bit precision floating point number, which can only represent integer values exactly up to around 2**53. So I would recommend just not using such a huge number literal.
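For illustration, you can verify in the console that the literal is rounded before any conversion happens (4294967296 is 2**32, used here to read off the low 32 bits):
console.log(431430059159441001 === 431430059159441024); // true — the literal is rounded to the nearest double
console.log(431430059159441024 % 4294967296);           // 1570754176 — the low 32 bits the Uint32Array sees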
A recent development in the JavaScript world is BigInts. If you can afford to use them, then your code is easy to fix:
var t = Number(BigInt.asUintN(32, 431430059159441001n));
console.log(t); // 1570754153

This is not about uints, but about floats. JavaScript uses floating point numbers, and your number exceeds the maximum range of integers that can safely be represented:
console.log(431430059159441001)

You cannot convert the literal 431430059159441001 to an unsigned integer in C#. The max value of UInt32 is 4294967295, so var t = (UInt32)431430059159441001; gives a compiler error (casting a variable, as in the question, silently truncates instead).
Also, 431430059159441001 is larger than the maximum integer JavaScript can represent exactly (JavaScript holds numbers in a 64-bit float format).

Related

How to handle numbers of more than 20 digits (big integers)?

In my Angular program, I need to pass a number of more than 20 digits in an API request.
num: any;
this.num = 2019111122001424290521878689;
console.log(this.num); // It displays "2.0191111220014244e+27"
I tried to convert the number to a string as below:
console.log(this.num.toString()); // It displays "2.0191111220014244e+27"
My expectation is that I need to pass the original big integer in the API request. If I pass it as is, it goes as "2.0191111220014244e+27".
BTW, I tried BigInt(this.num), which gives a different number.
Any suggestions?
In JavaScript, big integer literals have the letter n as a suffix:
var bigNum = 2019111122001424290521878689n;
console.log(bigNum);
For more information, see
MDN JavaScript Reference - BigInt
If you got a large number (> Number.MAX_SAFE_INTEGER) from an API, in JSON format, and you want to get the exact value as a string, you unfortunately can't use JSON.parse(), as it will use the Number type and lose precision.
There are alternative JSON parsers out there like LosslessJSON that might solve your problem.
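As a rough illustration of the workaround idea (a naive sketch, not what LosslessJSON actually does — this regex would also mangle long digit runs inside string values), you can quote big number values before parsing:
var raw = '{"num": 2019111122001424290521878689}';
var quoted = raw.replace(/:\s*(\d{17,})/g, ': "$1"');
var obj = JSON.parse(quoted);
console.log(obj.num); // "2019111122001424290521878689" — exact, as a string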
You can use BigInt.
BigInt is a built-in object that provides a way to represent whole numbers larger than 2^53 - 1, which is the largest number JavaScript can reliably represent with the Number primitive. BigInt can be used for arbitrarily large integers.
const theBiggestInt = 9007199254740991n;
const alsoHuge = BigInt(9007199254740991);
// ↪ 9007199254740991n
const hugeString = BigInt("9007199254740991");
// ↪ 9007199254740991n
const hugeHex = BigInt("0x1fffffffffffff");
// ↪ 9007199254740991n
const hugeBin = BigInt("0b11111111111111111111111111111111111111111111111111111");
// ↪ 9007199254740991n
BigInt is similar to Number in some ways, but also differs in a few key matters — it cannot be used with methods in the built-in Math object and cannot be mixed with instances of Number in operations; they must be coerced to the same type. Be careful coercing values back and forth, however, as the precision of a BigInt may be lost when it is coerced to a Number.
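A quick illustration of that caveat:
// console.log(1n + 2);      // TypeError: cannot mix BigInt and other types
console.log(1n + BigInt(2)); // 3n
console.log(Number(1n) + 2); // 3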
Refer to
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt
The problem is that the number you have there is not a safe integer. JavaScript can only store integers exactly up to the value given by Number.MAX_SAFE_INTEGER, which is 9007199254740991.
The number you have is actually stored as a floating point number, and converting it between floating point and integer representations will lose some precision.
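You can check this directly:
console.log(Number.isSafeInteger(9007199254740991)); // true
console.log(Number.isSafeInteger(2019111122001424290521878689)); // false
console.log(2019111122001424290521878689 === 2019111122001424290521878690); // true — both literals round to the same double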

Using a float in Javascript in a hash function

I have a hash function like this.
class Hash {
  static rotate(x, b) {
    return (x << b) ^ (x >> (32 - b));
  }
  static pcg(a) {
    let b = a;
    for (let i = 0; i < 3; i++) {
      a = Hash.rotate((a ^ 0xcafebabe) + (b ^ 0xfaceb00c), 23);
      b = Hash.rotate((a ^ 0xdeadbeef) + (b ^ 0x8badf00d), 5);
    }
    return a ^ b;
  }
}
// source Adam Smith: https://groups.google.com/forum/#!msg/proceduralcontent/AuvxuA1xqmE/T8t88r2rfUcJ
I use it like this.
console.log(Hash.pcg(116)); // Output: -191955715
As long as I send an integer in, I get an integer out. Now here comes the problem: if I pass a floating point number, rounding happens, so Hash.pcg(1.1) and Hash.pcg(1.2) yield the same result. I want different inputs to yield different results. A possible solution could be to multiply the input so the decimal part is not rounded away, but is there a more elegant and flexible solution?
Is there a way to convert a floating point number to a unique integer? Each floating point number would result in a different integer number.
Performance is important.
This isn't quite an answer, but I was running out of room to make it a comment. :)
You'll hit a problem with integers outside of the 32-bit range as well as with non-integer values.
JavaScript handles all numbers as 64-bit floating point. This gives you exact integers over the range -9007199254740991 to 9007199254740991 (±(2^53 - 1)), but the bit-wise operators used in your hash algorithm (^, <<, >>) only work in a 32-bit range.
Since there are far more non-integer numbers possible than integers, no one-to-one mapping is possible with ordinary numbers. You could work something out with BigInts, but that will likely lead to comparatively much slower performance.
If you're willing to deal with the performance hit, you can use JavaScript buffer functions to get at the actual bits of a floating point number. (I'd say more now about how to do that, but I've got to run!)
Edit... back from dinner...
You can convert JavaScript's standard number type, which is 64-bit floating point, to a BigInt like this:
let dv = new DataView(new ArrayBuffer(8));
dv.setFloat64(0, Math.PI);
console.log(dv.getFloat64(0), dv.getBigInt64(0), dv.getBigInt64(0).toString(16).toUpperCase())
The output from this is:
3.141592653589793 4614256656552045848n "400921FB54442D18"
The first item shows that the number was properly stored as a byte array, the second shows the BigInt created from the same bits, and the last is the same BigInt again, but in hex, to better show the floating point data format.
Once you've converted a number like this to a BigInt (which is not the same numeric value, but it is the same string of bits) every possible value of number will be uniquely represented.
The same bit-wise operators you used in your algorithm above will work with BigInts, but without the 32-bit limitation. I'm guessing that for best results you'd want to change the 32 in your code to 64, and use 16-digit (instead of 8-digit) hex constants as hash keys.
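A sketch of what that might look like, widening the same xor-shift rotate to 64 bits (the 64-bit constants here are illustrative assumptions, not tuned hash parameters):
const MASK64 = (1n << 64n) - 1n;

// Same xor-shift "rotate" as in the question, widened to 64 bits.
function rotate64(x, b) {
  return ((x << b) ^ (x >> (64n - b))) & MASK64;
}

function hashDouble(n) {
  const dv = new DataView(new ArrayBuffer(8));
  dv.setFloat64(0, n);
  let a = dv.getBigUint64(0); // the raw 64 bits of the double
  let b = a;
  for (let i = 0; i < 3; i++) {
    a = rotate64(((a ^ 0xcafebabecafebaben) + (b ^ 0xfaceb00cfaceb00cn)) & MASK64, 23n);
    b = rotate64(((a ^ 0xdeadbeefdeadbeefn) + (b ^ 0x8badf00d8badf00dn)) & MASK64, 5n);
  }
  return a ^ b;
}

console.log(hashDouble(1.1), hashDouble(1.2)); // 1.1 and 1.2 now have distinct bit patterns to hash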

Conversion issue for a long string of integers in JavaScript

I'm trying to convert a long string which has only integers to numbers.
var strOne = '123456789123456789122';
parseInt(strOne, 10);
// => 123456789123456800000
var strTwo = '1234567891234567891232';
parseInt(strTwo, 10);
// => 1.234567891234568e+21
The expected output should be the same as strOne and strTwo but that isn't happening here. While converting the string to a number, the output gets changed.
What's the best way to fix this issue?
BigInt is now available in browsers.
BigInt is a built-in object that provides a way to represent whole
numbers larger than 2^53 - 1, which is the largest number JavaScript can
reliably represent with the Number primitive.
value: The numeric value of the object being created. May be a string or an integer.
var strOne = '123456789123456789122';
var intOne = BigInt(strOne);
var strTwo = '1234567891234567891232';
var intTwo = BigInt(strTwo);
console.log(intOne, intTwo);
Your number is unfortunately too large and loses precision when the conversion is done.
The largest integer you can express in JavaScript is 2^53-1, it is given by Number.MAX_SAFE_INTEGER, see the MDN doc here.
The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER);
If you want to work with numbers bigger than this limit, you'll have to use a different representation than Number such as String and use a library to handle operations (see the BigInteger library for example).
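In modern engines, the built-in BigInt shown in the other answer covers this without a library. For example, exact arithmetic on both strings from the question:
const intOne = BigInt('123456789123456789122');
const intTwo = BigInt('1234567891234567891232');
console.log((intOne + intTwo).toString()); // "1358024680358024680354" — exact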

Memory Size: What's Smaller, a String or an Array?

var string = '';
var array = [];
for (var i = 0; i < 10000; i++) {
  string += '0';
  array.push(0);
}
Which one would be smaller? When/where is the breakpoint between the two?
Note: The numbers are always 1 digit.
Creating the array is about 50% faster than creating the string.
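A rough way to measure that yourself (console.time is coarse, and results vary by engine and run):
console.time('string');
var s = '';
for (var i = 0; i < 1000000; i++) s += '0';
console.timeEnd('string');

console.time('array');
var a = [];
for (var j = 0; j < 1000000; j++) a.push(0);
console.timeEnd('array');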
Based on the answer here, you can roughly calculate the size of different data-types in JavaScript.
The equations used, pertaining directly to your question, to calculate the size in bytes:
string = string.length * 2
number = 8
Based on this, the size of your array variable would depend on the content-type being placed in it. As you're inserting numeric values, each offset would be 8 bytes, so:
array[number] = array.length * 8
With these equations, the sizes are:
string = 20000
array = 80000
If you were to use array.push('0') instead (i.e. use strings), the sizes of string and array should be roughly equal.
References:
The String Type - ECMAScript Language Specification:
The String type is the set of all finite ordered sequences of zero or more 16-bit unsigned integer values.
The Number Type - ECMAScript Language Specification:
The Number type has exactly 18437736874454810627 (that is, 2^64 − 2^53 + 3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic.
To store small numbers in an array, the best way is to use an Int8Array.
(https://developer.mozilla.org/en-US/docs/Web/API/Int8Array).
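For example, one byte per element instead of eight:
var digits = new Int8Array(10000); // elements are initialized to 0
console.log(digits.length * digits.BYTES_PER_ELEMENT); // 10000 bytes of element data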
The array will generally be faster to build.
With the string, each time you append, the runtime conceptually has to allocate space for the new string and throw away the last version (engines optimize repeated concatenation, but the cost model is still worse).
With the array, appending just extends a growable backing store, which is amortized constant time.
On the other hand, the string will probably consume less memory, since its data is a single contiguous block of 16-bit units, whereas the array stores full numeric elements plus its own bookkeeping.

Dealing With Binary / Bitshifts in JavaScript

I am trying to perform some bitshift operations and dealing with binary numbers in JavaScript.
Here's what I'm trying to do. A user inputs a value and I do the following with it:
// Square Input and mod with 65536 to keep it below that value
var squaredInput = (inputVal * inputVal) % 65536;
// Figure out how many bits is the squared input number
var bits = Math.floor(Math.log(squaredInput) / Math.log(2)) + 1;
// Convert that number to a 16-bit number using bitshift.
var squaredShifted = squaredInput >>> (16 - bits);
As long as the number is larger than 46, it works. Once it is less than 46, it does not work.
I know the problem is in the bitshift. Coming from a C background, I know this would be done differently there, since all numbers would be stored in 32-bit format (given an int). Does JavaScript do the same (since its vars are not typed)?
If so, is it possible to store a 16-bit number? If not, can I treat it as 32-bits and do the required calculations to assume it is 16-bits?
Note: I am trying to extract the middle 4-bits of the 16-bit value in squaredInput.
Another note: when printing out the var, it just prints the value without padding, so I couldn't check the bits that way. I tried using parseInt and toString.
Thanks
Are you looking for this?
function get16bitnumber(inputVal) {
  return ("0000000000000000" + (inputVal * inputVal).toString(2)).substr(-16);
}
This function returns the last 16 bits of the (inputVal * inputVal) value. By having a binary string you can work with any range of bits.
Don't use bitshifting in JS if you don't absolutely have to. The specs mention at least four number formats
IEEE 754
Int32
UInt32
UInt16
It's really confusing to know which is used when.
For example, ~ applies a bitwise inversion while converting to Int32. UInt16 seems to be used only in String.fromCharCode. Using bitshift operators converts the operands to either UInt32 or to Int32.
In your case, the right shift operator >>> forces conversion to UInt32.
When you type
a >>> b
this is what you get:
ToUInt32(a) >>> (ToUInt32(b) & 0x1f)
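To address the note about extracting the middle 4 bits: a minimal sketch, assuming "middle" means bits 6–9 of the 16-bit value (an assumption — adjust the shift for a different window):
function middle4(inputVal) {
  var sq = (inputVal * inputVal) % 65536; // keep it in 16-bit range
  return (sq >>> 6) & 0xF;                // drop the low 6 bits, mask off 4
}
console.log(middle4(116).toString(2).padStart(4, '0')); // "0010"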
