How to convert a decimal (base 10) to 32-bit unsigned integer? - javascript

How to convert a decimal (base 10) to 32-bit unsigned integer?
Example:
If n = 9 (base 10), how to convert it to something like: 00000000000000000000000000001001 (base 2)?

Let's be clear that when you're talking about number bases, you're talking about textual representations (which code will see as strings). Numbers don't have number bases, they're just numbers (but more on this below). A number base is a way of representing a number with a series of digits (text). So n = 9 isn't base 10 or base 36 or base 2, it's just a number. (The number literal is in base 10, but the resulting number has no concept of that.)
You have a couple of options:
Built in methods
The number type's toString accepts a radix (base) to use; valid values are 2 through 36. And the string padStart method lets you pad the start of a string to a desired length, specifying the padding character. So:
const n = 9;
const binaryText = n.toString(2).padStart(32, "0");
console.log(binaryText);
If your starting point is text (e.g., "9" rather than 9), you'd parse that first. My answer here has a full rundown of your number parsing options, but for instance:
const decimalText = "9";
const binaryText = (+decimalText).toString(2).padStart(32, "0");
console.log(binaryText);
Bit manipulation
"Numbers don't have number bases, they're just numbers" is true in the abstract, but at the bits-in-memory level, the bits are naturally assigned meaning. In fact, JavaScript's number type is an implementation of IEEE-754 double-precision binary floating point.
That wouldn't help us except that JavaScript's bitwise & and | operators are defined in terms of 32-bit binary integers (even though numbers aren't 32-bit binary integers, they're 64-bit floating point). That means we could also implement the above by testing bits using &:
const n = 9;
let binaryText = "";
// Test each of the 32 bit positions, least significant first,
// prepending so the most significant bit ends up on the left
for (let i = 0; i < 32; i++) {
    binaryText = ((n >>> i) & 1 ? "1" : "0") + binaryText;
}
console.log(binaryText);

Hoping you have already practiced JavaScript on LeetCode using the andygala playlists
function flippingBits(n) {
    // Represent n as a 32-character binary string
    n = n.toString(2).padStart(32, "0");
    // Flip every bit
    n = n.split('');
    for (let i = 0; i < 32; i++) {
        n[i] = (n[i] === '1') ? '0' : '1';
    }
    n = n.join('');
    // Parse the flipped bit string back into a number
    n = parseInt(n, 2);
    return n;
}
console.log(flippingBits(9)); // 4294967286
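A shorter route to the same result (not part of the original snippet, just a sketch) is to flip all bits with the bitwise NOT operator and reinterpret the result as an unsigned 32-bit value with >>> 0:
function flippingBits(n) {
    // ~ flips all 32 bits; >>> 0 reinterprets the signed result as unsigned
    return ~n >>> 0;
}
console.log(flippingBits(9)); // 4294967286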

Related

how can i make a CRC in typescript or javascript

Now I have a need: the user sends a hexadecimal array like [01 06 00 01 00 10],
and the divisor is '11000000000000101'.
Who knows how to do this, or can give me an example in JS or TS?
More information is needed to answer your question. A CRC is defined not just by the polynomial (for which you have provided a common 16-bit CRC polynomial, 0x8005 or x^16 + x^15 + x^2 + 1) but also the order in which the bits from the bytes are fed to the CRC, the initial value of the CRC register, and the order of the bits from the CRC to make the result. You may also need to know the order of the two bytes of the 16-bit CRC as they are placed in the message (little or big endian).
From a list of known CRCs, I see seven different 16-bit CRC definitions that use that polynomial, with various choices of bit orderings, initial values and final CRCs.
You would need to find and provide that information, or at least provide several examples of messages and their CRCs in order to try to derive its definition.
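For illustration, here is a minimal sketch of one common variant that uses that polynomial, CRC-16/ARC (input and output bits reflected, initial value 0x0000, no final XOR). Whether it matches your device depends on the details above; Modbus RTU, for instance, uses the same loop but starts from 0xFFFF:
function crc16Arc(bytes) {
    let crc = 0x0000;                       // initial value (0xFFFF for Modbus RTU)
    for (const byte of bytes) {
        crc ^= byte;
        for (let i = 0; i < 8; i++) {
            // 0xA001 is the bit-reversed form of the 0x8005 polynomial
            crc = (crc & 1) ? (crc >>> 1) ^ 0xA001 : crc >>> 1;
        }
    }
    return crc;                             // 16-bit result; byte order in the message is a separate choice
}
console.log(crc16Arc([0x01, 0x06, 0x00, 0x01, 0x00, 0x10]).toString(16));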
Here's an example of a function in JavaScript that takes in a hexadecimal array and a divisor and returns the remainder when the hexadecimal number represented by the array is divided by the divisor:
function divideHexArray(hexArray, divisor) {
    let hexNumber = "";
    // convert hexadecimal array to hexadecimal string
    for (let i = 0; i < hexArray.length; i++) {
        hexNumber += hexArray[i].toString(16).padStart(2, "0");
    }
    // convert hexadecimal string to decimal number
    let decimalNumber = parseInt(hexNumber, 16);
    // divide decimal number by divisor and return the remainder
    return decimalNumber % divisor;
}
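If the divisor is given as a binary string, as in the question, you would parse it first. Note, though, that a plain integer remainder like this is not the same thing as a CRC, which uses carry-less (GF(2)) polynomial division. A hypothetical usage sketch:
// parse the binary divisor string into a number, then take the remainder
const divisor = parseInt('11000000000000101', 2); // 98309
console.log(divideHexArray([0x01, 0x06, 0x00, 0x01, 0x00, 0x10], divisor));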

JavaScript BigInt print unsigned binary representation

How do you print an unsigned integer when using JavaScript's BigInt?
BigInts can be printed as binary representation using toString(2). However for negative values this function just appends a - sign when printing.
BigInt(42).toString(2)
// output => 101010
BigInt(-42).toString(2)
// output => -101010
How do I print the unsigned representation of BigInt(-42)? I know that with regular numbers you can do (-42 >>> 0).toString(2), however the unsigned right shift seems not to be implemented for BigInt, resulting in an error:
(BigInt(-42) >>> BigInt(0)).toString(2)
// TypeError: BigInts have no unsigned right shift, use >> instead
An easy way to get the two's complement representation for negative BigInts is to use BigInt.asUintN(bit_width, bigint):
> BigInt.asUintN(64, -42n).toString(2)
'1111111111111111111111111111111111111111111111111111111111010110'
Note that:
You have to define the number of bits you want (64 in my example), there is no "natural"/automatic value for that.
Given only that string of binary digits, there is no way to tell whether this is meant to be a positive BigInt (with a value close to 2n**64n) or a two's complement representation of -42n. So if you want to reverse the conversion later, you'll have to provide this information somehow (e.g. by writing your code such that it implicitly assumes one or the other option; see the sketch after these notes).
Relatedly, this is not how -42n is stored internally in current browsers. (But that doesn't need to worry you, since you can create this output whenever you want/need to.)
You could achieve the same result with a subtraction: ((2n ** 64n) - 42n).toString(2) -- again, you can specify how many bits you'd like to see.
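For instance, to reverse the conversion later and reinterpret such a 64-character string as a signed value, you could use BigInt.asIntN (a sketch, assuming you know the string is a 64-bit two's complement representation):
const bits = BigInt.asUintN(64, -42n).toString(2);   // 64-character bit string
const back = BigInt.asIntN(64, BigInt('0b' + bits)); // reinterpret as signed 64-bit
console.log(back); // -42n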
Is there something like bitAtIndex for BigInt?
No, because there is no specification for how BigInts are represented. Engines can choose to use bits in any way they want, as long as the resulting BigInts behave as the specification demands.
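If you need to test individual bits yourself, the >> and & operators already behave as if BigInts were stored as two's complement, so a small helper (a sketch, not a built-in) is enough:
// Test bit i of a BigInt; negative values behave as infinite two's complement
const bitAt = (x, i) => (x >> BigInt(i)) & 1n;
console.log(bitAt(42n, 1));  // 1n (42n is 0b101010)
console.log(bitAt(-42n, 0)); // 0n (-42n is even)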
#Kyroath:
negative BigInts are represented as infinite-length two's complement
No, they are not: the implementations in current browsers represent BigInts as "sign + magnitude", not as two's complement. However, this is an unobservable implementation detail: implementations could change how they store BigInts internally, and BigInts would behave just the same.
What you probably meant to say is that the two's complement representation of any negative integer (big or not) is conceptually an infinite stream of 1-bits, so printing or storing that in finite space always requires defining a number of characters/bits after which the stream is simply cut off. When you have a fixed-width type, that obviously defines this cutoff point; for conceptually-unlimited BigInts, you have to define it yourself.
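You can see that cutoff choice directly by asking for the same negative value at different widths:
console.log(BigInt.asUintN(8, -42n).toString(2));  // "11010110"
console.log(BigInt.asUintN(16, -42n).toString(2)); // "1111111111010110"
console.log(BigInt.asUintN(64, -42n).toString(2)); // 64 bits, ending in ...11010110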
Here's a way to convert 64-bit BigInts into binary strings:
// take two's complement of a binary string
// take two's complement of a binary string
const twosComplement = (binaryString) => {
    let complement = BigInt('0b' + binaryString.split('').map(e => e === "0" ? "1" : "0").join(''));
    return decToBinary(complement + BigInt(1));
}
const decToBinary = (num) => {
    let result = "";
    const isNegative = num < 0;
    if (isNegative) num = -num;
    while (num > 0) {
        result = (num % BigInt(2)) + result;
        num /= BigInt(2);
    }
    if (result.length > 64) result = result.substring(result.length - 64);
    result = result.padStart(64, "0");
    if (isNegative) result = twosComplement(result);
    return result;
}
console.log(decToBinary(BigInt(5))); // 0000000000000000000000000000000000000000000000000000000000000101
console.log(decToBinary(BigInt(-5))); // 1111111111111111111111111111111111111111111111111111111111111011
This code doesn't do any validation, however.

Why console.log shows only part of the number resulting from 0.1+0.2=0.30000000000000004

This question wasn't asked on Stack Overflow yet! I'm not asking why 0.1+0.2 doesn't equal 0.3, I'm asking a very different thing! Please read the question before marking it as a duplicate.
I've written this function that shows how JavaScript stores float numbers in 64 bits:
function to64bitFloat(number) {
    var f = new Float64Array(1);
    f[0] = number;
    var view = new Uint8Array(f.buffer);
    var i, result = "";
    for (i = view.length - 1; i >= 0; i--) {
        var bits = view[i].toString(2);
        if (bits.length < 8) {
            bits = new Array(8 - bits.length).fill('0').join("") + bits;
        }
        result += bits;
    }
    return result;
}
Now I want to check if the result of 0.1+0.2 is actually stored as it's shown in the console 0.30000000000000004. So I do the following:
var r = 0.1+0.2;
to64bitFloat(r);
The resulting number is:
0 01111111101 0011001100110011001100110011001100110011001100110100
Now, let's decode this binary representation:
Calculated exponent:
01111111101 = 1021
1021 - 1023 = -2
Putting it all together,
1.0011001100110011001100110011001100110011001100110100 x 2 ** -2 =
0.010011001100110011001100110011001100110011001100110100
Now, if we convert the resulting number into decimal using this converter, we get:
0.3000000000000000444089209850062616169452667236328125
Why doesn't the console show the whole number, instead of just its most significant digits?
The console.log method is non-standard. In Firefox, you can specify the number of decimals with a format specifier:
console.log('%.60f', 0.1 + 0.2)
gives
0.300000000000000044408920985006261616945266723632812500000000
Which is the same number as the one given by your converter.
Note that this doesn't work in Chrome.
In conclusion:
JavaScript numbers are stored in the IEEE 754-2008 double-precision 64-bit binary format.
The string representation of a number is defined in the ECMAScript standard.
The console.log method is browser-dependent, and the Firefox implementation allows you to specify an arbitrary number of decimal places when displaying numbers.
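A cross-browser way to see more decimals (a sketch, assuming an engine where toFixed accepts up to 100 fraction digits, as the current specification allows) is to ask for them explicitly:
// Prints far more decimals than the default, shortest round-trip representation
console.log((0.1 + 0.2).toFixed(55));
// 0.3000000000000000444089209850062616169452667236328125000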
Actually you don't have to write such a long question. What you could do is just open the console and type:
var a = 0.3000000000000000444089209850062616169452667236328125;
console.log(a);
that would still give you the result 0.30000000000000004 (at least in the Google Chrome console).
And the reason it is like that is a limitation of how JS displays floats: by default it only shows as many significant digits as are needed to uniquely identify the value (at most 17). You can read more in the answer to this question: https://stackoverflow.com/a/19613321/3014041
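Both literals denote exactly the same double, which is why the shorter string is all the console needs to print:
// The short literal and the full decimal expansion round to the same 64-bit value
console.log(0.30000000000000004 === 0.3000000000000000444089209850062616169452667236328125); // true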

Converting a number into a string in JavaScript without losing trailing zeros from the number

I tried to convert a number to a string in JavaScript using toString(), but it truncates insignificant zeros from numbers. For example:
var n1 = 250.00
var n2 = 599.0
var n3 = 056.0
n1.toString() // yields 250
n2.toString() // yields 599
n3.toString() // yields 56
but I don't want to truncate these insignificant zeros ("250.00"). Could you please provide any suggestions? Thank you for the help.
The number doesn't know how many trailing 0s there are because they are not stored. In math, 250, 250.00 or 250.0000000000000 are all the same number and are all represented the same way in memory.
So in short, there is no way to do what you want. What you can do is format all numbers in a specific way. See Formatting a number with exactly two decimals in JavaScript.
As far as I know, you can't store a number with trailing zeros, but you can create a string with trailing zeros by using toFixed:
var n1 = 250;
var floatedN1 = n1.toFixed(2); //type 'string' value '250.00'
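If you also want locale-aware formatting, toLocaleString (or Intl.NumberFormat) can pad the decimals as well; a sketch:
var n1 = 250;
// keeps exactly two decimal places, with locale-specific separators
console.log(n1.toLocaleString("en-US", { minimumFractionDigits: 2, maximumFractionDigits: 2 })); // "250.00"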

Is this a valid way to truncate a number?

I found this code in a SO answer as a way to truncate a number into an integer in JavaScript:
var num = -20.536;
var result = num | 0;
//result = -20
Is this a valid way to truncate a number in JavaScript, or is it some kind of hack? Why does it work only with numbers less than 2147483647?
That method works by implicitly converting the number to a 32-bit integer, as bitwise operators use 32-bit integers in their calculations.
The drawbacks of that method are:
The desired operation is hidden as an implicit effect of the operator, so it's not easy to see what the intention of the code is.
It can only handle integers within the range of a 32-bit number.
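For example, outside the 32-bit range the result silently wraps around:
console.log(20.536 | 0);        // 20 (within range, works)
console.log(2147483648.9 | 0);  // -2147483648 (wrapped, not 2147483648)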
For any regular case you should use the Math.floor or Math.ceil methods instead; they clearly show what the intention of the code is, and they handle any number within the precision range of a double, i.e. integers up to 2^53:
var num = 20.536;
var result = Math.floor(num); // 20
var num = -20.536;
var result = Math.ceil(num); // -20
There was no round-towards-zero method in older versions of JavaScript, so to do that you would need to check the sign before rounding:
var result = num < 0 ? Math.ceil(num) : Math.floor(num);
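In environments with ES2015 support, Math.trunc does the rounding towards zero directly:
var num = -20.536;
var result = Math.trunc(num); // -20, the fractional part is simply discarded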
Use JavaScript's parseInt like so:
var num = -20.536;
var num2int = parseInt(num);
console.log(num2int); // -20
Tada! num2int is now an integer with the value -20.
If you use parseInt you can go from -2^53 to +2^53:
parseInt(-20.536) // -20
parseInt(9007199254740992.1234) // 9007199254740992
Why +/- 2^53? This is because JavaScript uses a 64-bit representation for floating point numbers, with a 52-bit mantissa. Hence all integer values up to 2^53 can be represented exactly. Beyond this, whole numbers are approximated.
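You can see that cutoff directly; Number.MAX_SAFE_INTEGER is 2^53 - 1:
console.log(Number.MAX_SAFE_INTEGER);       // 9007199254740991 (2^53 - 1)
console.log(9007199254740992 + 1);          // 9007199254740992 (2^53 + 1 is not representable)
console.log(parseInt("9007199254740993"));  // 9007199254740992 (approximated)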
