How to parse BigInt in specific base? [duplicate] - javascript

Suppose I want to convert a base-36 encoded string to a BigInt; I can do this:
BigInt(parseInt(x,36))
But what if my string exceeds what can safely fit in a Number? e.g.
parseInt('zzzzzzzzzzzzz',36)
Then I start losing precision.
Are there any methods for parsing directly into a BigInt?

You could convert the number to a bigint type.
function convert(value, radix) {
  return [...value.toString()]
    .reduce((r, v) => r * BigInt(radix) + BigInt(parseInt(v, radix)), 0n);
}
console.log(convert('zzzzzzzzzzzzz', 36).toString());
The conversion is faster with greater chunks, for example ten characters at a time (eleven returns a false result, because an eleven-character base-36 chunk can exceed Number.MAX_SAFE_INTEGER, so parseInt loses precision).
function convert(value, radix) { // value: string
  var size = 10,
      factor = BigInt(radix ** size),
      i = value.length % size || size,
      parts = [value.slice(0, i)];

  while (i < value.length) parts.push(value.slice(i, i += size));
  return parts.reduce((r, v) => r * factor + BigInt(parseInt(v, radix)), 0n);
}
console.log(convert('zzzzzzzzzzzzz', 36).toString());
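To pick the chunk size for an arbitrary radix, one could compute the largest length whose chunks still fit in a safe integer; a minimal sketch (the helper name maxChunkSize is not from the original answer):
// Largest chunk length for which every chunk value (at most radix ** length - 1)
// still fits in Number.MAX_SAFE_INTEGER, so parseInt never loses precision.
function maxChunkSize(radix) {
  let size = 1;
  while (radix ** (size + 1) - 1 <= Number.MAX_SAFE_INTEGER) size++;
  return size;
}

console.log(maxChunkSize(36)); // 10
console.log(maxChunkSize(16)); // 13
console.log(maxChunkSize(2));  // 53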

Not sure if there's a built-in one, but base-X to BigInt is pretty easy to implement:
function parseBigInt(
  numberString,
  keyspace = "0123456789abcdefghijklmnopqrstuvwxyz",
) {
  let result = 0n;
  const keyspaceLength = BigInt(keyspace.length);
  for (let i = 0; i < numberString.length; i++) {
    const value = keyspace.indexOf(numberString[i]);
    if (value === -1) throw new Error("invalid string");
    result = result * keyspaceLength + BigInt(value);
  }
  return result;
}
console.log(parseInt("zzzzzzz", 36));
console.log(parseBigInt("zzzzzzz"));
console.log(parseBigInt("zzzzzzzzzzzzzzzzzzzzzzzzzz"));
outputs
78364164095
78364164095n
29098125988731506183153025616435306561535n
The default keyspace there is equivalent to what parseInt with base 36 uses, but should you need something else, the option's there. :)
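For example, with a hypothetical custom keyspace (a Crockford-style base-32 alphabet, chosen here only for illustration):
// Hypothetical custom keyspace: digits plus lowercase letters minus i, l, o, u (32 chars).
const base32Crockford = "0123456789abcdefghjkmnpqrstvwxyz";
console.log(parseBigInt("zz", base32Crockford)); // 1023n (31 * 32 + 31)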

Related

Converting Uint8Array to BigInt in Javascript

I have found 3 methods to convert Uint8Array to BigInt and all of them give different results for some reason. Could you please tell me which one is correct and which one should I use?
Using the bigint-conversion library. We can use the bigintConversion.bufToBigint() function to get a BigInt. The implementation is as follows:
export function bufToBigint (buf: ArrayBuffer|TypedArray|Buffer): bigint {
  let bits = 8n
  if (ArrayBuffer.isView(buf)) bits = BigInt(buf.BYTES_PER_ELEMENT * 8)
  else buf = new Uint8Array(buf)

  let ret = 0n
  for (const i of (buf as TypedArray|Buffer).values()) {
    const bi = BigInt(i)
    ret = (ret << bits) + bi
  }
  return ret
}
Using DataView:
let view = new DataView(arr.buffer, 0);
let result = view.getBigUint64(0, true);
Using a FOR loop:
let result = BigInt(0);
for (let i = arr.length - 1; i >= 0; i++) {
  result = result * BigInt(256) + BigInt(arr[i]);
}
I'm honestly confused which one is right since all of them give different results but do give results.
I'm fine with either BE or LE but I'd just like to know why these 3 methods give a different result.
One reason for the different results is that they use different endianness.
Let's turn your snippets into a form where we can execute and compare them:
let source_array = new Uint8Array([
  0xff, 0xee, 0xdd, 0xcc, 0xbb, 0xaa, 0x99, 0x88,
  0x77, 0x66, 0x55, 0x44, 0x33, 0x22, 0x11]);
let buffer = source_array.buffer;

function method1(buf) {
  let bits = 8n
  if (ArrayBuffer.isView(buf)) {
    bits = BigInt(buf.BYTES_PER_ELEMENT * 8)
  } else {
    buf = new Uint8Array(buf)
  }
  let ret = 0n
  for (const i of buf.values()) {
    const bi = BigInt(i)
    ret = (ret << bits) + bi
  }
  return ret
}

function method2(buf) {
  let view = new DataView(buf, 0);
  return view.getBigUint64(0, true);
}

function method3(buf) {
  let arr = new Uint8Array(buf);
  let result = BigInt(0);
  for (let i = arr.length - 1; i >= 0; i--) {
    result = result * BigInt(256) + BigInt(arr[i]);
  }
  return result;
}
console.log(method1(buffer).toString(16));
console.log(method2(buffer).toString(16));
console.log(method3(buffer).toString(16));
Note that this includes a bug fix for method3: where you wrote for (let i = arr.length - 1; i >= 0; i++), you clearly meant i-- at the end.
For "method1" this prints: ffeeddccbbaa998877665544332211
Because method1 is a big-endian conversion (first byte of the array is most-significant part of the result) without size limit.
For "method2" this prints: 8899aabbccddeeff
Because method2 is a little-endian conversion (first byte of the array is least significant part of the result) limited to 64 bits.
If you switch the second getBigUint64 argument from true to false, you get big-endian behavior: ffeeddccbbaa9988.
To eliminate the size limitation, you'd have to add a loop: using getBigUint64 you can get 64-bit chunks, which you can assemble using shifts similar to method1 and method3.
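A rough sketch of such a loop, reusing the buffer defined above (the name method2Unlimited is ours, not part of the original answer):
// Size-unlimited little-endian conversion: read 64-bit words where possible,
// then pick up any trailing bytes one at a time.
function method2Unlimited(buf) {
  const view = new DataView(buf);
  let result = 0n;
  let shift = 0n;
  let offset = 0;
  // Full 64-bit little-endian chunks.
  for (; offset + 8 <= view.byteLength; offset += 8, shift += 64n) {
    result += view.getBigUint64(offset, true) << shift;
  }
  // Remaining tail bytes.
  for (; offset < view.byteLength; offset++, shift += 8n) {
    result += BigInt(view.getUint8(offset)) << shift;
  }
  return result;
}

console.log(method2Unlimited(buffer).toString(16)); // 112233445566778899aabbccddeeff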
For "method3" this prints: 112233445566778899aabbccddeeff
Because method3 is a little-endian conversion without size limit. If you reverse the for-loop's direction, you'll get the same big-endian behavior as method1: result * 256n gives the same value as result << 8n; the latter is a bit faster.
(Side note: BigInt(0) and BigInt(256) are needlessly verbose, just write 0n and 256n instead. Additional benefit: 123456789123456789n does what you'd expect, BigInt(123456789123456789) does not.)
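Putting those two remarks together, a big-endian variant of method3 could look like this (a sketch, not part of the original answer):
function method3BigEndian(buf) {
  let arr = new Uint8Array(buf);
  let result = 0n;
  for (let i = 0; i < arr.length; i++) {
    result = (result << 8n) + BigInt(arr[i]); // same value as result * 256n
  }
  return result;
}

console.log(method3BigEndian(buffer).toString(16)); // ffeeddccbbaa998877665544332211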
So which method should you use? That depends on:
(1) Do your incoming arrays assume BE or LE encoding?
(2) Are your BigInts limited to 64 bits or arbitrarily large?
(3) Is this performance-critical code, or are all approaches "fast enough"?
Taking a step back: if you control both parts of the overall process (converting BigInts to Uint8Array, then transmitting/storing them, then converting back to BigInt), consider simply using hexadecimal strings instead: that'll be easier to code, easier to debug, and significantly faster. Something like:
function serialize(bigint) {
  return "0x" + bigint.toString(16);
}

function deserialize(serialized_bigint) {
  return BigInt(serialized_bigint);
}
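A quick round-trip check (the sample value is arbitrary; note that this simple form assumes non-negative BigInts, because toString(16) of a negative value would put the minus sign after the 0x prefix):
const original = 29098125988731506183153025616435306561535n; // arbitrary sample value
const stored = serialize(original);            // a hex string, easy to log and store
console.log(deserialize(stored) === original); // true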
If you need to store really big integers that aren't bound to any fixed size such as 64 or 128 bits, and you also want to keep negative numbers, then this is a solution for you...
function encode(n) {
  let hex, bytes

  // shift all numbers 1 step to the left and xor if less than 0
  n = (n << 1n) ^ (n < 0n ? -1n : 0n)

  // convert to hex
  hex = n.toString(16)
  // pad if necessary
  if (hex.length % 2) hex = '0' + hex

  // convert hex to bytes
  bytes = hex.match(/.{1,2}/g).map(byte => parseInt(byte, 16))
  return bytes
}

function decode(bytes) {
  let hex, n

  // convert bytes back into hex
  hex = bytes.map(e => e.toString(16).padStart(2, 0)).join('')
  // convert hex to BigInt
  n = BigInt(`0x` + hex)
  // shift all numbers to the right and xor if the first bit was set (sign bit)
  n = (n >> 1n) ^ (n & 1n ? -1n : 0n)
  return n
}
const input = document.querySelector('input')
input.oninput = () => {
  console.clear()
  const bytes = encode(BigInt(input.value))

  // TODO: Save or transmit these bytes
  // new Uint8Array(bytes)
  console.log(bytes.join(','))

  const n = decode(bytes)
  console.log(n.toString(10) + 'n') // cuz SO can't render bigints...
}
input.oninput()
<input type="number" value="-39287498324798237498237498273323423" style="width: 100%">

Generic formula for building an array of numbers of base-n

Say I want to build an array of numbers in base 8, or base 26; I'm not sure how to approach a general formula for doing this:
console.log(arrayOfNumbersOfBase(8, 0, 10));
console.log(arrayOfNumbersOfBase(26, 0, 10));
function arrayOfNumbersOfBase(base, start, size)
{
  var array = [];
  for (var i = start, n = size; i < n; i++)
  {
    array.push(i * (base));
  }
  return array;
}
You can take the following approach as a starting point. Basically, I had to define some utility methods:
mapToChar(n) maps a number n to a character representation; for example, 10 is mapped to 'A'.
convertToBaseN(n, base) converts the number n to its representation in the given base. This method uses a recursive approach and relies on the previous one.
Finally, generateNumbersOfBase(base, start, size) generates an array of size elements starting with the number start for the given base.
CODE:
// Next utility method maps a decimal number to a character representation.
const mapToChar = (n) =>
{
  n = (n >= 0 && n <= 9) ? '0'.charCodeAt() + n : n - 10 + 'A'.charCodeAt();
  return String.fromCharCode(n);
}

// Next utility method converts a decimal number to its base-n representation.
const convertToBaseN = (n, base, res = "") =>
{
  if (n <= 0)
    return (res && res.split("").reverse().join("")) || "0";

  // Convert input number to given base by repeatedly
  // dividing it by base and taking the remainder.
  res += mapToChar(n % base);
  return convertToBaseN(Math.floor(n / base), base, res);
}

// Next method generates an array of numbers for a given base.
const generateNumbersOfBase = (base, start, size) =>
{
  return Array(size).fill(0).map((x, idx) => convertToBaseN(start + idx, base));
}

// Finally, generate some arrays.
let base10Array = generateNumbersOfBase(10, 15, 5);
let base2Array = generateNumbersOfBase(2, 5, 9);
let base16Array = generateNumbersOfBase(16, 10, 12);
let base8Array = generateNumbersOfBase(8, 1, 12);

console.log(
  JSON.stringify(base10Array),
  JSON.stringify(base2Array),
  JSON.stringify(base16Array),
  JSON.stringify(base8Array),
);
Now, if you need to convert some base-n representation back to a decimal number, you can use the following approach:
const convertToDec = (str, base) =>
{
  let codeA = 'A'.charCodeAt();
  let code0 = '0'.charCodeAt();

  return str.split("").reverse().reduce((acc, c, idx) =>
  {
    let code = c.charCodeAt();
    c = code + ((c >= '0' && c <= '9') ? -code0 : -codeA + 10);
    return acc += c * Math.pow(base, idx);
  }, 0);
}

// Let's convert back some arrays generated in the previous example.
let base2Array = ["101","110","111","1000","1001","1010","1011","1100","1101"];
let base16Array = ["A","B","C","D","E","F","10","11","12","13","14","15"];

let res2 = base2Array.map(x => convertToDec(x, 2));
let res16 = base16Array.map(x => convertToDec(x, 16));

console.log(
  JSON.stringify(res2),
  JSON.stringify(res16)
);

Adding two numbers JS

I want to add the two digits of a number in the range 10-99, for example:
Input: 16
Output: 1+6=7
Input: 99
Output: 18
function digital_root(n) {
  var z = n.toString().length;
  if (z == 2) {
    var x = z[0] + z[1]
    return x;
  }
}
console.log( digital_root(16) );
The output from this code is NaN. What should I correct?
You can try this:
function digital_root(n) {
  var z = n.toString();
  // use length here
  if (z.length == 2) {
    // convert to int
    var x = parseInt(z[0]) + parseInt(z[1]);
    return x;
  } else {
    return "not possible!";
  }
}
console.log( digital_root(16) );
console.log( digital_root(99) );
console.log( digital_root(999) );
Use split to split the string in half and add the two using parseInt to convert to a number.
const sum = (s) => (''+s).split('').reduce((a,b) => parseInt(a)+parseInt(b))
Reading the pieces left to right: our function, coerce to string, split in two, sum both.
Here is a test:
const sum = (s) => (''+s).split('').reduce((a,b) => parseInt(a)+parseInt(b))
console.log(sum(12))
There are several approaches to sum the digits of a number. You can convert it to a string, but I don't know if that's necessary at all. You can do it with numerical operations.
var input = 2568,
    sum = 0;

while (input) {
  sum += input % 10;
  input = Math.floor(input / 10);
}
console.log(sum);
Here's a fun short way to do it:
const number = 99
const temp = number.toString().split('')
const res = temp.reduce((a, c) => a + parseInt(c), 0) // 18
1.) Convert number to string
2.) Separate into individual numbers
3.) Use reduce to sum the numbers.
Your way would be the iterative way to solve this problem, but you can also use a recursive one.
Iterative solution (Imperative)
n.toString() creates a string from the number.
.split("") splits the string into chars.
.reduce(callback, startValue) reduces an array to a single value by applying the callback function to every element and updating the startValue.
(s, d) => s + parseInt(d) is the callback function, which parses the element to an integer and adds it to s (the startValue).
0 is the startValue.
Recursive solution (Functional)
condition?then:else is the short-hand if notation.
n<10: only one digit => just return it.
n%10: the last digit of the current number (1234%10 = 4).
digital_root_recurse(...) calls the function recursively.
Math.floor(n / 10) divides by 10 => shifts the decimal point to the left (1234 => 123).
... + ... adds the last digit and the return value (digital root) of n/10 (1234 => 4 + root(123)).
function digital_root_string(n) {
  return n.toString().split("").reduce((s, d) => s + parseInt(d), 0);
}

function digital_root_recurse(n) {
  return n < 10 ? n : n % 10 + digital_root_recurse(Math.floor(n / 10));
}
console.log(digital_root_string(16));
console.log(digital_root_string(99));
console.log(digital_root_recurse(16));
console.log(digital_root_recurse(99));
The issue in your code is that you stored the length of n into z. The length is an integer, so both z[0] and z[1] are undefined. The solution is to store the string in another variable and use that instead of z.
function digital_root(n) {
  n = n.toString();
  var l = n.length;
  if (l === 2) {
    return parseInt(n[0], 10) + parseInt(n[1], 10);
  }
}
console.log( digital_root(16) );
Simply use var x = parseInt(n/10) + (n%10); and it will work for you.
function digital_root(n) {
  var z = n.toString().length;
  if (z == 2) {
    var x = parseInt(n/10) + (n%10);
    return x;
  }
}
console.log( digital_root(16) );
console.log( digital_root(99) );
console.log( digital_root(62) );
Convert input to string, split it, convert each item back to number and sum them all:
function digital_root(n) {
  return String(n).split('').map(Number).reduce((a,b) => a + b)
}
const result = digital_root(99);
console.log(result);

Javascript - parse string to long

I have a working script in Python doing string-to-integer conversion based on a specified radix, using long(value, 16):
modulus=public_key["n"]
modulusDecoded = long(public_key["n"], 16)
which prints:
8079d7ae567dd2c02dadd1068843136314fa3893fa1fb1ab331682c6a85cad62b208d66c9974bbbb15d52676fd9907efb158c284e96f5c7a4914fd927b7326c40efa14922c68402d05ff53b0e4ccda90bbee5e6c473613e836e2c79da1072e366d0d50933327e77651b6984ddbac1fdecf1fd8fa17e0f0646af662a8065bd873
and
90218878289834622370514047239437874345637539049004160177768047103383444023879266805615186962965710608753937825108429415800005684101842952518531920633990402573136677611127418094912644368840442620417414685225340199872975797295511475162170060618806831021437109054760851445152320452665575790602072479287289305203
respectively.
This looks like a hex-to-decimal conversion.
I tried to get the same result in JS, but parseInt() and parseFloat() produce something completely different. On top of that, JavaScript seems not to like the characters in the input string and sometimes returns NaN.
Could anyone please provide a function or guidance on how to get the same functionality as in the Python script?
Numbers in JavaScript are floating point, so they always lose precision after a certain digit. To have unlimited numbers, one could instead use an array of digits from 0 to 9, which has an unlimited range. To do so based on the hex string input, I do a hex-to-int-array conversion, then I use the double dabble algorithm to convert the array to BCD. That can be printed easily:
const hexToArray = arr => arr.split("").map(n => parseInt(n,16));

const doubleDabble = arr => {
  var l = arr.length;
  for( var b = l * 4; b--;){
    // add && leftshift
    const overflow = arr.reduceRight((carry,n,i) => {
      // apply the >4 +3, then leftshift
      var shifted = ((i < (arr.length - l ) && n>4)?n+3:n ) << 1;
      // just take the right four bits and add the eventual carry value
      arr[i] = (shifted & 0b1111) | carry;
      // carry on
      return shifted > 0b1111;
    }, 0);
    // we've exceeded the current array, lets extend it:
    if(overflow) arr.unshift(overflow);
  }
  return arr.slice(0,-l);
};
const arr = hexToArray("8079d7");
const result = doubleDabble(arr);
console.log(result.join(""));
Try it
Using the built-in API parseInt, you can get up to 100 digits of accuracy on Firefox and 20 digits of accuracy on Chrome.
a = parseInt('8079d7ae567dd2c02dadd1068843136314fa3893fa1fb1ab331682c6a85cad62b208d66c9974bbbb15d52676fd9907efb158c284e96f5c7a4914fd927b7326c40efa14922c68402d05ff53b0e4ccda90bbee5e6c473613e836e2c79da1072e366d0d50933327e77651b6984ddbac1fdecf1fd8fa17e0f0646af662a8065bd873', 16)
a.toPrecision(110)
> Uncaught RangeError: toPrecision() argument must be between 1 and 21
# Chrome
a.toPrecision(20)
"9.0218878289834615508e+307"
# Firefox
a.toPrecision(100)
"9.021887828983461550807409292694387726882781812072572899692574101215517323445643340153182035092932819e+307"
From the ECMAScript Spec,
Let p be ? ToInteger(precision).
...
If p < 1 or p > 100, throw a RangeError exception.
As described in this answer, JavaScript numbers cannot represent integers larger than 9.007199254740991e+15 without loss of precision.
Working with larger integers in JavaScript requires a BigInt library or other special-purpose code, and large integers will then usually be represented as strings or arrays.
Re-using code from this answer helps to convert the hexadecimal number representation
8079d7ae567dd2c02dadd1068843136314fa3893fa1fb1ab331682c6a85cad62b208d66c9974bbbb15d52676fd9907efb158c284e96f5c7a4914fd927b7326c40efa14922c68402d05ff53b0e4ccda90bbee5e6c473613e836e2c79da1072e366d0d50933327e77651b6984ddbac1fdecf1fd8fa17e0f0646af662a8065bd873
to its decimal representation
90218878289834622370514047239437874345637539049004160177768047103383444023879266805615186962965710608753937825108429415800005684101842952518531920633990402573136677611127418094912644368840442620417414685225340199872975797295511475162170060618806831021437109054760851445152320452665575790602072479287289305203
as demonstrated in the following snippet:
function parseBigInt(bigint, base) {
  // convert bigint string to array of digit values
  for (var values = [], i = 0; i < bigint.length; i++) {
    values[i] = parseInt(bigint.charAt(i), base);
  }
  return values;
}

function formatBigInt(values, base) {
  // convert array of digit values to bigint string
  for (var bigint = '', i = 0; i < values.length; i++) {
    bigint += values[i].toString(base);
  }
  return bigint;
}

function convertBase(bigint, inputBase, outputBase) {
  // takes a bigint string and converts to different base
  var inputValues = parseBigInt(bigint, inputBase),
      outputValues = [], // output array, little-endian/lsd order
      remainder,
      len = inputValues.length,
      pos = 0,
      i;
  while (pos < len) { // while digits left in input array
    remainder = 0; // set remainder to 0
    for (i = pos; i < len; i++) {
      // long integer division of input values divided by output base
      // remainder is added to output array
      remainder = inputValues[i] + remainder * inputBase;
      inputValues[i] = Math.floor(remainder / outputBase);
      remainder -= inputValues[i] * outputBase;
      if (inputValues[i] == 0 && i == pos) {
        pos++;
      }
    }
    outputValues.push(remainder);
  }
  outputValues.reverse(); // transform to big-endian/msd order
  return formatBigInt(outputValues, outputBase);
}

var largeNumber =
  '8079d7ae567dd2c02dadd1068843136314fa389'+
  '3fa1fb1ab331682c6a85cad62b208d66c9974bb'+
  'bb15d52676fd9907efb158c284e96f5c7a4914f'+
  'd927b7326c40efa14922c68402d05ff53b0e4cc'+
  'da90bbee5e6c473613e836e2c79da1072e366d0'+
  'd50933327e77651b6984ddbac1fdecf1fd8fa17'+
  'e0f0646af662a8065bd873';

// convert largeNumber from base 16 to base 10
var largeIntDecimal = convertBase(largeNumber, 16, 10);

// show decimal result in console:
console.log(largeIntDecimal);

// check that it matches the expected output:
console.log('Matches expected:',
  largeIntDecimal === '90218878289834622370514047239437874345637539049'+
  '0041601777680471033834440238792668056151869629657106087539378251084294158000056'+
  '8410184295251853192063399040257313667761112741809491264436884044262041741468522'+
  '5340199872975797295511475162170060618806831021437109054760851445152320452665575'+
  '790602072479287289305203'
);

// check that conversion and back-conversion results in the original number
console.log('Converts back:',
  convertBase(convertBase(largeNumber, 16, 10), 10, 16) === largeNumber
);
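As a side note, engines that have since gained native BigInt can do the same base-16 to base-10 conversion directly, because BigInt accepts 0x-prefixed hex strings; this prints the same decimal representation:
// Assumes native BigInt support; largeNumber is the hex string defined above.
console.log(BigInt('0x' + largeNumber).toString(10));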

How to implement toString to convert a number to a string?

I was asked during an interview to implement toString() to convert a number into a string.
toString()
n => s
123 => "123"
Aside from:
converting the number by concatenating an empty string
123+""
using the native toString() function
(123).toString()
creating a new string
String(123)
How else could toString() be implemented in javascript?
You can use it as the property name of an object.
function toString(value) {
  // Coerces value to a primitive string (or symbol)
  var obj = {};
  obj[value] = true;
  return Object.getOwnPropertyNames(obj)[0];
}

console.log(toString(123));      // 123 -> "123"
console.log(toString(1.23));     // 1.23 -> "1.23"
console.log(toString(NaN));      // NaN -> "NaN"
console.log(toString(Infinity)); // Infinity -> "Infinity"
console.log(toString(-0));       // -0 -> "0"
console.log(toString(1e99));     // 1e99 -> "1e+99"
You can also use DOM attributes:
function toString(value) {
  var obj = document.createElement('div');
  obj.setAttribute('data-toString', value);
  return obj.getAttribute('data-toString');
}
Or join an array
return [value].join();
And a big etcetera. There are lots of things which internally use the ToString abstract operation.
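One more variant along those lines (not from the original answer): the Error constructor coerces its message argument through ToString as well.
function toString(value) {
  // new Error(message) applies ToString to message before storing it.
  return new Error(value).message;
}

console.log(toString(1.23)); // "1.23"
console.log(toString(1e99)); // "1e+99"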
This works for integers. It takes the number modulo 10 and divides it by 10 repeatedly, then adds 48 to the digits and uses String.fromCharCode to get a string value of the digits, then joins everything.
function toString(n){
  var minus = (n < 0
        ? "-"
        : ""),
      result = [];
  n = Math.abs(n);
  while(n > 0){
    result.unshift(n % 10);
    n = Math.floor(n / 10);
  }
  return minus + (result.map(function(d){
      return String.fromCharCode(d + 48);
    })
    .join("") || "0");
}
console.log(toString(123123));
console.log(toString(999));
console.log(toString(0));
console.log(toString(-1));
The trick here is to consider a number as a series of digits. This is not an inherent property of numbers, since the base-10 representation that we use is quite arbitrary. But once a number is represented as a series of digits, it is quite easy to convert each digit individually to a string, and concatenate all such strings.
EDIT: As pointed out, this only takes integers into consideration (which is probably acceptable for an interview question).
var intToDigits = function(n) {
  var highestPow = 1;
  while (highestPow <= n) highestPow *= 10;
  // div is now the largest power of 10 not greater than n (or 1 for n === 0)
  var div = Math.max(highestPow / 10, 1);
  var digits = [];
  do {
    var digit = Math.floor(n / div);
    n = n - (digit * div);
    digits.push(digit);
    div /= 10;
  } while (div >= 1);
  return digits;
};

var toString = function(n) {
  var digitArr = intToDigits(n);
  return digitArr.map(function(n) {
    return "0123456789"[n];
  }).join('');
};
Usage:
>> toString(678)
"678"
If you're using ES6 you could use template literals.
var a = 5;
console.log(`${a}`);
