how to convert giant numbers to pure number in JS - javascript

How do I convert this giant number, 7.125693126643573e+26, to a plain number in JS without any scientific notation?

Chances are, if the number is provided in scientific notation, it is too large to represent exactly as a regular integer without losing precision. Use string functions to break the scientific-notation string apart and construct a new digit string.
const largeNumber = "7.125693126643573E+26";
// Separate the leading digit from the rest, then the fractional digits from the exponent.
const [digit, decimal] = largeNumber.toLowerCase().split(".");
const [decimals, power] = decimal.split('e+');
// Pad with however many zeros the exponent requires beyond the digits we already have.
const zerosToFill = power - decimals.length;
const zeros = Array(zerosToFill).fill(0);
const fullNumber = [digit, decimals, ...zeros].join('');
console.log(`${largeNumber} => ${fullNumber}`);

Use .toLocaleString
var a = 7.125693126643573E26;
console.log(a.toLocaleString());
// output -> 712,569,312,664,357,300,000,000,000
console.log(a.toLocaleString('aa', { useGrouping: false }));
// output -> 712569312664357300000000000
Passing useGrouping: false removes the digit grouping as well.
Note that the result is a string, not a number.

Use the built-in function Number:
let number = Number("7.125693126643573E26");
You can also convert it using Math:
let number = "7.125693126643573e+26";
let parts = String(number).toLowerCase().split('e'),
    e = parts.pop(),          // exponent, e.g. "+26"
    l = Math.abs(e),
    sign = e / l,
    coeff_array = parts[0].split('.');
let dec = coeff_array[1];
if (dec) l = l - dec.length;  // zeros still needed after the existing digits
let num = coeff_array.join('') + new Array(l + 1).join('0');
alert(num);

Related

Getting wrong result for binary to decimal even after using BigInt() in javascript

I am trying to add two given binary strings after converting them to decimals (Numbers) and then converting the resulting decimal (Number) back to a string.
I am getting the wrong binary-to-decimal result even after using BigInt().
let a = "10100000100100110110010000010101111011011001101110111111111101000000101111001110001111100001101";
let b="110101001011101110001111100110001010100001101011101010000011011011001011101111001100000011011110011";
var twoSum = function(a, b) {
    let a1 = BigInt(parseInt(a, 2));
    let b1 = BigInt(parseInt(b, 2));
    let aStr = a1.toString(10);
    let bStr = b1.toString(10);
    console.log(aStr);
    console.log(bStr);
};
console.log(twoSum(a, b));
Output:
24847893154024981755840167936
526700554598729745018195542016
Correct result is : 24847893154024981730169397005 & 526700554598729746900966573811
I don't know why I am getting the wrong result for the binary-to-decimal conversion.
parseInt returns a Number. Because a Number carries only 53 bits of integer precision, far less than the length of your input strings, precision is already lost at that point. Converting that Number to a BigInt afterwards doesn't bring the lost precision back.
The solution is to convert the strings to BigInt directly. There was supposed to be a BigInt.parseInt function, but TC39 (the committee that standardizes JavaScript) never got around to finishing that. In the meantime, for non-decimal inputs, the BigInt constructor does understand strings starting with 0x... (hex), 0o... (octal), and 0b... (binary). The latter is what you want in this case. So you could fix your code as follows:
function parseBinaryToBigInt(a) {
    return BigInt('0b' + a);
}

function twoSum(a, b) {
    let a1 = parseBinaryToBigInt(a);
    let b1 = parseBinaryToBigInt(b);
    let aStr = a1.toString(10);
    let bStr = b1.toString(10);
    console.log(aStr);
    console.log(bStr);
}
let a = "10100000100100110110010000010101111011011001101110111111111101000000101111001110001111100001101";
// Binary to decimal
const d = BigInt('0b' + a);
// Turn the binary
const b = BigInt(a);
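To see where the precision actually disappears, here is a small illustration (a sketch using a made-up 64-bit value, not the strings from the question):
// parseInt returns a Number, which has only 53 bits of integer precision,
// so long binary strings are rounded before BigInt ever sees them.
const bits = "1".repeat(64);                  // sixty-four 1-bits, i.e. 2^64 - 1
console.log(parseInt(bits, 2));               // 18446744073709552000 (rounded)
console.log(BigInt("0b" + bits).toString());  // 18446744073709551615 (exact)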

How to convert this unicode fraction(“¼” and “½”) to Number(1/2, 1/3) in JS?

I just want to convert the Unicode fractions “¼” and “½” into Numbers. I tried various methods and also searched Google, but I did not find any solution.
If anyone has any idea how I can convert these Unicode fractions to Numbers, it would be appreciated.
You can use String.prototype.normalize("NFKD") to get the two operands from such characters:
const unicodes = [ "¼", "½", "⅐", "⅔", "⅖", "⅙", "⅞" ];
unicodes.forEach((char) => {
    const normalized = char.normalize("NFKD");
    const operands = normalized.split("⁄"); // U+2044 FRACTION SLASH, produced by NFKD
    console.log(char, operands[0] + "/" + operands[1], operands[0] / operands[1]);
});
You don't provide any context but it could be as simple as:
const m = { '¼': 0.25, '½': 0.5 };
if (c in m) number = m[c];

Integer and string Javascript

I'm doing a simple exercise: "Write a JavaScript program to compute the sum of the two given integers. If the two values are the same, then return triple their sum."
The innerHTML part is OK, but it seems that my variables are strings and not numbers (and when I use parseFloat it still doesn't work).
Example: p161 = 10; p162 = 5; => ris = 105 and not 15
let p16 = document.getElementById("p16");
document.getElementById("button16").addEventListener("click", es);

function es() {
    let p161 = document.getElementById("input161").value;
    let p162 = document.getElementById("input162").value;
    let ris = 0;
    if (p161 == p162) {
        ris = (p161 + p162) * 3;
        return p16.innerHTML = ris;
    } else {
        ris = p161 + p162;
        return p16.innerHTML = ris;
    }
}
You are concatenating strings, so what you see makes sense. Since you are looking for the sum of integers, I don't see why you need parseFloat. If you want numbers you should just do:
let p161 = +document.getElementById("input161").value;
let p162 = +document.getElementById("input162").value;
The plus sign in this case is the unary plus operator, which converts its operand to the Number type according to the ECMAScript spec.
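Applied to the original handler, that could look roughly like this (same element ids and p16 element as in the question):
function es() {
    // unary + turns the input strings into numbers before any arithmetic
    const p161 = +document.getElementById("input161").value;
    const p162 = +document.getElementById("input162").value;
    const ris = p161 === p162 ? (p161 + p162) * 3 : p161 + p162;
    p16.innerHTML = ris;
}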

How do I shorten a uuid to 15 or fewer characters and expand it back

Given a uuid (v4) without dashes, how can I shorten it to a string of 15 or fewer characters? I should also be able to get back to the original uuid from the 15-character string.
I am trying to shorten it to send it in a flat file, and the file format specifies this field as a 15-character alphanumeric field. Given that shortened uuid, I should be able to map it back to the original uuid.
Here is what I tried, but definitely not what I wanted.
export function shortenUUID(uuidToShorten: string, length: number) {
    const uuidWithoutDashes = uuidToShorten.replace(/-/g, '');
    const radix = uuidWithoutDashes.length;
    const randomId = [];
    for (let i = 0; i < length; i++) {
        randomId[i] = uuidWithoutDashes[0 | Math.random() * radix];
    }
    return randomId.join('');
}
As AuxTaco pointed out, if you actually mean "alphanumeric" as in it matches "/^[A-Za-z0-9]{0,15}/" (giving an alphabet of 26 + 26 + 10 = 62 characters), then it is really impossible. You can't fit 3 gallons of water in a 1-gallon bucket without losing something. A UUID is 128 bits, so to encode it with a 62-character alphabet you'd need at least 22 characters (log base 62 of 2^128 ≈ 22).
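A quick sanity check of that arithmetic (nothing assumed beyond Math):
// A 128-bit value encoded with a 62-character alphabet needs at least
// ceil(128 / log2(62)) characters.
const minLength = Math.ceil(128 / Math.log2(62));
console.log(minLength); // 22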
If you are more flexible with your charset and just need 15 unicode characters that you can put in a text document, then my answer will help.
Note: when I wrote the first part of this answer, I read the length as 16 rather than 15. The simpler approach won't quite work for 15; the more complex version further down still will.
In order to do so, you'd have to use some kind of two-way compression algorithm (similar to an algorithm used for zipping files).
However, the problem with trying to compress something like a UUID is you'd probably have lots of collisions.
A UUID v4 is 32 characters long (without the dashes). It's hexadecimal, so its character space is 16 characters (0123456789ABCDEF).
That gives 16^32 possible combinations, approximately 3.4028237e+38, or 340,282,370,000,000,000,000,000,000,000,000,000,000. To make it recoverable after compression, you'd have to make sure you don't have any collisions (i.e., no 2 UUIDs turn into the same value). That's a lot of possible values (which is exactly why UUIDs use that many: the chance of 2 random UUIDs colliding is only 1 out of that big number).
To crunch that many possibilities down to 16 characters, the compressed form needs at least as many possible values. With 16 characters, that means a charset of 256 characters (the 16th root of that big number: 256^16 == 16^32). That's assuming you have an algorithm that never creates a collision.
One way to ensure you never have collisions would be to convert it from a base-16 number to a base-256 number. That would give you a 1-to-1 relation, ensuring no collisions and making it perfectly reversible. Normally, switching bases is easy in JavaScript: parseInt(someStr, radix).toString(otherRadix) (e.g., parseInt('00FF', 16).toString(20)). Unfortunately, JavaScript only does up to a radix of 36, so we'll have to do the conversion ourselves.
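For example (this radix-36 ceiling is what forces the manual conversion below):
console.log(parseInt('00FF', 16).toString(20)); // "cf"  (255 in base 20)
console.log((255).toString(36));                // "73"  (36 is the largest allowed radix)
// (255).toString(256) would throw a RangeError (radix must be between 2 and 36)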
The catch with such a large base is representing it. You could arbitrarily pick 256 different characters, throw them in a string, and use that for a manual conversion. However, I don't think there are 256 different symbols on a standard US keyboard, even if you treat upper and lowercase as different glyphs.
A simpler solution would be to just use arbitrary character codes from 0 to 255 with String.fromCharCode().
Another small catch is if we tried to treat that all as one big number, we'd have issues because it's a really big number and JavaScript can't properly represent it exactly.
Instead of that, since we already have hexadecimal, we can just split it into pairs of decimals, convert those, then spit them out. 32 hexadecimal digits = 16 pairs, so that'll (coincidentally) be perfect. (If you had to solve this for an arbitrary size, you'd have to do some extra math and converting to split the number into pieces, convert, then reassemble.)
const uuid = '1234567890ABCDEF1234567890ABCDEF';
const letters = uuid.match(/.{2}/g).map(pair => String.fromCharCode(parseInt(pair, 16)));
const str = letters.join('');
console.log(str);
Note that there are some odd-looking characters in there, because not every char code maps to a "normal" symbol. If whatever you are sending the string to can't handle them, you'll instead need to go with the array approach: find 256 characters it can handle, make an array of them, and instead of String.fromCharCode(num), use charset[num].
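A minimal sketch of that array-based variant (the particular code-point range here is arbitrary and only for illustration):
// Pick 256 characters the receiving format can handle and index into the
// array; reversing then uses charset.indexOf(ch) instead of charCodeAt.
const charset = Array.from({ length: 256 }, (_, i) => String.fromCharCode(0x20 + i));
const uuid = '1234567890ABCDEF1234567890ABCDEF';
const packed = uuid.match(/.{2}/g).map(pair => charset[parseInt(pair, 16)]).join('');
console.log(packed);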
To convert it back, you would just do the reverse: get the char code, convert to hex, add them together:
const uuid = '1234567890ABCDEF1234567890ABCDEF';
const compress = uuid =>
uuid.match(/.{2}/g).map(pair => String.fromCharCode(parseInt(pair, 16))).join('');
const expand = str =>
str.split('').map(letter => ('0' + letter.charCodeAt(0).toString(16)).substr(-2)).join('');
const str = compress(uuid);
const original = expand(str);
console.log(str, original, original.toUpperCase() === uuid.toUpperCase());
For fun, here is how you could do it for any arbitrary input base and output base.
This code is a bit messy because it is really expanded to make it more self-explanatory, but it basically does what I described above.
Since JavaScript doesn't have an infinite level of precision, if you end up converting a really big number (one that gets displayed like 2.00000000e+10), any digits beyond the precision shown before the e have essentially been chopped off and replaced with zeros. To account for that, you'll have to break the number up in some way.
In the code below there is a "simple" way, which doesn't account for this and therefore only works for smaller strings, and then a proper way, which breaks the number up. I chose a simple, yet somewhat inefficient, approach of just splitting the string based on how many digits its decimal form has. This isn't the best way (since the math doesn't really split that cleanly), but it does the trick (at the cost of needing a slightly larger charset).
You could employ a smarter splitting mechanism if you really needed to keep your charset size to a minimum.
const smallStr = '1234';
const str = '1234567890ABCDEF1234567890ABCDEF';
const hexCharset = '0123456789ABCDEF'; // could also be an array
const compressedLength = 16;
const maxDigits = 16; // this may be a bit browser specific. You can make it smaller to be safer.
const logBaseN = (num, n) => Math.log(num) / Math.log(n);
const nthRoot = (num, n) => Math.pow(num, 1/n);
const digitsInNumber = num => Math.log(num) * Math.LOG10E + 1 | 0;
const partitionString = (str, numPartitions) => {
const partsSize = Math.ceil(str.length / numPartitions);
let partitions = [];
for (let i = 0; i < numPartitions; i++) {
partitions.push(str.substr(i * partsSize, partsSize));
}
return partitions;
}
console.log('logBaseN test:', logBaseN(256, 16) === 2);
console.log('nthRoot test:', nthRoot(256, 2) === 16);
console.log('partitionString test:', partitionString('ABCDEFG', 3));
// charset.length should equal radix
const toDecimalFromCharset = (str, charset) =>
str.split('')
.reverse()
.map((char, index) => charset.indexOf(char) * Math.pow(charset.length, index))
.reduce((sum, num) => (sum + num), 0);
const fromDecimalToCharset = (dec, charset) => {
const radix = charset.length;
let str = '';
for (let i = Math.ceil(logBaseN(dec + 1, radix)) - 1; i >= 0; i--) {
const part = Math.floor(dec / Math.pow(radix, i));
dec -= part * Math.pow(radix, i);
str += charset[part];
}
return str;
};
console.log('toDecimalFromCharset test 1:', toDecimalFromCharset('01000101', '01') === 69);
console.log('toDecimalFromCharset test 2:', toDecimalFromCharset('FF', hexCharset) === 255);
console.log('fromDecimalToCharset test:', fromDecimalToCharset(255, hexCharset) === 'FF');
const arbitraryCharset = length => new Array(length).fill(1).map((a, i) => String.fromCharCode(i));
// the Math.pow() bit is the possible number of values in the original
const simpleDetermineRadix = (strLength, originalCharsetSize, compressedLength) => nthRoot(Math.pow(originalCharsetSize, strLength), compressedLength);
// the simple ones only work for values that in decimal are so big before lack of precision messes things up
// compressedCharset.length must be >= compressedLength
const simpleCompress = (str, originalCharset, compressedCharset, compressedLength) =>
fromDecimalToCharset(toDecimalFromCharset(str, originalCharset), compressedCharset);
const simpleExpand = (compressedStr, originalCharset, compressedCharset) =>
fromDecimalToCharset(toDecimalFromCharset(compressedStr, compressedCharset), originalCharset);
const simpleNeededRadix = simpleDetermineRadix(str.length, hexCharset.length, compressedLength);
const simpleCompressedCharset = arbitraryCharset(simpleNeededRadix);
const simpleCompressed = simpleCompress(str, hexCharset, simpleCompressedCharset, compressedLength);
const simpleExpanded = simpleExpand(simpleCompressed, hexCharset, simpleCompressedCharset);
// Notice, it gets a little confused because of a lack of precision in the really big number.
console.log('Original string:', str, toDecimalFromCharset(str, hexCharset));
console.log('Simple Compressed:', simpleCompressed, toDecimalFromCharset(simpleCompressed, simpleCompressedCharset));
console.log('Simple Expanded:', simpleExpanded, toDecimalFromCharset(simpleExpanded, hexCharset));
console.log('Simple test:', simpleExpanded === str);
// Notice it works fine for smaller strings and/or charsets
const smallCompressed = simpleCompress(smallStr, hexCharset, simpleCompressedCharset, compressedLength);
const smallExpanded = simpleExpand(smallCompressed, hexCharset, simpleCompressedCharset);
console.log('Small string:', smallStr, toDecimalFromCharset(smallStr, hexCharset));
console.log('Small simple compressed:', smallCompressed, toDecimalFromCharset(smallCompressed, simpleCompressedCharset));
console.log('Small expanded:', smallExpanded, toDecimalFromCharset(smallExpanded, hexCharset));
console.log('Small test:', smallExpanded === smallStr);
// these will break the decimal up into smaller numbers with a max length of maxDigits
// it's a bit browser specific where the lack of precision is, so a smaller maxDigits
// may make it safer
//
// note: charset may need to be a little bit bigger than what determineRadix decides, since we're
// breaking the string up
// also note: we're breaking the string into parts based on the number of digits in it as a decimal
// this will actually make each individual parts decimal length smaller, because of how numbers work,
// but that's okay. If you have a charset just barely big enough because of other constraints, you'll
// need to make this even more complicated to make sure it's perfect.
const partitionStringForCompress = (str, originalCharset) => {
const numDigits = digitsInNumber(toDecimalFromCharset(str, originalCharset));
const numParts = Math.ceil(numDigits / maxDigits);
return partitionString(str, numParts);
}
const partitionedPartSize = (str, originalCharset) => {
const parts = partitionStringForCompress(str, originalCharset);
return Math.floor((compressedLength - parts.length - 1) / parts.length) + 1;
}
const determineRadix = (str, originalCharset, compressedLength) => {
const parts = partitionStringForCompress(str, originalCharset);
return Math.ceil(nthRoot(Math.pow(originalCharset.length, parts[0].length), partitionedPartSize(str, originalCharset)));
}
const compress = (str, originalCharset, compressedCharset, compressedLength) => {
const parts = partitionStringForCompress(str, originalCharset);
const partSize = partitionedPartSize(str, originalCharset);
return parts.map(part => simpleCompress(part, originalCharset, compressedCharset, partSize)).join(compressedCharset[compressedCharset.length-1]);
}
const expand = (compressedStr, originalCharset, compressedCharset) =>
compressedStr.split(compressedCharset[compressedCharset.length-1])
.map(part => simpleExpand(part, originalCharset, compressedCharset))
.join('');
const neededRadix = determineRadix(str, hexCharset, compressedLength);
const compressedCharset = arbitraryCharset(neededRadix);
const compressed = compress(str, hexCharset, compressedCharset, compressedLength);
const expanded = expand(compressed, hexCharset, compressedCharset);
console.log('String:', str, toDecimalFromCharset(str, hexCharset));
console.log('Needed radix size:', neededRadix); // bigger than normal because of how we're breaking it up... this could be improved if needed
console.log('Compressed:', compressed);
console.log('Expanded:', expanded);
console.log('Final test:', expanded === str);
To use the above specifically to answer the question, you would use:
const hexCharset = '0123456789ABCDEF';
const compressedCharset = arbitraryCharset(determineRadix(uuid, hexCharset));
// UUID to 15 characters
const compressed = compress(uuid, hexCharset, compressedCharset, 15);
// 15 characters to UUID
const expanded = expand(compressed, hexCharset, compressedCharset);
If there are problematic characters in the arbitrary charset, you'll have to do something to either filter those out or hard-code a specific charset. Just make sure all of the functions are deterministic (i.e., they give the same result every time).

How can I create the mathematical number 1e6 in JavaScript?

How do I create the number 1e6 with JavaScript?
var veryLargeNumber = //1e6
Here are some different ways:
var veryLargeNumber = 1e6;
var veryLargeNumber = 1.0e+06;
var veryLargeNumber = 1000000;
var veryLargeNumber = 0xf4240;          // hexadecimal
var veryLargeNumber = 03641100;         // legacy octal literal (disallowed in strict mode)
var veryLargeNumber = Math.pow(10, 6);
It is written the way you wrote it: var notVeryLargeNumber = 1e6.
Like you wrote above:
var veryLargeNumber = 1e6; // equals 1 * 10^6
This works just fine for me
var veryLargeNumber = 1e6;
console.log( veryLargeNumber );
outputs:
1000000
For more information about really "large" numbers within JavaScript, have a look at this question:
What is JavaScript's Max Int? What's the highest Integer value a Number can go to without losing precision?
For the curious, I went on a little learning safari...
Although the E stands for exponent, the notation is usually referred to as (scientific) E notation or (scientific) e notation
...
Because superscripted exponents like 10^7 cannot always be conveniently displayed, the letter E or e is often used to represent "times ten raised to the power of" (which would be written as "× 10^n") and is followed by the value of the exponent; in other words, for any two real numbers m and n, the usage of "mEn" would indicate a value of m × 10^n.
https://en.wikipedia.org/wiki/Scientific_notation#E_notation
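In JavaScript terms, that definition is easy to verify:
// "mEn" means m × 10^n:
console.log(1.5e2 === 1.5 * Math.pow(10, 2)); // true (both are 150)
console.log(2e-3 === 2 / 1000);               // true (both are 0.002)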
Use exponential notation if number starts with “0.” followed by more than five zeros. Example:
> 0.0000003
3e-7
http://www.2ality.com/2012/03/displaying-numbers.html
Also: How to convert a String containing Scientific Notation to correct Javascript number format
Number("4.874915326E7") //returns 48749153.26
