Why is BigInt(largeNumber) still not precise? - javascript

I want to find the sum of all digits of a large number, for example 99^95.
I applied BigInt to compute the power of the large number and sum its digits using the code below.
BigInt(Math.pow(99, 95)).toString().split("").reduce((a, b) => a * 1 + b * 1)
However, it returns 845, but the correct answer should be 972.
I have checked the integer output of the large number. In JavaScript it is:
3848960788934848488282452569509484590776195611314554049114673132510910096787679715604422673797115451807631980373077374162416714994207463122539142978709403811688831410945323915071533162168320
This is not the same as the correct answer (in C#):
3848960788934848611927795802824596789608451156087366034658627953530148126008534258032267383768627487094610968554286692697374726725853195657679460590239636893953692985541958490801973870359499.
I am wondering what is wrong in my code that causes the difference.

The result of Math.pow(99, 95) has already lost precision by the time it is evaluated; converting it to a BigInt after the fact does not recover that precision.
Use BigInts from the beginning instead, and use ** instead of Math.pow so the exponentiation stays exact:
console.log(
  (99n ** 95n)
    .toString()
    .split('')
    .reduce((a, b) => a + Number(b), 0)
);

BigInt(Math.pow(99,95))
This runs Math.pow on two floating-point Numbers, then converts the already-imprecise result to a BigInt.
You want BigInt(99) ** BigInt(95) instead.
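Putting it together with the digit sum from the question, a minimal sketch:
// Keep everything in BigInt so no precision is lost before stringifying.
const digitSum = (BigInt(99) ** BigInt(95))
  .toString()
  .split('')
  .reduce((sum, digit) => sum + Number(digit), 0);

console.log(digitSum); // 972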

Related

JavaScript BigInt print unsigned binary representation

How do you print an unsigned integer when using JavaScript's BigInt?
BigInts can be printed in binary representation using toString(2). However, for negative values this function just prepends a - sign.
BigInt(42).toString(2)
// output => 101010
BigInt(-42).toString(2)
// output => -101010
How do I print the unsigned representation of BigInt(-42)? I know that with regular numbers you can do (-42 >>> 0).toString(2); however, the unsigned right shift does not seem to be implemented for BigInt, resulting in an error:
(BigInt(-42) >>> BigInt(0)).toString(2)
// TypeError: BigInts have no unsigned right shift, use >> instead
An easy way to get the two's complement representation for negative BigInts is to use BigInt.asUintN(bit_width, bigint):
> BigInt.asUintN(64, -42n).toString(2)
'1111111111111111111111111111111111111111111111111111111111010110'
Note that:
You have to define the number of bits you want (64 in my example); there is no "natural"/automatic value for that.
Given only that string of binary digits, there is no way to tell whether it is meant to be a positive BigInt (with a value close to 2n**64n) or the two's complement representation of -42n. So if you want to reverse the conversion later, you will have to provide this information somehow (e.g. by writing your code such that it implicitly assumes one or the other option); see the sketch after this list.
Relatedly, this is not how -42n is stored internally in current browsers. (But that doesn't need to worry you, since you can create this output whenever you want/need to.)
You could achieve the same result with a subtraction: ((2n ** 64n) - 42n).toString(2) -- again, you can specify how many bits you'd like to see.
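If you do want to reverse the conversion later (assuming you know the string was produced as a 64-bit two's complement pattern), BigInt.asIntN can reinterpret it as a signed value; a minimal sketch:
const bits = BigInt.asUintN(64, -42n).toString(2);
// Reinterpret the 64-bit pattern as a signed value again
// (pad in case the pattern has leading zeros):
const back = BigInt.asIntN(64, BigInt('0b' + bits.padStart(64, '0')));
console.log(back); // -42n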
Is there something like bitAtIndex for BigInt?
No, because there is no specification for how BigInts are represented. Engines can choose to use bits in any way they want, as long as the resulting BigInts behave as the specification demands.
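That said, you can still read a bit at a given index with ordinary shift-and-mask arithmetic, independent of how the engine stores the value; a minimal sketch (bitAtIndex is just an illustrative name):
// 1n or 0n for the bit at position index (two's complement semantics for
// negative values, because >> and & act on BigInts as if they were
// infinitely sign-extended).
const bitAtIndex = (value, index) => (value >> BigInt(index)) & 1n;

console.log(bitAtIndex(42n, 1));  // 1n  (42 = 0b101010)
console.log(bitAtIndex(42n, 2));  // 0n
console.log(bitAtIndex(-42n, 0)); // 0n  (...11010110)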
@Kyroath:
"negative BigInts are represented as infinite-length two's complement"
No, they are not: the implementations in current browsers represent BigInts as "sign + magnitude", not as two's complement. However, this is an unobservable implementation detail: implementations could change how they store BigInts internally, and BigInts would behave just the same.
What you probably meant to say is that the two's complement representation of any negative integer (big or not) is conceptually an infinite stream of 1-bits, so printing or storing that in finite space always requires defining a number of characters/bits after which the stream is simply cut off. When you have a fixed-width type, that obviously defines this cutoff point; for conceptually-unlimited BigInts, you have to define it yourself.
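For example, the same value cut off at two different widths:
console.log(BigInt.asUintN(8, -42n).toString(2));  // 11010110
console.log(BigInt.asUintN(16, -42n).toString(2)); // 1111111111010110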
Here's a way to convert 64-bit BigInts into binary strings:
// take the two's complement of a binary string
const twosComplement = (binaryString) => {
  let complement = BigInt('0b' + binaryString.split('').map(e => e === "0" ? "1" : "0").join(''));
  return decToBinary(complement + BigInt(1));
};

const decToBinary = (num) => {
  let result = "";
  const isNegative = num < 0;
  if (isNegative) num = -num;
  while (num > 0) {
    result = (num % BigInt(2)) + result; // BigInt + string concatenates
    num /= BigInt(2);                    // BigInt division truncates
  }
  if (result.length > 64) result = result.substring(result.length - 64);
  result = result.padStart(64, "0");
  if (isNegative) result = twosComplement(result);
  return result;
};

console.log(decToBinary(BigInt(5)));  // 0000000000000000000000000000000000000000000000000000000000000101
console.log(decToBinary(BigInt(-5))); // 1111111111111111111111111111111111111111111111111111111111111011
This code doesn't do any validation, however.

Is it safe to use strings in arithmetic expressions?

I have a number of seconds in a string, like: '5'.
From that I need to get the number of milliseconds and it has to be of type Number, like: 5000.
I know that you can easily convert strings to numbers by prefixing them with a +
const result = +'5';
console.log(result, typeof(result));
However, playing around I saw that this isn't even necessary, because JavaScript automatically does the conversion for you when you use arithmetic between strings and numbers.
const result = '5' * 3;
console.log(result, typeof(result));
console.log('5.3' * 3);
On the docs I only found info about the Number() constructor.
My question is: is it safe to use arithmetic on strings (except for the addition)? Can I rely on the behaviour showed above?
Yes, it is safe. All arithmetic operations except a binary + will convert the operands to numbers. That includes bitwise operators as well as unary plus.
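A few illustrative cases:
console.log('6' - '2'); // 4
console.log('6' * '2'); // 12
console.log('6' / 2);   // 3
console.log(+'6');      // 6    (unary plus)
console.log('6' | 0);   // 6    (bitwise operators convert too)
console.log('6' + 2);   // '62' (binary + concatenates instead)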
With that said, it is probably a good idea not to rely on this extensively. Imagine that you have this code:
function calculate(a, b) {
  return a * 2 + b * 3;
}
// elsewhere in the code
console.log(calculate("5", "2"));
This works fine because both a and b are multiplied, so they are going to be converted to numbers. But six months later you come back to the project, realise you want to modify the calculation, and change the function:
function calculate(a, b) {
  return a + b * 3;
}
// elsewhere in the code
console.log(calculate("5", "2"));
...and suddenly the result is wrong.
It is therefore better to explicitly convert the values to numbers if you want to do arithmetic. It saves the occasional accidental bug and it is more maintainable.
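For the seconds-to-milliseconds case from the question, an explicit version might look like this (a minimal sketch):
const seconds = '5';
const milliseconds = Number(seconds) * 1000;
console.log(milliseconds, typeof milliseconds); // 5000 'number'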
Yes, but you have to be careful...
console.log('5.3' * 3);
console.log('5.3' + 3);
These two very similar expressions coerce the values in different ways:
* can only be applied between two numbers, so '5.3' becomes 5.3
+ can also concatenate strings, and since one operand is a string, 3 becomes '3'
If you understand all of this you can do it, but I'd recommend against it: it's very easy to miss, and JS has a lot of weird, unexpected casts.
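A couple of the surprises hinted at above:
console.log('' * 3);    // 0   (the empty string converts to 0)
console.log(' 5 ' * 3); // 15  (surrounding whitespace is ignored)
console.log('5x' * 3);  // NaN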

Converting a Two's complement number to its binary representation

I am performing bitwise operations, the result of which is apparently being stored as a two's complement number. When I hover over the variable it's stored in, I see: num = -2086528968.
The binary representation of that number that I want is 10000011101000100001100000111000.
But when I call num.toString(2) I get a completely different binary representation, the raw number's binary instead of the two's complement: -1111100010111011110011111001000.
How do I get the first string back?
Link to a converter: rapidtables.com/convert/number/decimal-to-binary.html
Put in this number: -2086528968
The result is below:
var number = -2086528968;
var bin = (number >>> 0).toString(2);
console.log(bin); // 10000011101000100001100000111000
pedro already answered this, but since this is a hack and not entirely intuitive I'll explain it.
I am performing bitwise operations, the result of which is apparently being stored as a two's complement number. When I hover over the variable it's stored in I see num = -2086528968
No, the result of most bit-operations is a 32bit signed integer. This means that the bit 0x80000000 is interpreted as a sign followed by 31 bits of value.
The weird bit-sequence is because of how JS stringifies the value, something like sign + Math.abs(value).toString(base);
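You can see that sign-plus-magnitude stringification directly:
const num = -2086528968;
console.log(num.toString(2));                 // -1111100010111011110011111001000
console.log('-' + Math.abs(num).toString(2)); // the same string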
How to deal with that? We need to tell JS to not interpret that bit as sign, but as part of the value. But how?
An easy-to-understand solution would be to add 0x100000000 to the negative numbers and therefore get their positive counterparts.
function print(value) {
  if (value < 0) {
    value += 0x100000000;
  }
  console.log(value.toString(2).padStart(32, "0"));
}
print(-2086528968);
print(-2086528968);
Another way would be to convert the lower and the upper bits separately:
function print(value) {
  var signBit = value < 0 ? "1" : "0";
  var valueBits = (value & 0x7FFFFFFF).toString(2);
  console.log(signBit + valueBits.padStart(31, "0"));
}
print(-2086528968);

// or lower and upper half of the bits:
function print2(value) {
  var upperHalf = (value >> 16 & 0xFFFF).toString(2);
  var lowerHalf = (value & 0xFFFF).toString(2);
  console.log(upperHalf.padStart(16, "0") + lowerHalf.padStart(16, "0"));
}
print2(-2086528968);
Another way involves the "hack" that pedro uses. Remember how I said that most bit operations return an int32? There is one operation that actually returns an unsigned 32-bit integer, the so-called zero-fill right shift (>>>).
So number >>> 0 does not change the bits of the number, but the first bit is no longer interpreted as sign.
function uint32(value) {
  return value >>> 0;
}
function print(value) {
  console.log(uint32(value).toString(2).padStart(32, "0"));
}
print(-2086528968);
print(-2086528968);
Will I run this shifting code only when the number is negative, or always?
Generally speaking, there is no harm in running nr >>> 0 on positive integers, but be careful not to overdo it.
Technically, JS only supports Numbers, which are double values (64-bit floating-point values). Internally, engines also use int32 values where possible, but no uint32 values. So when you convert your negative int32 into a uint32, the engine converts it to a double, and if you follow up with another bit operation, the first thing it does is convert it back.
So it's fine to do this when you need an actual uint32 value, as for printing the bits here, but you should avoid this conversion between operations, "just to fix it".

Comparing floating-point to integers in Javascript

So I ran across a small piece of code that looks like this
Math.random() * 5 | 0 and was confused by what it did.
After some inspecting, it seems like the operation turns the decimal into an integer. Is that right? So the piece of code is another way of saying "give me a random number between 0 and 4". Can anyone explain why that is?
1) The Math.random() function always returns a decimal value less than one, e.g. 0.2131313. From the documentation for random():
Returns a double value with a positive sign, greater than or equal to 0.0 and less than 1.0.
2) Math.random() * 5 will therefore always be less than 5 (maximum around 4.99999).
3) The bitwise operator | truncates the decimal part.
Edit: Paul is correct; | does more than just truncate.
But in this case, Math.random() * 5 | 0 truncates the decimal and returns the integer.
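Putting the three steps together (truncation is toward zero, so the result is always one of 0 to 4):
console.log(4.99 | 0);              // 4
console.log(0.2131313 * 5 | 0);     // 1
console.log(Math.random() * 5 | 0); // 0, 1, 2, 3 or 4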

Can someone please explain this JavaScript decimal rounding function in detail?

function Round2DecimalPlaces(l_amt) {
  var l_dblRounded = +(Math.round(l_amt + "e+2") + "e-2");
  return l_dblRounded;
}
Fiddle: http://jsfiddle.net/1jf3ut3v/
I'm mainly confused about how Math.round works with "e+2" and how adding the "+" sign to the beginning of Math.round makes any difference at all.
I understand the basic of the function; the decimal gets moved n places to the right (as specified by e+2), rounded with this new integer, and then moved back. However, I'm not sure what 'e' is doing in this situation.
eX is a valid part of a Number literal and means *10^X, just like in scientific notation:
> 1e1 // 1 * Math.pow(10, 1)
10
> 1e2 // 1 * Math.pow(10, 2)
100
And because of that, converting a string containing such a character sequence results in a valid number:
> var x = 2;
> Number(x + "e1")
20
> Number(x + "e2")
200
For more information, have a look at the MDN JavaScript Guide.
But of course the way this notation is used in your example is horrible. Converting values back and forth to numbers and strings is already bad enough, but it also makes it more difficult to understand.
Simply multiply or divide by a power of 10 instead.
The single plus operator coerces the string into a float. (See also: Single plus operator in javascript)
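As the answer above suggests, the same rounding can be written without the string round-trip; a minimal sketch (like the original, it is still subject to ordinary floating-point limits):
// Illustrative name; multiply, round, divide back.
function round2DecimalPlaces(amount) {
  return Math.round(amount * 100) / 100;
}

console.log(round2DecimalPlaces(2.34567)); // 2.35
console.log(round2DecimalPlaces(5.678));   // 5.68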
