Debugging a base conversion calculator up to base 809 - javascript

So my calculator is built for absolutely huge numbers. I am having a bit of fun here, making a base system based on the Pokédex. So for example, base 151 would include all the Pokémon from generation 1, and base 809 goes all the way to Melmetal from Pokémon Go. The number 0 is represented by an unhatched Pokémon egg.
I am running into a problem and cannot figure out what is wrong. Some of these symptoms may come from the same underlying issue, but I am unsure.
Symptom 1:
Currently on my screen, I have it set to base Pidgey (base 16), and I have input the base 10 number 80000006871717981.
My math gives me the following remainders, with the corresponding representative images
(1)(1)(12)(3)(7)(9)(5)(1)(7)(9)(14)(1)(8)(6)(0)
Bulbasaur Bulbasaur Butterfree Venusaur Squirtle Blastoise Charmeleon Bulbasaur Squirtle Blastoise Kakuna Bulbasaur Wartortle Charizard Egg
And the output from simply converting with toString(16) is 11c3795179e1860.
The output from the Windows 10 calculator is 11C3795179E185D, implying that the e/Kakuna, 6/Charizard, and 0/Egg are wrong.
Symptom 2:
Different large numbers will produce the same output.
1000000000000000001 and 1000000000000000000 both output as
(3)(458)(599)(562)(324)(484)(498)
Venusaur Mantyke Klang Yamask Torkoal Palkia Tepig
I feel like this is related to the size of the number, but I don't know how to prevent it.
For symptom 2, I have tried casting the value to a BigInt when I set the value for input, but that just gave me nothing; only output3 got anything calculated.
function calculate()
{
    document.getElementById('output').innerHTML = '';
    document.getElementById('output2').innerHTML = '';
    var base = {{base}}; // gets the base from the python code
    var input = document.getElementById('base10').value;
    document.getElementById('output3').innerHTML = parseInt(input).toString(16);
    var remainder = input % base;
    while(input > 0)
    {
        //alert(input + " % " + base + " = " + remainder);
        document.getElementById('output').innerHTML = '<img src="{{url_for('static', filename='images/')}}'+ remainder+'MS.png">' + document.getElementById('output').innerHTML; // adds the images of the pokemon
        document.getElementById('output2').innerHTML = '('+remainder+')' + document.getElementById('output2').innerHTML;
        input = parseInt(input / base);
        remainder = input % base;
    }
}
As for my expected results: as I mentioned, the built-in JS function agrees with my base 16 output, but the Windows 10 calculator says otherwise, and I don't know which one to trust.
And as far as the problems with large numbers go, I just need large base 10 numbers to stay exact, avoid scientific notation, and still be processable.
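(For comparison, here is a minimal sketch of how the remainder loop could be kept exact with BigInt, assuming the same element ids and the same Jinja-provided base as above; the img line is omitted, but it would interpolate remainder the same way. This is only a sketch, not a tested drop-in replacement.)

function calculateBigInt()
{
    document.getElementById('output2').innerHTML = '';
    var base = BigInt({{base}});                                  // base still comes from the python code
    var input = BigInt(document.getElementById('base10').value);  // assumes the field holds a plain integer string
    if (input === 0n) {
        document.getElementById('output2').innerHTML = '(0)';     // special-case zero, since the loop below never runs for it
        return;
    }
    while (input > 0n)
    {
        var remainder = input % base;                             // BigInt remainder, exact at any size
        document.getElementById('output2').innerHTML = '(' + remainder + ')' + document.getElementById('output2').innerHTML;
        input = input / base;                                     // BigInt division truncates, like parseInt(input / base)
    }
}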

Related

Javascript (Floating-Point Question): is there any reliable way to convert a fraction to a decimal AND THEN BACK AGAIN?

Say I have a very simple fraction (for the purposes of this question, presuppose all cases discussed will be 0 < [VALUE] < 1; that is: nothing like 8 7/16 or 1.625):
My starting fraction:
someFraction = '1/3'; // result: STRING value containing "1/3".
Okay easy enough to convert that to a decimal:
correspondingDecimal = eval(someFraction); // result: FLOAT value containing 0.3333333333333333
(yes, yes, "evils of eval" and all that. Work with me here; this is an example)
Now say I wanted it BACK again to "1/3". If this were 1/4 (0.25), no problem:
We'd want a simple greatest common divisor (GCD) function (to reduce the results to manageability), say
function reduce(numerator, denominator){
    let getGCD = (a,b) => b ? getGCD(b, a%b) : a;
    let gcd = getGCD(numerator, denominator);
    return (numerator/gcd) + '/' + (denominator/gcd);
}
...and then we can just grab out the decimal portion of our test string:
let justDecimalPart = ('' + 0.25).slice(2);
and multiply by 10 to the power of its length to get our numerator and our denominator to reduce:
let commonFactor = Math.pow(10,justDecimalPart.length); // Result: 100
// justDecimalPart = 25, commonFactor = 100
reduce((justDecimalPart * commonFactor), commonFactor); // Result: 1/4
... capital! That worked out fine!
EDIT: On a whim, I tried that same reduce function on the decimal values without multiplying them at all (reduce(0.25,1); // result: "1/4"). Sorry; brain-fart. Ignore the exponent bit above ☺️
...But 1/3 is a repeating decimal. If we run the same steps through, we wind up with 3333333333333333/10000000000000000 (((1/3) * 1e16) + '/' + 1e16). It's even worse with something like 17/29.
Is there any way to arrive BACK at "1/3" after taking the plunge from 1/3?
EDIT: I'm not trying to get to an infinite precision, just to a manageable one, preferably through throttling/limiting decimal lengths and then rounding/simplifying the result.
Use-case example here: I'm working on a carpentry calculator. 25mm, expressed in US Common units (to a precision of 3 decimal places) is identical to 63/64. I can only seem to arrive at 125/128.
(25/25.4).toPrecision(3) = 0.984
(63/64).toPrecision(3) = 0.984
It appears I was leaving out a step in my conversion; I wasn't rounding the result of the decimal multiplied against the denominator:
decimalValue = (63 / 64).toPrecision(3); // Result: 0.984
denominator = 128;
numerator = decimalValue * denominator; // Result: 125.952
numerator = Math.round(numerator); // Result: 126
reduce(numerator, denominator); // Result: 63/64
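For what it's worth, those rounding-then-reducing steps fold into one small helper. This is only a sketch, and it assumes you are willing to pick a maximum denominator up front (toFraction and maxDenominator are made-up names, not anything from the code above):

function toFraction(decimalValue, maxDenominator) {
    let getGCD = (a, b) => b ? getGCD(b, a % b) : a;
    let numerator = Math.round(decimalValue * maxDenominator); // snap to the nearest 1/maxDenominator
    let gcd = getGCD(numerator, maxDenominator);
    return (numerator / gcd) + '/' + (maxDenominator / gcd);
}

toFraction(25 / 25.4, 64); // "63/64"
toFraction(1 / 3, 3);      // "1/3" -- but only because the denominator family is known in advance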

How do bitwise AND, OR, and XOR work on negative signed integers?

I was just solving random problems on bitwise operators and trying various other combinations for my personal notes, and somehow I just cannot figure out the solution.
Say I wanted to check bitwise AND between two integers, or between a ~number and a negative number (~num1 & -num2), and various other combos. I can see the answer, but I haven't been able to establish how it happened.
Console:
console.log(25 & 3); outputs 1 (I can solve this easily).
console.log(-25 & -3); outputs -27.
Similarly
console.log(~25 & ~3); outputs -28.
console.log(25 & ~3); outputs -24.
console.log(~25 & 3); outputs -2.
console.log(~25 & -3); outputs -28.
console.log(-25 & ~3); outputs -28.
I know the logic behind console.log(25 & -3):
25 is 11001
-3 is 11101 (3 = 00011; the minus sign works like two's complement: invert, then add 1)
AND them together and you get 11001 = 25.
But I cannot make it work the same way when both numbers are negative, or in the other cases mentioned above. I have tried various combinations of numbers too, not just these two, but I cannot solve the problem. Can somebody explain the binary logic used in the cases I cannot solve?
(I've spent about 2 hrs here on SO looking for the answer and another 1 hr+ on Google, but I still haven't found it.)
Thanks and regards.
JavaScript specifies that bitwise operations on integers are performed as though they were stored in two's-complement notation. Fortunately, most computer hardware nowadays uses this notation natively anyway.
For brevity's sake I'm going to show the following numbers as 8-bit binary. They're actually 32-bit in JavaScript, but for the numbers in the original question, this doesn't change the outcome. It does, however, let us drop a whole lot of leading bits.
console.log(-25 & -3); //outputs -27. How?
If we write the integers in binary, we get (11100111 & 11111101) respectively. AND those together and you get 11100101, which is -27.
In your later examples, you seem to be using the NOT operator (~) and negation (-) interchangeably. You can't do that in two's complement: ~ and - are not the same thing. ~25 is 11100110, which is -26, not -25. Similarly, ~3 is 11111100, which is -4, not -3.
But when we put these together, we can work out the examples you gave.
console.log(~25 & ~3); //outputs -28. How?
11100110 & 11111100 = 11100100, which is -28
console.log(25 & ~3); //outputs -24. How?
00011001 & 11111100 = 00011000, which is 24
console.log(~25 & 3); //outputs -2. How?
11100110 & 00000011 = 00000010, which is 2
console.log(~25 & -3); //outputs -28. How?
11100110 & 11111101 = 11100100, which is -28
console.log(-25 & ~3); //outputs -28. How?
11100111 & 11111100 = 11100100, which is -28
The real key to understanding this is that you don't really use bitwise operations on integers. You use them on bags of bits of a certain size, and these bags of bits happen to be conveniently representable as integers. This is key to understanding what's going on here, because you've stumbled across a case where the difference matters.
There are specific circumstances in computer science where you can manipulate bags of bits in ways that, by coincidence, give the same results as if you'd done particular mathematical operations on numbers. But this only works in specific circumstances, and they require you to assume certain things about the numbers you're working on, and if your numbers don't fit those assumptions, things break down.
This is one of the reasons Donald Knuth said "premature optimization is the root of all evil". If you want to use bitwise operations in place of actual integer math, you have to be absolutely certain that your inputs will actually follow the assumptions required for that trick to work. Otherwise, the results will start looking strange when you start using inputs outside of those assumptions.
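If it helps to see the tables above in the console, here is a small helper (not part of the original code, just an illustration) that prints the low 8 bits of any value:

const bits8 = n => (n & 0xFF).toString(2).padStart(8, '0');

console.log(bits8(-25), bits8(-3), bits8(-25 & -3)); // 11100111 11111101 11100101
console.log(~25, -25, ~3, -3);                       // -26 -25 -4 -3   (~ and - are not the same)
console.log(-25 & -3, ~25 & ~3, 25 & ~3, ~25 & 3);   // -27 -28 24 2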
25 = 16+8+1 = 0b011001; I've added another 0 digit as the sign digit. In practice you'll have at least 8 binary digits,
but the two's complement math is the same. To get -25 in 6-bit two's complement, you'd do -25 = ~25 + 1 = 0b100111.
3 = 2+1 = 0b000011; -3 = ~3 + 1 = 0b111101
When you & the two, you get:
  -25 = ~25 + 1 = 0b100111
&  -3 =  ~3 + 1 = 0b111101
______________________
                  0b100101
The leftmost bit (sign bit) is set, so it's a negative number. To find what it's a negative of, you reverse the process: first subtract 1 and then apply ~.
~(0b100101 - 1) = 0b011011
That's 1+2+0*4+8+16 = 27, so -25 & -3 = -27.
For 25 & ~3, it's:
   25 = 16+8+1 = 0b011001
&  ~3          = 0b111100
______________________
                 0b011000 = 24
For ~25 & 3, it's:
  ~25 = 0b100110
&   3 = 0b000011
______________________
        0b000010 = 2
For ~25 & -3, it's:
  ~25          = 0b100110
& -3 = ~3 + 1  = 0b111101
______________________
                 0b100100  # negative
# find what it's a negative of:
~(0b100100 - 1) = ~0b100011 = 0b011100 = 4+8+16 = 28
so 0b100100 = -28
-27 has 6 binary digits in it, so you should be using numbers with at least that many digits. With 8-bit numbers we have:
00011001 = 25
00000011 = 3
00011011 = 27
and:
11100111 = -25
11111101 = -3
11100101 = -27
Now -25 & -3 = -27 because 11100111 & 11111101 = 11100101
The binary string representation of a 32 bit integer can be found with:
(i >>> 0).toString(2).padStart(32, '0')
The bitwise anding of two binary strings is straightforward
The integer value of a signed, 32 bit binary string is either
parseInt(bitwiseAndString, 2)
if the string starts with a '0', or
-~parseInt(bitwiseAndString, 2) - 1
if it starts with a '1'
Putting all that together:
const tests = [
    ['-25', '-3'],
    ['~25', '-3'],
    ['25', '~3'],
    ['~25', '3'],
    ['~25', '~3'],
    ['-25', '~3']
]

const output = (s, t) => { console.log(`${`${s}:`.padEnd(20, ' ')}${t}`); }

const bitwiseAnd = (i, j) => {
    console.log(`Calculating ${i} & ${j}`);
    const bitStringI = (eval(i) >>> 0).toString(2).padStart(32, '0');
    const bitStringJ = (eval(j) >>> 0).toString(2).padStart(32, '0');
    output(`bit string for ${i}`, bitStringI);
    output(`bit string for ${j}`, bitStringJ);
    const bitArrayI = bitStringI.split('');
    const bitArrayJ = bitStringJ.split('');
    const bitwiseAndString = bitArrayI.map((s, idx) => s === '1' && bitArrayJ[idx] === '1' ? '1' : '0').join('');
    output('bitwise and string', bitwiseAndString);
    const intValue = bitwiseAndString[0] === '1' ? -~parseInt(bitwiseAndString, 2) - 1 : parseInt(bitwiseAndString, 2);
    if (intValue === (eval(i) & eval(j))) {
        console.log(`integer value: ${intValue} ✓`);
    } else {
        console.error(`calculation failed: ${intValue} !== ${eval(i) & eval(j)}`);
    }
}

tests.forEach(([i, j]) => { bitwiseAnd(i, j); })

Var won't exceed billion

EDIT
I tested something out, and apparently, this:
if(info.max.value == "") {maxdesiredvalue = 999999999999999999999}
returns this in the Chrome console:
> maxdesiredvalue
< 999999999
So I believe the problem really comes from there... is there a maximum number of digits we can attribute to a variable?
I've been into JavaScript for a few months now, and I've made a program that generates random weapons for a tabletop RPG.
Every weapon generated has a price relative to its attributes. My problem today is that this price won't exceed 9 digits (it cannot reach a billion), even though it should be able to.
In my generator, it is possible to choose certain properties before generating the weapon. If I intentionally try to generate something worth over a billion gold, it will crash instantly. On the other hand, if there is any way the weapon can be generated without exceeding the billion, it will do so.
For example, the most expensive metal is the "Residuum". The only 2 weapons that can be generated in Residuum are the dart and the shuriken, since they only use 1/16 of an Ingot. Therefore, if I set the metal to be Residuum, they will be the only 2 possible generated weapons. From this point, if I try to specify I want a Residuum Sword, it will simply crash as explained earlier.
In my generation options, I also have a text input for the user to choose a minimum value and/or a maximum value for the weapon. I set the default max value to Infinity, so it shouldn't be a problem.
function desiredvalue(){
    if(info.max.value == "") {maxdesiredvalue = Infinity}
    else {maxdesiredvalue = parseInt(info.max.value)}
    if(info.min.value == "") {mindesiredvalue = 0}
    else {mindesiredvalue = parseInt(info.min.value)}
}
In my html:
Min price: <input type="text" name="min" value="" onchange="desiredvalue()">
Max price: <input type="text" name="max" value="" onchange="desiredvalue()">
I already tried to deactivate this function to see if it was the problem, but even without a specific max value, weapons still won't be generated if their value exceeds 9 digits.
Maybe the problem lies inside the value formula, so here it is, even though it might not be much help since it is all built up from variables.
WeapValue = ((((IngotValue * Ingots) + CraftTime + (actualEnchantTime * 3) + (LS * 0.02) + (R * 0.05) + BS + (FTH * 0.03)) * (((BPArace + BPAstatus + BPAlevel + ((BPAcrit1 + 1) * BPAcrit2)) / 100) + 1)) + PAenchant + PAaugment1 + PAaugment2 + PAaugment3)
Also the value is modified afterwards to fit in gold, silver or copper...
WeapValue.toLocaleString('en-US', {minimumFractionDigits: 0});
WeapValue = WeapValue.toFixed(2);
if (WeapValue >= 2) {WeapValue2 = Math.ceil(WeapValue); goldtype = " GP"}
else if (WeapValue < 2 && WeapValue >= 1) {WeapValue2 = WeapValue * 10; goldtype = " SP"}
else if (WeapValue < 1 && WeapValue >= 0) {WeapValue2 = WeapValue * 100; goldtype = " CP"}
Nothing else in the script really changes the value, and all the variables affecting it are defined earlier. I don't really think they are the problem, since they actually seem to work (they simply make the price exceed 9 digits).
If you have any questions about the script, I'm here to answer, but I can't post the full script since it is very, very long (2543 lines)...
If anyone has an idea of how I can deal with my problem, it would be much appreciated! Again, I'm not a JavaScript expert, but I did my best and looked a lot on the Internet for help, and I still can't get rid of this problem...
Thank you!
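Not a diagnosis of the full script, but here is a quick illustration of the integer limit that is probably in play (JavaScript numbers are IEEE 754 doubles, so integers are only exact up to Number.MAX_SAFE_INTEGER, about 9 * 10^15):

console.log(Number.MAX_SAFE_INTEGER);           // 9007199254740991 (16 digits)
console.log(999999999999999999999);             // 1e+21, printed in scientific notation
console.log(999999999999999999999 + 1 ===
            999999999999999999999);             // true: adding 1 is lost at this magnitude
console.log(parseInt("999999999999999999999")); // 1e+21 as well; parseInt cannot restore the lost precision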

Where is my logic going wrong in trying to calculate the presidential outcome?

Let me explain what I'm trying to do. I have data like
const dataByState = {
    'Washington' : { ElectoralVotes : 12, RChance: 54, DChance: 46 },
    'Oregon': { ElectoralVotes: 7, RChance: 51, DChance: 49 },
    .
    .
    .
    'Hawaii' : { ElectoralVotes: 4, RChance : 40, DChance: 60 }
};
where one of the above key-value pairs like
'Hawaii' : { ElectoralVotes: 4, RChance : 40, DChance: 60 }
means "In the state Hawaii, which has 4 electoral votes, there is a 40% chance of the Republican Candidate winning and a 60% chance of the Democrat candidate winning". What I'm ultimately trying to do is calculate the chance of each candidate winning the election. How this would be done in a perfect world is
Iterate through all 2^51 combinations of states
For each combination c, if its combined electoral votes are greater than or equal to 270, add it to a collection C of collections of states
For the Republican candidate, sum up the probabilities of winning each combination of states in C; call that value r. That's his/her chance of winning. The Democrat's chance is 1 - r.
But since I can't go through all 2^51, what I'm doing is choosing some N smaller than 51 and doing
Find 2^N random combinations of states whose combined electoral votes sum to greater than or equal to 270; call this collection C.
For the Republican candidate, sum up the probabilities of winning each combination of states in C; call that value r. Multiply r by 2^(51-N). That's approximately his/her chance of winning. The Democrat's chance is 1 - r.
Anyhow, this doesn't seem to be working, and I'm wondering whether my logic is wrong (I haven't taken statistics since college 3 years ago) or if I'm running into rounding errors. I'm getting a near 100% chance of the Republican winning (i.e. America being made great again) when I make the chance even in every state, which is wrong because it should calculate to about 50/50.
Code dump: https://jsfiddle.net/pqhnwek9/
The probability of a republican victory is
probRepVict = 0
for(combination in combinations) {
    if(combination is republican victory) {
        probRepVict += probability of combination
    }
}
As you observe it is not feasible to calculate the entire sum. Hence, you choose some subset C to try to estimate this probability.
N = number of combinations // 2^51
n = size of C
probRepVictEstimate = 0
for(combination in C) {
    if(combination is republican victory) {
        probRepVictEstimate += probability of combination
    }
}
probRepVictEstimate *= N/n
In the last statement we assume that the probability of a victory scales linearly with the size of the subset.
I believe the logic goes wrong at several places in the script:
(1) When generating the random number you might not get sufficiently many bits of randomness. For instance, if there were 54 states you would be outside of the safe integer range. Some implementations might give you even fewer bits of randomness (it did break for me in Node, which only gives 32 bits). Thus I suggest adding a function
function getRandom() {
    // Generate 32 random bits
    var s = Math.floor(Math.random()*Math.pow(2, 32)).toString(2)
    return new Array(32 - s.length + 1).join("0") + s
}
Replacing
const rand = Math.floor(Math.random() * Math.pow(2,states.length));
with const rand = getRandom() + getRandom();, and replacing getCombo with
const getCombo = (i) => {
    let combo = [];
    for(var j = 0; j < states.length; ++j)
        if(i[j] == "0")
            combo.push(states[j]);
    return combo;
}
(2) You need to count both wins and losses for the republican party to be able to estimate the probability. Thus you cannot add the complement of a combo (by the way, ~ is a bitwise operation, hence it converts the operand to a 32-bit integer, so your code does not work as intended). Hence your code should be simplified to:
...
if(!winningCombos.hasOwnProperty(rand)) {
    const stateCombo = getCombo(rand);
    if(hasSufficientVotes(stateCombo))
    {
        winningCombos[rand] = stateCombo;
        ++wins;
    }
    ++count;
}
...
(3) You should scale repubChanceSum by N/n, where N = Math.pow(2, 51) and n = limit. Note that limit should be considerably greater than winningCombos.length.
With these modifications the code correctly predicts a ~50% probability. See this modified fiddle.
Let's hope we get a more optimistic outlook for the future with more realistic probabilities.
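For reference, here is a self-contained sketch of the N/n scaling estimator described above, using a made-up three-state dataset so it runs on its own. It samples combinations uniformly (with replacement, for simplicity) rather than collecting distinct winning combos as in the fiddle:

const dataByState = {
    A: { ElectoralVotes: 10, RChance: 50, DChance: 50 },
    B: { ElectoralVotes: 10, RChance: 50, DChance: 50 },
    C: { ElectoralVotes: 10, RChance: 50, DChance: 50 }
};
const states = Object.keys(dataByState);
const totalVotes = states.reduce((sum, s) => sum + dataByState[s].ElectoralVotes, 0);
const threshold = totalVotes / 2 + 1;  // stand-in for 270 in the real data

const N = Math.pow(2, states.length);  // total number of combinations
const n = 10000;                       // sample size

let probRepVictEstimate = 0;
for (let k = 0; k < n; ++k) {
    // pick a combination uniformly at random: each state is in or out with probability 1/2
    const repStates = states.filter(() => Math.random() < 0.5);
    const votes = repStates.reduce((sum, s) => sum + dataByState[s].ElectoralVotes, 0);
    if (votes >= threshold) {
        // probability that exactly this combination of state results occurs
        const p = states.reduce((prod, s) => prod * (repStates.includes(s)
            ? dataByState[s].RChance / 100
            : dataByState[s].DChance / 100), 1);
        probRepVictEstimate += p;
    }
}
probRepVictEstimate *= N / n;
console.log(probRepVictEstimate); // ~0.5 with the even 50/50 chances above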

toFixed Isn't Doing Anything

I'm teaching myself JavaScript and have run into a problem with toFixed(). I'm working through an amortization calculator, and one of the steps returns a number with a huge number of decimal places. I'm trying to cut it down to 4 decimal places.
Be advised the sample code has a lot of explanatory HTML in it; it's only there so that I can work through the steps of the equation. Also, when I add one to the very long number, it appends the numeral one to the end of the scientific notation.
var paymentamount;
var principal=250000;
var interestrate = 4.5;
var annualrate = interestrate/12;
var numberofpayments = 360;
document.write("This is the annuitized interest rate: "+ annualrate +"%");
document.write("<h3> Now we add 1 to the annualized interest rate</h3>");
var RplusOne = annualrate + 1;
document.write("<p> This is One Added to R: " + RplusOne + "%");
document.write("<h3>Next RplusOne is Raised to the power of N </h3>");
var RRaised = (Math.pow(RplusOne, numberofpayments)).toFixed(4);
document.write("<p>This gives us the following very long number, even thought it shouldn't: " + RRaised);
document.write("<h3>Now we add one to the very long number </h3>");
var RplusOne = RRaised + 1;
document.write("<p>Now we've added one: " + RplusOne);
From MDN's documentation:
If number is greater than 1e+21, this method simply calls Number.prototype.toString() and returns a string in exponential notation.
The problem is that you are using 4.5 as your interest rate instead of 0.045, so doing this:
Math.pow(4.5 / 12 + 1, 360)
gives you a huge number (6.151362770461608e+49 or 6.15 * 10^49 to be exact). Change your interest rate to 0.045 and you will get what you are expecting.
As for the var RplusOne = RRaised + 1 line, the problem here is that RRaised is a string because of toFixed. I would only call toFixed when you're displaying things, and not at any other time; the primary reason for this would be to avoid rounding errors in subsequent calculations, but has the added benefit that your variables remain numbers and not strings.
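A quick illustration of that last point (the value here is arbitrary, just something with a long decimal expansion):

const x = 2 / 3;
console.log(x.toFixed(4));             // "0.6667" -- a string
console.log(x.toFixed(4) + 1);         // "0.66671" -- string concatenation, not addition
console.log(Number(x.toFixed(4)) + 1); // 1.6667 -- works, but better to keep numbers as numbers and format only for display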
