How to avoid "Infinity" and console.log a large number in JavaScript?

I am trying to find the first number in the Fibonacci sequence to contain over 1000 digits.
Given a number n (e.g. 4), I found a way to find the place of the first number with n digits in the Fibonacci sequence, as well as a way to find the number given its place in the sequence.
Say, for example, you need to know the first number with 4 digits in the Fibonacci sequence as well as its place in the sequence. My code would work like this:
var phi = (1 + Math.sqrt(5)) / 2;
var nDigits = 4;
var fEntry = Math.ceil(2 + Math.log(Math.pow(10, nDigits - 1)) / Math.log(phi));
var fNumber = Math.round(Math.pow(phi, fEntry) / Math.sqrt(5));
console.log(fEntry);
console.log(fNumber);
In the console you would see fEntry (that is, the place the number has in the Fibonacci sequence) and fNumber (the number you're looking for). If you want to find the first number with 4 digits and its place in the sequence, for example, you'll get number 1597 at place 17, which is correct.
So far so good.
Problems arise when I want to find big numbers. I need to find the first number with 1000 digits in the Fibonacci sequence, but when I write nDigits = 1000 and run the code, the console displays "Infinity" for fEntry and for fNumber. I guess the reason is that my code involves calculations with numbers larger than what JavaScript can handle.
How can I find that number and avoid Infinity?

How can I find that number and avoid Infinity?
You can't, with the number type. Although it can hold massive values, it loses integer accuracy after Number.MAX_SAFE_INTEGER (9,007,199,254,740,991):
const a = Number.MAX_SAFE_INTEGER;
console.log(a); // 9007199254740991
console.log(a + 1); // 9007199254740992, so far so good
console.log(a + 2); // 9007199254740992, oh dear...
You can use the new BigInt on platforms that support it. Alternatively, you can use any of several "big int" libraries that store the numbers as strings of digits (literally).
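For example, here is a minimal sketch of the BigInt approach, iterating through the sequence instead of using the closed form (the variable names are my own; it assumes an engine with BigInt support):
var prev = 1n, curr = 1n, index = 2; // F(1) = F(2) = 1
while (String(curr).length < 1000) {
  var next = prev + curr; // BigInt addition never overflows to Infinity
  prev = curr;
  curr = next;
  index++;
}
console.log(index);        // the term's place in the sequence
console.log(String(curr)); // the first 1000-digit Fibonacci number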

Related

How can I parse a string as an integer and keep decimal places if they are zeros?

I have these strings: "59.50" & "30.00"
What I need to do is convert them to integers but keep the trailing zeros at the end to effectively return:
59.50
30.00
I've tried:
Math.round(59.50 * 1000) / 1000
Math.round(30.00 * 1000) / 1000
but ended up with
59.5
30
I'm assuming I need to use a different method than Math.round as this automatically chops off trailing zeros.
I need to keep these as integers as they need to be multiplied with other integers and keep two decimal places. I thought this would be fairly straightforward, but after a lot of searching I can't seem to find a solution to exactly what I need.
Thanks!
Your premise is flawed. If you parse a number, you are converting it to its numerical representation, which by definition doesn't have trailing zeros.
A further flaw is that you seem to think you can multiply two numbers together and keep the same number of decimal places as the original numbers. That barely makes sense.
It sounds like this might be an XY Problem, and what you really want to do is just have two decimal places in your result.
If so, you can use .toFixed() for this:
var num = parseFloat("59.50");
var num2 = parseFloat("12.33");
var num3 = num * num2;
console.log(num3.toFixed(2)); // 733.64
Whenever you want to display the value of the variable, use Number.prototype.toFixed(). This function takes one argument: the number of decimal places to keep. It returns a string, so call it right before displaying the value to the user.
console.log((123.4567).toFixed(2)); // logs "123.46" (rounded)
To keep the decimals, multiply the string by 1:
"33.01" * 1 // equals 33.01
It seems you are trying to retain the number of decimal places the string had, so a better solution would be something like:
parseFloat(string).toFixed(string.split('.')[1].length);
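For instance, wrapped in a small helper (keepPlaces is a hypothetical name, and this sketch assumes the string always contains a decimal point):
// Keep as many decimal places as the source string had.
function keepPlaces(str) {
  return parseFloat(str).toFixed(str.split('.')[1].length);
}
console.log(keepPlaces("59.50")); // "59.50"
console.log(keepPlaces("30.00")); // "30.00"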
If you want numbers with decimal points, you are not talking about integers (which are whole numbers) but floating point numbers.
In JavaScript, all numbers are represented as floating point numbers.
You don't need the trailing zeros to do calculations. As long as you've got all the significant digits, you're fine.
If you want to output your result with a given number of decimal values, you can use the toFixed method to transform your number into a formatted string:
var num = 1.5
var output = num.toFixed(2) // '1.50'
// the number is rounded
num = 1.234
output = num.toFixed(2) // '1.23'
num = 1.567
output = num.toFixed(2) // '1.57'
Here's a more detailed description of toFixed: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/toFixed

JavaScript toString limits

So my problem is this: I'm writing a program that checks whether a number is even or odd without division. So I decided to take the number and turn it into a string with the
number.toString()
method. The problem I'm having is that if you put a number that is about 17 or more digits long the string is correct for about the first 17 digits then it's just 0's and sometimes 2's. For example,
function toStr(number) {
  return number.toString(10);
}

console.log(toStr(123456789123456789));
prints,
123456789123456780
any ideas?
The problem has nothing to do with strings or your function at all. Try going to your console and just entering the expression 123456789123456789 and pressing return.
You will likewise obtain 123456789123456780.
Why?
The expression 123456789123456789 within the JavaScript language is interpreted as a JavaScript number type, which can only be represented exactly to a certain number of base two significant figures. The input number happens to have more significant digits when expressed in base two than the number of base two significant figures available in JavaScript's representation of a number, and so the value is automatically rounded in base two as follows:
123456789123456789 =
110110110100110110100101110101100110100000101111100010101 (base two)
123456789123456780 =
110110110100110110100101110101100110100000101111100001100 (base two)
Note that you CAN accurately represent some numbers larger than a certain size in JavaScript, but only those numbers with no more significant figures in base two than JavaScript has room for. For instance, a large power of two, which has only one significant figure in base two.
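A quick sketch of that point (my own example, not from the answer):
// 2^60 has a single significant base-two digit, so it is stored exactly,
// even though it is far above Number.MAX_SAFE_INTEGER.
var big = Math.pow(2, 60);
console.log(big + 1 === big); // true: adjacent doubles here are 256 apart
console.log(big % 2);         // 0: still an exact, even integer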
If you are designing this program to accept user input from a form or dialog box, then you will receive the input as a string. You only need to check the last digit in order to determine if the input number is odd or even (assuming it is indeed an integer to begin with). The other answer has suggested the standard way to obtain the last character of a string as well as the standard way to test if a string value is odd or even.
If you go beyond the point where JavaScript integers stop being exact (2^53 = 9007199254740992), you are asking for trouble: http://ecma262-5.com/ELS5_HTML.htm.
So to solve this problem, you must treat it as a string only. Then extract the last digit in the string and use it to determine whether the number is even or odd.
if (parseInt("123456789123456789".slice(-1), 10) % 2) {
  // odd
} else {
  // even
}
It's a 64-bit floating point number, per the IEEE 754 specification. A consequence of this spec is that, starting at 2^53, the smallest distance between two representable numbers is 2.
var x = Math.pow(2, 53);
console.log(x == x + 1); // true
This difference is the value of the unit in the last place, or ULP.
This is similar in principle to trying to store fractional values in integral types in other languages; values like .5 can't be represented, so they are discarded. With integers, the ULP value is always 1; with floating point, the ULP value depends on how big or small the number you're trying to represent.
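A short illustration of how the ULP grows with magnitude (Number.EPSILON is the ULP of 1):
console.log(Number.EPSILON === Math.pow(2, -52));     // true
console.log(1 + Number.EPSILON > 1);                  // true: smallest step above 1
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53)); // true: the step is now 2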

Why this randomization method gives skewed results?

I'm using two different randomization methods; one gives results whose variance is what I expect, while the other gives results that are skewed, and in a very consistent way too.
The methods:
function randomA() {
  var raw = Number((Math.random() + '').substr(2));
  return raw % NUM_OF_POSSIBLES;
}

function randomB() {
  var raw = Math.round(Math.random() * 10000);
  return raw % NUM_OF_POSSIBLES;
}
When NUM_OF_POSSIBLES = 2, the first method (randomA()) produces a rather consistent 64% zeros and 36% ones, while randomB() is pretty much 50/50.
If NUM_OF_POSSIBLES = 5 the first method again is skewed in a pretty consistent way:
0: 10%, 1: 23%, 2: 22%, 3: 22%, 4: 23%, while the second one gives around 20% to each result.
You can find the full code with multiple tests here: jsfiddle
Why is the first method skewed, and also why is the skewing consistent?
I'm not entirely sure, but my guess is that it has to do with the rounding mode used when JavaScript formats a number as a string. In the first case, your result depends on the choice of last digit, which is sensitive to this rounding. If it's biased toward even numbers, that would explain your results. (In the case of NUM_OF_POSSIBLES == 5, it would be because of a deficit of 5s as the last digit.) In the second routine, the result depends on an intermediate digit of the string representation, which is pretty much isolated from that influence.
You might have better results by chopping off the last digit or two in the first routine.
EDIT I just confirmed experimentally that if the first routine is changed to chop off the last digit:
function randomA() {
  var raw = String(Math.random());
  raw = raw.substring(2, raw.length - 1);
  return raw % NUM_OF_POSSIBLES;
}
then the bias appears to be gone when NUM_OF_POSSIBLES == 2 or 5.
I have found that the reason randomA works this way is that JavaScript uses floating point numbers with 52+1 binary digits of precision (see the table under the Basic formats chapter of IEEE 754). So when the extracted digits form too big a value, it is rounded; for example,
Math.pow(2, 54) + 1  // 18014398509481984
Math.pow(2, 54)      // 18014398509481984
Math.pow(2, 54) - 1  // 18014398509481984
all return the same value, which is divisible by 2 (because of the rounding).
For more understanding, you can play with the binary format and see how it looks; examples:
parseInt(Math.pow(2, 54) - 2).toString(2)
// "111111111111111111111111111111111111111111111111111110"
parseInt(Math.pow(2, 54) - 3).toString(2)
// "111111111111111111111111111111111111111111111111111100"
parseInt(Math.pow(2, 54)).toString(2)
// "1000000000000000000000000000000000000000000000000000000"
parseInt(Math.pow(2, 54) - 1).toString(2)
// "1000000000000000000000000000000000000000000000000000000"
You get the bias because the results of Math.random() aren't always the same length, so, for example, 0.123 and 0.1235 both count towards the "ones" heap (their last digits are both odd).
You could think that it'd be corrected if you even out the lengths with trailing zeroes, but that also won't be correct, because 0.123 could be a rounded 0.122999999999.
The real error of the first method is relying on the least significant digit of an imprecise fraction (both %2 and %5 are only affected by the last digit), which had suffered rounding errors when converted from binary to decimal for presentation.
The original, binary form of the fraction is probably uniformly distributed, but there's no way of reading it in Javascript.
Now, if someone would explain the distribution of trailing digits of a rounded decimal fraction...
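One way to see this for yourself is to tally the final digit of many stringified Math.random() values (a rough experiment of my own, not from the answers above):
var counts = Array(10).fill(0);
for (var i = 0; i < 100000; i++) {
  var s = String(Math.random());
  counts[Number(s[s.length - 1])]++;
}
console.log(counts); // the spread of last digits is typically far from uniform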

JavaScript Math.floor: how guarantee number will round down?

I want to normalize an array so that each value is in [0, 1), i.e. "the max will never be 1 but the min can be 0."
This is not unlike the random function returning numbers in the same range.
While looking at this, I found that .99999999999999999 === 1 is true!
Ditto (1 - Number.MIN_VALUE) === 1. But Math.ceil(Number.MIN_VALUE) is 1, as it should be.
Some others: Math.floor(.999999999999) is 0
while Math.floor(.99999999999999999) is 1
OK so there are rounding problems in JS.
Is there any way I can normalize a set of numbers to lie in the range [0,1)?
It may help to examine the steps that JavaScript performs for each of your expressions.
In .99999999999999999===1:
The source text .99999999999999999 is converted to a Number. The closest Number is 1, so that is the result. (The next closest Number is 0.99999999999999988897769753748434595763683319091796875, which is 1 - 2^-53.)
Then 1 is compared to 1. The result is true.
In (1-Number.MIN_VALUE) === 1:
Number.MIN_VALUE is 2^-1074, about 5e-324.
1 - 2^-1074 is extremely close to one. The exact value cannot be represented as a Number, so the nearest value is used. Again, the nearest value is 1.
Then 1 is compared to 1. The result is true.
In Math.ceil(Number.MIN_VALUE):
Number.MIN_VALUE is 2^-1074, about 5e-324.
The ceiling function of that value is 1.
In Math.floor(.999999999999):
The source text .999999999999 is converted to a Number. The closest Number is 0.99999999999900002212172012150404043495655059814453125, so that is the result.
The floor function of that value is 0.
In Math.floor(.99999999999999999):
The source text .99999999999999999 is converted to a Number. The closest Number is 1, so that is the result.
The floor function of 1 is 1.
There are only two surprising things here, at most. One is that the numerals in the source text are converted to internal Number values. But this should not be surprising. Of course text has to be converted to internal representations of numbers, and the Number type cannot perfectly store all the infinitely many numbers. So it has to round. And of course numbers very near 1 round to 1.
The other possibly surprising thing is that 1-Number.MIN_VALUE is 1. But this is actually the same issue: The exact result is not representable, but it is very near 1, so 1 is used.
The Math.floor function works correctly. It never introduces any error, and you do not have to do anything to guarantee that it will round down. It always does.
However, since you want to normalize numbers, it seems likely you are going to divide numbers at some point. When you divide, there may be rounding problems, because many results of division are not exactly representable, so they must be rounded.
However, that is a separate problem, and you have not given enough information in this question to address the specific calculations you plan to do. You should open a separate question for it.
JavaScript will treat any number literal between roughly 0.999999999999999994 and 1 as 1, so just subtract .000000000000000006.
Of course that's not as easy as it sounds, since subtracting .000000000000000006 from a number near 1 has no effect (it is far smaller than the gap between adjacent doubles there), so you could do something like:
function trueFloor(x)
{
  x = x * 100;
  if (x > .0000000000000006)
    x = x - .0000000000000006;
  x = Math.floor(x / 100);
  return x;
}
EDIT: Or at least you'd think you could. Apparently JS parses .99999999999999999 as 1 before passing it to a function, so you'd have to try something like:
trueFloor("0.99999999999999999")
function trueFloor(str)
{
  var x = str.substring(0, 9) + 0; // string concatenation, appends "0"
  return Math.floor(x); // => 0
}
Not sure why you'd need that level of precision, but in theory, I guess it works. You can see a working fiddle here
As long as you cast your insanely precise float as a string, that's probably your best bet.
Please understand one thing: this...
.999999999999999999
... is just a Number literal. Just as
.999999999999999998
.999999999999999997
.999999999999999996
...
... you see the pattern.
How JavaScript treats these literals is completely another story. And yes, this treatment is limited by the number of bits that can be used to store a Number value.
The number of possible floating point literals is infinite by definition, no matter how small the range set for them. For example, take the ones shown above: how many numbers very close to 1 can you express? Right, it's infinite: just keep appending 9s to the line.
But the container for each Number value is quite finite: it has 64 bits. That means it can store 2^64 different values (Infinity, -Infinity and NaN among them), and that's all.
You want to work with such literals anyway? Use Strings to store them, not Numbers - and some BigMath JS library (take your pick) to work with those values - as Strings, again.
But from your question it looks like you're not, as you talked about an array of Numbers (Number values, that is). And there is no way .999999999999999999 can be stored there, as there is no such Number value in JavaScript.

Influence Math.random()

I'm looking for a way to influence Math.random().
I have this function to generate a number from min to max:
var rand = function(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
};
Is there a way to make it more likely to get a low and high number than a number in the middle?
For example, rand(0, 10) would return more 0s, 1s, 9s and 10s than the rest.
Is there a way to make it more likely to get a low and high number than a number in the middle?
Yes. You want to change the distribution of the numbers generated.
http://en.wikipedia.org/wiki/Random_number_generation#Generation_from_a_probability_distribution
One simple solution would be to generate an array with, say, 100 elements.
In those 100 elements, represent the numbers you are interested in more frequently.
As a simple example, say you wanted the numbers 1 and 10 to show up more frequently: you could overrepresent them in the array, i.e. have the number 1 in the array 20 times, the number 10 in the array 20 times, and the rest of the numbers distributed evenly. Then use a random number between 0 and 99 as the array index. This will increase your probability of getting a 1 or a 10 versus the other numbers.
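A minimal sketch of that pool idea for rand(0, 10) (the exact weights here are my own illustration):
// Overrepresent 1 and 10 in a pool, then pick uniformly from the pool.
var pool = [];
for (var v = 0; v <= 10; v++) {
  var copies = (v === 1 || v === 10) ? 20 : 5; // illustrative weights
  for (var i = 0; i < copies; i++) pool.push(v);
}
function weightedRand() {
  return pool[Math.floor(Math.random() * pool.length)];
}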
You need a distribution map, mapping from the random output [0, 1] to your desired distribution outcome: e.g. [0, .3) yields 0, [.3, .5) yields 1, and so on.
Sure. It's not entirely clear whether you want a smooth rolloff so (for example) 2 and 8 are returned more often than 5 or 6, but the general idea works either way.
The typical way to do this is to generate a larger range of numbers than you'll output. For example, let's start with 5 as the baseline, occurring with frequency N. Let's assume that you want 4 or 7 to occur at frequency 2N, 3 or 8 at frequency 3N, 2 or 9 at frequency 4N, and 0 or 10 at frequency 5N.
Adding those up, we need values from 1 to 29 (or 0 to 28, or whatever) from the generator. Any of the first 5 gives an output of 0. Any of the next 4 gives an output of 1. Any of the next 3 gives an output of 2, and so on.
Of course, this doesn't change the values returned by the original generator -- it just lets us write a generator of our own that produces numbers following the distribution we've chosen.
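A sketch of that banding technique (the weights below are symmetric around 5 but are my own choice, not exactly the frequencies above):
// Output v occurs with weight weights[v]; a uniform draw is walked
// through the cumulative bands.
function skewedRand() {
  var weights = [5, 4, 3, 2, 1, 1, 1, 2, 3, 4, 5]; // outputs 0..10
  var total = 31; // sum of the weights
  var r = Math.floor(Math.random() * total);
  for (var v = 0; v < weights.length; v++) {
    if (r < weights[v]) return v;
    r -= weights[v];
  }
}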
Not really. There is a sequence of numbers that is generated based on the seed. Your random numbers come from that sequence; when you call random, you are grabbing its next element.
Can you influence the output of Math.random in javascript (which runs client side)?
No. At least not in any feasible/practical manner.
But what you could do is to create your own random number generator that produces number in the distribution that you need.
There are probably an infinite number of ways of doing it, and you might want to think about the exact shape/curvature of the probability function.
It can probably be done in one line, but here is a multi-line approach that uses your existing function definition (named rand, here):
var dd = rand(1, 5) + rand(0, 5);
var result;
if (dd > 5)
  result = dd - 5;
else
  result = 6 - dd;
One basic result is that if U is a random variable with uniform distribution on [0, 1] and F is the cumulative distribution function you want to sample from, then Y = G(U), where G is the inverse of F, has F as its cumulative distribution. This is not necessarily the most efficient approach, and generating random numbers from all sorts of distributions is a research subfield in and of itself, but for a simple transformation it might just do the trick. In your case, F(x) could be 4*(x - .5)^3 + .5; it satisfies all the constraints and is easy to invert and use as a transformation of the basic random number generator.
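As a sketch, inverting that suggested F gives G(u) = 0.5 + cbrt((u - 0.5) / 4), which can sit directly on top of Math.random() (my own arrangement of the idea; function names are hypothetical):
// Inverse-CDF sampling with F(x) = 4*(x - 0.5)^3 + 0.5:
// the resulting density, 12*(x - 0.5)^2, peaks near 0 and 1.
function skewedUniform() {
  var u = Math.random(); // uniform on [0, 1)
  return 0.5 + Math.cbrt((u - 0.5) / 4);
}
function skewedInt(min, max) {
  return Math.floor(skewedUniform() * (max - min + 1)) + min;
}
console.log(skewedInt(0, 10)); // 0 and 10 come up far more often than 5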
