I am trying to generate a random number between 1 and a maximum. I have no problem doing that, and do so with the following:
var max = 200;
var randomNumber = Math.floor(Math.random() * max) + 1;
However, ideally I would like to generate a number between 1 and my maximum where the lower numbers have a higher probability of occurring; I want the result to be biased towards 1. My maths skills aren't strong enough to work this out, so it would be great if someone could point me in the right direction.
Thank you,
Josh
A simple way is to just square the result of Math.random(). Since the result of the function is in the range [0, 1), the square is also in [0, 1), but values get mapped lower: 0.5, for example, becomes 0.25. You can experiment with other powers above 1 until you find an acceptable function.
I have some Java code that does what you want.
You should choose your own probabilities for the int[] probabilities array.
It shouldn't be hard to translate this to JS or build something equivalent.
int[] probs;

void initRandom(int n, int[] probabilities)
{
    int i, j, begin = 0, end = 0, sum = 0;
    // the sum of all probabilities must be 100%
    for (i = 0; i < probabilities.length; i++) sum += probabilities[i];
    probs = new int[sum];
    // fill the numbers 0 .. n-1 into the probability table,
    // each repeated according to its probability
    for (i = 0; i < n; i++)
    {
        begin = end;
        end += probabilities[i];
        for (j = begin; j < end; j++) probs[j] = i;
    }
}

int genRandom()
{
    // smallRand(k) is assumed to return a uniform random int in [0, k]
    return probs[smallRand(probs.length - 1)];
}
This is a very general question. First, consider this link:
http://en.wikipedia.org/wiki/List_of_probability_distributions#Supported_on_a_bounded_interval
It shows some probability distributions which are bounded, which I believe is what you are looking for (since min = 1 and max = max).
You could also choose a distribution on a semi-infinite interval and simply ignore all values above your maximum; depending on your application, that could also be acceptable.
Next, choose the probability distribution that suits you best. For the sake of simplicity, I chose the triangular distribution.
The distribution functions are (PDF and CDF)
f(x) = 2/(2*max-1-max^2)*(x-max)
F(x) = 2/(2*max-1-max^2)*(0.5*x^2-max*x-0.5+max)
so I can generate a biased distribution from a uniform distribution on [0, 1] by inverting the CDF, like this:
var urand = Math.random();
var a = 2/(2*max-1-max^2);
var randomNumber = max-Math.sqrt(max*max-2*(max-urand/a-0.5));
Cheers
R
The following function I made up gives you a near-one-biased random number:
function rand(max) {
    var r = Math.random();
    r = 1 / (101 - 100 * r);        // maps [0, 1) onto [1/101, 1), crowded near the low end
    return Math.floor(r * max) + 1; // integer in [1, max], biased toward the low end
}
It only uses simple arithmetic, so it should be quite fast.
I have a method that mimics an unfair coin. You can pass in a percentage, and it tells you whether or not you succeeded by returning a boolean. So if you call it with .25, it'll return true 25% of the time.
I'm trying to figure out if I can use this function to create a weighted randomness function that works like this: There is a 25% chance it returns x, a 40% chance it returns y, and a 35% chance it returns z. This is just an example. I would want the function to work for an unlimited amount of letters, but the percentages added together should equal 1.
The trick is, I want to be able to think about it the way I just described above. In other words:
result = function ({.25, x}, {.4, y}, {.35, z})
result should be x 25% of the time, and so on. Can I implement this function with my unfairCoin?
Here's how I worded it in a comment below; it might clarify what I'm asking for:
Correct my logic if I'm making a mistake here, but let's say X, Y and Z all had probability 0.3333... Couldn't I use my unfair coin and pass in 0.3333...? If that returns true, you get X as the result. If it returns false, call my unfair coin again with 0.5: if that returns true, return Y, otherwise return Z. If that is correct, I don't know how to generalize this when the probabilities AREN'T all 0.3333... and when there are more than three items.
If you have coins with a known probability of heads
Assume you have a function unfairCoin(p), which is a function that produces heads with a known probability p and tails otherwise. For example, it could be implemented like this:
function unfairCoin(p) {
    return Math.random() < p;
}
Here is an algorithm that solves your problem given unfairCoin, assuming all the probabilities involved sum to 1:
Set cumu to 1.
For each item starting with the first:
Get the probability associated with the chosen item (call it p) and accept the item with probability p / cumu (e.g., via unfairCoin(p / cumu)). If the item is accepted, return that item.
If the item was not accepted, subtract p from cumu.
This algorithm's expected time complexity depends on the order of the probabilities. In general, the algorithm's time complexity is linear, but if the probabilities are sorted in descending order, the expected time complexity is constant.
EDIT (Jul. 30): As I've just found out, this exact algorithm was already described by Keith Schwarz in Darts, Dice, and Coins, in "Simulating a Loaded Die with a Biased Coin". That page also contains a proof of its correctness.
An alternative solution uses rejection sampling, but requires generating a random integer using fair coin tosses:
Generate a uniform random integer index in the interval [0, n), where n is the number of items. This can be done, for example, using the Fast Dice Roller by J. Lumbroso, which uses only fair coin tosses (unfairCoin(0.5)); see the code below. Choose the item at the given index (starting at 0).
Get the probability associated with the chosen item (call it p) and accept it with probability p (e.g., via unfairCoin(p)). If the item is accepted, return that item; otherwise, go to step 1.
This algorithm's expected time complexity depends on the difference between the lowest and highest probability.
Given the weights for each item, there are many other ways to make a weighted choice besides the algorithms given earlier; see my note on weighted choice algorithms.
Fast Dice Roller Implementation
The following is JavaScript code that implements the Fast Dice Roller. Note that it uses a rejection event and a loop to ensure it's unbiased.
function randomInt(minInclusive, maxExclusive) {
    var maxInclusive = (maxExclusive - minInclusive) - 1
    var x = 1
    var y = 0
    while (true) {
        x = x * 2
        var randomBit = Math.random() < 0.5 ? 1 : 0
        y = y * 2 + randomBit
        if (x > maxInclusive) {
            if (y <= maxInclusive) { return y + minInclusive }
            // Rejection
            x = x - maxInclusive - 1
            y = y - maxInclusive - 1
        }
    }
}
The following version returns a BigInt, an arbitrary-precision integer supported in recent versions of JavaScript:
function randomInt(minInclusive, maxExclusive) {
    minInclusive = BigInt(minInclusive)
    maxExclusive = BigInt(maxExclusive)
    var maxInclusive = (maxExclusive - minInclusive) - BigInt(1)
    var x = BigInt(1)
    var y = BigInt(0)
    while (true) {
        x = x * BigInt(2)
        var randomBit = BigInt(Math.random() < 0.5 ? 1 : 0)
        y = y * BigInt(2) + randomBit
        if (x > maxInclusive) {
            if (y <= maxInclusive) { return y + minInclusive }
            // Rejection
            x = x - maxInclusive - BigInt(1)
            y = y - maxInclusive - BigInt(1)
        }
    }
}
If you have coins with an unknown probability of heads
If, on the other hand, you have a function COIN that outputs heads with an unknown probability and tails otherwise, then there are two problems to solve to get to the solution:
How to turn a biased coin into a fair coin.
How to turn a fair coin into a loaded die.
In other words, the task is to turn a biased coin into a loaded die.
Let's see how these two problems can be solved.
From biased to fair coins
Assume you have a function COIN() that outputs heads with an unknown probability and tails otherwise. (If the coin is known to have probability 0.5 of producing heads then you already have a fair coin and can skip this step.)
Here we can use von Neumann's algorithm from 1951 for turning a biased coin into a fair coin. It works like this:
Flip COIN() twice.
If both results are heads or both are tails, go to step 1.
If the first result is heads and the other is tails, take heads as the final result.
If the first result is tails and the other is heads, take tails as the final result.
Now we have a fair coin FAIRCOIN().
(Note that there are other ways of producing fair coins this way, collectively called randomness extractors, but the von Neumann method is perhaps the simplest.)
From fair coins to loaded dice
Now, the method to turn fair coins into loaded dice is much more complex. It suffices to say that there are many ways to solve this problem, and the newest of them is called the Fast Loaded Dice Roller, which produces a loaded die using just fair coins (in fact, it uses on average up to 6 fair coin tosses more than the optimal amount to produce each loaded die roll). The algorithm is not exactly trivial to implement, but see my Python implementation and the implementation by the Fast Loaded Dice Roller's authors.
Note that to use the Fast Loaded Dice Roller, you need to express each probability as a non-negative integer weight (such as 25, 40, 35 in your example).
Look into this:
function weightedRandom(array) {
    // expected array: [[percent, value], [percent, value], ...] where the percents sum to 1
    var random = Math.random();
    var sofar = 0;
    var index = -1;
    for (var i = 0; i < array.length; i++) {
        if (sofar <= random) index = i; // <= also handles Math.random() returning exactly 0
        sofar += array[i][0];
    }
    return array[index][1];
}
I'm using the standard Fisher-Yates algorithm to randomly shuffle a deck of cards in an array. However, I'm unsure if this will actually produce a true distribution of all possible permutations of a real-world shuffled deck of cards.
V8's Math.random has only 128 bits of internal state. Since there are 52 cards in a deck, covering all 52 factorial permutations would require 226 bits of internal state.
However, I'm unsure if this applies when using Fisher-Yates, since you aren't actually generating each possible permutation directly, but just picking one position randomly out of 52 at each step.
function shuffle(array) {
    var m = array.length, t, i;
    while (m) {
        // pick one of the remaining elements and swap it into place
        i = Math.floor(Math.random() * m--);
        t = array[m];
        array[m] = array[i];
        array[i] = t;
    }
    return array;
}
In general, if a pseudorandom number generator admits fewer than 52 factorial different seeds, then there are some permutations that particular PRNG can't choose when it shuffles a 52-item list, and Fisher-Yates can't change that. (The set of permutations a particular PRNG can choose can be different from the set of permutations another PRNG can choose, even if both PRNGs are initialized with the same seed.) See also this question.
Note that although the Math.random algorithm used by V8 admits any of about 2^128 seeds at the time of this writing, no particular random number algorithm is mandated by the ECMAScript specification of Math.random, which states only that that method uses an "implementation-dependent algorithm or strategy" to generate random numbers (see ECMAScript sec. 20.2.2.27).
A PRNG's period can be extended with the Bays-Durham shuffle, which effectively increases that PRNG's state length (see Severin Pappadeux's answer). However, if you merely initialize the Bays-Durham table entries with outputs of the PRNG (rather than using the seed to initialize those entries), then that particular PRNG (including the way it initializes the table and selects entries from it) still can't choose more permutations than it has possible seeds, because there is only one way to initialize the Bays-Durham entries for a given seed. The exception is if the PRNG shuffles an exorbitant number of lists, so many that it generates more random numbers without cycling than it would have without the Bays-Durham shuffle.
For example, if the PRNG is 128 bits long, there are only 2^128 possible seeds, so there are only 2^128 ways to initialize the Bays-Durham shuffle, one for each seed, unless a seed longer than 128 bits extends to the Bays-Durham table entries and not just the PRNG's original state. (This is not to imply that the set of permutations that PRNG can choose is always the same no matter how it selects table entries in the Bays-Durham shuffle.)
EDIT (Aug. 7): Clarifications.
EDIT (Jan. 7, 2020): Edited.
You are right. With 128 bits of starting state, you can generate at most 2^128 different permutations. It doesn't matter how often you use this state (i.e. how often you call Math.random()); the PRNG is deterministic, after all.
Where the number of calls to Math.random() actually matters is when
each call would draw some more entropy (e.g. from hardware random) into the system, instead of relying on the internal state that is initialised only once
the entropy of a single call result is so low that you don't use the entire internal state over the run of the algorithm
Well, you definitely need an RNG with a 226-bit period for all permutations to be covered; #PeterO's answer is correct in this regard. But you can extend the period using the Bays-Durham shuffle, paying for it by effectively extending the state of the RNG. There is an estimate of the period of a B-D shuffled RNG:
P = sqrt(Pi * N! / (2*O))
where Pi = 3.1415..., N is the B-D table size, and O is the period of the original generator. If you take log2 of the whole expression, use Stirling's formula for the factorial, and assume P = 2^226 and O = 2^128, you can get an estimate for N, the size of the table in the B-D algorithm. From a back-of-the-envelope calculation, N = 64 would be enough to get all your permutations.
UPDATE
OK, here is an example implementation of an RNG extended with the B-D shuffle. First, I implemented xorshift128+ in JavaScript using BigInt; it is apparently the default RNG in the V8 engine as well. Compared with the C++ version, it produced identical output for the first couple of dozen calls. The 128-bit seed is passed as two 64-bit words. Windows 10 x64, Node.js 12.7.
const WIDTH = 2n ** 64n;
const MASK = WIDTH - 1n; // to keep things as 64-bit values

class XorShift128Plus { // as described in https://v8.dev/blog/math-random
    _state0 = 0n;
    _state1 = 0n;

    constructor(seed0, seed1) { // 128-bit seed as two 64-bit values
        this._state0 = BigInt(seed0) & MASK;
        this._state1 = BigInt(seed1) & MASK;
        if (this._state0 <= 0n)
            throw new Error('seed 0 non-positive');
        if (this._state1 <= 0n)
            throw new Error('seed 1 non-positive');
    }

    next() {
        let s1 = this._state0;
        let s0 = this._state1;
        this._state0 = s0;
        s1 = ((s1 << 23n) ^ s1) & MASK;
        s1 ^= (s1 >> 17n);
        s1 ^= s0;
        s1 ^= (s0 >> 26n);
        this._state1 = s1;
        return (this._state0 + this._state1) & MASK; // modulo WIDTH
    }
}
OK, then on top of xorshift128+ I implemented the B-D shuffle, with a table of size 4. For your purpose you'll need a table of more than 84 entries, and a power-of-two size is much easier to deal with, so a 128-entry table (7-bit index) should be good enough. Anyway, even with a 4-entry table and a 2-bit index, we need to know which bits of the random value to pick to form the index. In the original paper B-D discussed picking them from the back of the value as well as from the front, etc. This is where the B-D shuffle needs another seed value: it tells the algorithm to pick, say, the bits at positions 2 and 6.
class B_D_XSP {
    _xsprng;
    _seedBD = 0n;
    _pos0 = 0n;
    _pos1 = 0n;
    _t; // B-D table, 4 entries
    _Z = 0n;

    constructor(seed0, seed1, seed2) { // note the third seed, for the B-D shuffle
        this._xsprng = new XorShift128Plus(seed0, seed1);
        this._seedBD = BigInt(seed2) & MASK;
        if (this._seedBD <= 0n)
            throw new Error('B-D seed non-positive');
        // findPosition (helper not shown) returns the position of the lowest set bit
        this._pos0 = findPosition(this._seedBD); // first non-zero bit position
        this._pos1 = findPosition(this._seedBD & (~(1n << this._pos0))); // second non-zero bit position
        // filling up the table and the B-D shuffler
        this._t = new Array(this._xsprng.next(), this._xsprng.next(), this._xsprng.next(), this._xsprng.next());
        this._Z = this._xsprng.next();
    }

    index(rv) { // bit at first position plus 2 * bit at second position
        let idx = ((rv >> this._pos0) & 1n) + (((rv >> this._pos1) & 1n) << 1n);
        return idx;
    }

    next() {
        let retval = this._Z;
        let j = this.index(this._Z);
        this._Z = this._t[j];
        this._t[j] = this._xsprng.next();
        return retval;
    }
}
A usage example follows.
let rng = new B_D_XSP(1, 2, 4+64); // bits at second and sixth position to make index
console.log(rng._pos0.toString(10));
console.log(rng._pos1.toString(10));
console.log(rng.next());
console.log(rng.next());
console.log(rng.next());
Obviously, a third seed value of, say, 8+128 would produce a different permutation from the one shown in the example; you can play with it.
The last step would be to make a 226-bit random value by calling the B-D shuffled RNG several (3 or 4) times, combining the 64-bit values (and the potential carry-over) into 226 random bits, and then converting them into a deck shuffle.
I am having issues with understanding dynamic programming solutions to various problems, specifically the coin change problem:
"Given a value N, if we want to make change for N cents, and we have infinite supply of each of S = { S1, S2, .. , Sm} valued coins, how many ways can we make the change? The order of coins doesn’t matter.
For example, for N = 4 and S = {1,2,3}, there are four solutions: {1,1,1,1},{1,1,2},{2,2},{1,3}. So output should be 4. For N = 10 and S = {2, 5, 3, 6}, there are five solutions: {2,2,2,2,2}, {2,2,3,3}, {2,2,6}, {2,3,5} and {5,5}. So the output should be 5."
There is another variation of this problem where the solution is the minimum number of coins to satisfy the amount.
These problems appear very similar, but the solutions are very different.
Number of possible ways to make change: the optimal substructure for this is DP(m,n) = DP(m-1, n) + DP(m, n-Sm) where DP is the number of solutions for all coins up to the mth coin and amount=n.
Minimum amount of coins: the optimal substructure for this is
DP[i] = Min{ DP[i-d1], DP[i-d2],...DP[i-dn] } + 1 where i is the total amount and d1..dn represent each coin denomination.
Why is it that the first one required a 2-D array and the second a 1-D array? Why is the optimal substructure for the number of ways to make change not "DP[i] = DP[i-d1]+DP[i-d2]+...DP[i-dn]" where DP[i] is the number of ways i amount can be obtained by the coins. It sounds logical to me, but it produces an incorrect answer. Why is that second dimension for the coins needed in this problem, but not needed in the minimum amount problem?
LINKS TO PROBLEMS:
http://comproguide.blogspot.com/2013/12/minimum-coin-change-problem.html
http://www.geeksforgeeks.org/dynamic-programming-set-7-coin-change/
Thanks in advance. Every website I go to only explains how the solution works, not why other solutions do not work.
Let's first talk about the number of ways, DP(m, n) = DP(m-1, n) + DP(m, n-Sm). This is indeed correct, because either you use the mth denomination or you don't. Now, you ask why we can't write it as DP[i] = DP[i-d1] + DP[i-d2] + ... + DP[i-dn]. That leads to overcounting. Take the example n = 4, m = 2, S = {1, 3}. According to your recurrence, dp[4] = dp[1] + dp[3] (assuming dp[1] = 1 as a base case). Now dp[3] = dp[2] + dp[0] (again, dp[0] = 1 by the base case), and applying the same recurrence, dp[2] = dp[1] = 1. In total you get 3, when the answer is supposed to be just 2: (1,3) and (1,1,1,1). That's because your second method treats (1,3) and (3,1) as two different solutions. Your second method applies to the case where order matters, which is also a standard problem.
As for your second question: the minimum number of coins can indeed be found with DP[i] = min{ DP[i-d1], DP[i-d2], ..., DP[i-dn] } + 1, because when minimizing the number of coins, order does not matter. Why is this a linear / 1-D DP? Although the DP array is 1-D, each state depends on up to m other states, unlike your first solution, where the array is 2-D but each state depends on at most 2 states. So in both cases the run time, which is (number of states * number of states each state depends on), is the same: O(nm). Both are correct; the 1-D solution just saves memory. You can also find the minimum with a 2-D array using the recurrence
dp(m, n) = min(dp(m-1, n), 1 + dp(m, n-Sm)). (Just use min in your first recurrence.)
Hope I cleared up the doubts; do post if something is still unclear.
This is a very good explanation of the coin change problem using Dynamic Programming.
The code is as follows:
public static int change(int amount, int[] coins) {
    int[] combinations = new int[amount + 1];
    combinations[0] = 1;
    for (int coin : coins) {
        for (int i = 1; i < combinations.length; i++) {
            if (i >= coin) {
                combinations[i] += combinations[i - coin];
                //printAmount(combinations);
            }
        }
        //System.out.println();
    }
    return combinations[amount];
}
I'm currently making a Conway's Game of Life reproduction in JavaScript and I've noticed that Math.random() always seems to return a certain pattern. Here's a sample of a randomized result in a 100x100 grid:
Does anyone know how to get better randomized numbers?
ApplyRandom: function() {
    var $this = Evolution;
    var total = $this.Settings.grid_x * $this.Settings.grid_y;
    var range = parseInt(total * ($this.Settings.randomPercentage / 100));
    for (var i = 0; i < total; i++) {
        $this.Infos.grid[i] = false;
    }
    for (var i = 0; i < range; i++) {
        var random = Math.floor((Math.random() * total) + 1);
        $this.Infos.grid[random] = true;
    }
    $this.PrintGrid();
},
[UPDATE]
I've created a jsFiddle here: http://jsfiddle.net/5Xrs7/1/
[UPDATE]
It seems that Math.random() was OK after all (thanks raina77ow). Sorry folks! :( If you are interested in the result, here's an updated version of the game: http://jsfiddle.net/sAKFQ/
(But I think there are some bugs left...)
This line in your code...
var position = (y * 10) + x;
... is what's causing this 'non-randomness'. It really should be...
var position = (y * $this.Settings.grid_x) + x;
I suppose 10 was the original size of this grid, that's why it's here. But that's clearly wrong: you should choose your position based on the current size of the grid.
As a side note, no offence, but I still consider the algorithm given in #JayC's answer superior to yours. And it's quite easy to implement: just change the two loops in the ApplyRandom function to a single one:
var bias = $this.Settings.randomPercentage / 100;
for (var i = 0; i < total; i++) {
    $this.Infos.grid[i] = Math.random() < bias;
}
With this change, you will no longer suffer from the side effect of the same index being drawn more than once in the var random = Math.floor((Math.random() * total) + 1); line, which lowered the actual cell fill rate in your original code.
Math.random is a pseudo-random method; that's why you're getting those results. A workaround I often use is to capture the mouse cursor position in order to add some salt to the Math.random results:
Math.random = (function(rand) {
    var salt = 0;
    document.addEventListener('mousemove', function(event) {
        salt = event.pageX * event.pageY;
    });
    return function() { return (rand() + (1 / (1 + salt))) % 1; };
})(Math.random);
It's not completely random, but a bit more ;)
A better solution is probably not to randomly pick points and paint them black, but to go through every point, decide what the odds are that it should be filled, and then fill accordingly. (That is, if you want on average a 20% chance of a cell being filled, generate your random number r and fill when r < 0.2.) I've seen a Life simulator in WebGL and that's roughly what it does to initialize... IIRC.
Edit: Here's another reason to consider alternate methods of painting. Randomly selecting pixels might mean less work and fewer invocations of your random number generator, which might be a good thing, depending on what you want. As written, you've chosen a method where at most some percentage of your pixels will be filled. If you had kept track of the pixels already filled, and chosen another pixel whenever you hit one that was, essentially all you'd be doing is shuffling an exact percentage of black pixels among your white pixels. Do it my way, and the number of pixels filled will follow a binomial distribution: sometimes the percentage filled will be a little more, sometimes a little less. The set of all such shufflings is a strict subset of the possibilities generated by this kind of per-pixel picking (which, strictly speaking, contains every possible painting of the board, just with astronomically low odds for most of them). Simply put, randomly choosing for every pixel allows more variance.
Then again, I could modify the shuffle algorithm to pick a percentage of pixels drawn from a binomial distribution with a defined expected value, instead of using the expected value itself, and I honestly don't know whether that would differ, at least theoretically, from running the odds for every pixel. There's a lot that could be done.
console.log(window.crypto.getRandomValues(new Uint8Array(32))); //return 32 random bytes
This returns random bytes with crypto strength: https://developer.mozilla.org/en/docs/Web/API/Crypto/getRandomValues
You can try
JavaScript Crypto Library (BSD license). It is supposed to have a good random number generator. See here an example of usage.
Stanford JavaScript Crypto Library (BSD or GPL license). See documentation for random numbers.
For a discussion of strength of Math.random(), see this question.
The implementation of Math.random is probably based on a linear congruential generator, one weakness of which is that each random number depends on the previous value, producing predictable patterns like this, depending on the choice of constants in the algorithm. A famous example of the effect of a poor choice of constants can be seen in RANDU.
The Mersenne Twister random number generator does not have this weakness. You can find an implementation of MT in JavaScript for example here: https://gist.github.com/banksean/300494
Update: Seeing your code, you have a problem in the code that renders the grid. This line:
var position = (y * 10) + x;
Should be:
var position = (y * grid_x) + x;
With this fix there is no discernible pattern.
You can use part of a SHA-256 hash of the timestamp:
console.log(window.performance.now()); // high-resolution timestamp in milliseconds
This can be encoded as a string,
then you can get its hash using this: http://geraintluff.github.io/sha256/
salt = parseInt(sha256(previous_salt_string).substring(0, 12), 16);
// 48-bit number < 2^53-1
Then, using the function from #nfroidure's answer, write a gen_salt function that uses the sha256 hash, and call gen_salt in the event listener.
You can use sha256(previous_salt) + the mouse coordinates as the string to hash, to get a randomized hash.
Normally, this is how you get a random number in JavaScript.
Math.random();
However, this method seems inefficient when it comes to generating random integers.
First, the random function has to generate a random decimal, like 0.1036098338663578; then it has to be multiplied into a suitable range (10.464593220502138). Finally, the floor function drops the decimals to produce the result (in this case, 10).
var random_integer = Math.floor(Math.random()*101);
Is there a faster way to generate random integers in javascript?
Edit1:
I am using this for creating a canvas HTML5 game. The FPS is about 50, and my code is pretty optimized, apart from generating a random number.
This code is faster... to type.
var random_integer = Math.random()*101|0;
It won't work right for huge numbers though.
(and it doesn't run any faster, at least not in chrome.)
You could achieve a much faster speed during the game if you generate the random numbers beforehand, though.
for (var i = 1e6, lookupTable = []; i--;) {
    lookupTable.push(Math.random() * 101 | 0);
}
function lookup() {
    // reuses the loop's `i` as a cursor that wraps around the table
    return ++i >= lookupTable.length ? lookupTable[i = 0] : lookupTable[i];
}
lookup will rotate through an array of a million random integers. It is much faster than calling random and floor (though of course there is a "loading time" penalty up front for generating the lookup table).
If you want to avoid floating-point calculation, you can do that by writing your own pseudo-random number generator. Here is a list of well-known pseudo-random number generators (PRNGs). A linear congruential generator is the easiest one to implement, and probably the most effective in terms of performance too. However, you will need to understand the theory behind PRNGs well enough to write an effective one, and that might not be worth the effort. In the end, there is a good chance you will find that Math.random() runs faster than your code.
I mostly use
var a = Math.floor(Math.random() * (max - min + 1)) + min; // integer in [min, max]
where min is the number you'd like to be the minimum and max the number you'd like to be the maximum (Math.random() itself takes no arguments).
No, there is no easier or shorter way. You can create a function if you need to do it multiple times, though.
const getRandomInt = (base = 10) => {
    return Math.floor(Math.random() * base)
}
Here's what I use:
function getRandomInt(max) {
    return Math.floor(Math.random() * max);
}
An example of how this would be used:
function getRandomInt(max) {
    return Math.floor(Math.random() * max);
}
if (getRandomInt(420) == 69) {
    console.log("nice")
}
Your way is the right way to retrieve a random integer in JavaScript; don't worry about performance, it will run fast.
This is the shortest one-liner random number generator code:
rnd = (a, b) => ~~(Math.random() * (b - a)) + a // integer in [a, b - 1]
How to use: rnd(min, max)
Example: rnd(10, 100)