Web Audio: Karplus-Strong String Synthesis - javascript

Edit: Cleaned up the code and the player (on GitHub) a little so it's easier to set the frequency.
I'm trying to synthesize strings using the Karplus Strong string synthesis algorithm, but I can't get the string to tune properly. Does anyone have any idea?
As mentioned above, the code is on GitHub: https://github.com/achalddave/Audio-API-Frequency-Generator (the relevant bits are in strings.js).
Wikipedia has a diagram of the signal flow. Essentially, I generate the noise, which is then sent to the output and to a delay filter simultaneously. The delay filter is connected to a low-pass filter, which is then mixed with the output. According to Wikipedia, the delay should be N samples, where N is the sampling frequency divided by the fundamental frequency (N = f_s/f_0).
Excerpts from my code:
Generating the noise (bufferSize is 2048, but that shouldn't matter too much)
var buffer = context.createBuffer(1, bufferSize, context.sampleRate);
var bufferSource = context.createBufferSource();
bufferSource.buffer = buffer;
var bufferData = buffer.getChannelData(0);
for (var i = 0; i < delaySamples + 1; i++) {
    bufferData[i] = 2 * (Math.random() - 0.5); // random noise from -1 to 1
}
Create a delay node
var delayNode = context.createDelayNode();
We need to delay by f_s/f_0 samples. However, the delay node takes the delay in seconds, so we need to divide that by the samples per second, and we get (f_s/f_0) / f_s, which is just 1/f_0.
var delaySeconds = 1/(frequency);
delayNode.delayTime.value = delaySeconds;
Create the lowpass filter (the frequency cutoff, as far as I can tell, shouldn't affect the frequency, and is more a matter of whether the string "sounds" natural):
var lowpassFilter = context.createBiquadFilter();
lowpassFilter.type = lowpassFilter.LOWPASS; // explicitly set type
lowpassFilter.frequency.value = 20000; // make things sound better
Connect the noise to the output and the delay node (destination = context.destination and was defined earlier):
bufferSource.connect(destination);
bufferSource.connect(delayNode);
Connect the delay to the lowpass filter:
delayNode.connect(lowpassFilter);
Connect the lowpass to the output and back to the delay*:
lowpassFilter.connect(destination);
lowpassFilter.connect(delayNode);
Does anyone have any ideas? I can't figure out whether the issue is my code, my interpretation of the algorithm, my understanding of the API, or (though this is least likely) an issue with the API itself.
*Note that on Github, there's actually a Gain Node between the lowpass and the output, but this doesn't really make a big difference in the output.

Here's what I think is the problem. I don't think the DelayNode implementation is designed to handle such tight feedback loops. For a 441 Hz tone, for example, that's only 100 samples of delay, and the DelayNode implementation probably processes its input in blocks of 128 or more. (The delayTime attribute is "k-rate", meaning changes to it are only processed in blocks of 128 samples. That doesn't prove my point, but it hints at it.) So the feedback comes in too late, or only partially, or something.
EDIT/UPDATE: As I state in a comment below, the actual problem is that a DelayNode in a cycle adds 128 sample frames between output and input, so that the observed delay is 128 / sampleRate seconds longer than specified.
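If you want to keep using a native DelayNode despite this, one could in principle compensate for the extra block. A minimal sketch, assuming the 128-frame figure above (note this only works for pitches low enough that 1/frequency exceeds 128/sampleRate, so it fails for A440 at 44.1 kHz):
// Sketch: subtract the 128-frame feedback latency described above, so the
// effective loop delay is closer to 1/frequency.
var loopLatencySeconds = 128 / context.sampleRate;
delayNode.delayTime.value = Math.max(0, 1 / frequency - loopLatencySeconds);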
My advice (and what I've begun to do) is to implement the whole Karplus-Strong including your own delay line in a JavaScriptNode (now known as a ScriptProcessorNode). It's not hard and I'll post my code once I get rid of an annoying bug that can't possibly exist but somehow does.
Incidentally, the tone you (and I) get with a delayTime of 1/440 (which is supposed to be an A) seems to be a G, two semitones below where it should be. Doubling the frequency raises it to a B, four semitones higher. (I could be off by an octave or two - kind of hard to tell.) Probably one could figure out what's going on (mathematically) from a couple more data points like this, but I won't bother.
EDIT: Here's my code, certified bug-free.
var context = new webkitAudioContext();
var frequency = 440;
var impulse = 0.001 * context.sampleRate; // length of the noise burst, in samples
var node = context.createJavaScriptNode(4096, 0, 1);
var N = Math.round(context.sampleRate / frequency); // delay-line length in samples
var y = new Float32Array(N); // the delay line itself
var n = 0;
node.onaudioprocess = function (e) {
    var output = e.outputBuffer.getChannelData(0);
    for (var i = 0; i < e.outputBuffer.length; ++i) {
        // Feed noise for the first `impulse` samples, then silence.
        var xn = (--impulse >= 0) ? Math.random() - 0.5 : 0;
        // Averaging two adjacent delay-line samples is the low-pass filter.
        output[i] = y[n] = xn + (y[n] + y[(n + 1) % N]) / 2;
        if (++n >= N) n = 0;
    }
};
node.connect(context.destination);

Related

How do I get the audio frequency from my mic using JavaScript?

I need to create a sort of guitar tuner that recognizes sound frequencies and determines which chord I am actually playing. It's similar to this guitar tuner that I found online:
https://musicjungle.com.br/afinador-online
But I can't figure out how it works because of the webpack files. I want to make this tool backendless. Does anyone have a clue how to do this only in the front end?
I found some old pieces of code that don't work together; I need fresh ideas.
There are quite a few problems to unpack here, some of which will require a bit more information as to the application. Hopefully the sheer size of this task will become apparent as this answer progresses.
As it stands, there are two problems here:
I need to create a sort of guitar tuner..
1. How do you detect the fundamental pitch of a guitar note and feed that information back to the user in the browser?
and
that recognizes the sound frequencies and determines which chord I am actually playing.
2. How do you detect which chord a guitar is playing?
This second question is definitely not a trivial one, but we'll come to it in turn; it is less a programming question than a DSP question.
Question 1: Pitch Detection in Browser
Breakdown
If you wish to detect the pitch of a note in the browser, there are a couple of sub-problems that should be split up. Shooting from the hip, we have the following JavaScript browser problems:
how to get microphone permission?
how to tap the microphone for sample data?
how to start an audio context?
how to display a value?
how to update a value regularly?
how to filter audio data?
how to perform pitch detection?
how to get pitch via autocorrelation?
how to get pitch via zero-crossing?
how to get pitch from the frequency domain?
how to perform a Fourier transform?
This is not an exhaustive list, but it should constitute the bulk of the overall problem.
There is no Minimal, Reproducible Example, so none of the above can be assumed.
Implementation
A basic implementation would consist of a numeric representation of a single fundamental frequency (f0) using an autocorrelation method outlined in the A. v. Knesebeck and U. Zölzer paper [1].
There are other approaches which mix and match filtering and pitch detection algorithms which I believe is far outside the scope of a reasonable answer.
NOTE: The Web Audio API is still not equally implemented across all browsers. You should check each of the major browsers and make accommodations in your program. The following was tested in Google Chrome, so your mileage may (and likely will) vary in other browsers.
HTML
Our page should include
an element to display frequency
an element to initiate pitch detection
A more rounded interface would likely split the operations of
Asking for microphone permission
starting microphone stream
processing microphone stream
into separate interface elements, but for brevity they will be wrapped into a single element. This gives us a basic HTML page of
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Pitch Detection</title>
</head>
<body>
    <h1>Frequency (Hz)</h1>
    <h2 id="frequency">0.0</h2>
    <div>
        <button onclick="startPitchDetection()">
            Start Pitch Detection
        </button>
    </div>
</body>
</html>
We are jumping the gun slightly with <button onclick="startPitchDetection()">; we will wrap the whole operation up in a single function called startPitchDetection.
Palette of variables
For an autocorrelation pitch detection approach, our palette of variables will need to include:
the audio context
the microphone stream
an AnalyserNode
an array for audio data
an array for the correlated signal
an array for correlated-signal maxima
a DOM reference to the frequency display
giving us something like
let audioCtx = new (window.AudioContext || window.webkitAudioContext)();
let microphoneStream = null;
let analyserNode = audioCtx.createAnalyser();
let audioData = new Float32Array(analyserNode.fftSize);
let corrolatedSignal = new Float32Array(analyserNode.fftSize);
let localMaxima = new Array(10);
const frequencyDisplayElement = document.querySelector('#frequency');
Some values are left null as they will not be known until the microphone stream has been activated. The 10 in let localMaxima = new Array(10); is a little arbitrary; this array will store the distances in samples between consecutive maxima of the correlated signal.
Main script
Our <button> element has an onclick function of startPitchDetection, so that will be required. We will also need
an update function (for updating the display)
an autocorrelation function that returns a pitch
However, the first thing we have to do is ask for permission to use the microphone. To achieve this we use navigator.mediaDevices.getUserMedia, which will return a Promise. Embellishing on what is outlined in the MDN documentation, this gives us something roughly like:
navigator.mediaDevices.getUserMedia({ audio: true })
    .then((stream) => {
        /* use the stream */
    })
    .catch((err) => {
        /* handle the error */
    });
Great! Now we can start adding our main functionality to the then function.
Our order of events should be
Start the microphone stream
connect the microphone stream to the analyser node
set a timed callback to
get the latest time-domain audio data from the AnalyserNode
get the autocorrelation-derived pitch estimate
update the HTML element with the value
On top of that, add a log of the error from the catch method.
This can then all be wrapped into the startPitchDetection function, giving something like:
function startPitchDetection() {
    navigator.mediaDevices.getUserMedia({ audio: true })
        .then((stream) => {
            microphoneStream = audioCtx.createMediaStreamSource(stream);
            microphoneStream.connect(analyserNode);

            audioData = new Float32Array(analyserNode.fftSize);
            corrolatedSignal = new Float32Array(analyserNode.fftSize);

            setInterval(() => {
                analyserNode.getFloatTimeDomainData(audioData);
                let pitch = getAutocorrolatedPitch();
                frequencyDisplayElement.innerHTML = `${pitch}`;
            }, 300);
        })
        .catch((err) => {
            console.log(err);
        });
}
The update interval of 300 ms for setInterval is arbitrary. A little experimentation will dictate which interval is best for you. You may even wish to give the user control of this, but that is outside the scope of this question.
The next step is to define what getAutocorrolatedPitch() actually does, so let's break down what autocorrelation is.
Autocorrelation is the process of correlating a signal with a delayed copy of itself. Any point where the result goes from a positive rate of change to a negative rate of change is a local maximum. The number of samples from the start of the correlated signal to the first maximum should be the period, in samples, of f0. We can continue to look for subsequent maxima and take an average, which should improve accuracy slightly. Some frequencies do not have a period of a whole number of samples; for instance, 440 Hz at a sample rate of 44100 Hz has a period of 100.227 samples. Technically we could never accurately detect 440 Hz from a single maximum: the result would always be either 441 Hz (44100/100) or about 436.6 Hz (44100/101).
For our autocorrelation function, we'll need:
a count of how many maxima have been detected
the mean distance between maxima
Our function should first perform the autocorrelation, find the sample positions of the local maxima, and then calculate the mean distance between them. This gives a function looking like:
function getAutocorrolatedPitch() {
    // First: autocorrelate the signal
    let maximaCount = 0;
    for (let l = 0; l < analyserNode.fftSize; l++) {
        corrolatedSignal[l] = 0;
        for (let i = 0; i < analyserNode.fftSize - l; i++) {
            corrolatedSignal[l] += audioData[i] * audioData[i + l];
        }
        if (l > 1) {
            if ((corrolatedSignal[l - 2] - corrolatedSignal[l - 1]) < 0
                && (corrolatedSignal[l - 1] - corrolatedSignal[l]) > 0) {
                localMaxima[maximaCount] = (l - 1);
                maximaCount++;
                if ((maximaCount >= localMaxima.length))
                    break;
            }
        }
    }

    // Second: find the average distance in samples between maxima
    let maximaMean = localMaxima[0];
    for (let i = 1; i < maximaCount; i++)
        maximaMean += localMaxima[i] - localMaxima[i - 1];
    maximaMean /= maximaCount;

    return audioCtx.sampleRate / maximaMean;
}
Problems
Once you have implemented this you may find there are actually a couple of problems.
The frequency result is a bit erratic
the display method is not intuitive for tuning purposes
The erratic result is down to the fact that autocorrelation by itself is not a perfect solution. You will need to experiment with filtering the signal first and aggregating other methods. You could also try limiting the signal, or only analysing it when it is above a certain threshold (a minimal sketch of that idea follows). You could also increase the rate at which you perform the detection and average out the results.
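As an illustration of the threshold idea, here is a minimal sketch; the 0.01 RMS threshold and the function name are my own arbitrary assumptions, not recommended values:
// Sketch: skip pitch detection while the input is essentially silent.
function isAboveThreshold(audioData, threshold = 0.01) {
    let sumOfSquares = 0;
    for (let i = 0; i < audioData.length; i++)
        sumOfSquares += audioData[i] * audioData[i];
    const rms = Math.sqrt(sumOfSquares / audioData.length);
    return rms > threshold;
}
Inside the setInterval callback, one would then only call getAutocorrolatedPitch() when isAboveThreshold(audioData) is true.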
Secondly, the method of display is limited. Musicians would not appreciate a bare numerical result; some kind of graphical feedback would be more intuitive. Again, that is outside the scope of the question.
Full page and script
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Pitch Detection</title>
</head>
<body>
    <h1>Frequency (Hz)</h1>
    <h2 id="frequency">0.0</h2>
    <div>
        <button onclick="startPitchDetection()">
            Start Pitch Detection
        </button>
    </div>
    <script>
        let audioCtx = new (window.AudioContext || window.webkitAudioContext)();
        let microphoneStream = null;
        let analyserNode = audioCtx.createAnalyser();
        let audioData = new Float32Array(analyserNode.fftSize);
        let corrolatedSignal = new Float32Array(analyserNode.fftSize);
        let localMaxima = new Array(10);
        const frequencyDisplayElement = document.querySelector('#frequency');

        function startPitchDetection() {
            navigator.mediaDevices.getUserMedia({ audio: true })
                .then((stream) => {
                    microphoneStream = audioCtx.createMediaStreamSource(stream);
                    microphoneStream.connect(analyserNode);

                    audioData = new Float32Array(analyserNode.fftSize);
                    corrolatedSignal = new Float32Array(analyserNode.fftSize);

                    setInterval(() => {
                        analyserNode.getFloatTimeDomainData(audioData);
                        let pitch = getAutocorrolatedPitch();
                        frequencyDisplayElement.innerHTML = `${pitch}`;
                    }, 300);
                })
                .catch((err) => {
                    console.log(err);
                });
        }

        function getAutocorrolatedPitch() {
            // First: autocorrelate the signal
            let maximaCount = 0;
            for (let l = 0; l < analyserNode.fftSize; l++) {
                corrolatedSignal[l] = 0;
                for (let i = 0; i < analyserNode.fftSize - l; i++) {
                    corrolatedSignal[l] += audioData[i] * audioData[i + l];
                }
                if (l > 1) {
                    if ((corrolatedSignal[l - 2] - corrolatedSignal[l - 1]) < 0
                        && (corrolatedSignal[l - 1] - corrolatedSignal[l]) > 0) {
                        localMaxima[maximaCount] = (l - 1);
                        maximaCount++;
                        if ((maximaCount >= localMaxima.length))
                            break;
                    }
                }
            }

            // Second: find the average distance in samples between maxima
            let maximaMean = localMaxima[0];
            for (let i = 1; i < maximaCount; i++)
                maximaMean += localMaxima[i] - localMaxima[i - 1];
            maximaMean /= maximaCount;

            return audioCtx.sampleRate / maximaMean;
        }
    </script>
</body>
</html>
Question 2: Detecting multiple notes
At this point I think we can all agree that this answer has gotten a little out of hand. So far we've covered just a single method of pitch detection. See Refs. [2, 3, 4] for some suggested algorithms for multiple-f0 detection.
In essence, this problem comes down to detecting all the f0s present and looking up the resulting notes against a dictionary of chords; a toy sketch of the lookup follows. For the rest, there should at least be a little work done on your part. Any questions about the DSP should probably be pointed toward https://dsp.stackexchange.com, where you will be spoiled for choice on questions regarding pitch detection algorithms.
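A toy illustration of that dictionary lookup (pitch classes only; the table and helper here are hypothetical placeholders, and real chord detection is far harder, as the references discuss):
// Toy chord lookup: map a sorted set of detected pitch classes to a name.
const CHORDS = {
    'C,E,G': 'C major',
    'A,C,E': 'A minor',
    'A,D,F#': 'D major',
};

function lookupChord(detectedNotes) {
    const key = [...detectedNotes].sort().join(',');
    return CHORDS[key] || 'unknown';
}

console.log(lookupChord(['E', 'G', 'C'])); // "C major"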
References
[1] A. v. Knesebeck and U. Zölzer, "Comparison of pitch trackers for real-time guitar effects", in Proceedings of the 13th International Conference on Digital Audio Effects (DAFx-10), Graz, Austria, September 6-10, 2010.
[2] A. P. Klapuri, "A perceptually motivated multiple-F0 estimation method," IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2005, pp. 291-294, doi: 10.1109/ASPAA.2005.1540227.
[3] A. P. Klapuri, "Multiple fundamental frequency estimation based on harmonicity and spectral smoothness," IEEE Transactions on Speech and Audio Processing, vol. 11, no. 6, pp. 804-816, Nov. 2003, doi: 10.1109/TSA.2003.815516.
[4] A. P. Klapuri, "Multipitch estimation and sound separation by the spectral smoothness principle," 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2001, pp. 3381-3384 vol. 5, doi: 10.1109/ICASSP.2001.940384.
I suppose it'll depend on how you're building your application; it's hard to help without more detail on the specs. Here are a few options for you, though.
There are a few stream options, for example:
mic
Or if you're using React:
react-mic
Or if you want to go really basic with some vanilla JS:
Web Fundamentals: Recording Audio

AnalyserNode.getFloatFrequencyData() returns negative values

I'm trying to get the volume of a microphone input using the Web Audio API using AnalyserNode.getFloatFrequencyData().
The spec states that "each item in the array represents the decibel value for a specific frequency", but it returns only negative values, although they do look like they are reacting to the level of sound: a whistle will return a value of around -23 and silence around -80. (The values in the dataArray are also all negative, so I don't think it's to do with how I've added them together.) The same code gives the values I'd expect (positive) with AnalyserNode.getByteFrequencyData(), but those values are normalised to the range 0-255, so they are more difficult to add together to determine the overall volume.
Why am I not getting the values I expect? And/or is this perhaps not a good way of getting the volume of the microphone input in the first place?
function getVolume(analyser) {
    analyser.fftSize = 32;
    let bufferLength = analyser.frequencyBinCount;
    let dataArray = new Float32Array(bufferLength);
    analyser.getFloatFrequencyData(dataArray);

    let totalAntilogAmplitude = 0;
    for (let i = 0; i < bufferLength; i++) {
        let thisAmp = dataArray[i]; // amplitude of current bin
        let thisAmpAntilog = Math.pow(10, (thisAmp / 10)); // antilog amplitude for adding
        totalAntilogAmplitude = totalAntilogAmplitude + thisAmpAntilog;
    }
    let amplitude = 10 * Math.log10(totalAntilogAmplitude);
    return amplitude;
}
Your code looks correct, and negative values are expected: getFloatFrequencyData() reports levels in decibels relative to full scale, so a full-scale component sits near 0 dB and everything quieter is negative. Without an example, though, it's hard to tell if it's producing the values you expect. Also, since you're basically just computing the sum of all the transform coefficients, you've done a more expensive version of summing the squares of the time-domain signal; a sketch of that cheaper version follows.
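A minimal sketch of that time-domain alternative, assuming the same AnalyserNode (the function name is mine):
// Sketch: mean power of the time-domain signal, in dB. Cheaper than
// summing FFT bins, since no transform is required.
function getVolumeFromTimeDomain(analyser) {
    const data = new Float32Array(analyser.fftSize);
    analyser.getFloatTimeDomainData(data);
    let sumOfSquares = 0;
    for (let i = 0; i < data.length; i++) {
        sumOfSquares += data[i] * data[i];
    }
    return 10 * Math.log10(sumOfSquares / data.length); // mean power in dB
}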
Another alternative would be to square the signal, filter it a bit to smooth out variations, and read the output value at various times. Something like the following, where s is the node carrying the signal you're interested in.
let g = new GainNode(context, {gain: 0});
s.connect(g);
s.connect(g.gain);
// Output of g is now the square of s
let f = new BiquadFilterNode(context, {frequency: 10});
// May want to adjust the frequency to some other value for your needs.
// I arbitrarily chose 10 Hz.
g.connect(f).connect(analyser);
// Now get the time-domain value from the analyser and just take the first
// value from the signal. This is the energy of the signal and represents
// the volume.
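To complete the sketch, the read-out described in the last comment might look like this (my code, not part of the original answer):
// Read the smoothed, squared signal; any single sample approximates the
// short-term energy (volume) at that moment.
const smoothed = new Float32Array(analyser.fftSize);
analyser.getFloatTimeDomainData(smoothed);
const volume = smoothed[0]; // instantaneous energy estimate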

Rollercoaster simulation in MATLAB/JavaScript

I'm having a problem with a project that I'm working on in MATLAB, which I'm then going to implement in JavaScript. The purpose is to use MATLAB to get a better understanding of the physics before moving over to JavaScript. The goal is to create a sort of rollercoaster simulation in MATLAB using differential equations and Euler approximation, and then to animate a block (cart) following the path.
The problem is that I can't get the approximation working on an arbitrary path (equation), and I don't understand what I'm doing wrong.
So this is what I'm trying to do:
for t=1:h:tf
    % acceleration
    ax(i) = (g*cos((-pi/4)))*sin(-pi/4)-((C*ro*A*vx(i)^2*sign(vx(i)))/(2*m))*sin(-pi/4);
    ay(i) = (g*cos((-pi/4)))*cos(-pi/4)+((C*ro*A*vy(i)^2*sign(vy(i)))/(2*m))*cos(-pi/4);
    % speed
    vx(i+1) = vx(i)+h*ax(i);
    vy(i+1) = vy(i)+h*ay(i);
    % position
    x(i+1) = x(i)+h*vx(i);
    y(i+1) = y(i)+h*vy(i);
    i = i+1;
    speed = sqrt(vx(i)^2+vy(i)^2);
    plot(x(i),y(i),'or')
    pause(1/speed)
end
Here I'm following Newton's second law (F = ma => a = F/m); the only negative force component I'm using for now is air resistance.
This works just fine when I'm using a hardcoded path, y = 1 - x.
But when I try to use an arbitrary path, e.g. 1/x, the angle changes all the time, and I've tried things like putting each angle into an angle vector:
%% initial values
vx(1) = 0;
vy(1) = 0;
x(1) = 0;
y(1) = 6.7056;

% Track Geometry Constants
h_start = 6.7056;
l_end = 32;
b = .055;
w = .7;
p = .3;

%% Creating path
path_x = linspace(0, 23, 1000);
path_y = (exp(-b*path_x).*(cos(w*path_x - p)) + 2.2*(exp(-(b)*path_x)));
path_y = path_y*(h_start/max(path_y));
path_x = path_x*(l_end/max(path_x));

%%
alpha = zeros(size(path_y));
for k=1:1:size(path_y)-1
    alpha(j) = atan(path_y(k+1)-path_y(k))/(path_x(k+1)-path_x(k));
    j = j+1;
end
But this doesn't appear to work.
How can I make this work for an arbitrary path?
Thank you in advance!
There is a pretty simple error in your loop. Two errors, actually.
You are indexing alpha with a variable, j, that doesn't exist, and incrementing it in the loop, instead of just using k, which increments automatically in the loop.
The reason this isn't giving an error is that your loop never runs. Because the size of path_y is not a single number, (the size is 1 x 1000), k=1:1:size(path_y)-1 tries to create a loop that goes from 1 to 0 in steps of positive 1. Since this isn't possible, the loop is skipped. One option is to use length, not size here.
But the most important error: you didn't check your code line by line when it stopped working to confirm that every part of your code was doing what you thought it was when you wrote it. If you'd checked what k=1:1:size(path_y)-1 was outputting, you should have been able to identify this problem very quickly.
Incidentally, I think you can avoid the loop entirely (append a zero at the end if you really need alpha to have the same size as the path variables). Note that the line below also fixes a misplaced parenthesis in your original, which applied atan to the numerator only:
alpha = atan(diff(path_y)./diff(path_x));
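Since the eventual target is JavaScript, the equivalent of that vectorized line might look like this. This is a sketch of mine, assuming pathX and pathY are arrays of equal length describing the track; I use Math.atan2, which also behaves sensibly on vertical segments:
// Slope angle of each path segment, mirroring MATLAB's
// atan(diff(path_y)./diff(path_x)).
const alpha = pathX.slice(0, -1).map((x, k) =>
    Math.atan2(pathY[k + 1] - pathY[k], pathX[k + 1] - pathX[k]));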

Golang vs JavaScript (v8/node.js) map performance

Out of curiosity I wrote some trivial benchmarks comparing the performance of golang maps to JavaScript (v8/node.js) objects used as maps and am surprised at their relative performance. JavaScript objects appear to perform roughly twice as fast as go maps (even including some minor performance edges for go)!
Here is the go implementation:
// map.go
package main

import "fmt"
import "time"

func elapsedMillis(t0, t1 time.Time) float64 {
    n0, n1 := float64(t0.UnixNano()), float64(t1.UnixNano())
    return (n1 - n0) / 1e6
}

func main() {
    m := make(map[int]int, 1000000)
    t0 := time.Now()
    for i := 0; i < 1000000; i++ {
        m[i] = i     // Put.
        _ = m[i] + 1 // Get, use, discard.
    }
    t1 := time.Now()
    fmt.Printf("go: %fms\n", elapsedMillis(t0, t1))
}
And here is the JavaScript:
#!/usr/bin/env node
// map.js

function elapsedMillis(hrtime0, hrtime1) {
    var n0 = hrtime0[0] * 1e9 + hrtime0[1];
    var n1 = hrtime1[0] * 1e9 + hrtime1[1];
    return (n1 - n0) / 1e6;
}

var m = {};
var t0 = process.hrtime();
for (var i = 0; i < 1000000; i++) {
    m[i] = i; // Put.
    var _ = m[i] + 1; // Get, use, discard.
}
var t1 = process.hrtime();
console.log('js: ' + elapsedMillis(t0, t1) + 'ms');
Note that the go implementation has a couple of minor potential performance edges in that:
Go is mapping integers to integers directly, whereas JavaScript will convert the integer keys to string property names.
Go makes its map with an initial capacity equal to the benchmark size, whereas JavaScript is growing from its default capacity.
However, despite the potential performance benefits listed above the go map usage seems to perform at about half the rate of the JavaScript object map! For example (representative):
go: 128.318976ms
js: 48.18517ms
Am I doing something obviously wrong with go maps or somehow comparing apples to oranges?
I would have expected go maps to perform at least as well - if not better than JavaScript objects as maps. Is this just a sign of go's immaturity (1.4 on darwin/amd64) or does it represent some fundamental difference between the two language data structures that I'm missing?
[Update]
Note that if you explicitly use string keys (e.g. via s := strconv.Itoa(i) and var s = ''+i in Go and JavaScript, respectively) then their performance is roughly equivalent.
My guess is that the very high performance from v8 is related to a specific optimization in that runtime for objects whose keys are consecutive integers (e.g. by substituting an array implementation instead of a hashtable).
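For reference, the string-key variant of the JavaScript loop mentioned above would look something like this (a sketch of the comparison, not the exact code used; it assumes the elapsedMillis helper from the benchmark above):
// Same benchmark body, but forcing string property names explicitly,
// which sidesteps any fast path for consecutive integer keys.
var m = {};
var t0 = process.hrtime();
for (var i = 0; i < 1000000; i++) {
    var s = '' + i;
    m[s] = i;         // Put.
    var _ = m[s] + 1; // Get, use, discard.
}
var t1 = process.hrtime();
console.log('js (string keys): ' + elapsedMillis(t0, t1) + 'ms');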
I'm voting to close since there is likely nothing to see here...
Your benchmark is a bit synthetic, as any benchmark is. Just out of curiosity, try
for i := 0; i < 1000000; i += 9 {
in the Go implementation. You may be surprised: with non-consecutive integer keys, the JavaScript object likely loses the dense integer-key fast path guessed at above, while the Go map is largely unaffected.

Better random function in JavaScript

I'm currently making a Conway's Game of Life reproduction in JavaScript and I've noticed that the function Math.random() always returns a visible pattern when seeding a 100x100 grid. Does anyone know how to get better randomized numbers?
ApplyRandom: function() {
    var $this = Evolution;
    var total = $this.Settings.grid_x * $this.Settings.grid_y;
    var range = parseInt(total * ($this.Settings.randomPercentage / 100));

    for (var i = 0; i < total; i++) {
        $this.Infos.grid[i] = false;
    }

    for (var i = 0; i < range; i++) {
        var random = Math.floor((Math.random() * total) + 1);
        $this.Infos.grid[random] = true;
    }

    $this.PrintGrid();
},
[UPDATE]
I've created a jsFiddle here: http://jsfiddle.net/5Xrs7/1/
[UPDATE]
It seems that Math.random() was OK after all (thanks raina77ow). Sorry, folks! If you are interested in the result, here's an updated version of the game: http://jsfiddle.net/sAKFQ/
(But I think there are some bugs left...)
This line in your code...
var position = (y * 10) + x;
... is what's causing this 'non-randomness'. It really should be...
var position = (y * $this.Settings.grid_x) + x;
I suppose 10 was the original size of this grid; that's why it's here. But that's clearly wrong: you should choose your position based on the current size of the grid.
As a sidenote, no offence, but I still consider the algorithm given in JayC's answer to be superior to yours. And it's quite easy to implement: just change the two loops in the ApplyRandom function to a single one:
var bias = $this.Settings.randomPercentage / 100;
for (var i = 0; i < total; i++) {
    $this.Infos.grid[i] = Math.random() < bias;
}
With this change, you will no longer suffer from the side effect of reusing the same numbers in the var random = Math.floor((Math.random() * total) + 1); line, which lowered the actual cell fill rate in your original code.
Math.random is a pseudo-random method; that's why you're getting those results. A bypass I often use is to capture the mouse cursor position in order to add some salt to the Math.random results:
Math.random = (function(rand) {
    var salt = 0;
    document.addEventListener('mousemove', function(event) {
        salt = event.pageX * event.pageY;
    });
    return function() { return (rand() + (1 / (1 + salt))) % 1; };
})(Math.random);
It's not completely random, but a bit more so ;)
A better solution is probably not to randomly pick points and paint them black, but to go through each and every point, decide what the odds are that it should be filled, and then fill accordingly. (That is, if you want on average a 20% chance of a cell being filled, generate your random number r and fill when r < 0.2.) I've seen a Life simulator in WebGL and that's kinda what it does to initialize... IIRC.
Edit: Here's another reason to consider alternate methods of painting. Randomly selecting pixels might end up being less work and fewer invocations of your random number generator, which might be a good thing, depending on what you want. As it is, you have selected a way in which at most some percentage of your pixels will be filled. If you kept track of the pixels being filled, and chose another pixel whenever one was already filled, essentially all you'd be doing is shuffling an exact percentage of black pixels among your white pixels. Done my way, the percentage of pixels filled follows a binomial distribution: sometimes it will be a little more, sometimes a little less. The set of all shufflings is a strict subset of the possibilities generated by this kind of picking (which, strictly speaking, contains all possibilities for painting the board, just with astronomically low odds of getting most of them). Simply put, randomly choosing for every pixel allows more variance.
Then again, I could modify the shuffle algorithm to pick the number of pixels from a binomial probability distribution around the expected/mean value, instead of using the expected/mean value itself, and I honestly don't know that it'd be any different, at least theoretically, from running the odds for every pixel. There's a lot that could be done.
console.log(window.crypto.getRandomValues(new Uint8Array(32))); // returns 32 random bytes
This returns random bytes of cryptographic strength: https://developer.mozilla.org/en/docs/Web/API/Crypto/getRandomValues
You can try
JavaScript Crypto Library (BSD license). It is supposed to have a good random number generator. See here for an example of usage.
Stanford JavaScript Crypto Library (BSD or GPL license). See the documentation for random numbers.
For a discussion of the strength of Math.random(), see this question.
The implementation of Math.random is probably based on a linear congruential generator (LCG), one weakness of which is that each random number depends on the previous value, producing predictable patterns like this depending on the choice of constants in the algorithm. A famous example of the effect of a poor choice of constants can be seen in RANDU.
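To make the LCG point concrete, here is a toy generator using RANDU's infamous constants (multiplier 65539, modulus 2^31); plotting consecutive triples of its output reveals the planes that made RANDU notorious. This is purely illustrative, not what any JS engine actually ships:
// Toy RANDU-style LCG: x_{n+1} = 65539 * x_n mod 2^31 (seed must be odd).
// Successive values are strongly correlated, illustrating the weakness.
function makeRandu(seed) {
    var state = seed;
    return function() {
        state = (state * 65539) % 2147483648;
        return state / 2147483648; // scale to [0, 1)
    };
}

var rand = makeRandu(1);
console.log(rand(), rand(), rand());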
The Mersenne Twister random number generator does not have this weakness. You can find an implementation of MT in JavaScript for example here: https://gist.github.com/banksean/300494
Update: Seeing your code, you have a problem in the code that renders the grid. This line:
var position = (y * 10) + x;
Should be:
var position = (y * grid_x) + x;
With this fix there is no discernible pattern.
You can use part of a SHA-256 hash of a timestamp that includes sub-millisecond precision:
console.log(window.performance.now()); // high-resolution timestamp in milliseconds, with a fractional part
This can be encoded as a string, and then you can hash it using, for example, http://geraintluff.github.io/sha256/:
salt = parseInt(sha256(previous_salt_string).substring(0, 12), 16);
// 48-bit number < 2^53 - 1
Then, building on the function from @nfroidure's answer above, write a gen_salt function that uses the SHA-256 hash, and call gen_salt in the event listener. You can use sha256(previous_salt) plus the mouse coordinates, as a string, to get a randomized hash. A rough sketch follows.
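Putting those pieces together, a rough sketch might look like this; sha256 is assumed to come from the library linked above, and the gen_salt name is from the description, so treat this as illustrative only:
// Sketch: derive a new 48-bit salt from the previous salt, a
// high-resolution timestamp, and the mouse position.
var salt = 0;
function gen_salt(event) {
    var input = String(salt)
        + String(window.performance.now())
        + (event ? event.pageX + ',' + event.pageY : '');
    salt = parseInt(sha256(input).substring(0, 12), 16); // < 2^53 - 1
    return salt;
}
document.addEventListener('mousemove', gen_salt);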
