Web Audio API creating rain crackling noises - JavaScript

I'm trying to create rain in the JavaScript Web Audio API.
So far I've created a low-frequency rumbling noise for the background, and I'm working on a high-frequency noise that will imitate the sound of rain droplets. However, right now the high-frequency noise is very much like white noise and too intense to read as individual droplets. Does anyone know how to "separate" the sound a little so it almost sounds like crackling? Here is a link to what I would like the high-frequency noise to sound like; if you increase the last slider (violet) you can hear it.
And here is my code so far:
<script>
let context = new AudioContext();
let context2 = new AudioContext();

let lowpass = context.createBiquadFilter();
lowpass.type = 'lowpass';
//lowpass.Q.value = -7.01;
lowpass.frequency.setValueAtTime(80, context.currentTime);

let gain = new GainNode(context);
gain.gain.value = 0.4;

let gain2 = new GainNode(context2);
gain2.gain.value = 0.02;

let highpass = context2.createBiquadFilter();
highpass.type = 'highpass';
highpass.Q.value = 2;
//highpass.frequency.setValueAtTime(6000, context2.currentTime);

let distortion = context2.createWaveShaper();
let delay = context2.createDelay(90.0);

function StartAudio() { context.resume(); }

context.audioWorklet.addModule('basicnoise.js').then(() => {
    let myNoise = new AudioWorkletNode(context, 'noise-generator');
    myNoise.connect(lowpass);
    lowpass.connect(gain);
    gain.connect(context.destination);
});

function StartAudio2() { context2.resume(); }

context2.audioWorklet.addModule('basicnoise.js').then(() => {
    let myNoise2 = new AudioWorkletNode(context2, 'noise-generator');
    myNoise2.connect(highpass);
    highpass.connect(gain2);
    gain2.connect(delay);
    delay.connect(context2.destination);
});
</script>
I've been playing around with different functions; some of them didn't do much, or I'm simply not using them correctly, as I am very new to the audio API scene. Any help is appreciated, as this is for a school project, and I know some other students want to make fire sounds and could also benefit from the crackling noise!! Thank you!!

If you think of rain as a physical process, it's basically lots of surface impact sounds (and probably some additional ambience created by airflow). When enough raindrops hit surface(s) at a rapid enough clip, the end result ends up being noise-ish.
I think a realistic rain generator would simulate lots of single droplets hitting a surface at different distances from the listener (which causes attenuation and filtering); there's a sketch of that idea after the snippet below.
That said, if you want to try just "crackling" the noise generator you have going on now, try modulating the gain node's gain value randomly; here, there's a 25% chance for the generator to be effectively muted every 20 milliseconds (or so, considering timers aren't exactly precise).
setInterval(() => {
    gain.gain.value = (Math.random() < 0.75 ? 0.4 : 0);
}, 20);
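If you want to push further toward the droplet simulation described above, one rough sketch is to stop letting the high-frequency noise pass through at a constant level and instead fire short gain envelopes at random times. This reuses the context2/gain2 chain from the question; every constant here is a made-up starting point to tune by ear.

// Sketch: fire short gain envelopes at random intervals to mimic droplets.
function scheduleDroplet() {
    const t = context2.currentTime;
    const peak = 0.01 + Math.random() * 0.04; // random droplet loudness
    gain2.gain.cancelScheduledValues(t);
    gain2.gain.setValueAtTime(0, t);
    gain2.gain.linearRampToValueAtTime(peak, t + 0.002); // fast attack
    gain2.gain.exponentialRampToValueAtTime(0.0001, t + 0.03); // quick decay
    setTimeout(scheduleDroplet, 5 + Math.random() * 60); // random gap between drops
}
scheduleDroplet();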

Related

PixiJS Fixed-step loop - Stutter / Jitter

I've been facing a strange issue that is beyond my understanding of how to fix.
I'm trying to create the following multiplayer game structure:
The server running at 20-30 fps
The client logic loop at the same FPS as the server
The client render loop
I'm using PixiJS for UI and here is where I got stuck.
(I've opened a thread here as well.)
And I have a demo here: https://playcode.io/1045459
Ok, now let's explain the issue!
private update = () => {
    let elapsed = PIXI.Ticker.shared.elapsedMS
    if (elapsed > 1000) elapsed = this.frameDuration
    this.lag += elapsed

    // Update the frame if the lag counter is greater than or
    // equal to the frame duration
    while (this.lag >= this.frameDuration) {
        // Update the logic
        console.log(`[Update] FPS ${Math.round(PIXI.Ticker.shared.FPS)}`)
        this.updateInputs(now())
        // Reduce the lag counter by the frame duration
        this.lag -= this.frameDuration
    }

    // Render elements in-between states
    const lagOffset = this.lag / this.frameDuration
    this.interpolateSpaceships(lagOffset)
}
In the client loop I keep track of both the logic and render parts, limiting the logic one to 20 FPS. It all works "cookies and clouds" until the browser has a sudden frame-rate drop from 120 FPS to 60 FPS. Based on my investigation, and a nice & confusing spreadsheet that I've put together, when the frame rate drops the "player" moves 2x as far (e.g. 3.3 instead of 1.66). On paper it's normal and the math is correct, BUT this creates a small bounce / jitter / stutter or whatever the name for this thing is.
In the demo that I've created in playcode it's not visible. My assumption is that the code is too basic and the framerate never drops.
Considering that the math and the algorithm are correct (which I'm not yet sure of), I've turned my eyes to other parts that might affect this. I'm using pixi-viewport to follow the character. Could it be that the following part creates this bounce?
Does anyone have experience writing such a game loop?
Update:
Okkkk, mind-blowing result. I just found out that this happens even with the simplest version of the game loop ever, just updating x = x + speed * delta every frame.
And for the same reason: sudden drops in FPS.
Ok, I've found the solution. Will post it here as there is not a lot of info about it. The solution is to smooth out sudden fps drops over multiple frames. Easy right? 😅
const ticker = new PIXI.Ticker();

// The number of frames to use for smoothing
const smoothingFrames = 10;

// The smoothed frame duration
let smoothedFrameDuration = 0;

ticker.add((deltaTime) => {
    // Exponentially smooth the frame duration over the last few frames
    smoothedFrameDuration = (smoothedFrameDuration * (smoothingFrames - 1) + deltaTime) / smoothingFrames;

    // Update the game logic here
    // Use the smoothed frame duration instead of the raw deltaTime value
});

ticker.start();
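For completeness, here is one way the smoothed value could drive the fixed-step loop from the question. This is only a sketch, assuming it lives in the same class as the original update method (frameDuration, lag, updateInputs, interpolateSpaceships and now() all come from that code):

const ticker = new PIXI.Ticker();
const smoothingFrames = 10;
let smoothedFrameDuration = 0;

ticker.add(() => {
    // Exponential moving average of the frame time in milliseconds
    smoothedFrameDuration = (smoothedFrameDuration * (smoothingFrames - 1) + ticker.deltaMS) / smoothingFrames;

    // Accumulate the smoothed duration instead of the raw elapsed time,
    // then run fixed logic steps exactly as before
    this.lag += smoothedFrameDuration;
    while (this.lag >= this.frameDuration) {
        this.updateInputs(now());
        this.lag -= this.frameDuration;
    }
    this.interpolateSpaceships(this.lag / this.frameDuration);
});
ticker.start();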

How do I get the audio frequency from my mic using JavaScript?

I need to create something like a guitar tuner that recognizes the sound frequencies and determines which chord I am actually playing. It's similar to this guitar tuner that I found online:
https://musicjungle.com.br/afinador-online
But I can't figure out how it works because of the webpack files... I want to make this tool backendless. Does anyone have a clue about how to do this only in the front end?
I found some old pieces of code that don't work together... I need fresh ideas.
There are quite a few problems to unpack here, some of which will require a bit more information as to the application. Hopefully the sheer size of this task will become apparent as this answer progresses.
As it stands, there are two problems here:
need to create something like a guitar tuner...
1. How do you detect the fundamental pitch of a guitar note and feed that information back to the user in the browser?
and
recognizes the sound frequencies and determines which chord I am actually playing.
2. How do you detect which chord a guitar is playing?
This second question is definitely not a trivial one, but we'll come to it in turn. It is not really a programming question, but rather a DSP question.
Question 1: Pitch Detection in Browser
Breakdown
If you wish to detect the pitch of a note in the browser, there are a couple of sub-problems that should be split up. Shooting from the hip, we have the following JavaScript browser problems:
how to get microphone permission?
how to tap the microphone for sample data?
how to start an audio context?
how to display a value?
how to update a value regularly?
how to filter audio data?
how to perform pitch detection?
how to get pitch via autocorrelation?
how to get pitch via zero-crossing?
how to get pitch from the frequency domain?
how to perform a fourier transform?
This is not an exhaustive list, but it should constitute the bulk of the overall problem.
There is no Minimal, Reproducible Example, so none of the above can be assumed.
Implementation
A basic implementation would consist of a numeric representation of a single fundamental frequency (f0), using an autocorrelation method outlined in the A. v. Knesebeck and U. Zölzer paper [1].
There are other approaches which mix and match filtering and pitch detection algorithms which I believe is far outside the scope of a reasonable answer.
NOTE: The Web Audio API is still not equally implemented across all browsers. You should check each of the major browsers and make accommodations in your program. The following was tested in Google Chrome, so your mileage may (and likely will) vary in other browsers.
HTML
Our page should include
an element to display frequency
an element to initiate pitch detection
A more rounded interface would likely split the operations of
asking for microphone permission
starting the microphone stream
processing the microphone stream
into separate interface elements, but for brevity they will be wrapped into a single element. This gives us a basic HTML page of
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Pitch Detection</title>
</head>
<body>
    <h1>Frequency (Hz)</h1>
    <h2 id="frequency">0.0</h2>
    <div>
        <button onclick="startPitchDetection()">
            Start Pitch Detection
        </button>
    </div>
</body>
</html>
We are jumping the gun slightly with <button onclick="startPitchDetection()">. We will wrap up the whole operation in a single function called startPitchDetection.
Palette of variables
For an autocorrelation pitch detection approach, our palette of variables will need to include:
the Audio context
the microphone stream
an Analyser Node
an array for audio data
an array for the correlated signal
an array for correlated signal maxima
a DOM reference to the frequency display
giving us something like
let audioCtx = new (window.AudioContext || window.webkitAudioContext)();
let microphoneStream = null;
let analyserNode = audioCtx.createAnalyser();
let audioData = new Float32Array(analyserNode.fftSize);
let correlatedSignal = new Float32Array(analyserNode.fftSize);
let localMaxima = new Array(10);
const frequencyDisplayElement = document.querySelector('#frequency');
Some values are left null as they will not be known until the microphone stream has been activated. The 10 in let localMaxima = new Array(10); is a little arbitrary. This array will store the distances in samples between consecutive maxima of the correlated signal.
Main script
Our <button> element has an onclick function of startPitchDetection, so that will be required. We will also need
an update function (for updating the display)
an autocorrelation function that returns a pitch
However, the first thing we have to do is ask for permission to use the microphone. To achieve this we use navigator.mediaDevices.getUserMedia, which will return a Promise. Embellishing on what is outlined in the MDN documentation, this gives us something roughly looking like:
navigator.mediaDevices.getUserMedia({audio: true})
    .then((stream) => {
        /* use the stream */
    })
    .catch((err) => {
        /* handle the error */
    });
Great! Now we can start adding our main functionality to the then function.
Our order of events should be
Start microphone stream
connect microphone stream to the analyser node
set a timed callback to
get the latest time domain audio data from the Analyser Node
get the autocorrelation-derived pitch estimate
update html element with the value
On top of that, add a log of the error from the catch method.
This can then all be wrapped into the startPitchDetection function, giving something like:
function startPitchDetection() {
    navigator.mediaDevices.getUserMedia({audio: true})
        .then((stream) => {
            microphoneStream = audioCtx.createMediaStreamSource(stream);
            microphoneStream.connect(analyserNode);

            audioData = new Float32Array(analyserNode.fftSize);
            correlatedSignal = new Float32Array(analyserNode.fftSize);

            setInterval(() => {
                analyserNode.getFloatTimeDomainData(audioData);
                let pitch = getAutocorrelatedPitch();
                frequencyDisplayElement.innerHTML = `${pitch}`;
            }, 300);
        })
        .catch((err) => {
            console.log(err);
        });
}
The update interval of 300 ms for setInterval is arbitrary; a little experimentation will dictate which interval is best for you. You may even wish to give the user control of this, but that is outside the scope of this question.
The next step is to actually define what getAutocorrelatedPitch() does, so let's break down what autocorrelation actually is.
Autocorrelation is the process of correlating a signal with itself. Any time the result goes from a positive rate of change to a negative rate of change is defined as a local maximum. The number of samples between the start of the correlated signal and the first maximum should be the period of f0 in samples. We can continue to look for subsequent maxima and take an average, which should improve accuracy slightly. Some frequencies do not have a period of a whole number of samples; for instance, 440 Hz at a sample rate of 44100 Hz has a period of 100.227 samples. Technically we could never accurately detect this 440 Hz frequency by taking a single maximum: the result would always be either 441 Hz (44100/100) or roughly 437 Hz (44100/101).
For our autocorrelation function, we'll need:
a count of how many maxima have been detected
the mean distance between maxima
Our function should first perform the autocorrelation, find the sample positions of the local maxima, and then calculate the mean distance between these maxima. This gives a function looking like:
function getAutocorrelatedPitch() {
    // First: autocorrelate the signal
    let maximaCount = 0;

    for (let l = 0; l < analyserNode.fftSize; l++) {
        correlatedSignal[l] = 0;
        for (let i = 0; i < analyserNode.fftSize - l; i++) {
            correlatedSignal[l] += audioData[i] * audioData[i + l];
        }
        if (l > 1) {
            if ((correlatedSignal[l - 2] - correlatedSignal[l - 1]) < 0
                && (correlatedSignal[l - 1] - correlatedSignal[l]) > 0) {
                localMaxima[maximaCount] = (l - 1);
                maximaCount++;
                if (maximaCount >= localMaxima.length)
                    break;
            }
        }
    }

    // Second: find the average distance in samples between maxima
    let maximaMean = localMaxima[0];

    for (let i = 1; i < maximaCount; i++)
        maximaMean += localMaxima[i] - localMaxima[i - 1];

    maximaMean /= maximaCount;

    return audioCtx.sampleRate / maximaMean;
}
Problems
Once you have implemented this you may find there are actually a couple of problems.
The frequency result is a bit erratic
The display method is not intuitive for tuning purposes
The erratic result is down to the fact that autocorrelation by itself is not a perfect solution. You will need to experiment with filtering the signal first and with aggregating other methods. You could also try limiting the signal, or only analysing it when it is above a certain threshold. You could also increase the rate at which you perform the detection and average out the results.
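As a concrete example of the thresholding idea, the timed callback could compute the RMS level of the analyser frame and skip detection below a noise floor; the 0.01 floor here is an arbitrary value to tune:

setInterval(() => {
    analyserNode.getFloatTimeDomainData(audioData);

    // Simple RMS gate: skip detection while the input is too quiet
    let rms = 0;
    for (let i = 0; i < audioData.length; i++)
        rms += audioData[i] * audioData[i];
    rms = Math.sqrt(rms / audioData.length);

    if (rms < 0.01) return; // arbitrary noise floor
    frequencyDisplayElement.innerHTML = `${getAutocorrelatedPitch()}`;
}, 300);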
Secondly, the method of display is limited. Musicians would not appreciate a simple numerical result; some kind of graphical feedback would be more intuitive. Again, that is outside the scope of the question.
Full page and script
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Pitch Detection</title>
</head>
<body>
    <h1>Frequency (Hz)</h1>
    <h2 id="frequency">0.0</h2>
    <div>
        <button onclick="startPitchDetection()">
            Start Pitch Detection
        </button>
    </div>
    <script>
        let audioCtx = new (window.AudioContext || window.webkitAudioContext)();
        let microphoneStream = null;
        let analyserNode = audioCtx.createAnalyser();
        let audioData = new Float32Array(analyserNode.fftSize);
        let correlatedSignal = new Float32Array(analyserNode.fftSize);
        let localMaxima = new Array(10);
        const frequencyDisplayElement = document.querySelector('#frequency');

        function startPitchDetection() {
            navigator.mediaDevices.getUserMedia({audio: true})
                .then((stream) => {
                    microphoneStream = audioCtx.createMediaStreamSource(stream);
                    microphoneStream.connect(analyserNode);

                    audioData = new Float32Array(analyserNode.fftSize);
                    correlatedSignal = new Float32Array(analyserNode.fftSize);

                    setInterval(() => {
                        analyserNode.getFloatTimeDomainData(audioData);
                        let pitch = getAutocorrelatedPitch();
                        frequencyDisplayElement.innerHTML = `${pitch}`;
                    }, 300);
                })
                .catch((err) => {
                    console.log(err);
                });
        }

        function getAutocorrelatedPitch() {
            // First: autocorrelate the signal
            let maximaCount = 0;

            for (let l = 0; l < analyserNode.fftSize; l++) {
                correlatedSignal[l] = 0;
                for (let i = 0; i < analyserNode.fftSize - l; i++) {
                    correlatedSignal[l] += audioData[i] * audioData[i + l];
                }
                if (l > 1) {
                    if ((correlatedSignal[l - 2] - correlatedSignal[l - 1]) < 0
                        && (correlatedSignal[l - 1] - correlatedSignal[l]) > 0) {
                        localMaxima[maximaCount] = (l - 1);
                        maximaCount++;
                        if (maximaCount >= localMaxima.length)
                            break;
                    }
                }
            }

            // Second: find the average distance in samples between maxima
            let maximaMean = localMaxima[0];

            for (let i = 1; i < maximaCount; i++)
                maximaMean += localMaxima[i] - localMaxima[i - 1];

            maximaMean /= maximaCount;

            return audioCtx.sampleRate / maximaMean;
        }
    </script>
</body>
</html>
Question 2: Detecting multiple notes
At this point I think we can all agree that this answer has gotten a little out of hand. So far we've covered just a single method of pitch detection. See Refs. [2, 3, 4] for some suggestions of algorithms for multiple-f0 detection.
In essence, this problem comes down to detecting all the f0s present and looking up the resulting notes against a dictionary of chords. For that, there should at least be a little work done on your part. Any questions about the DSP side should probably be pointed toward https://dsp.stackexchange.com, where you will be spoiled for choice on questions regarding pitch detection algorithms.
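As a sketch of that lookup step: once you have a set of detected pitch classes, you can normalise them against each candidate root and compare with known interval patterns. The tiny dictionary below covers only major and minor triads and is purely illustrative; a real chord detector needs many more shapes plus tolerance for spurious and missing f0s.

// Map detected pitch classes (0 = C, 1 = C#, ..., 11 = B) to a chord name
const CHORD_SHAPES = { '0,4,7': 'major', '0,3,7': 'minor' };

function nameChord(pitchClasses) {
    for (const root of pitchClasses) {
        // Express every pitch class relative to the candidate root
        const shape = pitchClasses
            .map(pc => (pc - root + 12) % 12)
            .sort((a, b) => a - b)
            .join(',');
        if (CHORD_SHAPES[shape]) return `${root} ${CHORD_SHAPES[shape]}`;
    }
    return 'unknown';
}

// e.g. nameChord([4, 7, 0]) returns '0 major' (a C major triad)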
References
A. v. Knesebeck and U. Zölzer, "Comparison of pitch trackers for real-time guitar effects", in Proceedings of the 13th International Conference on Digital Audio Effects (DAFx-10), Graz, Austria, September 6-10, 2010.
A. P. Klapuri, "A perceptually motivated multiple-F0 estimation method," IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2005., 2005, pp. 291-294, doi: 10.1109/ASPAA.2005.1540227.
A. P. Klapuri, "Multiple fundamental frequency estimation based on harmonicity and spectral smoothness," in IEEE Transactions on Speech and Audio Processing, vol. 11, no. 6, pp. 804-816, Nov. 2003, doi: 10.1109/TSA.2003.815516.
A. P. Klapuri, "Multipitch estimation and sound separation by the spectral smoothness principle," 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221), 2001, pp. 3381-3384 vol.5, doi: 10.1109/ICASSP.2001.940384.
I suppose it'll depend on how you're building your application. It's hard to help without more detail around the specs, but here are a few options for you.
There are a few stream options, for example:
mic
Or if you're using React:
react-mic
Or if you want to go really basic with some vanilla JS:
Web Fundamentals: Recording Audio

Limited playback rate using the Web Audio API

I'm trying to use the Web Audio API to create a simple synthesizer, but I'm having a problem with the playback rate of an AudioBuffer. Its value seems to be limited when I try using a somewhat high value.
Here's my code sample. I first create a sample buffer containing a simple waveform with as many samples as the sample rate (this creates a waveform at 1 Hz if it is read directly). I then create an AudioBuffer that will contain the samples. Finally, I create an AudioBufferSourceNode that will play the previous buffer in a loop at some playback rate (which translates to an audible frequency).
const audioContext = new window.AudioContext();
const buffer = new Float32Array(audioContext.sampleRate);

for (let i = 0; i < buffer.length; i++) {
    buffer[i] = Math.sin(2 * Math.PI * i / audioContext.sampleRate);
}

const audioBuffer = new AudioBuffer({
    length: buffer.length,
    numberOfChannels: 1,
    sampleRate: audioContext.sampleRate
});
audioBuffer.copyToChannel(buffer, 0);

const sourceNode = new AudioBufferSourceNode(audioContext, {
    buffer: audioBuffer,
    loop: true,
    playbackRate: 440
});
sourceNode.connect(audioContext.destination);
sourceNode.start();
In this scenario, playbackRate seems to be capped at 1024: any value higher than this has no audible effect. I verified its maximum value (sourceNode.playbackRate.maxValue) and it's around 3.4e+38, which is way above what I'm trying to achieve.
I'm wondering if there's something I'm missing or if my understanding of this API is wrong. I'm using the latest version of Google Chrome, if that changes anything.
This is a bug in Chrome: https://crbug.com/712256. I believe Firefox may implement this correctly (but I did not check).
Note, also, that Chrome uses very simple interpolation. The quality of the output degrades quite a bit when the playbackRate is very different from 1.
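One way to sidestep both issues is to bake a single cycle at a higher base frequency, so the playbackRate needed for audible pitches stays close to 1 (or simply use an OscillatorNode for standard periodic waveforms). A minimal sketch, reusing the audioContext from the question and assuming a plain sine is enough:

// One cycle at a 440 Hz base frequency; now playbackRate = desiredFrequency / 440
const cycleLength = Math.round(audioContext.sampleRate / 440);
const oneCycle = new Float32Array(cycleLength);
for (let i = 0; i < cycleLength; i++) {
    oneCycle[i] = Math.sin(2 * Math.PI * i / cycleLength);
}
const cycleBuffer = audioContext.createBuffer(1, cycleLength, audioContext.sampleRate);
cycleBuffer.copyToChannel(oneCycle, 0);
// e.g. an 880 Hz tone needs playbackRate = 880 / 440 = 2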

HTML5 Canvas game player limit

I placed 1000 players on a canvas but their movement is not smooth. Is there a way for the canvas to handle that many players? What options do I have?
The players' positions are sent 20 times per second using Socket.IO. When the event reaches the client, it draws the players on the canvas.
var images = {};
images.spriteSheet = new Image();
images.spriteSheet.src = "/images/spriteSheet.png";

socket.on('drawing', function(data) {
    ctx.clearRect(0, 0, 1000, 500);
    for (var i = 0; i < data.length; i++) {
        ctx.drawImage(images.spriteSheet, data[i].sx, data[i].sy, 100, 100, data[i].x, data[i].y, 70, 70);
    }
});
I placed 1000 players on a canvas but their movement is not smooth. Is there a way for the canvas to handle that many players?
I assume you mean you're drawing 1,000 sprites to a canvas 20x per second. No doubt, that's going to be tough.
The players' positions are sent 20 times per second using Socket.IO.
Surely you can send a position and a vector instead, and let the client update per-frame locally, predictively, until the next real update. That's how pretty much all other games work, and for a reason: the assumption that you can even get data 20 times a second isn't always going to hold.
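A minimal dead-reckoning sketch of that idea; the message shape { id, x, y, vx, vy, sx, sy } with velocities in pixels per second is an assumption, not your actual protocol:

var players = {};

socket.on('drawing', function(data) {
    for (var i = 0; i < data.length; i++) {
        players[data[i].id] = data[i]; // snap to the authoritative server state
    }
});

// Call this from a requestAnimationFrame loop with the per-frame dt in seconds
function render(dtSeconds) {
    ctx.clearRect(0, 0, 1000, 500);
    for (var id in players) {
        var p = players[id];
        p.x += p.vx * dtSeconds; // predict forward until the next server update
        p.y += p.vy * dtSeconds;
        ctx.drawImage(images.spriteSheet, p.sx, p.sy, 100, 100, p.x, p.y, 70, 70);
    }
}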
Without more details it's hard to give a specific answer, but in the general case I'd suggest looking into SVG. Then, you can update individual elements and let the browser worry about compositing and what not, which is likely going to be faster than what you can pull off in the canvas. You'll have to experiment though for your specific use case.
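To give a flavour of the SVG route, you could create one <image> element per player once and then only update its transform each frame, leaving compositing to the browser. A sketch, assuming an <svg id="game"> element and the players map from above:

var svg = document.getElementById('game');

function addPlayerSprite(p) {
    var img = document.createElementNS('http://www.w3.org/2000/svg', 'image');
    img.setAttribute('href', '/images/spriteSheet.png'); // older browsers need xlink:href
    img.setAttribute('width', 70);
    img.setAttribute('height', 70);
    svg.appendChild(img);
    p.el = img; // keep a handle so we can move it later
}

function render() {
    for (var id in players) {
        var p = players[id];
        p.el.setAttribute('transform', 'translate(' + p.x + ',' + p.y + ')');
    }
}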

Analyser not updating canvas draws as quickly as audio is coming in

Whenever a new buffer comes into my client I want to redraw that instance of audio onto my canvas. I took the sample code from http://webaudioapi.com/samples/visualizer/ and tried to alter it to fit my needs in a live environment. I seem to have something working, because I do see the canvas updating when I call .draw(), but it's not nearly as fast as it should be; I'm probably seeing about 1 fps as it is. How do I speed up my fps while still calling draw for each instance of a new buffer?
Entire code:
https://github.com/grkblood13/web-audio-stream/tree/master/visualizer
Here's the portion calling .draw() for every buffer:
function playBuffer(audio, sampleRate) {
    var source = context.createBufferSource();
    var audioBuffer = context.createBuffer(1, audio.length, sampleRate);
    source.buffer = audioBuffer;
    audioBuffer.getChannelData(0).set(audio);
    source.connect(analyser);

    var visualizer = new Visualizer(analyser);
    visualizer.analyser.connect(context.destination);
    visualizer.draw(); // Draw new canvas for every new set of data

    if (nextTime == 0) {
        nextTime = context.currentTime + 0.05; // add 50ms latency to work well across systems - tune this if you like
    }
    source.start(nextTime);
    nextTime += source.buffer.duration; // Make the next buffer wait the length of the last buffer before being played
}
And here's the .draw() method:
Visualizer.prototype.draw = function() {
    function myDraw() {
        this.analyser.smoothingTimeConstant = SMOOTHING;
        this.analyser.fftSize = FFT_SIZE;

        // Get the frequency data from the currently playing music
        this.analyser.getByteFrequencyData(this.freqs);
        this.analyser.getByteTimeDomainData(this.times);

        var width = Math.floor(1 / this.freqs.length, 10);

        // Draw the time domain chart.
        this.drawContext.fillStyle = 'black';
        this.drawContext.fillRect(0, 0, WIDTH, HEIGHT);
        for (var i = 0; i < this.analyser.frequencyBinCount; i++) {
            var value = this.times[i];
            var percent = value / 256;
            var height = HEIGHT * percent;
            var offset = HEIGHT - height - 1;
            var barWidth = WIDTH / this.analyser.frequencyBinCount;
            this.drawContext.fillStyle = 'green';
            this.drawContext.fillRect(i * barWidth, offset, 1, 2);
        }
    }
    requestAnimFrame(myDraw.bind(this));
}
Do you have a working demo? You can easily debug this using the Timeline in Chrome and find out which process takes long. Also, please take the unnecessary math out; most of your code doesn't need to be executed every frame. And how many times is the draw function called from playBuffer? When you call draw(), at the end of that function it requests a new animation frame. If you call it every time you get a buffer, you get many more cycles of math -> drawing -> request frame, which also makes it very slow. If you are already using requestAnimationFrame, you should only call the draw function once.
To fix up the multi-frame issue:
window.animframe = requestAnimFrame(myDraw.bind(this));
And in your playBuffer:
if (!animframe) visualizer.draw();
This makes sure the draw function only runs when there is no pending animation frame request.
Do you have a live example demo? I'd like to run it through some profiling. You're trying to get updates for the playing audio, not just once per chunk, right?
I see a number of inefficiencies:
You're copying the data at least once more than necessary; you should have your scheduleBuffers() method create an AudioBuffer of the appropriate length, rather than an array that then needs to be converted (there's a sketch of this at the end of this answer).
If I understand your code logic, it's going to create a new Visualizer for every incoming chunk, although they use the same Analyser. I'm not sure you really want a new Visualizer every time; in fact, I think you probably don't.
You're using a pretty big fftSize, which might be desirable for a frequency analysis, but at 2048/44100 you're sampling more than you need. Minor point, though.
I'm not sure why you're doing a getByteFrequencyData at all.
I think the extra closure may be causing memory leakage. This is one of the reasons I'd like to run it through the dev tools.
You should move the barWidth definition outside of the loop, along with the snippet length:
var snippetLength = this.analyser.frequencyBinCount;
var barWidth = WIDTH/snippetLength;
If you can post a live demo that shows the 1 fps behavior, or send a URL to me privately (cwilso at google or gmail), I'd be happy to take a look.
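To illustrate the first point in the list above, here is a sketch of writing incoming samples straight into an AudioBuffer so nothing needs converting later; the chunk is assumed to already be a Float32Array of mono samples:

function makeBuffer(chunk, sampleRate) {
    // One copy: straight from the incoming chunk into the AudioBuffer
    var audioBuffer = context.createBuffer(1, chunk.length, sampleRate);
    audioBuffer.copyToChannel(chunk, 0);
    return audioBuffer;
}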
