This is my first Stack Overflow question, so I'm sorry in advance.
This is a Web Audio API question relating to React Hooks (specifically useContext/useReducer, the dream team).
BASICALLY... I've been trying to use the Web Audio API to create an oscillator and a slider to control it. So far, so good, and in vanilla JS I managed it by using setInterval() and listening for the changes:
setInterval(() => {
  if (!osc) {
    console.log("Oscillator is stopped");
  } else {
    let freqSliderVal = document.getElementById("freq-slide").value;
    osc.frequency.value = freqSliderVal;
    osc.type = selectedWaveform;
    console.log(`Oscillator is playing. Frequency value is ${freqSliderVal}`);
  }
}, 50);
I can change the frequency of the note and the waveform without the note stopping, and everything's grand. You can probably see where this one's going. React basically hates this: every time you move the slider, the component re-renders, because the audio context lives inside a useEffect. I'm aware that my dependency array makes the effect re-run every time the frequency changes, but that was the only way I could get it to actually register the change in frequency:
useEffect(() => {
  let audioContext = new AudioContext();
  let osc = audioContext.createOscillator();
  osc.type = waveform;
  osc.frequency.value = freq;
  osc.connect(audioContext.destination);
  osc.start(audioContext.currentTime);
  osc.stop(audioContext.currentTime + 3);
  audioContextRef.current = audioContext;
  audioContext.suspend();
  return () => osc.disconnect(audioContext.destination);
}, [freq, waveform]);
How could I make it so that:
a) I can move the fader in real time to control the frequency of the output?
b) I can change the waveform (controlled with a Context and linked to another component), also in real time?
Any help you can provide would be absolutely wonderful, as I'm beginning to really hate React now after everything started so wonderfully.
Thanks!
Sam
I created a separate component with the input slider that does what you want. I defined an audioContext outside of the component to avoid re-defining it in every state update.
Since I don't know how you handle start and stop, I just start the oscillator on component load, but you can easily change that with a useEffect in your component. The way it is now, you will hear the sound as soon as you move the slider.
The component is the following:
import React, { useEffect, useState, useRef } from 'react';

const audioContext = new AudioContext();
const osc = audioContext.createOscillator();
osc.type = 'square';
osc.start();

const Osc = (props) => {
  const [freq, setFrequency] = useState(0);
  const { waveform } = props;

  const onSlide = (e) => {
    const { target } = e;
    setFrequency(target.value);
  };

  useEffect(() => {
    osc.frequency.value = freq;
    osc.connect(audioContext.destination);
    return () => osc.disconnect(audioContext.destination);
  }, [freq]);

  useEffect(() => {
    osc.type = waveform;
  }, [waveform]);

  return (
    <input
      name="freqSlide"
      type="range"
      min="20"
      max="1000"
      onChange={(e) => onSlide(e)}
    />
  );
};

export default Osc;
To use it you first need to import it:
import Osc from './osc';
And use it in your render function:
<Osc waveform="square" />
The waveform is passed in as a prop since you said you update it from a different component; change it there and the change will be reflected in this component.
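If you do need explicit start/stop control, one option is to gate audibility with resume()/suspend() on the shared audioContext inside another useEffect. This is only a minimal sketch, assuming a playing prop (or Context value) that you provide from elsewhere:

// Sketch: gate audibility with resume()/suspend() instead of
// starting/stopping the oscillator node. `playing` is an assumed prop.
useEffect(() => {
  if (playing) {
    audioContext.resume();
  } else {
    audioContext.suspend();
  }
}, [playing]);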
I have a form that dynamically adds elements to a React state array on click, and between clicks the state persists. I am now trying to do the same thing programmatically, but in each iteration of the loop the state does not persist. Is the only answer to this truly a context object or local storage, or is there something wrong with my iteration that I can correct to allow the state to persist?
I've simplified the code. Basically, the button firing will add as many elements as I want, but trying to tell React to create 3 elements via the for...of loop only creates 1. I have scripts to write state to session storage, so if there's not some big thing I'm missing, I'll probably just do that, but I figured I'd ask, because it would drastically improve the overall health of my app if I knew the solution to this.
const sectionI = {
  type: "i",
  sectionArea: "",
};

const [i, setI] = useState([])
const strArr = ["i","i","i"]

const addI = () => {
  const newI = [...i, {...sectionI}]
  setI(newI)
}

<button onClick={() => addI()}>Add One Image</button>

const addMultiple = () => {
  for (const el of strArr) {
    const newI = [...i, {...sectionI}]
    setI(newI)
  }
}
I will show you how to fix it and link to another one of my answers for the explanation. Here is the fix:
const addMultiple = () => {
  for (const el of strArr) {
    setI(prevState => [
      ...prevState,
      {...sectionI},
    ])
  }
}
And here is why it is happening: https://stackoverflow.com/a/66560223/1927991
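For completeness: every setI call in the original loop starts from the same stale i, which is why only one element appears. Another option is to build all the new entries first and commit a single update (a sketch using the same sectionI and strArr as above):

const addMultiple = () => {
  // One state update that appends one new section per entry in strArr.
  setI(prevState => [
    ...prevState,
    ...strArr.map(() => ({ ...sectionI })),
  ]);
};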
I was having the following problem:
When I call AudioBufferSourceNode.start() on multiple tracks, I sometimes get a delay between them.
So, per chrisguttandin's answer, I tried the approach using OfflineAudioContext (thanks to chrisguttandin).
I wanted to play two different mp3 files completely simultaneously, so I used an OfflineAudioContext to render them into a single AudioBuffer.
And I succeeded in playing the rendered buffer.
The following is a demo of it.
CodeSandBox
The code in the demo is based on the code in the following page.
OfflineAudioContext - Web APIs | MDN
However, the demo does not allow you to change the gain of each of the two tracks individually.
Is there any way to change the gain of the two tracks during playback?
What I would like to do is as follows.
I want to play two pieces of audio perfectly simultaneously.
I want to change the gain of each of the two tracks in real time.
Therefore, if the above can be achieved some other way, there is no need to use OfflineAudioContext at all.
The only way I can think of to do this with OfflineAudioContext is to re-run startRendering on every change of the input type="range", but I don't think that is practical from a performance standpoint.
Also, I looked for a solution to this problem, but could not find one.
code
let ctx = new AudioContext(),
offlineCtx,
tr1,
tr2,
renderedBuffer,
renderedTrack,
tr1gain,
tr2gain,
start = false;
const trackArray = ["track1", "track2"];
const App = () => {
const [loading, setLoading] = useState(true);
useEffect(() => {
(async () => {
const bufferArray = trackArray.map(async (track) => {
const res = await fetch("/" + track + ".mp3");
const arrayBuffer = await res.arrayBuffer();
return await ctx.decodeAudioData(arrayBuffer);
});
const audioBufferArray = await Promise.all(bufferArray);
const source = audioBufferArray[0];
offlineCtx = new OfflineAudioContext(
source.numberOfChannels,
source.length,
source.sampleRate
);
tr1 = offlineCtx.createBufferSource();
tr2 = offlineCtx.createBufferSource();
tr1gain = offlineCtx.createGain();
tr2gain = offlineCtx.createGain();
tr1.buffer = audioBufferArray[0];
tr2.buffer = audioBufferArray[1];
tr1.connect(tr1gain);
tr1gain.connect(offlineCtx.destination);
tr2.connect(tr2gain);
tr2gain.connect(offlineCtx.destination);
tr1.start();
tr2.start();
offlineCtx.startRendering().then((buffer) => {
renderedBuffer = buffer;
renderedTrack = ctx.createBufferSource();
renderedTrack.buffer = renderedBuffer;
setLoading(false);
});
})();
return () => {
ctx.close();
};
}, []);
const [playing, setPlaying] = useState(false);
const playAudio = () => {
if (!start) {
renderedTrack = ctx.createBufferSource();
renderedTrack.buffer = renderedBuffer;
renderedTrack.connect(ctx.destination);
renderedTrack.start();
setPlaying(true);
start = true;
return;
}
ctx.resume();
setPlaying(true);
};
const pauseAudio = () => {
ctx.suspend();
setPlaying(false);
};
const stopAudio = () => {
renderedTrack.disconnect();
start = false;
setPlaying(false);
};
const changeVolume = (e) => {
const target = e.target.ariaLabel;
target === "track1"
? (tr1gain.gain.value = e.target.value)
: (tr2gain.gain.value = e.target.value);
};
const Inputs = trackArray.map((track, index) => (
<div key={index}>
<span>{track}</span>
<input
type="range"
onChange={changeVolume}
step="any"
max="1"
aria-label={track}
disabled={loading ? true : false}
/>
</div>
));
return (
<>
<button
onClick={playing ? pauseAudio : playAudio}
disabled={loading ? true : false}
>
{playing ? "pause" : "play"}
</button>
<button onClick={stopAudio} disabled={loading ? true : false}>
stop
</button>
{Inputs}
</>
);
};
As a test, I'd go back to your original solution, but instead of
tr1.start();
tr2.start();
try something like
t = ctx.currentTime;
tr1.start(t+0.1);
tr2.start(t+0.1);
There will be a delay of about 100 ms before audio starts, but they should be synchronized precisely. If this works, reduce the 0.1 to something smaller, but not zero. Once this is working, you can then connect separate gain nodes to each track and control the gains of each in real-time.
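Putting that together with your variable names, here is a rough sketch of the graph built directly in the real-time context. It skips the offline render entirely; audioBufferArray is the decoded buffers from your existing fetch/decode step:

// Sketch only: per-track sources and gains in the real-time context.
tr1 = ctx.createBufferSource();
tr2 = ctx.createBufferSource();
tr1.buffer = audioBufferArray[0];
tr2.buffer = audioBufferArray[1];
tr1gain = ctx.createGain();
tr2gain = ctx.createGain();
tr1.connect(tr1gain).connect(ctx.destination);
tr2.connect(tr2gain).connect(ctx.destination);

// Schedule both a little in the future so they start together.
const t = ctx.currentTime;
tr1.start(t + 0.1);
tr2.start(t + 0.1);

// Your existing changeVolume handler can now update
// tr1gain.gain.value / tr2gain.gain.value while the tracks play.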
Oh, one other thing: instead of resuming the context after calling start, you might want to do something like
ctx.resume()
  .then(() => {
    let t = ctx.currentTime;
    tr1.start(t + 0.1);
    tr2.start(t + 0.1);
  });
The clock isn't running if the context is suspended, and resuming doesn't happen instantly. It may take some time to restart the audio HW.
Oh, here's another approach, since I see that the buffer you rendered with the offline context has two channels in it.
Let s be the AudioBufferSourceNode in the real-time context that plays the rendered buffer (renderedTrack in your code).
// Split the two channels of the rendered buffer into separate mono signals.
let splitter = new ChannelSplitterNode(ctx, {numberOfOutputs: 2});
s.connect(splitter);
// One gain node per channel so each can be controlled independently.
let g1 = new GainNode(ctx);
let g2 = new GainNode(ctx);
splitter.connect(g1, 0, 0);
splitter.connect(g2, 1, 0);
// Recombine the two mono signals into one stereo signal.
// (The merger needs two inputs, one per channel.)
let merger = new ChannelMergerNode(ctx, {numberOfInputs: 2});
g1.connect(merger, 0, 0);
g2.connect(merger, 0, 1);
// Connect merger to the downstream nodes or the destination.
You can now start s and modify g1 and g2 as desired to produce the output you want.
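For example, the changeVolume handler from the question could then drive g1 and g2 directly. This is just a sketch based on the graph above; setTargetAtTime is one way to avoid audible clicks from abrupt gain jumps:

const changeVolume = (e) => {
  const gainNode = e.target.ariaLabel === "track1" ? g1 : g2;
  // Ramp toward the new value instead of jumping, to avoid clicks.
  gainNode.gain.setTargetAtTime(Number(e.target.value), ctx.currentTime, 0.01);
};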
You can remove the gain nodes created in the offline context; they're not needed unless you really want to apply some kind of gain in the offline context.
But if I were doing this, I'd prefer not to use the offline context unless absolutely necessary.
I have an application in React that uses the Web Audio API for recording voice. It has a play/pause feature.
Sometimes the audio plays without any voice, sometimes some parts are missing but the audio length is correct, and sometimes the audio is correct. Below is my code. Can anybody point out the mistake in it?
import React from 'react';
import { Button} from 'react-bootstrap';
import { saveAs } from 'file-saver';
import 'bootstrap/dist/css/bootstrap.min.css';
import './App.css';
var audioBufferUtils = require("audio-buffer-utils")
var encodeWAV = require('audiobuffer-to-wav')
let audio = new Audio();
var context = new AudioContext()
var audioBuffer = []
function App() {
var status = true
function listen() {
initDevice()
}
function pauseRecording(){
if (status){
context.suspend()
status = false
}
else{
context.resume()
status = true
}
}
function initDevice(){
const handleSuccess = function(stream) {
const source = context.createMediaStreamSource(stream);
const processor = context.createScriptProcessor(1024, 1, 1);
source.connect(processor);
processor.connect(context.destination);
processor.onaudioprocess = function(e) {
audioBuffer = audioBufferUtils.concat(audioBuffer,e.inputBuffer)
};
};
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
.then(handleSuccess);
};
function saveAudio(){
context.suspend()
var wav = encodeWAV(audioBuffer)
var blob = new Blob([ new DataView(wav) ], {
type: 'audio/wav'
})
let finalAudio = new Audio()
var url = window.URL.createObjectURL(blob)
finalAudio.src = url
finalAudio.play()
saveAs(blob,"test.wav")
}
return (
<div >
<Button variant="warning" id="listen" onClick={listen}>Listen</Button>
<Button variant="warning" id="stop" onClick={pauseRecording}>play/pause</Button>
<Button variant="warning" id="stop" onClick={saveAudio}>Save</Button>
</div>
);
}
export default App;
I want to use the Web Audio API because I have to process the audio once recording is finished.
The code seems fine. I suspect the issue here is that the ScriptProcessor is choking, i.e. it can't finish executing onaudioprocess in time for the next call to onaudioprocess. That usually results in this kind of random behaviour.
A simple thing to try is to play with the ScriptProcessor's bufferSize parameter. Increasing that might help sometimes.
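For example (16384 is the largest buffer size the API allows; a bigger buffer means fewer, less time-critical callbacks at the cost of latency):

const processor = context.createScriptProcessor(16384, 1, 1);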
Another possible solution is to move the concatenation of the AudioBuffers to a Web Worker, so that the main thread is ready to execute the next onaudioprocess. I believe libraries like recorder.js do that.
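Whether or not you move the work into a worker, the underlying idea is to keep onaudioprocess cheap: just copy the raw samples out, and do the expensive concatenation once, at save time. A rough sketch of that (chunks and mergeChunks are illustrative names, not part of your code):

// In initDevice(): keep the audio callback as cheap as possible.
const chunks = [];
processor.onaudioprocess = (e) => {
  // Copy the samples out; the underlying buffer is reused by the browser.
  chunks.push(new Float32Array(e.inputBuffer.getChannelData(0)));
};

// In saveAudio(): stitch the chunks together once, off the audio callback.
function mergeChunks(chunks, sampleRate) {
  const length = chunks.reduce((n, c) => n + c.length, 0);
  const merged = context.createBuffer(1, length, sampleRate); // mono buffer
  const channel = merged.getChannelData(0);
  let offset = 0;
  for (const c of chunks) {
    channel.set(c, offset);
    offset += c.length;
  }
  return merged; // an AudioBuffer you can pass to encodeWAV
}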
Finally, you can also consider using something like https://github.com/mattdiamond/Recorderjs or one of its updated forks to do this.
Whatever I do, I get an error message while trying to play a sound:
Uncaught (in promise) DOMException.
After searching on Google, I found that it should appear if the audio is autoplayed before the user interacts with the page, but that's not the case for me. I even did this:
componentDidMount() {
  let audio = new Audio('sounds/beep.wav');
  audio.load();
  audio.muted = true;
  document.addEventListener('click', () => {
    audio.muted = false;
    audio.play();
  });
}
But the message still appears and the sound doesn't play. What should I do?
The audio is an HTMLMediaElement, and calling play() returns a Promise, so it needs to be handled. Depending on the size of the file, it's usually ready to go, but if it is not loaded yet (e.g. the promise is still pending), it will throw the "AbortError" DOMException.
You can check that play() actually returned a promise first, then catch the error to silence the message. For example:
componentDidMount() {
  this.audio = new Audio('sounds/beep.wav')
  this.audio.load()
  this.playAudio()
}

playAudio() {
  const audioPromise = this.audio.play()
  if (audioPromise !== undefined) {
    audioPromise
      .then(_ => {
        // autoplay started
      })
      .catch(err => {
        // catch dom exception
        console.info(err)
      })
  }
}
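If you still want the click-to-start approach from your question, the same promise handling applies inside the gesture handler. A minimal sketch:

componentDidMount() {
  this.audio = new Audio('sounds/beep.wav')
  this.audio.load()
  // Wait for a user gesture so the autoplay policy allows playback.
  document.addEventListener('click', () => this.playAudio(), { once: true })
}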
Another pattern that has worked well without triggering that error is creating a component that renders an HTML audio element with the autoPlay attribute, and then rendering it where needed. For example:
const Sound = ({ soundFileName, ...rest }) => (
  <audio autoPlay src={`sounds/${soundFileName}`} {...rest} />
)

const ComponentToAutoPlaySoundIn = () => (
  <>
    ...
    <Sound soundFileName="beep.wav" />
    ...
  </>
)
Simple error tone
If you just want to play a simple error tone (for non-visual feedback in a barcode scanner environment, for instance) and don't want to install dependencies, it can be pretty simple. Just import your audio file:
import ErrorAudio from './error.mp3'
And in the code, reference it, and play it:
var AudioPlay = new Audio(ErrorAudio);
AudioPlay.play();
Only discovered this after messing around with more complicated options.
I think it would be better to use this component (https://github.com/justinmc/react-audio-player) instead of direct DOM manipulation.
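If you go that route, usage looks roughly like this; the src and autoPlay props are taken from that project's README, so treat the exact API as an assumption to verify:

import ReactAudioPlayer from 'react-audio-player';

// Wraps a native <audio> element, so the same autoplay rules apply.
<ReactAudioPlayer src="sounds/beep.wav" autoPlay />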
It is very straightforward indeed:
const [, setMuted] = useState(true)

useEffect(() => {
  const player = new Audio('./sound.mp3');
  const playPromise = player.play();
  if (playPromise !== undefined) {
    // Don't return the promise from the effect; just handle it here.
    playPromise.then(() => setMuted(false)).catch(() => setMuted(false));
  }
}, [])
I hope it works now :)
I am trying to create a new state-dependent Audio instance in React. When using require(), I receive the warning, "Critical dependency: the request of a dependency is an expression." I cannot simply import the file since the audio's source is state-dependent. How can I work around this?
The following code gives the error:
playSong = () => {
  this.setState(this.state, function() {
    let source = require(this.state.songList[this.state.songIndex].src);
    let audio = new Audio(source);
    audio.play();
  });
}
The require() function only seems to work when given a string literal.
You cannot require dynamic values, sadly.
You could import all your files into the songList array statically first, and pick the correct one from that:
const songList = [
  require('./path/to/song1.mp3'),
  require('./path/to/song2.mp3')
];

class MyComponent extends React.Component {
  playSong = () => {
    // songList already holds the imported sources, so no `.src` lookup is needed.
    let source = songList[this.state.songIndex];
    let audio = new Audio(source);
    audio.play();
  };

  render () {
    // ...
  }
}