I am doing some audio processing with the Web Audio API in JavaScript and I need some advice.
I am trying to do something like this:
const stream = await navigator.mediaDevices.getUserMedia({
  audio: true,
});
// Adding some constraints to the stream
for (const track of stream.getAudioTracks()) {
  await track.applyConstraints({ echoCancellation: true, noiseSuppression: false, ... });
}
// Creating the context, source and destination
const context = new AudioContext();
const source = context.createMediaStreamSource(stream);
const destination = context.createMediaStreamDestination();
// Filtering some audio (filter is an AudioNode created elsewhere)
source.connect(filter);
filter.connect(destination);
// Applying new constraints after filtering
for (const destTrack of destination.stream.getAudioTracks()) {
  await destTrack.applyConstraints({ autoGainControl: true, ... });
}
But after trying to apply new constraints to the destination I get the error OverconstrainedError {name: 'OverconstrainedError', message: 'Cannot satisfy constraints', constraint: ''}. Why is error.constraint === ''? How can I resolve this issue?
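A small diagnostic sketch like the following (using the same stream and destination variables as above; getCapabilities() may not exist in every browser, hence the guard) can at least show what the destination track reports compared to the original microphone track:

// Diagnostic sketch, not part of the original code
const [micTrack] = stream.getAudioTracks();
const [outTrack] = destination.stream.getAudioTracks();
console.log('mic settings:', micTrack.getSettings());
console.log('mic capabilities:', micTrack.getCapabilities ? micTrack.getCapabilities() : 'n/a');
console.log('destination settings:', outTrack.getSettings());
console.log('destination capabilities:', outTrack.getCapabilities ? outTrack.getCapabilities() : 'n/a');
console.log('supported constraints:', navigator.mediaDevices.getSupportedConstraints());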
I have created a real-time voice chat application for a game I am making. I got it to work completely fine using the AudioContext.createScriptProcessor() method.
Here's the code; I left out parts that weren't relevant:
// establish websocket connection
const audioData = []
// websocket connection.onMessage(data) =>
audioData.push(decodeBase64(data)) // push audio data coming from another player into the array
// on getUserMedia(stream) =>
const audioCtx = new AudioContext({ latencyHint: "interactive", sampleRate: 22050 })
const inputNode = audioCtx.createMediaStreamSource(stream)
var processor = audioCtx.createScriptProcessor(2048, 1, 1);
var outputNode = audioCtx.destination
inputNode.connect(processor) // the tuner node that sat between these was omitted as not relevant
processor.connect(outputNode)
processor.onaudioprocess = function (e) {
  var input = e.inputBuffer.getChannelData(0);
  webSocketSend(input); // send microphone input to other sockets via a function set up in a different file; all it does is base64 encode, then send
  // if there is data from the server, play it; else, play silence
  var output;
  if (audioData.length > 0) {
    output = audioData[0];
    audioData.splice(0, 1);
  } else {
    output = new Float32Array(2048); // silence
  }
  e.outputBuffer.getChannelData(0).set(output); // write the chosen samples to the output
};
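The webSocketSend and decodeBase64 helpers live in a different file and aren't shown above; purely as an illustration, hypothetical implementations that base64-encode the raw Float32Array bytes might look roughly like this:

// Hypothetical helpers (not the original code): base64 encode/decode raw Float32Array bytes
function encodeBase64(samples) {
  const bytes = new Uint8Array(samples.buffer, samples.byteOffset, samples.byteLength);
  let binary = "";
  for (let i = 0; i < bytes.length; i++) binary += String.fromCharCode(bytes[i]);
  return btoa(binary);
}

function decodeBase64(base64) {
  const binary = atob(base64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return new Float32Array(bytes.buffer);
}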
The only issue is that the createScriptProcessor() method is deprecated. As recommended, I attempted to do this using AudioWorklet nodes. However, I quickly ran into a problem: I can't access the user's microphone input, or set the output, from the main file where the WebSocket connection is.
Here is my code for main.js:
document.getElementById('btn').onclick = () => {createVoiceChatSession()}
//establish websocket connection
const audioData = []
//webSocket connection.onMessage (data) =>
audioData.push(data) //how do I get this data to the worklet Node???
var voiceChatContext
function createVoiceChatSession() {
  voiceChatContext = new AudioContext()
  navigator.mediaDevices.getUserMedia({ audio: true }).then(async stream => {
    await voiceChatContext.audioWorklet.addModule('module.js')
    const microphone = voiceChatContext.createMediaStreamSource(stream)
    const processor = new AudioWorkletNode(voiceChatContext, 'processor')
    microphone.connect(processor).connect(voiceChatContext.destination)
  }).catch(err => console.log(err))
}
Here is my code for module.js:
class processor extends AudioWorkletProcessor {
  constructor() {
    super()
  }
  // copies the input to the output
  process(inputList, outputList) { // how do I get the input list data (the data from my microphone) to the main file so I can send it via websocket???
    for (var i = 0; i < inputList[0][0].length; i++) {
      outputList[0][0][i] = inputList[0][0][i]
      outputList[0][1][i] = inputList[0][1][i]
    }
    return true;
  }
}

registerProcessor("processor", processor);
So I can record and process the input, but I can't send the input via WebSocket or pass data coming from the server into the worklet node, because I can't access the input list or output list from the main file where the WebSocket connection is. Does anyone know a way to work around this, or is there a better solution that doesn't use AudioWorklet nodes?
Thank you to all who can help!
I figured it out: all I needed to do was use the port.onmessage handler to exchange data between the worklet and the main file.
processor.port.onmessage = (e) => { /* do something with e.data */ }
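For anyone hitting the same wall, here is a minimal sketch of that two-way exchange, assuming the same main.js / module.js setup as above (the chunk handling is deliberately naive and would need proper buffering in real code, since process() works in 128-frame blocks):

// main.js — after creating the AudioWorkletNode
// mic samples posted by the worklet arrive here; forward them over the WebSocket
processor.port.onmessage = (e) => {
  webSocketSend(e.data) // e.data is a Float32Array of input samples
}
// when audio comes in from the server, hand it to the worklet:
// webSocket connection.onMessage(data) =>
processor.port.postMessage(decodeBase64(data))

// module.js
class processor extends AudioWorkletProcessor {
  constructor() {
    super()
    this.remoteChunks = [] // audio received from the main thread
    this.port.onmessage = (e) => this.remoteChunks.push(e.data)
  }
  process(inputList, outputList) {
    const input = inputList[0][0]
    const output = outputList[0][0]
    if (input) {
      // post a copy of the mic samples; the underlying buffer is reused between calls
      this.port.postMessage(new Float32Array(input))
    }
    const chunk = this.remoteChunks.shift()
    if (chunk && output) output.set(chunk.subarray(0, output.length))
    return true
  }
}
registerProcessor('processor', processor)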
I want to send an audio file to a server (in my case Discord) easily, as if it were coming from the microphone.
I found this code at Send sound through microphone in javascript and modified it to try to fit my use case, but I still cannot get it to work.
navigator.mediaDevices.getUserMedia = () => {
  const audioContext = new AudioContext();
  return fetch('http://127.0.0.1:8000/enemey.ogg', {mode: 'no-cors'})
    .then((response) => response.arrayBuffer())
    .then((arrayBuffer) => audioContext.decodeAudioData(arrayBuffer))
    .then((audioBuffer) => {
      const audioBufferSourceNode = audioContext.createBufferSource();
      const mediaStreamAudioDestinationNode = audioContext.createMediaStreamDestination();
      audioBufferSourceNode.buffer = audioBuffer;
      // Maybe it makes sense to loop the buffer.
      audioBufferSourceNode.loop = true;
      audioBufferSourceNode.start();
      audioBufferSourceNode.connect(mediaStreamAudioDestinationNode);
      return mediaStreamAudioDestinationNode.stream;
    });
};
Any ideas? I cannot find a fix for this, and the error is
[AudioActionCreators] unknown getUserMedia error: EncodingError
reported by Discord.
(All of this is done from the console, not an external program.)
I have a web project (vanilla HTML/CSS/JS only) with three audio sources. The idea is for all three to play simultaneously, but I noticed on mobile that the files were playing out of sync (i.e. one source would start, then a few ms later the second would start, then the third). I believe this happens because each file starts playing as soon as it has loaded, so I would like to wait until all the files have loaded and then call the play() method on all three at the same time.
What would be the best way to achieve this using vanilla JS?
Example: https://jacksorjacksor.xyz/soundblocks/
Repo: https://github.com/jacksorjacksor/jacksorjacksor/tree/master/soundblocks
TIA!
Rich
MediaElements are meant for normal playback of media and aren't optimized for low latency. The best option is to use the Web Audio API and AudioBuffers.
You first fetch each file's data into memory, then decode the audio data, and once all of it has been decoded you can schedule every buffer to start at the same precise moment:
(async() => {
  const urls = [ "layer1_big.mp3", "layer2_big.mp3", "layer3_big.mp3" ]
    .map( (url) => "https://cdn.jsdelivr.net/gh/jacksorjacksor/jacksorjacksor/soundblocks/audio/" + url );
  // first, fetch each file's data
  const data_buffers = await Promise.all(
    urls.map( (url) => fetch( url ).then( (res) => res.arrayBuffer() ) )
  );
  // get our AudioContext
  const context = new (window.AudioContext || window.webkitAudioContext)();
  // decode the data
  const audio_buffers = await Promise.all(
    data_buffers.map( (buf) => context.decodeAudioData( buf ) )
  );
  // to enable the AudioContext we need to handle a user gesture
  const btn = document.querySelector( "button" );
  btn.onclick = (evt) => {
    const current_time = context.currentTime;
    audio_buffers.forEach( (buf) => {
      // a buffer source is a really small object
      // don't be afraid of creating and throwing it
      const source = context.createBufferSource();
      // we only connect the decoded data, it's not copied
      source.buffer = buf;
      // in order to make some noise
      source.connect( context.destination );
      // make it loop?
      //source.loop = true;
      // start them all 0.5s after we began, so we're sure they're in sync
      source.start( current_time + 0.5 );
    } );
  };
  btn.disabled = false;
})();
<button disabled>play</button>
Looking for experience working with media devices:
I'm working on recording to cache and playing back from a microphone source, in Firefox & Chrome using HTML5.
This is what I have so far:
var constraints = {audio: true, video: false};
var promise = navigator.mediaDevices.getUserMedia(constraints);
I've been checking the official MDN documentation on getUserMedia,
but found nothing about storing the audio from the stream to cache.
No such question seems to have been asked previously on Stack Overflow; I'm wondering if it's possible.
Thank you.
You can simply use the MediaRecorder API for this task.
In order to record only the audio from your video+audio gUM stream, you will need to create a new MediaStream from the gUM stream's audio track:
// using async for brevity
async function doit() {
  // first request both mic and camera
  const gUMStream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
  // create a new MediaStream with only the audioTrack
  const audioStream = new MediaStream(gUMStream.getAudioTracks());
  // to save recorded data
  const chunks = [];
  const recorder = new MediaRecorder(audioStream);
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.start();
  // when user decides to stop
  stop_btn.onclick = e => {
    recorder.stop();
    // kill all tracks to free the devices
    gUMStream.getTracks().forEach(t => t.stop());
    audioStream.getTracks().forEach(t => t.stop());
  };
  // export all the saved data as one Blob
  recorder.onstop = e => exportMedia(new Blob(chunks));
  // play current gUM stream
  vid.srcObject = gUMStream;
  stop_btn.disabled = false;
}

function exportMedia(blob) {
  // here blob is your recorded audio file, you can do whatever you want with it
  const aud = new Audio(URL.createObjectURL(blob));
  aud.controls = true;
  document.body.appendChild(aud);
  document.body.removeChild(vid);
}
doit()
.then(e=>console.log("recording"))
.catch(e => {
console.error(e);
console.log('you may want to try from jsfiddle: https://jsfiddle.net/5s2zabb2/');
});
<video id="vid" controls autoplay></video>
<button id="stop_btn" disabled>stop</button>
And as a fiddle since stacksnippets don't work very well with gUM...
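If by "cache" you mean actually persisting the recording, one option (an assumption on my part, using the standard Cache API rather than anything shown above) is to store the exported Blob in a named cache:

// Sketch: persist the recorded Blob with the Cache API (requires a secure context)
async function cacheRecording(blob) {
  const cache = await caches.open('recordings');
  // wrap the Blob in a Response so it can be stored under a URL key;
  // 'audio/webm' is an assumption about the MediaRecorder output type
  await cache.put('/recordings/latest', new Response(blob, {
    headers: { 'Content-Type': blob.type || 'audio/webm' }
  }));
}

async function loadRecording() {
  const cache = await caches.open('recordings');
  const res = await cache.match('/recordings/latest');
  return res ? res.blob() : null;
}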
I'm trying to create an audio stream in the browser and send it to a server.
Here is the code:
let recording = false;
let localStream = null;
const session = {
  audio: true,
  video: false
};

function start () {
  recording = true;
  navigator.webkitGetUserMedia(session, initializeRecorder, onError);
}

function stop () {
  recording = false;
  localStream.getAudioTracks()[0].stop();
}

function initializeRecorder (stream) {
  localStream = stream;
  const audioContext = window.AudioContext;
  const context = new audioContext();
  const audioInput = context.createMediaStreamSource(localStream);
  const bufferSize = 2048;
  // create a javascript node
  const recorder = context.createScriptProcessor(bufferSize, 1, 1);
  // specify the processing function
  recorder.onaudioprocess = recorderProcess;
  // connect stream to our recorder
  audioInput.connect(recorder);
  // connect our recorder to the previous destination
  recorder.connect(context.destination);
}

function onError (e) {
  console.log('error:', e);
}

function recorderProcess (e) {
  if (!recording) return;
  const left = e.inputBuffer.getChannelData(0);
  // send left to server here (socket.io can do the job). We don't need stereo.
}
When the start function is fired, the samples can be caught in recorderProcess.
When the stop function is fired, the mic icon in the browser disappears, but...
unless I put if (!recording) return at the beginning of recorderProcess, it still processes samples.
Unfortunately that's not a real solution: the samples are still being received by recorderProcess, and if I fire the start function once more, it will get all the samples from the previous stream as well as the new one.
My question is:
How can I stop/start recording without this issue?
Or, if that's not the best approach:
How can I completely remove the stream in the stop function, so it can safely be initialized again at any time?
recorder.disconnect() should help.
You might want to consider the new MediaRecorder functionality in Chrome Canary shown at https://webrtc.github.io/samples/src/content/getusermedia/record/ (currently video-only I think) instead of the WebAudio API.
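For the recorder.disconnect() suggestion, a minimal sketch of stop() might look like this, assuming recorder and audioInput are hoisted out of initializeRecorder into the same scope as localStream:

let recorder = null;
let audioInput = null;

function stop () {
  recording = false;
  // stop the device so the mic indicator goes away
  localStream.getAudioTracks()[0].stop();
  // tear down the processing graph so onaudioprocess stops firing
  if (audioInput) audioInput.disconnect();
  if (recorder) {
    recorder.disconnect();
    recorder.onaudioprocess = null;
    recorder = null;
  }
}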