Web Audio echoCancellation with multiple sources does not work - javascript

I have an application that plays multiple Web Audio sources concurrently and allows the user to record audio at the same time. It works fine if the physical input (e.g. a webcam microphone) cannot pick up the physical output (e.g. headphones). But if the output can bleed into the input (e.g. using laptop speakers with a webcam), then the recording picks up the other audio sources.
My understanding is the echoCancellation constraint is supposed to address this problem, but it doesn't seem to work when multiple sources are involved.
I've included a simple example below to reproduce the issue. JSFiddle seems to be too strictly sandboxed to allow user media, otherwise I'd host it there.
Steps to reproduce
1. Press record.
2. Make a noise, or just observe. The "metronome" should beep 5 times.
3. After 2 seconds, the <audio> element's source will be set to the recorded audio data.
4. Play the <audio> element - you will hear the "metronome" beep. Ideally, the metronome beep would be "cancelled" via the echoCancellation constraint set on the MediaStream, but it doesn't work this way.
index.html
<!DOCTYPE html>
<html lang="en">
  <body>
    <button onclick="init()">record</button>
    <audio id="audio" controls="true"></audio>
    <script src="demo.js"></script>
  </body>
</html>
demo.js
let audioContext
let stream

async function init() {
  audioContext = new AudioContext()
  stream = await navigator.mediaDevices.getUserMedia({
    audio: {
      echoCancellation: true,
    },
    video: false,
  })
  playMetronome()
  record()
}

function playMetronome(i = 0) {
  if (i > 4) {
    return
  }
  const osc = new OscillatorNode(audioContext, {
    frequency: 440,
    type: 'sine',
  })
  osc.connect(audioContext.destination)
  osc.start()
  osc.stop(audioContext.currentTime + 0.1)
  setTimeout(() => {
    playMetronome(i + 1)
  }, 500)
}

function record() {
  const recorder = new MediaRecorder(stream)
  const data = []
  recorder.addEventListener('dataavailable', (e) => {
    console.log({ event: 'dataavailable', e })
    data.push(e.data)
  })
  recorder.addEventListener('stop', (e) => {
    console.log({ event: 'stop', e })
    const blob = new Blob(data, { type: 'audio/ogg; codecs=opus' })
    const audioURL = window.URL.createObjectURL(blob)
    document.getElementById('audio').src = audioURL
  })
  recorder.start()
  setTimeout(() => {
    recorder.stop()
  }, 2000)
}

Unfortunately this is a long-standing issue in Chrome (and all of its derivatives). It should work in Firefox and Safari.
Here is the ticket: https://bugs.chromium.org/p/chromium/issues/detail?id=687574.
In short, the echo cancellation only works for audio that comes from a peer connection. As soon as the audio is generated or processed locally by the Web Audio API, it is no longer taken into account by the echo canceller.
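A workaround that is often suggested for this bug is to route the locally generated Web Audio output through a loopback RTCPeerConnection, so that the browser treats it as audio coming from a peer connection and includes it in the echo canceller's reference signal. A minimal, untested sketch under that assumption (the function name and wiring are illustrative, not code from the question):

async function createLoopbackDestination(audioContext) {
  // Everything that should be echo-cancelled gets connected to this node
  // instead of audioContext.destination.
  const destination = audioContext.createMediaStreamDestination()

  // Local loopback: one peer connection sends, the other receives.
  const pcOut = new RTCPeerConnection()
  const pcIn = new RTCPeerConnection()
  pcOut.onicecandidate = (e) => e.candidate && pcIn.addIceCandidate(e.candidate)
  pcIn.onicecandidate = (e) => e.candidate && pcOut.addIceCandidate(e.candidate)

  // Play the received audio through a media element rather than through
  // the Web Audio graph, so it counts as "remote" playback.
  pcIn.ontrack = (e) => {
    const el = new Audio()
    el.srcObject = e.streams[0]
    el.play()
  }

  destination.stream.getTracks().forEach((t) => pcOut.addTrack(t, destination.stream))

  const offer = await pcOut.createOffer()
  await pcOut.setLocalDescription(offer)
  await pcIn.setRemoteDescription(offer)
  const answer = await pcIn.createAnswer()
  await pcIn.setLocalDescription(answer)
  await pcOut.setRemoteDescription(answer)

  return destination
}

With something like this, playMetronome() would connect its OscillatorNode to the returned destination instead of audioContext.destination; whether cancellation actually kicks in still depends on the browser.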

Related

webRTC recording tab screen with tab audio not working

I used WebRTC, Node.js, and React to build a fully functional video-conferencing app that supports up to 4 users and uses a mesh architecture. After that, I added a record-meeting feature. However, it only records my own microphone audio; the remote streams' audio is not recorded by the MediaRecorder. Why is that?
Here is a simple code snippet that shows how I get the tab screen stream:
const toBeRecordedStream = await navigator.mediaDevices.getDisplayMedia({
  video: {
    width: 1920,
    height: 1080,
    frameRate: {
      max: 30,
      ideal: 24,
    },
  },
  audio: true,
});
After receiving the tab stream, I used an AudioContext to combine the tab audio with my microphone audio and record it:
const vp9Codec = "video/webm;codecs=vp9,opus";
const vp9Options = {
  mimeType: vp9Codec,
};
const audioCtx = new AudioContext();
const outputStream = new MediaStream();
const micStream = audioCtx.createMediaStreamSource(localStream);
const screenAudio = audioCtx.createMediaStreamSource(screenStream);
const destination = audioCtx.createMediaStreamDestination();
screenAudio.connect(destination);
micStream.connect(destination);
outputStream.addTrack(screenStream.getVideoTracks()[0]);
outputStream.addTrack(destination.stream.getAudioTracks()[0]);
if (MediaRecorder.isTypeSupported(vp9Codec)) {
  mediaRecorder = new MediaRecorder(outputStream, vp9Options);
} else {
  mediaRecorder = new MediaRecorder(outputStream);
}
mediaRecorder.ondataavailable = handelDataAvailable;
mediaRecorder.start();
Four video and audio streams are visible on the screen, but only my voice and the tab's video are recorded.
I am working with the Chrome browser, because I am aware that Firefox does not support tab audio capture, while Chrome and Edge do.
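For what it's worth, only sources that are explicitly connected to the MediaStreamAudioDestinationNode end up in the recorded audio track; a common approach is to also mix each remote participant's stream into the same destination. A hedged sketch, assuming remoteStreams is an array holding the incoming MediaStreams from the mesh (that variable is not in the question's code):

remoteStreams.forEach((remoteStream) => {
  if (remoteStream.getAudioTracks().length > 0) {
    // Route the remote participant's audio into the recording destination.
    const remoteSource = audioCtx.createMediaStreamSource(remoteStream);
    remoteSource.connect(destination);
  }
});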

Audio capture with getDisplayMedia does not work with Chrome on my MacBook

Audio capture with getDisplayMedia does not work with Chrome on my MacBook: when Chrome asks the user to share the screen, there is no option to share audio, and only video is recorded into the MediaStream. On my Windows computer it is fully supported in the Chrome browser, both video and audio are captured, and the user is asked whether or not to share audio. Is this a browser-support issue or a problem in my code? I am using the latest version of Chrome on the MacBook.
Below is my code:
navigator.mediaDevices
  .getDisplayMedia({
    video: true,
    audio: true
  })
  .then((Mediastream) => {
    vm.$set(vm, 'isRecording', true);
    if (vm.isInitiator || vm.isConnector) {
      if (localStream) {
        let localAudio = new MediaStream();
        localAudio.addTrack(localStream.getAudioTracks()[0]);
        if (Mediastream.getAudioTracks().length != 0) {
          let systemAudio = new MediaStream();
          systemAudio.addTrack(Mediastream.getAudioTracks()[0]);
          let audioContext = new AudioContext();
          let audioIn_01 = audioContext.createMediaStreamSource(localAudio);
          let audioIn_02 = audioContext.createMediaStreamSource(systemAudio);
          let dest = audioContext.createMediaStreamDestination();
          audioIn_01.connect(dest);
          audioIn_02.connect(dest);
          let finalAudioStream = dest.stream;
          Mediastream.removeTrack(Mediastream.getAudioTracks()[0]);
          Mediastream.addTrack(finalAudioStream.getAudioTracks()[0]);
        } else {
          Mediastream.addTrack(localStream.getAudioTracks()[0]);
        }
      }
    }
    this.createRecorder(Mediastream);
  })
  .catch((err) => {
    this.getUserMediaError(err);
  });
Unfortunately this is a limitation of Chrome on macOS. According to caniuse.com:
On Windows and Chrome OS the entire system audio can be captured, but on Linux and macOS only the audio of a tab can be captured.
https://caniuse.com/mdn-api_mediadevices_getdisplaymedia_audio_capture_support
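If the behaviour needs to be handled at runtime rather than per platform, one option (a sketch, not part of the answer above, to be placed inside an async function) is to check whether the returned display stream actually carries an audio track, much like the question's else branch already does, and inform the user:

const displayStream = await navigator.mediaDevices.getDisplayMedia({
  video: true,
  audio: true
});
if (displayStream.getAudioTracks().length === 0) {
  // e.g. Chrome on macOS when a whole screen (not a tab) is shared
  console.warn('No system/tab audio track was granted; recording microphone audio only.');
}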

HTML5 video element play() request stays pending forever (on Chrome on mobile) when toggling between front/rear camera

I'm developing an app where users can capture a photo using the front/rear camera. It works perfectly, but when toggling between cameras, var playPromise = videoStream.play() gets stuck in a pending state. Sometimes the promise resolves and the camera works, sometimes not.
This issue occurs only in the Chrome browser, not in Mozilla Firefox.
try {
  stopWebCam(); // stop the media stream when toggling the camera
  stream = await navigator.mediaDevices.getUserMedia({ video: true });
  /* use the stream */
  let videoStream = document.getElementById('captureCandidateId');
  videoStream.srcObject = stream;
  // videoStream.play();
  var playPromise = videoStream.play();
  if (playPromise !== undefined) {
    playPromise.then(_ => {
      // Automatic playback started!
      // Show playing UI.
    })
    .catch(error => {
      // Auto-play was prevented
      // Show paused UI.
    });
  }
} catch (err) {
  /* handle the error */
  console.log(err.name + ": " + err.message);
}

let stopWebCam = function (pictureType) {
  setTimeout(() => {
    let videoStream = document.getElementById('captureCandidateId');
    const stream = videoStream.srcObject;
    if (stream && stream.getTracks) {
      const tracks = stream.getTracks();
      tracks.forEach(function (track) {
        track.stop();
      });
    }
    videoStream.srcObject = null;
  }, 0)
}
Here, I drafted a piece of code for you; this is a much simpler and smaller approach than what you are trying to do. I am just taking the stream from the video element and drawing it to a canvas. The image can be downloaded by right-clicking.
NOTE: the example does not work in Stack Overflow snippets.
<video id="player" controls autoplay></video>
<button id="capture">Capture</button>
<canvas id="canvas" width=320 height=240></canvas>
<script>
  const player = document.getElementById('player');
  const canvas = document.getElementById('canvas');
  const context = canvas.getContext('2d');
  const captureButton = document.getElementById('capture');
  const constraints = {
    video: true,
  };
  captureButton.addEventListener('click', () => {
    // Draw the video frame to the canvas.
    context.drawImage(player, 0, 0, canvas.width, canvas.height);
  });
  // Attach the video stream to the video element and autoplay.
  navigator.mediaDevices.getUserMedia(constraints)
    .then((stream) => {
      player.srcObject = stream;
    });
</script>
If you want, you can also make some edits according to your needs, like:
Choose which camera to use (see the sketch below)
Hide the video stream
Add an easier method to download the photo to your device (also sketched below)
You can also add functionality to upload the photo straight to a server, if you have one
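As an illustration of the first and third points above, here is a hedged sketch (not part of the original answer; the deviceId handling and file name are assumptions) that opens a specific camera and saves the canvas contents as a file:

async function captureFromCamera(deviceId) {
  // Open a specific camera instead of the default one.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: deviceId } },
  });
  player.srcObject = stream; // `player` is the <video> element from the snippet above
}

function downloadPhoto() {
  // `canvas` is the <canvas> element from the snippet above.
  canvas.toBlob((blob) => {
    const link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = 'photo.png';
    link.click();
    URL.revokeObjectURL(link.href);
  }, 'image/png');
}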

Really strange playback using the MediaRecorder API: different parts are lost on different devices

I'm working on an app running in a browser where the user can record his voice. Using the MediaRecorder API I can get the recording and POST it to my server. The problem comes in if the user pauses and restarts the recording. When that happens, the first time they get it from the server it plays fine, but when it is replayed only the last segment is played. If I reload the web page and try again, once again the first time I get the whole recording, subsequent times it is just the last segment.
It uses the Opus codec, so I tried playing it in VLC. There, I get only the 1st segment, never any of the subsequent ones. Finally, I tried converting to MP3, and when I do that the MP3 has the whole recording! So the whole thing is being saved, but somehow the segments seem to be stored separately in the file and mess up replay. In each case only one segment of the blob plays. I have to say I'm at something of a loss as to how to attack this. The time shown by the player is the time of the 1st segment, whether it plays the first, the second, or the whole thing. Any thoughts?
Edited to provide a working example.
How to test: put this code where it can be served and open it (I use a Chrome-based browser). Click on Start to start recording, then Pause, then Start again to continue recording, then Pause again to stop. Then click on setup to load the recording, and listen to it. The first time I listen I get the whole recording, though the playback timer only shows the first segment's length. Subsequent playbacks only play the last segment. Pressing the setup button again will cause it to play the whole recording, but again only the first time.
<!doctype html>
<html>
<head>
<script>
  var audio = null;

  function init() {
    audio = new Recording();
    audio.prepareAudio();
  }

  class Recording {
    recordButton;
    pauseButton;
    recorder;
    mimeType;
    audioChunks;

    constructor() {
      this.mimeType = this.recorder = null;
      this.recordButton = this.pauseButton = null;
      this.audioChunks = [];
    }

    getRecorder() {
      return new Promise(async resolve => {
        const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
        const mediaRecorder = new MediaRecorder(stream);
        var _this = this;
        mediaRecorder.addEventListener('dataavailable', event => {
          _this.audioChunks.push(event.data);
        });
        const start = () => {
          mediaRecorder.start();
        };
        const pause = () =>
          new Promise(resolve => {
            mediaRecorder.addEventListener('stop', () => {
              resolve(_this.audioChunks);
            });
            mediaRecorder.stop();
            _this.mimeType = mediaRecorder.mimeType;
          });
        resolve({ start, pause });
      });
    }

    get codec() {
      if (!this.mimeType) return null;
      let split = this.mimeType.split('=');
      return (split.length > 1) ? split[1] : split;
    }

    prepareAudio() {
      this.recordButton = document.querySelector('#record');
      this.pauseButton = document.querySelector('#pause');
      var _this = this;
      this.recordButton.addEventListener('click', async () => {
        _this.recordButton.setAttribute('disabled', true);
        _this.pauseButton.removeAttribute('disabled');
        if (!_this.recorder) {
          _this.recorder = await this.getRecorder();
        }
        _this.recorder.start();
      });
      this.pauseButton.addEventListener('click', async () => {
        _this.recordButton.removeAttribute('disabled');
        _this.pauseButton.setAttribute('disabled', true);
        await _this.recorder.pause();
      });
    }

    data() {
      return new Promise((resolve, reject) => {
        const reader = new FileReader();
        let codec = this.audioChunks[0].type;
        let audioBlob = new Blob(this.audioChunks, { type: this.codec || codec });
        reader.readAsDataURL(audioBlob);
        reader.onload = () => resolve({ codec: codec, data: reader.result.split(',')[1] });
      });
    }

    blobUrl() {
      let codec = this.audioChunks[0].type;
      let audioBlob = new Blob(this.audioChunks, { type: this.codec || codec });
      let blobUrl = URL.createObjectURL(audioBlob);
      let player = document.getElementById('play-blob');
      player.src = blobUrl;
      player.disabled = false;
      return blobUrl;
    }
  }
</script>
</head>
<body onload="init()">
<div>
  <button id="record" class="button fill">Start</button>
  <br />
  <button id="pause" class="button fill">Pause</button>
  <br />
  <button class="button fill" onclick="audio.blobUrl()">setup</button>
</div>
<audio id="play-blob" controls></audio>
</body>
</html>
This isn't a complete answer, but I'm understanding more of what is happening. The audio player, at least in the versions of Chrome and Firefox I am using (up to date), does not seem to handle this kind of audio properly. When the source is loaded it does not know the length (of course). When the blob is created from multiple segments (new Blob([segment1, segment2, ...])), the first time the duration is reported as infinite and the whole clip plays. On subsequent plays the clip duration is reported as the length of the longest segment and only the last segment is played. The audio object also gives the duration as the length of the longest segment.
I've also solved my immediate problem by replacing the audio player with howler, which plays the entire clip repeatedly, as I expected.
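As an aside (not part of the answer above), MediaRecorder has native pause() and resume() methods; using them instead of stop()/start() keeps everything in one continuously encoded recording rather than concatenating independently encoded segments into a single Blob. A hedged sketch of that approach:

async function setupRecorder() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.addEventListener('dataavailable', (e) => chunks.push(e.data));
  recorder.start(); // begin a single continuous recording
  return {
    pause: () => recorder.pause(),   // suspend without finalizing the file
    resume: () => recorder.resume(), // continue the same recording
    stop: () => recorder.stop(),     // fires the final dataavailable, then 'stop'
    chunks,
  };
}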

Setting WebRTC stream in HTML5 video element sometimes shows no image and creates weird artifacts

I am making a small chatroulette clone where I have implemented a switch-webcam feature. Most of the time it works, but sometimes I get no image at all, along with weird artifacts over the whole page, not only the video element.
Image: https://i.gyazo.com/87d089807c17314ff79cda2e8eaea454.png
Video: https://drive.google.com/file/d/0B3h9E32u9L9aU0lZZ0l1bUd5bDQ/view
Demo: https://codepen.io/grymer/pen/gxWzvw
Sometimes when I change the camera, i.e. re-set the video.srcObject, I get no image at all and diagonal black lines become visible all over the page.
This looks like a bug in Chrome. I've put together a small sample for anyone with two webcams to test (I have one real webcam and one virtual).
EDIT: Updated demo. I'm logging all events on the video element.
When everything is working as expected, I receive these events:
event: emptied
event: loadstart
event: durationchange
event: loadedmetadata
event: loadeddata
event: canplay
event: canplaythrough
When I get no image, only these events have been executed:
event: emptied
event: loadstart
So why does it stop at loadstart?
Here is the demo source:
Stack Overflow code snippets apparently do not work with WebRTC; go to the CodePen link.
let deviceIds, currentDeviceId,
    videoEl = document.getElementById('video'),
    buttonEl = document.getElementById('button');

buttonEl.addEventListener('click', () => getUserMedia())

navigator.mediaDevices.enumerateDevices().then(devices => {
  // get all video inputs
  deviceIds = devices.filter(device => device.kind === 'videoinput').map(device => device.deviceId);
  currentDeviceId = deviceIds.length > 0 ? 0 : -1;
}).then(() => {
  getUserMedia();
});

function getUserMedia() {
  const constraints = {
    audio: true,
    video: {
      deviceId: deviceIds[currentDeviceId]
    }
  };
  currentDeviceId = currentDeviceId === 0 ? 1 : 0;
  navigator.mediaDevices.getUserMedia(constraints).then(stream => {
    videoEl.srcObject = null;
    videoEl.srcObject = stream;
  });
}
body {
  background: black;
  color: white;
}
<video id="video" width=200 height=200></video>
<button id="button">change</button>
<p>You need two webcams for this demo.</p>
Add a .catch(e => console.error(e)) to the getUserMedia call. Possibly the device cannot be opened yet, which, if I recall correctly, results in a TrackStartError.
You might also want to stop all tracks of the current video object, i.e. videoEl.srcObject.getTracks().forEach(t => t.stop()), to avoid keeping the device open more often than necessary.
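Putting both suggestions together, here is a hedged sketch of how the demo's getUserMedia() function could look (the error handling is illustrative, not taken from the CodePen):

function getUserMedia() {
  const constraints = {
    audio: true,
    video: {
      deviceId: deviceIds[currentDeviceId]
    }
  };
  currentDeviceId = currentDeviceId === 0 ? 1 : 0;

  // Release the currently open camera before requesting the next one.
  if (videoEl.srcObject) {
    videoEl.srcObject.getTracks().forEach(t => t.stop());
    videoEl.srcObject = null;
  }

  navigator.mediaDevices.getUserMedia(constraints)
    .then(stream => {
      videoEl.srcObject = stream;
    })
    .catch(e => console.error(e)); // e.g. TrackStartError if the device is still busy
}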
