I have a multi-peer WebRTC stream using simple-peer and I'm playing the received stream like this:
peer.on("stream", data => {
let audio = document.createElement("audio") as HTMLAudioElement;
audio.src = URL.createObjectURL(data);
audio.play();
});
This works fine on desktop, but in Chrome on Android there is no sound:
Unhandled Promise rejection: play() can only be initiated by a user gesture.
I couldn't find any documentation on how to correctly play the received stream. Do I really have to show a button when the stream is ready?
I have also tried to work around this issue by playing the stream from getUserMedia, but this only worked as long as I didn't set audioTag.muted = true, which is no solution either because the unmuted local playback creates a feedback loop.
let audioTag = document.createElement("audio") as HTMLAudioElement;
audioTag.autoplay = true;
navigator.getUserMedia({ video: false, audio: true }, async stream => {
  audioTag.src = window.URL.createObjectURL(stream);
  audioTag.muted = true;
  // ...
}, error => console.error(error));
Sites like http://talky.io seem to have found a way around this problem though, so what do I have to do?
Check out: https://www.chromium.org/audio-video/autoplay
var promise = document.querySelector('video').play();
if (promise !== undefined) {
  promise.then(_ => {
    // Autoplay started!
  }).catch(error => {
    // Autoplay was prevented.
    // Show a "Play" button so that user can start playback.
  });
}
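Applied to the simple-peer snippet above, a minimal sketch (assuming the same peer object; showPlayButton is a hypothetical helper that renders a button and runs its callback on click) would attach the remote stream via srcObject, which has replaced URL.createObjectURL for MediaStreams, and fall back to a visible button only when the play() promise rejects:
peer.on("stream", remoteStream => {
  const audio = document.createElement("audio");
  // srcObject is the current way to attach a MediaStream;
  // URL.createObjectURL(stream) is deprecated for this purpose.
  audio.srcObject = remoteStream;
  document.body.appendChild(audio);

  const promise = audio.play();
  if (promise !== undefined) {
    promise.catch(() => {
      // Autoplay was blocked: ask for a user gesture and retry inside it.
      // showPlayButton is a hypothetical helper, not part of simple-peer.
      showPlayButton(() => audio.play());
    });
  }
});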
I am trying to create a web application with some video chat functionality, but trying to get it to work on mobile (specifically, Chrome for iOS) is giving me fits.
What I would like to do is have users be able to join a game, and join a team within that game. There are two tabs on the page for players - a "Team" tab and a "Game" tab. When the player selects the game tab, they may talk to all participants in the entire game (e.g. to ask the host/moderator a question). When the team tab is selected, the player's stream to the game is muted, and only the player's team can hear them talk. As a result, I believe I need two MediaStream objects for each player - one to stream to the game, and one to stream to the player's team - this way, I can mute one while keeping the other unmuted.
There is an iOS quirk where you can only call getUserMedia() once, so I need to clone the stream using MediaStream.clone(). addVideoStream() is a function that just adds the video to the appropriate grid of videos, and it appears to work properly.
The problem is - when I use my iPhone 12 to connect to the game, I can see my video just fine, but when I click over to the "game" tab and look at the second stream, the stream works for a second, and then freezes. The weird thing is, if I open a new tab in Chrome, and then go back to the game tab, both videos seem to run smoothly.
Has anyone ever tried something similar, and figured out why this behavior occurs?
const myPeer = new Peer(undefined);

myPeer.on('open', (userId) => {
  myUserId = userId;
  console.log(`UserId: ${myUserId}`);
  socket.emit('set-peer-id', {
    id: userId,
  });
});

const myVideo = document.createElement('video');
myVideo.setAttribute('playsinline', true);
myVideo.muted = true;

const myTeamVideo = document.createElement('video');
myTeamVideo.setAttribute('playsinline', true);
myTeamVideo.muted = true;

const myStream =
  // (navigator.mediaDevices ? navigator.mediaDevices.getUserMedia : undefined) ||
  navigator.mediaDevices ||
  navigator.webkitGetUserMedia ||
  navigator.mozGetUserMedia;

let myVideoStream;
let myTeamStream;

if (myStream) {
  myStream
    .getUserMedia({
      video: true,
      audio: true,
    })
    .then((stream) => {
      myVideoStream = stream;
      myTeamStream = stream.clone();
      addVideoStream(myTeamVideo, myTeamStream, myUserId, teamVideoGrid);
      addVideoStream(myVideo, myVideoStream, myUserId, videoGrid);

      myPeer.on('call', (call) => {
        call.answer(stream);
        const video = document.createElement('video');
        video.setAttribute('playsinline', true);
        call.on('stream', (userVideoStream) => {
          const teammate = teammates.find((t) => {
            return t.peerId === call.peer;
          });
          if (teammate) {
            addVideoStream(
              video,
              userVideoStream,
              call.peer,
              teamVideoGrid,
              teammate.name
            );
          } else {
            addVideoStream(video, userVideoStream, call.peer, videoGrid);
          }
        });
        call.on('close', () => {
          console.log(`Call with ${call.peer} closed`);
        });
      });

      socket.on('player-joined', (data) => {
        addMessage({
          name: 'System',
          isHost: false,
          message: `${data.name} has joined the game.`,
        });
        if (data.id !== myUserId) {
          if (data.teamId !== teamId) {
            connectToNewUser(data.peerId, myVideoStream, videoGrid, data.name);
          } else {
            connectToNewUser(
              data.peerId,
              myTeamStream,
              teamVideoGrid,
              data.name
            );
          }
        }
      });
    });
}
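As a side note on the mute-per-tab behaviour described above (a sketch, not part of the question's code; setActiveTab and when it gets called are assumptions), one way to silence the outgoing game stream while the Team tab is active is to toggle the enabled flag on the audio tracks of the two cloned streams:
// Hypothetical tab-switch handler; myVideoStream and myTeamStream
// are the two streams created in the snippet above.
function setActiveTab(tab) {
  const onTeamTab = tab === 'team';
  // A disabled track stays attached to its peer connections but
  // transmits silence, so no renegotiation is needed.
  myVideoStream.getAudioTracks().forEach((t) => (t.enabled = !onTeamTab));
  // Whether the team stream stays audible on the Game tab is a game-rules
  // decision; here it is simply left enabled.
  myTeamStream.getAudioTracks().forEach((t) => (t.enabled = true));
}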
How do you start and stop an audio stream, so that you can optionally start it again, in JavaScript?
To start the stream, I'm using:
running = false;

function handleAudioStream(stream) {
  let audioCtx = new AudioContext();
  let source = audioCtx.createMediaStreamSource(stream);
  let processor = audioCtx.createScriptProcessor(1024, 1, 1);

  source.connect(processor);
  processor.connect(audioCtx.destination);

  processor.onaudioprocess = function(event) {
    console.log('processing audio');
    if (!running) {
      stream.getTracks().forEach(function(track) {
        if (track.readyState == 'live' && track.kind === 'audio') {
          track.stop();
        }
      });
      return;
    }
    var audioData = event.inputBuffer.getChannelData(0);
    do_stuff(audioData);
  };

  processor.connect(audioCtx.destination);
}

function start_audio() {
  running = true;
  navigator.mediaDevices.getUserMedia({
    audio: true,
    video: false
  }).then(handleAudioStream);
}

function stop_audio() {
  running = false;
}
As recommended in other questions, to stop the stream I'm using a global flag that triggers calling stop() on each track from inside the audio-processing callback.
However, this doesn't seem to work very well. This does stop audio data from being available, but the processor.onaudioprocess callback continues to get called, consuming a massive amount of CPU.
Also, if I run start_audio() again, it doesn't re-start the audio. The browser just seems to ignore it and the audio context never re-initializes correctly.
What am I doing wrong? How do I cleanly stop an audio stream so that I can later re-start it?
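Not part of the original question, but one possible shape of a fix (a sketch, assuming the nodes created in handleAudioStream are kept in outer-scope variables, which the code above does not do) is to tear the graph down explicitly instead of relying on the flag inside onaudioprocess:
let audioCtx = null;
let source = null;
let processor = null;
let currentStream = null;

function handleAudioStream(stream) {
  currentStream = stream;
  audioCtx = new AudioContext();
  source = audioCtx.createMediaStreamSource(stream);
  processor = audioCtx.createScriptProcessor(1024, 1, 1);
  processor.onaudioprocess = (event) => {
    do_stuff(event.inputBuffer.getChannelData(0));
  };
  source.connect(processor);
  processor.connect(audioCtx.destination);
}

function start_audio() {
  navigator.mediaDevices
    .getUserMedia({ audio: true, video: false })
    .then(handleAudioStream);
}

function stop_audio() {
  if (!audioCtx) return;
  // Detach the callback and the graph so onaudioprocess stops firing.
  processor.onaudioprocess = null;
  source.disconnect();
  processor.disconnect();
  // Stop the capture tracks to release the microphone.
  currentStream.getTracks().forEach((t) => t.stop());
  // Close the context; the next start_audio() builds a fresh one.
  audioCtx.close();
  audioCtx = source = processor = currentStream = null;
}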
I am trying to write a small library for convenient manipulations with audio. I know about the autoplay policy for media elements, and I play audio after a user interaction:
const contextClass = window.AudioContext || window.webkitAudioContext;
const context = this.audioContext = new contextClass();

if (context.state === 'suspended') {
  const clickCb = () => {
    this.playSoundsAfterInteraction();
    window.removeEventListener('touchend', clickCb);
    this.usingAudios.forEach((audio) => {
      if (audio.playAfterInteraction) {
        const promise = audio.play();
        if (promise !== undefined) {
          promise.then(_ => {
          }).catch(error => {
            // If playing isn't allowed
            console.log(error);
          });
        }
      }
    });
  };
  window.addEventListener('touchend', clickCb);
}
Everything works fine in Chrome on Android and in desktop browsers, but on mobile Safari the promise rejects with this error:
The request is not allowed by the user agent or the platform in the current context
I have tried creating the audios after an interaction and changing their "src" property. In every case I get this error.
I just create audio in js:
const audio = new Audio(base64);
add it to an array and try to play it. But nothing...
I have also tried creating and playing the audio a few seconds after the interaction - nothing.
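One pattern commonly used for mobile Safari (a sketch, not from the original post; the helper name and the choice of touchend are assumptions) is to "unlock" each audio element synchronously inside the gesture handler, before any asynchronous work, and to resume the AudioContext in the same handler:
function unlockAudioOnGesture(context, audios) {
  const unlock = () => {
    // Both calls must happen synchronously in the gesture's call stack.
    if (context.state === 'suspended') {
      context.resume();
    }
    audios.forEach((audio) => {
      // Playing (and immediately pausing) inside the gesture marks the
      // element as user-activated, so later audio.play() calls from code
      // are allowed on this element.
      const promise = audio.play();
      if (promise !== undefined) {
        promise.then(() => {
          audio.pause();
          audio.currentTime = 0;
        }).catch((error) => console.log(error));
      }
    });
    window.removeEventListener('touchend', unlock);
  };
  window.addEventListener('touchend', unlock);
}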
Looking for experience working with media devices:
I'm working on recording audio from a microphone source to a cache and playing it back, in Firefox & Chrome using HTML5.
This is what I have so far:
var constraints = {audio: true, video: false};
var promise = navigator.mediaDevices.getUserMedia(constraints);
I've been checking the official MDN documentation on getUserMedia,
but found nothing about storing the audio captured with these constraints in a cache.
No such question has been asked previously on Stack Overflow; I'm wondering if it's possible.
Thank you.
You can simply use the MediaRecorder API for this task.
In order to record only the audio from your video+audio gUM stream, you will need to create a new MediaStream from the gUM stream's audio track:
// using async for brevity
async function doit() {
  // first request both mic and camera
  const gUMStream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
  // create a new MediaStream with only the audioTrack
  const audioStream = new MediaStream(gUMStream.getAudioTracks());
  // to save recorded data
  const chunks = [];
  const recorder = new MediaRecorder(audioStream);
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.start();
  // when user decides to stop
  stop_btn.onclick = e => {
    recorder.stop();
    // kill all tracks to free the devices
    gUMStream.getTracks().forEach(t => t.stop());
    audioStream.getTracks().forEach(t => t.stop());
  };
  // export all the saved data as one Blob
  recorder.onstop = e => exportMedia(new Blob(chunks));
  // play current gUM stream
  vid.srcObject = gUMStream;
  stop_btn.disabled = false;
}

function exportMedia(blob) {
  // here blob is your recorded audio file, you can do whatever you want with it
  const aud = new Audio(URL.createObjectURL(blob));
  aud.controls = true;
  document.body.appendChild(aud);
  document.body.removeChild(vid);
}

doit()
  .then(e => console.log("recording"))
  .catch(e => {
    console.error(e);
    console.log('you may want to try from jsfiddle: https://jsfiddle.net/5s2zabb2/');
  });
<video id="vid" controls autoplay></video>
<button id="stop_btn" disabled>stop</button>
And as a fiddle since stacksnippets don't work very well with gUM...
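Since the question mentions keeping the recording in a cache, one possible follow-up (a sketch, not part of the answer above; the cache name and URL key are made up) is to store the exported Blob with the Cache API so it can be retrieved and played later:
// Hypothetical helpers: the cache name 'recordings-v1' and the key
// '/recordings/latest' are arbitrary choices for illustration.
// The Cache API is only available in secure contexts (HTTPS).
async function cacheRecording(blob) {
  const cache = await caches.open('recordings-v1');
  // The Cache API stores Request/Response pairs, so wrap the Blob in a Response.
  await cache.put('/recordings/latest', new Response(blob));
}

async function playCachedRecording() {
  const cache = await caches.open('recordings-v1');
  const response = await cache.match('/recordings/latest');
  if (!response) return;
  const blob = await response.blob();
  const audio = new Audio(URL.createObjectURL(blob));
  audio.controls = true;
  document.body.appendChild(audio);
}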
I'm trying to create an audio stream from the browser and send it to a server.
Here is the code:
let recording = false;
let localStream = null;

const session = {
  audio: true,
  video: false
};

function start () {
  recording = true;
  navigator.webkitGetUserMedia(session, initializeRecorder, onError);
}

function stop () {
  recording = false;
  localStream.getAudioTracks()[0].stop();
}

function initializeRecorder (stream) {
  localStream = stream;
  const audioContext = window.AudioContext;
  const context = new audioContext();
  const audioInput = context.createMediaStreamSource(localStream);
  const bufferSize = 2048;

  // create a javascript node
  const recorder = context.createScriptProcessor(bufferSize, 1, 1);
  // specify the processing function
  recorder.onaudioprocess = recorderProcess;
  // connect stream to our recorder
  audioInput.connect(recorder);
  // connect our recorder to the previous destination
  recorder.connect(context.destination);
}

function onError (e) {
  console.log('error:', e);
}

function recorderProcess (e) {
  if (!recording) return;
  const left = e.inputBuffer.getChannelData(0);
  // send left to server here (socket.io can do the job). We don't need stereo.
}
When the start function is fired, the samples can be caught in recorderProcess.
When the stop function is fired, the mic icon in the browser disappears, but...
unless I put if (!recording) return at the beginning of recorderProcess, it still processes samples.
Unfortunately that's not a solution at all - the samples are still being received by recorderProcess, and if I fire the start function once more, it will get all samples from the previous stream as well as from the new one.
My question is:
How can I stop/start recording without this issue?
Or, if that's not the best solution,
how can I completely remove the stream in the stop function, so that I can safely initialize it again at any time?
recorder.disconnect() should help.
You might want to consider the new MediaRecorder functionality in Chrome Canary shown at https://webrtc.github.io/samples/src/content/getusermedia/record/ (currently video-only I think) instead of the WebAudio API.
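Building on that suggestion (a sketch, not verbatim from the answer; it assumes the context, audioInput and recorder created in initializeRecorder are kept in outer-scope variables, which the original code does not do), a stop function that fully tears the graph down might look like this:
let context = null;
let audioInput = null;
let recorder = null;

function initializeRecorder(stream) {
  localStream = stream;
  context = new (window.AudioContext || window.webkitAudioContext)();
  audioInput = context.createMediaStreamSource(localStream);
  recorder = context.createScriptProcessor(2048, 1, 1);
  recorder.onaudioprocess = recorderProcess;
  audioInput.connect(recorder);
  recorder.connect(context.destination);
}

function stop() {
  recording = false;
  // Stop the capture tracks so the mic indicator goes away.
  localStream.getAudioTracks().forEach((t) => t.stop());
  // Disconnect the graph so onaudioprocess stops firing.
  recorder.onaudioprocess = null;
  recorder.disconnect();
  audioInput.disconnect();
  // Close the context; a later start() rebuilds everything from scratch.
  context.close();
  context = audioInput = recorder = null;
}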