I am trying to create a web application with some video chat functionality, but trying to get it to work on mobile (specifically, Chrome for iOS) is giving me fits.
What I would like to do is have users be able to join a game, and join a team within that game. There are two tabs on the page for players - a "Team" tab and a "Game" tab. When the player selects the game tab, they may talk to all participants in the entire game (e.g. to ask the host/moderator a question). When the team tab is selected, the player's stream to the game is muted, and only the player's team can hear them talk. As a result, I believe I need two MediaStream objects for each player - one to stream to the game, and one to stream to the player's team - this way, I can mute one while keeping the other unmuted.
There is an iOS quirk where you can only call getUserMedia() once, so I need to clone the stream using MediaStream.clone(). addVideoStream() is a function that just adds the video to the appropriate grid of videos, and it appears to work properly.
The problem is - when I use my iPhone 12 to connect to the game, I can see my video just fine, but when I click over to the "game" tab and look at the second stream, the stream works for a second, and then freezes. The weird thing is, if I open a new tab in Chrome, and then go back to the game tab, both videos seem to run smoothly.
Has anyone ever tried something similar, and figured out why this behavior occurs?
const myPeer = new Peer(undefined);
myPeer.on('open', (userId) => {
  myUserId = userId;
  console.log(`UserId: ${myUserId}`);
  socket.emit('set-peer-id', {
    id: userId,
  });
});

const myVideo = document.createElement('video');
myVideo.setAttribute('playsinline', true);
myVideo.muted = true;

const myTeamVideo = document.createElement('video');
myTeamVideo.setAttribute('playsinline', true);
myTeamVideo.muted = true;

const myStream =
  // (navigator.mediaDevices ? navigator.mediaDevices.getUserMedia : undefined) ||
  navigator.mediaDevices ||
  navigator.webkitGetUserMedia ||
  navigator.mozGetUserMedia;

let myVideoStream;
let myTeamStream;

if (myStream) {
  myStream
    .getUserMedia({
      video: true,
      audio: true,
    })
    .then((stream) => {
      myVideoStream = stream;
      myTeamStream = stream.clone();
      addVideoStream(myTeamVideo, myTeamStream, myUserId, teamVideoGrid);
      addVideoStream(myVideo, myVideoStream, myUserId, videoGrid);

      myPeer.on('call', (call) => {
        call.answer(stream);
        const video = document.createElement('video');
        video.setAttribute('playsinline', true);
        call.on('stream', (userVideoStream) => {
          const teammate = teammates.find((t) => {
            return t.peerId === call.peer;
          });
          if (teammate) {
            addVideoStream(
              video,
              userVideoStream,
              call.peer,
              teamVideoGrid,
              teammate.name
            );
          } else {
            addVideoStream(video, userVideoStream, call.peer, videoGrid);
          }
        });
        call.on('close', () => {
          console.log(`Call with ${call.peer} closed`);
        });
      });

      socket.on('player-joined', (data) => {
        addMessage({
          name: 'System',
          isHost: false,
          message: `${data.name} has joined the game.`,
        });
        if (data.id !== myUserId) {
          if (data.teamId !== teamId) {
            connectToNewUser(data.peerId, myVideoStream, videoGrid, data.name);
          } else {
            connectToNewUser(
              data.peerId,
              myTeamStream,
              teamVideoGrid,
              data.name
            );
          }
        }
      });
    });
}
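For completeness, the tab switching described above is meant to mute one outgoing stream while leaving the other live. A rough sketch of what I have in mind (the switchToTeamTab / switchToGameTab names are just illustrative, not in my actual code yet):

// Illustrative only: toggle which outgoing stream is audible by enabling/disabling
// its audio tracks, so the "game" stream is muted while the "team" tab is active.
function switchToTeamTab() {
  myVideoStream.getAudioTracks().forEach((t) => (t.enabled = false)); // mute game stream
  myTeamStream.getAudioTracks().forEach((t) => (t.enabled = true));   // unmute team stream
}

function switchToGameTab() {
  myTeamStream.getAudioTracks().forEach((t) => (t.enabled = false));
  myVideoStream.getAudioTracks().forEach((t) => (t.enabled = true));
}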
I ask the user for permission to use the camera and microphone:
await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
And in Firefox, I get a permission prompt that lets the user pick which camera and microphone to share.
Once the user has given permission, how can I tell which camera and microphone were selected? The return value of getUserMedia doesn't provide much info.
Once gUM has given you a stream object, do something like this:
async function getAudioDeviceLabel(stream) {
  let audioDeviceLabel = 'unknown'
  const tracks = stream.getAudioTracks()
  if (tracks && tracks.length >= 1 && tracks[0]) {
    const settings = tracks[0].getSettings()
    const chosenDeviceId = settings.deviceId
    if (chosenDeviceId) {
      let deviceList = await navigator.mediaDevices.enumerateDevices()
      deviceList = deviceList.filter(device => device.deviceId === chosenDeviceId)
      if (deviceList && deviceList.length >= 1) audioDeviceLabel = deviceList[0].label
    }
  }
  return audioDeviceLabel
}
This gets the deviceId of the audio track of your stream, from its settings. It then looks at the list of enumerated devices to retrieve the label associated with the deviceId.
It is kind of a pain in the neck to get this information.
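For example, a sketch of calling it right after gUM resolves (the wrapper name is made up). Note that enumerateDevices() only exposes device labels once permission has been granted, which is why this has to run after gUM:

// a sketch: request the mic, then look up which device was actually picked
async function logChosenMic() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  const label = await getAudioDeviceLabel(stream)
  console.log(`Microphone in use: ${label}`)
}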
I am currently working on a project and need to be able to make a recording of my screen and save it locally to my computer.
The recording is being saved as a webm, but every one of them has a really bad framerate, usually around 10-15 fps. Is there a way to increase the framerate for recording?
I am able to increase the quality of the recording by playing around with the MediaRecorder options and codecs, but this doesn't seem to affect the framerate I am getting at all.
Here is the code I am using to make my recording:
const options = {
  mimeType: 'video/webm; codecs="vp9.00.41.8.00.01"',
  videoBitsPerSecond: 800 * Mbps,
  videoMaximizeFrameRate: true,
};
mediaRecorder = new MediaRecorder(stream, options);
mediaRecorder.ondataavailable = handleDataAvailable;
mediaRecorder.onstop = handleStop;

startBtn.onclick = e => {
  mediaRecorder.start();
  startBtn.innerHTML = 'Recording';
};

stopBtn.onclick = e => {
  mediaRecorder.stop();
  startBtn.innerHTML = 'Start';
};

function handleDataAvailable(e) {
  recordedChunks.push(e.data);
}

async function handleStop() {
  const blob = new Blob(recordedChunks, {
    type: 'video/webm'
  });
  const buffer = Buffer.from(await blob.arrayBuffer());

  const { filePath } = await dialog.showSaveDialog({
    buttonLabel: 'Save video',
    defaultPath: `vid-${Date.now()}.webm`
  });
  console.log(filePath);

  if (filePath) {
    writeFile(filePath, buffer, () => console.log('video saved successfully'));
  }
}
I have looked through the MDN documentation and haven't found anything about it. I also tried using different codecs with different parameters, but the results are always the same.
The framerate you're getting is typical for any standard screen capture.
The only way to go faster is to utilize the GPU's specific capability to capture and encode. This is out of scope for the web APIs.
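That said, the recorder can only encode the frames the capture track actually delivers, so it can be worth checking what the track reports and asking for a higher rate up front. A minimal sketch, assuming the stream comes from getDisplayMedia (the browser may still cap the real rate):

async function startCapture() {
  // Ask for a higher capture frame rate up front (the browser may ignore or cap this).
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: { frameRate: { ideal: 30, max: 60 } },
    audio: false,
  });

  // Check what the track is actually delivering.
  const [track] = stream.getVideoTracks();
  console.log('capture frame rate:', track.getSettings().frameRate);

  // Tightening the constraint on an existing track is also possible.
  await track.applyConstraints({ frameRate: { ideal: 30 } });

  return stream;
}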
I am trying to write a small library for convenient audio manipulation. I know about the autoplay policy for media elements, and I play audio after a user interaction:
const contextClass = window.AudioContext || window.webkitAudioContext;
const context = this.audioContext = new contextClass();

if (context.state === 'suspended') {
  const clickCb = () => {
    this.playSoundsAfterInteraction();
    window.removeEventListener('touchend', clickCb);
    this.usingAudios.forEach((audio) => {
      if (audio.playAfterInteraction) {
        const promise = audio.play();
        if (promise !== undefined) {
          promise.then(_ => {
          }).catch(error => {
            // If playing isn't allowed
            console.log(error);
          });
        }
      }
    });
  };
  window.addEventListener('touchend', clickCb);
}
On Android Chrome and on desktop browsers everything works fine, but on mobile Safari I get the following error from the promise:
The request is not allowed by the user agent or the platform in the current context
I have tried creating the audio elements after an interaction and changing their "src" property; in every case I get this error.
I just create the audio in JS:
const audio = new Audio(base64);
then add it to an array and try to play it. But nothing.
I also tried creating and playing a few seconds after the interaction; still nothing.
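For what it's worth, a common cause of this NotAllowedError on mobile Safari is that play() (or AudioContext.resume()) runs after an async hop instead of synchronously inside the gesture handler. A minimal sketch, assuming the same context and usingAudios as above:

// Sketch: keep the resume()/play() calls synchronous inside the gesture handler.
const unlock = () => {
  window.removeEventListener('touchend', unlock);

  // Resume the AudioContext directly in the gesture's call stack.
  if (context.state === 'suspended') {
    context.resume();
  }

  // Start each element while still inside the handler; Safari may reject
  // play() calls that happen after an async hop (await / setTimeout).
  this.usingAudios.forEach((audio) => {
    const p = audio.play();
    if (p !== undefined) {
      p.catch(err => console.log(err));
    }
  });
};
window.addEventListener('touchend', unlock);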
I have a multi-peer WebRTC stream using simple-peer and I'm playing the received stream like this:
peer.on("stream", data => {
let audio = document.createElement("audio") as HTMLAudioElement;
audio.src = URL.createObjectURL(data);
audio.play();
});
This works fine on desktop, but on Chrome for Android there is no sound:
Unhandled Promise rejection: play() can only be initiated by a user gesture.
I couldn't find any documentation on how to correctly play the received stream. Do I really have to show a button when the stream is ready?
I have also tried to work around this issue by playing the stream from getUserMedia, but this only worked as long as I didn't set audioTag.muted = true, which is no solution either because leaving it unmuted creates a feedback loop.
let audioTag = document.createElement("audio") as HTMLAudioElement;
audioTag.autoplay = true;

navigator.getUserMedia({ video: false, audio: true }, async stream => {
  audioTag.src = window.URL.createObjectURL(stream);
  audioTag.muted = true;
  // ...
}, error => console.error(error));
Sites like http://talky.io seem to have found a way around this problem though, so what do I have to do?
Check out: https://www.chromium.org/audio-video/autoplay
var promise = document.querySelector('video').play();

if (promise !== undefined) {
  promise.then(_ => {
    // Autoplay started!
  }).catch(error => {
    // Autoplay was prevented.
    // Show a "Play" button so that user can start playback.
  });
}
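Applied to the remote stream from the question, a sketch could look like this (using srcObject rather than createObjectURL for a MediaStream, with a hypothetical #enable-audio button as the fallback):

peer.on("stream", remoteStream => {
  const audio = document.createElement("audio");
  // srcObject is the current way to attach a MediaStream to a media element.
  audio.srcObject = remoteStream;

  const promise = audio.play();
  if (promise !== undefined) {
    promise.catch(() => {
      // Autoplay was blocked: reveal a button and retry inside the click gesture.
      const btn = document.querySelector("#enable-audio"); // hypothetical button
      btn.hidden = false;
      btn.onclick = () => audio.play();
    });
  }
  document.body.appendChild(audio);
});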
Looking for experience working with media devices:
I'm working on recording audio from the microphone to a cache and playing it back, in Firefox & Chrome, using HTML5.
This is what I have so far:
var constraints = {audio: true, video: false};
var promise = navigator.mediaDevices.getUserMedia(constraints);
I've been checking the official MDN documentation on getUserMedia, but found nothing about storing the audio from that stream to a cache.
No such question has been asked previously on Stack Overflow; I'm wondering if it's possible.
Thank you.
You can simply use the MediaRecorder API for this task.
In order to record only the audio from your video+audio gUM stream, you will need to create a new MediaStream from the gUM stream's audio track:
// using async for brevity
async function doit() {
  // first request both mic and camera
  const gUMStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  // create a new MediaStream with only the audioTrack
  const audioStream = new MediaStream(gUMStream.getAudioTracks());

  // to save recorded data
  const chunks = [];
  const recorder = new MediaRecorder(audioStream);
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.start();

  // when user decides to stop
  stop_btn.onclick = e => {
    recorder.stop();
    // kill all tracks to free the devices
    gUMStream.getTracks().forEach(t => t.stop());
    audioStream.getTracks().forEach(t => t.stop());
  };

  // export all the saved data as one Blob
  recorder.onstop = e => exportMedia(new Blob(chunks));

  // play current gUM stream
  vid.srcObject = gUMStream;
  stop_btn.disabled = false;
}

function exportMedia(blob) {
  // here blob is your recorded audio file, you can do whatever you want with it
  const aud = new Audio(URL.createObjectURL(blob));
  aud.controls = true;
  document.body.appendChild(aud);
  document.body.removeChild(vid);
}

doit()
  .then(e => console.log("recording"))
  .catch(e => {
    console.error(e);
    console.log('you may want to try from jsfiddle: https://jsfiddle.net/5s2zabb2/');
  });
<video id="vid" controls autoplay></video>
<button id="stop_btn" disabled>stop</button>
And as a fiddle (see the jsfiddle link in the catch handler above), since Stack Snippets don't work very well with gUM...