I wish to send two video streams (one video stream and one stream captured from the canvas HTML element) using only one RTCPeerConnection.
What I tried to do is addTrack both tracks to the peer connection object before making the offer, but it doesn't work in Firefox (it works in Chrome). The peerConnection.ontrack event fires only once, for the first track added to the peer connection object (even though there are two streams).
I have read about renegotiation, but I am currently adding both tracks to the peer connection before sending the offer so I don't know if I need to do renegotiation. Do I need to?
I have also heard about the interoperability issue in multistreaming between Firefox (Unified Plan) and Chrome (Plan B), so please advise me on what approach I should take now.
I am using adapter.js.
Added code (JavaScript):
function createPeerConnection() {
  peerConnection = new RTCPeerConnection(iceServers);
  peerConnection.onicecandidate = (event) => {
    if (event.candidate) {
      // send to server
    }
  };
  videoStream.getTracks().forEach(track => peerConnection.addTrack(track, videoStream));
  canvasStream.getTracks().forEach(track => peerConnection.addTrack(track, canvasStream));
}
This is how I create the RTCPeerConnection and add the tracks. After this comes creating the offer and sending it to the signaling server... It all works well, it's just that the other end only receives the first track added (in Firefox). If you need those bits I will add them.
This is the ontrack event handler.
peerConnection.ontrack = (event) => {
  console.log(event); // only prints the first track in Firefox, but prints both tracks in Chrome
};
Try enabling Unified Plan support in Chrome. This is still a work in progress and behind a flag (enable "Experimental Web Platform features" on chrome://flags).
Then you need to construct your RTCPeerConnection with
new RTCPeerConnection({sdpSemantics: 'unified-plan'})
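For example, a minimal sketch that combines this with the iceServers configuration used in the question (the sdpSemantics option only takes effect in Chrome builds where that flag is enabled):

peerConnection = new RTCPeerConnection({
  ...iceServers,                 // existing configuration from the question, e.g. { iceServers: [...] }
  sdpSemantics: 'unified-plan'   // ask Chrome to negotiate with Unified Plan instead of Plan B
});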
I have two peers both sending video over WebRTC. The peers use perfect negotiation in parallel, which means in some cases one peer may make an offer, and then throw it away and use the other peer's offer instead. For some reason when this happens, once the connection opens for real and this peer receives a media track, the track is already immediately 'ended' and cannot be used.
I've simplified this to a minimal repro:
const rtc1 = new RTCPeerConnection();
const rtc2 = new RTCPeerConnection();
rtc1.ontrack = ({ track }) => { console.log('rtc1 got track:', track.readyState); };
rtc2.ontrack = ({ track }) => { console.log('rtc2 got track:', track.readyState); };
stream = await navigator.mediaDevices.getUserMedia({ video: true });
rtc2.addTrack(stream.getTracks()[0], stream);
rtc1.addTrack(stream.getTracks()[0], stream);
// These two lines break everything:
o = await rtc2.createOffer();
await rtc2.setLocalDescription(o);
// ---
o = await rtc1.createOffer();
await rtc1.setLocalDescription(o);
await rtc2.setRemoteDescription(o);
a = await rtc2.createAnswer();
await rtc2.setLocalDescription(a);
await rtc1.setRemoteDescription(a);
I think this should set up a WebRTC connection with a SendRecv video stream going in both directions, so that it prints 'got track: live' twice.
Unfortunately, in reality this prints:
rtc2 got track: ended
rtc1 got track: live
I.e. rtc2's track is already 'ended' when the ontrack callback fires. rtc2 never receives a working media track. Why?
I'm testing this in the latest Chrome: 100.0.4896.75.
Commenting out the two lines marked above that create an unused offer does solve this. In that case, both tracks are live as expected. It seems like that offer should not be a problem though, and with the officially encouraged 'perfect negotiation' setup pattern (or anything similar) these kinds of unused offers seem inevitable.
This in fact shouldn't happen, and it doesn't happen in Firefox.
It turns out this is a bug in Chrome: https://bugs.chromium.org/p/chromium/issues/detail?id=1315611
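For context, here is roughly the collision-handling part of the perfect negotiation pattern that produces these discarded offers (a sketch, not the question's code; it assumes a boolean polite flag and a signaler object wrapping your signaling channel):

let makingOffer = false;

pc.onnegotiationneeded = async () => {
  try {
    makingOffer = true;
    await pc.setLocalDescription();                       // create and apply an offer
    signaler.send({ description: pc.localDescription });
  } finally {
    makingOffer = false;
  }
};

signaler.onmessage = async ({ description }) => {
  // An incoming offer collides with an offer we are making or have already applied.
  const offerCollision = description.type === 'offer' &&
    (makingOffer || pc.signalingState !== 'stable');
  if (!polite && offerCollision) return;                  // impolite peer ignores the remote offer

  // The polite peer applies the remote offer; setRemoteDescription implicitly
  // rolls back its own, now unused, local offer first.
  await pc.setRemoteDescription(description);
  if (description.type === 'offer') {
    await pc.setLocalDescription();                       // create and apply an answer
    signaler.send({ description: pc.localDescription });
  }
};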
I am capturing a user's audio and video with navigator.mediaDevices.getUserMedia() and then using MediaRecorder and its ondataavailable event to store the video and audio blobs locally to upload later.
Now I'm dealing with an issue where, for some reason, ondataavailable stops being called midway through the recording. I'm not sure why, and I get no alerts that anything went wrong. So first, does anyone know why this might happen and how to catch the errors?
Second, I have tried to reproduce it by doing something like this:
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function(camera) {
    local_media_stream = camera;
    camera.getVideoTracks()[0].onended = function() { console.log("VIDEO ENDED") }
    camera.getAudioTracks()[0].onended = function() { console.log("Audio ENDED") }
    camera.onended = function() { console.log("--- ENDED") }
    camera.onremovetrack = (event) => { console.log(`${event.track.kind} track removed`); };
  }).catch(function(error) {
    alert('Unable to capture your camera. Please check logs.' + error);
    console.error(error);
  });
And recording the stream with
recorder = new MediaRecorder(local_media_stream, {
  mimeType: encoding_options,
  audioBitsPerSecond: 128000,
  videoBitsPerSecond: bits_per_second,
});
recorder.ondataavailable = function(e) {
  save_blob(e.data, blob_index)
  blob_index++;
}
recorder.onstop = function(e) {
  console.log("recorder stopped");
  console.log(e)
}
recorder.onerror = function(error) {
  console.log("recorder error");
  alert(error)
  throw error;
}
recorder.onstart = function() {
  console.log('started');
};
recorder.onpause = function() {
  console.log('paused');
};
recorder.onresume = function() {
  console.log('resumed');
};
recorder.start(15000)
Then I try to kill the stream manually to hopefully reproduce whatever issue is occurring by doing
local_media_stream.getVideoTracks()[0].stop()
Now ondataavailable is no longer called but none of the onended events were called. The recording is still going and the local_media_stream is still active.
If I kill the audio too
local_media_stream.getAudioTracks()[0].stop()
Now the local_media_stream is no longer active, but still no event fired to tell me the stream stopped; the recorder is still going, but ondataavailable is never called.
What can I do? I want to know that the local stream is being recorded successfully and if not be alerted so I can at least inform the user that the recording is no longer saving.
MediaRecorder has a recorder.stop() method. I don't see you calling it in your example code. Try calling it.
When you call track[n].stop() on the tracks of your media stream, you tell them to stop feeding data to MediaRecorder. So, unsurprisingly, MediaRecorder stops generating its coded output stream.
You also might, if you're running on Google Chrome, try a shorter timeslice than your recorder.start(15000). Or force the delivery of your dataavailable event by using recorder.requestData().
Edit: When you call .requestData(), it invokes the ondataavailable event handler. (And, if you specified a timeslice in your .start() call, the handler is called automatically.) Each call to that handler delivers the coded media data since the previous call. If you need the whole data stream you can accumulate it in your handler. But when you do that, of course, it goes into the browser's RAM heap, so you can't keep accumulating it indefinitely.
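For example, a minimal sketch of accumulating chunks and forcing delivery periodically (the interval value and the chunks array are illustrative, not from the question):

const chunks = [];

recorder.ondataavailable = (e) => {
  if (e.data && e.data.size > 0) chunks.push(e.data); // accumulate the coded media since the last event
};

// Force a dataavailable event every 5 seconds, in addition to the
// timeslice passed to recorder.start().
setInterval(() => {
  if (recorder.state === 'recording') recorder.requestData();
}, 5000);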
Stopping both tracks should stop your recorder; it does for me in both FF and Chrome: https://jsfiddle.net/gpt51d6y/
But it's very improbable that your users are calling stop() themselves.
Most probably the problem isn't in the MediaRecorder, but earlier, in the MediaStream's tracks.
A MediaRecorder whose tracks are all muted won't emit new dataavailable events, because from its perspective something might still happen: a track may unmute at any time.
Think for instance of a microphone with a hardware push-to-talk feature: the track from such a microphone would get muted every time the user releases the button. The MediaRecorder, even though it records nothing during this time, still has to advance its internal timing so that the "gaps" don't get "glued" together in the final media. However, since nothing was passed into its graph, it won't emit new dataavailable events either; it will simply adjust the timestamp in the next chunk that does get emitted.
To find where the problem comes from, you can listen for the MediaRecorder's error event; it may fire in some cases.
But you should also definitely add listeners to every track of its stream, and don't forget the mute event:
recorder.stream.getTracks().forEach((track) => {
  track.onmute = track.onended = console.warn;
});
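To surface this to the user, something along these lines could work (a sketch; notifyUser stands in for however you alert the user):

function watchTracks(stream) {
  for (const track of stream.getTracks()) {
    track.onended = () => notifyUser(`${track.kind} track ended; the recording may no longer be saving`);
    track.onmute = () => notifyUser(`${track.kind} track muted; no data is being recorded right now`);
    track.onunmute = () => console.log(`${track.kind} track unmuted`);
  }
}

watchTracks(recorder.stream);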
Safari on iOS puts a scrubber on its lock screen for simple HTMLAudioElements. For example:
const a = new Audio();
a.src = 'https://example.com/audio.m4a'
a.play();
JSFiddle: https://jsfiddle.net/0seckLfd/
The lock screen will allow me to choose a position in the currently playing audio file.
How can I disable the ability for the user to scrub the file on the lock screen? The metadata showing is fine, and being able to pause/play is also acceptable, but I'm also fine with disabling it all if I need to.
DISABLE Player on lock screen completely
If you want to completely remove the lock screen player, you could do something like this:
const a = new Audio();
document.querySelector('button').addEventListener('click', (e) => {
  a.src = 'http://sprott.physics.wisc.edu/wop/sounds/Bicycle%20Race-Full.m4a'
  a.play();
});
document.addEventListener('visibilitychange', () => {
  if (document.hidden) a.src = undefined
})
https://jsfiddle.net/5s8c9eL0/3/
That stops the player when changing tabs or locking the screen.
(Code to be cleaned up/improved depending on your needs.)
From my understanding, you can't block/hide the scrubbing commands unless you can tag the audio as a live stream. That being said, you can use JS to refuse scrubbing server-side. Reference the answer here. Although that answer speaks of video, it also works with audio.
The lock screen / control center scrubber can also be avoided by using the Web Audio API.
This is an example of preloading a sound and playing it, with commentary and error handling:
try {
  // <audio> element is simpler for sound effects,
  // but in iOS/iPad it shows up in the Control Center, as if it's music you'd want to play/pause/etc.
  // Also, on subsequent plays, it only plays part of the sound.
  // And Web Audio API is better for playing sound effects anyway because it can play a sound overlapping with itself, without maintaining a pool of <audio> elements.
  window.audioContext = window.audioContext || new AudioContext(); // Interoperate with other things using Web Audio API, assuming they use the same global & pattern.

  const audio_buffer_promise =
    fetch("audio/sound.wav")
      .then(response => response.arrayBuffer())
      .then(array_buffer => audioContext.decodeAudioData(array_buffer));

  var play_sound = async function () {
    audioContext.resume(); // in case it was not allowed to start until a user interaction
    // Note that this should be before waiting for the audio buffer,
    // so that it works the first time (it would no longer be "within a user gesture")
    // This only works if play_sound is called during a user gesture (at least once), otherwise audioContext.resume(); needs to be called externally.
    const audio_buffer = await audio_buffer_promise; // Promises can be awaited any number of times. This waits for the fetch the first time, and is instant the next time.
    // Note that if the fetch failed, it will not retry. One could instead rely on HTTP caching and just fetch() each time, but that would be a little less efficient as it would need to decode the audio file each time, so the best option might be custom caching with request error handling.
    const source = audioContext.createBufferSource();
    source.buffer = audio_buffer;
    source.connect(audioContext.destination);
    source.start();
  };
} catch (error) {
  console.log("AudioContext not supported", error);
  play_sound = function() {
    // no-op
    // console.log("SFX disabled because AudioContext setup failed.");
  };
}
I did a search for a way to help you, but I did not find an effective way to disable the commands. However, I found a way to customize them, which may help you; follow the Apple tutorial link. A rough sketch of one way to customize the controls from the page is included after this answer.
I think what's left to do now is wait and see if iOS 13 will bring an option that does what you want.
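For reference, a minimal sketch of customizing which lock-screen controls are handled, using the Media Session API (an illustrative addition, not part of the original answer; it does not remove the scrubber, and iOS support varies):

const a = new Audio('https://example.com/audio.m4a'); // placeholder URL
a.play();

if ('mediaSession' in navigator) {
  navigator.mediaSession.metadata = new MediaMetadata({
    title: 'My track',   // shown on the lock screen
    artist: 'My artist',
  });
  // Take over only play/pause; other actions keep the browser's default behavior.
  navigator.mediaSession.setActionHandler('play', () => a.play());
  navigator.mediaSession.setActionHandler('pause', () => a.pause());
}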
I am trying web video chat using WebRTC.
I checked WebRTC, and it's enough for this solution.
But in my case, there are three cameras (a webcam and USB cameras) on the computer on one side.
camera1
camera1 <-> camera2
camera3
So I tried to add multiple streams to one RTCPeerConnection.
But it seems WebRTC does not support this.
It seems I need to create 3 RTCPeerConnections for this.
If I create 3 peer connections, it's like a video chat room.
Is there another solution?
pc = new RTCPeerConnection(null);
pc.addStream(localStream1);
pc.addStream(localStream2);
pc.addStream(localStream3);
Is this possible?
Yes, WebRTC does support this, exactly as you show.
Except addStream has been deprecated, so you want to use addTrack instead. Or use a polyfill:
pc.addStream = stream => stream.getTracks().forEach(t => pc.addTrack(t, stream));
The order of additions determines the order in which the track events fire on the other end:
pc.ontrack = ({streams: [stream]}) => {
  for (const video of [remoteElement1, remoteElement2, remoteElement3]) {
    if (video.srcObject && video.srcObject.id != stream.id) continue;
    video.srcObject = stream;
    break;
  }
};
The above code will assign the three incoming streams to three video elements for playback, in order. The track event fires per track, so we check the stream.id in case a stream has more than one track.
Alternatively, we could have sent the stream.ids over a data channel and correlated that way, since stream.ids are identical remotely (a sketch of this is below). Note however that track.ids are not stable this way. The third way is to correlate using transceiver.mid, which is always stable, except that it is null initially.
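A rough sketch of the data-channel approach, assuming pcA is the sending peer connection and pcB the receiving one, with a pre-negotiated channel (channel name, message shape, and element ids are illustrative):

// Peer A (the side with the cameras): announce which stream id is which camera.
const metaA = pcA.createDataChannel('meta', { negotiated: true, id: 0 });
metaA.onopen = () => metaA.send(JSON.stringify({
  [localStream1.id]: 'camera1',
  [localStream2.id]: 'camera2',
  [localStream3.id]: 'camera3',
}));

// Peer B (the receiving side): create the same pre-negotiated channel and
// use the announced ids to route incoming streams to video elements.
const metaB = pcB.createDataChannel('meta', { negotiated: true, id: 0 });
let labelsByStreamId = {};
metaB.onmessage = ({ data }) => { labelsByStreamId = JSON.parse(data); };

pcB.ontrack = ({ streams: [stream] }) => {
  // Assumes the metadata message arrives before the tracks; a real app would
  // handle the opposite order (e.g. buffer tracks until the message arrives).
  const label = labelsByStreamId[stream.id]; // e.g. 'camera2'
  document.getElementById(label).srcObject = stream;
};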