Video chat with multiple USB webcams - javascript

I am trying to build a web video chat using WebRTC.
I checked WebRTC, and it seems sufficient for this solution.
However, in my case there are three cameras (a webcam plus USB cameras) on the computer on one side:
camera1
camera1 <-> camera2
camera3
So I tried adding multiple streams to one RTCPeerConnection,
but WebRTC did not seem to support this.
It seems I need to create 3 RTCPeerConnections for this,
but with 3 peer connections it looks like a multi-party video chat room.
Is there another solution?
pc = new RTCPeerConnection(null);
pc.addStream(localStream1);
pc.addStream(localStream2);
pc.addStream(localStream3);
Is this possible?

Yes, WebRTC does support this, exactly as you show.
Except that addStream has been deprecated, so you want to use addTrack instead, or use a polyfill:
pc.addStream = stream => stream.getTracks().forEach(t => pc.addTrack(t, stream));
The order of additions determines the order in which the track events fire on the other end:
pc.ontrack = ({streams: [stream]}) => {
  for (const video of [remoteElement1, remoteElement2, remoteElement3]) {
    if (video.srcObject && video.srcObject.id != stream.id) continue;
    video.srcObject = stream;
    break;
  }
};
The above code assigns the three incoming streams to three video elements for playback, in order. The track event fires per track, so we check stream.id in case a stream has more than one track.
Alternatively, we could send the stream.ids over a data channel and correlate that way, since stream.ids are identical on the remote end. Note, however, that track.ids are not stable this way. A third way is to correlate using transceiver.mid, which is always stable, except that it is initially null.
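As a sketch of the data-channel approach (the channel name `dc` and the message format are assumptions, not part of the answer above): the sender transmits its local stream ids in display order, and the receiver reorders incoming streams to match.

```javascript
// Pure helper: given the sender's ordered stream ids and the streams received
// so far, return the streams in the sender's intended order (missing ones
// stay undefined until their track events arrive).
function orderStreams(orderedIds, receivedStreams) {
  const byId = new Map(receivedStreams.map(s => [s.id, s]));
  return orderedIds.map(id => byId.get(id));
}

// Sender side (assumed data channel `dc`):
//   dc.send(JSON.stringify([localStream1, localStream2, localStream3].map(s => s.id)));
// Receiver side:
//   dc.onmessage = ({data}) => { orderedIds = JSON.parse(data); };
//   pc.ontrack = ({streams: [stream]}) => { received.push(stream); };
```

Because stream.ids survive the trip to the remote end, this mapping is stable even if the track events arrive out of order.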

Tone.js audio filters not being heard

I'm trying to add filter effects to an audio stream playing on my website. I'm able to connect the Tone.js library to the stream, but I'm not hearing any changes in the audio playing on the page. I'm not seeing any errors in the console, and I've tried adjusting the filter from 50 to 5000, but nothing seems to have any audible effect. Do I need to set up a new Tone.Player() to actually hear the audio? If so, how do I set up the Player when the existing audio element has no src?
$('#testBtn').click(async function () {
  const audioElement = document.getElementById('theAudioStream');
  const mediaElementSource = Tone.context.createMediaElementSource(audioElement);
  const filter = new Tone.Filter(50, 'lowpass').toDestination();
  Tone.connect(mediaElementSource, filter);
  await Tone.start();
  console.log("Started?");
});
The audio stream I'm trying to modify is set up from a JsSip call. The code to start the stream is as follows:
var audioStream = document.getElementById('theAudioStream');
// further down in code
currentSession.answer(options);
if (currentSession.connection) {
  currentSession.connection.ontrack = function (e) {
    audioStream.srcObject = e.streams[0];
    audioStream.play();
  };
}
I click the test button after the call has started, so I know the audio stream is present before initializing the Tone.js filters.
Working solution:
Removing the audioStream.play() call from where the JsSIP call is answered solves the issue.
I don't know the exact reason why this works (it might even be a workaround), but after much trial and error this approach makes the audio available to Tone.js for effects.
Any other solutions are welcome.
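A minimal sketch of that fix, factored so the handler logic is visible (the element and JsSIP names are taken from the question; treating the omitted play() call as the cause is the asker's own working hypothesis):

```javascript
// Attach an incoming track event's stream to the <audio> element without
// calling play(): playback is left to the element/Tone graph, so the audio
// is not routed straight to the output past the Tone.js filter.
function attachRemoteAudio(audioEl, trackEvent) {
  audioEl.srcObject = trackEvent.streams[0];
  // intentionally no audioEl.play() here
  return audioEl;
}

// Wiring (browser, assumed names from the question):
//   currentSession.connection.ontrack = e =>
//     attachRemoteAudio(document.getElementById('theAudioStream'), e);
```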

JavaScript/HTML video tag in Safari. Block now-playing controls [duplicate]

Safari on iOS puts a scrubber on its lock screen for simple HTMLAudioElements. For example:
const a = new Audio();
a.src = 'https://example.com/audio.m4a';
a.play();
JSFiddle: https://jsfiddle.net/0seckLfd/
The lock screen will allow me to choose a position in the currently playing audio file.
How can I disable the ability for the user to scrub the file on the lock screen? The metadata showing is fine, and being able to pause/play is also acceptable, but I'm also fine with disabling it all if I need to.
DISABLE player on lock screen completely
If you want to completely remove the lock screen player, you could do something like:
const a = new Audio();
document.querySelector('button').addEventListener('click', (e) => {
  a.src = 'http://sprott.physics.wisc.edu/wop/sounds/Bicycle%20Race-Full.m4a';
  a.play();
});
document.addEventListener('visibilitychange', () => {
  if (document.hidden) {
    a.removeAttribute('src'); // assigning undefined would coerce to the string "undefined"
    a.load();
  }
});
https://jsfiddle.net/5s8c9eL0/3/
That stops the player when you change tabs or lock the screen.
(Code to be cleaned up and improved depending on your needs.)
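The visibility-based teardown can be factored into a small helper. Detaching the source via removeAttribute avoids assigning undefined, which would coerce to the literal string "undefined"; the element and URL are placeholders:

```javascript
// Tear the media resource down while the page is hidden (which removes the
// lock-screen player) and restore it when the page becomes visible again.
function toggleLockScreenPlayer(audioEl, hidden, src) {
  if (hidden) {
    audioEl.removeAttribute('src'); // drop the resource...
    audioEl.load();                 // ...so iOS tears the player down
  } else {
    audioEl.src = src;
  }
}

// Wiring (browser):
//   document.addEventListener('visibilitychange', () =>
//     toggleLockScreenPlayer(a, document.hidden, 'https://example.com/audio.m4a'));
```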
From my understanding, you can't block or hide the scrubbing controls unless you can tag the audio as a live stream. That said, you can refuse scrubbing server-side; reference the answer here. Although that answer speaks of video, it also works with audio.
The lock screen / control center scrubber can also be avoided by using Web Audio API.
This is an example of preloading a sound and playing it, with commentary and error handling:
try {
  // An <audio> element is simpler for sound effects,
  // but on iOS/iPadOS it shows up in the Control Center, as if it were music you'd want to play/pause/etc.
  // Also, on subsequent plays, it only plays part of the sound.
  // The Web Audio API is better for playing sound effects anyway, because it can play a sound
  // overlapping with itself without maintaining a pool of <audio> elements.
  window.audioContext = window.audioContext || new AudioContext(); // interoperate with other Web Audio API users, assuming they use the same global & pattern
  const audio_buffer_promise =
    fetch("audio/sound.wav")
      .then(response => response.arrayBuffer())
      .then(array_buffer => audioContext.decodeAudioData(array_buffer));
  var play_sound = async function () {
    audioContext.resume(); // in case it was not allowed to start until a user interaction
    // Note that this should happen before waiting for the audio buffer,
    // so that it works the first time (it would no longer be "within a user gesture").
    // This only works if play_sound is called during a user gesture (at least once);
    // otherwise audioContext.resume() needs to be called externally.
    const audio_buffer = await audio_buffer_promise; // Promises can be awaited any number of times:
    // this waits for the fetch the first time and is instant the next time.
    // Note that if the fetch failed, it will not retry. One could instead rely on HTTP caching
    // and just fetch() each time, but that would be a little less efficient as it would need to
    // decode the audio file each time, so the best option might be custom caching with request error handling.
    const source = audioContext.createBufferSource();
    source.buffer = audio_buffer;
    source.connect(audioContext.destination);
    source.start();
  };
} catch (error) {
  console.log("AudioContext not supported", error);
  play_sound = function () {
    // no-op
    // console.log("SFX disabled because AudioContext setup failed.");
  };
}
I searched for a way to help you, but I did not find an effective way to disable the controls. However, I found a way to customize them, which may help you; follow the Apple tutorial link.
I think what's left to do now is wait and see whether iOS 13 brings an option that does what you want.

WebRTC - Restart Video Stream After Calling stop()

So, I am using webRTC to create a local stream (video and audio), and want to be able to stop and restart the video of said stream.
At the point where I want to stop the stream I am getting the local video track:
var vidTrack = this.videoEl.srcObject.getTracks().find(track => track.kind == 'video')
I then call stop() on the track, which works and turns off the camera-light indicator on my device (which is what I want). The problem is that this seems to be a one-way operation: there is no way to restart the stream once I call stop() on it.
I have played with just toggling the enabled boolean on the track object, which DOES disable the track from coming through, but does NOT stop displaying the camera-light indicator on my device (which I need, and stop() does).
Just wondering if anyone has come across this issue or has ideas or solutions for getting what I need.
Here is the solution I ended up with, for anyone who may see this in the future, based on Dirk V's response:
if (vidTrack && toggle && vidTrack.readyState && vidTrack.readyState == "ended") {
  let newVideoStreamGrab = await navigator.mediaDevices.getUserMedia({
    video: true
  });
  this.stream.removeTrack(this.stream.getVideoTracks()[0]);
  this.stream.addTrack(newVideoStreamGrab.getVideoTracks()[0]);
} else {
  vidTrack.stop();
}
The best way is to request the stream from the camera again after stopping it, as there is no way to restart a stopped track.
The enabled flag is only used to allow or disallow the track to render frames, so it doesn't affect the state of the camera:
"When true, enabled indicates that the track is permitted to render its actual media to the output. When enabled is set to false, the track only generates empty frames."
source
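If the track was also added to an RTCPeerConnection, the fresh track can be swapped into the corresponding RTCRtpSender with replaceTrack, which avoids renegotiation. A hedged sketch of the restart logic (getMedia is injected so the logic is testable; in a browser it would be () => navigator.mediaDevices.getUserMedia({video: true})):

```javascript
// Replace an ended video track in `stream` with a freshly captured one and,
// if a sender is given, swap it into the peer connection without renegotiating.
async function restartVideo(stream, getMedia, sender) {
  const old = stream.getVideoTracks()[0];
  if (old && old.readyState !== 'ended') return old; // still live; nothing to do
  const fresh = (await getMedia()).getVideoTracks()[0];
  if (old) stream.removeTrack(old);
  stream.addTrack(fresh);
  if (sender) await sender.replaceTrack(fresh); // no renegotiation needed
  return fresh;
}
```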


How to add multiple tracks to peer connection in WebRTC?

I wish to send two video streams (one video stream and one stream captured from the canvas HTML element) using only one RTCPeerConnection.
What I tried is to addTrack both tracks to the peer connection object before making the offer, but it doesn't work in Firefox (it works in Chrome). The peerConnection.ontrack event fires only once, with the first track added to the peer connection object (although there are two streams).
I have read about renegotiation, but I am currently adding both tracks to the peer connection before sending the offer so I don't know if I need to do renegotiation. Do I need to?
I have also heard about the interoperability issue in multistreaming between Firefox (Unified Plan) and Chrome (Plan B), so please advise me on what approach I should take now.
I am using adapter.js.
Here is the code (JavaScript):
function createPeerConnection() {
  peerConnection = new RTCPeerConnection(iceServers);
  peerConnection.onicecandidate = (event) => {
    if (event.candidate) {
      // send to server
    }
  };
  videoStream.getTracks().forEach(track => peerConnection.addTrack(track, videoStream));
  canvasStream.getTracks().forEach(track => peerConnection.addTrack(track, canvasStream));
}
This is how I create the RTCPeerConnection and add the tracks. After this part, I create the offer and send it to the signaling server. It all works well; it's just that the other end only receives the first track added (in Firefox). If you need those bits, I will add them.
This is the ontrack event handler.
peerConnection.ontrack = (event) => {
  console.log(event); // only prints the first track in Firefox, but prints both tracks in Chrome
};
Try enabling Unified Plan support in Chrome. This is still a work in progress and behind a flag (enable "Experimental Web Platform features" on chrome://flags).
Then you need to construct your RTCPeerConnection with:
new RTCPeerConnection({sdpSemantics: 'unified-plan'})
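For reference, under Unified Plan semantics (which later browser versions use by default) ontrack fires once per track, so the receiver can tell the camera stream and the canvas stream apart by stream id. A hedged sketch with the grouping logic factored out; the element names in the wiring comment are assumptions:

```javascript
// Group incoming track events by their stream id, so tracks belonging to the
// camera stream and the canvas stream end up in separate buckets.
function groupByStream(trackEvents) {
  const byStream = new Map();
  for (const {track, streams: [stream]} of trackEvents) {
    if (!byStream.has(stream.id)) byStream.set(stream.id, []);
    byStream.get(stream.id).push(track);
  }
  return byStream;
}

// Browser wiring (assumed element names):
//   peerConnection.ontrack = e => {
//     const el = e.streams[0].id === videoStream.id ? cameraVideoEl : canvasVideoEl;
//     el.srcObject = e.streams[0];
//   };
```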
