WebRTC stuck in connecting state - javascript

I have successfully exchanged the offer, answer and ICE candidates for a WebRTC connection from A to B. At this point, the connection is stuck in the "connecting" state. The initiator (A) appears to time out after a while and switches to the "failed" state, whereas its remote peer (B) stays in the "connecting" state permanently.
Any help would be very appreciated.
Creation of peer (A and B):
let peer = new RTCPeerConnection({
  iceServers: [
    {
      urls: [
        "stun:stun1.l.google.com:19302",
        "stun:stun2.l.google.com:19302",
      ],
    },
    {
      urls: [
        "stun:global.stun.twilio.com:3478?transport=udp",
      ],
    },
  ],
  iceCandidatePoolSize: 10,
});
Creating offer (A):
peer.onnegotiationneeded = async () => {
  offer = await peer.createOffer();
  await peer.setLocalDescription(offer);
};
Collecting ice candidates (A):
peer.onicecandidate = (evt) => {
  if (evt.candidate) {
    iceCandidates.push(evt.candidate);
  } else {
    // send offer and iceCandidates to B through signaling server
    // this part is working perfectly
  }
};
Creating answer and populating ice candidates (B):
await peer.setRemoteDescription(offer);
let answer = await peer.createAnswer();
await peer.setLocalDescription(answer);
// send answer back to A through signaling server
for (let candidate of sigData.iceCandidates) {
  await peer.addIceCandidate(candidate);
}
On answer from B through signaling server (A):
await peer.setRemoteDescription(answer);
Detect connection state change (A and B):
peer.onconnectionstatechange = () => {
  console.log("state changed");
  console.log(peer.connectionState);
};
Also note that there were two occasions where it connected successfully, but I have yet to see it work again.
EDIT: I forgot to mention that I am also creating a data channel (the onicecandidate event doesn't seem to fire without this). This is done immediately after the RTCPeerConnection is constructed and its event handlers have been attached.
let channel = peer.createDataChannel("...", {
  id: ...,
  ordered: true,
});
EDIT 2: As @jib suggested, I am now also gathering ICE candidates on B and sending them back to A to add. However, the exact same problem persists.
EDIT 3: It seems to connect the first time I hard-reload the webpage for A and the webpage for B. The connection then stops working until I do another hard reload. Does anyone have any idea why this is the case? At least I can continue development for the time being until I figure out this issue.
EDIT 4: I removed the iceServers I was using and left the RTCPeerConnection constructor blank. Somehow it is much more reliable now. But I have yet to get a successful connection on iOS Safari!

Open this in two browser windows and hit the Call button in one of them. This is the code:
const pc = new RTCPeerConnection();

call.onclick = async () => {
  const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
  video.srcObject = stream;
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }
};

pc.ontrack = ({streams}) => video.srcObject = streams[0];
pc.oniceconnectionstatechange = () => console.log(pc.iceConnectionState);
pc.onicecandidate = ({candidate}) => sc.send({candidate});
pc.onnegotiationneeded = async () => {
  await pc.setLocalDescription(await pc.createOffer());
  sc.send({sdp: pc.localDescription});
};

const sc = new localSocket(); // localStorage signaling hack
sc.onmessage = async ({data: {sdp, candidate}}) => {
  if (sdp) {
    await pc.setRemoteDescription(sdp);
    if (sdp.type == "offer") {
      await pc.setLocalDescription(await pc.createAnswer());
      sc.send({sdp: pc.localDescription});
    }
  } else if (candidate) await pc.addIceCandidate(candidate);
};
This is the same source for A and B. Replace the localSocket hack with your preferred signaling channel (e.g. websocket).
Don't cache ICE candidates since that defeats the purpose of Trickle ICE. It may appear fast locally, but in real networks ICE may take time.
In fact, sending candidates is meaningless if you delay sending the offer/answer until all local candidates have been gathered, since candidates are already embedded in the offer/answer (pc.localDescription) at that point.
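Put differently, a non-trickle setup only needs to send pc.localDescription once gathering completes. A sketch of that, reusing the sc channel from the example above (the onicegatheringstatechange event and iceGatheringState property are standard):
// Non-trickle variant: wait for ICE gathering to finish, then send only the
// local description; all gathered candidates are embedded in its SDP.
pc.onicegatheringstatechange = () => {
  if (pc.iceGatheringState === "complete") {
    sc.send({sdp: pc.localDescription});
  }
};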

Finally! After a few weeks I have figured out the issue, which wasn't apparent in the code I included in my question, but may still be useful for anyone who is having similar problems.
I assumed that ICE gathering completed only after the onnegotiationneeded event had fired and the offer/answer had been created.
Because of this incorrect assumption, I was signaling the offer/answer together with the ICE candidates at that stage, but very frequently (always, in iOS Safari, in my experience) the offer/answer had not yet been created at that point.
I solved this by creating two promises for a) the completion of ICE candidate gathering, and b) the creation of the offer/answer. I used Promise.all on the two promises, and when both completed, I sent the ICE candidates and offer/answer through the signaling server all at once.
This works, but of course in the future I should "trickle" this information, sending bits and pieces as they come instead of waiting for everything to complete. But I'll worry about that later, since at the moment I am using HTTP requests and it is too much hassle.
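For reference, a minimal sketch of that approach, reusing the peer and iceCandidates variables from the question (the async wrapper and the sendThroughSignalingServer helper are hypothetical):
// Resolves when ICE gathering finishes (a null candidate means "done").
const gatheringDone = new Promise((resolve) => {
  peer.onicecandidate = (evt) => {
    if (evt.candidate) iceCandidates.push(evt.candidate);
    else resolve();
  };
});

// Resolves once the offer has been created and applied.
const offerCreated = new Promise((resolve) => {
  peer.onnegotiationneeded = async () => {
    const offer = await peer.createOffer();
    await peer.setLocalDescription(offer);
    resolve(offer);
  };
});

// Signal only once both have completed (run inside an async function).
const [, offer] = await Promise.all([gatheringDone, offerCreated]);
sendThroughSignalingServer({ offer, iceCandidates }); // hypothetical helper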
EDIT: My connection still always gets stuck when iceServers are included, so I've created a new question. But local connections with no iceServers included are now 100% reliable :)

Related

AWS Rekognition timeout error happening randomly

I have an application in Electron that performs facial recognition of people to decide whether or not they can enter the place, and for that I'm using Amazon Rekognition.
Everything was working fine (for a few months) until, two days ago, a customer reported to me that the app was behaving strangely, like it wasn't responding to requests for facial recognition.
After several tests, I discovered that what is happening is a timeout error, which occurs on all API calls, whether they search for faces (SearchFacesByImage) or register new faces (IndexFaces).
The error says:
{
  "message": "connect ETIMEDOUT 3.226.60.54:443",
  "errno": -4039,
  "code": "TimeoutError",
  "syscall": "connect",
  "address": "3.226.60.54",
  "port": 443,
  "time": "2022-12-14T13:50:10.909Z",
  "region": "us-east-1",
  "hostname": "rekognition.us-east-1.amazonaws.com",
  "retryable": true
}
What intrigued me was the fact that everything was working fine, until this behavior just started happening (and I didn't make any code changes/updates to the app running on my client's computer).
And what makes me even more intrigued is that this behavior occurs completely randomly and only on the machine of that client in question. Sometimes the API calls work correctly (returning whether the person was recognized or not), but most of the time, the calls take about 90 seconds to return the timeout error. When executing the same code on my machine (same methods and same CollectionId) everything runs normally and there was no timeout error at any time - while at the exact same moment on my client's machine the behavior continues.
I was using aws-sdk and then switched to @aws-sdk/client-rekognition (thinking that might solve the problem), but the code only worked for the first few calls to the API, and a few minutes later it got the timeout errors again.
The code I'm using to configure and make calls to Rekognition is basically this:
const { RekognitionClient, IndexFacesCommand, SearchFacesByImageCommand } = require('@aws-sdk/client-rekognition')

const rekognitionClient = new RekognitionClient({
  credentials: {
    accessKeyId: 'accessKeyId',
    secretAccessKey: 'secretAccessKey'
  },
  region: 'us-east-1'
})

const registerFaceOnRekognition = async (bytes, userId) => {
  const params = {
    CollectionId: 'collectionId',
    Image: { Bytes: bytes },
    ExternalImageId: userId,
    MaxFaces: 1,
    QualityFilter: 'HIGH'
  }
  const command = new IndexFacesCommand(params)
  try {
    const { FaceRecords } = await rekognitionClient.send(command)
    if (!FaceRecords.length) {
      console.log('No faces detected.')
      return
    }
    console.log('Face created:')
    console.log(FaceRecords[0].Face.FaceId)
  } catch (error) {
    console.error(error) // timeout error
  }
}

const searchFaceByImageOnRekognition = async (bytes) => {
  const params = {
    CollectionId: 'collectionId',
    Image: { Bytes: bytes },
    MaxFaces: 1,
    FaceMatchThreshold: 99,
    QualityFilter: 'HIGH'
  }
  const command = new SearchFacesByImageCommand(params)
  try {
    const { FaceMatches } = await rekognitionClient.send(command)
    if (!FaceMatches.length) {
      console.log('This face has not been registered yet')
      return
    }
    console.log('Face found:')
    console.log(FaceMatches[0].Face.ExternalImageId)
  } catch (error) {
    console.error(error) // timeout error
  }
}

// Method called through the renderer process that has a canvas where the webcam view is reproduced
const onTakePicture = (event, data) => {
  const bytes = Buffer.from(data.dataURL.replace('data:image/jpeg;base64,', ''), 'base64')
  // If there is a userId, register the face in the image
  if (data.userId) {
    registerFaceOnRekognition(bytes, data.userId)
    return
  }
  // Else, search for the face in the image
  searchFaceByImageOnRekognition(bytes)
}
Just to reiterate: during all tests on my client's computer, the internet connection was stable and working properly.
What is the best way to investigate and resolve this issue?
UPDATE:
I enabled Rekognition debug logs and they can be found at: https://gist.github.com/IgorSamer/4e58e09f3fa615401f85ca325b794245
In it, the first three requests (2022-12-16T13:48:45.932Z, 2022-12-16T13:53:20.325Z and 2022-12-16T14:19:12.479Z) complete normally. However, all subsequent requests give the timeout error, where, in fact, no data is returned after the [DEBUG] App: endpoints Resolved endpoint: step.
As previously mentioned, the internet connection was working fine. I was also able to reproduce the error via remote access; that is, the machine's internet was fine at the time of the error.
Is it possible that my client's firewall/network is blocking requests sent by the SDK after a few successful ones? If so, what is the best way to investigate this?
Exploration
This is what I would do initially to gather some info:
Verify if this is happening ALL the time with that specific client.
Verify if this is happening with ONLY one client, or with more.
Verify if this is happening in one or multiple regions (e.g. us-east-1).
Verify if Amazon Rekognition has had, or is having, issues in the affected region during the time window of interest.
Check Rekognition's status in the AWS Health Dashboard in your console.
Use the Amazon Rekognition guidelines and quotas as a reference to determine whether your app/service usage of Rekognition is under the set limits.
Note there's a limit on TPS per resource (e.g. SearchFacesByImage, IndexFaces) per account.
Possible approaches
Verify if there was a change in the client's network/firewall. Just ask.
Replicate your app's API call with the AWS CLI and study the logs.
Access your client's device remotely.
Set up temporary AWS credentials (remember to remove access after the test).
Send an API call to the Rekognition endpoint. Note that even a 4XX error is good news, as you got at least some response.
Set up proper logging for your app (as CloudWatch logs may not be enough to troubleshoot).
Check out Splunk's APM and New Relic's APM.
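While investigating, it can also help to make the failure surface quickly instead of after roughly 90 seconds per call. A sketch of that, assuming the AWS SDK v3 requestHandler option and the @aws-sdk/node-http-handler package (the timeout values are illustrative, not recommendations):
const { RekognitionClient } = require('@aws-sdk/client-rekognition')
const { NodeHttpHandler } = require('@aws-sdk/node-http-handler')

const rekognitionClient = new RekognitionClient({
  region: 'us-east-1',
  maxAttempts: 2, // retry once instead of the SDK default
  requestHandler: new NodeHttpHandler({
    connectionTimeout: 5000, // ms allowed to establish the TCP connection
    socketTimeout: 10000 // ms of socket inactivity before aborting
  })
})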
I hope this is of help, at least to create a troubleshooting strategy.

JavaScript WebRTC group video streaming

I would like help adding group video streaming to the code below. I want to implement group streaming that works with links like
domain.net/video.html?id=123&host=1 and domain.net/video.html?id=456&host=1, so that friends can join using the id param.
'use strict';

// Put variables in global scope to make them available to the browser console.
const constraints = window.constraints = {
  audio: true,
  video: true
};

function handleSuccess(stream) {
  const video = document.querySelector('video');
  const videoTracks = stream.getVideoTracks();
  window.stream = stream;
  video.srcObject = stream;
}

function handleError(error) {
  if (error.name === 'ConstraintNotSatisfiedError') {
    const v = constraints.video;
    console.log("The resolution not supported");
  } else if (error.name === 'PermissionDeniedError') {
    console.log('Permissions have not been granted');
  }
  console.log(`getUserMedia error: ${error.name}`, error);
}

async function init(e) {
  try {
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    handleSuccess(stream);
    e.target.disabled = true;
  } catch (e) {
    handleError(e);
  }
}

document.querySelector('#showVideo').addEventListener('click', e => init(e));
I am still in my learning stages with JavaScript, so please help. Thanks.
The script you provided currently gets the user's camera and audio. You will need to feed that to an RTCPeerConnection and establish a WebRTC session with the other peers by exchanging SDP messages and ICE candidates via a signaling server. Read this to understand in more detail how WebRTC connectivity works.
Since you want to stream to a group of people, one approach is to create a new RTCPeerConnection for every new peer in your room. Check out this example, it does exactly that.
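A minimal sketch of that mesh approach (the signaling object, its message shape, and the attachRemoteVideo helper are assumptions for illustration):
// One RTCPeerConnection per remote peer, keyed by peer id.
const peers = new Map();

function getOrCreatePeer(peerId, localStream) {
  if (peers.has(peerId)) return peers.get(peerId);

  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
  });

  // Send our local tracks to this peer.
  for (const track of localStream.getTracks()) {
    pc.addTrack(track, localStream);
  }

  // Relay ICE candidates to this specific peer via the signaling server.
  pc.onicecandidate = ({ candidate }) => {
    if (candidate) signaling.send({ to: peerId, candidate });
  };

  // Show this peer's incoming stream.
  pc.ontrack = ({ streams }) => attachRemoteVideo(peerId, streams[0]);

  peers.set(peerId, pc);
  return pc;
}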
However, since WebRTC is intended for peer-to-peer use, this approach is not very scalable: in a full mesh every participant connects to every other participant, so the number of peer connections grows quadratically with the room size, which is quite heavy for the browser to handle and consumes a lot of bandwidth.
With 6+ people the quality of your call will already be terrible, but up to about 4 people it should be doable.
If your intention is to have a conference room, you should really look into using a Selective Forwarding Unit (SFU) media server. With this approach, the server performs the stream routing and applies tricks such as simulcast to make your streams more scalable and adaptive, providing a better experience.
Check out the Janus VideoRoom plugin for an open-source SFU implementation.

How to sync my saved data on Apple devices with a service worker?

I know that the Background Sync API is not supported in the Apple ecosystem, so how would you get around that and build a solution that works on Apple platforms as well as others? I currently have a solution that uses the Background Sync API, and on iOS it literally does nothing: it saves the failed requests and then never syncs. Could I access the sync queue somehow with an IndexedDB wrapper and then sync at an arbitrary time?
I tried that once and it broke everything. Do you have any ideas how?
const bgSyncPlugin = new workbox.backgroundSync.Plugin('uploadQueue', {
  maxRetentionTime: 60 * 24 * 60,
  onSync: async ({ queue }) => {
    return getAccessToken().then((token) => {
      replayQueue(queue, token).then(() => {
        return showNotification();
      });
    });
  },
});
This is the code I have. It all has a purpose: since my token has a timeout, I have to check whether the token is expired and, if it is, replace it in the headers; I also have to change data in the request bodies when I sync. It all works well everywhere except on Apple devices, which never trigger onSync. I tried listening to fetch events and triggering the sync with:
self.registration.sync.register('uploadQueue');
But to no avail. I also tried registering the sync on service worker registration; nothing seems to help.
If sync registration is not viable on iOS, can I access the upload queue table somehow?
P.S.: I'm using dexie.js as an IndexedDB wrapper. It's a Vue.js app with a Laravel API, and the sync process is quite complex, but it is working. I just have to figure out how to do it on iOS!
I found an answer to this after about two weeks of it being on my mind and on my to-do list.
Now get some popcorn and strap yourself the heck in, because this is quite a chonker.
In my case the sync process was pretty complex, as my users could be away from any connection for so long that my access tokens would expire, so I also had to check for access token expiration and re-fetch the token.
Furthermore, my users could add new people to the database of people, each with their own unique server-side id, so I had to order my requests so that the person registrations are sent first, then the tasks and campaigns completed for them, so that I could receive the respective ids from the API.
Now for the fun part:
First: you can't use bgSyncPlugin, because you can't access its replay queue; you have to use a plain queue, like this:
var bgSyncQueue = new workbox.backgroundSync.Queue('uploadQueue', {
  maxRetentionTime: 60 * 24 * 60,
  onSync: () => syncData(),
});
And push the failed requests to the queue inside the fetch listener:
this.onfetch = (event) => {
  let requestClone = event.request.clone();
  if (requestClone.method === 'POST' && 'condition to match the requests you need to replay') {
    event.respondWith(
      (() => {
        const promiseChain = fetch(requestClone).catch(() => {
          // Workbox's Queue.pushRequest expects an object with a `request` property.
          return bgSyncQueue.pushRequest({ request: event.request });
        });
        event.waitUntil(promiseChain);
        return promiseChain;
      })()
    );
  } else {
    event.respondWith(fetch(event.request));
  }
};
When the user has a connection, we trigger the syncData() function. On iOS this is a bit complicated (more on that later); on Android it happens automatically, as the service worker sees that it has a connection. Now let's look at what syncData does:
async function syncData() {
  if ((await bgSyncQueue.getAll()).length) // is there data to sync? (the original truthy check on the queue object always passed)
    return getAccessToken() // get the access token, refreshing it if expired
      .then((token) => replayQueue(bgSyncQueue, token).then(() => showNotification({ body: 'Successful sync', title: 'Data synced to server' })))
      .catch(() => showNotification({ title: 'Sync unsuccessful', body: 'Please find an area with better coverage' })); // replay the requests and show a notification
  return Promise.resolve('empty'); // if there are no requests to replay, return 'empty'
}
For the Android/desktop side of things we are finished; you can be happy with your modified data being synced. On iOS, though, we can't just have the user's data be uploaded only when they restart the PWA, as that's bad user experience, but we are playing with JavaScript: everything is possible one way or another.
There is a message event that can be fired every time the client code sees that it has internet, which looks like this:
if (this.$online && this.isIOSDevice) {
  if (window.MessageChannel) {
    var messageChannel = new MessageChannel();
    messageChannel.port1.onmessage = (event) => {
      this.onMessageSuccess(event);
    };
  } else {
    navigator.serviceWorker.onmessage = (event) => {
      this.onMessageSuccess(event);
    };
  }
  navigator.serviceWorker.ready.then((reg) => {
    try {
      reg.active.postMessage(
        {
          text: 'sync',
          port: messageChannel && messageChannel.port2,
        },
        [messageChannel && messageChannel.port2]
      );
    } catch (e) {
      // firefox support
      reg.active.postMessage({
        text: 'sync',
      });
    }
  });
}
This sits inside a Vue.js watch function, which observes whether we have a connection; if we do, it also checks whether this is a device from the Apple ecosystem, like so:
isIosDevice() {
  return !!navigator.platform && /iPad|iPhone|MacIntel|iPod/.test(navigator.platform) && /^((?!chrome|android).)*safari/i.test(navigator.userAgent);
}
This tells the service worker that it has internet and has to sync, in which case this bit of code gets activated:
this.onmessage = (event) => {
  if (event.data.text === 'sync') {
    event.waitUntil(
      syncData().then((res) => {
        if (res !== 'empty') {
          if (event.source) {
            // Tell the client code to show a notification (I have a built-in
            // notification system in the app that does not use push
            // notifications; it just shows a little pill at the bottom of the
            // app with the message).
            event.source.postMessage('doNotification');
          } else if (event.data.port) {
            event.data.port.postMessage('doNotification'); // same thing
          }
          return res;
        }
      })
    );
  }
};
Now for the most useful part, in my opinion: the replayQueue function. It gets the queue, and the token from getAccessToken, and then it does its thing like clockwork:
const replayQueue = async (queue, token) => {
  let entry;
  while ((entry = await queue.shiftRequest())) { // while we have requests to replay
    let data = await entry.request.clone().json();
    try {
      // replay the person registrations first and store them into IndexedDB
      if (isPersonRequest) {
        // if new person
        await fetchPerson(entry, data, token);
        // then replay the campaign and task submissions
      } else if (isTaskOrCampaignRequest) {
        // if task
        await fetchCampaigns(entry, data, token);
      }
    } catch (error) {
      showNotification({ title: 'no success', body: 'go for better internet plox' });
      await queue.unshiftRequest(entry); // put the failed request back into the queue, and try again later
    }
  }
  return Promise.resolve();
};
Now, this is the big picture of how to use this on iOS devices and make Apple mad as heck :) I am open to any related questions; by this time I think I have become pretty good with service-worker-related stuff, as this was not the only difficult part of this project, but I digress, that's a story for another day.
(You may notice that the error handling is not perfect and that this may not be the most secure setup of them all, but this project has a pretty small, fixed number of users who know how to use it and what it does, so I'm not really worried about security in this case; you may want to improve on things if you use this in a more serious project.)
Hope I could help, and have a great day, all.

Getting WebRTC working (how to debug ICE Fail?)

I can't figure out how to debug WebRTC. I keep getting 'ICE Failed' errors, but I doubt that's the issue. Here's my code: https://github.com/wamoyo/webrtc-cafe/tree/master/2.1%20Establishing%20a%20Connection%20%28within%20a%20Local%20Area%20Network%29
I'm using node.js/express/socket.io for setting up rooms and connecting peers, and then some default public servers for signalling.
The strange thing is, it appears that I have the remoteStream on the client.
Here are the two errors I'm getting (by the way, for now I'm just trying to connect from my phone to my laptop, or between two browser tabs, all within a LAN):
HTTP "Content-Type" of "text/html" is not supported. Load of media resource http://192.168.1.2:3000/%5Bobject%20MediaStream%5D failed.
ICE failed, see about:webrtc for more details
Any help would rock!
I've made a few comments already, but I think it's also worthwhile to write an answer.
There are 3 big things I see after my first quick read of your code. I haven't tried to actually run or debug your code beyond a superficial reading.
First, you should set the remoteVideo.src URL parameter in the same way as you do the local video stream:
pc.onaddstream = function(media) { // This function runs when the remote stream is added.
  console.log(media);
  remoteVideo.src = window.URL.createObjectURL(media.stream);
};
Second, you should pass a constraints object to the createOffer() and createAnswer() methods of RTCPeerConnection. The constraints should/could look like this:
var constraints = {
  mandatory: {
    OfferToReceiveAudio: true,
    OfferToReceiveVideo: true
  }
};
And you pass this after the success and error callback arguments:
pc.createOffer(..., ..., constraints);
and:
pc.createAnswer(..., ..., constraints);
Lastly, you are not exchanging ICE candidates between your peers. ICE candidates can be part of the offer/answer SDP, but not always. To ensure that you send all of them, you should implement an onicecandidate handler on the RTCPeerConnection:
pc.onicecandidate = function (event) {
  if (event.candidate) {
    socket.emit("ice candidate", event.candidate);
  }
};
You will have to implement "ice candidate" message relaying between clients in your server.js
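A minimal sketch of that relay in server.js, assuming socket.io and a room-per-call setup (the event and room names are illustrative):
// io is the socket.io server instance, e.g. require('socket.io')(httpServer).
io.on('connection', (socket) => {
  socket.on('join', (room) => socket.join(room));

  // Relay each ICE candidate to the other peer(s) in the same room.
  socket.on('ice candidate', (candidate, room) => {
    socket.to(room).emit('ice candidate', candidate);
  });
});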
Hope this helps, and good luck!

How to create data channel in WebRTC peer connection?

I'm trying to learn how to create an RTCPeerConnection so that I can use the DataChannel API. Here's what I have tried from what I understood:
var client = new mozRTCPeerConnection;
var server = new mozRTCPeerConnection;

client.createOffer(function (description) {
  client.setLocalDescription(description);
  server.setRemoteDescription(description);
  server.createAnswer(function (description) {
    server.setLocalDescription(description);
    client.setRemoteDescription(description);

    var clientChannel = client.createDataChannel("chat");
    var serverChannel = server.createDataChannel("chat");
    clientChannel.onmessage = serverChannel.onmessage = onmessage;

    clientChannel.send("Hello Server!");
    serverChannel.send("Hello Client!");

    function onmessage(event) {
      alert(event.data);
    }
  });
});
I'm not sure what's going wrong, but I'm assuming that the connection is never established because no messages are being displayed.
Where do I learn more about this? I've already read the Getting Started with WebRTC - HTML5 Rocks tutorial.
I finally got it to work after sifting through a lot of articles: http://jsfiddle.net/LcQzV/
First we create the peer connections:
var media = {};
media.fake = media.audio = true;

var client = new mozRTCPeerConnection;
var server = new mozRTCPeerConnection;
When the client connects to the server it must open a data channel:
client.onconnection = function () {
  var channel = client.createDataChannel("chat", {});
  channel.onmessage = function (event) {
    alert("Server: " + event.data);
  };
  channel.onopen = function () {
    channel.send("Hello Server!");
  };
};
When the client creates a data channel the server may respond:
server.ondatachannel = function (channel) {
  channel.onmessage = function (event) {
    alert("Client: " + event.data);
  };
  channel.onopen = function () {
    channel.send("Hello Client!");
  };
};
We need to add a fake audio stream to the client and the server to establish a connection:
navigator.mozGetUserMedia(media, callback, errback);

function callback(fakeAudio) {
  server.addStream(fakeAudio);
  client.addStream(fakeAudio);
  client.createOffer(offer);
}

function errback(error) {
  alert(error);
}
The client creates an offer:
function offer(description) {
  client.setLocalDescription(description, function () {
    server.setRemoteDescription(description, function () {
      server.createAnswer(answer);
    });
  });
}
The server accepts the offer and establishes a connection:
function answer(description) {
  server.setLocalDescription(description, function () {
    client.setRemoteDescription(description, function () {
      var port1 = Date.now();
      var port2 = port1 + 1;
      client.connectDataConnection(port1, port2);
      server.connectDataConnection(port2, port1);
    });
  });
}
Phew. That took a while to understand.
I've posted a gist that shows setting up a data connection, compatible with both Chrome and Firefox.
The main difference is that in FF you have to wait until the connection is set up, while in Chrome it's just the opposite: it seems you need to create the data channel before any offers are sent back and forth:
var pc1 = new RTCPeerConnection(cfg, con);
if (!pc1.connectDataConnection) setupDC1(); // Chrome...Firefox defers per other answer
The other difference is that Chrome passes an event object to .ondatachannel whereas FF passes just a raw channel:
pc2.ondatachannel = function (e) {
  var datachannel = e.channel || e;
  // ...
};
Note that you currently need Chrome Nightly started with --enable-data-channels for it to work as well.
Here is a sequence of events I have working today (Feb 2014) in Chrome. This is for a simplified case where peer 1 will stream video to peer 2.
1. Set up some way for the peers to exchange messages. (The variance in how people accomplish this is what makes different WebRTC code samples so incommensurable, sadly. But mentally, and in your code organization, try to separate this logic out from the rest.)
2. On each side, set up message handlers for the important signalling messages. You can set them up and leave them up. There are 3 core messages to handle & send:
  an ice candidate sent from the other side ==> call addIceCandidate with it
  an offer message ==> setRemoteDescription with it, then make an answer & send it
  an answer message ==> setRemoteDescription with it
3. On each side, create a new peerconnection object and attach event handlers to it for important events: onicecandidate, onremovestream, onaddstream, etc.
  ice candidate ==> send it to the other side
  stream added ==> attach it to a video element so you can see it
4. When both peers are present and all the handlers are in place, peer 1 gets a trigger message of some kind to start video capture (using the getUserMedia call).
5. Once getUserMedia succeeds, we have a stream. Call addStream on peer 1's peer connection object.
6. Then -- and only then -- peer 1 makes an offer.
7. Due to the handlers we set up in step 2, peer 2 gets this and sends an answer.
8. Concurrently with this (and somewhat obscurely), the peer connection object starts producing ice candidates. They get sent back and forth between the two peers and handled (steps 2 & 3 above).
9. Streaming starts by itself, opaquely, as a result of 2 conditions:
  offer/answer exchange
  ice candidates received, exchanged, and added
I haven't found a way to add video after step 9. When I want to change something, I go back to step 3.
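Steps 2 and 3 above, condensed into a sketch (assuming pc is the RTCPeerConnection and send posts a message to the other peer; the message shape is illustrative):
// The three core signalling messages from step 2, in one handler.
async function onSignalingMessage({ sdp, candidate }) {
  if (candidate) {
    await pc.addIceCandidate(candidate); // ice candidate from the other side
  } else if (sdp && sdp.type === 'offer') {
    await pc.setRemoteDescription(sdp); // offer: set remote, then answer back
    await pc.setLocalDescription(await pc.createAnswer());
    send({ sdp: pc.localDescription });
  } else if (sdp && sdp.type === 'answer') {
    await pc.setRemoteDescription(sdp); // answer: just set remote
  }
}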
