In about:webrtc in Firefox, my ICE state is always "in progress" until it fails.
If I use the website locally it works, but if a friend tries to call me it doesn't: the remote stream starts but it's blank.
My STUN/TURN server configuration:
{ "iceserver":{url:'stun:stun01.sipphone.com'},
{url:'stun:stun.ekiga.net'},
{url:'stun:stun.fwdnet.net'},
{url:'stun:stun.ideasip.com'},
{url:'stun:stun.iptel.org'},
{url:'stun:stun.rixtelecom.se'},
{url:'stun:stun.schlund.de'},
{url:'stun:stun.l.google.com:19302'},
{url:'stun:stun1.l.google.com:19302'},
{url:'stun:stun2.l.google.com:19302'},
{url:'stun:stun3.l.google.com:19302'},
{url:'stun:stun4.l.google.com:19302'},
{url:'stun:stunserver.org'},
{url:'stun:stun.softjoys.com'},
{url:'stun:stun.voiparound.com'},
{url:'stun:stun.voipbuster.com'},
{url:'stun:stun.voipstunt.com'},
{url:'stun:stun.voxgratia.org'},
{url:'stun:stun.xten.com'},
And I'm using an AWS server for STUN and signaling.
Errors in about:webrtc:
INFO setting pair to state FAILED
ERR specified too many components
WARNING specified bogus candidate
ERR pairing local trickle ICE candidate srflx
Your setup seems to require TURN and you have not provided working TURN servers.
By listing that many STUN servers, you're asking many different people for an opinion about your public IP address. The answer won't change. Just use a single STUN server...
Using other people's TURN credentials is not something you should do without permission. If you test the credentials using http://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/ you will notice that you don't get relay candidates.
For turn:numb.viagenie.ca the credentials are incorrect and 192.158.29.39 doesn't seem to be running a TURN server anymore.
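For illustration only (not your actual setup): a minimal configuration with a single STUN server plus a TURN server that you run or rent yourself would look roughly like this; the TURN hostname and credentials below are placeholders.
// Sketch: one STUN server plus your own TURN server (placeholder values).
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    {
      urls: 'turn:turn.example.com:3478',   // placeholder: a TURN server you control
      username: 'user',                     // credentials issued by that server
      credential: 'secret'
    }
  ]
});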
I have 2 devices (desktop PCs), each running a browser tab that instantiates an IPFS node using js-ipfs.
//file index.html, served over HTTPS
const node = await IpfsCore.create(); // IPFS node running in the browser
Both nodes have peers (calling node.swarm.addrs() returns about 50 peers). They do not list each other as peers.
I want to connect those two nodes to each other, so if I call node.add( ... ) on the first, I can then call node.cat( ... ) on the second, and acquire a file from the first. (Or so that browser 1's pubsub broadcasts always reach browser 2; browser 1 can read browser 2's wantlist etc.)
How do I connect these 2 browser tabs as peers?
The example at https://github.com/ipfs/js-ipfs/blob/master/packages/interface-ipfs-core/src/swarm/connect.js uses this command:
ipfsA.swarm.connect( ipfsBId.addresses[0] )
But in my case, both my browser-tabs have no addresses.
console.log( ( await node.id() ).addresses ); //[] empty array
I don't know how the browser tabs manage connecting to other peers without their own addresses, and I don't know how to make them connect to each other.
There is a 4-year-old question about browser peers here, IPFS - pubsub connect to peers from browser, but it's fairly unrelated and seems to rely on the deprecated / out-of-date webrtc-star https://github.com/libp2p/js-libp2p-webrtc-star
I know if I were setting up a WebRTC connection I would use fetch() or XHR or a websocket to a public-facing server (with a DNS record or IP address) to exchange negotiation info while querying a list of iceServers (also with DNS records / IP addresses).
I don't want to rely on a list of servers I own or configure, and I don't want to burden any public example TURN servers or anything. js-libp2p might use multicastDNS? I don't think browser tabs can broadcast signals though (I could be wrong? Maybe fetch() can do that somehow with some sneaky url stuff?)
What do I do? How can these two browser-tab IPFS peers discover each other, specifically?
I suspect this has a very straightforward answer, but I have been researching for days now and have read hundreds of pages, and none of the documentation I have found is relevant. Wherever the answer is on this vast Internet, I have not been able to find it.
I don't know if this example is correct.
The process: call new RTCPeerConnection(), then createOffer(), then setLocalDescription().
Then I wait for onicecandidate, take what it gives, and send first the offer and then the ICE candidates through the signaling server to the other peer.
The other peer passes the received offer to setRemoteDescription(offer) and the received ICE candidates to addIceCandidate(icecandidate), then calls createAnswer(). This produces an answer to put in setLocalDescription(answer), which triggers onicecandidate again; those ICE candidates are sent back to the first peer together with the answer.
The first peer passes the answer to setRemoteDescription(answer) and the received ICE candidates to addIceCandidate(icecandidate).
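Roughly in code, the flow I mean is this (assume it runs inside async functions; signaling is just a stand-in for whatever sends and receives messages through my signaling server):
// Peer A (caller)
const pcA = new RTCPeerConnection();   // no ICE servers configured here - that is part of my question
pcA.onicecandidate = e => { if (e.candidate) signaling.send({candidate: e.candidate}); };
const offer = await pcA.createOffer();
await pcA.setLocalDescription(offer);
signaling.send({offer});

// Peer B (callee), on receiving {offer}:
const pcB = new RTCPeerConnection();
pcB.onicecandidate = e => { if (e.candidate) signaling.send({candidate: e.candidate}); };
await pcB.setRemoteDescription(offer);
const answer = await pcB.createAnswer();
await pcB.setLocalDescription(answer);
signaling.send({answer});
// ...and on each received {candidate}: await pcB.addIceCandidate(candidate);

// Peer A, on receiving {answer}: await pcA.setRemoteDescription(answer);
// ...and on each received {candidate}: await pcA.addIceCandidate(candidate);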
I think in this example the connection will work when testing inside a local network, but what if it doesn't because it isn't a local network? At what step in this example will the API contact the STUN server, and what other functions do I need to call if it does?
I've found that one way to get BIND requests sent to the STUN server right away is to set the iceCandidatePoolSize option in the configuration to a value > 0.
config = {iceServers: [{urls: 'stun:stunserver.stunprotocol.org'}], iceCandidatePoolSize: 1};
peerConnection = new RTCPeerConnection(config); // pretty much starts resolving the DNS name and sending BIND requests right away.
Hope this helps.
Also: this link is chock-full of great suggestions for troubleshooting WebRTC connections.
You need to specify a STUN server in the peer connection's configuration. E.g.:
pc = new RTCPeerConnection({iceServers: [{urls: "stun:stun.l.google.com:19302"}]});
There are no other methods to call, provided it works on a LAN already. You should see additional calls to onicecandidate from this, compared to before. That's it.
Note that a couple of the things you describe happen in parallel, but in short, what triggers the browser to connect to the STUN server is setLocalDescription. It causes the browser's built-in ICE agent to kick off its candidate gathering process for this connection, and STUN is part of that.
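A quick way to confirm STUN is actually being used (a sketch, not strictly necessary) is to log the gathered candidates and look for ones of type srflx, the server-reflexive candidates obtained via STUN:
pc.onicecandidate = e => {
  if (!e.candidate) return;                      // null marks the end of gathering
  console.log(e.candidate.candidate);            // raw candidate line
  if (e.candidate.candidate.includes('typ srflx')) {
    console.log('Server-reflexive candidate obtained via STUN');
  }
};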
I have two peers that want to connect to each other via WebRTC. Typically the first peer would create an offer and send it to the second via a signalling channel/server, the second peer would respond with an answer. This scenario works fine.
However, is it possible to support the case where both peers happen to try to connect to each other simultaneously, both sending SDP offers to one another concurrently via the signalling server?
// Both peers do this simultaneously:
const conn = new RTCPeerConnection(null);
const sdpOffer = await conn.createOffer();
await conn.setLocalDescription(sdpOffer);
signalingService.send(peerId, sdpOffer);
// At some point in the future both peers also receive an SDP offer
// (rather than answer) from the other peer whom they sent an offer to
// via the signaling service. If this was an answer we'd call
// RTCPeerConnection.setRemoteDescription, however this doesn't work for an
// offer:
conn.setRemoteDescription(peerSDPOffer);
// In Chrome results in "DOMException: Failed to execute 'setRemoteDescription' on 'RTCPeerConnection': Failed to set remote offer sdp: Called in wrong state: kHaveLocalOffer"
I even tried to "convert" the received peer offers into answers by rewriting the SDP type from offer to answer and setup:actpass to setup:active, but that doesn't seem to work; instead I just get a new exception.
So the question is: is this simultaneous connect/offer use case supported in some fashion, or should I close one side's RTCPeerConnection and instantiate a new one, using RTCPeerConnection.createAnswer this time?
This situation is known as "signaling glare". The WebRTC API does not really define how to handle this (except for something called "rollback", but it is not implemented in any browser yet and nobody has missed it so far), so you have to avoid this situation yourself.
Simply replacing the a=setup attribute won't work, since the underlying DTLS machinery still needs the concept of a client and a server.
The answer for how to avoid glare these days is to use the Perfect Negotiation Pattern: https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API/Perfect_negotiation
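For reference, here is a condensed sketch of that pattern adapted from the MDN page; signaler is a placeholder for your signaling channel, polite is decided out of band, and config is whatever ICE configuration you use:
const pc = new RTCPeerConnection(config);
let makingOffer = false;

pc.onnegotiationneeded = async () => {
  try {
    makingOffer = true;
    await pc.setLocalDescription();                    // implicit offer
    signaler.send({ description: pc.localDescription });
  } finally {
    makingOffer = false;
  }
};

signaler.onmessage = async ({ description, candidate }) => {
  if (description) {
    const collision = description.type === 'offer' &&
                      (makingOffer || pc.signalingState !== 'stable');
    if (!polite && collision) return;                  // impolite peer ignores the glare
    await pc.setRemoteDescription(description);        // polite peer rolls back implicitly
    if (description.type === 'offer') {
      await pc.setLocalDescription();                  // implicit answer
      signaler.send({ description: pc.localDescription });
    }
  } else if (candidate) {
    await pc.addIceCandidate(candidate);
  }
};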
However, what OP described does work with the slight modification of setting setup:active on one peer and setup:passive on the other:
https://codepen.io/evan-brass/pen/mdpgWGG?editors=0010
It might not work for audio / video connections (because those may require negotiating codecs), but I've tested it on Chrome / Firefox / Safari for DataChannel only connections.
You could choose which peer is active and which is passive using whatever system you use to determine 'politeness' in perfect negotiation. One possibility would be to compare the DTLS fingerprints and make whichever one is larger the active peer, as sketched below.
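A rough illustration of that tie-break (localOffer and remoteOffer stand for the two offers each side ends up holding):
// Pick the DTLS-active peer by comparing the a=fingerprint lines of both offers.
function fingerprintOf(sdp) {
  const match = sdp.match(/^a=fingerprint:\S+\s+(\S+)/m);
  return match ? match[1] : '';
}

// Both sides compute the same result once the offers have been exchanged;
// the "larger" side takes the setup:active role, the other setup:passive.
const iAmActive = fingerprintOf(localOffer.sdp) > fingerprintOf(remoteOffer.sdp);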
I'm following the guide on the Licode page.
I have installed everything on Ubuntu 14.04.
I have configured SSL for Licode and the Erizo controller in the licode_config.js file to make the example work. Every other configuration I left untouched.
I have run the basic example, but I cannot make a video conference.
Tracing the Google Chrome console log, I caught:
WARNING: Publishing Stream 665544631310986500 has failed after successful ICE checks
DEBUG: Event: stream-failed
Stream Failed, act accordingly
DEBUG: Received a removeStream for 665544631310986500 and it has not been registered here, ignoring.
INFO: Stream unpublished
It looks like I have to configure STUN or something in the Licode configuration to make it work.
Got to say "Thank you!"; it worked for me after setting the following in licode_config.js.
Set the range of ports to be used by libnice:
config.erizo.minport=30000
config.erizo.maxport=31000
Set the server's public IP:
config.erizoController.publicIP=serverPublicIP
config.erizoAgent.publicIP=serverPublicIP
Change the default STUN server, since the Google STUN servers are blocked in countries like North Korea, Iran, etc.
My Licode runs in Docker, with the 30000-31000 port range mapped from the server to the Docker container, so I have to make sure the libnice ports fall into that range.
After reading several articles on the Licode website and in their community, I found out that the issue was simply that my server is an Azure VPS, not a local computer. It has a public IP and a private IP, so I had to set config.erizoController.publicIP and config.erizoAgent.publicIP to the public IP of the server.
Also, Azure VPSes close all ports by default (except the ones I had already opened). Because of that, I had to open a suitable port range and set config.erizo.minport and config.erizo.maxport in the licode_config.js file. The port range I used: 30000-31000.
The valuable reference: http://discourse.lynckia.com/t/running-licode-in-azure/29
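Putting the settings from both answers together, the relevant part of licode_config.js ends up looking roughly like this (serverPublicIP is a placeholder for the VPS's public address):
// licode_config.js (excerpt) consolidating the settings mentioned above
config.erizo.minport = 30000;                        // libnice port range start
config.erizo.maxport = 31000;                        // libnice port range end
config.erizoController.publicIP = 'serverPublicIP';  // public IP of the Azure VPS
config.erizoAgent.publicIP = 'serverPublicIP';
// The default (Google) STUN server can also be swapped out here if it is
// blocked in your region, as noted in the first answer.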
According to the presence documentation, PubNub will fire the Timeout presence event after 5 minutes of not receiving a heartbeat.
I need to modify this value but I cannot find a way of doing this with the Python SDK. Here is a link to the Pubnub docs showing how to do it with JavaScript: http://www.pubnub.com/docs/web-javascript/presence#optimizing_timeout_events
Does anyone know how to achieve this using the Python SDK?
Thanks a lot.
Edit: My clients are not JavaScript clients. They are Python console applications.
Heartbeat can be monkey-patched into the Pubnub class with something like this:
from pubnub import Pubnub

class PubnubHeartbeat(Pubnub):
    def __init__(self, heartbeat=300, **kwargs):
        self.heartbeat = heartbeat
        super(PubnubHeartbeat, self).__init__(**kwargs)

    def getUrl(self, request):
        # Inject the heartbeat query parameter into subscribe requests
        if "subscribe" in request['urlcomponents'][:2]:
            if "urlparams" not in request:
                request['urlparams'] = {}
            request['urlparams']['heartbeat'] = self.heartbeat
        return super(PubnubHeartbeat, self).getUrl(request)

p = PubnubHeartbeat(
    subscribe_key="demo",
    publish_key="demo",
    heartbeat=60
)

def recv(msg):
    print(msg)

p.subscribe(channels="heartbeat_test", callback=recv)
This isn't recommended for long-term production code (unless perhaps you are pinning your Pubnub dependency with pubnub==3.7.3 during install). The example subclass uses an undocumented method to inject the heartbeat URL parameter. (See Craig Conover's answer for a description of what that does.)
PubNub Python SDK Presence
Because Python is rarely used as a client, the PubNub Python SDK's presence API is not as robustly implemented as in the traditional client SDKs (JavaScript, etc.). So there is no heartbeat parameter in the Pubnub initializer, nor is there a setter or attribute for it, so you are forced to stick with the default 5-minute heartbeat setting.
However, with the PubNub JavaScript SDK, when you init PUBNUB with a custom heartbeat (60 seconds for example), the heartbeat key/value is just passed along as a query param in the REST URL:
http://pubsub.pubnub.com/subscribe/demo/my_channel/0/14411482999795083?uuid=12345&pnsdk=PubNub-JS-Web%2F3.7.14&heartbeat=60
So if you really wanted to, you could just subscribe using REST calls and pass the heartbeat in that way.
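As a sketch only (shown here in JavaScript; the same GET request can be made from a Python console app with any HTTP client), the heartbeat is just a query parameter on the subscribe call, following the URL format shown above:
// Hypothetical direct REST subscribe with a 60-second heartbeat.
const url = 'https://pubsub.pubnub.com/subscribe/demo/my_channel/0/0' +
            '?uuid=12345&heartbeat=60';
const res = await fetch(url);
console.log(await res.json());   // messages plus the next timetoken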
What I forgot to mention when I first posted this answer is that your client is responsible for pinging the PubNub server at least once every 60 seconds, preferably on a 30-second interval, to stay within the 60-second heartbeat window the server is configured with for this client.
With the PubNub SDK, this is done in a separate thread over the same connection (sort of - at least in a way that the server knows that it is the same client that set the heartbeat).
That said, we are getting into a less trivial solution using REST, at which point why even use the SDK? It would be easier for us to update the Python SDK than for you to do all the dirty work. We will do just that, though not in the short term; hopefully with the next minor release of the Python SDK.
Based on our off-SO conversation, you just want to shorten the window of time during which a client appears to be online when it is in fact not connected and was unable to explicitly unsubscribe before the connection was closed (the terminal was closed instead of "logging off" through your app's UI or command line).
What you can do is implement a ping/ack handshake protocol. This is very high level, so there may be some finer points to fill in, but it should convey the general concept.
Before one client (the sender) engages in communication with another (the receiver), it sends a ping message to the other client on that client's private channel (every client subscribes to a channel unique to itself: for example, private_client001, private_client002, etc.).
The receiving client auto-acks back on the sender's unique channel (which is included in the ping message payload).
If the sender of the ping doesn't get an ack back within a second (or whatever time tolerance works for you), assume the receiver is not online.
When the receiver comes back online, it gets the missed messages; for any pings less than 5 minutes old, it can ack back and see whether the sender still wants to engage.
This is a common issue for many use cases (especially chat) because there is always a window of time (the heartbeat window) during which a client may really be offline but appear online, because it did not leave in the proper, predictable fashion that would have produced an explicit unsubscribe and a resulting leave event. So implementing this sort of pre-connect handshake protocol is good practice; a sketch of the idea follows.
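A JavaScript-flavored sketch of that handshake; publish(channel, msg) and subscribe(channel, handler) are hypothetical stand-ins for whichever PubNub SDK calls your clients actually use:
const MY_CHANNEL = 'private_client001';   // this client's unique private channel
const ACK_TIMEOUT_MS = 1000;              // tolerance before assuming "offline"
const pendingAcks = new Map();            // receiver channel -> resolve callback

subscribe(MY_CHANNEL, msg => {
  if (msg.type === 'ping') {
    // Auto-ack on the sender's unique channel, carried in the ping payload.
    publish(msg.replyTo, { type: 'ack', from: MY_CHANNEL });
  } else if (msg.type === 'ack' && pendingAcks.has(msg.from)) {
    pendingAcks.get(msg.from)(true);      // receiver answered in time
    pendingAcks.delete(msg.from);
  }
});

// Before engaging with a receiver, ping it and wait briefly for the ack.
function checkOnline(receiverChannel) {
  return new Promise(resolve => {
    pendingAcks.set(receiverChannel, resolve);
    publish(receiverChannel, { type: 'ping', replyTo: MY_CHANNEL });
    setTimeout(() => {
      if (pendingAcks.has(receiverChannel)) {
        pendingAcks.delete(receiverChannel);
        resolve(false);                   // no ack in time: treat receiver as offline
      }
    }, ACK_TIMEOUT_MS);
  });
}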