I am working with ProjectRTC and have successfully tested it with Firefox and the Android client. I put the server code on a remote server (DigitalOcean), accessed from my home PC.
However, everything works fine until I test it with a home (or slower) ADSL connection for the PC with Firefox and a 3G/4G network for the Android client.
If I use a 3G/4G network for my PC through my mobile (using the hotspot option), it tries to connect to the client but I get the error "ICE failed" in the JavaScript console.
I tried to add a TURN server in public/javascripts/rtcClient.js by adding this:
var localId,
    config = {
        peerConnectionConfig: {
            iceServers: [
                /* test */
                {
                    "username": "e7db750a-2fcc-40c6-8415-cab22743a68a",
                    "url": "turn:turn1.xirsys.com:443?transport=tcp",
                    "credential": "287ae254-9380-4f81-af88-e1cc9ed27eb0"
                },
                {
                    "username": "e7db750a-2fcc-40c6-8415-cab22743a68a",
                    "url": "turn:turn1.xirsys.com:443?transport=udp",
                    "credential": "287ae254-9380-4f81-af88-e1cc9ed27eb0"
                },
                /* end test */
                {
                    "url": "stun:stun.l.google.com:19305"
                }
            ]
        },
        peerConnectionConstraints: {
            optional: [{
                "DtlsSrtpKeyAgreement": true
            }]
        }
    },
    peerDatabase = {},
    localStream,
    remoteVideoContainer = document.getElementById('remoteVideosContainer'),
    socket = io();

socket.on('message', handleMessage);
socket.on('id', function(id) {
    localId = id;
});
but I still had no luck, again getting "ICE failed".
I also tried reading this, but I don't think it's what I'm looking for.
Do you have any idea how to get this to work with mobile connections?
Thanks in advance for your interest!
Don't hardcode your ICE string. The XirSys ICE string is time-sensitive and will expire after 30 seconds, so you need to request a fresh ICE string for each connection. This may or may not fix your issue, but it will at least rule out the ICE string as the problem.
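For example, a minimal sketch of fetching the server list right before creating the peer connection. The endpoint path, channel name, ident/secret placeholders, and the response shape ({ v: { iceServers: [...] } }) are assumptions based on the Xirsys TURN REST API; check your Xirsys dashboard for the exact values:
// Hypothetical helper: request a fresh ICE server list from Xirsys
// instead of hardcoding credentials that expire after ~30 seconds.
function getIceServers(callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('PUT', 'https://global.xirsys.net/_turn/your-channel');
    xhr.setRequestHeader('Authorization',
        'Basic ' + btoa('your-ident:your-secret'));
    xhr.onload = function() {
        var body = JSON.parse(xhr.responseText);
        callback(body.v.iceServers);
    };
    xhr.send();
}

// Only build the connection once the fresh credentials have arrived.
getIceServers(function(iceServers) {
    config.peerConnectionConfig.iceServers = iceServers;
    // ...create the RTCPeerConnection with config here...
});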
Regards,
Lee
I have an Electron application that performs facial recognition to decide whether or not people can enter a venue, and for that I'm using Amazon Rekognition.
Everything was working fine (for a few months) until, two days ago, a customer reported that the app was behaving strangely, as if it wasn't responding to facial recognition requests.
After several tests, I discovered that what is happening is a timeout error, which occurs in all API calls, whether they are searching for faces (SearchFacesByImage) or registering new faces (IndexFaces).
The error says:
{
    "message": "connect ETIMEDOUT 3.226.60.54:443",
    "errno": -4039,
    "code": "TimeoutError",
    "syscall": "connect",
    "address": "3.226.60.54",
    "port": 443,
    "time": "2022-12-14T13:50:10.909Z",
    "region": "us-east-1",
    "hostname": "rekognition.us-east-1.amazonaws.com",
    "retryable": true
}
What intrigued me was that everything was working fine until this behavior just started happening (and I didn't make any code changes/updates to the app running on my client's computer).
What makes me even more intrigued is that this behavior occurs completely randomly and only on that particular client's machine. Sometimes the API calls work correctly (returning whether the person was recognized or not), but most of the time the calls take about 90 seconds to return the timeout error. When executing the same code on my machine (same methods and same CollectionId), everything runs normally and there is no timeout error at any point, while at the exact same moment on my client's machine the behavior continues.
I was using aws-sdk and then switched to @aws-sdk/client-rekognition (thinking that could solve the problem), but the code only worked for a few of the first calls to the API, and a few minutes later it got the timeout errors again.
The code I'm using to configure and make calls to Rekognition is basically this:
const { RekognitionClient, IndexFacesCommand, SearchFacesByImageCommand } = require('@aws-sdk/client-rekognition')

const rekognitionClient = new RekognitionClient({
    credentials: {
        accessKeyId: 'accessKeyId',
        secretAccessKey: 'secretAccessKey'
    },
    region: 'us-east-1'
})

const registerFaceOnRekognition = async (bytes, userId) => {
    const params = {
        CollectionId: 'collectionId',
        Image: { Bytes: bytes },
        ExternalImageId: userId,
        MaxFaces: 1,
        QualityFilter: 'HIGH'
    }

    const command = new IndexFacesCommand(params)

    try {
        const { FaceRecords } = await rekognitionClient.send(command)

        if (!FaceRecords.length) {
            console.log('No faces detected.')
            return
        }

        console.log('Face created:')
        console.log(FaceRecords[0].Face.FaceId)
    } catch (error) {
        console.error(error) // timeout error
    }
}

const searchFaceByImageOnRekognition = async (bytes) => {
    const params = {
        CollectionId: 'collectionId',
        Image: { Bytes: bytes },
        MaxFaces: 1,
        FaceMatchThreshold: 99,
        QualityFilter: 'HIGH'
    }

    const command = new SearchFacesByImageCommand(params)

    try {
        const { FaceMatches } = await rekognitionClient.send(command)

        if (!FaceMatches.length) {
            console.log('This face has not been registered yet')
            return
        }

        console.log('Face found:')
        console.log(FaceMatches[0].Face.ExternalImageId)
    } catch (error) {
        console.error(error) // timeout error
    }
}

// Method called through the renderer process, which has a canvas where the webcam view is rendered
const onTakePicture = (event, data) => {
    const bytes = Buffer.from(data.dataURL.replace('data:image/jpeg;base64,', ''), 'base64')

    // If there is a userId, register the face in the image
    if (data.userId) {
        registerFaceOnRekognition(bytes, data.userId)
        return
    }

    // Else, search for the face in the image
    searchFaceByImageOnRekognition(bytes)
}
Just to reiterate: during all tests on my client's computer, the internet connection was stable and working properly.
What is the best way to investigate and resolve this issue?
UPDATE:
I enabled Rekognition debug logs and they can be found at: https://gist.github.com/IgorSamer/4e58e09f3fa615401f85ca325b794245
In them, the first three requests (2022-12-16T13:48:45.932Z, 2022-12-16T13:53:20.325Z and 2022-12-16T14:19:12.479Z) complete normally. However, all subsequent requests start giving the timeout error, where, in fact, no data is returned after the [DEBUG] App: endpoints Resolved endpoint: step.
As previously mentioned, the internet connection is working fine. I was also able to reproduce the error via remote access, that is, the machine's internet connection was fine at the time of the error.
Is it possible that my client's firewall/network is blocking requests sent by the SDK after a few successful ones? If so, what is the best way to investigate this?
Exploration
This is what I would do initially to gather some info:
Verify if this is happening ALL the time with that specific client.
Verify if this is happening ONLY with one client, or with more.
Verify if this is happening in one or multiple regions (e.g. us-east-1).
Verify if Amazon Rekognition has had/or has issues in the affected region during the time window of interest.
Check Rekognition's status in the Health dashboard in your AWS console: link
Use the AWS Rekognition Guidelines and Quotas as a reference to determine if your app/service usage of Rekognition is under the set limits.
Note there's a limit on TPS per resource (e.g. SearchFacesByImage, IndexFaces) per account.
Possible approaches
Verify if there was a change in the client's network/firewall. Just ask.
Replicate your app's API call with the AWS CLI and study the logs.
Access your client's device remotely.
Set up temporary AWS credentials (remember to remove access after the test).
Send an API call to the Rekognition endpoint. Note that even a 4XX error would be good news, as you would at least be getting some response.
Set up proper logging for your app (as CloudWatch logs may not be enough to troubleshoot).
Check Splunk's APM and New Relic's APM.
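One more low-effort step worth adding to the list: configure shorter timeouts on the SDK client so failures surface in seconds instead of hanging for ~90 seconds. A minimal sketch for the v3 SDK; the 5-second values are arbitrary assumptions you should tune:
const { RekognitionClient } = require('@aws-sdk/client-rekognition')
const { NodeHttpHandler } = require('@aws-sdk/node-http-handler')

// Fail fast on connection problems; this makes the ETIMEDOUT pattern
// easier to observe and log, and lets the SDK's built-in retries kick in.
const rekognitionClient = new RekognitionClient({
    region: 'us-east-1',
    maxAttempts: 3,
    requestHandler: new NodeHttpHandler({
        connectionTimeout: 5000, // ms allowed to establish the TCP connection
        socketTimeout: 5000      // ms of socket inactivity before aborting
    })
})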
I hope this helps you at least build a troubleshooting strategy.
I create 60 client connections to a socket.io server in the Google Chrome browser.
At a specific time, the server sends a screenshot to the clients. Some of the WebSocket connections (the underlying transport of socket.io) break, so the connections in about 1-4 Chrome tabs get closed. I tried increasing pingTimeout; that helped only with the TCP "transport close" problem (which I also have), but it doesn't fix the screenshot-sending problem.
My guess is that Google Chrome can't support about 50-60 tabs at one time: CPU and RAM usage rise to their maximum values while screenshots are sent to the 60 clients (each client has two WebSocket connections, the first for simple messages, the second for graphics, i.e. the screenshots), so Chrome closes some of the WebSocket connections.
Part of the server socket.io code is here:
// server
this.http = this._createHttpServer(sslCert, sslKey);
this.io = socketIo(this.http, {
    'pingTimeout': 180000,
    'pingInterval': 60000
});

const jwtAuth = socketioJwt.authorize({
    secret: jwtSecret,
    timeout: 15000
});

this.io.on('connection', (socket) => {
    socket.once('authenticate', (data) => {
        socket.rawAuthData = data;
    });
    jwtAuth(socket);
});
// client
var connOptions = {
    "reconnectionAttempts": 2
};

var socket = io(options.url, connOptions);

socket.on('connect', function() {
    if (options.token) {
        socket.emit('authenticate', {token: options['token'], tag: tag});
        socket.on('authenticated', function() {
            ctx.printLog('Authorized. Waiting for handshake');
            socket.once('tunnel-handshake', function() {
                ctx.printLog('handshake received! connection is ready');
                processConnected();
            });
        }).on('unauthorized', function(msg) {
            ctx.printLog("Authorization failed: " + JSON.stringify(msg.data));
            eventHandlers.onerror({ code: ctx.ERROR_CODE.INVALID_TOKEN });
        });
    } else {
        processConnected();
    }
});

socket.on('reconnect_failed', eventHandlers.onerror.bind(this, {code: 1, reason: "Reconnection failed"}));
socket.on('disconnect', eventHandlers.onclose);
socket.on('error', eventHandlers.onerror);
Does anyone have any idea what the cause could be? Is there any solution to this problem? Is it a Google Chrome problem or a socket.io options problem?
Thanks
Changing socket.io to version 3.0 does not resolve the issue. socket.io v3.0 uses engine.io v4.0, and the following information from the engine.io v4.0 release notes (under the "Heartbeat mechanism reversal" heading) describes the problem:
We have received a lot of reports from users that experience random disconnects due to ping timeout, even though their Internet connection is up and the remote server is reachable. It should be noted that in that case the client reconnects right away, but still it was an annoying issue.
After analysis, it seems to be caused by delayed timers on the client-side. Those timers are used in the ping-pong mechanism which helps to ensure the connection between the server and the client is still healthy. A delay on the client-side meant the client sent the ping packet too late, and the server considered that the connection was closed.
That’s why the ping packets will now be sent by the server, and the client will respond with a pong packet.
However, increasing pingTimeout and pingInterval to the value 1073741823 does resolve the issue.
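For reference, a sketch of that workaround applied to the server setup from the question (1073741823 ms is roughly 12 days, so the heartbeat is effectively disabled):
// Effectively disable the ping-pong heartbeat by making both values huge.
// Note this masks the delayed-timer problem rather than fixing it: dead
// connections will no longer be detected via the heartbeat.
this.io = socketIo(this.http, {
    'pingTimeout': 1073741823,
    'pingInterval': 1073741823
});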
Overview
I am using the Node.js actions-on-google client library to build a smart home action for the Garage Door device type. This action is deployed as a Cloud Function in GCP. I can confirm that the following work perfectly so far:
Account linking with our OAuth flow
Responding to sync intents (i.e. onSync() in the client)
Responding to execute intents (i.e. onExecute() in the client)
Problem
Although the other callbacks (onSync() and onExecute()) work fine, we do not see any evidence of onQuery() being called, no matter what. There aren't any errors showing in Stackdriver, nor are there any logs being generated in Stackdriver under the "Google Actions" filter.
We expect onQuery() to run when we ask Google Assistant things like "is the garage door open?" and "is Matt's Door closed?"
We tried removing async to see if the call was hanging
We tried removing headers in the lambda
We tried removing all other code and redeploying to isolate onQuery()
Code
The following code shows the simple onQuery() callback. The onSync() and onExecute() code has been removed for clarity.
'use strict';

const functions = require('firebase-functions');
const {smarthome} = require('actions-on-google');

const app = smarthome({
    jwt: require('./XXXXXXXXX-XXXXXXXXXX.json'),
    debug: true,
});

//
// Note: Removed onSync() and onExecute() for clarity
//

app.onQuery(async (body, headers) => {
    // Expecting to see these logging statements in
    // Stackdriver like we do for onExecute() and
    // onSync() ... but nothing ever shows up.
    console.info('=== onQuery.body', body);
    console.info('=== onQuery.headers', headers);

    // We have hardcoded the following ID and a "closed"
    // state. It matches a valid device ID to the testing
    // account we are using.
    return {
        requestId: body.requestId,
        payload: {
            devices: {
                '2489e4a92799728292b8d5a8b1c9d177': {
                    on: true,
                    online: true,
                    openPercent: 0,
                }
            }
        }
    };
});

exports.smarthome = functions.https.onRequest(app);
We considered the possibility that the JSON returned in the call to onSync() might be missing a trait or something else that prevents it from responding to a query intent properly, but we have not been able to identify anything incorrect or missing. Here is the JSON payload returned from onSync():
{
    "requestId": "ff36a3cc-ec34-11e6-b1a0-64510650abcf",
    "payload": {
        "agentUserId": "1836.15267389",
        "devices": [{
            "id": "1234",
            "type": "action.devices.types.GARAGE",
            "traits": [
                "action.devices.traits.OpenClose"
            ],
            "name": {
                "defaultNames": ["Smart Garage Door"],
                "name": "Matt's Door",
                "nicknames": ["Matt's Door"]
            },
            "willReportState": true,
            "attributes": {
                "openDirection": [
                    "UP",
                    "DOWN"
                ]
            },
            "deviceInfo": {
                "manufacturer": "ABC Corp",
                "model": "test",
                "hwVersion": "1.0",
                "swVersion": "1.0"
            }
        }]
    }
}
Expected Result
We expect Google Assistant to respond with "Garage door is closed" or some other equivalent. Instead, we receive "Sorry, I can't reach Matt's Door right now. Please try again."
Answering my own question here in case anyone else has this trouble. The reason my action was handling the SYNC and EXECUTE intents but not the QUERY intent came down to the default/user names I assigned each device in the SYNC response.
Ultimately, I started using the following and my action began responding to QUERY intents as expected again:
...
name: {
    defaultNames: ['Garage Door'],
    name: door.name,
    nicknames: [door.name, 'Garage Door']
},
...
where door.name is a name which is set by the user and is returned by an API call.
I have a WebRTC multi-party app that works both on localhost and through an ngrok.io localhost tunnel. However, when I try to test it with my friend, who is connected through a router on their end, I can see an offer/answer exchange as well as an ICE candidate exchange, but no sound gets streamed through.
After first having this problem, I did some research and learned that you need a TURN server to get through a router's NAT. I'm using a public TURN server that I've confirmed works in https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/
var configuration = {
    "iceServers": [{ "url": "stun:stun2.1.google.com:19302" }],
    url: 'turn:192.158.29.39:3478?transport=udp',
    credential: 'JZEOEt2V3Qb0y27GRntt2u2PAYA=',
    username: '28224511:1379330808'
};

yourConn = new webkitRTCPeerConnection(configuration);
yourConn2 = new webkitRTCPeerConnection(configuration);
yourConn3 = new webkitRTCPeerConnection(configuration);
The sound packets should be routed through this TURN server and through my friend's NAT, but we still can't stream to each other.
Your TURN server credentials are taken from https://www.html5rocks.com/en/tutorials/webrtc/infrastructure/ and expired back in 2013. If you used https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/ it should have told you this doesn't work; I'd be rather surprised if it gave you relay candidates.
Run your own server.
You should also change your configuration so the TURN entry actually sits inside the iceServers array:
var configuration = {
    "iceServers": [
        { "url": "stun:stun2.1.google.com:19302" },
        {
            "url": "turn:192.158.29.39:3478?transport=udp",
            "credential": "yourpassword",
            "username": "yourusername"
        }
    ]
};
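If you do run your own TURN server (e.g. coturn with a shared auth secret), time-limited username/credential pairs like the expired ones above are typically generated server-side along these lines. This is a sketch of the TURN REST API convention; the exact username format (expiry timestamp plus user id) and the secret must match your TURN server's configuration:
// Generate short-lived TURN credentials: the username embeds an expiry
// timestamp and the credential is an HMAC-SHA1 of it, keyed with the
// same shared secret the TURN server is configured with (assumed here).
const crypto = require('crypto');

function makeTurnCredentials(sharedSecret, userId, ttlSeconds) {
    const expiry = Math.floor(Date.now() / 1000) + ttlSeconds;
    const username = expiry + ':' + userId;
    const credential = crypto.createHmac('sha1', sharedSecret)
        .update(username)
        .digest('base64');
    return { username: username, credential: credential };
}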
I’m writing a video chat application. I’m currently using Google’s public STUN server and making requests to Xirsys’ endpoint for remote TURN servers. Looking at the logs, I noticed bad TCP packets being sent by devices on LTE connections and UDP packets being sent by devices on WiFi connections. At first, I assumed the issue was that I was only using STUN servers instead of both TURN and STUN servers. However, I’m still getting “data.sdp is undefined” and/or the ICE connection state is stuck at “checking”. Can someone please not only review the code and its fallacies, but also explain why this is occurring, and point me to any resources that could help my understanding of the issue? Thanks!
Here is the relevant code:
// fetch for ice servers
if (servers) {
    console.log(`servers: ${JSON.stringify(servers.v.iceServers, null, 2)}`);
    iceServers = [
        ...servers.v.iceServers,
        {"url": "stun:stun.l.google.com:19302"},
        {"url": "stun:stun1.l.google.com:19302"},
        {"url": "stun:stun2.l.google.com:19302"},
        {"url": "stun:stun3.l.google.com:19302"},
        {"url": "stun:stun4.l.google.com:19302"},
    ];
    return;
}

// configuration
pc = new RTCPeerConnection({"iceServers": iceServers});
// exchange with the signaling server (method excerpt; createPC creates the peer connection)
exchange(data) {
    const fromId = data.from;
    if (!pc)
        this.createPC(fromId, false);

    console.log(`Data: ${JSON.stringify(data, null, 2)}`);
    console.log(`Data.sdp ${data.sdp}`);

    if (data.sdp) {
        console.log('exchange sdp', data.sdp);
        pc.setRemoteDescription(new RTCSessionDescription(data.sdp), function() {
            if (pc.remoteDescription.type === "offer")
                pc.createAnswer(function(desc) {
                    console.log('createAnswer', desc);
                    pc.setLocalDescription(desc, function() {
                        console.log('setLocalDescription', pc.localDescription);
                        socket.emit('exchange', {'to': fromId, 'sdp': pc.localDescription});
                    }, logError);
                }, logError);
        }, logError);
    } else {
        console.log('exchange candidate', data);
        pc.addIceCandidate(new RTCIceCandidate(data.candidate));
    }
}
It turns out that I initialized my ICE server configuration before the server list had been set by the network request! Problem solved!
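A minimal sketch of that fix: build the RTCPeerConnection only after the Xirsys response has arrived, rather than reading iceServers while it is still empty. fetchIceServers() is a hypothetical promise-returning wrapper around the Xirsys request:
// Hypothetical wrapper around the Xirsys call; resolves with the response
// body, whose shape ({ v: { iceServers: [...] } }) matches the logs above.
fetchIceServers().then(function(servers) {
    const iceServers = [
        ...servers.v.iceServers,
        {"url": "stun:stun.l.google.com:19302"}
    ];
    // Only now is it safe to create the peer connection.
    pc = new RTCPeerConnection({"iceServers": iceServers});
});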