How to time out if a connection is not established - javascript

Is there a default timeout value in the socket.io API, so that after a number of tries, if the connection is not established, I get a timeout event? In my application I try to connect to a Node.js server using socket.io, but if the connection cannot be established or the server is unreachable, I want to receive some event after x number of tries so I can inform the user that there is a connection problem with the server. Instead, my client keeps trying to connect to the server indefinitely and prints the following exception to the console:
socket.io-1.3.5.js:2 GET https://chatapp.local:8898/socket.io/?EIO=3&transport=polling&t=1485528658982-172 net::ERR_CONNECTION_REFUSED
Here is my code:
socket = io(socketUrl, {'force new connection': true});
socket.on('connect', function () {
    uiHandler("socket.connect");
});
socket.on('error', function (err) {
    uiHandler("socket.error", {error: err});
});
socket.on('disconnect', function () {
    uiHandler("socket.disconnect");
});
socket.on('end', function () {
    uiHandler("socket.end");
});
How can I set a timeout if the connection is not established within 30 seconds? Any suggestions, please.

From what I read in the API docs, you can set the timeout value and the number of reconnection attempts for each connection, so if you want to try for 30 seconds you basically have
maxTime = timeout * reconnectionAttempts
Please note that there is also a delay between each retry (which defaults to 1000 ms) and a randomization factor. If you want total control over the duration before reporting a connection error to your clients, you will have to tinker with these a little bit.
From the API docs you can also see that each time a connection attempt fails, an error is emitted as either connect_timeout or connect_error. If every available attempt fails, a reconnect_failed event is fired. At that point you can tell your user that something went wrong.
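For example, here is a minimal sketch (assuming the socket.io 1.x client from the question; the option names come from its docs, and the "socket.unreachable" uiHandler event is made up for illustration):
// 5s per attempt * 6 attempts ≈ 30s, ignoring the inter-attempt delays.
var socket = io(socketUrl, {
    timeout: 5000,             // per-attempt connection timeout (ms)
    reconnectionAttempts: 6,   // stop retrying after 6 failed attempts
    reconnectionDelay: 1000,   // base delay between attempts (ms)
    randomizationFactor: 0.5   // jitter applied to that delay
});
socket.on('connect_error', function (err) {
    uiHandler("socket.error", {error: err});
});
socket.on('reconnect_failed', function () {
    // All attempts are exhausted: tell the user the server is unreachable.
    uiHandler("socket.unreachable");
});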
In a more general way, you have several options to implement control over an asynchronous process. Two come to mind immediately: promises and observables. You might want to explore them for a more general and extensible approach.
Please feel free to ask in the comments if you want more details or if I have not answered properly.

Related

WebSocket client close fails when switching networks / going offline

I'd like to ask a question about how to close a WebSocket client when the machine goes offline or switches networks.
When I try to close the socket in these two cases in Chrome, after I call websocket.close() I don't receive the onclose event for a long time (around 60s); only then do I finally receive it.
After checking the readyState, I found that during those 60 seconds the state is 2 (CLOSING), and has not yet turned to 3 (CLOSED).
So I'd like to know whether there is any step I missed when calling websocket.close() while offline or after a network switch. It works well when the network is normal.
What's your back-end framework?
If you are trying to handle a client's network suddenly going offline, there are two ways you can close the WebSocket from the client, as follows.
Kindly refer to the source code here.
Using the JS offline event handler
If we want to detect that the user went offline, we simply call the WebSocket close function inside the offline event handler.
front-end
function closeWebSocket() {
    websocket.close();
}
$(window).on('beforeunload offline', event => {
    closeWebSocket();
});
back-end (WebSocketServer)
@OnClose
public void onClose(Session session) {
    CURRENT_CLIENTS.remove(session.getId());
}
Using a ping interval on the client side and decreasing the WebSocket timeout on the server side
If the WebSocket server doesn't receive any message within a given time, a timeout occurs. We can use this mechanism to close the session when the client stops sending pings because it has gone offline.
front-end
// send a ping to the server every 3 seconds (the interval must be
// shorter than the server's idle timeout configured below)
const keepAlive = function (timeout = 3000) {
    if (websocket.readyState === websocket.OPEN) {
        websocket.send('ping');
    }
    setTimeout(keepAlive, timeout);
};
keepAlive(); // start the ping loop
back-end (WebSocketConfig)
@Bean
public ServletServerContainerFactoryBean createWebSocketContainer() {
    ServletServerContainerFactoryBean container = new ServletServerContainerFactoryBean();
    container.setMaxSessionIdleTimeout(5000L);
    return container;
}

Reconnect to Laravel Echo server after session disconnection

I am attempting to write a web application with a persistent Echo connection to a laravel-echo-server instance, which needs to detect disconnections and attempt to reconnect gracefully. The scenario I am trying to handle now is a user's machine going to sleep and reawakening with its session key invalidated (the echo server requires an active session in our app). Detecting this situation from an HTTP perspective is solved - I set up a regular keepAlive, and if that keepAlive detects a 400-level error, it reconnects and updates the session auth_token.
When my Laravel session dies, I cannot tell that this has happened from an Echo perspective. The best I've found is that I can attach to the 'disconnect' event, but that only gets triggered if the server-side laravel-echo-server process dies, not when the session is invalid:
this.echoConnection.connector.socket.on('connect', function() {
    logger.log('info', `Echo server running`);
});
this.echoConnection.connector.socket.on('disconnect', function() {
    logger.log('warn', `Echo server disconnected`);
});
On the laravel-echo-server side, I can tell that the connection is dead - it shows this error:
⚠ [7:03:30 PM] - 5TwHN2qUys5VEFP5AAAG could not be authenticated to private.1
I cannot figure out how to catch this failure event programmatically from the client. Is there a way to capture it? Again, I can eventually tell the session is dead because I poll the server regularly via an HTTP keepAlive function, but I would also like to tell directly from the Echo connection if possible, as it polls at a much higher natural rate.
As a second (more important) question: if I detect that my session has died, what should I do to recycle the Echo connection (after I have logged in again via HTTP and gotten a new auth_token)? Is there anything specific I should call? I've had some success calling disconnect() and then setting up the connection again from scratch, but I do see errors such as:
websocket.js:201 WebSocket is already in CLOSING or CLOSED state.
Here is my current (naive) reconnection code, which is my initial connection code with an attempt to disconnect first stapled onto it:
async attemptEchoReconnect() {
    if (this.echoConnection !== null) {
        this.echoConnection.disconnect();
        this.echoConnection = null;
    }
    const thisConnectionParams = this.props.connections[this.connectionName];
    const curThis = this;
    this.echoConnection = new Echo({
        broadcaster: 'socket.io',
        host: thisConnectionParams.echoHost,
        authEndpoint: 'api/broadcasting/auth',
        auth: {
            headers: {
                Authorization: `Bearer ` + thisConnectionParams.authToken
            }
        }
    });
    this.echoConnection.connector.socket.on('connect', function() {
        logger.log('info', `Echo server running`);
    });
    this.echoConnection.connector.socket.on('disconnect', function() {
        logger.log('warn', `Echo server disconnected`);
    });
    this.echoConnection.join('everywhere')
        .here(users => {
            logger.log('info', `Rejoined presence channel`);
        });
    this.echoConnection.private(`private.${this.props.id}`)
        .listen(...);
    setTimeout(() => { this.keepAlive() }, 120 * 1000);
}
Any help would be great - these APIs are not documented to the depth I need, and I am hoping I can get some stability with this connection rather than having to do something ugly like forcing a restart.
For anyone who needs help with this problem: my echo reconnection code above seems to be pretty stable, together with a keepAlive function to determine the state of the HTTP connection. I am still a bit uncertain about the origin of the console errors I am seeing, but I suspect they have to do with connection loss during a sleep cycle, which is not something I am particularly worried about.
I'd still be interested in hearing other thoughts if anyone has any. I am inclined to believe long-term stability of an Echo connection is possible, though it does appear you have to proactively monitor it with the tools you have available.
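For reference, a keepAlive along the lines described above might look like the sketch below. It is only an illustration of the approach from the question: the /api/ping endpoint and the refreshAuthToken helper are hypothetical, and the token lookup mirrors the connection-params pattern from the reconnection code.
async keepAlive() {
    const thisConnectionParams = this.props.connections[this.connectionName];
    try {
        const res = await fetch('/api/ping', { // hypothetical endpoint
            headers: { Authorization: `Bearer ` + thisConnectionParams.authToken }
        });
        if (res.status >= 400 && res.status < 500) {
            // Session died: re-authenticate over HTTP, then rebuild Echo.
            await this.refreshAuthToken();      // hypothetical helper
            await this.attemptEchoReconnect();
        }
    } catch (e) {
        // Network failure: let the next poll retry rather than giving up.
    }
    setTimeout(() => this.keepAlive(), 120 * 1000);
}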

Weird socket.io behavior when Node server is down and then restarted

I implemented a simple chat for my website where users can talk to each other, built with ExpressJS and Socket.io. I added simple protection against a DDoS-style flood caused by one person spamming the window, like this:
if (RedisClient.get(user).lastMessageDate > currentTime - 1 second) {
    return error("Only one message per second is allowed")
} else {
    io.emit('message', ...)
    RedisClient.set(user).lastMessageDate = new Date()
}
I am testing this with this code:
setInterval(function() {
    $('input').val('message ' + Math.random());
    $('form').submit();
}, 1);
It works correctly when the Node server is always up.
However, things get extremely weird if I turn off the Node server, run the code above, and then start the Node server again a few seconds later. Suddenly, hundreds of messages are inserted into the window and the browser crashes. I assume this is because while the Node server is down, socket.io queues up all the client emits, and once it detects the Node server is online again, it pushes all of those messages at once, asynchronously.
How can I protect against this? And what exactly is happening here?
edit: If I use Node in-memory state instead of Redis, this doesn't happen. I am guessing it's because the server gets flooded with READs, and many READs happen before RedisClient.set(user).lastMessageDate = new Date() finishes. I guess what I need is an atomic READ/SET? I am using this module for connecting to Redis from Node: https://github.com/NodeRedis/node_redis
You are correct that this happens due to the queueing up of messages on the client and the resulting flood on the server.
When the server comes back, it receives all the messages at once, and handling them is not synchronous: each socket.on("message", ...) event is executed separately, i.e. one invocation is not related to another.
Since your Redis server has a latency of at least a few ms, these messages are all read before the first SET completes, so every one of them passes the check and goes to the else branch.
You have the following few options.
Use a rate limiter library like this library. It is easy to configure and has multiple configuration options.
If you want to do everything yourself, use a queue on the server. This will take up memory on your server, but you'll achieve what you want. Instead of writing every message to Redis immediately, it is put into a per-user queue; a new queue is created for every new client and deleted after the last item in it has been processed (see the pseudo-code below).
(update) Use MULTI + WATCH to create a lock so that a transaction touching a key that was concurrently modified fails.
The pseudo-code will be something like this:
let queue = {};

let queueHandler = user => {
    while (queue[user].length > 0) {
        const messageObject = queue[user].shift();
        // your redis push logic here
    }
    delete queue[user];
};

let pushToQueue = messageObject => {
    let user = messageObject.user;
    if (!queue[user]) {
        queue[user] = [messageObject];
    } else {
        queue[user].push(messageObject);
    }
    queueHandler(user);
};

socket.on("message", pushToQueue);
UPDATE
Redis supports locking with WATCH, which is used together with MULTI. Using this, you can watch a key, and if any other command modifies that key in the meantime, your transaction fails instead of committing.
From the redis client README:
Using multi you can make sure your modifications run as a transaction, but you can't be sure you got there first. What if another client modified a key while you were working with its data?
To solve this, Redis supports the WATCH command, which is meant to be used with MULTI:
var redis = require("redis"),
    client = redis.createClient({ ... });

client.watch("foo", function(err) {
    if (err) throw err;
    client.get("foo", function(err, result) {
        if (err) throw err;
        // Process result
        // Heavy and time consuming operation here
        client.multi()
            .set("foo", "some heavy computation")
            .exec(function(err, results) {
                /**
                 * If err is null, it means Redis successfully attempted
                 * the operation.
                 */
                if (err) throw err;
                /**
                 * If results === null, it means that a concurrent client
                 * changed the key while we were processing it and thus
                 * the execution of the MULTI command was not performed.
                 *
                 * NOTICE: Failing an execution of MULTI is not considered
                 * an error. So you will have err === null and results === null
                 */
            });
    });
});
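Applied to the rate limit from this question, the same WATCH/MULTI pattern might look like the sketch below. The key layout, user and messageData variables are assumptions for illustration; the point is that if a concurrent handler modifies the key between the GET and the EXEC, results will be null and the message can be rejected.
const key = 'lastMessageDate:' + user; // assumed key layout
client.watch(key, function (err) {
    if (err) throw err;
    client.get(key, function (err, lastMessageDate) {
        if (err) throw err;
        if (lastMessageDate && Date.now() - Number(lastMessageDate) < 1000) {
            return; // rejected: less than a second since the last message
        }
        client.multi()
            .set(key, Date.now())
            .exec(function (err, results) {
                if (err) throw err;
                if (results === null) {
                    return; // a concurrent message changed the key: reject
                }
                io.emit('message', messageData);
            });
    });
});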
Perhaps you could extend your client-side code to prevent data being sent if the socket is disconnected. That way, you prevent the library from queuing messages while the socket is disconnected (i.e. the server is offline).
This could be achieved by checking to see if socket.connected is true:
// Only allow data to be sent to the server when the socket is connected
function sendToServer(socket, message, data) {
    if (socket.connected) {
        socket.send(message, data);
    }
}
More information on this can be found at the docs https://socket.io/docs/client-api/#socket-connected
This approach will prevent the built-in queuing behaviour in all scenarios where a socket is disconnected, which may not be desirable; however, it should protect against the problem you are noting in your question.
Update
Alternatively, you could use a custom middleware on the server to achieve throttling behaviour via socket.io's server API:
/*
 Server side code
*/
io.on("connection", function (socket) {
    // Add custom throttle middleware to the socket when connected
    socket.use(function (packet, next) {
        var currentTime = Date.now();
        // If the socket has a previous timestamp, check that enough time
        // has elapsed since the last message was processed
        if (socket.lastMessageTimestamp) {
            var deltaTime = currentTime - socket.lastMessageTimestamp;
            // If not enough time has elapsed, throw an error back to
            // the client
            if (deltaTime < 1000) {
                next(new Error("Only one message per second is allowed"));
                return;
            }
        }
        // Update the timestamp on the socket, and allow this message to
        // be processed
        socket.lastMessageTimestamp = currentTime;
        next();
    });
});
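On the client side, an error passed to next() in a socket middleware should surface through the socket's 'error' event (per the socket.io client API), so the rejection can be shown to the user; as a sketch:
// Client side: react to the throttle error emitted by the middleware above
socket.on("error", function (err) {
    console.warn(err); // e.g. "Only one message per second is allowed"
});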

How to catch and deal with "WebSocket is already in CLOSING or CLOSED state" in Node

I've been searching for a solution to the issue "WebSocket is already in CLOSING or CLOSED state" and found these:
Meteor WebSocket is already in CLOSING or CLOSED state error
WebSocket is already in CLOSING or CLOSED state.
Answer #1 is strictly related to Meteor and #2 has no answers... I have a Node server app with a socket:
const WebSocket = require('ws');
// `server` is an existing HTTP(S) server instance
const wss = new WebSocket.Server({ server });

wss.on('connection', function connection(socket) {
    socket.on('message', function incoming(data) {
        console.log('Incoming data ', data);
    });
});
And clients connect like this:
const socket = new WebSocket('ws://localhost:3090'); //Create WebSocket connection
//Connection opened
socket.addEventListener('open', function(event) {
console.log("Connected to server");
});
//Listen to messages
socket.addEventListener('message', function(event) {
console.log('Message from server ', event);
});
However, after a few minutes, clients randomly disconnect and the call
socket.send(JSON.stringify(data));
then throws "WebSocket is already in CLOSING or CLOSED state."
I am looking for a way to detect and deal with these disconnections and immediately attempt to reconnect.
What is the most correct and efficient way to do this?
The easiest way is to check whether the socket is open before sending.
For example - write a simple function:
function isOpen(ws) { return ws.readyState === ws.OPEN; }
Then - before any socket.send, make sure it is open:
if (!isOpen(socket)) return;
socket.send(JSON.stringify(data));
You can also rewrite the send function as in this answer, but this way you can log these situations.
And, as for your second request:
immediately attempt to connect again
There is no way to do that from the server.
The client code should monitor the WebSocket state and apply a reconnect method based on your needs.
For example - check this VueJS library that does it nicely. Look at the "Enable ws reconnect automatically" section.
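As a minimal sketch of that idea with the plain browser WebSocket API (the URL and the flat 1-second delay are placeholders; production code would want backoff and jitter):
function connect(url, onMessage) {
    const ws = new WebSocket(url);
    ws.addEventListener('message', onMessage);
    ws.addEventListener('close', function () {
        // Re-create the socket after a short delay whenever it closes
        setTimeout(function () { connect(url, onMessage); }, 1000);
    });
    return ws;
}
connect('ws://localhost:3090', function (event) {
    console.log('Message from server ', event);
});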
My answer is simple: just send a message over the WebSocket at a regular interval, so the server understands that you are still using the service; that is better than having to open another connection. To pick the interval, start your project, inspect the WebSocket in the browser's dev tools, and watch how long the connection stays "pending" before it closes. Then define an interval shorter than that with setInterval, like this, for example:
const conn = new WebSocket("wss://YourLocationWebSocket.com");
setInterval(function() {
    var object = {"message": "ARandomMessage"};
    object = JSON.stringify(object);
    conn.send(object);
}, 40000); // the interval; I suggest 40 seconds
Might be late to the party, but I recently encountered this problem and figured out that the reason was that the readyState property of the WebSocket connection was 3 (CLOSED - see https://developer.mozilla.org/en-US/docs/Web/API/WebSocket/readyState; 2 is CLOSING) at the time the message was sent.
I resolved this by checking the readyState property: if it equals 3, close and reinitialize the WebSocket connection, then loop with a delay until the readyState property equals 1 (OPEN), to ensure that the new connection is open before sending.
if (this.ws.readyState === 3) { // 3 === WebSocket.CLOSED
    this.ws.close();
    this.ws = new WebSocket(`wss://...`);
    // wait until the new connection is open (1 === WebSocket.OPEN)
    while (this.ws.readyState !== 1) {
        await new Promise(r => setTimeout(r, 250));
    }
}
this.ws.send(...)

Sending many requests from Node.js to an API causes error

I have more than 2000 users in my database. When I try to broadcast a message to all of them, it barely sends about 200 requests before my server stops and I get the error below:
{ Error: connect ETIMEDOUT 31.13.88.4:443
    at Object.exports._errnoException (util.js:1026:11)
    at exports._exceptionWithHostPort (util.js:1049:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1090:14)
  code: 'ETIMEDOUT',
  errno: 'ETIMEDOUT',
  syscall: 'connect',
  address: '31.13.88.4',
  port: 443 }
Sometimes I get another error that says:
Error!: Error: socket hang up
This is my request:
const request = require('request'); // npm 'request' package, implied by the call below

function callSendAPI(messageData) {
    request({
        uri: 'https://graph.facebook.com/v2.6/me/messages',
        qs: { access_token: '#####' },
        method: 'POST',
        json: messageData
    }, function (error, response, body) {
        if (!error && response.statusCode == 200) {
            var recipientId = body.recipient_id;
            var messageId = body.message_id;
            if (messageId) {
                console.log("Successfully sent message with id %s to recipient %s",
                    messageId, recipientId);
            } else {
                console.log("Successfully called Send API for recipient %s",
                    recipientId);
            }
        } else {
            console.error("Failed calling Send API");
            console.log(error);
        }
    });
}
I have tried using setTimeout to make the API call wait for a while:
setTimeout(function(){ callSendAPI(data) }, 200);
Has anyone faced a similar error who can help?
EDITED
I'm using the Messenger Platform, which supports a high rate of calls to the Send API; it is not limited to 200 calls.
You may be hitting Facebook API limits. To throttle the requests, you should send each request after some interval from the previous one. You didn't include where you iterate over all the users, but I suspect you do it in a loop, and if you use setTimeout to delay every request with a flat 200ms delay then all the requests still fire at (nearly) the same time as before - just 200ms later.
What you can do is:
Use setTimeout with a variable delay for every request (not recommended)
Use the Async module's series or parallelLimit (using callbacks)
Use Bluebird's Promise.mapSeries or Promise.map with a concurrency limit (using promises)
Option 1 is not recommended because it is still fire-and-forget (unless you add more complexity) and you still risk too much concurrency and going over the limit, because you only control when the requests start, not how many outstanding requests there are.
Options 2 and 3 are mostly the same but differ in using callbacks or promises. In your example you're using callbacks, but your callSendAPI doesn't take a callback of its own, which it should if you want option 2 to work - or, alternatively, it should return a promise if you want option 3 to work (see the sketch below the links).
For more info see the docs:
https://caolan.github.io/async/docs.html#parallelLimit
https://caolan.github.io/async/docs.html#series
http://bluebirdjs.com/docs/api/promise.map.html
http://bluebirdjs.com/docs/api/promise.mapseries.html
Of course there are more ways to do it but those are the most straightforward.
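For illustration, here is a rough sketch of option 3: Bluebird's Promise.map with a concurrency limit. It assumes callSendAPI is rewritten to return a promise, and users and buildMessage(user) are placeholders for however you iterate recipients and build messageData (neither appears in the question):
const Promise = require('bluebird');
const request = require('request');

function callSendAPIAsync(messageData) {
    return new Promise(function (resolve, reject) {
        request({
            uri: 'https://graph.facebook.com/v2.6/me/messages',
            qs: { access_token: '#####' },
            method: 'POST',
            json: messageData
        }, function (error, response, body) {
            if (error || response.statusCode != 200) {
                return reject(error || new Error('Send API failed'));
            }
            resolve(body);
        });
    });
}

// At most 5 requests are in flight at any time.
Promise.map(users, function (user) {
    return callSendAPIAsync(buildMessage(user));
}, { concurrency: 5 })
    .then(function () { console.log('All messages sent'); })
    .catch(function (err) { console.error('Broadcast failed', err); });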
Ideally, if you want to fully utilize the 200-requests-per-hour limit, you should queue the requests yourself and make them at intervals that correspond to that limit. Sometimes, if you haven't made many requests in a given hour, you won't need delays; sometimes you will. What you should really do is queue all requests centrally and empty the queue at intervals corresponding to the portion of the limit already used up, which you would have to track yourself - but that can be tricky.
It sounds like you are hitting a rate limit.
From the Facebook documentation:
Your app can make 200 calls per hour per user in aggregate.
You can check the dashboard to see whether you are hitting the rate limits in these cases.
