I create a socket.io connection with the following code:
var socket = new io.connect('http://localhost:8181', {
    'reconnect': true,
    'reconnection delay': 500,
    'max reconnection attempts': 50
});
But when I kill the server with CTRL+C and start it again, reconnection doesn't happen, even though the disconnect event is raised on the client side. What could be the reason for this?
This is an old question, but for other people like me who are looking for how to configure reconnection in socket.io (1.x), here is the correct syntax:
var socket = new io.connect('http://localhost:8181', {
    'reconnection': true,
    'reconnectionDelay': 1000,
    'reconnectionDelayMax': 5000,
    'reconnectionAttempts': 5
});
I realise this is an old question, but I've been having some trouble with socket.io reconnecting and found this post high in the search results, so I thought I would contribute. Try debugging exactly which events are firing using the following code:
# coffeescript. compile if you're writing javascript, obviously.
socket.on 'connect',-> console.log 'connected'
socket.on 'reconnect',-> console.log 'reconnect'
socket.on 'connecting',-> console.log 'connecting'
socket.on 'reconnecting',-> console.log 'reconnecting'
socket.on 'connect_failed',-> console.log 'connect failed'
socket.on 'reconnect_failed',-> console.log 'reconnect failed'
socket.on 'close',-> console.log 'close'
socket.on 'disconnect',-> console.log 'disconnect'
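If you're writing plain JavaScript rather than CoffeeScript, the listeners above translate directly to:
// Plain JavaScript equivalent of the CoffeeScript snippet above
socket.on('connect', function () { console.log('connected'); });
socket.on('reconnect', function () { console.log('reconnect'); });
socket.on('connecting', function () { console.log('connecting'); });
socket.on('reconnecting', function () { console.log('reconnecting'); });
socket.on('connect_failed', function () { console.log('connect failed'); });
socket.on('reconnect_failed', function () { console.log('reconnect failed'); });
socket.on('close', function () { console.log('close'); });
socket.on('disconnect', function () { console.log('disconnect'); });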
This should give you more insight into the state of the client socket.
Also, try looking in the Network tab of your web inspector to see if it is firing XHR requests as a fallback. Finally, in your web console, try typing io.sockets and expand it out to see whether it is actually trying to reconnect or not.
I have encountered problems with reconnect_failed not firing, and the reconnect tally not resetting. The following are links to discussions of these issues on github.
reconnection delay - exponential back off not resetting properly
reconnect_failed gets never fired
some potential fixes/workarounds
This is an old question, but I had the same question (for a different reason) when using v1.4.5. My chat room app worked beautifully, but when I hit Ctrl+C in the terminal, my browser continued to loop and report ERR_CONNECTION_REFUSED every few seconds until I shut it down.
Changing a previous answer just a bit gave me the solution.
For v1.4.5, here is my original code for "var socket" in my client js file:
var socket = io();
And here is the solution:
var socket = io({
    'reconnection': true,
    'reconnectionDelay': 1000,
    'reconnectionDelayMax': 5000,
    'reconnectionAttempts': 5
});
Obviously I could change the values if I wanted to, but the important point is that this killed the never-ending reconnection requests.
The reconnection delay of 500ms is too small; increase it. On top of that, 50 retries means 500 * 50 = 25000 ms, which is only 25 seconds. If that doesn't help, set a timeout in the error event handler on the client side to recreate the socket object (after an error and some delay, retry creating the connection).
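As a rough sketch of that last suggestion (not a drop-in fix; it keeps the question's original option names and assumes the same URL), you could wrap socket creation in a function and rebuild the socket some time after an error:
function createSocket() {
    var socket = io.connect('http://localhost:8181', {
        'reconnect': true,
        'reconnection delay': 2000,          // larger than the original 500ms
        'max reconnection attempts': 50
    });
    socket.on('error', function () {
        // after an error, wait a bit and then build a completely new socket
        setTimeout(createSocket, 5000);
    });
    return socket;
}
var socket = createSocket();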
I'm new to node.js and discord.js; the previous version of this bot was written in discord.py (which is now deprecated).
This function iterates through all the webhooks (id and token) in my SQL database and sends a message to each of them. There are about 1500 of them (one for each server). I have to send a message roughly every 5 seconds. This worked perfectly in the python version, but that only had to run on about 300 guilds. I don't have the code for it anymore, but it worked the same way (all the requests were sent at once, probably about 300 requests in 500ms and this worked fine), so I don't think this is a rate limit issue.
client.on('messageCreate', (message) => {
    if (message.channelId === '906659272744665139') {
        console.log('found message');
        const url = message.content;
        var webhooklist = [];
        //get all the webhooks from the database
        db.prepare('SELECT * FROM webhooks').all().forEach(webhook => {
            const webhookclient = new WebhookClient({id: webhook.webhookID, token: webhook.webhookToken});
            webhooklist.push(webhookclient);
        });
        console.time('sent');
        var failed = 0;
        webhooklist.forEach(webhook => {
            var row = new MessageActionRow()
                .addComponents(
                    savebutton,
                    reportbutton
                );
            webhook.send({content: url, components: [row]})
                .catch(err => {
                    if (err instanceof DiscordAPIError) {
                        if (err.code === 10015) {
                            //remove the webhook from the database
                            db.prepare('DELETE FROM webhooks WHERE webhookID = ?').run(webhook.id);
                            console.log(`Removed webhook ${webhook.id}`);
                        } else {
                            failed += 1;
                            console.log(err);
                        }
                    } else {
                        failed += 1;
                    }
                });
        });
        console.timeEnd('sent');
        console.log(failed);
    }
});
The problem:
A request is sent to each webhook; however, the messages don't actually get sent half the time. For example, I'm looking at 3 different servers that this bot is in, and in one of them a message appeared while in the other two it didn't (every other combination of these outcomes also occurs, so it's not a problem with how the servers are set up). There are also no errors, as indicated by the failed variable. To clarify, about 50% of the messages get through, but the other 50% are supposedly sent by the bot yet never appear in the channel.
Things that are (most likely) NOT the issue:
-Discord Rate limits
Why: Sending messages through webhooks does not count against the bot's sent messages, and therefore will not cause rate limits (the global 50 messages per second). The only way this would cause me to be rate limited is if I was exceeding the 5/5 rate limit for each webhook individually (this is the same as what happens when you try to spam one channel).
-API rate limits
Why: API rate limits would only trip if I sent more than 10,000 invalid requests in 10 minutes. All of the requests go through, and there are no errors in the console. If I was being API rate limited, I would get completely blocked from using the discord API for up to an hour.
-I'm setting off some sort of spam protection
I've already considered this. Since most of the messages get through, I don't think this is the problem. If I did set off any filters regarding volume of requests, the requests would probably either time out, or I would get blocked.
Other notes:
-A delay between requests is not a viable solution because pretty much any amount of delay multiplied by the 1500 times this has to run would result in this function taking multiple minutes to run.
-This could be related to another issue I'm having, where buttons take so long to respond that the interactions often time out before I can even run .deferReply(). However, none of the webhook requests time out (which suggests this may not be the issue).
-My internet has seemed slow recently even though I have gigabit, but again, if the internet were the issue there would be errors.
In conclusion, a large volume of webhook messages like this SHOULD work, so this issue is most likely client-side.
Found the issue: the large volume of webhook messages sent in such a short amount of time slowed my internet down to a crawl, so many of the web requests ended up not going through. The way I wrote this function, those errors didn't get logged correctly. I solved this issue by using await instead of .then().
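For reference, here is a rough sketch of what the await-based version might look like. This is not the author's exact code; it reuses db, savebutton, reportbutton, WebhookClient, MessageActionRow and DiscordAPIError from the question:
client.on('messageCreate', async (message) => {
    if (message.channelId === '906659272744665139') {
        const url = message.content;
        // build the webhook clients from the database, as before
        const webhooklist = db.prepare('SELECT * FROM webhooks').all()
            .map(webhook => new WebhookClient({id: webhook.webhookID, token: webhook.webhookToken}));
        let failed = 0;
        for (const webhook of webhooklist) {
            const row = new MessageActionRow().addComponents(savebutton, reportbutton);
            try {
                // await the send so any rejection lands in the catch block below and gets counted
                await webhook.send({content: url, components: [row]});
            } catch (err) {
                if (err instanceof DiscordAPIError && err.code === 10015) {
                    // Unknown Webhook: remove it from the database
                    db.prepare('DELETE FROM webhooks WHERE webhookID = ?').run(webhook.id);
                } else {
                    failed += 1;
                    console.log(err);
                }
            }
        }
        console.log(failed);
    }
});
Awaiting each send in sequence also naturally spaces the requests out, at the cost of the full run taking longer.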
Here is the code:
var process = require('process');
var c = 0;
while (true) {
    var t = process.hrtime();
    console.log(++c);
}
Here is my environment:
nodejs v4.2.4, Ubuntu 14.04 LTS on Oracle VM virtualbox v5.0.4 r102546 running in Windows 7
This loop can only run about 60k to 80k times before it hangs. Nothing happens after that.
On my colleague's computer it's maybe 40k to 60k times. But shouldn't this loop continue forever?
I was originally running a benchmark that tests the average execution time of setting up connections, so I can't just record the start time at the beginning and the end time after everything has finished.
Is this related to the OS that I use?
Thanks if anyone knows the problem.
==========================================================
update 2016.4.13:
One day right after I raised this question, I realized what a stupid question it was, and it was not what I really wanted to do. So I'm going to explain it further.
Here is the testing structure:
I have a node server which handles connections. The client sends a 'setup' event on its 'connect' event. On the server side, a Redis subscription channel is created, some queries are made against the db, and then the client's 'setup' callback is called. The client disconnects the socket in the 'setup' callback and reconnects on the 'disconnect' event.
The client code uses socket.io-client to run in the backend and cluster to simulate high concurrency.
Codes are like these:
(some of the functions are not listed here)
[server]
socket.on('setup', function(data, callback) {
    queryFromDB();
    subscribeToRedis();
    callback();
});
[client]
var requests = 1000;
if (cluster.isMaster) {
    for (var i = 0; i < 100; ++i) {
        cluster.fork();
    }
} else {
    var count = 0;
    var startTime = process.hrtime();
    socket = io.connect(...);
    socket.on('connect', function() {
        socket.emit('setup', {arg1: '...', arg2: '...'}, function() {
            var setupEndTime = process.hrtime();
            calculateSetupTime(startTime, setupEndTime);
            socket.disconnect();
        });
    });
    socket.on('disconnect', function() {
        if (count++ < requests) {
            var disconnectEndTime = process.hrtime();
            calculateSetupTime(startTime, disconnectEndTime);
            socket.connect();
        } else {
            process.exit();
        }
    });
}
At first the connections could only be made 500 or 600 times. Once I removed all the hrtime() code, it made it to 1000 times. But when I later raised the number of requests to around 2000 (still without the hrtime() code), it could not finish again.
I was totally confused. Yesterday I thought it was related to hrtime, but of course it wasn't; any infinite loop would hang. I was misled by hrtime.
But what's the problem now?
===================================================================
update 2016.4.19
I solved this problem.
The reason is that my client code uses socket.disconnect and socket.connect to simulate a new user. This is wrong.
In this case the server may not recognize that the old socket has disconnected. You have to delete your socket object and create a new one.
As a result, the connection count may not equal the disconnection count, and this prevents the code from disconnecting from Redis, so the whole loop hangs because Redis is not responding.
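A minimal sketch of that fix, creating a brand-new socket object for each simulated user instead of reusing the old one (the URL and the forceNew option are my assumptions; calculateSetupTime is the helper from the snippets above):
var io = require('socket.io-client');

var requests = 1000;
var count = 0;

function simulateUser() {
    var startTime = process.hrtime();
    // forceNew tells socket.io-client to build a new connection
    // instead of reusing the cached one for this URL
    var socket = io.connect('http://localhost:8181', { forceNew: true }); // placeholder URL

    socket.on('connect', function() {
        socket.emit('setup', {arg1: '...', arg2: '...'}, function() {
            calculateSetupTime(startTime, process.hrtime());
            socket.disconnect();
        });
    });

    socket.on('disconnect', function() {
        // the old socket object is simply dropped here and garbage collected
        if (count++ < requests) {
            simulateUser();   // the next iteration gets a fresh socket
        } else {
            process.exit();
        }
    });
}

simulateUser();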
Your code is an infinite loop - at some point this will always exhaust system resources and cause your application to hang.
Other than causing your application to hang, the code you have posted does very little else. Essentially, it could be described like this:
For the rest of eternity, or until my app hangs, (whichever happens first):
Get the current high-resolution real time, and then ignore it without doing anything with it.
Increment a number and log it
Repeat as quickly as possible
If this is really what you wanted to do, you have achieved it, but it will always hang at some point. Otherwise, you may want to explain your desired result further.
For some reason, SignalR will just stop calling client methods after a short period of time (about 1 hour or less I estimate). I have a page that shows Alerts... a very simple implementation. Here's the Javascript:
$(function () {
    // enable logging for debugging
    $.connection.hub.logging = true;
    // Declare a proxy to reference the hub.
    var hub = $.connection.alertHub;
    hub.client.addAlert = function (id, title, url, dateTime) {
        console.log(title);
    };
    $.connection.hub.start().done(function () {
        console.log("Alert Ready");
    });
});
If I refresh the page, it works again for about an hour, then it stops calling the client event addAlert. There are no errors in the log and no warnings. The last event in the log (other than the pings to the server) is:
[15:18:58 GMT-0600 (CST)] SignalR: Triggering client hub event
'addAlert' on hub 'AlertHub'.
Many of these events will come in for a short while, then just stop, even though the server should still be sending them.
I am using Firefox 35.0.1 on Mac and SignalR 2.0.0.
I realize that a work-around is to force a page refresh every 10 mins or so, but I'm looking for a way to fix the root cause of the problem.
I enabled SignalR tracing on the server. I created an "alert" on the server after a fresh refresh of the Alert page and the alert came through. I waited about 10 mins and I tried it again, and it failed to come through. Here's what the logs read (sorry for the verbosity, not sure what was relevant):
SignalR.Transports.TransportHeartBeat Information: 0 : Connection b8b21c4c-22b4-4686-9098-cb72c904d4c9 is New.
SignalR.Transports.TransportHeartBeat Verbose: 0 : KeepAlive(b8b21c4c-22b4-4686-9098-cb72c904d4c9)
SignalR.Transports.TransportHeartBeat Verbose: 0 : KeepAlive(b8b21c4c-22b4-4686-9098-cb72c904d4c9)
SignalR.Transports.TransportHeartBeat Verbose: 0 : KeepAlive(b8b21c4c-22b4-4686-9098-cb72c904d4c9)
SignalR.Transports.TransportHeartBeat Verbose: 0 : KeepAlive(b8b21c4c-22b4-4686-9098-cb72c904d4c9)
There are dozens more of the SignalR.Transports.TransportHeartBeat messages, but nothing else.
I think there's a default timeout of 110 seconds for SignalR. Can you try the SignalR disconnected event to reconnect it?
$.connection.hub.disconnected(function () {
setTimeout(function () {
startHub();
}, 5000);
});
and in startHub() you can start the connection again (a minimal sketch follows the references below).
Reference: https://github.com/SignalR/SignalR/issues/3128
and How to use SignalR events to keep connection alive in the right way?
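For completeness, a minimal startHub() along those lines might look like this (assuming the same alertHub setup as in the question):
function startHub() {
    $.connection.hub.start()
        .done(function () {
            console.log("Alert Ready");
        })
        .fail(function (err) {
            console.log("Could not connect: " + err);
        });
}

// when the connection drops, wait a few seconds and start it again
$.connection.hub.disconnected(function () {
    setTimeout(startHub, 5000);
});

startHub();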
As it turns out, the problem was the way I was handling the AlertHub connections. I am using Enterprise Library Caching to store the connections backing the AlertHub, and I was expiring the cache entries 20 minutes after they were created. Ergo, when the server called the client method, no errors were reported because there were no client(s) to send the message(s) to.
I have since increased the cache expiration to a reasonable value, which solved the problem.
You can refresh the page if the client is inactive with no mouse movement (roughly every 15-30 minutes). I had the same problem and solved it that way. It was a nasty workaround, but later I forgot about it and never fixed it completely ;)
I have JavaScript sending the coordinates of my mouse over the window via a WebSocket connection.
var webSocket = new WebSocket('ws:// ...');
$(window).mousemove(function (evt) {
webSocket.send(evt.pageX + ' ' + evt.pageY);
});
Then, PHP, my backend, is simply listening for these coordinates and sending to everyone who is connected the position of the cursor.
//whenever someone sends coordinates, all users connected will be notified
foreach ($webSocket->getUsers() as $user)
$user->send($xpos . ' ' . $ypos);
JavaScript gets these numbers and moves a red square based on this point.
//when PHP notifies a user, the square is updated in position
$('.square').css({
left: xpos,
top: ypos
});
The end product looks like this:
Now the issue is that it's very laggy, but I've found a way to combat this. I've added an interval to the JavaScript which just sends filler data every 50 milliseconds.
setInterval(function() {
webSocket.send('filler data');
}, 50);
Surprisingly, the change made it much smoother:
I've noticed that the left side (where the mouse is being moved from) is always smooth, and I'm guessing that because the left window is always sending data, the connection stays smoother, whereas the right window is only receiving data.
I've tried:
$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($socket, SOL_SOCKET, TCP_NODELAY, 1);
var_dump(socket_get_option($socket, SOL_SOCKET, TCP_NODELAY));
The output was int(-1), and it seems that Nagle's algorithm is still present in my application.
Somewhat related: Does setting TCP_NODELAY affect the behaviour of both ends of the socket? Could this be an issue of JavaScript purposely delaying the packets?
Why does sending something make it smoother?
Is there a way I can speed this up, without having to send useless data?
Your issue has to do with Nagle's algorithm: small packets of data are waiting for each other.
To disable this, set the TCP_NODELAY option using socket_set_option().
Regarding your edit, you are absolutely right. The client side is the problem here because, although you can use JavaScript in your browser, Windows systems, for example, have a registry setting which by default enables Nagle on TCP.
Unfortunately this is a client-side issue.
Using the registry editor (Start Orb -> Run -> regedit), we can enable TcpNoDelay as well as set TcpAckFrequency.
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters\Interfaces\
Furthermore, I have not found it possible or relevant to change TCP_NODELAY on SOCK_STREAM with contexts.
In the current situation, I have just decided to acknowledge (ACK) every packet that the server sends. I know this will be sacrificing bandwidth for latency.
webSocket.onmessage = function(evt) {
webSocket.send(''); //empty, but still includes headers
dispatch(evt);
}
Following instruction from my previous question, I now have an array of connected users in socket.io. My problem (which I was warned of in the answer) is that sockets stay in this array even after the browser has disconnected.
I tried removing sockets from the array in a socket.on('disconnect') handler, but there is still a delay of ~1 minute between when the browser disconnects and when socket.io triggers the disconnect.
What is the best way to "test" a socket to see if its actually alive? I am tempted to try to send a message and catch any errors, but I feel like there is a more elegant solution.
Here is a solution for testing whether the socket is still open:
if (socket.readyState === socket.OPEN) {
    // the connection is open and ready to communicate
}
Why it works:
readyState = The current state of the connection; this is one of the Ready state
constants. Read only.
"the Ready state constants"
CONNECTING 0: The connection is not yet open.
OPEN 1: The connection is open and ready to communicate.
CLOSING 2: The connection is in the process of closing.
CLOSED 3: The connection is closed or couldn't be opened.
https://developer.mozilla.org/en-US/docs/Web/API/WebSocket
I had an error in my disconnect handler. What I ended up using:
socket.on('disconnect', function() {
users.splice(users.indexOf(socket), 1);
});
socket.on('end',function(){
//your code
})
or
socket.on('error',function(err){
//in case of any errors
})
The disconnect event won't fire until all the clients have been disconnected!