Long response time on socket polling with Heroku - javascript

My client connects to the socket server using the socket.io 1.0+ library:
$scope.socket = io.connect( "/gateway" );
On the server side I launch the express server with the socket.io server attached:
httpServer = http.createServer( app ).listen( process.env.PORT, process.env.IP || "0.0.0.0", function() {
    io = require( 'socket.io' )( httpServer ).of( "/gateway" );
    io.on( 'connection', function( socket ) {
        // socket events here
    });
});
Then the project is tested on Heroku. What bothers me a lot is this screenshot from the Chrome dev tools:
You can see there that two polling requests are constantly being performed. One gets a response in a couple of milliseconds, the other takes somewhere around 26 seconds. If I click on one of them, I can see that the real difference between them is the request method: the one that uses POST gets a quick response, while the one that uses GET remains in a pending state until it gets a response (or times out) after ~26 seconds.
In my development environment (c9.io) I do not see this behaviour, but in testing (Heroku free node) I do.
Probably because of this I get some other weird behaviour only on Heroku; for example, on tab close I do not receive a disconnect event, while on c9 I do.
Has anybody faced the same problem? Is there a fix?

As it turns out, the long response was mistaken for a stuck open connection when the polling transport is being used. In this case a 25-second connection is held open for client-server communication. This is normal, although you have to know about it: some logging modules/solutions may treat it as a long response and constantly warn you.
In short:
if the websocket transport is used, there will be one websocket connection open all the time;
if the polling transport is used, then every 25-26 seconds a new GET connection will be (re)established.
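If the long-pending GET is a problem in itself (for example, for log monitoring or a proxy with aggressive timeouts), the client can be told to skip polling entirely. A hedged sketch, reusing the $scope.socket line from the question; the transports option name is from socket.io-client 1.x, so verify it against your version:

```javascript
// Force the websocket transport so no long-polling GET is held open.
// Assumes socket.io-client 1.x and the "/gateway" namespace from the question.
$scope.socket = io.connect( "/gateway", {
    transports: [ "websocket" ]
});
```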

Related

Closing single node server connection closes them all

I barely ask any questions on Stack Overflow, but this one is beyond me. I guess I'm missing something basic, as I'm pretty new to Node servers.
Our application is pretty basic. The server is supposed to receive a handful of text lines (data), merge and parse them, and once the connection is closed (data sending is over) it sends the data to the API.
var net = require('net');
var fs = require('fs');
const axios = require('axios');

const server = new net.Server();
server.listen(PORT, IP);

server.on("connection", client => {
    client.write("Hello\n");
    console.log('connected');

    let received = "";
    client.on("data", data => {
        received += data;
        console.log("Partial data is: " + data);
    });

    client.on("close", () => {
        received = received.toString('utf8');
        fs.appendFile('log.txt', received, function (err) {});
        received = received.replace(/(?:\r\n|\r|\n)/g, "||");
        axios.post(APIADDRESS, {data: received});
        // stringify so the object is readable instead of "[object Object]"
        console.log('Full data is: ' + JSON.stringify({data: received}));
    });
});
To send the data I'm simply running netcat (nc) as netcat ipaddress port; that's not a problem. It connects fine and the status message is received.
The thing is: once I open two or more connections from two DIFFERENT SSH servers, something weird happens. I can send line after line just fine. The server reports the "partial data" debug output without problem, for both of them.
However, once I close one of the connections (Ctrl+C), they BOTH close.
In the end, only the data from the manually closed connection is received. The other one, from a separate nc on a separate SSH server, never reaches the client.on("close") part, it seems. It's just terminated for no reason.
Any ideas? I don't even know where to start.
//EDIT
Just tested it from my PC and a mobile SSH app using separate SSH servers. As soon as Ctrl+C is pressed on any device, it closes the connection for all clients.
//Forgot to mention I'm running pm2 to keep the server up. Once I started the script by hand, bypassing pm2, it works fine. Weird. It is happening because of pm2.
I would guess that you have PuTTY configured to ‘Share SSH connections if possible’. Per the PuTTY documentation, when doing so:
When this mode is in use, the first PuTTY that connected to a given server becomes the ‘upstream’, which means that it is the one managing the real SSH connection. All subsequent PuTTYs which reuse the connection are referred to as ‘downstreams’: they do not connect to the real server at all, but instead connect to the upstream PuTTY via local inter-process communication methods.
So, if you Ctrl+C the PuTTY session that is managing the actual shared connection, they both lose their connection.
You could presumably disable this shared connection feature at either the client or server end of things since both must be enabled for sharing to occur.
To anyone coming here in the future.
If you are using pm2 with --watch enabled and the text log file is in the same folder as your main server script... that's why it drops all connections after a single client disconnects: pm2 detects that the log file has changed, restarts the server, and the restart kills every open socket.
I'm not facepalming, that's not even funny.

netty-socketio: client did not complete upgrade - closing transport

I have a socket server running with netty-socketio and a web app that connects to it using socket.io-client JS library.
The problem is that I'm losing a few connections (not all, let's say 20%).
For the lost connections: right after the connection is made by the client, the server logs client did not complete upgrade - closing transport and disconnects the client.
This happens on my production server (using nginx as a proxy) and also in my local environment (connecting directly to the netty-socketio server). It's pretty much random and I can't identify a pattern. For example, if I continuously keep refreshing the client app in the browser (with a 5-second interval), at some point this error will happen, and for the subsequent tries it will work normally again (until it happens another time).
This is the error on the netty-socketio lib: https://github.com/mrniko/netty-socketio/blob/master/src/main/java/com/corundumstudio/socketio/transport/WebSocketTransport.java#L196
but I could not figure out why it happens randomly (sometimes even on the first try)
Any thoughts on this are really appreciated.
Thanks
After some research and tests I found out that when using netty-socketio as server, you need to specify the transport method on the client side.
var socket = io('server-address', { transports: [ 'polling' ] });
// or
var socket = io('server-address', { transports: [ 'websocket' ] });
If you don't specify it, the connection will be established using polling as transport method and netty will automatically try to upgrade it to websocket. This is what was causing connection failures.
Since specifying the transport method I have had 0% connection failures.

Correct way to handle Websocket

I have a client-to-server WebSocket connection which should stay open for 40 seconds or so. Ideally it should be open forever.
The client continually sends data to server and vice-versa.
Right now I'm using this sequence:
var socket;

function senddata(data) {
    if (!socket) {
        socket = new WebSocket(url);
        // register all handlers immediately, so failures during
        // connect are handled too (not only after onopen fires)
        socket.onopen = function (evt) {
            socket.send(data);
        };
        socket.onmessage = function (evt) {
            var obj = JSON.parse(evt.data);
            port.postMessage(obj);
        };
        socket.onerror = function (evt) { // was "oneerror", a typo that silently disabled error handling
            socket.close();
            socket = null;
        };
        socket.onclose = function (evt) {
            socket = null;
        };
    } else {
        socket.send(data);
    }
}
Clearly, as per the current logic, in case of an error the current request data may not be sent at all.
To be frank, it sometimes gives an error that the WebSocket is still in the CONNECTING state. The connection often breaks due to networking issues. In short, it does not work perfectly.
I've read about a better design, How to wait for a WebSocket's readyState to change, but it does not cover all the cases I need to handle.
Also, I've Googled this but could not find the correct procedure.
So what is the right way to send regular data through Websockets which handles well these issues like connection break etc?
An event you don't seem to cover is onclose, which should work really well since it's called whenever the connection terminates. This is more reliable than onerror, because not all connection disruptions result in an error.
I personally use Socket.IO, it enables real-time bidirectional event-based communication between client and server.
It is event driven. Events such as
on connection :: socket.on('connection', callback);
and
on disconnect :: socket.on('disconnect',callback);
are built in with socket.io, so it can help you with your connection concerns. It's very easy to use; check out their site if you are interested.
I use a two-layer scheme on the client: an abstract wrapper plus a websocket client.
The websocket client is responsible for interacting with the server, recovering the connection, and providing interfaces (an event emitter and some methods) to the abstract wrapper.
The abstract wrapper is a high-level layer which interacts with the websocket client, subscribes to its events, and aggregates data while the connection is temporarily down. The abstract wrapper can expose any interface to the application layer, such as a Promise, an EventEmitter, and so on.
At the application layer, I just work with the abstract wrapper and don't worry about the connection or data loss. Undoubtedly, it's a good idea to expose the connection status and data-delivery confirmations here as well, because they're useful.
If necessary, I can provide some example code.
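The buffering idea behind that wrapper can be sketched in a few lines. The names below are illustrative (not from the answer), and the transport send function is injected so the sketch stays independent of the WebSocket API:

```javascript
// Minimal sketch of the wrapper's aggregation layer: messages are
// queued while the connection is down and flushed, in order, when it
// comes back up.
function BufferedSender(sendFn) {
    this.queue = [];
    this.connected = false;
    this.sendFn = sendFn;
}

BufferedSender.prototype.send = function (data) {
    if (this.connected) {
        this.sendFn(data);
    } else {
        this.queue.push(data); // aggregate while the link is down
    }
};

BufferedSender.prototype.onOpen = function () {
    this.connected = true;
    while (this.queue.length) {
        this.sendFn(this.queue.shift()); // flush in arrival order
    }
};

BufferedSender.prototype.onClose = function () {
    this.connected = false;
};
```

A real wrapper would also cap the queue size and confirm delivery, but the split of responsibilities is the same.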
This apparently is a server issue not a problem in the client.
I don't know what the server looks like here, but this was a huge problem for me in the past when I was working on a websocket-based project: the connection would continuously break.
So I created a websocket server in Java, and that resolved my problem.
Websockets depend on lots of settings. If you're using servlets, the servlet container's settings matter; if you're using PHP, the Apache and PHP settings matter. For example, if you create a websocket server in PHP and PHP has a default timeout of 30 seconds, the connection will break after 30 seconds. If keep-alive is not set, the connection won't stay alive, etc.
What you can do as a quick solution is:
keep sending pings to the server every few seconds (2 or 3), so that if the websocket is disconnected the client learns about it and can invoke onclose or ondisconnect (I hope you know that there is no way to detect a broken connection other than failing to send something);
check the server's keep-alive header;
if you have access to the server, check its timeouts etc.
I think that would help.
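The ping suggestion above can be sketched as a small tick function wired to a timer; the message shape and the 3-second interval are assumptions, not part of the answer:

```javascript
// Returns a function that sends a small JSON ping while the socket is
// OPEN; attach it to setInterval on the client so a dead link is
// discovered by a failed send instead of going unnoticed.
function makeHeartbeat(socket) {
    return function tick() {
        // readyState 1 === WebSocket.OPEN
        if (socket.readyState === 1) {
            socket.send(JSON.stringify({ type: "ping" }));
        }
    };
}

// Usage (browser):
//   var timer = setInterval(makeHeartbeat(socket), 3000);
//   socket.onclose = function () { clearInterval(timer); };
```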

socket.io 'disconnect' not called if page refreshed fast enough

I have the following abridged code:
io.on('connection', function(client) {
    client.uuid = uuid.v4();
    // add client object to server...
    console.log('socket.io:: client connected ' + client.uuid );

    client.on('disconnect', function() {
        console.log('socket.io:: client disconnected ' + client.uuid );
        // remove client object from server...
    });
});
If I open up this page in the browser, everything seems fine. If I refresh the page, it will fire disconnect and then connection. However, if I refresh fast enough, there are times when disconnect doesn't get called, and thus client data doesn't get cleaned from the server. Is there any way to protect against this?
Edit: reword reconnect -> connection
As adeneo mentioned, socket.io has heartbeats which automatically check whether a client is still connected. The disconnect code is fired once it detects the client is actually gone. After replicating my original issue, I tried leaving the server on, and about 30 seconds later the "dead" clients were removed. So to solve the issue, you just have to wait; socket.io takes care of everything on its own.
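If waiting ~30 seconds for the heartbeat to expire is too long, the window can be shortened when creating the server. The option names below are from the socket.io server API, but the values are illustrative and the defaults vary by version, so treat this as a sketch to verify:

```javascript
// Shorter heartbeat window: stale clients are detected and their
// 'disconnect' handlers fired sooner, at the cost of more ping traffic.
var io = require('socket.io')(httpServer, {
    pingInterval: 5000, // how often a ping is sent to each client
    pingTimeout: 3000   // how long to wait for the pong before disconnecting
});
```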
The same question was answered here.
TL;DR
You can use those options on the client:
const socket = io({
    transports: ['websocket'],
    upgrade: false
});
This will prevent socket.io from using the HTTP polling method at all, which causes the issues. I used this trick successfully on v4.

Weird JavaScript in an Express/Socket.io app I am working on

I am trying to develop a SCADA-like app using Brian Ford's excellent https://github.com/btford/angular-socket-io-seed as a starting point, but I have run into some JavaScript code that I just don't understand. Worse, I don't even know what to search for.
Every example I find via Google uses the second syntax, which at least here does not work.
This code in the main app.js works, but I need access to the socket object so I can pass it to my under-development simulation module, so I need to change it. But when I change the socket connect callback, the module no longer gets loaded. I know when the code in routes/socket.js gets run because the MET3: log line comes from it.
Can somebody give me a clue what the original line is doing so I can make changes to it?
Is this some cool new shorthand I should be using?
Not certain if it is relevant, but I am running socket.io 0.9.16 and node.js 0.10.29.
var io = require('socket.io').listen(server);
// many lines later
io.sockets.on('connection', require('./routes/socket.js')); // What is this?
Working Output
Express server listening on port 3000
debug - client authorized
info - handshake authorized hOEv8Iv7pPO1xLdxdq1V
MET1: routes/index.js index()
debug - setting request GET /socket.io/1/websocket/hOEv8Iv7pPO1xLdxdq1V
**MET3:** routes/socket.js Socket ID [hOEv8Iv7pPO1xLdxdq1V] connected
debug - websocket writing 5:::{"name":"send:name","args":[{"sockName":"JohnDoe","sockPage":"RETS"}]}
debug - websocket writing 5:::{"name":"send:time","args":[{"time":"Fri Jul 25 2014 17:39:32 GMT-0400 (EDT)"}]}
var io = require('socket.io').listen(server);
// many lines later
io.sockets.on('connection', function (socket) {
    console.log("MET0: app.js io.sockets.on() running");
    require('./routes/socket.js'); // This should work
});
Broken Output
Express server listening on port 3000
debug - client authorized
info - handshake authorized Hn9It34K2OCT6o8ceinF
**MET0:** app.js io.sockets.on() running
MET1: routes/index.js index()
debug - emitting heartbeat for client Hn9It34K2OCT6o8ceinF
debug - websocket writing 2::
debug - set heartbeat timeout for client Hn9It34K2OCT6o8ceinF
debug - got heartbeat packet
debug - cleared heartbeat timeout for client Hn9It34K2OCT6o8ceinF
io.sockets.on('connection', require('./routes/socket.js')); // What is this?
Essentially this passes whatever is returned from require('./routes/socket.js') as the 'connection' event handler. If you aren't familiar with how requiring modules works in node, it is likely that /routes/socket.js contains something like the following:
module.exports = function() {
    // Do some work
};
This means that the function above will be returned by the require call. Have a look inside socket.js and see what is returned.
