Why doesn't io.emit() work in process.on()? - javascript

I'm trying to apply this so that my server will tell the clients when it is closed. I don't understand why the server will not emit. It seems like the program closes before it gets a chance to emit, but console.log() works. I think my problem probably has to do with the synchronous nature of process.on as mentioned here, but honestly I don't understand enough about what (a)synchronous really means in this context. Also, I'm on Windows 7 if that helps.
// catch ctrl+c event and exit normally
process.on('SIGINT', function (code) {
  io.emit("chat message", "Server CLOSED");
  console.log("Server CLOSED");
  process.exit(2);
});
I just started messing around with this stuff today so forgive my ignorance. Any help is greatly appreciated!
Full server code.

io.emit() is an asynchronous operation (you can say that it works in the background) and due to various TCP optimizations (perhaps such as Nagle's algorithm), your data may not be sent immediately.
process.exit() takes effect immediately.
You are likely shutting down your app and thus all resources it owns before the message is successfully sent and acknowledged over TCP.
One possible work-around is to do the process.exit(2) on a slight delay that gives the TCP stack a chance to send the data before you shut it down.
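For example, a minimal sketch of that delayed-exit idea (the 500 ms figure is an arbitrary guess, not a guaranteed-safe value):

// catch ctrl+c, announce the shutdown, then exit after a short grace period
process.on('SIGINT', function () {
  io.emit("chat message", "Server CLOSED");
  console.log("Server CLOSED");
  setTimeout(function () {
    process.exit(2);
  }, 500);
});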
Another possibility is to just avoid that last chat message. The client will shortly see that the connection to the server was closed and that it cannot reconnect so it should be equipped to display that info to the user anyway (in cases of a server crash).
You could also consider turning off the Nagle algorithm which attempts to wait a short bit before sending data in case you immediately send some more data that could be combined into the same packet. But, to know whether that would work reliably, you'd have to test pretty thoroughly on appropriate platforms and it's possible that even turning this off wouldn't fix the issue since it is a race between the TCP stack to send out its buffered data and the shutting down of all resources owned by this process (which includes the open socket).
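For reference, on a plain net socket Nagle can be disabled with setNoDelay(true); how to reach the underlying socket from socket.io depends on the library version, so treat this as illustrative rather than a drop-in fix:

var net = require('net');

// Illustrative only: disable Nagle's algorithm on a raw TCP socket so small
// writes are sent immediately instead of being buffered and coalesced.
var server = net.createServer(function (socket) {
  socket.setNoDelay(true);
});
server.listen(3000); // 3000 is just a placeholder port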

Related

Node.js: How to determine cause of server hangs

I recently started noticing my Node.js app server hanging after a number of requests. After more investigation I narrowed it down to a specific endpoint. After about 30 hits on this endpoint, the server just stops responding and requests start timing out.
In summary, on each request the client uploads a file, the server does some preliminary checks, creates a new processing job, puts it on a queue (using bull), and returns a response to the client. The server then continues to process the job from the queue. The job process involves retrieving job data from a Redis database, clients opening WebSocket connections to check on job status, and the server writing to the database once the job is complete. I understand any one of those things could be causing the hang-up.
There are no errors or exceptions being thrown or logged. All I see is requests start to time out. I am using node-inspector to try to figure out what is causing the hang ups but not sure what to look for.
My question is: is there a way to determine the root cause of the hang, using a debugger or some other means, to figure out whether, say, too many ABC instances have been created or too many XYZ connections are left open?
In this case it was not actually a Node.js specific issue. I was leaking a database connection in one code path. I learned the hard way, if you exhaust database connections, your Node server will stop responding. Requests just seem to hang.
A good way to track this down, on Postgres at least, is by running the query:
SELECT * FROM pg_stat_activity
which lists all the open database connections, as well as the last query run on each connection. You can then search your code for where that query is called, which is super helpful in tracking down leaks.
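For illustration, this kind of leak often looks something like the following with the node-postgres Pool API; the table and query are placeholders, and the fix is releasing in a finally block so the connection always goes back to the pool:

// Hedged sketch: if an error is thrown before release(), the connection never
// returns to the pool, the pool eventually runs dry, and requests appear to hang.
const { Pool } = require('pg');
const pool = new Pool();

async function getUser(id) {
  const client = await pool.connect();
  try {
    const result = await client.query('SELECT * FROM users WHERE id = $1', [id]);
    return result.rows[0];
  } finally {
    client.release(); // always release, even when the query throws
  }
}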
A lot of your node server debugging can be done using the following tools.
Nodemon is an excellent tool for debugging because it will display errors just like node, but it will also attempt to restart your server when it sees new changes, removing a lot of the stop/start hassle.
https://nodemon.io/
Finally I would recommend Postman. Postman lets you manually send in requests to your server, and will help you narrow your search.
https://www.getpostman.com/
Related to OP's issue and solution from my experience:
If you are using the mysql library and check out a specific connection with pool.getConnection(...), you need to remember to release it afterwards with connection.release(); otherwise your pool will run out of connections and your process will hang.
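A minimal sketch of that pattern (the connection settings are placeholders):

var mysql = require('mysql');
var pool = mysql.createPool({ host: 'localhost', user: 'app', password: 'secret', database: 'test' });

pool.getConnection(function (err, connection) {
  if (err) throw err;
  connection.query('SELECT 1', function (queryErr, results) {
    connection.release(); // return the connection to the pool whether the query failed or not
    if (queryErr) throw queryErr;
    console.log(results);
  });
});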

Implications (if any) of an ECONNRESET error on my Node.js server?

I have two software components working in tandem - a Visual Studio C# chat application I made, which connects via a TCP NetworkStream to a Node.js application which is hosting a very simple TCP server.
The C# application sends messages via the TCP NetworkStream to the server, and everything works well, until I close the application (the actual process is relatively unimportant, but I've listed it here and included a short gif for clarity):
First, I start up the Node.js TCP server
Then the C# Application starts and opens up a TCP NetworkStream to the Node.js server
I type the message 'Hello!' into the C# app's input and hit 'Send'
The message is received by the TCP server
However when I close my C# application, I get an ECONNRESET error
I'm closing the NetworkStream on the client side using NetworkStream.Close() which, as #RonBeyer pointed out, is ill-advised. According to MSDN it:
Closes the current stream and releases any resources (such as sockets and file handles) associated with the current stream. Instead of calling this method, ensure that the stream is properly disposed.
I assume this is done with the using keyword in C# (which I am pretty unfamiliar with, given my novice understanding of the language).
BUT I DIGRESS. What I know is that an ECONNRESET error is caused by an abrupt closure of the socket / stream. The peculiar thing is, disregarding the ECONNRESET error on the server allows the server to accept new connections seemingly unperturbed.
So my question is:
does ignoring the error even matter in this context? Are there any possible implications I'm not seeing here?
and if it does/there are, what problems is it causing under the surface (undisposed resources etc.)?
I say this because the function of the server (to accept new TCP connections) remains uninterrupted, so it seems to make no difference superficially. Any expertise on the subject would be really helpful.
Pastebin to my C# code
Thanks in advance!
I suggest reading this post and then deciding for yourself based on your own knowledge of your code whether or not it's OK to ignore the ECONNRESET. It sounds like your Node app may be trying to write to the closed connection (heartbeats being sent?). Proper closing of the connection from your C# app would probably take care of this, but I have no knowledge of C#.
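If you do decide to tolerate it, the usual approach on the Node side is to attach an error handler to each accepted socket so the reset is logged rather than thrown; a minimal sketch, assuming a plain net server like the one described (the port is a placeholder):

var net = require('net');

var server = net.createServer(function (socket) {
  socket.on('error', function (err) {
    if (err.code === 'ECONNRESET') {
      console.log('Client disconnected abruptly (ECONNRESET)');
    } else {
      console.error(err); // anything else is worth a closer look
    }
  });

  socket.on('data', function (chunk) {
    console.log('received: ' + chunk);
  });
});
server.listen(1337);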
You may have a problem if you get lots of users and if ECONNRESET causes the connection to go into TIME_WAIT. That will tie up the port for 1-2 minutes. You can check for that with netstat, which is available on both Linux and Windows.
If you really want to get into the nitty gritty of socket communications I suggest the excellent Unix Network Programming, by Stevens.

Does using jQuery.get effectively double the ping time?

Suppose I have some script myScript.js that uses jQuery.get() to retrieve a small piece of data from the server. Suppose also that my ping time is horrible at 1500ms. Does using jQuery.get effectively double the ping time to 3000ms?
Or is there async magic that allows some sort of parallel processing? The reason I'm asking is that we use jQuery.get() fairly liberally and I'm wondering if it is an area we need to look at optimizing.
Edit: "double" compared to somehow rearranging things to load all the data upon the initial page load and bypassing jQuery.get altogether.
Ping time is usually server-related, whereas jQuery is all client side. So the answer is no, it doesn't affect your ping time.
If you're asking whether using jQuery.get (or ajax in general) can make your client side slower, then the answer is yes: the more JS you have, the slower the client generally gets when you're trying to process a lot of things, since everything pretty much runs on the same thread. However, by default these ajax requests are asynchronous, so until the server sends the response back the thread is usually idle anyway.
I'd suggest you open your page in Chrome and use the developer tools to see the network usage. That will tell you exactly how much time is taken 'waiting' on the server.
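If you'd rather measure it from code than from the Network tab, a rough sketch (the /api/ping endpoint is just a placeholder on your own server):

// Time a single jQuery.get round trip from the browser.
var start = performance.now();
$.get('/api/ping', function () {
  console.log('round trip took ' + Math.round(performance.now() - start) + ' ms');
});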
If you break down a request, you can get an idea of what latency you can expect.
Every TCP connection begins with a three-way-handshake:
SYN (client to server)
SYN-ACK (server to client)
ACK (client to server)
If the request fits in the size of one TCP packet (~1500 bytes) it can be sent as the last part of the handshake to optimize the network flow.
The response might be sent in just one packet as well (depending on its size). Once sent, both sides engage in a connection termination which takes two pairs of FIN-ACK sequences unless the connection is kept alive. At this point I'm not entirely sure whether the server can send FIN together with the last response packet.
So, in the best case scenario you can expect at least 2x ping time, but more likely 3-4x.
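As a rough worked example with the 1500 ms ping from the question: one round trip for the handshake (with the request often riding on the final ACK), plus one round trip for the request/response pair, already gives about 3000 ms before the callback fires; add DNS resolution, TLS negotiation, or a connection that can't piggyback on the handshake and you quickly reach the 3-4x figure.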

Why do Javascript exceptions leave the interpreter in an unpredictable state?

Exceptions cause Node.js servers to crash. The common wisdom is that one needs to monitor Node.js processes and restart them on crash. Why can't one just wrap the entire script in a try {} catch {} or listen for exception events?
It's understood that catching all exceptions is bad because it leaves the interpreter in an unpredictable state. Is this actually true? Why? Is this an issue specific to V8?
Catching all exceptions does not leave the interpreter in a bad state, but it may leave your application in a bad state. An uncaught exception means that something you were expecting to work failed, and your application does not know how to handle that failure.
For example, if your app is a web server that listens on port 80 for connections and the port is already in use when the app launches, an exception is raised. Your code may ignore it, in which case the process may keep running without actually listening on the port, making it futile; or it may handle it: print an error message, log a warning, kill the other process, or whatever way you want it handled. But you can see why ignoring it is not a good idea.
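A minimal sketch of handling that particular failure instead of ignoring it (the port number is just the one from the example above):

var http = require('http');

var server = http.createServer(function (req, res) {
  res.end('ok');
});

// The 'error' event fires on the server when listen() fails, e.g. the port is taken.
server.on('error', function (err) {
  if (err.code === 'EADDRINUSE') {
    console.error('Port 80 is already in use; shutting down.');
    process.exit(1);
  } else {
    throw err;
  }
});

server.listen(80);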
Another example is failing to communicate with a database (a dropped connection, being unable to connect, receiving an unexpected error). If your app flow does not catch the exception properly but just ignores it, you may be sending the user an acknowledgement of an event that actually failed.
Exceptions are part of the event engine. Instead of wrapping everything in a try, what you want to do is listen for that exception: http://debuggable.com/posts/node-js-dealing-with-uncaught-exceptions:4c933d54-1428-443c-928d-4e1ecbdd56cb and then respond in the proper manner.
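In current Node the listener looks roughly like this; note that after an uncaught exception the process may be in an unknown state, so logging and exiting is usually the safer response:

process.on('uncaughtException', function (err) {
  console.error('Uncaught exception:', err.stack || err);
  process.exit(1); // restart via a supervisor rather than limping on
});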
As for the second part of your question:
It really depends on your application. You need to test to see whether the exception is something you more or less expect. Sometimes an exception signals a real failure, not just something benign like a missing file.

How do I recover from a WebSocket client computer going to sleep or app going to background (Safari on iPad)

I have browser client Javascript which opens a WebSocket (using socket.io) to request a long-running process start, and then gets a callback when the process is done. When I get the callback, I update the web page to let the user know the process has completed.
This works ok, except on my iPad when I switch to another app and then come back (it never gets the callback, because I guess the app is not online at the time). I'm assuming the same thing will happen on a laptop or other computer that sleeps while waiting for the callback.
Is there a standard way (or any way) to deal with this scenario? Thanks.
For reference, if you want to see the problem page, it is at http://amigen.perfectapi.com/
There are a couple of things to consider in this scenario:
Detect the app going off/on line
See: Online and offline events.
When your app detects the online event after the computer wakes up you can get any information that you've missed.
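A small sketch of that idea; fetchMissedUpdates is a hypothetical function you would implement against your own server:

window.addEventListener('online', function () {
  console.log('Back online, re-syncing');
  fetchMissedUpdates(); // hypothetical: ask the server what happened while we were away
});

window.addEventListener('offline', function () {
  console.log('Connection lost; will re-sync when we come back online');
});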
For older web browsers you'll need to do this in a cleverer way. At Pusher we've added a ping-pong check between the client and server. If the client doesn't receive a ping within a certain amount of time, it knows there's a connection problem. If the server sends a ping and doesn't get a pong back within a certain time, it knows there's a problem.
A ping pong mechanism is defined in the spec but a way of sending a ping or pong hasn't been defined on the WebSocket API as yet.
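Until then, you can do it at the application level. An illustrative sketch over socket.io, where the 'app-ping'/'app-pong' event names and the timing values are made up for this example:

// Server side: announce a heartbeat every 10 seconds.
setInterval(function () {
  io.emit('app-ping', Date.now());
}, 10000);

// Client side: note when the last heartbeat arrived and react if it stops.
var lastPing = Date.now();
socket.on('app-ping', function () {
  lastPing = Date.now();
  socket.emit('app-pong');
});
setInterval(function () {
  if (Date.now() - lastPing > 30000) {
    console.log('No heartbeat for 30s; connection is probably dead, reconnect and re-sync');
  }
}, 5000);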
Fetching missed information
Most realtime servers only deliver messages to connected clients. If a client isn't connected, maybe due to a temporary network disturbance or because its computer has been asleep for a while, then that client will miss the message.
Some frameworks do provide access to messages through a history/cache. For those that don't, you'll need to detect the problem (as above) and then fetch any missed messages. A good way to do this is by providing a timestamp or sequence ID with each message so you can make a call to your web server to say "give me all messages since X".
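A sketch of that "give me all messages since X" idea; the /messages endpoint, the seq field, the 'job status' event name, and the render function are all hypothetical:

var lastSeq = 0;

socket.on('job status', function (msg) {
  lastSeq = msg.seq;  // assumes the server stamps each message with a sequence ID
  render(msg);        // hypothetical UI update
});

function fetchMissedMessages() {
  $.get('/messages?since=' + lastSeq, function (missed) {
    missed.forEach(render);
    if (missed.length) {
      lastSeq = missed[missed.length - 1].seq;
    }
  });
}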
