I recently started noticing my Node.js app server hanging after a number of requests. After more investigation I narrowed it down to a specific endpoint. After about 30 hits on this endpoint, the server just stops responding and requests start timing out.
In summary: on each request the client uploads a file, the server does some preliminary checks, creates a new processing job, puts it on a queue using Bull, and returns a response to the client. The server then continues to process the job from the queue. Processing a job involves retrieving job data from a Redis database, clients opening WebSocket connections to check on job status, and the server writing to the database once the job is complete. I understand any one of those things could be causing the hang-up.
There are no errors or exceptions being thrown or logged; all I see is requests starting to time out. I am using node-inspector to try to figure out what is causing the hang-ups, but I'm not sure what to look for.
My question is: is there a way to determine the root cause of the hang, using a debugger or some other means? For example, to figure out that too many ABC instances have been created, or that too many XYZ connections are open.
In this case it was not actually a Node.js-specific issue: I was leaking a database connection in one code path. I learned the hard way that if you exhaust database connections, your Node server will simply stop responding. Requests just seem to hang.
A good way to track this down, on Postgres at least, is to run the query:
SELECT * FROM pg_stat_activity
This lists all the open database connections, along with the last query run on each connection. You can then search your code for where that query is called, which is super helpful in tracking down leaks.
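As a point of reference, here is a minimal sketch of the leak-free pattern with the node-postgres (pg) library; the table and query are placeholders, and the key point is releasing the client in a finally block:

const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from PG* environment variables

async function getUser(id) {
  const client = await pool.connect(); // check a connection out of the pool
  try {
    const result = await client.query('SELECT * FROM users WHERE id = $1', [id]);
    return result.rows[0];
  } finally {
    client.release(); // always release, even if the query throws
  }
}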
A lot of your Node server debugging can be done using the following tools.
Nodemon is an excellent tool for debugging because it will display errors just like node, but it will also attempt to restart your server when files change, removing a lot of the stop/start hassle.
https://nodemon.io/
Finally, I would recommend Postman. Postman lets you manually send requests to your server, which will help you narrow your search.
https://www.getpostman.com/
Related to OP's issue and solution from my experience:
If you are using the mysql library and check out a dedicated connection with pool.getConnection(...), you need to remember to release it afterwards with connection.release().
Otherwise your pool will run out of connections and your process will hang.
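For example, a minimal sketch of that pattern (the connection options are placeholders):

const mysql = require('mysql');
const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'mydb' });

pool.getConnection(function (err, connection) {
  if (err) throw err;
  connection.query('SELECT 1 + 1 AS two', function (queryErr, results) {
    connection.release(); // return the connection to the pool, even on error
    if (queryErr) throw queryErr;
    console.log(results[0].two);
  });
});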
Related
I have not been able to find an answer to this anywhere online. I want to remove possible jitter from my Node.js server. I am using socket.io to create connections to Node.
If a user goes to a specific part of my website, a connection is started. However, if the user refreshes the site too quickly and often, the connection is created very frequently, and issues arise with my server.
While I realize this could be solved a couple of different ways, I am hoping a server-side solution is out there. Meaning: whenever a user connects, make sure the user stays connected for at least 5 seconds, then move on; otherwise, disconnect the user. Thanks for any insight!
First off, a little background. With a default configuration, when a socket.io connection starts, it first makes 2-5 HTTP requests, and then, once it has established the "logical" connection, it tries to establish a connection using the webSocket transport. If that is successful, it keeps that webSocket connection as a long-lasting connection and sends socket.io packets over it.
If the client refreshes in the middle of the transition to a webSocket connection, it creates a period of unknown state on the server: the server isn't sure whether the user is still mid-transition to a lasting webSocket connection, has already gone entirely, is having some sort of connection issue, or is just hitting refresh. You can easily end up with the server thinking there are multiple connections from the same user, all in the process of being confirmed. It can be a bit messy if your server is sensitive to that kind of thing.
The quickest thing you can do is force the connection process to go immediately to the webSocket transport. You can do that in the client by adding an option to your connection code:
let socket = io(yourURL, {transports: ["websocket"]});
You can also configure the server to only accept webSocket connections, if you're trying to protect against any other types of connections besides just those from your own web pages.
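As a sketch, the matching server-side restriction might look like this (assuming socket.io attached to a plain Node HTTP server):

const http = require('http');
const server = http.createServer();

// accept only the webSocket transport; polling connections are refused
const io = require('socket.io')(server, { transports: ['websocket'] });

io.on('connection', function (socket) {
  console.log('confirmed webSocket connection:', socket.id);
});

server.listen(3000);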
This will then go through the usual webSocket connection process, which starts with a single HTTP request that is then "upgraded" to the webSocket protocol. One connection, one socket. The server will know right away whether the user is or isn't connected. And, once they've switched over to the webSocket protocol, the server will know immediately if the user hits refresh, because the browser will close the webSocket right away.
The "start with http first" feature in socket.io is largely present because in the early days of webSockets, there were some browsers that didn't yet support them and some network infrastructure (like corporate proxies) that didn't always support webSocket connections. The browser issue is completely gone now. All browsers in use support webSocket connections. I don't personally have any data on the corporate proxies issues, but I don't ever hear about any issues with people using webSockets these days so I don't think that's much of an issue any more either.
So, the above change will get you a quick, confirmed connection and get rid of the confusion around whether a user is or isn't connected early in the connection process.
Now, if you still have users messing things up with rapid refreshes, you probably need to implement some protection on your server for that. If you set a cookie for each user that arrives at your server, you can create some middleware that keeps track of how many page requests have come from the browser with that cookie in some recent time interval, and just return an error page that explains they can't make requests that quickly. I would implement this at the web page level, not the webSocket level, as that gives users better feedback to stop hitting refresh. If it's really refresh you're trying to protect against, and not general navigation on your site, then keep a record of the cookie + URL combination; if you see even two of those within a few seconds, return the error page instead of the expected content. Returning an error page forces a more conscious action to go back to the right page before they can get to the content.
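A minimal sketch of that idea as Express middleware; the cookie name, the 3-second window, and the in-memory Map are assumptions for illustration (a real version would also evict stale entries):

const express = require('express');
const cookieParser = require('cookie-parser');
const crypto = require('crypto');

const app = express();
app.use(cookieParser());

const lastHit = new Map(); // key: visitorId|path, value: timestamp of previous request

app.use(function (req, res, next) {
  let visitorId = req.cookies.visitorId;
  if (!visitorId) {
    visitorId = crypto.randomBytes(16).toString('hex');
    res.cookie('visitorId', visitorId);
  }
  const key = visitorId + '|' + req.path;
  const now = Date.now();
  const previous = lastHit.get(key) || 0;
  lastHit.set(key, now);
  if (now - previous < 3000) {
    // two hits on the same URL within 3 seconds: likely a rapid refresh
    return res.status(429).send('You are refreshing too quickly. Please slow down.');
  }
  next();
});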
I'm starting my first Express web app. I'm not quite understanding the logic of when and where to start my SQL connection, and when I'm supposed to end it.
Copy-pasting from projects online to get started, I had a connect and a close inside the same db.js file. It seems that no matter where it's required, requiring it instantly creates a connection, whether that's in the app.js entry point or just the fact that it's required by a certain model file.
Removing the connection.end() call solved my issue of not being able to run an insert query because of "cannot enqueue query after invoking quit". But if I connect and end manually around each insert, I need to create a NEW database object, or connect() won't even work.
Where am I REALLY supposed to "start" the connection? From the require in app.js? Does it even matter, since it starts from any require anywhere? When do I call connection.end()? It's not like a desktop app, so killing the Node server in VS Code just ends it anyway.
I just don't get it; the Node.js documentation doesn't really spell it out for me. Should I use pooling? Where do I close the connection? Why one way or the other? I really tried to Google it, but nothing goes over the why and where of these conventions.
Generally, database connections should be created just before your query executes and closed straight after.
Long-lived database connections can drain the server of resources and may cause connection-limit exhaustion (there is usually a cap set by the client driver). The more concurrent use your application has, the more prone you are to long-lived connections causing problems.
Creating a connection from scratch is a resource- and time-consuming process, so you should definitely be using connection pooling, which makes "creating" a connection a fast operation of simply grabbing an available connection from the pool.
By releasing the connection as soon as possible back to the pool, you free it up for other workers to use.
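Concretely, a minimal sketch with the mysql library: export a single pool from db.js and let pool.query() grab and release connections for you (the connection options are placeholders):

// db.js
const mysql = require('mysql');

const pool = mysql.createPool({
  host: 'localhost',
  user: 'app',
  password: 'secret',
  database: 'mydb',
  connectionLimit: 10 // cap on concurrent connections held by the pool
});

module.exports = pool;

// elsewhere, e.g. in a model file:
// const pool = require('./db');
// pool.query('SELECT * FROM users WHERE id = ?', [userId], function (err, rows) {
//   // a connection was grabbed from the pool and released automatically
// });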
https://softwareengineering.stackexchange.com/a/142068/167591
I have a few questions about the combination of Nginx and Node.js.
I've used Node.js to create my server, and now I'm facing an issue with the server getting caught up on certain actions (writing, removing, etc.).
We are using Redis to lock the server while a request is being handled: for example, if a new user is signing up, all the other requests wait until that process is done, and if there is another, longer process running, all the other requests wait even longer.
We thought about creating a load balancer (using Nginx) that would check whether the server is locked; if the server is locked, it would open a new task rather than waiting until the first process is done.
I followed this tutorial and created a dummy server, but then I struggled with how to implement this functionality of opening new ports.
I'm new to load-balancing implementation and will be happy to hear your thoughts and get your help.
Thank you.
The gist of it is that your server needs to not crash if more than one connection attempt is made to it. Even if you use NGINX as a load balancer and have five different instances of your server running... what happens when six clients try to access your app at once?
I think you are thinking about load balancers slightly wrong. There are different load balancing methods, but the simplest one to think about is "round robin" in which each connection gets forwarded to the next server in the list (the rest are just more robust and complicated versions of this one). When there are no more servers to forward to, the next connection gets forwarded to the first server again (whether or not it is done with its last connection) and the circle starts over. Thus, load balancers aren't supposed to manage "unique connections" from clients...they are supposed to distribute connections among servers.
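For illustration, a minimal round-robin setup in an Nginx config might look like this (the ports are placeholders for your own Node.js instances):

upstream node_backend {
    # connections are forwarded to these servers in turn (round robin)
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_backend;
        proxy_set_header Host $host;
    }
}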
Your server doesn't necessarily need to accept connections and handle them all at once. But it needs to at least allow connections to queue up without crashing, and then accept and deal with them one by one.
You can go the route you are discussing. That is, you can fire up a unique instance of your server...via Heroku or otherwise...for every single connection that is made to your app. But this is not efficient, and it will ultimately create more work for you in trying to architect a system that can do that well. Why not just fix your server?
I am running Node.js code (server.js) on JXcore using
jx mt-keep:4 server.js
We have a lot of request hits per second, and most of them involve transactions. I am looking for a way to catch the error in case any thread dies, with the request information returned to me so that I can capture that request and take appropriate action on it. That way I would not lose the incoming request and could still handle it.
This is a Node.js project that, due to project urgency, has been moved to JXcore.
Please let me know if there is a way to handle this, even at the code level.
Actually it's similar to a single Node.js instance: you have the same tools and options for handling the errors.
Besides, a JXcore thread warns the task queue when it catches an unexpected exception on the JS side (the task queue stops sending requests to that instance), then safely restarts that particular thread. You may listen for the 'uncaught exception' and 'restart' events on the thread and manage a softer restart.
process.on('restart', function (res_cb, exit_code) {
  // this thread needs a restart (due to an unhandled exception, IO, hardware etc.)
  // prepare your app for the thread's restart,
  // then call res_cb(exit_code) to allow the restart
});
Note: JXcore expects the application to have been up and running for at least 5 seconds before it restarts any thread. Perhaps this limitation protects the application from looping thread restarts.
You may also start your application using 'jx monitor'; it supports multiple threads and reloads crashed processes.
I've been Googling this issue for hours, but have not found any solution.
I am currently working on this app, built on Meteor.
Now the scenario is: after the website is opened and all the assets have been loaded in the browser, the browser constantly makes recurring XHR calls to the server. These calls are made at a regular interval of 25 seconds.
This can be seen in the Network tab of the browser console. See the pending request in the last row of the image.
I can't figure out where these calls originate, or why they are invoked automatically even when the user is idle.
Now the question is: how can I disable these automatic requests? I want to invoke the requests manually, i.e. when a menu item is selected, etc.
Any help will be appreciated.
[UPDATE]
In response to Jan Dvorak's comment:
When I type "e" in the search box, the list of events whose names start with the letter "e" is displayed.
The request goes out with all valid parameters and a payload like this:
["{\"msg\":\"sub\",\"id\":\"8ef5e419-c422-429a-907e-38b6e669a493\",\"name\":\"event_Coll_Search_by_PromoterName\",\"params\":[\"e\"]}"]
And this is the response, which is valid:
a["{\"msg\":\"data\",\"subs\":[\"8ef5e419-c422-429a-907e-38b6e669a493\"]}"]
The code for this action is posted here.
But in the case of the automatic recurring requests, the request goes out without a payload, and the response is just the letter "h", which is strange, isn't it? How can I get rid of this?
Meteor has a feature called
Live page updates.
Just write your templates. They automatically update when data in the database changes. No more boilerplate redraw code to write. Supports any templating language.
To support this feature, Meteor needs to do some server-client communication behind the scenes.
Traditionally, HTTP was created to fetch dead data: the client tells the server it needs something, and it gets something. There is no way for the server to notify the client that it has new data. Later, it became necessary to push some data to the client. Several alternatives came into existence:
polling:
The client makes periodic requests to the server. The server responds with new data or says "no data" immediately. It's easy to implement and doesn't use many resources. However, it's not exactly live: it can be used for a news ticker, but it's not exactly good for a chat application.
If you increase the polling frequency, you improve the update rate, but the resource usage grows with the polling frequency, not with the data transfer rate. HTTP requests are not exactly cheap. One request per second from multiple clients at the same time could really hurt the server.
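A minimal browser-side polling sketch; the /updates endpoint, the response shape, and render() are assumptions for illustration:

setInterval(function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/updates');
  xhr.onload = function () {
    var data = JSON.parse(xhr.responseText);
    if (data.msg !== 'no data') {
      render(data); // hypothetical function that redraws the page
    }
  };
  xhr.send();
}, 1000); // one request per second: cheap individually, costly in aggregate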
hanging requests:
The client makes a request to the server. If the server has data, it sends it. If the server doesn't have data, it doesn't respond until it does. Changes are picked up immediately, and no data is transferred when there's nothing to send. It does have a few drawbacks, though:
If a web proxy sees that the server is silent, it eventually cuts off the connection. This means that even if there is no data to send, the server needs to send a keep-alive response anyways to make the proxies (and the web browser) happy.
Hanging requests don't use up (much) bandwidth, but they do take up memory. Today's servers can handle many concurrent TCP connections, so it's less of an issue than it was before. What does need to be considered is the amount of memory associated with the threads holding on to these requests, especially when the connections are tied to specific threads serving them.
Browsers have hard limits on the number of concurrent requests per domain and in total. Again, this is less of a concern now than it was before. Thus, it seems like a good idea to have one hanging request per session only.
Managing hanging requests feels kinda manual as you have to make a new request after each response. A TCP handshake takes some time as well, but we can live with a 300ms (at worst) refractory period.
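A minimal server-side sketch of a hanging request in Express; onNextChange() is a hypothetical hook that calls back when data changes, and the 25-second keep-alive matches the timeout discussed at the end of this answer:

const express = require('express');
const app = express();

app.get('/updates', function (req, res) {
  var answered = false;

  // keep-alive: respond with "nothing happened" before proxies cut the connection
  var timer = setTimeout(function () {
    if (!answered) { answered = true; res.json({ msg: 'h' }); }
  }, 25000);

  // hypothetical hook that calls back once when there is new data to push
  onNextChange(function (data) {
    if (!answered) {
      answered = true;
      clearTimeout(timer);
      res.json(data);
    }
  });
});

app.listen(3000);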
Chunked response:
The client creates a hidden iFrame with a source corresponding to the data stream. The server responds with an HTTP response header immediately and leaves the connection open. To send a message, the server wraps it in a pair of <script></script> tags that the browser executes when it receives the closing tag. The upside is that there's no connection reopening but there is more overhead with each message. Moreover, this requires a callback in the global scope that the response calls.
Also, this cannot be used with cross-domain requests as cross-domain iFrame communication presents its own set of problems. The need to trust the server is also a challenge here.
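For illustration, each message on the wire might look like this, assuming a hypothetical global handleMessage callback defined on the parent page:

<script>parent.handleMessage({"msg": "news", "text": "something happened"});</script>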
Web Sockets:
These start as a normal HTTP connection but they don't actually follow the HTTP protocol later on. From the programming point of view, things are as simple as they can be. The API is a classic open/callback style on the client side and the server just pushes messages into an open socket. No need to reopen anything after each message.
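A minimal browser-side sketch of that open/callback style; the URL is a placeholder:

var socket = new WebSocket('ws://example.com/live');

socket.onopen = function () {
  socket.send('subscribe'); // nothing to reopen after each message
};

socket.onmessage = function (event) {
  console.log('server pushed:', event.data);
};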
There still needs to be an open connection, but it's not really an issue here with the browser limits out of the way. The browser knows the connection is going to be open for a while, so it doesn't need to apply the same limits as to normal requests.
These seem like the ideal solution, but there is one major issue: IE<10 doesn't know them. As long as IE8 is alive, web sockets cannot be relied upon. Also, the native Android browser and Opera mini are out as well (ref.).
Still, web sockets seem to be the way to go once IE8 (and IE9) finally dies.
What you see are hanging requests with the timeout of 25 seconds that are used to implement the live update feature. As I already said, the keep-alive message ("h") is used so that the browser doesn't think it's not going to get a response. "h" simply means "nothing happens".
Chrome supports web sockets, so Meteor could have used them with a fallback to long requests, but, frankly, hanging requests are not at all bad once you've got them implemented (sure, the browser connection limit still applies).