I'm working with Node + Mongo + Express to create a REST API. There are cases where the Node server crashes and I have to restart it. I'm using forever to handle the restarts, but I can't find a solution for the requests that are lost during a crash.
Example: I am handling 10 HTTP requests at a given moment and my Node server crashes while serving one of them. In this case the other 9 in-flight requests will be lost.
Is there any fallback mechanism to prevent this?
The nearest thing I see right now would be a Node.js cluster: the master handles the crash of one of its children and the remaining workers keep serving HTTP requests. If that is not enough, put nginx in front of several Node processes at the same time; it will handle that kind of thing.
Hope it helps you.
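A minimal sketch of that idea, assuming the Express app is exported from a hypothetical ./app module; the master reforks any worker that dies so the remaining workers keep serving traffic:

var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core
  os.cpus().forEach(function () { cluster.fork(); });

  // When a worker crashes, log it and start a replacement
  cluster.on('exit', function (worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died (' + (signal || code) + '), forking a new one');
    cluster.fork();
  });
} else {
  var app = require('./app'); // hypothetical module exporting the Express app
  app.listen(3000);
}

Note that the requests already in flight inside the crashed worker are still lost; the cluster only keeps the other workers serving and replaces the dead one.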
The server crashes if there is an unhandled exception, so you need to add error handling to your functions with try/catch. There are also several events emitted on the process object that can help with your problem. You could listen for uncaughtException and then follow a strategy like the one described here: http://blog.argteam.com/coding/hardening-node-js-for-production-part-3-zero-downtime-deployments-with-nginx/, just replacing 'SIGTERM'. A sketch of what that could look like is below.
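For illustration only, a rough sketch of that approach, assuming server is the object returned by app.listen(); on an uncaught exception it stops accepting new connections, lets in-flight requests finish, then exits so forever can restart the process:

process.on('uncaughtException', function (err) {
  console.error('Uncaught exception, shutting down:', err);

  // Stop accepting new connections; requests already in progress may finish
  server.close(function () { process.exit(1); });

  // Force exit if open connections keep the server from closing in time
  setTimeout(function () { process.exit(1); }, 10000).unref();
});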
Related
I have an Express Node API on Heroku. It has a few open routes that make upstream requests to another web server, which then executes some queries and returns data.
So client->Express->Apache->DB2.
This has been running just fine for several months on a Heroku Hobby dyno. Now, however, we exposed another route and more client requests are coming in. Heroku is then throwing H12 errors (because the Express app isn't giving back a response within 30 s).
I'm using axios in the Express app to make the requests to Apache and get data. However, I'm not seeing anything fail in the logs; no errors are getting caught to give me more detail about why things could be timing out. I investigated the Apache->DB2 side of things and the bottleneck doesn't seem to be there; it's almost certainly on the Express side.
Per Heroku's advice, I connected the app to New Relic but haven't gained any insights yet. Could this be a scalability issue with Express and a high number of new requests coming in over a short period of time? There aren't particularly many, i.e. ~50/min at the highest. Would beefing up the Heroku dyno do anything? Any other ways to actually see what's going on with Express?
It seems like 10-15% of the client requests are receiving the timeout (and it seems to happen at times when there are lots of incoming requests).
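One thing I haven't ruled out: axios has no timeout by default, so a stalled upstream call would never log an error before Heroku cuts the request off at 30 s. A sketch of what I'm considering adding (the upstream host and route are placeholders):

const axios = require('axios');

// Fail faster than Heroku's 30 s router timeout so the error shows up in our logs
const upstream = axios.create({
  baseURL: 'https://apache.example.com', // placeholder upstream host
  timeout: 25000
});

app.get('/data', async (req, res, next) => {
  try {
    const { data } = await upstream.get('/query'); // placeholder upstream route
    res.json(data);
  } catch (err) {
    console.error('Upstream request failed:', err.code || err.message);
    next(err);
  }
});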
Thanks!
I recently started noticing my Node.js app server hanging after a number of requests. After more investigation I narrowed it down to a specific endpoint. After about 30 hits on this endpoint, the server just stops responding and requests start timing out.
In summary, on each request the clients upload a file, server does some preliminary checks, creates a new processing job, puts that on a queue, using bull, and returns a response to the client. The server then continues to process the job on the queue. The job process involves retrieving job data from a Redis database, clients opening WebSocket connections to check on job status, and server writing to the database once job is complete. I understand any one of those things could be causing the hang up.
There are no errors or exceptions being thrown or logged. All I see is requests start to time out. I am using node-inspector to try to figure out what is causing the hang ups but not sure what to look for.
My question is: is there a way to determine the root cause of the hang up, using a debugger or some other means, to figure out that too many ABC instances have been created or too many XYZ connections are open?
In this case it was not actually a Node.js-specific issue. I was leaking a database connection in one code path. I learned the hard way: if you exhaust database connections, your Node server will stop responding, and requests just seem to hang.
A good way to track this down, on Postgres at least, is by running the query:
SELECT * FROM pg_stat_activity
which lists all the open database connections, as well as which query was last run on each connection. Then you can check your code for where that query is called, which is super helpful in tracking down leaks.
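For illustration, a sketch of the pattern behind this kind of leak, assuming node-postgres' Pool (the table and query are placeholders): if anything throws before release() is reached, the connection never goes back to the pool.

const { Pool } = require('pg');
const pool = new Pool();

// Leaky version: if query() throws, release() is never reached
async function getUserLeaky(id) {
  const client = await pool.connect();
  const result = await client.query('SELECT * FROM users WHERE id = $1', [id]);
  client.release();
  return result.rows[0];
}

// Safe version: release in finally so the connection always returns to the pool
async function getUser(id) {
  const client = await pool.connect();
  try {
    const result = await client.query('SELECT * FROM users WHERE id = $1', [id]);
    return result.rows[0];
  } finally {
    client.release();
  }
}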
A lot of your node server debugging can be done using the following tools.
Nodemon is an excellent tool for debugging because it will display errors just like node, but it will also restart your server when it detects changes, removing a lot of the stop/start hassle.
https://nodemon.io/
Finally, I would recommend Postman. Postman lets you manually send requests to your server, and will help you narrow your search.
https://www.getpostman.com/
Related to OP's issue and solution from my experience:
If you are using the mysql library and get a specific connection with connection.getConnection(...), you need to remember to release it afterwards with connection.release();
Otherwise your pool will run out of connections and your process will hang.
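A minimal sketch of that pattern with the mysql package's pool (connection settings and query are placeholders):

const mysql = require('mysql');
const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'mydb' });

pool.getConnection((err, connection) => {
  if (err) return console.error(err);

  connection.query('SELECT 1', (queryErr, results) => {
    // Always release, even when the query fails, or the pool eventually runs dry
    connection.release();
    if (queryErr) return console.error(queryErr);
    console.log(results);
  });
});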
I am running Node.js code (server.js) under JXcore using
jx mt-keep:4 server.js
We get a lot of request hits per second, and most of them involve transactions.
I am looking for a way to catch errors in case any thread dies, with the request information returned to me so that I can take appropriate action based on that request. That way I would not lose the incoming request and could handle it.
This is a Node.js project that has been moved to JXcore due to project urgency.
Please let me know if there is a way to handle this, even at the code level.
Actually it's similar to a single Node.js instance. You have the same tools and options for handling the errors.
Besides, a JXcore thread warns the task queue when it catches an unexpected exception on the JS land (the task queue stops sending requests to that instance), then safely restarts the particular thread. You may listen to the 'uncaught exception' and 'restart' events for the thread and manage a softer restart.
process.on('restart', function (res_cb, exit_code) {
  // thread needs a restart (due to an unhandled exception, IO, hardware etc.)
  // prepare your app for this thread's restart
  // call res_cb(exit_code) when you are ready for the restart
  res_cb(exit_code);
});
Note: JXcore expects the application to be up and running for at least 5 seconds before restarting any thread. Presumably this limitation protects the application from looping thread restarts.
You may start your application using 'jx monitor'; it supports multiple threads and reloads crashed processes.
I'm quite new to Express.js, and one of the things that surprised me most at first, compared to other servers such as Apache or IIS, is that an Express.js server crashes every time it encounters an uncaught exception or some other error, taking down the site and making it inaccessible to users. A terrible thing!
For example, my application is crashing with a Javascript error because a variable is not defined due to a name change in the database table.
TypeError: Cannot call method 'replace' of undefined
This is not such a good example, because I should solve it before moving the site to production, but similar errors can sometimes occur that shouldn't be causing a server crash.
I would expect to see an error page, or just an error on that specific page, but taking down the whole server for this kind of thing sounds terrifying.
Express error handlers don't seem to be enough for this purpose.
I've been reading about how to solve this kind of thing in Node.js by using domains, but I found nothing specific to Express.js.
Another option I found, which doesn't seem to be recommended in all cases, is using tools that keep a process running forever, so that after a crash it restarts itself: tools like Forever, Upstart or Monit.
How do you guys deal with this kind of problems in Express.js?
The main difference between Apache and Node.js in general is that Apache forks a process per request while Node.js is single threaded. If an error occurs in Apache, only the process handling that request crashes and the others continue to work; in Node.js the only thread goes down.
In my projects I use monit to check memory/CPU (if Node.js takes up too many resources on my VPS, monit will restart it) and daemontools to make sure Node.js is always up and running.
I would recommend using domains along with clusters. There is an example in the docs themselves at http://nodejs.org/api/domain.html. There are also modules for Express.js such as https://www.npmjs.org/package/express-domain-middleware.
When such errors occur, using a domain along with cluster lets us isolate the context in which the error occurred, so it affects only a single worker in the cluster. We should log the error, disconnect that worker from the cluster and refork it. We can then read the logs to fix the errors that need fixing in the code.
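A rough sketch of the per-request domain idea (express-domain-middleware packages this up for you; the code below just shows the shape under those assumptions, not that module's exact API):

var domain = require('domain');
var express = require('express');
var app = express();

app.use(function (req, res, next) {
  var d = domain.create();
  d.add(req);
  d.add(res);
  d.on('error', next); // route async errors to Express' error handlers
  d.run(next);         // run the rest of the middleware chain inside the domain
});

app.get('/boom', function (req, res) {
  // An async throw that would normally crash the whole process
  setTimeout(function () { throw new Error('async failure'); }, 10);
});

app.use(function (err, req, res, next) {
  console.error(err);
  res.status(500).send('Something broke');
  // In a cluster setup you would also disconnect this worker and let the master refork it
});

app.listen(3000);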
I was facing the same issue and I fixed it using try/catch, like this: I created different route files and wrapped the require of each route file in a try/catch block.
try {
  app.use('/api', require('./routes/user'))
} catch (e) {
  console.log(e);
}

try {
  app.use('/api', require('./routes/customer'))
} catch (e) {
  console.log(e);
}
Is there a way I can make Node.js reload every time it serves a page?
I want to do this during the dev cycle so I can avoid having to shut down and start up on each code change.
Edit: Try nodules and their require.reloadable() function.
My former answer was about why not to reload the Node.js process, and it does not really apply here. But I think it is still important, so I leave it here.
Node.js uses evented IO and is crafted specifically to avoid multiple threads or processes. The famous C10k problem asks how to serve 10 thousand clients simultaneously; this is where threads don't work very well. Node.js can serve 10 thousand clients with only one thread. If you were to restart Node.js each time, you would severely cripple it.
What does evented IO mean?
To take your example: serving a page. Each time Node.js is about to serve a page, a callback is called by the event loop. The event loop is inherent to each Node.js application and starts running after initializations have completed. Node.js on the server-side works exactly like client-side Javascript in the browser. Whenever an event (mouse-click, timeout, etc.) happens, a callback - an event handler - is called.
And on the server side? Let's have a look at a simple HTTP server (source code example taken from the Node.js documentation):
var http = require('http');
http.createServer(function (request, response) {
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.end('Hello World\n');
}).listen(8124);
console.log('Server running at http://127.0.0.1:8124/');
This first loads the http module, then creates an HTTP server and tells it to invoke the inner function starting with function (request, response) every time an HTTP request comes in, and then makes the server listen on port 8124. This completes almost immediately, so console.log is executed right after.
Now the Node.js event loop takes over. The application does not end but waits for requests. And voilà, each request is answered with Hello World\n.
In summary, don't restart Node.js; let its event loop decide when your code has to be run.
Found Nodemon, exactly what I wanted: https://github.com/remy/nodemon
I personally use spark2 (the fork) and will switch to Cluster as soon as I find the time to test it. Among other things, those two will listen for file changes and reload the server when appropriate, which seems to be what you're looking for.
There are many apps for doing this, but I think the best is Cluster, since you have zero downtime for your server. You can also set up multiple workers with it, or manually start/stop/restart it and show stats with its CLI REPL functionality.
No. Node.js should always run. I think you may have misunderstood the concept of Node.js. In order for Node.js to serve pages, it has to be its own server. Once you start a server instance and start listening on ports, it will run until you close it.
Is it possible that you've called a function to close the server instead of just closing a given stream?