How to hook process exit event on Express? - javascript

process.on('exit', async () => {
    console.log('updating')
    await campaignHelper.setIsStartedAsFalse()
    console.log('exit')
    process.exit(1)
})
I'm trying to hook the process exit event and update a database field before the process exits.
updating is shown at exit, but nothing after it is executed.
The DB is Mongo.
The code is running in dev mode, so I'm using Ctrl+C to terminate the process.

For Ctrl-C (which you have now added to your question), the exit handler itself can't do what you want: Ctrl-C sends the SIGINT signal, and by the time the exit event fires it is too late for asynchronous work to complete.
Since the Ctrl-C is under your control, you could send a command to your server to do a normal shut-down and then it could do your asynchronous work and then call process.exit() rather than you just typing Ctrl-C in the console. This is what many real servers do in production. They have a control port (that is not accessible from the outside world) that you can issue commands to, one of which would be to do a controlled shut-down.
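node.js does also let you intercept the SIGINT signal that Ctrl-C sends and do your cleanup before exiting. A minimal sketch (campaignHelper is stubbed here so the example runs standalone; substitute your real module):

```javascript
// Stand-in for your real helper so this sketch runs on its own.
const campaignHelper = {
  setIsStartedAsFalse: async () => { /* update the Mongo document here */ }
};

// Intercept Ctrl-C (SIGINT), finish the async cleanup, then exit explicitly.
process.on('SIGINT', async () => {
  console.log('updating');
  await campaignHelper.setIsStartedAsFalse(); // async work completes here
  console.log('exit');
  process.exit(0); // exit only after the cleanup has finished
});
```

Note that once you register a SIGINT listener, node.js no longer performs its default termination on Ctrl-C, so you must call process.exit() yourself.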
Original answer
(before there was any mention of Ctrl-C being the shut-down initiation)
You can't run asynchronous operations on the exit event (it's too late in the shutdown sequence).
You can run asynchronous operations on the beforeExit event.
But, the beforeExit event is only emitted when nodejs exits naturally on its own because its queue of remaining scheduled work is empty (no open sockets, files, timers, etc...). It will not be emitted if the process exits abnormally (such as on an unhandled exception or Ctrl-C) or if process.exit() is called manually.
You can handle the case of manually calling process.exit() by replacing the call to process.exit() in your app with a call to a custom shutdown function that does your housekeeping work and then when that has successfully completed, you then call process.exit().
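A sketch of that replacement (campaignHelper.setIsStartedAsFalse() stands in for your own async cleanup and is stubbed here so the example runs standalone):

```javascript
// Stand-in for your real helper so this sketch runs on its own.
const campaignHelper = {
  setIsStartedAsFalse: async () => { /* update the Mongo document here */ }
};

// Call shutdown(code) everywhere the app previously called process.exit(code).
async function shutdown(code = 0) {
  console.log('updating');
  await campaignHelper.setIsStartedAsFalse(); // housekeeping completes here
  console.log('exit');
  process.exit(code); // safe now: all async work is done
}
```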

Related

Node.js - eventListener - How to log in a file while triggering an event

I saw in the node.js documentation that "Listener functions must only perform synchronous operations".
But when I tried to use fs.writeFileSync, it doesn't work:
net.createServer(function (socket) {
    [...]
    socket.on('error', (err) => {
        fs.writeFileSync('C:\temp\test.txt', err.message);
        console.log('Connection error:', err.message);
    });
}).listen(port);
Maybe I misunderstood something?
The console.log just allows my script not to crash when the socket is abruptly closed, for example due to an RST sent by a load balancer.
In production, I need to trace to a file whenever I get an error on the listening TCP port. But if I add the fs.writeFileSync line, my script keeps crashing and I don't get any log in my file. Is there any way to do this?
This quote from the documentation only applies to the event that it describes - process.on('exit'):
Listener functions must only perform synchronous operations. The Node.js process will exit immediately after calling the 'exit' event listeners causing any additional work still queued in the event loop to be abandoned.
This is not a general statement - you can trigger asynchronous operations in event listeners just fine (but note that the EventEmitter will not wait for your async work to finish - so be wary of concurrency issues). For logging, consider opening a file just once and appending to avoid additional overhead.
Usually, a logging library can take care of this for you. For example, winston can write directly to files. A popular alternative approach is to just log everything to stdout using a more light-weight library such as pino, and run Node.js under a process supervisor such as PM2 or docker that already captures the standard output and writes it to a file automatically.

Does node js execute multiple commands in parallel?

Does node js execute multiple commands in parallel or execute one command (and finish it!) and then execute the second command?
For example if multiple async functions use the same Stack, and they push & pop "together", can I get strange behaviour?
Node.js runs your main Javascript (excluding manually created Worker Threads for now) as a single thread. So, it's only ever executing one piece of your Javascript at a time.
But, when a server request involves asynchronous operations, what happens in that request handler is that it starts the asynchronous operation and then returns control back to the interpreter. The asynchronous operation runs on its own (usually in native code). While all that is happening, the JS interpreter is free to go back to the event loop and pick up the next event waiting to be run. If that's another incoming request for your server, it will grab that request and start running it. When it hits an asynchronous operation and returns back to the interpreter, the interpreter then goes back to the event loop for the next event waiting to run. That could either be another incoming request or it could be one of the previous asynchronous operations that is now ready to run its callback.
In this way, node.js makes forward progress on multiple requests at a time that involve asynchronous operations (such as networking, database requests, file system operations, etc...) while only ever running one piece of your Javascript at a time.
Starting with node v10.5, nodejs has Worker Threads. These are not yet used automatically by the system in normal service of networking requests, but you can create your own Worker Threads and run some amount of Javascript in a truly parallel thread. This probably isn't needed for code that is primarily I/O bound, because the asynchronous nature of I/O in Javascript already gives it plenty of parallelism. But if you have CPU-intensive operations written in Javascript (heavy crypto, image analysis, video compression, etc...), Worker Threads may well be worth adding for those particular tasks.
To show you an example, let's look at two request handlers, one that reads a file from disk and one that fetches some data from a network endpoint.
const fs = require('fs');
const got = require('got');   // promise-based HTTP client used below

app.get("/getFileData", (req, res) => {
    fs.readFile("myFile.html", function(err, html) {
        if (err) {
            console.log(err);
            res.sendStatus(500);
        } else {
            res.type('html').send(html);
        }
    });
});

app.get("/getNetworkData", (req, res) => {
    got("http://somesite.com/somepath").then(result => {
        res.json(result);
    }).catch(err => {
        console.log(err);
        res.sendStatus(500);
    });
});
In the /getFileData request, here's the sequence of events:
Client sends request for http://somesite.com/getFileData
Incoming network event is processed by the OS
When node.js gets to the event loop, it sees an event for an incoming TCP connection on the port its http server is listening on and calls a callback to process that request
The http library in node.js receives that request, parses it, and notifies the observers of that request, one of which is the Express framework
The Express framework matches up that request with the above request handler and calls the request handler
That request handler starts to execute and calls fs.readFile("myfile.html", ...). Because that is asynchronous, calling the function just initiates the process (carrying out the first steps), registers its completion callback and then it immediately returns.
At this point, you can see from that /getFileData request handler that after it calls fs.readFile(), the request handler just returns. Until the callback is called, it has nothing else to do.
This returns control back to the nodejs event loop where nodejs can pick out the next event waiting to run and execute it.
In the /getNetworkData request, here's the sequence of events
Steps 1-5 are the same as above.
6. The request handler starts to execute and calls got("http://somesite.com/somepath"). That initiates a request to that endpoint and then immediately returns a promise. Then, the .then() and .catch() handlers are registered to monitor that promise.
7. At this point, you can see from that /getNetworkData request handler that after it calls got().then().catch(), the request handler just returns. Until the promise is resolved or rejected, it has nothing else to do.
8. This returns control back to the nodejs event loop where nodejs can pick out the next event waiting to run and execute it.
Now, sometime in the future, fs.readFile("myFile.html", ...) completes. At this point, some internal sub-system (that may use other native code threads) inserts a completion event in the node.js event loop.
When node.js gets back to the event loop, it will see that event and run the completion callback associated with the fs.readFile() operation. That will trigger the rest of the logic in that request handler to run.
Then, sometime in the future the network request from got("http://somesite.com/somepath") will complete and that will trigger an event in the event loop to call the completion callback for that network operation. That callback will resolve or reject the promise which will trigger the .then() or .catch() callbacks to be called and the second request will execute the rest of its logic.
Hopefully, you can see from these examples how request handlers initiate an asynchronous operation, then return control back to the interpreter, where the interpreter can then pull the next event from the event loop and run it. Then, as asynchronous operations complete, further events are inserted into the event loop, driving progress on each request handler until eventually each one finishes its work. So, multiple sections of code can be making progress without more than one piece of code ever running at the same time. It's essentially cooperative multi-tasking, where the time slicing between operations occurs at the boundaries of asynchronous operations rather than via automatic pre-emptive time slicing as in a fully threaded system.
Nodejs gets a number of advantages from this type of multi-tasking: it has much lower overhead (cooperative task switching is far more efficient than time-sliced pre-emptive task switching), and it avoids most of the usual thread-synchronization issues that true multi-threaded systems have, which can make those systems a lot more complicated to code and more prone to difficult bugs.
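That interleaving can be seen in a tiny sketch, with two stand-in "handlers" whose async completions are ordered by the event loop rather than by call order (the timer delays stand in for I/O of different durations):

```javascript
const order = [];

// Each "handler" starts an async operation and returns immediately.
function handlerA() {
  order.push('A start');
  setTimeout(() => order.push('A done'), 10); // slower async operation
}

function handlerB() {
  order.push('B start');
  setTimeout(() => order.push('B done'), 5); // faster async operation
}

handlerA();
handlerB();
// After both timers fire, order is:
// ['A start', 'B start', 'B done', 'A done']
```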

Will a child process block the parent process in node.js?

I'm sorry if this sounds like a question I could just google, but I can't quite find the answer to it, or I couldn't understand the explanation.
My assumption is it would, or else how would it be possible to pipe a child process' output to the parent process?
But here's what I don't understand:
let { spawn } = require('child_process');

if (process.argv[2] === "child") {
    console.log("In if!!");
} else {
    const child = spawn(process.execPath, [__filename, "child"]);
    child.stdout.on("data", (data) => {
        console.log("In else!! ", data.toString());
    });
}
Why is it outputting
In else!! In if!!
I thought that spawning a child process executes it immediately: it would go into the if statement, and after logging In if!!, control would resume in the parent process, which then reaches the event listener and logs In else!!. Am I misunderstanding something?
My guess is that the console.log doesn't actually log but returns the In if!! string, which is then passed to the parent process as the data in the callback. But if that's the case, why doesn't it actually log?
Thank you for responding in advance.
Yours is a perfectly valid question.
Remember that even if you spawn multiple processes (each of which is then individually managed by the operating system), code execution inside each NodeJS process remains single-threaded.
The first thing to notice about your code is that you are using the async version of the spawn command. Child Process is a NodeJS API, so its execution is governed by NodeJS rules (single thread): it runs like any other async operation in NodeJS, and the new "independent" process does not start working until the spawn call has actually executed.
With that being said, your parent process adds the spawn to its pending work and runs it when it finishes the current work (i.e. when your script's synchronous code ends).
If you want your parent process to wait for the child process to finish, you will have to use the spawnSync command instead.
See Asynchronous Process Creation and Synchronous Process Creation in the NodeJS Child Process API Documentation for more info.

Node JS Asynchronous Execution and Event Emission / Listener Models

I am new to Node JS and am trying to understand the concurrent / asynchronous execution models of Node.
So far, I understand that whenever an asynchronous task is encountered in Node, that task runs in the background (e.g. an asynchronous setTimeout function will start timing) and control is then returned to the other tasks on the call stack. Once the timer times out, the callback that was passed to the asynchronous task is pushed onto the callback queue, and once the call stack is empty, that callback gets executed. I used this visualization to understand the sequence of task execution. So far so good.
Q1. Now, I am not able to wrap my head around the paradigm of event listeners and event emitters and would appreciate it if someone could explain how event emitters and listeners fall into the picture of call stack, event loops and callback queues.
Q2. I have the following code that reads data from the serial port of a raspberry pi.
const SerialPort = require('serialport');

const port = new SerialPort('/dev/ttyUSB0', { baudRate: 9600 }, (err) => {
    if (err) {
        console.log("Port Open Error: ", err);
    }
});

port.on('data', (data) => {
    console.log(data.toString());
});
As can be seen from the example, to read data from the serial port, an 'event-listener' has been employed. From what I understand, whenever data comes to the port, a 'data' event is emitted which is 'responded to' or rather listened to by the listener, which just prints the data onto the console.
When I run the above program, it runs continuously, with no break, printing the data onto the console whenever data arrives at the serial port. There are no continuously running while loops continuously scanning the serial port as would be expected in a synchronous program. So my question is, why is this program running continuously? It is obvious that the event emitter is running continuously, generating an event whenever data comes, and the event listener is also running continuously, printing the data whenever a 'data' event is emitted. But WHERE are these things actually running, that too, continuously? How are these things fitting into the whole picture of the call/execution stack, the event loop and the callback queue?
Thanks
Q1. Now, I am not able to wrap my head around the paradigm of event listeners and event emitters and would appreciate it if someone could explain how event emitters and listeners fall into the picture of call stack, event loops and callback queues.
Event emitters on their own have nothing to do with the event loop. Event listeners are called synchronously whenever someone emits an event. When some code calls someEmitter.emit(...), all listeners are called synchronously from the time the .emit() occurred one after another. This is just plain old function calls. You can look in the eventEmitter code yourself to see a for loop that calls all the listeners one after another associated with a given event.
Q2. I have the following code that reads data from the serial port of a raspberry pi.
The data event in your code is an asynchronous event. That means that it will be triggered one or more times at an unknown time in the future. Some lower level code will be registered for some sort of I/O event. If that code is native code, then it will insert a callback into the node.js event queue. When node.js is done running other code, it will grab the next event from the event queue. When it gets to the event associated with data being available on the serial port, it will call port.emit(...) and that will synchronously trigger each of the listeners for the data event to be called.
When I run the above program, it runs continuously, with no break, printing the data onto the console whenever data arrives at the serial port. There are no continuously running while loops continuously scanning the serial port as would be expected in a synchronous program. So my question is, why is this program running continuously?
This is the event-driven nature of node.js in a nutshell. You register an interest in certain events. Lower level code sees that incoming data has arrived and triggers those events, thus calling your listeners.
This is how the Javascript interpreter manages the event loop. Run current piece of Javascript until it's done. Check to see if any more events in the event loop. If so, grab next event and run it. If not, wait until there is an event in the event queue and then run it.
It is obvious that the event emitter is running continuously, generating an event whenever data comes, and the event listener is also running continuously, printing the data whenever a 'data' event is emitted. But WHERE are these things actually running, that too, continuously?
The event emitter itself is not running continuously. It's just a notification scheme (essentially a publish/subscribe model) where one party can register an interest in certain events with .on() and another party can trigger certain events with .emit(). It allows very loose coupling through a generic interface. Nothing is running continuously in the emitter system. It's just a notification scheme. Someone triggers an event with .emit(), and the emitter looks in its data structures to see who has registered an interest in that event and calls them. It knows nothing about the event or the data itself or how it was triggered. The emitter's job is just to deliver notifications to those who expressed an interest.
We've described so far how the Javascript side of things works. It runs the event loop as described above. At a lower level, there is serial port code that interfaces directly with the serial port and this is likely some native code. If the OS supports a native asynchronous interface for the serial port, then the native code would use that and tell the OS to call it when there's data waiting on the serial port. If there is not a native asynchronous interface for the serial port data in the OS, then there's probably a native thread in the native code that interfaces with the serial port that handles getting data from the port, either polling for it or using some other mechanism built into the hardware to tell you when data is available. The exact details of how that works would be built into the serial port module you're using.
How are these things fitting into the whole picture of the call/execution stack, the event loop and the callback queue?
The call/execution stack comes into play the moment an event in the Javascript event queue is found by the interpreter and it starts to execute it. Executing that event will always start with a Javascript callback. The interpreter will call that callback (putting a return address on the call/execution stack). That callback will run until it returns. When it returns, the call/execution stack will be empty. The interpreter will then check to see if there's another event waiting in the event queue. If so, it will run that one.
FYI, if you want to examine the code for the serial port module it appears you are using, it's all there on Github. It does appear to have a number of native code files. You can see a file called poller.cpp here and it appears to do cooperative polling using the node.js add-on programming interface offered by libuv. For example, it creates a uv_poll_t which is a poll handle described here. Here's an excerpt from that doc:
Poll handles are used to watch file descriptors for readability, writability and disconnection similar to the purpose of poll(2).
The purpose of poll handles is to enable integrating external libraries that rely on the event loop to signal it about the socket status changes, like c-ares or libssh2. Using uv_poll_t for any other purpose is not recommended; uv_tcp_t, uv_udp_t, etc. provide an implementation that is faster and more scalable than what can be achieved with uv_poll_t, especially on Windows.
It is possible that poll handles occasionally signal that a file descriptor is readable or writable even when it isn’t. The user should therefore always be prepared to handle EAGAIN or equivalent when it attempts to read from or write to the fd.

NodeJS: client.emit('message') needs delay after connection in order to be seen

I'm using socket.io in a Node application. Here is a snippet from my code:
io.sockets.on('connection', socket => {
    setTimeout(function () {
        console.log('a client connected!')
        clients.forEach(s => s.emit('to_client', 'a client connected'))
    }, 0)
})
If I remove the setTimeout wrapper, 'a client connected' is not seen in the console of the client (Chrome browser), however, even with a timeout of zero, it does show up. What could be the issue? I would prefer going without the setTimeout since it does not sound like something that should be required here.
Node is an asynchronous, single-threaded runtime, so it uses callbacks to avoid blocking on I/O.
Using setTimeout is one way of deferring work (along with Node's built-in process.nextTick() method for handling asynchronous code). Your example code accesses clients, and I suspect whatever populates it has not been initialised by the time your connection callback executes.
The setTimeout method pushes the code (the callback function) onto the event queue, so anything currently on the call stack is processed before that setTimeout callback can run.
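A minimal sketch of that deferral:

```javascript
const order = [];

// setTimeout(fn, 0) queues fn behind everything already scheduled,
// so the current call stack always finishes first.
setTimeout(() => order.push('timeout callback'), 0);
order.push('synchronous code');

// Once the stack unwinds and the timer fires, order is:
// ['synchronous code', 'timeout callback']
```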
