NodeJS Event Loop Fundamentals - javascript

I'm sure it's a commonly asked question but didn't find a concrete answer.
I kind of understand the basic concept of NodeJS and its asynchronous/non-blocking nature of processing I/O.
For argument's sake, let's take a simple example of an HTTP server written in node that executes the unix command 'find /' and writes the result to the http response (therefore displaying the result of the command in the user's browser).
Let's assume that this takes 3 seconds.
Let's assume that there are two users 'A' and 'B' requesting through their browsers exactly at the same time.
As I understand it, the users' requests are queued in the event queue (Message A, Message B). Each message also has a reference to its associated callback to be executed once the processing is done.
Since the event loop is single-threaded and processes the events one by one,
in my above example, will it take 6 seconds for the callback of "User B" to get triggered? [3 for "User A"'s event processing and 3 for its own event processing]
It sounds like I'm missing something here.
Worse, what if 100 users are requesting at the same millisecond? The 100th event's owner is going to be the most unfortunate user and has to wait for an eternity.
As I understand it, there is only one event queue in the runtime, so the above problem can apply to any user in any part of the application. For example, a slow database query in web page X would slow down a different user in web page Y?
Fundamentally, I see a problem in serial processing of events and serial execution of their associated callbacks.
Am I missing something here?

A properly written node.js server will use async I/O and communication for any networking, disk I/O, timers or communication with other processes. When written this way, multiple http requests can be worked on in parallel. Though the node.js code that processes any given request is only run one at a time, anytime one request is waiting for I/O (which is typically much of the time of a request), then other requests can run.
The end result is that all requests appear to progress at the same time (though in reality, the work on them is interwoven). The Javascript event queue is the mechanism for serializing the work among all the various requests. Whenever an async operation finishes its work or wishes to notify the main JS thread of some event, it puts something in the event queue. When the current thread of JS execution finishes (even if it has its own async operations in progress), the JS engine looks in the event queue and then executes the next item in that queue (usually some form of a callback) and, in that way, the next queued operation proceeds.
In your specific example, when you fire up another process and then asynchronously wait for its result, the current thread of execution finishes and then the next item in the event queue gets to run. If that next item is another http request, then that request starts processing. When this second request then hits some async point, its thread of execution finishes and again the next item in the event queue runs. In this way, new http requests get started and callbacks from async operations that have finished get to run. Things happen in roughly a FIFO (first-in, first-out) order for how they are put in the event queue. I say "roughly" because there are actually different types of events and not all are serialized equally, but for the purpose of this discussion that implementation detail can be ignored.
So, if three http requests arrive at the exact same time, then one will run until it hits an async point. Then, the next will run until it hits an async point. Then, the third will run until it hits an async point. Then, whichever request finishes its first async operation will get a callback from that async operation and it will run until it is done or hits another async point. And, so on...
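As a rough sketch of the question's scenario (not production code; the maxBuffer size is an assumption, since 'find /' produces a lot of output), an HTTP server like this never blocks while the command runs:

const http = require('http');
const { exec } = require('child_process');

http.createServer((req, res) => {
  // exec() runs the command in a separate process; this callback is queued
  // on the event loop only when the command finishes, so the main thread is
  // free to start handling User B's request while User A's command runs.
  exec('find /', { maxBuffer: 64 * 1024 * 1024 }, (err, stdout) => {
    if (err) {
      res.writeHead(500);
      return res.end('command failed');
    }
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(stdout);
  });
}).listen(3000);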
Since much of what usually causes a web server to take a long time to respond is some sort of I/O operation (disk or networking), which can all be programmed asynchronously in node.js, this whole process generally works quite well and it's actually a lot more efficient with server resources than using a separate thread per request. The one time that it doesn't work very well is if there's a heavy compute-intensive or long-running, but not asynchronous, operation that ties up the main node.js thread for long periods of time. Because the node.js system is a cooperative CPU sharing system, a long running operation that ties up the main node.js thread will hog the system (there is no pre-emptive sharing at all with other operations like there could be with a multi-threaded system). Hogging the system makes all other requests wait until the first one is done. The node.js answer to some CPU-hogging computation would be to move that one operation to another process and communicate asynchronously with that other process from the node.js thread - thus preserving the async model for the single node.js thread.
For node.js database operations, the database will generally provide an async interface for node.js programming to use the database in an async fashion and then it is up to the implementation of the database interface to actually implement the interface in an async fashion. This will likely be done by communicating with some other process where the actual database logic is implemented (probably communicating via TCP). That actual database logic may use actual threads or not - that's an implementation detail that is up to the database itself. What is important to node.js is that the computation and database work is out of the node.js thread in some other process, perhaps even on another host so it does not block the node.js thread.

Related

Promise.all() vs await

I'm trying to understand node.js's single-threaded architecture and the event loop to make our application more efficient. So consider this scenario where I have to make several database calls for an HTTP API call. I can do it using Promise.all() or using separate awaits.
example:
Using async/await
await inserToTable1();
await insertToTable2();
await updateTable3();
Using Promise.all() I can do the same by
await Promise.all([inserToTable1(), insertToTable2(), updateTable3()])
Here, for one API hit at a given time, Promise.all() will be quicker to return the response as it fires the DB calls in parallel. But if I have 1000 API hits per second, will there be any difference? For this scenario, is Promise.all() better for the event loop?
Update
Assume the following,
By 1000 API hits, I meant the overall traffic to the application. Consider there are 20-25 APIs. Out of these, a few might do DB operations, a few might make a few http calls, etc. Also, at no point will we be reaching the DB pool's max connections.
Thanks in advance!!
As usual when it comes to system design, the answer is: it depends.
There are a lot of factors that determine the performance of either. In general, awaiting a single Promise.all() waits for all requests in parallel.
Event Loop
The event loop uses exactly 0% CPU time to wait for a request. See my answer to this related question for an explanation of how exactly the event loop works: Performance of NodeJS with large amount of callbacks
So from the event loop point of view there is no real difference between requesting sequentially and requesting in parallel with a Promise.all(). So if this is the core of your question I guess the answer is there is no difference between the two.
However, processing the callbacks does take CPU time. Again, the time to complete executing all the callbacks is the same. So from the point of view of CPU performance, again there is no difference between the two.
Making requests in parallel does reduce overall execution time however. Firstly, if the service is multithreaded you are essentially using its multithreadedness by making parallel requests. This is what makes node.js fast even though it's single-threaded.
Even if the service you are requesting from isn't multithreaded and actually handles requests sequentially, or if the server you're requesting from is a single-core CPU (rare these days but you can still rent single-core virtual machines), then parallel requests reduce networking overhead since your OS can send multiple requests in a single Ethernet frame, thus amortizing the overhead of packet headers over several requests. This does have a diminishing return beyond around half a dozen parallel requests however.
One Thousand Requests
You've hypothesized making 1000 requests. Whether or not awaiting 1000 promises in parallel actually causes parallel requests depends on how the API works at the network level.
Connection pools.
Lots of database libraries implement connection pools. That is, the library will open some number of connections to the database, for example 5, and reuse the connections.
In some implementations, making 1000 requests via such a library will cause the low-level networking code of the library to batch them 5 requests at a time. This means that at most you can have 5 parallel requests (assuming a pool of 5 connections). In this case it is perfectly safe to make 1000 parallel requests.
Some implementations however have a growable connection pool. In such implementations, making 1000 parallel requests will cause your software to open 1000 sockets to access the remote resource. In such cases, how safe it is to make 1000 parallel requests will depend on whether the remote server allows this.
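As an illustration of the fixed-pool case, here is a sketch using the mysql2 library as an assumed example (the connection details and the queries array are hypothetical):

const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  host: 'localhost',  // hypothetical connection details
  user: 'app',
  database: 'test',
  connectionLimit: 5, // the pool never opens more than 5 sockets
});

async function runAll(queries) {
  // Firing 1000 queries at once is safe here: the pool feeds them to the
  // database at most 5 at a time and queues the rest internally.
  return Promise.all(queries.map((q) => pool.query(q)));
}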
Connection limit.
Most databases, such as MySQL and PostgreSQL, allow the admin to configure a connection limit, for example 5, such that the database will reject more than the limited number of connections per IP address. If you use a library that does not automatically manage the maximum number of connections to your database, then your database will accept the first 5 requests and reject the remaining ones until another slot is available (it's possible that a connection is freed before node.js finishes opening the 1000th socket). In this case you cannot successfully make 1000 parallel requests - you need to manage how many parallel requests you make.
Some API services also limit the number of connections you can make in parallel. Google Maps for example limits you to 500 requests per second. Therefore awaiting 1000 parallel requests will cause 50% of your requests to fail and possibly cause your API key or IP address to be banned.
Networking limits.
There is a theoretical limit on the number of sockets your machine or a server can open. However this number is extremely high so it's not worth discussing here.
However, all OSes currently in existence limit the maximum number of open sockets. On Linux (eg. Ubuntu & Android) and Unix (eg. MacOSX and iOS), sockets are implemented as file descriptors, and there is a maximum number of file descriptors allocated per process.
For Linux this number usually defaults to 1024 files. Note that a process opens 3 file descriptors by default: stdin, stdout and stderr. That leaves 1021 file descriptors shared by files and sockets. So your 1000 requests in parallel skirt very close to this number and may fail if two clients try to make 1000 parallel requests at the same time.
This number can be increased but it does have a hard limit. The current maximum number of file descriptors you can configure on Linux is 590432. However this extreme configuration only works properly on a single user system with no daemons (or other background programs) running.
What to do?
The first rule when writing networking code is try not to break the network. Be reasonable in the number of requests you make at any one time. You can batch your requests to the limit of what the service expects.
With async/await it's easy. You can do something like this:
let parallel_requests = 10;
while (one_thousand_requests.length > 0) {
  // take up to `parallel_requests` pending request functions off the list
  let batch = [];
  for (let i = 0; i < parallel_requests; i++) {
    let req = one_thousand_requests.pop();
    if (req) {
      batch.push(req());
    }
  }
  // wait for the whole batch to finish before starting the next batch
  await Promise.all(batch);
}
Generally, the more requests you can make in parallel, the better (shorter) the overall process time will be. I guess this is what you wanted to hear. But you need to balance parallelism with the factors above. 5 is generally OK. 10 maybe. 100 will depend on the server responding to the requests. 1000 or more and the admin who installed the server will probably have to tune his OS.
The await approach suspends the function's execution at every await call and executes the operations sequentially, while Promise.all can execute them in parallel (asynchronously) and returns success when all of them are successful.
So it's better to use Promise.all if your three methods (inserToTable1(), insertToTable2(), updateTable3()) are independent.
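A small sketch of the difference, using setTimeout-based stand-ins for the three independent database calls (the 100 ms latencies are made up for illustration):

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sequential() {
  await delay(100); // stand-in for inserToTable1()
  await delay(100); // stand-in for insertToTable2()
  await delay(100); // stand-in for updateTable3()
} // finishes after roughly 100 + 100 + 100 = 300 ms

async function parallel() {
  // all three run concurrently, so this finishes after roughly the
  // longest single latency (~100 ms) rather than the sum
  await Promise.all([delay(100), delay(100), delay(100)]);
}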
JavaScript's ability to execute other work while heavy operations are in progress, by suspending, is achieved through the event loop and the call stack.
Event Loops
The decoupling of the caller from the response allows the JavaScript runtime to do other things while waiting for your asynchronous operations to complete and their callbacks to fire.
JavaScript runtimes contain a message queue which stores a list of messages to be processed and their associated callback functions. These messages are queued in response to external events (such as a mouse being clicked or receiving the response to an HTTP request) given a callback function has been provided.
The Event Loop has one simple job — to monitor the Call Stack and the Callback Queue. If the Call Stack is empty, it will take the first event from the queue and will push it to the Call Stack, which effectively runs it.
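A tiny demonstration of that behaviour: the setTimeout callback is queued immediately, but it only runs once the current call stack has emptied:

console.log('start');
setTimeout(() => console.log('queued callback'), 0); // goes to the callback queue
console.log('end');
// prints: start, end, queued callback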

How to use pdfkit npm in async manner

I have written an application in node.js which takes input from the user and generates PDF files based on a few templates.
I am using the pdfkit npm package for this purpose. My application is running in production, but it is very slow. Below are the reasons:
What problem I am facing:
It is working in a sync manner. I can explain it by giving an example: suppose a request comes to the application to generate a pdf; it starts processing, and after processing it returns the response with the generated pdf url. But if multiple requests come to the server, it processes each request one by one (in a sync manner).
All requests in the queue have to wait until the previous one is finished.
Most of the time my application gives a Timeout or Internal Server Error.
Why I cannot change the library:
There are 40 templates I have written in js for pdfkit, and each template is 1000 - 3000 lines.
If I change the lib, I have to rewrite those templates according to the new library.
It will take many months to rewrite and test it properly.
What solution I am using now:
I am managing a queue now; once a request comes it gets queued and an acknowledgment message is sent back in response to the user.
Why this solution is not feasible:
The user should be provided a valid pdf url upon success of the request. But in the queue approach, the user gets only a confirmation message, and the pdf is processed later in the queue.
What kind of solution am I seeking now?
Is there any way through which I can make this application multi-threaded/asynchronous, so that it will be capable of handling multiple requests at a time without blocking resources?
Please save my life.
I hate to break it to you, but doing computation in the order tasks come in is a pretty fundamental part of node. It sounds like loading these templates is a CPU-bound task, and since Node is single-threaded, it knocks these off the queue in the order they come in.
On the other hand, any framework would have a similar problem. Node being single-threaded means it's actually very efficient, because it doesn't lose cycles to context switching.
How many PDF-generations can your program handle at once? What type of hardware are you running this on? If it's failing on a few requests a second, then there's probably a programming fix.
For node, the more things you can make asynchronous the better. For example, any time you're reading a file in, it should be asynchronous.
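For example, a minimal sketch (the template path is hypothetical):

const fs = require('fs');

// Blocking: the whole process stalls while the template loads.
// const template = fs.readFileSync('templates/invoice.js');

// Non-blocking: other requests can be serviced while the read is pending.
fs.readFile('templates/invoice.js', (err, template) => {
  if (err) throw err;
  // ...build the PDF from the template here
});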
Can you post the code for one of your PDF-creating request functions?

How to stop all functions (multiple asyncs etc) from executing if the user aborts connecting to the server (refresh while connecting, close browser, etc..)

I'm currently working with React.js and Server-Side rendering where I have multiple async calls requesting json from an API server. So the problem is that if the user stops the connection, the functions keep executing even though there is nobody to serve the result to.
For example, if I hold down the refresh button, the node.js Express server will keep sending hundreds of async requests (and probably running non-async functions as well, which take longer to execute) and then executing functions once responses are received.
So basically I need some way to stop functions from firing if the user stops the HTTP request, worst case when holding the refresh button down...
I've tried to use res.end(), but the functions keep firing. Is there some smart way to listen to an event and stop the async or something?
requests are made with "superagent" and async with "async.js"
Thanks.
There are no particularly great options for you on the server.
If a single client sends off multiple requests to the server and then, while the requests are being processed or are in the server queue waiting to be processed, the end user hits refresh, then the browser will drop/close the open Ajax sockets that the currently open web page has.
But, requests that are already underway on the server will continue to process and the server will not even know that those sockets have been closed until the server tries to write to those sockets. At that point, it may (depending upon timing) discover that the sockets have been closed. But, of course by that time, you've already processed the requests and have the results ready to go. It may also occur that the request is processed and sent before the server is even aware that the sockets have been closed. This will eventually cause an error on the socket (and a resulting close of the socket) when no response comes back from the other end or a close comes back from the other end while awaiting confirmation of the data that was sent (since TCP is a reliable protocol, every send must be ACKed by the other end of the connection).
So, this all explains why there really isn't a whole lot you can do on the server.
The situation you describe will be exacerbated if the client is sending multiple API requests in parallel and the end-user is somehow led to refresh the page in the middle of that. The things you can do to lessen this issue are as follows:
Combine multiple requests into a single API request. If your API already permits that type of structuring of the request, then switch to that. Or, if you control the API, then add new capabilities to the API so that rather than having a client send multiple requests in parallel, it can send a single request and get all the data in that one request. This limits the number of requests that might get caught in process when the web page is closed.
Limit the number of parallel requests you send. You can do this either by using the Async library feature that controls how many requests are sent at once from a list of requests you're trying to send or you can serialize your requests so one is sent only after the previous one finishes. This would also limit the number of requests that might get caught in process when the web page is closed.
If you had a particularly expensive server-side operation (something that might take minutes to run such as building a large report file), you could devise some sort of check-in with the client and then during your processing you could check to see if the client was still active and if not, then you could abort your own server-side processing. For most normal types of server-side requests, this would probably not be worth the extra trouble (since verifying the client is still active has its own cost), but if you have a particularly costly server-side operation, you could implement something like this.
If you have async operations in your request processing and if you're willing to put checks into your request handling after each async operation, then you could register a listener for the 'close' event with req.connection.addListener('close', function () { ... });, then set some sort of flag on that request that you check after each async operation and then abort the rest of the request processing whenever you discover that flag on that connection. I scoured the net module docs for some flag or method on the socket object that would tell you if it's already been closed (such information should be there), but I could not find it mentioned in the nodejs doc. A sketch of this pattern follows below.
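Here is a sketch of that pattern, assuming an Express app and two hypothetical async steps (stepOne and stepTwo stand in for your real operations):

const express = require('express'); // assumes Express, as in the question
const app = express();

app.get('/report', (req, res) => {
  let clientGone = false;
  req.connection.addListener('close', () => { clientGone = true; });

  stepOne() // hypothetical async operation returning a promise
    .then((result) => {
      // check the flag after each async operation and bail out early
      if (clientGone) return;
      return stepTwo(result); // hypothetical second async step
    })
    .then((result) => {
      if (clientGone || result === undefined) return;
      res.json(result);
    });
});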
Aborting requests halfway through isn't the job of async. From async's point of view, a request to the server is just a single task that can't be split up any further. If it is possible to do something like this, it would be part of the built-in http module or express. But I'm not sure that it would be very helpful, even if there is a way of doing it. The request has probably reached the internet by the time you try to unsend it. What you really want is not to send so many requests in the first place.
You haven't posted any code, so I can't write modifications to your code in response. Instead of having an event handler that says "send a request and do something with the response", write one that says "if you haven't sent a request in the last 2 seconds, send one and do something with the response, and also save the response in a variable; otherwise, look at the value of the variable and pretend you've received it as the response". (I'm assuming that all requests are identical - if not, then you need to think a bit more about it.)

What happens if all node.js's worker threads are busy

I'm trying to understand how node.js works, and although I have read this article: When is the thread pool used?, I am not sure what happens if all worker threads are busy and another async I/O operation is ready to be executed.
If I got this http://www.future-processing.pl/blog/on-problems-with-threads-in-node-js/ article right, the event loop is blocked until a worker thread is free to take care of an additional I/O operation. This would imply that if five users tried to access a webpage simultaneously (like their profile page which, let's say, requires a db-query), the 5th user would be blocked until the first db-query is done and that worker thread is free again?
I/O in general does not block the event loop (there are some exceptions like the crypto module currently and things like fs.*Sync() methods) and especially in the case of network I/O, the libuv thread pool is not used at all (only for things like dns (currently) and fs operations).
If your database driver is written in C++ as a node.js addon, there is a chance that it could either block the event loop (it's doing something synchronous) or it could be using the libuv thread pool. However if the database driver is only written in JavaScript, it typically uses some sort of network I/O, which as previously mentioned does not block anything and does not use the libuv thread pool.
So with your example, depending on how the database driver is implemented, all 5 users could be serviced at once. For example, MySQL at the protocol level only supports one outstanding query at a time per connection. So what most MySQL drivers for node.js will do is queue up additional queries until the current query has finished. However, it's entirely possible for a MySQL driver to internally maintain some sort of connection pool so that you have greater concurrency.
However, if each of the 5 requests were instead causing something to be read from disk, then it's possible that the 5th request will have to wait until one of the other 4 fs requests has finished, due to the current default size of the libuv thread pool. This doesn't mean the event loop itself is blocked, as it can still service new incoming requests and other things, but the 5th client will just have to wait a bit longer.
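A sketch of that disk-bound case (the file path is hypothetical): with the default pool of 4 threads, reads 0-3 start immediately and the 5th is queued, while the event loop itself stays free:

const fs = require('fs');

for (let i = 0; i < 5; i++) {
  const start = Date.now();
  fs.readFile('/path/to/some/large/file', () => {
    console.log(`read ${i} finished after ${Date.now() - start} ms`);
  });
}
console.log('event loop is still free'); // prints before any read completes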
Is there any kind of queue for operations which are carried out in worker threads?
Yes. libuv manages this queue. It is possible for all of the worker threads to be occupied, in which case new asynchronous requests that go to the worker pool will not make progress. Note that an "asynchronous FS request" must still be completed somewhere, and will block the worker thread on which it is being handled until it completes.

How does Nodejs's internal threadpool work, exactly?

I have read a lot of articles about how NodeJs works, but I still cannot figure out exactly how the internal threads of NodeJs process I/O operations.
In this answer https://stackoverflow.com/a/20346545/1813428, he said there are 4 internal threads in the thread pool of NodeJs to process I/O operations. So what if I have 1000 requests coming at the same time, every request wanting to do I/O operations like retrieving an enormous amount of data from the database? NodeJs will deliver these requests to those 4 worker threads respectively without blocking the main thread. So the maximum number of I/O operations that NodeJs can handle at the same time is 4 operations. Am I wrong?
If I am right, where will the remaining requests be handled? The main single thread is non-blocking and keeps driving the requests to the corresponding operators, so where will these requests go while all the worker threads are full of tasks?
In the image below, all of the internal worker threads are full of tasks; assume all of them need to retrieve a lot of data from the database while the main single thread keeps driving new requests to these workers. Where will these requests go? Does it have an internal task queue to store these requests?
The single, per-process thread pool provided by libuv creates 4 threads by default. The UV_THREADPOOL_SIZE environment variable can be used to alter the number of threads created when the node process starts, up to a maximum value of 1024 (as of libuv version 1.30.0).
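For example (a sketch; the variable must be set before anything triggers creation of the pool, since libuv reads it when the pool is first used):

// in the shell: UV_THREADPOOL_SIZE=16 node server.js
// or, assumed to work only at the very top of the entry script,
// before any fs/dns/crypto work runs:
process.env.UV_THREADPOOL_SIZE = '16';

const fs = require('fs');
fs.readFile(__filename, () => {}); // fs ops can now use up to 16 pool threads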
When all of these threads are blocked, further requests to use them are queued. The API method to request a thread is called uv_queue_work.
This thread pool is used for any system calls that will result in blocking IO, which includes local file system operations. It can also be used to reduce the effect of CPU intensive operations, as @Andrey mentions.
Non-blocking IO, as supported by most networking operations, doesn't need to use the thread pool.
If the source code for the database driver you're using is available and you're able to find reference to uv_queue_work then it is probably using the thread pool.
The libuv thread pool documentation provides more technical details, if required.
In the image below, all of the internal worker threads are full of tasks; assume all of them need to retrieve a lot of data from the database while the main single thread keeps driving new requests to these workers
This is not how node.js uses those threads.
As per Node.js documentation, the threads are used like this:
All requests and responses are "handled" in the main thread. Your callbacks (and code after await) simply take turns to execute. The "loop" between the javascript interpreter and the "event loop" is usually just a while loop.
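Conceptually (this is a toy sketch, not real node.js internals), that loop behaves something like this:

// a minimal "event loop": pop queued callbacks and run each to completion
const queue = [];
const enqueue = (cb) => queue.push(cb);

enqueue(() => console.log('callback 1'));
enqueue(() => console.log('callback 2'));

while (queue.length > 0) {
  const callback = queue.shift(); // oldest item first (FIFO)
  callback();                     // runs to completion; nothing preempts it
}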
Apart from worker_threads that you yourself start, there are only 4 things node.js uses threads for: waiting for DNS responses, disk I/O, the built-in crypto library and the built-in zip library. worker_threads are the only places where node.js executes javascript outside the main thread. All other uses of threads execute C/C++ code.
If you want to know more, I've written several answers to related questions:
Node js architecture and performance
how node.js server is better than thread based server
node js - what happens to incoming events during callback execution
Does javascript process using an elastic racetrack algorithm
Is there any other way to implement a "listening" function without an infinite while loop?
No, the main use case for the thread pool is offloading CPU-intensive operations. IO is performed in one thread - you don't need multiple threads if you are waiting for external data in parallel, and the event loop is exactly a technique to organise the execution flow so that you wait for as much as possible in parallel.
Example:
You need to send 100 emails with a question (y/n), and then another one with the number of "y" answers. It takes about 30 seconds to write an email and two hours on average for a reply, plus 10 seconds to read a response. You start by writing all 100 emails (50 minutes of time), then you wait for the alert sound that wakes you up every time a reply arrives, and as you receive answers you increase the count of "y" answers. In ~2 hours and 50 minutes you're done. This is an example of async IO and an event loop (no thread pools).
Blocking example: send an email, wait for the answer, repeat. Takes 4 days (two if you can clone another you).
Async thread pool example: each response is in a language you don't know. You have 4 translator friends. You email the text to them, and they email the translated text back to you (or, more accurately: you print the text and put it into a "needs translation" folder; whenever a translator is available, text is pulled from the folder).
