How is JavaScript never blocking? [duplicate] - javascript

I'm not a Node programmer, but I'm interested in how the single-threaded non-blocking IO model works.
After I read the article understanding-the-node-js-event-loop, I'm really confused about it.
It gave an example for the model:
c.query(
  'SELECT SLEEP(20);',
  function (err, results, fields) {
    if (err) {
      throw err;
    }
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.end('<html><head><title>Hello</title></head><body><h1>Return from async DB query</h1></body></html>');
    c.end();
  }
);
Question: Suppose there are two requests, A (which arrives first) and B. Since there is only a single thread, the server-side program handles request A first: the SQL query is a sleep statement standing in for an I/O wait. The program appears stuck at that I/O wait and cannot execute the code that renders the web page afterwards. Will the program switch to request B during the wait? In my opinion, because of the single-thread model, there is no way to switch from one request to another. But the title of the example code says that everything runs in parallel except your code.
(P.S. I'm not sure if I misunderstand the code or not, since I have never used Node.) How does Node switch from A to B during the wait? And can you explain Node's single-threaded non-blocking IO model in a simple way? I would appreciate it if you could help me. :)

Node.js is built upon libuv, a cross-platform library that abstracts the APIs/syscalls for asynchronous (non-blocking) input/output provided by the supported operating systems (Unix, OS X and Windows, at least).
Asynchronous IO
In this programming model, open/read/write operations on devices and resources (sockets, the filesystem, etc.) don't block the calling thread (as in the typical synchronous C-like model); instead, they just mark the process (in a kernel/OS-level data structure) to be notified when new data or events are available. In the case of a web-server-like app, the process is then responsible for figuring out which request/context the notified event belongs to, and for proceeding with that request from there. Note that this necessarily means you'll be on a different stack frame from the one that originated the request to the OS, as the latter had to yield to the process's dispatcher in order for a single-threaded process to handle new events.
The problem with this model is that it's unfamiliar and hard for the programmer to reason about, as it's non-sequential in nature: you need to make the request in function A and handle the result in a different function, where your locals from A are usually not available.
Node's model (Continuation Passing Style and Event Loop)
Node tackles the problem by leveraging JavaScript's language features to make this model look a little more synchronous, by inducing the programmer to adopt a certain programming style. Every function that requests IO has a signature like function (... parameters ..., callback) and needs to be given a callback that will be invoked when the requested operation completes (keep in mind that most of the time is spent waiting for the OS to signal completion - time that can be spent doing other work). JavaScript's support for closures allows you to use variables defined in the outer (calling) function inside the body of the callback - this lets you keep state between the different functions that will be invoked by the Node runtime independently. See also Continuation Passing Style.
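As a small illustration of that style, here is a sketch using Node's fs module; the file path and function name are made up for the example:

const fs = require('fs');

function loadGreeting(name, callback) {
  // 'name' is a local of the outer function; the closure below can still use it
  // when the callback fires on a later turn of the event loop.
  fs.readFile('/tmp/greeting.txt', 'utf8', function (err, template) {
    if (err) {
      return callback(err);
    }
    // State from the outer call ('name') is available here thanks to the closure.
    callback(null, template.replace('%s', name));
  });
}

loadGreeting('world', function (err, greeting) {
  if (err) throw err;
  console.log(greeting);
});
console.log('loadGreeting returned; the callback runs later');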
Moreover, after invoking a function that spawns an IO operation, the calling function usually returns control to node's event loop. The loop will then invoke the next callback or function that was scheduled for execution (most likely because the corresponding event was notified by the OS) - this allows the concurrent processing of multiple requests.
You can think of node's event loop as somewhat similar to the kernel's dispatcher: the kernel would schedule a blocked thread for execution once its pending IO is completed, while node will schedule a callback when the corresponding event has occurred.
Highly concurrent, no parallelism
As a final remark, the phrase "everything runs in parallel except your code" does a decent job of capturing the point that node allows your code to handle requests from hundreds of thousands of open sockets concurrently with a single thread, by multiplexing and sequencing all your js logic in a single stream of execution (even though saying "everything runs in parallel" is probably not correct here - see Concurrency vs Parallelism - What is the difference?). This works pretty well for webapp servers, as most of the time is actually spent waiting on the network or disk (database / sockets) and the logic is not really CPU intensive - that is to say: this works well for IO-bound workloads.

Well, to give some perspective, let me compare node.js with apache.
Apache is a multi-threaded HTTP server: for each and every request the server receives, it creates a separate thread to handle that request.
Node.js, on the other hand, is event driven, handling all requests asynchronously from a single thread.
When A and B are received by apache, two threads are created to handle the requests. Each handles its query separately, and each waits for the query results before serving the page. The page is only served once the query is finished. The query fetch is blocking because the server cannot execute the rest of the thread until it receives the result.
In node, c.query is handled asynchronously, which means that while c.query fetches the results for A, node jumps to handle c.query for B, and when the results for A arrive it sends them back to the callback, which sends the response. Node.js knows to execute the callback when the fetch finishes.
In my opinion, because it's a single thread model, there is no way to switch from one request to another.
Actually the node server does exactly that for you all the time. To make these switches (the asynchronous behaviour), most functions that you would use have callbacks.
Edit
The SQL query is taken from the mysql library. It implements the callback style as well as an event emitter to queue SQL requests. It does not execute them asynchronously itself; that is done by the internal libuv threads, which provide the abstraction of non-blocking I/O. The following steps happen when making a query (a code sketch follows these steps):
Open a connection to the db; the connection itself can be made asynchronously.
Once the db is connected, the query is passed on to the server. Queries can be queued.
The main event loop gets notified of the completion with a callback or event.
The main loop executes your callback/event handler.
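Here is a sketch of those steps using the callback-style mysql package; the connection details are placeholders:

const mysql = require('mysql'); // npm package implementing the callback style described above

const c = mysql.createConnection({ host: 'localhost', user: 'app', password: 'secret', database: 'test' });

// 1. Open a connection; this is itself asynchronous.
c.connect(function (err) {
  if (err) throw err;

  // 2. Queue the query; it is carried to the server without blocking the JS thread.
  c.query('SELECT SLEEP(2) AS pause;', function (err, results) {
    // 3./4. The event loop is notified on completion and runs this callback.
    if (err) throw err;
    console.log(results);
    c.end();
  });
});

console.log('query dispatched; the process is free to handle other requests');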
Incoming requests to the http server are handled in a similar fashion. The internal thread architecture is roughly as follows:
The C++ threads are the libuv ones, which do the asynchronous I/O (disk or network). The main event loop continues to execute after dispatching the request to the thread pool. It can accept more requests because it does not wait or sleep. SQL queries, HTTP requests and file system reads all happen this way.

Node.js uses libuv behind the scenes. libuv has a thread pool (of size 4 by default). Therefore Node.js does use threads to achieve concurrency.
However, your code runs on a single thread (i.e., all of the callbacks of Node.js functions will be called on the same thread, the so called loop-thread or event-loop). When people say "Node.js runs on a single thread" they are really saying "the callbacks of Node.js run on a single thread".

Node.js is based on the event loop programming model. The event loop runs in a single thread and repeatedly waits for events, then runs any event handlers subscribed to those events. Events can be, for example:
timer wait is complete
next chunk of data is ready to be written to this file
there's a fresh new HTTP request coming our way
All of this runs in a single thread and no JavaScript code is ever executed in parallel. As long as these event handlers are small and themselves wait for yet more events, everything works out nicely. This allows multiple requests to be handled concurrently by a single Node.js process.
(There's a little bit of magic under the hood as to where the events originate. Some of it involves low-level worker threads running in parallel.)
In this SQL case, there are a lot of things (events) happening between making the database query and getting its results in the callback. During that time the event loop keeps pumping life into the application and advancing other requests one tiny event at a time. Therefore multiple requests are being served concurrently.
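To make that interleaving concrete, here is a minimal sketch using only core modules; the timings and port are illustrative:

const fs = require('fs');
const http = require('http');

// Event source 1: a timer.
setTimeout(() => console.log('timer fired'), 100);

// Event source 2: a file read (completed in the background, reported to the loop).
fs.readFile(__filename, 'utf8', () => console.log('file read finished'));

// Event source 3: incoming HTTP requests.
http.createServer((req, res) => {
  console.log('request received');
  res.end('hello');
}).listen(3000);

console.log('top-level code done; the event loop now interleaves the three sources');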
According to: "Event loop from 10,000ft - core concept behind Node.js".

The function c.query() has two arguments:
c.query("Fetch Data", "Post-Processing of Data")
The operation "Fetch Data" in this case is a DB-Query, now this may be handled by Node.js by spawning off a worker thread and giving it this task of performing the DB-Query. (Remember Node.js can create thread internally). This enables the function to return instantaneously without any delay
The second argument "Post-Processing of Data" is a callback function, the node framework registers this callback and is called by the event loop.
Thus the statement c.query (paramenter1, parameter2) will return instantaneously, enabling node to cater for another request.
P.S.: I have just started to understand node; I actually wanted to write this as a comment on #Philip's answer, but since I didn't have enough reputation points I wrote it as an answer.

if you read a bit further - "Of course, on the backend, there are threads and processes for DB access and process execution. However, these are not explicitly exposed to your code, so you can’t worry about them other than by knowing that I/O interactions e.g. with the database, or with other processes will be asynchronous from the perspective of each request since the results from those threads are returned via the event loop to your code."
about - "everything runs in parallel except your code" - your code is executed synchronously, whenever you invoke an asynchronous operation such as waiting for IO, the event loop handles everything and invokes the callback. it just not something you have to think about.
In your example: there are two requests, A (which comes first) and B. You start request A, your code continues to run synchronously and starts request B. The event loop handles request A; when it finishes, it invokes the callback of request A with the result, and the same goes for request B.

Okay, most things should be clear so far... the tricky part is the SQL: if it is not in reality running in another thread or process in its entirety, the SQL execution has to be broken down into individual steps (by an SQL processor made for asynchronous execution!), where the non-blocking ones are executed and the blocking ones (e.g. the sleep) can actually be transferred to the kernel (as an alarm interrupt/event) and put on the event list for the main loop.
That means, e.g., that the interpretation of the SQL is done immediately, but during the wait (stored by the kernel as an event to come in the future, in some kqueue, epoll, ... structure, together with the other IO operations) the main loop can do other things and eventually check whether any of those IOs has completed.
So, to rephrase it again: the program is never (allowed to get) stuck; sleeping calls are never executed. Their duty is done by the kernel (writing something, waiting for something to come over the network, waiting for time to elapse) or by another thread or process. The Node process checks whether at least one of those duties has been finished by the kernel in the only blocking call to the OS, made once per event-loop cycle. That point is reached when everything non-blocking is done.
Clear? :-)
I don’t know Node. But where does the c.query come from?

The event loop is what allows Node.js to perform non-blocking I/O operations - despite the fact that JavaScript is single-threaded - by offloading operations to the system kernel whenever possible. Think of the event loop as the manager.
New requests are sent into a queue and watched by the synchronous event demultiplexer. Each operation's handler is also registered.
Then those requests are sent to the thread pool (worker pool) to be executed. JavaScript cannot perform asynchronous I/O operations itself: in a browser environment, the browser handles the async operations; in a node environment, they are handled by libuv, using C++. The thread pool's default size is 4, but it can be changed at startup by setting the UV_THREADPOOL_SIZE environment variable (the maximum is 128). A thread pool size of 4 means 4 requests can be executed at a time; if the event demultiplexer has 5 requests, 4 are passed to the thread pool and the 5th waits. Once each request is executed, the result is returned to the event demultiplexer.
When a set of I/O operations completes, the Event Demultiplexer pushes a set of corresponding events into the Event Queue.
The handler is the callback. The event loop keeps an eye on the event queue; if something is ready, it is pushed onto the stack to execute its callback. Remember, callbacks eventually get executed on the stack. Note that some callbacks have priority over others; the event loop picks callbacks based on their priorities.
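To see the default pool size of 4 mentioned above in action, here is a small sketch (the iteration count is arbitrary); with five identical hashes, the fifth typically finishes a noticeable step after the first four because it has to wait for a free thread:

const crypto = require('crypto');

const start = Date.now();
for (let i = 1; i <= 5; i++) {
  // pbkdf2 is CPU-heavy and runs on the libuv thread pool (default size 4).
  crypto.pbkdf2('secret', 'salt', 500000, 64, 'sha512', () => {
    console.log(`hash ${i} done after ${Date.now() - start} ms`);
  });
}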

For those who seek a short answer and don't want to go to the deepest levels of Node.js internals:
Node.js is not single threaded; by default it runs on 5 threads (one main thread plus a libuv thread pool of 4).
Yes, the only single thread is for the actual JavaScript processing, but it constantly switches from function to function.
It sends the SQL query to the database and lets the wait happen on another thread, while the single-threaded JavaScript side of Node.js continues to compute other code that is ready to run.
If you want more explanation, there are good articles about the Event Loop and the Worker Pool, and the whole libuv documentation.

Related

What is the difference between the event loop in JavaScript and async non-blocking I/O in Node.js?

In this answer to the question -
What is non-blocking or asynchronous I/O in Node.js?
the description sounds no different from the event loop in vanilla js. Is there a difference between the two? If not, is the Event loop simply re-branded as "Asynchronous non-blocking I/O" to sell Node.js over other options more easily?
The event loop is the mechanism. Asynchronous I/O is the goal.
Asynchronous I/O is a style of programming in which I/O calls do not wait for the operation to complete before returning, but merely arrange for the caller to be notified when that happens, and for the result to be returned somewhere. In JavaScript, the notification is usually performed by invoking a callback or resolving a promise. As far as the programmer is concerned, it doesn’t matter how this happens: it just does. I request the operation, and when it’s done, I get notified about it.
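As a small illustration of those two notification styles, a sketch using Node's fs module (assuming a reasonably recent Node version with fs.promises):

const fs = require('fs');

// Callback style: the result is delivered by invoking the callback later.
fs.readFile(__filename, 'utf8', (err, text) => {
  if (err) throw err;
  console.log('callback notified, read', text.length, 'characters');
});

// Promise style: the result is delivered by resolving a promise later.
fs.promises.readFile(__filename, 'utf8')
  .then((text) => console.log('promise resolved, read', text.length, 'characters'));

console.log('both reads requested; neither has completed yet');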
An event loop is how this is usually achieved. The thing is, in most JavaScript implementations, there is literally a loop somewhere that ultimately boils down to:
while (poll_event(&ev)) {
dispatch_event(&ev);
}
Performing an asynchronous operation is then done by arranging for the completion of the operation to be received as an event by that loop, and having it dispatch to a callback of the caller’s choice.
There are ways to achieve asynchronous programming not based on an event loop, for example using threads and condition variables. But historical reasons make this programming style quite difficult to realise in JavaScript. So in practice, the predominant implementation of asynchrony in JavaScript is based on dispatching callbacks from a global event loop.
Put another way, ‘the event loop’ describes what the host does, while ‘asynchronous I/O’ describes what the programmer does.
From a non-programmer’s bird’s eye view this may seem like splitting hairs, but the distinction can be occasionally important.
There are 2 different Event Loops:
Browser Event Loop
NodeJS Event Loop
Browser Event Loop
The Event Loop is a process that runs continually, executing any queued tasks. It has multiple task sources, which guarantees execution order within each source, but the Browser gets to pick which source to take a task from on each turn of the loop. This allows the Browser to give preference to performance-sensitive tasks such as user input.
There are a few different steps that the Browser Event Loop works through continuously:
Task Queue - There can be multiple task queues. The Browser can execute the queues in any order it likes, but tasks in the same queue must be executed in the order they arrived, first in, first out. Tasks run in order, and the Browser may render between tasks. Tasks from the same source must go in the same queue. The important thing is that a task runs from start to finish. After each task, the Event Loop goes to the Microtask Queue and processes everything there.
Microtask Queue - The microtask queue is processed at the end of each task. Any additional microtasks queued while processing microtasks are added to the end of the queue and are also processed.
Animation Callback Queue - The animation callback queue is processed before pixels repaint. All animation tasks from the queue will be processed, but any additional animation tasks queued during animation tasks will be scheduled for the next frame.
Rendering Pipeline - In this step, rendering happens. The Browser decides when to do this and tries to be as efficient as possible. The rendering steps only happen if there is something actually worth updating. Most screens update at a set frequency, in most cases 60 times a second (60Hz). So if we changed the page style 1000 times a second, the rendering steps would not run 1000 times a second; instead, the Browser synchronizes itself with the display and only renders up to the frequency the display is capable of.
An important thing to mention is the Web APIs, which are effectively backed by separate threads. For example, setTimeout() is an API provided to us by the Browser. When you call setTimeout(), the Web API takes over and processes it, and it hands the result back to the main thread as a new task in a task queue.
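A small sketch of the task/microtask ordering described above; run it in a browser console and compare the output order:

console.log('script start');                             // part of the current task

setTimeout(() => console.log('setTimeout'), 0);          // queued as a new task via the Web API

Promise.resolve().then(() => console.log('microtask'));  // queued as a microtask

console.log('script end');

// Expected order: script start, script end, microtask, setTimeout -
// the microtask queue is drained before the next task runs.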
The best video I found that describes how Event Loops work is this one. It helped me a lot when I was investigating how the Event Loop works. Other great videos are this one and this one. You should definitely check all of them.
NodeJS Event Loop
NodeJS Event Loop allows NodeJS to perform non-blocking operations by offloading operation to the system kernel whenever possible. Most modern kernels are multi-threaded and they can perform multiple operations in the background. When one of these operations completes, the kernel tells NodeJS.
The library that provides the Event Loop to NodeJS is called libuv. By default it creates something called the Thread Pool, with 4 threads, to offload asynchronous work to. If you want, you can also change the number of threads in the Thread Pool.
NodeJS Event Loop goes through different phases:
timers - this phase executes callbacks scheduled by setTimeout() and setInterval().
pending callbacks - executes I/O callbacks deferred to the next loop iteration.
idle, prepare - only used internally.
poll - retrieve new I/O events; execute I/O related callbacks (almost all with the exception of close callbacks, the ones scheduled by timers, and setImmediate()) Node will block here when appropriate.
check - setImmediate() callbacks are invoked here.
close callbacks - some close callbacks, e.g. socket.on('close', ...).
Between each run of the event loop, Node.js checks if it is waiting for any asynchronous I/O or timers and shuts down cleanly if there are not any.
In Browser, we had Web APIs. In NodeJS, we have C++ APIs with the same rule.
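A minimal sketch of two of those phases: inside an I/O callback, a setImmediate() callback (check phase) runs before a zero-delay timer (timers phase of the next iteration):

const fs = require('fs');

fs.readFile(__filename, () => {
  // We are now inside the poll phase, handling an I/O callback.
  setTimeout(() => console.log('timers phase (next iteration)'), 0);
  setImmediate(() => console.log('check phase (this iteration)'));
});

// Expected output: "check phase (this iteration)" then "timers phase (next iteration)".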
I found this video to be useful if you want to check for more information.
For years, JavaScript has been limited to client-side applications such as interactive web applications that run on the browser. Using NodeJS, JavaScript can be used to develop server-side applications as well. Though it's the same programming language used in both cases, client-side and server-side have different requirements.
"Event Loop" is a generic programming pattern and JavaScript/NodeJS event loops are no different. The event loop continuously watches for any queued event handlers and will process them accordingly.
The "Events", in a browser's context, are user interactions on web pages (e.g., clicks, mouse movements, keyboard events, etc.), but in Node's context, events are asynchronous server-side operations (e.g., file I/O access, network I/O, etc.)

How does Node.js process incoming requests?

In order to dive into more complex concepts about Node.js, I am doing some research to make sure I understand the principles of the language and the basic building blocks it's built upon.
As far as I'm aware, Node.js relies on the reactor pattern to process each incoming request. This pattern is based on the Event Demultiplexer. The demultiplexer is in charge of processing the requested operation and its resources and then, once the operation is finished, it adds the 'result' to the Event Queue. After this, the Event Loop is in charge of executing the handlers for each event.
As this is a single-threaded process, I'm struggling to understand how the Event Demultiplexer handles all the incoming operations if the event loop is managing all the event queue tasks in parallel...
I've found this in the Node.js documentation:
Since most modern kernels are multi-threaded, they can handle multiple operations executing in the background. When one of these operations completes, the kernel tells Node.js so that the appropriate callback may be added to the poll queue to eventually be executed. We'll explain this in further detail later in this topic.
Does this mean that the Event Demultiplexer tasks are not handled by Node, and that Node just gets the result in order to add it to the Event Queue? Doesn't that mean that Node is not single threaded?
I've found some useful graphics on the web that clearly explain the path each request follows, but it's really hard to find ones that explain the actual timing in which the thread processes each one.
node.js runs your Javascript in a single thread (assuming we're not talking about worker threads or clustering). But many asynchronous operations in the node.js built-in library, such as file I/O or some crypto operations, use their own threads to accomplish their task. So, when you call an asynchronous operation such as fs.open() to open a file, it transitions to native code, grabs a thread from an internal thread pool, and that thread goes about opening the file. The fs.open() function then returns back to your Javascript (while the thread continues in the background). Some time later, when it finishes its task, the internal thread inserts an event into the nodejs event queue, and when nodejs has cycles to check the event queue, it will find that event and run the Javascript callback associated with it to deliver the asynchronous result back to your Javascript.
So, even though threads may be involved, your Javascript is all still single threaded through the event loop.
So, nodejs does use native code threads for some internal native code operations. Other things like networking and timers are implemented without threads.
Then if I send a request to fetch data from a database in Node, how does the Event Demultiplexer process it? Does the main thread stop the event loop to handle that operation and then resume after the event handler is queued?
Terms like "Event Demultiplexer" are things you are applying to node.js. They are not something that node.js uses at all, so I'm not entirely sure what you're asking.
Node.js runs one event at a time. It has no interrupt-driven abilities. So, it runs one event until that event returns control back to the event loop (by issuing a return from the callback that started everything). Now, the original event may not be complete - it may be doing something asynchronous that will trigger another event to announce completion, but it has finished running Javascript for now and has returned control back to the event loop.
When an incoming Fetch request (which is just an incoming http request) arrives at the server, it is first queued by the OS. Then, when the event loop has a chance to see it, it is added to the node.js event queue and is served in FIFO order when other events that were before it are done processing. Now, the nodejs event loop is not as simple as a single event queue. It actually has several different queues for different types of work and has priorities for what gets to run first, but at the simplest level, you can start out thinking of it as a single FIFO queue. And, nothing is ever interrupted.
Pull an event out of the event queue, run the Javascript callback associated with it. When that callback returns back to the event loop, get the next event and do the same.
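As a toy model (not Node's actual implementation), the FIFO behaviour described above boils down to something like:

// A deliberately simplified event loop: real Node has several queues and phases.
const eventQueue = [];

function enqueue(callback) {
  eventQueue.push(callback);
}

function runLoop() {
  while (eventQueue.length > 0) {
    const callback = eventQueue.shift(); // take the oldest event first (FIFO)
    callback();                          // run it to completion; nothing interrupts it
  }
}

enqueue(() => console.log('first event'));
enqueue(() => console.log('second event'));
runLoop();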
Events may be added to the event queue either by native code threads (as might be done with file I/O or some crypto operations) or via polling mechanisms built into the event loop: as part of the event loop cycle, it checks for certain things that are ready to run (as with networking and timers).

What happens in libuv when you make parallel requests

As Javascript is single threaded, how does libuv handle it when I make two requests in parallel? E.g. making an array of promises and resolving them later.
I presume what you're really asking is how libv8 handles two asynchronous requests that are "in flight" at the same time. Since Javascript is single threaded, you can't start them at the same moment. One will be started, then your JS will be able to run some more and start the second one. They will both be "in process" at the same time.
First off, the library used in nodejs is generally called libuv, not libv8. Here's the doc for libuv.
The answer for how libuv does this is that it depends upon the type of asynchronous operation; the libuv site has an architecture diagram showing the split:
Disk I/O in libuv uses native threads via a thread pool. A native thread runs each disk I/O operation and, when it completes, puts an event into the nodejs event queue so that, when nodejs is available, it can pull that event from the event queue and call the callback registered for the async I/O operation. This functionality originally came from libeio, but is apparently its own implementation now.
Networking operations in libuv use native OS async capabilities such as epoll, kqueue and IOCP.
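A sketch of two operations being "in flight" at the same time; the network calls are simulated with timers so the example stays self-contained:

// Simulate an async request that takes `ms` milliseconds (a stand-in for a real network call).
function fakeRequest(name, ms) {
  return new Promise((resolve) => setTimeout(() => resolve(name), ms));
}

const start = Date.now();

// Both are started one after the other, but they overlap while pending.
Promise.all([fakeRequest('A', 300), fakeRequest('B', 300)]).then((results) => {
  // Total time is ~300 ms, not ~600 ms, because the waits overlap.
  console.log(results, 'after', Date.now() - start, 'ms');
});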
Technically you don't make the requests parallel; one does come first. That one starts first, but the loop keeps listening or checking whether it has finished, and then the other, back and forth, until one of them finishes first.

Javascript/Nodejs how readFile is implemented [duplicate]

So I have an understanding of how Node.js works: it has a single listener thread that receives an event and then delegates it to a worker pool. The worker thread notifies the listener once it completes the work, and the listener then returns the response to the caller.
My question is this: if I stand up an HTTP server in Node.js and call sleep on one of my routed path events (such as "/test/sleep"), the whole system comes to a halt. Even the single listener thread. But my understanding was that this code is happening on the worker pool.
Now, by contrast, when I use Mongoose to talk to MongoDB, DB reads are an expensive I/O operation. Node seems to be able to delegate the work to a thread and receive the callback when it completes; the time taken to load from the DB does not seem to block the system.
How does Node.js decide to use a thread pool thread vs the listener thread? Why can't I write event code that sleeps and only blocks a thread pool thread?
Your understanding of how node works isn't correct... but it's a common misconception, because the reality of the situation is actually fairly complex, and typically boiled down to pithy little phrases like "node is single threaded" that over-simplify things.
For the moment, we'll ignore explicit multi-processing/multi-threading through cluster and webworker-threads, and just talk about typical non-threaded node.
Node runs in a single event loop. It's single threaded, and you only ever get that one thread. All of the javascript you write executes in this loop, and if a blocking operation happens in that code, then it will block the entire loop and nothing else will happen until it finishes. This is the typically single threaded nature of node that you hear so much about. But, it's not the whole picture.
Certain functions and modules, usually written in C/C++, support asynchronous I/O. When you call these functions and methods, they internally manage passing the call on to a worker thread. For instance, when you use the fs module to request a file, the fs module passes that call on to a worker thread, and that worker waits for its response, which it then presents back to the event loop that has been churning on without it in the meantime. All of this is abstracted away from you, the node developer, and some of it is abstracted away from the module developers through the use of libuv.
As pointed out by Denis Dollfus in the comments (from this answer to a similar question), the strategy used by libuv to achieve asynchronous I/O is not always a thread pool, specifically in the case of the http module a different strategy appears to be used at this time. For our purposes here it's mainly important to note how the asynchronous context is achieved (by using libuv) and that the thread pool maintained by libuv is one of multiple strategies offered by that library to achieve asynchronicity.
On a mostly related tangent, there is a much deeper analysis of how node achieves asynchronicity, and some related potential problems and how to deal with them, in this excellent article. Most of it expands on what I've written above, but additionally it points out:
Any external module that you include in your project that makes use of native C++ and libuv is likely to use the thread pool (think: database access)
libuv has a default thread pool size of 4, and uses a queue to manage access to the thread pool - the upshot is that if you have 5 long-running DB queries all going at the same time, one of them (and any other asynchronous action that relies on the thread pool) will be waiting for those queries to finish before they even get started
You can mitigate this by increasing the size of the thread pool through the UV_THREADPOOL_SIZE environment variable, so long as you do it before the thread pool is required and created: process.env.UV_THREADPOOL_SIZE = 10;
If you want traditional multi-processing or multi-threading in node, you can get it through the built in cluster module or various other modules such as the aforementioned webworker-threads, or you can fake it by implementing some way of chunking up your work and manually using setTimeout or setImmediate or process.nextTick to pause your work and continue it in a later loop to let other processes complete (but that's not recommended).
Please note, if you're writing long running/blocking code in javascript, you're probably making a mistake. Other languages will perform much more efficiently.
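For completeness, a sketch of the chunking approach mentioned above, using setImmediate to yield between slices of work; the summing is a stand-in for whatever long computation you have:

// Sum a large range without monopolising the event loop: do a slice, then yield.
function sumInChunks(n, done, acc = 0, i = 0) {
  const sliceEnd = Math.min(i + 1e6, n);
  for (; i < sliceEnd; i++) {
    acc += i;
  }
  if (i < n) {
    // Yield back to the event loop so other callbacks can run, then continue.
    setImmediate(() => sumInChunks(n, done, acc, i));
  } else {
    done(acc);
  }
}

sumInChunks(1e8, (total) => console.log('total:', total));
setInterval(() => console.log('still responsive'), 100).unref();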
So I have an understanding of how Node.js works: it has a single listener thread that receives an event and then delegates it to a worker pool. The worker thread notifies the listener once it completes the work, and the listener then returns the response to the caller.
This is not really accurate. Node.js has only a single "worker" thread that does javascript execution. There are threads within node that handle IO processing, but to think of them as "workers" is a misconception. They really just do IO handling and a few other details of node's internal implementation, and as a programmer you cannot influence their behavior other than via a few misc parameters such as MAX_LISTENERS.
My question is this: if I stand up an HTTP server in Node.js and call sleep on one of my routed path events (such as "/test/sleep"), the whole system comes to a halt. Even the single listener thread. But my understanding was that this code is happening on the worker pool.
There is no sleep mechanism in JavaScript. We could discuss this more concretely if you posted a code snippet of what you think "sleep" means. There's no such function to call to simulate something like time.sleep(30) in python, for example. There's setTimeout but that is fundamentally NOT sleep. setTimeout and setInterval explicitly release, not block, the event loop so other bits of code can execute on the main execution thread. The only thing you can do is busy loop the CPU with in-memory computation, which will indeed starve the main execution thread and render your program unresponsive.
How does Node.js decide to use a thread pool thread vs the listener thread? Why can't I write event code that sleeps and only blocks a thread pool thread?
Network IO is always asynchronous. End of story. Disk IO has both synchronous and asynchronous APIs, so there is no "decision". node.js will behave according to the API core functions you call sync vs normal async. For example: fs.readFile vs fs.readFileSync. For child processes, there are also separate child_process.exec and child_process.execSync APIs.
The rule of thumb is: always use the asynchronous APIs. The valid reasons to use the sync APIs are initialization code in a network service before it starts listening for connections, or simple scripts that do not accept network requests, such as build tools and that kind of thing.
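A small contrast of the two API flavours (reading this script's own file so the example is self-contained):

const fs = require('fs');

// Synchronous: blocks the one JavaScript thread until the disk read finishes.
// Acceptable at startup, before the server is accepting requests.
const startupText = fs.readFileSync(__filename, 'utf8');
console.log('sync read finished, length', startupText.length);

// Asynchronous: returns immediately; the callback runs via the event loop later.
// This is the form to use while serving traffic.
fs.readFile(__filename, 'utf8', (err, text) => {
  if (err) throw err;
  console.log('async read finished, length', text.length);
});
console.log('async read requested, not yet finished');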
Thread pool: how, when, and by whom it is used
First off, when we install and run Node on a computer, it starts a process among the other processes on the machine, called the node process, and it keeps running until you kill it. This running process is our so-called single thread.
The single-thread mechanism makes it easy to block a node application, but it is also one of the unique features that Node.js brings to the table. If you run your node application, it runs in just a single thread, no matter whether 1 or a million users access your application at the same time.
So let's understand exactly what happens in the single thread of nodejs when you start your application. First the program is initialized, then all the top-level code is executed, meaning all the code that is not inside any callback function (remember, all code inside callback functions will be executed under the event loop).
After that, the module code is executed and all the callbacks are registered; finally, the event loop starts for your application.
As discussed before, all the callback functions and the code inside them execute under the event loop, where the load is distributed across different phases. Anyway, I'm not going to discuss the event loop here.
For the sake of a better understanding of the thread pool, imagine that in the event loop the code inside one callback function executes only after the code inside another callback function has finished. Now, if some tasks are too heavy, they would block the single nodejs thread. That's where the thread pool comes in, which, just like the event loop, is provided to Node.js by the libuv library.
The thread pool is not part of nodejs itself; it is provided by libuv so that heavy duties can be offloaded to it. libuv executes that code in its own threads, and after execution it returns the results as events to the event loop.
The thread pool gives us four additional threads, completely separate from the main single thread, and we can actually configure it with up to 128 threads.
All these threads together form the thread pool, and the event loop can automatically offload heavy tasks to it.
The fun part is that all this happens automatically behind the scenes. It's not us developers who decide what goes to the thread pool and what doesn't.
Many tasks go to the thread pool, such as:
-> all operations dealing with files
-> everything related to cryptography, like hashing passwords
-> all compression stuff
-> DNS lookups
This misunderstanding is merely the difference between pre-emptive multi-tasking and cooperative multitasking...
The sleep turns off the entire carnival because there is really one line to all the rides, and you closed the gate. Think of it as "a JS interpreter and some other things" and ignore the threads...for you, there is only one thread, ...
...so don't block it.

Understanding event loop with twisted and node

I am trying to choose a platform to code my network application on; it will be a small realtime online game server. I am not very familiar with async theory, though I know how to write a little asynchronous code.
I know both javascript and python, at about the same level.
So I was reading about twisted here, and the author says:
During a callback, the Twisted loop is effectively "blocked" on our code. So we should make sure our callback code doesn't waste any time. In particular, we should avoid making blocking I/O calls in our callbacks. Otherwise, we would be defeating the whole point of using the reactor pattern in the first place. Twisted will not take any special precautions to prevent our code from blocking, we just have to make sure not to do it. As we will eventually see, for the common case of network I/O we don't have to worry about it as we let Twisted do the asynchronous communication for us.
I wanted to see how this differs from how the event loop in node.js works. I believe node.js implements the event loop and it never blocks, or am I missing something?
I write somewhat blocking code in my callbacks with node.js; does this mean I'm making a mistake?
Why is twisted called async and event driven when it still blocks?
Cheers,
Maj
Twisted, node.js, and every other asynchronous framework behave the exact same way here: if you write blocking code in your callbacks, the entire event loop is blocked until your callback is done.
Asynchronous frameworks are really great for doing I/O-bound work; the event loop never gets blocked waiting for I/O, because it can all be done in a non-blocking way. When there is data ready for reading, the event loop fires off your callback, the callback handles the data, and then the event loop takes control again. When you hear these frameworks being called "async" and "event-driven", it's referring to this non-blocking I/O + event loop model.
However, when you actually need to do some kind of processing with the data being sent/received, you need to be careful. Event loops are single-threaded; only one CPU-based operation can happen at a time. That means if you do some expensive calculation that takes 10 seconds in a callback, your event loop is blocked for 10 seconds. There's no extra magic in node.js that avoids this.
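A small demonstration of that blocking effect; the busy loop stands in for any expensive calculation:

setTimeout(() => console.log('timer wanted to fire at ~100 ms'), 100);

const start = Date.now();
// CPU-bound work in a callback or top-level code: nothing else runs until it ends.
while (Date.now() - start < 3000) {
  // busy-wait for 3 seconds
}

console.log('busy loop done after', Date.now() - start, 'ms; only now can the timer fire');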
If you want to be able to do CPU-based operations without blocking your event loop, node.js (and twisted) have mechanisms for sending the CPU-bound work to a sub-process, and then fetching the results when the sub-process is complete. The node.js About page actually mentions this:
But what about multiple-processor concurrency? Aren't threads necessary to scale programs to multi-core computers? You can start new processes via child_process.fork(); these other processes will be scheduled in parallel. For load balancing incoming connections across multiple processes, use the cluster module.
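A sketch of the cluster approach the quote mentions; the port and the one-worker-per-CPU choice are arbitrary:

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU; each worker is a separate process with its own event loop.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Workers share the listening socket; incoming connections are balanced across them.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(8000);
}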
