What does it mean that all event handlers are fired synchronously?

I am confused about some terms. I am trying to find out how the event system of Node.js actually works, and in a lot of places I read that the event handlers are totally synchronous.
That seemed really strange to me, because one of the advantages of an event-driven approach is that the main thread is not blocked by events. So I tried to come up with my own example, and what happened seems to be exactly what I expected:
const fs = require('fs')
const util = require('util')
const readFile = util.promisify(fs.readFile)
const events = require('events')
const emitter = new events.EventEmitter()

emitter.on('fire', () => {
  readFile('bigFile.txt')
    .then(() => console.log('Done reading bigFile.txt'))
    .catch(error => console.log(error))
  console.log('Sync thing in handler')
})

emitter.on('fire', () => {
  console.log('Second handler')
})

console.log('First outside')
emitter.emit('fire')
console.log('Last outside')
Note that bigFile.txt is an actually large text file; processing it takes a few hundred milliseconds on my machine.
Here I first log 'First outside' synchronously. Then I raise the event, which starts the event-handling process. The event handler does seem to be asynchronous: even though we first log the synchronous 'Sync thing in handler' text, the thread pool starts reading the file in the background and comes back with the result later. After the first handler runs, the second handler prints its message, and finally we print the last sync message, 'Last outside'.
So I set out to verify what some people say, namely that event handlers are by nature synchronous, and instead found them to be asynchronous. My best guess is that either the people calling the event system synchronous mean something else, or I have a conceptual misunderstanding. Please help me understand this issue!

The EventEmitter class is synchronous with regard to the emit function: event handlers are called synchronously from within the .emit() call, as you've demonstrated with the fire event you fired yourself.
In general, events that come from the operating system (file and network operations, timers etc.) through node's event loop are fired asynchronously. You're not firing them yourself; some native API fires them. When you listen for these events, you can be sure that they will occur no earlier than the next tick.
The event handler does seem to be asynchronous: even though we first log the synchronous 'Sync thing in handler' text, the thread pool starts reading the file in the background and comes back with the result later
Yes, you are calling the asynchronous function readFile (which will notify you later), but that doesn't make your event listener function or the .emit('fire') call asynchronous. Even "asynchronous functions" that start a background process will immediately (synchronously) return something - often nothing (undefined) or a promise.
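To see both behaviors side by side, here is a minimal sketch (the numbers in the logs just mark the execution order): .emit() returns only after every listener has run, while the OS-backed readFile callback never runs before the synchronous code finishes.
const EventEmitter = require('events');
const fs = require('fs');

const e = new EventEmitter();
e.on('ping', () => console.log('2: listener runs synchronously inside emit()'));

console.log('1: before emit');
e.emit('ping'); // returns only after every listener has returned
console.log('3: after emit');

// an OS-backed event: its callback is queued for a later event-loop turn
fs.readFile(__filename, () => console.log('5: readFile callback (asynchronous)'));
console.log('4: sync code always finishes first');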

Related

DOMContentLoaded event and task queue

I heard that there are three queues which hold tasks in the event loop processing model:
MacroTaskQueue: holds the callback functions of setTimeout, setInterval, etc.
MicroTaskQueue: holds the callback functions of promises, MutationObserver, etc.
AnimationFrameQueue: holds the callback functions of requestAnimationFrame.
So, what I'm wondering is:
Who fires the DOMContentLoaded event?
Where is the callback function of DOMContentLoaded queued? The MacroTaskQueue or the MicroTaskQueue?
finally, in this code:
var a = 10;
console.log(a);
setTimeout(function b() { console.log('im b'); }, 1000);
are the first two lines,
var a = 10;
console.log(a);
also queued in the MacroTaskQueue or the MicroTaskQueue? Or is only b queued in the MacroTaskQueue after (at least) 1000 ms?
I'm in a black hole. Help me please :D
What you call the "MacroTaskQueue" is actually made of several task queues, where tasks are queued. (Note that the specs only mandate multiple task sources; there could actually be a single task queue.) At the beginning of each event-loop iteration, the browser chooses from which task queue it will pick the next "main" task to execute. It's important to understand that these tasks may very well not imply any JavaScript execution at all; JS is only a small part of what a browser does.
The microtask queue is visited and emptied several times during a single event-loop iteration: for instance, every time the JS call stack empties (i.e. after almost every JS callback execution), and, on top of that, at the fixed "perform a microtask checkpoint" points in the event-loop processing model.
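A quick way to observe this ordering, if it helps: a promise's microtask runs as soon as the call stack empties, before the next task gets picked.
console.log('sync code');
setTimeout(() => console.log('task (setTimeout)'), 0);
Promise.resolve().then(() => console.log('microtask (promise)'));
// logs: 'sync code', then 'microtask (promise)', then 'task (setTimeout)'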
While similar to a queue, the animation frame callbacks are actually stored in an ordered map, not in a queue per se; this allows "queuing" a new callback from inside one of these callbacks without it being dequeued immediately after. More importantly, a lot of other callbacks are also executed at the same time, e.g. scroll events, resize events, Web Animation steps + events, ResizeObserver callbacks, etc. But this "update the rendering" step happens only once in a while, generally at the monitor refresh rate.
But, that's not saying much about DOMContentLoaded.
Who fires the DOMContentLoaded event?
This event is fired as part of the Document parsing steps, in the "the end" section. The browser first has to queue a task on the DOM manipulation task source. This task will eventually get selected by the browser as part of the first step of the event loop, and once this task's steps are executed, the event will be fired and dispatched on the Document. It is only as part of this dispatch an event algorithm that the browser will invoke (and inner-invoke) until it calls our listener's callback.
Note that this Document parsing step is in itself quite an interesting task, since it is the most obvious place where multiple microtask checkpoints are interleaved inside the "main" task (at each <script>, for instance).
Where is the callback function of DOMContentLoaded queued?
The callback function is not queued; it is conceptually stored in the EventTarget's event listener list. In practice it's stored in memory, and since the EventTarget here is a DOM object (the Document), it's probably attached to that DOM object, though this is an implementation detail on which the specs have little to say, as it is transparent to us web-devs.
The MacroTaskQueue or the MicroTaskQueue?
As I hope you now understand better, neither. Task queues and the microtask queue only store tasks and microtasks, not callbacks. The callbacks are stored elsewhere, depending on what kind of callbacks they are (e.g. timers and events are stored in different "conceptual" places), and some task's or microtask's steps will then call them.
is this code also queued in the MacroTaskQueue or the MicroTaskQueue?
That depends on where this script has been parsed from. If it's inline in a classic <script> tag, then it runs as part of the special parsing task we already talked about. If it comes from a <script src="url.js">, then it will be part of a task queued from fetch a classic script. But it can also be part of a microtask, e.g. if it runs after an await in a module script, and you can even force it to be one if you want:
queueMicrotask(() => {
  console.log("in microtask");
  eval(document.querySelector("[type=myscript]").textContent);
  console.log("still in microtask");
});
console.log("in parsing task");

<script type="myscript">
  var a = 10;
  console.log(a);
  setTimeout(function b() { console.log('im b'); }, 1000);
</script>
And it is even theoretically possible per the specs for a microtask to become a "macro-"task, though apparently no browser implements this anymore.
All this to say: while I personally find all this stuff fascinating, as a web-dev you shouldn't get stuck on it.

Why does the browser not freeze when awaiting these promises? [duplicate]

When using Javascript promises, does the event loop get blocked?
My understanding is that using await & async makes the stack stop until the operation has completed. Does it do this by blocking the stack, or does it act similarly to a callback and pass the work off to an API of sorts?
When using Javascript promises, does the event loop get blocked?
No. Promises are only an event notification system. They aren't an operation themselves. They simply respond to being resolved or rejected by calling the appropriate .then() or .catch() handlers, and if chained to other promises, they can delay calling those handlers until the promises they are chained to have also resolved/rejected. As such, a single promise doesn't block anything, and certainly does not block the event loop.
My understanding is that using await & async makes the stack stop until the operation has completed. Does it do this by blocking the stack, or does it act similarly to a callback and pass the work off to an API of sorts?
await is simply syntactic sugar that replaces a .then() handler with slightly simpler syntax. But under the covers the operation is the same: the code that comes after the await is essentially put inside an invisible .then() handler, and there is no blocking of the event loop, just like there is no blocking with a .then() handler.
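As a rough sketch of that equivalence (fetchData is a hypothetical promise-returning function, not a real API), the two functions below behave the same from the concurrency perspective:
// fetchData() is a placeholder for any call that returns a promise
async function withAwait() {
  const value = await fetchData();
  console.log(value); // runs in the implicit .then() handler
}

function withThen() {
  return fetchData().then((value) => {
    console.log(value);
  });
}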
Note to address one of the comments below:
Now, if you were to construct code that overwhelms the event loop with continually resolving promises (in some sort of infinite loop, as proposed in some comments here), then the event loop will process those continually resolved promises from the microtask queue over and over, and never get a chance to process the macrotasks waiting in the event loop (other types of events). The event loop is still running and still processing microtasks, but if you keep stuffing new microtasks (resolved promises) into it, it may never get to the macrotasks. There is some debate about whether to call this "blocking the event loop"; that's just a terminology question. What matters is what actually happens: the other events in the event queue never get processed because they never reach the front of the line. This is more often referred to as "starvation" than "blocking", but the point is that macrotasks may not get serviced if you are continually and infinitely putting new microtasks in the queue.
This notion of an infinite loop continually resolving a new promise should be avoided in Javascript. It can starve other events from getting a chance to be serviced.
Do Javascript promises block the stack?
No, not the stack. The current job will run to completion before the Promise's callback starts executing.
When using Javascript promises, does the event loop get blocked?
Yes it does.
Different environments have different event-loop processing models, so I'll be talking about the one in browsers; but even though nodejs's model is a bit simpler, it exhibits the same behavior.
In a browser, Promises' callbacks (PromiseReactionJob in ES terms) are actually executed in what is called a microtask.
A microtask is a special task that gets queued in the special microtask queue.
This microtask queue is visited several times during a single event-loop iteration, at what are called microtask checkpoints: every time the JS call stack is empty, for instance after the main task is done, after rendering events like resize are executed, after every animation-frame callback, etc.
These microtask checkpoints are part of the event loop and block it for the time they run, just like any other task.
What is more, a microtask scheduled from within a microtask checkpoint will be executed by that same checkpoint.
This means that simply using a Promise doesn't make your code let the event loop breathe the way a setTimeout()-scheduled task would. Even though the JS stack has been emptied and the previous task has executed entirely before the callback is called, you can still completely lock the event loop, never allowing it to process any other task or even update the rendering:
const log = document.getElementById("log");
let now = performance.now();
let i = 0;

const promLoop = () => {
  // only the final result will get painted, because the event-loop
  // can never reach the "update the rendering" steps
  log.textContent = i++;
  if (performance.now() - now < 5000) {
    // this doesn't let the event-loop loop
    return Promise.resolve().then(promLoop);
  }
  else { i = 0; }
};

const taskLoop = () => {
  log.textContent = i++;
  if (performance.now() - now < 5000) {
    // this does let the event-loop loop
    postTask(taskLoop);
  }
  else { i = 0; }
};

document.getElementById("prom-btn").onclick = start(promLoop);
document.getElementById("task-btn").onclick = start(taskLoop);

function start(fn) {
  return (evt) => {
    i = 0;
    now = performance.now();
    fn();
  };
}

// Posts a "macro-task".
// We could use setTimeout, but that method gets throttled
// to 4ms after 5 recursive calls.
// So instead we use either the incoming postTask API
// or the MessageChannel API, which are not affected
// by this limitation.
function postTask(task) {
  // Available in Chrome 86+ under the 'Experimental Web Platforms' flag
  if (window.scheduler) {
    return scheduler.postTask(task, { priority: "user-blocking" });
  }
  else {
    const channel = postTask.channel ||= new MessageChannel();
    channel.port1
      .addEventListener("message", () => task(), { once: true });
    channel.port2.postMessage("");
    channel.port1.start();
  }
}

<button id="prom-btn">use promises</button>
<button id="task-btn">use postTask</button>
<pre id="log"></pre>
So beware: using a Promise doesn't help at all with letting the event loop actually loop.
Too often we see code using a batching pattern to avoid blocking the UI that completely fails at its goal, because it assumes Promises will let the event loop loop. For this, keep using setTimeout() as a means to schedule a task, or use the postTask API where it's available.
My understanding is that using await & async makes the stack stop until the operation has completed.
Kind of... when awaiting a value, it adds the remainder of the function execution to the callbacks attached to the awaited Promise (if the awaited value is not a Promise, it is wrapped in a new Promise resolving to it).
So the stack is indeed cleared at this point, but the event loop is not blocked at all here; on the contrary, it has been freed to execute anything else until the Promise resolves.
This means that you can very well await for a never resolving promise and still let your browser live correctly.
async function fn() {
  console.log("will wait a bit");
  const prom = await new Promise((res, rej) => {}); // never resolves
  console.log("done waiting"); // never reached
}
fn();
onmousemove = () => console.log("still alive");
Move your mouse to check that the page is not locked.
An await blocks only the current async function; the event loop continues to run normally. When the promise settles, execution of the function body resumes where it stopped.
Every async/await program can be transformed into an equivalent .then(…)-callback program, and works just like that from the concurrency perspective. So while a promise is being awaited, other events may fire and arbitrary other code may run.
As others mentioned above, Promises are just an event notification system, and async/await is the same as .then(). However, be very careful: you can "block" the event loop by executing a blocking operation. Take a look at the following code:
function blocking_operation_inside_promise() {
  return new Promise((res, rej) => {
    // this loop never exits, so res() below is never reached
    while (true) console.log(' loop inside promise ');
    res();
  });
}

async function init() {
  let await_forever = await blocking_operation_inside_promise();
}

init();
console.log('END');
The END log will never be printed. JS is single-threaded, and that thread is busy right now. You could say the whole thing is "blocked" by the blocking operation. In this particular case the event loop is not blocked per se, but it won't deliver events to your application because the main thread is busy.
JS/Node can be a very useful programming language, and very efficient when using non-blocking operations (like network operations). But do not use it to execute very CPU-intensive algorithms. If you are in the browser, consider using Web Workers; if you are on the server side, use Worker Threads, Child Processes, or a microservice architecture.
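For instance, here is a minimal Node.js sketch using the built-in worker_threads module to move CPU-bound work off the main thread; the summing loop is just a placeholder workload:
const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  // the main thread spawns a worker running this same file
  const worker = new Worker(__filename);
  worker.on('message', (result) => console.log('worker done:', result));
  console.log('main thread stays responsive');
} else {
  // CPU-intensive placeholder work, off the main event loop
  let sum = 0;
  for (let i = 0; i < 1e9; i++) sum += i;
  parentPort.postMessage(sum);
}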

Potential race conditions when Promise used in subscriptions in Javascript / TypeScript

I recently dived into subscriptions with Subject/BehaviorSubject/etc., and I am looking for the go-to approach when they are used in combination with Promises.
Given is the example code below:
firebase.user.subscribe((user: any | null) => {
  fs.readFile('path/to/file')
    .then((buf: Buffer) => {
      this.modifySomeData = buf;
    });
});
I subscribe to a Subject that triggers whenever the user logs in or out of their service. Whenever this happens, I read a file from disk. This readFile call could potentially take longer than the gap until the next login/logout event. Of course, I am in JS and in an asynchronous environment; my user code is not multithreaded, but still, the second user event and second readFile could theoretically finish before the first readFile does:
First user event fired
First readFile is executed
Second user event is fired
Second readFile is executed
Second readFile is resolved <---
First readFile is resolved <---
The order is mixed up. The simplest (if silly) approach I could think of is to create a UUID before reading the file and check inside the promise callback whether it is still the same; if it's not, I discard the data.
Is there a better solution?
If I have a process where older requests can be discarded, I often keep a variable in scope to track the latest request and compare, similar to your UUID idea:
let lastRead: Promise<Buffer> | null = null;

firebase.user.subscribe((user: any | null) => {
  const read = lastRead = fs.readFile('path/to/file');
  read.then((buf: Buffer) => {
    if (read != lastRead)
      return; // a newer read has started, discard this result
    this.modifySomeData = buf;
  });
});
In this specific case, readFile also supports an abort signal, so you might be able to abort the previous request instead; you will still need to track it, though.
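A sketch of that abort approach, assuming Node's fs/promises readFile (which accepts a signal option in recent Node versions) and the same subscription shape as the question:
const { readFile } = require('fs/promises');

let lastController = null;

firebase.user.subscribe((user) => {
  if (lastController) lastController.abort(); // cancel the previous read, if still pending
  const controller = new AbortController();
  lastController = controller;
  readFile('path/to/file', { signal: controller.signal })
    .then((buf) => { this.modifySomeData = buf; })
    .catch((err) => {
      if (err.name !== 'AbortError') throw err; // swallow aborted reads only
    });
});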
The first approach is to see whether your event generation logic could wait for event handling. For example, you can use a promise to wait for the event, or generate another event, say doneReadFile, and only then send the next event. Usually this is not an option in a generic (distributed) environment.
If event generation does not care how long event handling took, you can still use the above approach but check for the intermediate doneReadFile event in the next login/logout handler, e.g. by chaining the handlers as shown below.
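A minimal sketch of that chaining, reusing the question's subscription and its promise-returning fs.readFile (the previous variable is introduced here for illustration): each new read starts only after the previous one has settled, so the results apply in order.
let previous = Promise.resolve();

firebase.user.subscribe((user) => {
  previous = previous
    .then(() => fs.readFile('path/to/file'))
    .then((buf) => { this.modifySomeData = buf; })
    .catch(console.error); // keep the chain alive after a failed read
});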

Reply to messages in child processes

I am looking for an effective way to reply to a message sent to a child process. Currently, I am using the following code:
const { fork } = require('child_process');
const child = fork(path.join(__dirname, 'sub.js'));
async function run() {
console.log('Requesting status....');
child.send('status');
const status = await awaitMessage(child);
console.log(status);
}
function awaitMessage(childProcess) {
return new Promise((resolve) => {
childProcess.on('message', (m) => {
resolve(m);
});
});
}
The problem with this code is that it adds a new event listener every time the awaitMessage() function is called, which is prone to memory leaks. Is there an elegant way of receiving a reply from the child process?
This isn't really "prone to memory leaks" in that a leak is something that is supposed to get freed (per the rules of the garbage collector), but isn't. In this case, you've left a promise hooked up to an event handler that can still get called so the system simply can't know that you intend for it to be freed.
So, the system is retaining exactly what your code told it to retain. It's the consequence of how your code works that it is retaining every promise ever created in awaitMessage() and also firing a bunch of extra event handlers too. Because you keep the event listener, the garbage collector sees that the promise is still "reachable" by that listener and thus cannot and should not remove the promise even if there are no outside references to it (per the rules of the Javascript garbage collector).
If you're going to add an event listener inside a promise, then you have to remove that event listener when the promise resolves so that the promise can eventually be freed. A promise is not a magic object in Javascript; it's just a regular object, so as long as an object can be reached by a live event listener, it can't be garbage collected.
In addition, this is subject to race conditions if you ever call awaitMessage() twice in a row, as both promises will then respond to the next message that comes. In general, this is just not a good design approach. If you want to wait for a message, then you have to somehow tag your messages so you know which response is the one you're actually waiting for (to avoid the race conditions), and you have to remove the event listener after you get your message.
To avoid the memory build-up because of the accumulation of listeners, you can do this:
function awaitMessage(childProcess) {
  return new Promise((resolve) => {
    function handleMsg(m) {
      // detach the listener so the promise and handler can be garbage collected
      childProcess.removeListener('message', handleMsg);
      resolve(m);
    }
    childProcess.on('message', handleMsg);
  });
}
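To also address the race condition, here is a hedged sketch of the tagging idea mentioned above, assuming the child echoes back the id it received (a protocol you would have to implement in sub.js yourself):
let nextId = 0;
const pending = new Map();

// a single persistent listener dispatches replies by id,
// so no listeners accumulate and concurrent requests can't
// steal each other's responses
child.on('message', (msg) => {
  const entry = pending.get(msg.id);
  if (entry) {
    pending.delete(msg.id);
    entry.resolve(msg.payload);
  }
});

function request(payload) {
  return new Promise((resolve) => {
    const id = nextId++;
    pending.set(id, { resolve });
    child.send({ id, payload });
  });
}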

JavaScript synchronization and critical sections in event handlers

I have a function which is an event handler for websocket.onmessage. Since the server can send multiple messages one after another, and each message triggers that event, and since the function body may take a few seconds (a lot of rendering going on inside), the function may be called again while the first call is still running.
I need a critical section in this function for some cases, so that the second call only enters the critical section once the first call has left it. What's considered best practice for implementing locks in JavaScript?
Since js is single-threaded, you can't really do locks. Well, you can, but you shouldn't.
One idea might be to keep a status variable.
Your function will be called on each onmessage, but it only does something if the variable is set to false. If so, it sets the variable to true, and when it's done, sets it back to false.
var handler; // expose outside the closure
(function () {
  var busy = false;
  handler = function () {
    if (!busy) {
      busy = true;
      // do rendering stuff
      busy = false;
    }
  };
})();
Obviously, adapt this idea to your own needs.
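Note that if the rendering work is asynchronous, a flag like this drops messages rather than deferring them. A common alternative is a tiny promise-based mutex; the sketch below is illustrative, and renderMessage() is a hypothetical async renderer:
class Mutex {
  constructor() {
    this._last = Promise.resolve();
  }
  // resolves to a release() function; callers proceed one at a time
  acquire() {
    let release;
    const gate = new Promise((resolve) => { release = resolve; });
    const acquired = this._last.then(() => release);
    this._last = this._last.then(() => gate);
    return acquired;
  }
}

const mutex = new Mutex();

websocket.onmessage = async (event) => {
  const release = await mutex.acquire();
  try {
    await renderMessage(event); // critical section: one handler at a time
  } finally {
    release();
  }
};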
You could use jQuery Socket https://github.com/flowersinthesand/jquery-socket as it has a callback for the message event.
message(data, [callback])
This means you can get the next message after the first has completed.
EXAMPLE:
socket.message(data, function () {
  // get the next message
});
JS is multithreaded only if you use Web Workers. I don't know if websockets are even allowed on worker threads, given the lack of synchronization available, but if you set up all your websockets from the main thread, the events will all fire in order on the main thread, so you do not have to perform any blocking or synchronization yourself (see this thread on synchronization in JS).
