Consider the following JavaScript code:
// Deferred-style pseudocode: the standard Promise constructor requires an
// executor and has no instance resolve() method, but many promise libraries
// expose something like this.
var promise = new Promise();
setTimeout(function() {
    promise.resolve();
}, 10);
function foo() { }
promise.then(foo);
In the promise implementations I've seen, promise.resolve() would simply set some property to indicate that the promise was resolved, and foo() would be called later during a turn of the event loop. Yet it seems like promise.resolve() would have enough information to immediately call any deferred functions such as foo().
The event loop method seems like it would add complexity and reduce performance, so why is it used?
While most of my use of promises is with JavaScript, part of the reason for my question is in implementing promises in very performance intensive cases like C++ games, in which case I'm wondering if I could utilize some of the benefits of promises without the overhead of an event loop.
All promise implementations, at least the good ones, do that.
This is because mixing synchronicity into an asynchronous API is "releasing Zalgo".
The fact that promises always defer, rather than resolving immediately in some cases and deferring in others, means that the API is consistent. Otherwise, you get undefined behavior in the order of execution.
function getFromCache() {
    return Promise.resolve(cachedValue || getFromWebAndCache());
}

getFromCache().then(function(x) {
    alert("World");
});
alert("Hello");
The fact that promise libraries defer means that the order of execution of the above block is guaranteed. In broken promise implementations, like jQuery's Deferreds, the order changes depending on whether the item is fetched from the cache or not. This is dangerous.
Having nondeterministic execution order is very risky and is a common source of bugs. The Promises/A+ specification is throwing you into the pit of success here.
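To see the danger concretely, here is a minimal sketch (the cache and getValue names are hypothetical, not from the answer above) of a callback API that "releases Zalgo" by being synchronous on a cache hit and asynchronous on a miss:

var cache = {};

function getValue(key, callback) {
    if (key in cache) {
        callback(cache[key]); // cache hit: callback runs synchronously
    } else {
        setTimeout(function () {
            cache[key] = "value for " + key;
            callback(cache[key]); // cache miss: callback runs asynchronously
        }, 0);
    }
}

getValue("a", function (v) { console.log("got", v); });
console.log("after getValue");
// On a cache miss this logs "after getValue" first; call it again (cache hit)
// and the order flips. Promises avoid this by always deferring their handlers.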
Whether promise.resolve() executes its continuations synchronously or asynchronously really depends on the implementation.
Furthermore, the "Event Loop" is not the only mechanism to provide a different "execution context". There may be other means, for example threads or thread pools, or think of GCD (Grand Central Dispatch, dispatch lib), which provides dispatch queues.
The Promises/A+ spec clearly requires that the continuation (the onFulfilled or onRejected handler) be executed asynchronously with respect to the "execution context" in which the then method is invoked.
onFulfilled or onRejected must not be called until the execution context stack contains only platform code. [3.1].
Under the Notes you can read what that actually means:
Here "platform code" means engine, environment, and promise implementation code. In practice, this requirement ensures that onFulfilled and onRejected execute asynchronously, after the event loop turn in which then is called, and with a fresh stack.
Here, each event will get executed on a different "execution context", even though this is the same event loop, and the same "thread".
Since the Promises/A+ specification is written for the JavaScript environment, a more general specification would simply require that the continuation be executed asynchronously with respect to the caller invoking the then method.
There are good reasons to do it this way!
Example (pseudo code):
promise = async_task();
printf("a");
promise.then((int result){
    printf("b");
});
printf("c");
Assuming the handler (continuation) executes on the same thread as the call site, the order of execution should be such that the console shows:
acb
In particular, when a promise is already resolved, some implementations tend to invoke the continuation "immediately" (that is, synchronously) on the same execution context. This would clearly violate the rule stated above.
The reason for the rule to always invoke the continuation asynchronously is that a call site needs a guarantee about the relative order of execution of the handlers and of the code following the statement containing the then (including the continuation), in any scenario. That is, no matter whether a promise is already resolved or not, the order of execution of the statements must be the same. Otherwise, more complex asynchronous systems may not work reliably.
Another bad design choice, for implementations in other languages that have multiple simultaneous execution contexts - say, a multi-threaded environment (irrelevant in JavaScript, since there is only one thread of execution) - is to invoke the continuation synchronously with respect to the resolve function. This is problematic even when the asynchronous task finishes in a later event loop cycle, so that the continuation is indeed executed asynchronously with respect to the call site.
The problem is that when the resolve function is invoked by the asynchronous task upon completion, that task may be executing on a private execution context (say, a "worker thread"). This worker thread will usually be a dedicated, possibly specially configured execution context - and it is the one that calls resolve. If resolve synchronously executes the continuation, the continuation will run on the task's private execution context - which is generally not desired.
Promises are all about cooperative multitasking.
Pretty much the only method to achieve that is to use message based scheduling.
Timers (usually with a 0 delay) are simply used to post the task/message into the message queue - a yield-to-next-task-in-the-queue paradigm. So the whole system is a collection of small event handlers, and the more frequently you yield, the more smoothly it all works.
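As a rough sketch of that yield-to-next-task idea (the name processInChunks and the work inside the loop are hypothetical):

// Process a large array in small slices, posting the next slice as a new
// task so other queued messages (input, rendering, timers) can run in between.
function processInChunks(items, chunkSize, onDone) {
    var i = 0;
    (function step() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
            // ...do a small amount of work on items[i]...
        }
        if (i < items.length) {
            setTimeout(step, 0); // yield, then continue with the next chunk
        } else {
            onDone();
        }
    })();
}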
Related
So, based on two Stack Overflow answers, what I have understood is:
XHR callback is queued with Macrotasks
Fetch method is queued with Microtasks
So my question is:
Is this true?
If yes, why is it this way? Shouldn't both of them be treated in the same way?
Is this true?
No. Re-read the answer you linked:
When the request response will be received […], the browser will queue a new task which will only be responsible of resolving that Promise, […]
The emphasised part is "queue a new task": resolving the promise is itself queued as a (macro)task.
Shouldn't both of them be treated in the same way?
No, why would they? One is a promise API, the other is not. Notice that if you wrap XMLHttpRequest in a promise, you get exactly the same behaviour: the load/readystatechange event (a macrotask) resolves a promise, scheduling any promise handler (a microtask).
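As a hedged sketch of that point (the URL is made up; this is the usual promisification pattern, not code from the linked answer):

function xhrGet(url) {
    return new Promise(function (resolve, reject) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", url);
        // The load event is dispatched as a (macro)task...
        xhr.onload = function () { resolve(xhr.responseText); };
        xhr.onerror = function () { reject(new Error("network error")); };
        xhr.send();
    });
}

// ...and resolving the promise schedules the .then handler as a microtask,
// exactly like a handler attached to the promise returned by fetch().
xhrGet("/api/data").then(function (text) { console.log(text); });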
But ultimately you should ask yourself: does it even matter? You normally shouldn't need to concern yourself with such timing details.
Is this true?
Yes.
When XMLHttpRequest was created there was no microtask queue - only one queue, what is now called the macrotask queue.
However, when fetch() was introduced, promises were already in the standard. The result of fetch() is a promise and all effects after a promise resolution are done via the microtask queue:
setTimeout(() => console.log("macrotask done"), 0); //logged second
Promise.resolve().then(() => console.log("microtask done")); //logged first
Hence resolving the promise from fetch() will also add the subsequent handlers to the microtask queue. Again, this is the same queue used for the handlers of all promises.
If yes, why is it this way? Shouldn't both of them be treated in the same way?
There is no requirement for the two to work the same. Nor does the way their results are scheduled make much of a practical difference in day-to-day code.
Do note that the two are not really the same, either - fetch resolves as soon as the response headers arrive, before the body of the response has been read. That is why calling .json() or .text() is needed (see Why does .json() return a promise?) - those methods actually process the body. XHR does not require this intermediate step; its body has already been processed once it reaches readyState 4 (done).
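For example (hypothetical URL, sketch only):

fetch("/api/data")                       // resolves once the response headers are in
    .then(function (response) {
        return response.json();          // reading and parsing the body is a second async step
    })
    .then(function (data) {
        console.log("fetch result", data);
    });

var xhr = new XMLHttpRequest();          // XHR has no such intermediate step:
xhr.open("GET", "/api/data");
xhr.onload = function () {               // at readyState 4 the body is already available
    console.log("xhr result", JSON.parse(xhr.responseText));
};
xhr.send();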
When debugging Node.js code I often encounter call stacks that do not include my program's code in them, only node_modules/non-user code, despite the current line of execution being at a location in my code. This defeats the purpose of following the call stack to see the execution path through my application code.
Why are my source files not showing in the call stack?
It appears that you're looking at an asynchronous stack trace, where your code is not in the stack (except for your callback) because your code unwound/finished and THEN the async callback was called.
All .then() handlers for all promises are called asynchronously, with a clean stack. That's per the promise specification. So promises always let the current thread of execution finish and unwind, and then they fire their .then() handlers with no user code on the stack when the callback is called. What you are describing is how synchronous code would work, not asynchronous code. We could talk a lot more specifically, instead of theoretically, if you showed actual code and described where you're looking at the call stack.
Async progress often has to be tracked with logging because you can't easily step through it and you can't just break and look at stack traces either.
As a little simpler example to look at:
function foo() {
    setTimeout(() => {
        console.log("timer"); // set breakpoint here
    }, 100);
}
foo();
The function foo() has finished executing and returned before the callback is called and thus the stack trace will not have any of your code (other than just the callback) on it.
While .then() handlers use a slightly different scheduler than setTimeout(), the principle is the same.
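A small sketch you can run to see this yourself (console.trace simply prints the current call stack):

function start() {
    Promise.resolve().then(function handler() {
        // The trace printed here shows only this handler (plus platform frames);
        // start() has already returned and unwound by the time it runs.
        console.trace("inside .then handler");
    });
    console.log("start() is about to return");
}
start();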
I am writing a node.js addon to perform some cryptographic computation, which may take about 1 μs – 20 μs. Now I have a choice: implement this as a synchronous or as an asynchronous method (which does the computation on a background worker)?
It is obvious that network and I/O operations, which sometimes take longer than a millisecond, should be done asynchronously. Parsing JSON input is fast and should be done synchronously.
In my situation keeping the latency low is important, but optimizing away microseconds feels a lot like premature optimization. So with this context in mind I would be interested to get your view on the question:
When using node.js, how long does a (synchronous) call have to block until you decide to run it asynchronously on a background thread?
It is obvious that network and I/O operations, which sometimes take longer than a millisecond, should be done asynchronously. Parsing JSON input is fast and should be done synchronously.
This is not so obvious. There are asynchronous JSON parsers for Node. See:
https://www.npmjs.com/package/async-json-parse
https://www.npmjs.com/package/json-parse-async
But it's true that at some point, for a CPU-intensive operation, you need to use asynchronous operations. I would say that any CPU-intensive logic shouldn't be done on the main thread, blocking the event loop; it should be done in external processes or workers, or in a thread spawned from C++ to make it maximally transparent to the user.
See how it is done in bcrypt and bcrypt-nodejs:
https://www.npmjs.com/package/bcrypt
https://www.npmjs.com/package/bcrypt-nodejs
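As one hedged sketch of the "worker" option using Node's built-in worker_threads module (the file name compute-worker.js and the actual crypto work are placeholders):

// main.js - offload the CPU-intensive computation to a worker thread
const { Worker } = require("worker_threads");

function computeAsync(input) {
    return new Promise((resolve, reject) => {
        const worker = new Worker("./compute-worker.js", { workerData: input });
        worker.once("message", resolve);  // the worker posts its result back
        worker.once("error", reject);
    });
}

// compute-worker.js (placeholder body)
// const { parentPort, workerData } = require("worker_threads");
// parentPort.postMessage(expensiveCryptoWork(workerData)); // hypothetical function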
If you can make your function work asynchronously (not only in the sense of using a callback, but by actually not blocking the event loop), then I would recommend making at least two kinds of APIs: a function taking a callback and a function returning a promise (which in practice can be one function).
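A minimal sketch of such a dual-mode function (crypt and the asynchronous core doCryptoWork are hypothetical names):

function crypt(input, callback) {
    const promise = new Promise((resolve, reject) => {
        doCryptoWork(input, (err, result) => {        // hypothetical async core
            if (err) reject(err); else resolve(result);
        });
    });
    if (typeof callback === "function") {
        // also honour the classic Node callback style if one was supplied
        promise.then(result => callback(null, result), err => callback(err));
    }
    return promise;
}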
Currently with async/await you can use any function that returns a promise almost as if it were synchronous:
let x = await f();
let y = await g(x);
// ...
But there are some cases where you need a truly synchronous function, like if you want to have something that you can directly export from a module:
module.exports = f();
Here, if the f() function is blocking, there is no harm, because require() itself is blocking as well and you should only use it once during startup. But if the function is asynchronous - declared with the async keyword (and thus implicitly returning a promise), explicitly returning a promise, or taking a callback - then you will not be able to export the value from a module and use it in certain ways.
So if you think that it makes sense that the return value of your function might be exported from modules then you may also need to provide a blocking, synchronous version.
Why not implement this as both a synchronous and an asynchronous method, with two functions like cryptAsync() and cryptSync()? I think that's better, and not difficult for you to do.
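That mirrors the fs.readFile / fs.readFileSync convention. A sketch of the pair, with the actual work elided (doCryptoWork and runOnWorker are placeholders):

function cryptSync(input) {
    return doCryptoWork(input);           // blocks the event loop for the ~1-20 µs it takes
}

function cryptAsync(input) {
    return new Promise((resolve, reject) => {
        // schedule doCryptoWork on a background worker (see the worker sketch above)
        runOnWorker(doCryptoWork, input).then(resolve, reject);   // hypothetical helper
    });
}

// Callers pick whichever fits: cryptSync(data) at startup, cryptAsync(data).then(...) elsewhere.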
The spec says (para 5):
The PendingJob records from a single Job Queue are always initiated in FIFO order. This specification does not define the order in which multiple Job Queues are serviced. An ECMAScript implementation may interweave the FIFO evaluation of the PendingJob records of a Job Queue with the evaluation of the PendingJob records of one or more other Job Queues.
Does this mean I can't count on the callback supplied to .then being evaluated before a callback supplied to setTimeout in an otherwise synchronous control flow?
In other words, can I depend on the following printing one two?
setTimeout(() => console.log('two'));
Promise.resolve().then(() => console.log('one'));
Does this mean I can't count on the callback supplied to .then being evaluated before a callback supplied to setTimeout in an otherwise synchronous control flow?
Yes, that's what it means; the spec doesn't require that implementations work that way.
But in practice, the implementations with native Promise support I've tested it on have scheduled the then callback (a "microtask" from the PendingJobs queue) immediately after finishing the "macrotask" that scheduled it, before other pending macrotasks, even when the pending macrotask was scheduled before the microtask. (setTimeout and events are macrotasks.)
E.g., in the environments where I've tested it, this outputs A, C, B reliably:
console.log("A");
setTimeout(_ => console.log("B"), 0);
Promise.resolve().then(_ => console.log("C"));
But the JavaScript spec doesn't require it.
As Bergi points out, for user agent environments, the HTML5 spec covers this in its specification for microtasks and macrotasks. But that's only applicable to user agent environments (like browsers).
Node doesn't follow that spec's definition, for instance (not least because its timer functions return objects, not numbers), but Node also gives us A, C, B above, because (thanks Benjamin Gruenbaum!) it runs promise resolutions after the nextTick queue but before any timer or I/O callbacks. See his gist for details.
Yes, that's what it means - another event might fire before the promise callback.
No, that won't happen - while ECMAScript allows it, the setTimeout spec does not.
setTimeout does not mean that the supplied function will be executed after the provided time. It adds the function to the end of the queue once the delay has elapsed.
It really depends on when your promise resolves, as to the order of execution of the two statements. In your example, setTimeout adds its callback to the macrotask queue, while the resolved promise's callback is queued as a microtask, so you can expect one two.
I am trying to get a grasp on JavaScript asynchronous functions and callbacks.
I got stuck on the concept of callback functions. In some places I read that they are used to get sequential execution of code (mostly in the context of jQuery, e.g. animate), and in other places, especially in the context of Node.js, that they are used for asynchronous, parallel-style execution and to avoid blocking code.
So can some expert on this topic please shed light on this and clear up the fuzz in my mind (examples??),
so I can make up my mind about the usage of callback functions?
Or does that solely depend on where you are calling/placing a callback function in your code?
Thanks.
P.S.: I am afraid this question might be closed as subjective, but I still hope for a concrete answer (perhaps some examples).
Edit: actually, this is the example from the internet that confuses me:
function do_a() {
    // simulate a time consuming function
    setTimeout(function() {
        console.log('`do_a`: this takes longer than `do_b`');
    }, 1000);
}

function do_b() {
    console.log('`do_b`: this is supposed to come out after `do_a` but it comes out before `do_a`');
}

do_a();
do_b();
Result
`do_b`: this is supposed to come out after `do_a` but it comes out before `do_a`
`do_a`: this takes longer than `do_b`
If JS is sequential, then do_b should always come after do_a, according to my understanding.
The core of JavaScript is largely synchronous, in that functions complete their task fully before returning. Prior to the advent of AJAX, it was really only setTimeout and setInterval that provided asynchronous behavior.
However, it's easy to forget that event handlers are, effectively, async code. Attaching a handler does not invoke the handler code, and that code isn't executed until some unknowable time in the future.
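For example (assuming a button with id "save" exists on the page):

// Attaching the handler runs now; the handler body runs at some unknowable
// later time, whenever the user actually clicks.
document.querySelector("#save").addEventListener("click", function () {
    console.log("clicked");
});
console.log("handler attached");   // this always logs before "clicked"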
Then came AJAX, with its calls to the server. These calls could be configured to be synchronous, but developers generally preferred async calls and used callback methods to implement them.
Then, we saw the proliferation of JS libraries and toolkits. These strove to homogenize different browsers' implementations of things and built on the callback approach to async code. You also started to see a lot more synchronous callbacks for things like array iteration or CSS query result handling.
Now, we are seeing Deferreds and Promises in the mix. These are objects that represent the value of a long running operation and provide an API for handling that value when it arrives.
NodeJS leans towards an async approach to many things; that much is true. However this is more a design decision on their part, rather than any inherent async nature of JS.
JavaScript is always a synchronous (blocking), single-threaded language, but we can make it act asynchronously through programming.
Synchronous code:
console.log('a');
console.log('b');
Asynchronous code:
console.log('a');
setTimeout(function() {
    console.log('b');
}, 1000);
setTimeout(function() {
    console.log('c');
}, 1000);
setTimeout(function() {
    console.log('d');
}, 1000);
console.log('e');
This outputs: a e b c d
In Node, long-running processes use process.nextTick() to queue up functions/callbacks. This is usually done inside Node's own APIs, and unless you're programming (outside the API) with something that is blocking, or with long-running code, it doesn't really affect you much. The link below should explain it better than I can.
howtonode process.nextTick()
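A tiny illustration of that queueing in Node (sketch):

console.log("start");
process.nextTick(function () {
    console.log("nextTick callback");   // runs after the current operation completes,
});                                     // before any timer or I/O callbacks
setTimeout(function () {
    console.log("timeout callback");    // runs on a later turn of the event loop
}, 0);
console.log("end");
// Output: start, end, nextTick callback, timeout callback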
jQuery AJAX also takes callbacks and such, as it is coded not to wait for server responses before moving on to the next block of code. It just remembers the function to run when the server responds. This is based on the XMLHttpRequest object that browsers expose. The XHR object will remember the function to call back when the response returns.
JavaScript's setTimeout(fn, 0) will run a function once the call stack is empty (the next available free tick), which can be used to create async-like features. See the setTimeout(fn, 0) question on Stack Overflow.
To summarise: the async abilities of JavaScript have as much to do with the environments it is programmed in as with JavaScript itself. You do not gain any magic by just using lots of function calls and callbacks, unless you're using some API/script.
The jQuery Deferred Object is another good link for the async capabilities of jQuery. Googling will also find you information on how jQuery Deferred works, for more insight.
In JavaScript the term "asynchronous" typically refers to code that gets executed when the call stack is empty and the engine picks a job from one of its job queues for execution.
Once code is being executed, it represents a synchronous sequence of execution, which continues until the call stack is empty again. This sequence of execution will not be interrupted by events in order to execute some other JavaScript code (when we discard Web Workers). In other words, a single JavaScript environment has no preemptive concurrency.
While synchronous execution is ongoing, events might be registered as jobs in some job queues, but the engine will not process those before first properly executing what is on the call stack. Only when the call stack is empty will the engine take notice of the job queues, pick one according to priority, and execute it (and that is what is called asynchronous).
Callbacks
Callbacks can be synchronous or asynchronous -- this really depends on how they are called.
For instance, here the callback is executed synchronously:
new Promise(function (resolve) { /* .... */ });
And here the callback is executed asynchronously:
setTimeout(function () { /* ... */ });
It really depends on the function that takes the callback as an argument, and on how it deals with eventually calling that callback.
Ways to get code to execute asynchronously
The core ECMAScript language does not offer a lot of ways to do this. The well-known ones are offered via other APIs, such as the Web API, which are not part of the core language (setTimeout, setInterval, requestAnimationFrame, fetch, queueMicrotask, addEventListener, ...).
Core ECMAScript offers Promise.prototype.then and (depending on that) await. The callback passed to then is guaranteed to execute asynchronously. await will ensure that the next statements in the same function will be executed asynchronously: await makes the current function return, and this function's execution context will be restored and resumed by a job.
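A minimal sketch of that order of resumption:

async function run() {
    console.log("before await");        // runs synchronously, as part of calling run()
    await null;                         // run() returns here; the rest resumes later, as a job
    console.log("after await");         // executed asynchronously, with a fresh stack
}
run();
console.log("after calling run()");
// Logs: "before await", "after calling run()", "after await"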
It also offers a way to be notified when an object has been garbage collected, with FinalizationRegistry.
Web Workers
Web Workers will execute in a separate execution environment, with their own call stack. Here preemptive concurrency is possible. When the term asynchronous is used in the JavaScript world, it typically does not refer to this kind of parallelism, although the communication with a Web Worker happens via asynchronous callback functions.
JavaScript by default is "synchronous"; it's the web APIs that handle "asynchronous" behaviour.
As for the setTimeout example:
console.log(...) in the global scope runs straight away, while the calls inside functions wrapped in setTimeout wait in the callback queue, only to be pushed onto the call stack once the stack is free. Thus they take time. Also, the time specified is not exact; it is the minimum time after which that piece of code may run.
Thanks!