I know that iterators aren't really handled async with node and typescript, but the syntax sugar would seem to lie to me.
If I use a construct like this:
async runChecks(checks) {
  log('start loop')
  checks.forEach(async (check) => {
    const actionResult = await this.runAction(check)
    console.log('found 1 actionResult', actionResult)
  })
  log('end loop')
}
I would expect the async to apply to each inner iteration and wait on the loop to complete.
However the log output begs to differ:
start loop
end loop
found 1 actionResult {
So the inner event happens "after" the loop has run.
Is this correct/expected behavior? It seems a bit misleading.
I've seen some other syntax like:
Promise.all(list.map(elem => someAsyncFn(elem)))
which is another way to do it, but a bit hard to read.
A good old for (const elem of list) seems to properly observe the async of its outer wrapper, which is the way I usually end up going.
I also just found for...await of
So...
wondering if I have the syntax wrong in the first example? seems like it should work.
what is the most recommended way in 2020 to do this?
do generators or other techniques help?
related:
https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-3.html#async-iterators
what's the best method to handle asynchronous operations in an iterator with await?
None of the typical array iteration methods such as .forEach() and .map() are promise-aware. .map() can be used to collect an array of promises that you then await with Promise.all(), but that runs everything in parallel, not sequentially (useful sometimes, but it appears to not be what you were asking to do).
If you want to run the operations sequentially, then use a regular for loop instead of .forEach() since for will wait for your await and will properly sequence your asynchronous operations. This is the built-in, modern way to do it.
async runChecks(checks) {
  log('start loop')
  for (const check of checks) {
    const actionResult = await this.runAction(check)
    console.log('found 1 actionResult', actionResult)
  }
  log('end loop')
}
for await (loop parameters here) is when you have an actual asynchronous iterator which is not what you have here. Your situation is a regular iterator with an asynchronous operation inside it.
wondering if I have the syntax wrong in the first example? seems like it should work.
Your async callback you pass to .forEach() returns a promise, but .forEach() doesn't pay any attention to it. It just blindly goes right onto the next iteration of the loop (even though the asynchronous operations in the first iteration haven't finished yet). So, that's why it doesn't properly sequence things. .forEach() is not promise-aware. In fact, it finishes starting every single iteration of the loop before any of the asynchronous operations inside the loop have a chance to finish.
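To make the ordering concrete, here is a small self-contained sketch. It is only an illustration: delay stands in for an asynchronous operation like this.runAction, and the order array records what an observer would see logged.

```javascript
// delay stands in for any asynchronous operation (an assumption for illustration)
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// .forEach() version: the loop "finishes" before any callback's await resolves
async function withForEach(checks, order) {
  order.push('start');
  checks.forEach(async (check) => {
    await delay(10);
    order.push(check); // lands after 'end'
  });
  order.push('end');
}

// for...of version: each await pauses the outer async function
async function withForOf(checks, order) {
  order.push('start');
  for (const check of checks) {
    await delay(10);
    order.push(check); // lands before 'end'
  }
  order.push('end');
}
```

Running withForEach records start, end, then the checks; withForOf records start, the checks in order, then end.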
what is the most recommended way in 2020 to do this?
A regular for loop is the modern way to use await in a loop when you want the loop iterations to wait for the asynchronous operations before proceeding.
do generators or other techniques help?
Sometimes in some circumstances, but not needed here.
Avoid using forEach with promises, as it's not promise-aware.
from MDN
forEach does not wait for promises. Kindly make sure you are aware of the implications while using promises (or async functions) as forEach callback.
The Promise.all approach is able to process the tasks in parallel (e.g. multiple network requests), while a loop is sequential. It might look uglier, but it can be beneficial for some tasks.
async runChecks(checks) {
  log('start loop');
  await Promise.all(
    checks.map((check) => this.runAction(check))
  );
  log('end loop');
}
Related
Is it legit (or good practice) to do a loop with a higher-order function - like Array.map() - to perform some side effects?
This is mostly a theoretical question since I notice that sometimes I found (or perform myself) some loops using the .map() method, like this:
let myObject = { ... }
let myItemsArray = [ ... , ... ]

myItemsArray.map((item, index) => {
  // do some side effect, for example:
  myObject.item[index] = item
})
But I know that this map() method actually returns an array. So calling myItemsArray.map() is like returning an array without assigning it to any variable.
So the question, is this legit? Should I avoid this and use a classic for() loop or a .forEach() method?
Side question: one of the reasons I'm asking this is because I need to perform some loops on an async function so promises and await operators are involved, I remember that forEach() loops are not ideal on an async function, but wanted to know why.
So the question, is this legit? Should I avoid this and use a classic for() loop or a .forEach() method?
If you aren't returning anything from the map function, then you should use forEach instead. You end up with the same result but you don't imply, to anyone maintaining your code, that you are returning something useful.
one of the reasons I'm asking this is because I need to perform some loops on an async function so promises and await operators are involved, I remember that forEach() loops are not ideal on an async function, but wanted to know why.
Neither forEach nor map will await if the function you pass to it returns a promise (which async functions do).
So there are three possible scenarios here:
// Your loop
...myItemsArray...
// Code that comes after the loop
...etc...
A: The items in the loop need to handled sequentially
Use a regular for () loop with await inside it; the outer async function will pause at each iteration until that operation finishes.
B: The items in the loop can be handled in parallel and the code that comes after doesn't need to wait for it
Use a forEach.
C: The code that comes after needs to wait for everything in the loop to finish
Use a map. Pass the array of returned promises to Promise.all. Then await that.
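The three scenarios can be sketched as follows. This is only an illustration: fetchItem is a hypothetical async operation standing in for the real work.

```javascript
// fetchItem is a hypothetical async operation used only for illustration
const fetchItem = (x) => new Promise((resolve) => setTimeout(() => resolve(x * 2), 10));

// A: sequential - each iteration waits for the previous one to finish
async function sequential(items) {
  const results = [];
  for (const item of items) {
    results.push(await fetchItem(item));
  }
  return results;
}

// B: parallel, nothing waits - results (and errors!) are simply dropped
function fireAndForget(items) {
  items.forEach((item) => { fetchItem(item); });
}

// C: parallel, but the caller can await the whole batch
function parallel(items) {
  return Promise.all(items.map((item) => fetchItem(item)));
}
```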
I’m working on an Angular 6 application and I’ve been told the following is an anti-pattern:
await someFunction().then(result => {
console.log(result);
});
I realize that it is pointless to await a promise chain. If someFunction() returns a promise, you don’t need a promise chain if you’re awaiting it. You can do this:
const result = await someFunction();
console.log(result);
But I’m being told awaiting a promise chain can cause bugs, or that it will break things in my code. If the first code snippet above does the same thing as the second snippet, what does it matter which one is used? What dangers does the first snippet introduce that the second one doesn’t?
I’m being told awaiting a promise chain will break things in my code.
Not necessarily, your two code snippets do indeed work the same (as long as someFunction() really returns a promise).
What does it matter which one is used? What dangers does the first snippet introduce that the second one doesn’t?
It's harder to understand and maintain, it's confusing to mix different styles. Confusion leads to bugs.
Consider that you would need to add another promise call at the location of the console.log() call, or even a conditional return from the function. Can you use await in the callback like elsewhere in the function, do you need to return the result from the then callback, is it even possible to return from the outer function? All these questions don't even come up in the first snippet. And while they can be easily answered for your toy example, it might not be as easy in real code with more complex and nested control flow.
So you should prefer the more concise and clean one. Stick to await for consistency, avoid then in async functions [1].
[1]: Of course, there's always an exception to the rule. I would say that it's cleaner to use promise chaining for error handling in some cases where you would use catch or the second then callback.
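As a sketch of that exception, compare the two styles of error handling. mightFail is a hypothetical operation that always rejects, used only for illustration.

```javascript
// mightFail is a hypothetical operation that always rejects
const mightFail = () => Promise.reject(new Error('boom'));

// await style: a rejection surfaces as an exception in try/catch
async function withTryCatch() {
  try {
    return await mightFail();
  } catch (err) {
    return 'recovered: ' + err.message;
  }
}

// chaining style: for pure error handling, .catch() can read more cleanly
function withCatch() {
  return mightFail().catch((err) => 'recovered: ' + err.message);
}
```

Both resolve to the same value; which reads better is a matter of taste and context.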
Under the hood, async/await is just promises.
That is, when you have some code that looks like:
const result = await myAsyncFunction();
console.log(result);
That's exactly the same as writing:
myAsyncFunction().then(data => {
  const result = data;
  console.log(result);
});
The reason, then, that you shouldn't mix async/await and .then chains is that it's confusing.
It's better to just pick one style, and stick to it.
And while you're picking one - you might as well pick async/await - it's more understandable.
If your then code returned a promise instead of calling console.log, your first example would await, but your second example would not.
When you use async/await, you catch rejections in try/catch blocks. Your code will be less nested and clearer.
Using then often results in more nesting, and harder to read code.
You can await anything, whether or not it returns a promise. Sometimes this future-proofs calling a method that may one day become async or just return a promise without declaring async.
The downsides are complexity, performance, and compatibility, all of which pale in comparison to the gains.
I find that if you rely on a function's return value after calling it, and it is or may eventually become asynchronous, decorate your calls with await to your heart's delight, whether or not the function is currently async or returns a promise.
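A minimal sketch of that future-proofing idea: await wraps non-promise values in Promise.resolve(), so the call site survives a function later turning async. Both functions here are hypothetical names for illustration.

```javascript
function getCountSync() { return 42; }         // today: plain synchronous function
async function getCountAsync() { return 42; }  // tomorrow: same API, now async

async function report() {
  // Both calls work unchanged, because await accepts plain values too
  const a = await getCountSync();
  const b = await getCountAsync();
  return a + b;
}
```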
After reading http://www.promisejs.org/patterns, I saw this Javascript ECMA 6.0 pattern.
function all(promises) {
  var accumulator = [];
  var ready = Promise.resolve(null);

  promises.forEach(function (promise, ndx) {
    ready = ready.then(function () {
      return promise;
    }).then(function (value) {
      accumulator[ndx] = value;
    });
  });

  return ready.then(function () { return accumulator; });
}
I am curious whether there is another way with Javascript promises to set promiseChain = promiseChain.all() to meet the objective of
resolving a long chain of promises in their original sequential order.
I found this StackOverflow article. http://stackoverflow.com/questions/28066429/promise-all-order-of-resolved-values
which is relevant to my question.
Also, could I use recursion rather than looping to allow evaluation of promises conditional on the resolution or error handling in the previous promise? Thank you.
At http://www.promisejs.org/patterns I saw this Javascript ECMA 6.0 pattern
No, it's not a pattern at all. It was meant to serve as an explanation of how Promise.all works, suggesting that it could be implemented like the function all(promises) { … } shown. Only that this implementation is absolutely horrible, inelegant, and fails to meet the specification requirements in many ways.
The page puts it as "it should give you an idea of how promises can be combined in interesting ways." Yeah, interesting maybe, but it's really just incorrect code that should not be taken up by anyone as a pattern.
I am curious whether there is another way to meet the objective of resolving a long chain of promises in their original sequential order.
That makes no sense. A promise chain (a promise built from many successive then calls) is already implicitly sequenced, you don't have to do anything special for that.
If you are talking about an array of promises (similar to how Promise.all takes one), multiple arbitrary promises are independent from each other and do not form a sequence - nor can they be forced to do anything in sequence. Remember that a promise is the result of a running asynchronous task, it's not a task that can do anything.
Could I use recursion rather than looping to allow evaluation of promises conditional on the resolution or error handling in the previous promise?
Yes, a recursive approach goes very well with promises and is in fact the only solution to unbounded conditional repetition.
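A sketch of that recursive pattern: a retry helper that repeats an async operation until it succeeds or attempts run out. attemptTask is a hypothetical promise-returning function introduced for illustration.

```javascript
// Retries a promise-returning function, recursing on each failure.
// attemptTask is a hypothetical operation that may reject.
function retry(attemptTask, attemptsLeft) {
  return attemptTask().catch((err) => {
    if (attemptsLeft <= 1) throw err;            // out of attempts: propagate the error
    return retry(attemptTask, attemptsLeft - 1); // otherwise: recurse and try again
  });
}
```

The repetition count is decided at run time by the previous promise's outcome, which is exactly the "unbounded conditional repetition" a plain loop over a fixed array cannot express.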
Can you explain me the following phrase (taken from an answer to Stack Overflow question What are the differences between Deferred, Promise and Future in Javascript?)?
What are the pros of using jQuery promises against using the previous jQuery callbacks?
Rather than directly passing callbacks to functions, something which
can lead to tightly coupled interfaces, using promises allows one to
separate concerns for code that is synchronous or asynchronous.
A promise is an object that represents the result of an asynchronous operation, and because of that you can pass it around, and that gives you more flexibility.
If you use a callback, at the time of the invocation of the asynchronous operation you have to specify how it will be handled, hence the coupling. With promises you can specify how it will be handled later.
Here's an example: imagine you want to load some data via ajax, and while doing that you want to display a loading page.
With callbacks:
var loadData = function(){
  showLoadingScreen();
  $.ajax("http://someurl.com", {
    complete: function(data){
      hideLoadingScreen();
      // do something with the data
    }
  });
};
The callback that handles the data coming back has to call hideLoadingScreen.
With promises you can rewrite the snippet above so that it becomes more readable and you don't have to put the hideLoadingScreen in the complete callback.
With promises:
var getData = function(){
  showLoadingScreen();
  return $.ajax("http://someurl.com").promise().always(hideLoadingScreen);
};

var loadData = function(){
  var gettingData = getData();
  gettingData.done(doSomethingWithTheData);
};

var doSomethingWithTheData = function(data){
  // do something with data
};
UPDATE: I've written a blog post that provides extra examples and a clear description of what a promise is and how its use compares to using callbacks.
The coupling is looser with promises because the operation doesn't have to "know" how it continues, it only has to know when it is ready.
When you use callbacks, the asynchronous operation actually has a reference to its continuation, which is not its business.
With promises, you can easily create an expression over an asynchronous operation before you even decide how it's going to resolve.
So promises help separate the concerns of chaining events versus doing the actual work.
I don't think promises are more or less coupled than callbacks, just about the same.
Promises however have other benefits:
If you expose a callback, you have to document whether it will be called once (like in jQuery.ajax) or more than once (like in Array.map). Promise handlers are always called exactly once.
There's no way to signal an error by throwing an exception into a callback, so you have to provide a separate callback for the error case.
Only one callback can be registered, whereas a promise accepts many; and you can register them AFTER the promise has settled and they will still be called.
In a typed declaration (TypeScript), a Promise makes the signature easier to read.
In the future, you can take advantage of an async / yield syntax.
Because they are standard, you can make reusable components like this one:
function disableScreen<T>(promiseGenerator: () => Promise<T>): Promise<T> {
  // create transparent div
  return promiseGenerator().then(val => {
    // remove transparent div
    return val;
  }, error => {
    // remove transparent div
    throw error;
  });
}
disableScreen(()=>$.ajax(....));
More on that: http://www.html5rocks.com/en/tutorials/es6/promises/
EDIT:
Another benefit is writing a sequence of N async calls without N levels of indentation.
Also, while I still don't think it's the main point, now I think they are a little bit more loosely coupled for this reasons:
They are standard (or at least try to be): code in C# or Java that uses strings is more loosely coupled than similar code in C++, because of the different implementations of strings there, making it more reusable. Having a standard promise, the caller and the implementation are less coupled to each other because they don't have to agree on a (pair of) custom callbacks with custom parameter orders, names, etc. The fact that there are many different flavors of promises doesn't help, though.
They promote more expression-based programming, easier to compose, cache, etc.:
var cache: { [key: string]: Promise<any> } = {};

function getData(key: string): Promise<any> {
  return cache[key] || (cache[key] = getFromServer(key));
}
You can argue that expression-based programming is more loosely coupled than imperative/callback-based programming, or at least that they pursue the same goal: composability.
Promises reify the concept of delayed response to something.
They make asynchronous computation a first-class citizen as you can pass it around. They allow you to define structure if you want - monadic structure that is - upon which you can build higher order combinators that greatly simplify the code.
For example you can have a function that takes an array of promises and returns a promise of an array (usually this is called sequence). This is very hard to do, or even impossible, with callbacks. And such combinators don't just make code easier to write, they make it much easier to read.
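One way such a combinator might be sketched. Note this takes an array of promise-returning functions rather than bare promises, so the operations genuinely run one after another; the name sequence and this signature are assumptions for illustration.

```javascript
// Runs promise-returning functions one at a time, collecting results in order.
function sequence(tasks) {
  return tasks.reduce(
    (chain, task) => chain.then((results) => task().then((value) => [...results, value])),
    Promise.resolve([]) // start with an already-resolved empty result
  );
}
```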
Now consider it the other way around to answer your question. Callbacks are an ad-hoc solution where promises allow for clearer structure and re-usability.
They aren't; this is just a rationalization that people who are completely missing the point of promises use to justify writing a lot more code than they would write using callbacks. Given that there is obviously no benefit in doing this, you can at least always tell yourself that the code is less coupled or something.
See what are promises and why should I use them for actual concrete benefits.
I was at a node.js meetup today, and someone I met there said that node.js has es6 generators. He said that this is a huge improvement over callback style programming, and would change the node landscape. Iirc, he said something about call stack and exceptions.
I looked them up, but haven't really found any resource that explains them in a beginner-friendly way. What's a high-level overview of generators, and how are the different (or better?) than callbacks?
PS: It'd be really helpful if you could give a snippet of code to highlight the difference in common scenarios (making an http request or a db call).
Generators, fibers and coroutines
"Generators" (besides being generators) are also the basic building blocks of "fibers" or "coroutines". With fibers, you can "pause" a function while waiting for an async call to return, avoiding the need to declare a callback function "on the spot" and create a "closure". Say goodbye to callback hell.
Closures and try-catch
...he said something about call stack and exceptions
The problem with "closures" is that even if they "magically" keep the state of the local variables for the callback, a "closure" can not keep the call stack.
At the moment of callback, normally, the calling function has returned a long time ago, so any "catch" block on the calling function cannot catch exceptions in the async function itself or the callback. This presents a big problem. Because of this, you can not combine callbacks+closures with exception catching.
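A minimal sketch of that limitation: by the time the callback throws, the outer try block has long since exited, so only a handler installed inside the callback itself can catch the error. The names here are made up for illustration.

```javascript
let where = null;

function outer() {
  try {
    setTimeout(() => {
      try {
        throw new Error('async boom'); // thrown long after outer() returned
      } catch (err) {
        where = 'inner'; // only this inner handler can catch it
      }
    }, 10);
  } catch (err) {
    where = 'outer'; // never reached: the outer try block already exited
  }
}
```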
Wait.for
...and would change the node landscape
If you use generators to build a helper lib like Wait.for-ES6 (I'm the author), you can completely avoid the callback and the closure, and now "catch blocks" work as expected, and the code is straightforward.
It'd be really helpful if you could give a snippet of code to highlight the difference in common scenarios (making an http request or a db call).
Check Wait.for-ES6 examples, to see the same code with callbacks and with fibers based on generators.
UPDATE 2021: All of this has been superseded by javascript/ES2020 async/await. My recommendation is to use Typescript and async/await (which is based on Promises also standardized)
Generators are one of many features in upcoming ES6. So in the future it will be possible to use them in browsers (right now you can play with them in FF).
Generators are constructors for iterators. That sounds like gibberish, so in easier terms: they allow you to create objects that can later be iterated, e.g. with loops calling the .next() method.
Generators are defined similarly to functions, except they have * and yield in them. The * tells that this is a generator; yield is similar to return.
For example this is a generator:
function *seq(){
  var n = 0;
  while (true) yield n++;
}
Then you can use this generator with var s = seq(). But in contrast to a function, it will not execute everything and give you a result; it will just instantiate the generator. Only when you run s.next() will the generator execute. Here yield is similar to return, but when the yield runs, it pauses the generator and control returns to the caller. When s.next() is called again, the generator resumes its execution from where it paused. In this case it will continue the while loop forever.
So you can iterate this with
for (var i = 0; i < 5; i++){
  console.log( s.next().value )
}
or with the for...of construct, which works on any iterable, including generators:
for (var n of seq()){
  if (n >= 5) break;
  console.log(n);
}
Those are the basics of generators (you can look at yield*, next(with_params), throw() and other additional constructs). Note that this is about generators in ES6 (so you can do all of this in node and in the browser).
But what does this infinite number sequence have to do with callbacks?
Important thing here is that yield pauses the generator. So imagine you have a very strange system which work this way:
You have a database with users and you need to find the name of a user with some ID, then you need to check your file system for a key matching this user's name, and then you need to connect to some FTP with the user's ID and key and do something after connecting. (Sounds ridiculous, but I want to show nested callbacks.)
Previously you would write something like this:
var ID = 1;
database.find({user : ID}, function(userInfo){
  fileSystem.find(userInfo.name, function(key){
    ftp.connect(ID, key, function(o){
      console.log('Finally ' + o);
    })
  })
});
Which is callback inside callback inside callback inside callback. Now you can write something like:
function *logic(ID){
  var userInfo = yield database.find({user : ID});
  var key = yield fileSystem.find(userInfo.name);
  var o = yield ftp.connect(ID, key);
  console.log('Finally ' + o);
}
var s = logic(1);
And then use it with s.next(); As you see there is no nested callbacks.
Because node heavily uses nested callbacks, this is why the guy was saying that generators could change the landscape of node.
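For completeness: calling s.next() once is not enough on its own; something has to feed each resolved value back into the generator. A minimal sketch of such a runner, with find standing in (as an assumption) for the database/fileSystem/ftp calls above:

```javascript
// Drives a generator that yields promises, feeding each resolved value back in.
function run(gen) {
  return new Promise((resolve, reject) => {
    function step(input) {
      const { value, done } = gen.next(input);
      if (done) return resolve(value);
      Promise.resolve(value).then(step, reject);
    }
    step();
  });
}

// find is a hypothetical stand-in for the async calls in the example above
const find = (x) => Promise.resolve(x + '-found');

function* logicDemo(id) {
  const user = yield find(id);
  const key = yield find(user);
  return 'Finally ' + key;
}

// run(logicDemo(1)) resolves once the generator returns
```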
A generator is a combination of two things - an Iterator and an Observer.
Iterator
An iterator is an object that lets you step through a sequence of values one at a time. From ES6 onwards, all collections (Array, Map, Set, WeakMap, WeakSet) conform to the Iterable contract.
A generator (iterator) is a producer. In iteration, the consumer PULLs values from the producer.
Example:
function *gen() { yield 5; yield 6; }
let a = gen();
Whenever you call a.next(), you're essentially pulling a value from the iterator and pausing execution at the yield. The next time you call a.next(), execution resumes from the previously paused state.
Observer
A generator is also an observer, through which you can send values back into the generator. This is better explained with an example.
function *gen() {
  document.write('<br>observer:', yield 1);
}

var a = gen();
var i = a.next();
while (!i.done) {
  document.write('<br>iterator:', i.value);
  i = a.next(100);
}
Here you can see that yield 1 is used like an expression which evaluates to some value. The value it evaluates to is the value sent as an argument to the a.next function call.
So, for the first time i.value will be the first value yielded (1), and when continuing the iteration to the next state, we send a value back to the generator using a.next(100).
Where can you use this in Node.JS?
Generators are widely used with the spawn function (from taskJS or co), which takes a generator and allows us to write asynchronous code in a synchronous fashion. This does NOT mean that async code is converted to sync code or executed synchronously. It means that we can write code that looks synchronous but internally is still async.
Sync is BLOCKING; async is WAITING. Writing code that blocks is easy. When PULLing, the value appears in the assignment position; when PUSHing, the value appears in the argument position of the callback.
When you use iterators, you PULL the value from the producer. When you use callbacks, the producer PUSHes the value to the argument position of the callback.
var i = a.next() // PULL
dosomething(..., v => {...}) // PUSH
In the first line you pull the value from a.next(); in the second, v => {...} is the callback, and a value is PUSHed into its argument position v.
Using this pull-push mechanism, we can write async programming like this,
let delay = t => new Promise(r => setTimeout(r, t));

spawn(function*() {
  // wait for 100 ms and send 1
  let x = yield delay(100).then(() => 1);
  console.log(x); // 1

  // wait for 100 ms and send 2
  let y = yield delay(100).then(() => 2);
  console.log(y); // 2
});
So, looking at the above code, we are writing async code that looks like it's blocking (the yield statements wait for 100ms and then continue execution), but it's actually waiting. The pause and resume property of generator allows us to do this amazing trick.
How does it work ?
The spawn function uses the yielded promise to PULL the pending state from the generator, waits until that promise is resolved, and PUSHes the resolved value back into the generator so it can consume it.
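A minimal sketch of how such a spawn function could be implemented. Real libraries like co are considerably more thorough (yielded arrays, thenables, edge cases); this only shows the core pull/push loop described above.

```javascript
// Minimal spawn: PULLs each yielded promise, PUSHes the resolved value back,
// and throws rejections into the generator so try/catch works there.
function spawn(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(advance) {
      let result;
      try {
        result = advance(); // gen.next(...) or gen.throw(...)
      } catch (err) {
        return reject(err); // the generator did not handle the error
      }
      if (result.done) return resolve(result.value);
      Promise.resolve(result.value).then(
        (value) => step(() => gen.next(value)),
        (err) => step(() => gen.throw(err))
      );
    }
    step(() => gen.next());
  });
}
```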
Use it now
So, with generators and spawn function, you can clean up all your async code in NodeJS to look and feel like it's synchronous. This will make debugging easy. Also the code will look neat.
BTW, this is coming to JavaScript natively for ES2017 - as async...await. But you can use them today in ES2015/ES6 and ES2016 using the spawn function defined in the libraries - taskjs, co, or bluebird
Summary:
function* defines a generator function which returns a generator object. The special thing about a generator function is that it doesn't execute when it is called using the () operator. Instead an iterator object is returned.
This iterator exposes a next() method. The next() method returns an object with a value property containing the yielded value, and a done property, a boolean which becomes true once the generator function has finished.
Example:
function* IDgenerator() {
  var index = 0;
  yield index++;
  yield index++;
  yield index++;
  yield index++;
}

var gen = IDgenerator(); // generates an iterator object

console.log(gen.next().value); // 0
console.log(gen.next().value); // 1
console.log(gen.next().value); // 2
console.log(gen.next()); // { value: 3, done: false }
console.log(gen.next()); // { value: undefined, done: true }
In this example we first create an iterator object. On this iterator object we can then call the next() method, which lets us jump from yield to yield. Each call returns an object with both a value and a done property.
How is this useful?
Some libraries and frameworks might use this construct to wait for the completion of asynchronous code, for example redux-saga.
async/await, the new syntax which lets you wait for async events, uses this under the hood. Knowing how generators work gives you a better understanding of how that construct works.
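As a sketch of that correspondence, the generator pattern above maps almost one-to-one onto async/await; fetchValue is a hypothetical async operation for illustration.

```javascript
// fetchValue is a hypothetical async operation
const fetchValue = (x) => Promise.resolve(x * 10);

async function main() {
  const a = await fetchValue(1); // plays the role of: yield fetchValue(1)
  const b = await fetchValue(2); // the engine resumes the function, like next(value)
  return a + b;
}
```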
To use the ES6 generators in node, you will need to either install node >= 0.11.2 or iojs.
In node, you will need to pass the harmony flag:
$ node --harmony app.js
or you can explicitly pass just the generators flag:
$ node --harmony_generators app.js
If you've installed iojs, you can omit the harmony flag.
$ iojs app.js
For a high-level overview of how to use generators, check out this post.