As I understand the current spec for JavaScript generators, you have to explicitly mark functions that contain yields. I wonder what the rationale behind this is.
If this is true, it would force people to write:
let thirdfunc = function*() {
    let value = 5;
    let other = yield 6;
    return value;
};
let secondfunc = function*() {
    yield* thirdfunc();
};
let firstfunc = function*() {
    yield* secondfunc();
};
let gen = function*() {
    // some code
    // more code
    yield* firstfunc();
    // and code
};
let it = gen();
let res = it.next();
while (!res.done) {
    res = it.next();
}
Which means generators would spread like cancer through a codebase, while in the end only the yielding and the handling of the iterator are really interesting to the developer.
I would find it much more practical to just define where I want to handle the iteration:
let thirdfunc = function() {
    let value = 5;
    let other = yield 6; // Change 1: incorporate yield
    return value;
};
let secondfunc = function() {
    thirdfunc();
};
let firstfunc = function() {
    secondfunc();
};
let gen = function*() { // Change 2: at this level I want to deal with descendant yields
    // some code
    // more code
    firstfunc();
    // and code
};
// Change 3: Handle iterator
let it = gen();
let res = it.next();
while (!res.done) {
    res = it.next();
}
If the browser then has to turn everything between the yield call and the generator handler (firstfunc, secondfunc, thirdfunc) into promise / future form, that should work automagically and not be the business of Javascript developers.
Or are there really good arguments for not doing this?
I described the rationale for this aspect of the design at http://calculist.org/blog/2011/12/14/why-coroutines-wont-work-on-the-web/ -- in short, full coroutines (which is what you describe) interfere with the spirit of JavaScript's run-to-completion model, making it harder to predict when your code can be "pre-empted" in a similar sense to multithreaded languages like Java, C#, and C++. The blog post goes into more detail and some other reasons as well.
To give you an idea what I'm trying to do:
As someone who teaches sorting algorithms, I'd like to enable students to easily visualize how their sorting algorithms work, and I found generators to be really useful for that, since I can interrupt the execution at any point. The following code that a student could write can be turned into an animation by my library:
function* bubbleSort(arr) {
    let done = false;
    while (!done) {
        done = true;
        for (let i = 1; i < arr.length; i++) {
            yield { type: "comparison", indexes: [i-1, i] };
            if (arr[i-1] > arr[i]) {
                yield { type: "swap", indexes: [i-1, i] };
                swap(arr, i-1, i);
                done = false;
            }
        }
    }
    return "sorted";
}
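For illustration, a minimal driver consuming those events could look like this (a sketch; `swap` is an assumed helper that exchanges two array elements, and `console.log` stands in for the animation step; the student's `bubbleSort` is repeated so the snippet runs standalone):

```javascript
// Assumed helper from the example: exchange two elements in place.
function swap(arr, i, j) {
    [arr[i], arr[j]] = [arr[j], arr[i]];
}

// The student's bubbleSort generator, repeated for completeness.
function* bubbleSort(arr) {
    let done = false;
    while (!done) {
        done = true;
        for (let i = 1; i < arr.length; i++) {
            yield { type: "comparison", indexes: [i-1, i] };
            if (arr[i-1] > arr[i]) {
                yield { type: "swap", indexes: [i-1, i] };
                swap(arr, i-1, i);
                done = false;
            }
        }
    }
    return "sorted";
}

const arr = [3, 1, 2];
for (const event of bubbleSort(arr)) {
    // each yielded event could drive one animation frame
    console.log(event.type, event.indexes);
}
console.log(arr); // [1, 2, 3]
```

Because the generator pauses at every yield, the driver fully controls the pacing of the animation.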
This works quite well, but it would be a lot better if I could write functions compare(i, j) and swap(i, j) that handle the yield internally (and, in the case of compare, also return a boolean value). So I'd like to be able to express the above as:
function* bubbleSort(arr) {
    let done = false;
    while (!done) {
        done = true;
        for (let i = 1; i < arr.length; i++) {
            if (compare(i-1, i)) {
                swap(arr, i-1, i);
                done = false;
            }
        }
    }
    return "sorted";
}
Is there a way to do this?
You could just do
if (yield* compare(i-1, i))
which will pass the yield calls inside of compare to the outside. Note that compare then has to be a generator function itself, and the yield* expression takes on its return value.
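Concretely, with generator delegation the helpers could look like this (a sketch; the helpers take the array explicitly, which the compare(i, j)/swap(i, j) signatures in the question left implicit):

```javascript
// Each helper is itself a generator: it yields its visualization event,
// and compare additionally reports its result via the yield* expression.
function* compare(arr, i, j) {
    yield { type: "comparison", indexes: [i, j] };
    return arr[i] > arr[j];
}

function* swap(arr, i, j) {
    yield { type: "swap", indexes: [i, j] };
    [arr[i], arr[j]] = [arr[j], arr[i]];
}

function* bubbleSort(arr) {
    let done = false;
    while (!done) {
        done = true;
        for (let i = 1; i < arr.length; i++) {
            // yield* forwards the comparison event outward and
            // evaluates to compare's boolean return value
            if (yield* compare(arr, i-1, i)) {
                yield* swap(arr, i-1, i);
                done = false;
            }
        }
    }
    return "sorted";
}
```

The yield* expression both forwards each helper's yields to the outer iterator and evaluates to the helper's return value, so compare can still report its boolean result to the sorting loop.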
I'm currently looking at async-await in C#, and noticed similarities to JavaScript promises. Looking into this I see that JavaScript is also going to support async-await statements, and that there are similarities between this and promises (look at this blog post for example).
On a whim, I wondered what JavaScript's implementation of async-await was and found this question (Java Equivalent of C# async/await?).
The accepted answer suggests that async-await (and by extension, I guess, promises) are implementations of a 'state machine'.
Question: What is meant by a 'state machine' in terms of promises, and are JavaScript promises comparable to C#'s async-await?
JavaScript promises are comparable to C# Task objects which have a ContinueWith function that behaves like .then in JavaScript.
By "state machines" it is meant that they are typically implemented with a state variable and a switch statement. The states are the places the function can be at when it runs synchronously. I think it's best to see how such a transformation works in practice. For example, let's say that your runtime only understands regular functions. An async function looks something like:
async function foo(x) {
    let y = x + 5;
    let a = await somethingAsync(y);
    let b = await somethingAsync2(a);
    return b;
}
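For concreteness, here's how that function behaves with stub implementations of the awaited operations (the stubs are assumptions, not part of the original; foo is repeated so the snippet runs standalone):

```javascript
// Hypothetical stubs standing in for the awaited operations.
function somethingAsync(v)  { return Promise.resolve(v * 2); }
function somethingAsync2(v) { return Promise.resolve(v + 1); }

// The async function from above, repeated for completeness.
async function foo(x) {
    let y = x + 5;
    let a = await somethingAsync(y);
    let b = await somethingAsync2(a);
    return b;
}

foo(5).then(b => console.log(b)); // (5 + 5) * 2 + 1 = 21
```

Any hand-written state machine that claims to emulate this function should produce the same result for the same stubs.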
Now, let's look at all the places the function can be when it executes a step synchronously:
async function foo(x) {
    // 1. first stage, initial
    let y = x + 5;
    let a = await somethingAsync(y);
    // 2. after first await
    let b = await somethingAsync2(a);
    // 3. after second await
    return b;
    // 4. done, with result `b`.
}
Now, since our runtime only understands synchronous functions - our compiler needs to do something to make that code into a synchronous function. We can make it a regular function and keep state perhaps?
let state = 1;
let waitedFor = null; // nothing waited for
let waitedForValue = null; // nothing to get from await yet.
function foo(x) {
    switch (state) {
        case 1: {
            var y = x + 5;
            waitedFor = somethingAsync(y); // set what we're waiting for
            return;
        }
        case 2: {
            var a = waitedForValue; // the result of the first await
            waitedFor = somethingAsync2(a);
            return;
        }
        case 3: {
            var b = waitedForValue; // the result of the second await
            returnValue = b; // where do we put this?
            return;
        }
        default: throw new Error("Shouldn't get here");
    }
}
Now, it's somewhat useful, but doesn't do anything too interesting - we need to actually run this as a function. Let's put the state in a wrapper and automatically run the promises when they're resolved:
function foo(x) { // note, not async
    // we keep our state
    let state = 1, numStates = 3;
    let waitedFor = null; // nothing waited for
    let waitedForValue = null, returnValue = null; // nothing to get from await yet.
    // and our modified function
    function stateMachine() {
        switch (state) {
            case 1: {
                var y = x + 5;
                waitedFor = somethingAsync(y); // set what we're waiting for
                return;
            }
            case 2: {
                var a = waitedForValue;
                waitedFor = somethingAsync2(a);
                return;
            }
            case 3: {
                var b = waitedForValue;
                returnValue = b;
                return;
            }
            default: throw new Error("Shouldn't get here");
        }
    }
    // let's keep a promise for the return value
    let resolve, p = new Promise(r => resolve = r); // keep a reference to the resolve
    // now let's kickstart it
    Promise.resolve().then(function pump(value) {
        waitedForValue = value; // feed the awaited result back in
        stateMachine();
        state++; // we have progressed to the next state
        if (state > numStates) { // the final state has run: settle the promise
            resolve(returnValue);
            return;
        }
        return Promise.resolve(waitedFor).then(pump);
    });
    return p; // return the promise
}
Effectively, the Promise.resolve().then(...) part calls the stateMachine and waits for whatever value is being awaited each time, until the machine reaches its final state, at which point it resolves the (previously returned) promise.
This is effectively what Babel or TypeScript do with your code too. What the C# compiler does is very close, with the biggest difference being that the machine is put in a class.
Note we are ignoring conditionals, exceptions and loops here - it makes things a little bit more complicated but not much harder (you just need to handle each case separately).
I have 2 very related questions. One practical, relating to a real problem I have, and one more theoretical. Let me start with the latter:
Q1: Does async/await + generators make sense?
In general, it doesn't make too much sense to have yield inside of callbacks or promises. The problem (among others) is that there could be races between multiple yields, and it's impossible to tell when the generator ends.
function* this_is_nonsense() {
    setTimeout(() => { yield 'a'; }, 500);
    setTimeout(() => { yield 'b'; }, 500);
    return 'c';
}
When using async/await, these problems seem to mostly go away:
function timeout(ns) {
    return new Promise((resolve) => setTimeout(resolve, ns));
}
async function* this_could_maybe_make_sense() {
    // now stuff is sequenced instead of racing
    await timeout(500);
    yield 'a';
    await timeout(500);
    yield 'b';
    return 'c';
}
I assume nothing like this is currently possible (please correct me if wrong!). So my first question is, is there a conceptual issue with using generators with async/await? If not, is it on any kind of roadmap? It seems like implementation would require more than just mashing together the two features.
Q2: How to write a lazy wiki crawler
This is a contrived version of a real code problem I have.
Imagine writing a function that does traversal of Wikipedia, starting from some arbitrary page and repeatedly following links (doesn't matter how - can be depth first, breadth first or something else). Assume we can asynchronously crawl Wikipedia pages to get linked pages. Something like:
async function traverseWikipedia(startLink) {
    const visited = {};
    const result = [];
    async function visit(link) {
        if (visited[link]) { return; }
        visited[link] = true;
        result.push(link);
        const neighboringLinks = await getNeighboringWikipediaLinks(link);
        for (let i = 0; i < neighboringLinks.length; i++) {
            await visit(neighboringLinks[i]);
        }
        // or to parallelize:
        /*
        await Promise.all(
            neighboringLinks.map(
                async (neighbor) => await visit(neighbor)
            )
        );
        */
    }
    await visit(startLink);
    return result;
}
This is fine... except that sometimes I only want the first 10 links, and this is going to crawl over tons of links. Sometimes I'm searching for the first 10 that contain the substring 'Francis'. I want a lazy version of this.
Conceptually, instead of a return type of Promise<Array<T>>, I want Generator<T>, or at least Generator<Promise<T>>. And I want the consumer to be able to do arbitrary logic, including asynchronous stopping condition. Here's a fake, totally wrong (i assume) version:
function* async traverseWikipedia(startLink) {
    const visited = {};
    function* async visit(link) {
        if (visited[link]) { return; }
        visited[link] = true;
        yield link;
        const neighboringLinks = await getNeighboringWikipediaLinks(link);
        for (let i = 0; i < neighboringLinks.length; i++) {
            yield* await visit(neighboringLinks[i]);
        }
    }
    yield* await visit(startLink);
}
This is the same nonsense from Q1, combining async/await + generators as if it did what I want. But if I did have it, I could consume it like this:
function find10LinksWithFrancis(startLink) {
    const results = [];
    for (const link of traverseWikipedia(startLink)) {
        if (link.indexOf('Francis') > -1) {
            results.push(link);
        }
        if (results.length === 10) {
            break;
        }
    }
    return results;
}
But I could also search for a different number of results or different strings, have asynchronous stopping conditions, etc. For example, it could show 10 results, and when the user presses a button, keep crawling and show the next 10.
I'm okay with just using promises/callbacks, without async/await, as long as I still get the same API for the consumer.
The caller of the generator can do asynchronous things. So I think it's possible to implement this, where the generator function yields the promise of neighbors, and it's up to the caller to call gen.next with the resulting actual neighbors. Something like:
function* traverseWikipedia(startLink) {
    const visited = {};
    function* visit(link) {
        if (visited[link]) { return; }
        visited[link] = true;
        yield { value: link };
        const neighboringLinks = yield { fetch: getNeighboringWikipediaLinks(link) };
        for (let i = 0; i < neighboringLinks.length; i++) {
            yield* visit(neighboringLinks[i]);
        }
    }
    yield* visit(startLink);
}
But that's not very nice to use... you have to detect the case of an actual result vs. the generator asking you to give it back an answer, it's less typesafe, etc. I just want something that looks like Generator<Promise<T>> and a relatively straightforward way to use it. Is this possible?
Edited to add: it occurred to me you could implement this if the body of the generator function could access the Generator it returned somehow... not sure how to do that though
Edited again to add example of consumer, and note: I would be okay with the consumer being a generator also, if that helps. I'm thinking it might be possible with some kind of engine layer underneath, reminiscent of redux-saga. Not sure if anyone has written such a thing
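One way to make the command-style generator tolerable is a small driver that interprets the yielded objects: plain { value } results are collected, while { fetch } commands are awaited and fed back into the generator via next(). This is a sketch in the spirit of the engine layer mentioned above; drive and example are illustrative names, not an established API:

```javascript
// Drives a generator that yields either { value } results or
// { fetch: promise } requests, feeding awaited results back in.
async function drive(gen, limit) {
    const results = [];
    let input;
    while (results.length < limit) {
        const { value, done } = gen.next(input);
        if (done) break;
        input = undefined;
        if ("fetch" in value) {
            input = await value.fetch; // answer the generator's request
        } else {
            results.push(value.value);
        }
    }
    return results;
}

// Hypothetical example generator using the same yield protocol.
function* example() {
    yield { value: "a" };
    const fetched = yield { fetch: Promise.resolve(["b", "c"]) };
    for (const item of fetched) {
        yield { value: item };
    }
}

drive(example(), 2).then(found => console.log(found)); // ["a", "b"]
```

Because the driver stops calling next() once it has enough results, the traversal stays lazy: anything past the limit is never fetched.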
I found some code online. I've squashed the original code down into this little excerpt that, when run, will print 1-20 to the console.
var NumbersFromOne = {
    *[Symbol.iterator] () {
        for (let i = 1; ; ++i) yield i;
    }
};
var take = function* (numberToTake, iterable) {
    let remaining = numberToTake;
    for (let value of iterable) {
        if (remaining-- <= 0) break;
        yield value;
    }
}
var printToTwenty = take(20, NumbersFromOne)
console.log(...printToTwenty);
Now, I understand that take() is a generator function.
When take() is called, it returns a generator object (an iterator).
The code "...printToTwenty" uses the spread operator to iterate through that iterator.
I understand that NumbersFromOne is an object.
I've come here looking for an explanation of what this part means:
*[Symbol.iterator] () {}
Declaring generator functions is done like this: function* () {}
So I'm assuming this isn't declaring a generator function.
* also doesn't represent the function name
* also can't be replaced with another operator (/, -, +)
What is the deal with that syntax, and why is the * before [Symbol.iterator]?
If placed after, it will not run.
I had considered that *[Symbol.iterator] () is a way to overwrite the existing iterator property, but then wouldn't it say this[Symbol.iterator]?
Thanks!
There are a few things that might make this code look complicated:
It uses the shorthand method notation together with a computed property name. What you're seeing here is actually the following:
var NumbersFromOne = {
    [Symbol.iterator]: function* () {
        for (let i = 1; ; ++i) yield i;
    }
};
Symbol.iterator creates a custom iterator for your NumbersFromOne object.
So your code basically means that the iterator of NumbersFromOne is defined as a generator, instead of manually having to define a function which returns an object with a next method:
var NumbersFromOne = {
    [Symbol.iterator]: function () {
        var i = 1;
        return {
            next: function() {
                return { value: i++, done: false };
            }
        };
    }
};
Using a generator creates the next function automatically for you. This allows you to yield when you need to.
It can then be called as:
const it = NumbersFromOne[Symbol.iterator]();
it.next(); // 1
it.next(); // 2
it.next(); // 3
// ...
Note: Written this way, this iterator never ends! So if you were to call it in a for ... of loop without an end-condition, it would freeze your program.
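One safe way to consume such an endless iterator is to bound the loop yourself (NumbersFromOne is repeated from above so the snippet runs standalone):

```javascript
// NumbersFromOne as defined above, repeated for completeness.
var NumbersFromOne = {
    *[Symbol.iterator] () {
        for (let i = 1; ; ++i) yield i;
    }
};

// Take only the first three values, then break out of the loop
// before the infinite iterator can run away.
const firstThree = [];
for (const n of NumbersFromOne) {
    firstThree.push(n);
    if (firstThree.length === 3) break;
}
console.log(firstThree); // [1, 2, 3]
```

Breaking out of a for ... of loop also calls the iterator's return() method, so the generator is cleaned up rather than left suspended.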
So my code is of the "let's use generators to avoid callback hell" variety. I'm trying to have an accessor function wrap a generator that handles opening IndexedDB. I need the generator to yield to the "callback", and then yield to the accessor function, so that the accessor function can return the created object which originated in the generator. Maybe this is a brainfart, but I'm kind of lost as to how to do this without polling on a boolean (such as setting idb_ready = false and waiting for idb_ready to become true before starting the while (gen.next()) business).
Code:
accessor = function() {
    var gen = function* () {
        var ret = {};
        var request = indexedDB.open("blah", 1, "description");
        request.onupgradeneeded = resume;
        var event = yield;
        ret.db_instance = event.target.result;
        yield ret;
    };
    var it = gen(); // create the iterator
    function resume(val) { it.next(val); }
    it.next(); // start the generator
    // HERE is where I need to wait for the second yield
}
Is there a way to do this without polling? Thanks!