So I'm fairly new to JavaScript and aware of asynchronous function calls. I have done quite a bit of research and found that if you want to run asynchronous calls in succession, you can use callback functions and promises. I understand how both of these approaches are useful when you're running just a few asynchronous functions, but I'm trying to tackle a completely different animal, at least to my knowledge. I'm currently building a site that needs to appear as if it's writing text to itself. To clue everyone in on my JS code, here is the function that writes to the webpage (I'm fairly new, so if you think you have a better solution, an example with a small description would be appreciated):
function write(pageText, elementId, delay) {
    var element = document.getElementById(elementId);
    var charCount = 0;
    setInterval(function() {
        if (charCount > pageText.length) {
            return;
        } else {
            element.innerHTML = pageText.substr(0, charCount++);
        }
    }, delay);
}
write("This is an example", 'someRandomDiv', 100);
<div id="someRandomDiv">
</div>
With this I'm trying to write lines of text to a webpage one after another. Essentially, I'm used to writing code like this in Java and C#:
function writePassage()
{
    var passage = ["message one", "message two", "... message n"];
    for (var i = 0; i < passage.length; i++)
    {
        write(passage[i], 'someRandomDiv', 100);
    }
}
Obviously this won't work, because the for loop in writePassage() will finish executing before even one or two of the asynchronous calls end. I'm asking if there is a sane solution to this problem where I have n asynchronous calls and each one must finish before the next is triggered. It's worth mentioning that I don't want to just run the loop above with an extra variable that tracks how long to delay each passage. I would prefer a programmatic way that forces each call to complete before the next one starts. Thanks for reading this monster question!
There are a few things you'll need to do to get this working.
First, your write function will need an asynchronous interface. As you mentioned it could either take a callback or return a promise. Taking a callback would look something like:
function write(pageText, elementId, delay, callback)
{
    var element = document.getElementById(elementId);
    var charCount = 0;
    var interval = setInterval(function() {
        if (charCount > pageText.length)
        {
            clearInterval(interval);
            callback();
        }
        else
        {
            element.innerHTML = pageText.substr(0, charCount++);
        }
    }, delay);
}
That calls callback when the full pageText has been written into element. Note that it also clears your interval timer when it's done, which avoids an event loop leak.
Then, you'll need to chain your asynchronous calls using this callback. You can do this quite cleanly with a library such as async:
function writePassage()
{
    var passage = ["message one", "message two", "... message n"];
    async.series(passage.map(function(text) {
        return function(done) {
            write(text, 'someRandomDiv', 100, done);
        };
    }));
}
But it's also not so much trouble to do by hand:
function writePassage()
{
    var passage = ["message one", "message two", "... message n"];
    var writeOne = function() {
        if (!passage.length) return;
        var text = passage.shift();
        write(text, 'someRandomDiv', 100, writeOne);
    };
    // Kick off the chain.
    writeOne();
}
That's just asynchronous recursion. Welcome to JavaScript. :)
A promise-based solution can also be pretty clean. First you need to return a promise from write:
function write(pageText, elementId, delay)
{
    return new Promise(function(resolve) {
        var element = document.getElementById(elementId);
        var charCount = 0;
        var interval = setInterval(function() {
            if (charCount > pageText.length)
            {
                clearInterval(interval);
                resolve();
            }
            else
            {
                element.innerHTML = pageText.substr(0, charCount++);
            }
        }, delay);
    });
}
Then you can create a chain of promises via reduction:
function writePassage()
{
    var passage = ["message one", "message two", "... message n"];
    passage.reduce(function(chain, text) {
        return chain.then(function() {
            return write(text, 'someRandomDiv', 100);
        });
    }, Promise.resolve());
}
In addition to Bo's answer, here is how you would do it with promises (because promises are awesome!). It is a bit more advanced, but I also find it more elegant (array method call on strings, reduce).
I also used arrow functions. If you need to support old browsers, you may want to replace them with regular functions.
// Return a promise resolved after time ms.
var wait = (time) => new Promise((resolve) => setTimeout(resolve, time));
function write(pageText, elementId, delay) {
    // Fetch the element.
    var element = document.getElementById(elementId);
    // Empty the element.
    element.innerHTML = '';
    // Reduce over each character of pageText with a resolved promise
    // as the initialiser, and return the resulting promise.
    return Array.prototype.reduce.call(pageText, (promise, char) => {
        // Chain onto the previous promise.
        return promise
            // First wait delay ms.
            .then(() => wait(delay))
            // Then add the current character to the element's innerHTML.
            .then(() => element.innerHTML += char);
    }, Promise.resolve());
}
var messages = ["message one", "message two", "... message n"];
messages.reduce((promise, message) => {
    return promise
        // Write the current message.
        .then(() => write(message, "the-element", 100))
        // Wait a bit after each message.
        .then(() => wait(400));
}, Promise.resolve());
<div id="the-element"></div>
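If you do need to support older browsers, the wait helper above can be rewritten with regular functions like this:

// The same `wait` helper without arrow functions.
var wait = function (time) {
    return new Promise(function (resolve) {
        setTimeout(resolve, time);
    });
};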
I am new to Node's async/sync concepts, and these days I am confused by the sequence in which async functions execute.
I find it hard to see the benefit of async calls in single-process Node.
Here is an example:
1. Sequence of async/await
Assume an async function:
async function readPathInFile(filePath) {
    // open filePath, read the content, and return
    // the file path contained in filePath (pseudocode)
    return pathContainedInFile;
}
We have several calls like:
var a = 'a.txt';
var b = await readPathInFile(a); // call_a
var c = await readPathInFile(b); // call_b
var d = await readPathInFile(c); // call_c
As we know, readPathInFile is defined as asynchronous, but the calls call_a, call_b, and call_c each rely on the previous one, so they run sequentially.
For these calls, there is no difference from synchronous calls.
So, what is the benefit of the asynchronous definition?
The callback concept has the same concern:
For example:
function readPathInFile(filePath, callback) {
    // open filePath, read the content, and pass
    // the file path contained in filePath to the callback (pseudocode)
    callback(pathContainedInFile);
}
var a = 'a.txt';
readPathInFile(a, (b) => {
    // call_a
    readPathInFile(b, (c) => {
        // call_b
        readPathInFile(c, (d) => {
            // call_c
        });
    });
});
call_z();
// call_z finishes without caring about a, b, and c, as long as it does not rely on them. This seems to be the only benefit.
Please correct me if my assumption is wrong.
As the term implies, asynchronous means that functions run out of order. In this respect your understanding is correct. What this implies, however, is not at all obvious.
To answer your question in a few words: the async/await keywords are syntactic sugar, meaning they're not so much intrinsic qualities of the language as shortcuts for saying something else. Specifically, the await keyword translates the expression into the equivalent Promise syntax.
function someFunction(): Promise<string> {
    return new Promise(function(resolve) {
        resolve('Hello World');
    });
}
console.log('Step 1');
console.log(await someFunction());
console.log('Step 3');
is the same as saying
...
console.log('Step 1');
someFunction().then(function (res) {
    console.log(res);
    console.log('Step 3');
});
Note the 3rd console.log is inside the then callback.
When you redefine someFunction using the async syntax, you'd get the following:
async function someFunction() {
    return 'Hello World';
}
The two are equivalent.
To answer your question
I'd like to reiterate that async/await are syntactic sugar; their effect can be replicated by existing functionality.
Their purpose, therefore, is to combine the worlds of asynchronous and synchronous code. If you've ever seen an older, larger JS file, you'll quickly notice how many callbacks are used throughout the script. await is the mechanism for using asynchronous functions while avoiding callback hell, but fundamentally, a function marked async will always be async regardless of how many awaits you put in front of it.
If you're struggling to understand how an await call works, just remember that the callback you pass to .then(...) can be called at any time. This is the essence of asynchronicity. await just adds a clean way to use it.
A visual example
I like analogies. Here's my favourite:
Picture yourself at a cafe. There are 3 people waiting in line. The barista takes your order and informs you that your coffee will be ready in 5 minutes. That means you can sit down at a table and read a magazine or whatever, instead of waiting in line for the barista to make your coffee and accept payment etc.
The advantage is that the people behind you in the queue can get their own things done, instead of waiting in line for the coffees to be ready.
This is fundamentally how asynchronous code works. The barista can only do one thing at a time, but their time can be optimised by allowing everyone in the queue to do other things while they wait.
Hope that helps.
This article (linked below in the comments) has an awesome illustration of this.
What is the benefit of the asynchronous definition?
To enable standard logical-flow code to run asynchronously. Contrast your examples:
var a = 'a.txt';
var b = await readPathInFile(a); // call_a
var c = await readPathInFile(b); // call_b
var d = await readPathInFile(c); // call_c
vs.
var a = 'a.txt';
readPathInFile(a, (b) => {
    // call_a
    readPathInFile(b, (c) => {
        // call_b
        readPathInFile(c, (d) => {
            // call_c
        });
    });
});
The latter is often called "callback hell." It's harder to reason about even in this simple case, and as soon as you add branching and such, it gets extremely hard. In contrast, branching with async functions and await is just like branching in synchronous code.
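For instance, here is a sketch of how branching reads with await, reusing the question's readPathInFile and a made-up condition:

async function readChain(start) {
    var next = await readPathInFile(start);
    // Branching is an ordinary `if`; no callback nesting required.
    if (next.indexOf("backup") !== -1) { // made-up condition
        next = await readPathInFile(next);
    }
    return next;
}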
Now, you could write readPathInFileSync (a synchronous version of readPathInFile), which would not be an async function and would let you do this:
var a = 'a.txt';
var b = readPathInFileSync(a); // call_a
var c = readPathInFileSync(b); // call_b
var d = readPathInFileSync(c); // call_c
So what's the difference between that and the async version at the beginning of the answer? The async version lets other things happen on the one main JavaScript thread while waiting for those I/O calls to complete. The synchronous version doesn't: it blocks the main thread waiting for I/O to complete, preventing anything else from being handled.
Here's an example, using setTimeout to stand in for I/O:
const logElement = document.getElementById("log");
function log(msg) {
    const entry = document.createElement("pre");
    entry.textContent = Date.now() + ": " + msg;
    logElement.appendChild(entry);
    entry.scrollIntoView();
}
// Set up "other things" to happen
const things = [
    "this", "that", "the other", "something else", "yet another thing", "yada yada"
];
let starting = false; // We need this flag so we can show
                      // the "Doing X work" message before
                      // we actually start doing it, without
                      // giving the false impression in
                      // rare cases that other things happened
                      // during sync work
otherThing();
function otherThing() {
    if (!starting) {
        log(things[Math.floor(Math.random() * things.length)]);
    }
    setTimeout(otherThing, 250);
}
// Stand-in for a synchronous task that takes n milliseconds
function doSomethingSync(n) {
    const done = Date.now() + n;
    while (Date.now() < done) {
        // Busy-wait. NEVER DO THIS IN REAL CODE,
        // THIS IS SIMULATING BLOCKING I/O.
    }
}
// Stand-in for an asynchronous task that takes n milliseconds
function doSomething(n) {
    return new Promise(resolve => {
        setTimeout(resolve, n);
    });
}
function doWorkSync() {
    doSomethingSync(200);
    doSomethingSync(150);
    doSomethingSync(50);
    doSomethingSync(400);
}
async function doWorkAsync() {
    await doSomething(200);
    await doSomething(150);
    await doSomething(50);
    await doSomething(400);
}
// Do the work synchronously
document.getElementById("run-sync").addEventListener("click", (e) => {
    const btn = e.currentTarget;
    btn.disabled = true;
    log(">>>> Doing sync work");
    starting = true;
    setTimeout(() => {
        starting = false;
        doWorkSync();
        log("<<<< Done with sync work");
        btn.disabled = false;
    }, 50);
});
// Do the work asynchronously
document.getElementById("run-async").addEventListener("click", (e) => {
    const btn = e.currentTarget;
    btn.disabled = true;
    log(">>>> Doing async work");
    starting = true;
    setTimeout(() => {
        starting = false;
        doWorkAsync().then(() => {
            log("<<<< Done with async work");
            btn.disabled = false;
        });
    }, 50);
});
html {
    font-family: sans-serif;
}
#log {
    max-height: 5em;
    min-height: 5em;
    overflow: auto;
    border: 1px solid grey;
}
#log * {
    margin: 0;
    padding: 0;
}
<div>Things happening:</div>
<div id="log"></div>
<input type="button" id="run-sync" value="Run Sync">
<input type="button" id="run-async" value="Run Async">
There are times you don't care about that blocking, which is what the xyzSync functions in the Node.js API are for. For instance, if you're writing a shell script and you just need to do things in order, it's perfectly reasonable to write synchronous code to do that.
In contrast, if you're running a Node.js-based server of any kind, it's crucial not to allow the one main JavaScript thread to be blocked on I/O, unable to handle any other requests that are coming in.
So, in summary:
It's fine to write synchronous I/O code if you don't care about the JavaScript thread doing other things (for instance, a shell script).
If you do care about blocking the JavaScript thread, writing code with async/await makes it much easier (vs. callback functions) to avoid doing that with simple, straightforward code that follows the logic of your operation rather than its temporality.
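As a concrete sketch using Node's fs module (the file name is just a placeholder):

const fs = require("fs");

// Synchronous: fine for a one-off shell script, but it blocks the
// JavaScript thread until the read completes.
const text = fs.readFileSync("data.txt", "utf8");
console.log(text.length);

// Asynchronous: the thread stays free to handle other work while
// the read is in flight.
fs.promises.readFile("data.txt", "utf8")
    .then((data) => console.log(data.length))
    .catch((err) => console.error(err));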
JavaScript Promises let asynchronous functions return values like synchronous methods: instead of immediately returning the final value, the asynchronous function returns a promise to provide the data at some point in the future.
Promises also solve the callback hell issue.
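For example, the nested readPathInFile calls from the earlier question flatten out once the function returns a promise; a sketch, assuming a promise-returning version:

readPathInFile('a.txt')
    .then((b) => readPathInFile(b)) // call_a
    .then((c) => readPathInFile(c)) // call_b
    .then((d) => console.log('done:', d)) // call_c
    .catch((err) => console.error(err));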
I have this problem in the jQuery Terminal library. I have an echo method that prints stuff on the terminal; you can print a string, a promise, or a function that returns a promise or a string (to simplify, let's assume a string or a promise).
But the issue is that if you echo a few promises and strings, they are not printed in order. The code was waiting with the next echo until the promise resolved, but that only works for one promise.
So I was thinking that I need some kind of data structure that keeps adding promises and waits for all of them, but I'm not sure how to do this.
The problem is that I can't just chain promises, because the echo method needs to be synchronous when there is nothing in the queue and you print a string. That is not how Promises/A+ behave: they are always async (even Promise.resolve()). I have a lot of unit tests that rely on echo being synchronous; changing that would be a breaking change, and I don't want that.
My idea was to just create an array of promises, but I'm not sure when I should remove a promise from the array, so that the queue can be empty when all promises are resolved and I can make synchronous calls again.
Something like:
class PromiseQueue {
    constructor() {
        this._promises = [];
    }
    add(promise) {
        this._promises.push(promise);
    }
    empty() {
        return !this._promises.length;
    }
    then(fn) {
        if (this.empty()) {
            fn();
        } else {
            Promise.all(this._promises).then(function() {
                // what to do with this._promises?
                fn();
            });
        }
    }
}
I guess it's not as simple as my attempt. I have no idea how to implement this behavior.
EDIT:
I have these two cases that I want to handle:
function render(text, delay) {
    return new Promise(resolve => {
        setTimeout(() => resolve(text), delay);
    });
}
term.echo(() => render('lorem', 1000));
term.echo('foo');
term.echo(() => render('ipsum', 1000));
term.echo('bar');
term.echo(() => render('dolor', 1000));
term.echo('baz');
setTimeout(function() {
    // this should render immediately because all promises
    // would be already resolved after 5 seconds
    term.echo('lorem ipsum');
    // so after the call I check the DOM and see the output;
    // this is important for unit tests, and it would be a huge
    // breaking change if echo didn't render strings synchronously
}, 5000);
NOTE: echoing a promise and echoing a function that returns a promise are the same in this example; the only difference is that a function is re-invoked on each re-render (e.g. when the browser or container is resized).
Another example is just:
term.echo('foo');
term.echo('bar');
term.echo('baz');
that should also be synchronous. I need a generic solution, so you don't need to know exactly what echo is doing.
I would not even use Promise.all here - wait only for the first promise in the queue.
const term = {
    /** an array while a promise is currently awaited, null when `echo` can be synchronous */
    _queue: null,
    echo(value) {
        if (this._queue) {
            this._queue.push(value);
        } else {
            this._echo(value);
        }
    },
    /** returns a promise if the `value` is asynchronous, undefined otherwise */
    _echo(value) {
        try {
            if (typeof value == "function") {
                value = value();
            }
            if (typeof value.then == "function") {
                this._queue ??= [];
                return Promise.resolve(value).then(console.log, console.error).finally(() => {
                    while (this._queue.length) {
                        if (this._echo(this._queue.shift())) {
                            return;
                        }
                    }
                    this._queue = null;
                });
            } else {
                console.log(value);
            }
        } catch (err) {
            console.error(err);
        }
    }
};
function render(text, delay) {
    return new Promise(resolve => {
        setTimeout(() => resolve(text), delay);
    });
}
term.echo('foo');
term.echo(() => render('lorem', 1000));
term.echo('bar');
term.echo(() => render('ipsum', 1000));
term.echo('baz');
term.echo(() => render('dolor', 1000));
term.echo('quam');
setTimeout(function() {
    // this should render immediately because all promises
    // would be already resolved after 5 seconds
    term.echo('lorem ipsum');
    console.log('end');
}, 5000);
console.log('echos queued');
While I was editing the question, I realized that this is similar to the exec behavior:
term.exec('timer --time 1000 "hello"');
term.exec('echo world');
term.exec('timer --time 1000 "hello"');
term.exec('echo world');
and I solved this using the same mechanism that was already proven to work.
I've added a flag:
if (is_promise(next)) {
    echo_promise = true;
}
similar to the paused flag.
Then, when the next promise is resolved, I used the same approach as in resume():
unpromise(next, function() {
    echo_promise = false;
    var original = echo_delay;
    echo_delay = [];
    for (var i = 0; i < original.length; ++i) {
        self.echo.apply(self, original[i]);
    }
});
unpromise is a helper function that I use all the time. It invokes the function as a then callback, or calls it immediately, so it's sync by default but async when needed. You can find the code on GitHub: jquery.terminal-src.js#L1072.
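For reference, a minimal sketch of what such a helper might look like (the real one in the linked source handles more cases):

// Invoke `callback` via .then() when `value` is a thenable,
// otherwise call it synchronously right away.
function unpromise(value, callback) {
    if (value && typeof value.then === 'function') {
        return value.then(callback);
    }
    return callback(value);
}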
And the last thing, in echo's main code:
if (echo_promise) {
    echo_delay.push([arg, options]);
} else {
    echo(arg);
}
This is not very clean code, because echo is invoked in multiple places, but it works. If you know a better solution, please share.
Maybe this kind of code can be abstracted into a single PromiseQueue interface.
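A sketch of what that abstraction might look like, based on the queue-while-pending mechanism from the answer above (the class shape and handler parameter are my own invention):

class PromiseQueue {
    constructor(handler) {
        this._handler = handler; // called with each value, in order
        this._queue = null;      // array while a promise is pending, null when idle
    }
    add(value) {
        if (this._queue) {
            this._queue.push(value); // something async is in flight: wait
        } else {
            this._process(value);    // idle: handle synchronously
        }
    }
    _process(value) {
        if (typeof value === 'function') {
            value = value(); // functions are invoked to get their value
        }
        if (value && typeof value.then === 'function') {
            this._queue = this._queue || [];
            Promise.resolve(value).then(this._handler).then(() => {
                var queued = this._queue;
                this._queue = null;
                queued.forEach((v) => this.add(v)); // drain in order
            });
        } else {
            this._handler(value); // synchronous path
        }
    }
}

Usage would be something like new PromiseQueue(renderLine) (renderLine being whatever actually writes to the terminal), with add playing the role of echo.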
I've been learning promises using Bluebird for two weeks now. I have them mostly understood, but when I went to solve a few related problems, it seems my knowledge fell apart. I'm trying to do this simple code:
var someGlobal = true;
promiseWhile(function() {
    return someGlobal;
}, function(result) { // possibly even use return value of 1st parm?
    // keep running this promise code
    return new Promise(....).then(....);
});
as a concrete example:
// This is some very contrived functionality, but let's pretend this is
// doing something external: ajax call, db call, filesystem call, etc.
// Simply return a number between 0-999 after a 0-999 millisecond
// fake delay.
function getNextItem() {
    return Promise.delay(Math.random() * 1000).then(function() {
        return Promise.cast(Math.floor(Math.random() * 1000));
    });
}
promiseWhile(function() {
    // this will never return false in my example so run forever
    return getNextItem() !== false;
}, // how to have result == return value of getNextItem()?
function(result) {
    result.then(function(x) {
        // do some work ...
    }).catch(function(err) {
        console.warn("A nasty error occurred!: ", err);
    });
}).then(function(result) {
    console.log("The while finally ended!");
});
Now I've done my homework! There is the same question, but geared toward Q.js here:
Correct way to write loops for promise.
But the accepted answers, as well as additional answers:
Are geared toward Q.js or RSVP
The only answer geared toward Bluebird uses recursion. That seems likely to cause a huge stack overflow in an infinite loop such as mine, or at best be very inefficient and create a very large stack for nothing. If I'm wrong, then fine! Let me know.
Don't allow you to use the result of the condition. This isn't a requirement; I'm just curious if it's possible. Of the code I'm writing, one use case needs it and the other doesn't.
Now, there is an answer regarding RSVP that uses an async() method. What really confuses me is that Bluebird's documentation, and code I've seen in the repository, mention a Promise.async() call, but I don't see it in my latest copy of Bluebird. Is it in the git repository only, or something?
It's not 100% clear what you're trying to do, but I'll write an answer that does the following things you mention:
Loops until some condition in your code is met
Allows you to use a delay between loop iterations
Allows you to get and process the final result
Works with Bluebird (I'll code to the ES6 promise standard which will work with Bluebird or native promises)
Does not have stack build-up
First, let's assume you have some async function that returns a promise whose result is used to determine whether to continue looping or not.
function getNextItem() {
    return Promise.delay(Math.random() * 1000).then(function() {
        return Math.floor(Math.random() * 1000);
    });
}
Now, you want to loop until the value returned meets some condition
function processLoop(delay) {
    return new Promise(function(resolve, reject) {
        var results = [];
        function next() {
            getNextItem().then(function(val) {
                // add to result array
                results.push(val);
                if (val < 100) {
                    // found a val < 100, so be done with the loop
                    resolve(results);
                } else {
                    // run another iteration of the loop after delay
                    setTimeout(next, delay);
                }
            }, reject);
        }
        // start first iteration of the loop
        next();
    });
}
processLoop(100).then(function(results) {
    // process results here
}, function(err) {
    // error here
});
If you wanted to make this more generic so you could pass in the function and comparison, you could do this:
function processLoop(mainFn, compareFn, delay) {
    return new Promise(function(resolve, reject) {
        var results = [];
        function next() {
            mainFn().then(function(val) {
                // add to result array
                results.push(val);
                if (compareFn(val)) {
                    // condition met, so be done with the loop
                    resolve(results);
                } else {
                    // run another iteration of the loop after delay
                    if (delay) {
                        setTimeout(next, delay);
                    } else {
                        next();
                    }
                }
            }, reject);
        }
        // start first iteration of the loop
        next();
    });
}
processLoop(getNextItem, function(val) {
    return val < 100;
}, 100).then(function(results) {
    // process results here
}, function(err) {
    // error here
});
Your attempts at a structure like this:
return getNextItem() !== false;
Can't work, because getNextItem() returns a promise, which is always !== false since a promise is an object. If you want to test a promise's value, you have to use .then() to get it, and you have to do the comparison asynchronously, so you can't directly return a value like that.
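A sketch of the unwrapped comparison:

// Unwrap the promise first; only then can the value be compared.
getNextItem().then(function (item) {
    if (item !== false) {
        // ... continue looping here
    }
});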
Note: While these implementations use a function that calls itself, this does not cause stack build-up because they call themselves asynchronously. That means the stack has already completely unwound before the function calls itself again, thus there is no stack build-up. This will always be the case from a .then() handler since the Promise specification requires that a .then() handler is not called until the stack has returned to "platform code" which means it has unwound all regular "user code" before calling the .then() handler.
Using async and await in ES2017
In ES2017, you can use async and await to "pause" a loop, which can make this type of iteration a lot simpler to code. It looks structurally more like a typical synchronous loop: it uses await to wait on promises, and because the function is declared async, it always returns a promise:
function delay(t) {
    return new Promise(resolve => {
        setTimeout(resolve, t);
    });
}
async function processLoop(mainFn, compareFn, timeDelay) {
    var results = [];
    // loop until condition is met
    while (true) {
        let val = await mainFn();
        results.push(val);
        if (compareFn(val)) {
            return results;
        } else {
            if (timeDelay) {
                await delay(timeDelay);
            }
        }
    }
}
processLoop(getNextItem, function(val) {
    return val < 100;
}, 100).then(function(results) {
    // process results here
}, function(err) {
    // error here
});
I'm kind of new to Node, but I understand that writing synchronous functions is a bad thing: it locks up the event loop or something. So it's good to write everything asynchronously.
But in some cases, writing everything async could be bad. For example I have a function that makes an API call (to a 3rd party API service) and then I have to write the result to a database. I need to do this a bunch over a short period of time, for example 500 times.
Calling this API 500 times asynchronously, and then writing to the database 500 times asynchronously, would probably get me banned from the API service (throttling) and overload my database server.
What is the best way to control or limit something like this? I want to keep things async so it's efficient, but I simply cannot take the above approach.
I've researched some Promise throttling approaches. Is that the correct way to approach this type of problem? Is there a better more appropriate way to do it?
The async npm package is wonderful and has several solutions that can be used in this particular situation. One approach is using a queue with a set concurrency limit (example taken directly from the async README):
// create a queue object with concurrency 2
var q = async.queue(function (task, callback) {
    console.log('hello ' + task.name);
    callback();
}, 2);

// assign a callback
q.drain = function() {
    console.log('all items have been processed');
};

// add some items to the queue
q.push({name: 'foo'}, function (err) {
    console.log('finished processing foo');
});
github.com/caolan/async#queue
In your particular situation, just wait to call the callback() until whatever timing or transaction detail you are waiting for has completed.
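For the API-plus-database case, that might look like this sketch (callApi and saveToDb are hypothetical stand-ins for your own functions):

// Process at most 2 tasks at a time; callback() fires only when the
// API call and the database write for a task have both finished.
var q = async.queue(function (task, callback) {
    callApi(task.params) // hypothetical API wrapper returning a promise
        .then(function (result) { return saveToDb(result); }) // hypothetical DB write
        .then(function () { callback(); })
        .catch(function (err) { callback(err); });
}, 2);

for (var i = 0; i < 500; i++) {
    q.push({ params: { id: i } });
}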
I am not sure how promise throttling works under the hood, but I believe promises are a better approach than setTimeout: with promises it is more event based. My issue with that npm package is that it offers no callback option for when a call is done, so my implementation would be something like:
class PromiseThrottler {
    constructor(maxParallelCalls) {
        this.maxParallelCalls = maxParallelCalls;
        this.currentCalls = 0; // number of parallel calls in flight at any point
        this.queue = []; // queue holding the waiting calls
    }
    // promiseFn - the fn wrapping the promise call we need to make,
    // thenChain - callback invoked once your async call is done,
    // args - arguments that need to be passed to the function
    add(promiseFn, thenChain, ...args) {
        this.queue.push({
            promiseFn, thenChain, args
        });
        this.call();
    }
    call() {
        if (!this.queue.length || this.currentCalls >= this.maxParallelCalls) return;
        this.currentCalls++;
        let obj = this.queue.shift();
        let chain = obj.args.length ? obj.promiseFn(...obj.args) : obj.promiseFn();
        if (obj.thenChain) chain.then(obj.thenChain);
        chain
            .catch(() => {})
            .then(() => {
                this.currentCalls--;
                this.call();
            });
        this.call();
    }
}
// usage
let PT = new PromiseThrottler(50)
    , rn = max => Math.floor(Math.random() * max) // generate a random number
    , randomPromise = id => new Promise(resolve => setTimeout(() => resolve(id), rn(5000))) // random-delay promise factory
    , i = 1
    , thenCall = id => {
        console.log('resolved for id:', id);
        let div = document.createElement('div');
        div.textContent = `resolved for id: ${id}`;
        document.body.appendChild(div);
    };
while (++i < 501) PT.add(randomPromise, thenCall, i);
One simple way of limiting this is to use setTimeout and do a "recursive" loop like this.
function asyncLoop() {
    makeAPICall(function(result) {
        writeToDataBase(result, function() {
            setTimeout(asyncLoop, 1000);
        });
    });
}
And of course you can also use that same strategy with promises.
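A sketch of the same strategy with promises, assuming makeAPICall and writeToDataBase return promises here:

function delay(ms) {
    return new Promise(function (resolve) {
        setTimeout(resolve, ms);
    });
}

function asyncLoop() {
    return makeAPICall()
        .then(function (result) { return writeToDataBase(result); })
        .then(function () { return delay(1000); }) // pause between iterations
        .then(asyncLoop);
}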
I was trying to use promises to force serialization of a series of Ajax calls. These Ajax calls are made one for each time a user presses a button. I can successfully serialize the operations like this:
// sample async function
// real-world this is an Ajax call
function delay(val) {
    log("start: ", val);
    return new Promise(function(resolve) {
        setTimeout(function() {
            log("end: ", val);
            resolve();
        }, 500);
    });
}
// initialize p to a resolved promise
var p = Promise.resolve();
var v = 1;
// each click adds a new task to
// the serially executed queue
$("#run").click(function() {
// How to detect here that there are no other unresolved .then()
// handlers on the current value of p?
p = p.then(function() {
return delay(v++);
});
});
Working demo: http://jsfiddle.net/jfriend00/4hfyahs3/
But, this builds a potentially never ending promise chain since the variable p that stores the last promise is never cleared. Every new operation just chains onto the prior promise. So, I was thinking that for good memory management, I should be able to detect when there are no more .then() handlers left to run on the current value of p and I can then reset the value of p, making sure that any objects that the previous chain of promise handlers might have held in closures will be eligible for garbage collection.
So, I was wondering how I would know in a given .then() handler that there are no more .then() handlers to be called in this chain and thus, I can just do p = Promise.resolve() to reset p and release the previous promise chain rather than just continually adding onto it.
I'm being told that a "good" promise implementation would not cause accumulating memory from an indefinitely growing promise chain. But, there is apparently no standard that requires or describes this (other than good programming practices) and we have lots of newbie Promise implementations out there so I have not yet decided if it's wise to rely on this good behavior.
My years of coding experience suggest that when implementations are new, when there's no evidence that all implementations behave a certain way, and when there's no specification that says they should, it might be wise to write your code in as "safe" a way as possible. In fact, it's often less work to just code around an uncertain behavior than to go test all relevant implementations to find out how they behave.
In that vein, here's an implementation of my code that seems to be "safe" in this regard. It just saves a local copy of the global last promise variable for each .then() handler and when that .then() handler runs, if the global promise variable still has the same value, then my code has not chained any more items onto it so this must be the currently last .then() handler. It seems to work in this jsFiddle:
// sample async function
// real-world this is an Ajax call
function delay(val) {
    log("start: ", val);
    return new Promise(function(resolve) {
        setTimeout(function() {
            log("end: ", val);
            resolve();
        }, 500);
    });
}
// initialize p to a resolved promise
var p = Promise.resolve();
var v = 1;
// each click adds a new task to
// the serially executed queue
$("#run").click(function() {
var origP = p = p.then(function() {
return delay(v++);
}).then(function() {
if (p === origP) {
// no more are chained by my code
log("no more chained - resetting promise head");
// set fresh promise head so no chance of GC leaks
// on prior promises
p = Promise.resolve();
v = 1;
}
// clear promise reference in case this closure is leaked
origP = null;
}, function() {
origP = null;
});
});
… so that I can then reset the value of p, making sure that any objects that the previous chain of promise handlers might have held in closures will be eligible for garbage collection.
No. A promise handler that has been executed (when the promise has settled) is no longer needed and is implicitly eligible for garbage collection. A resolved promise does not hold onto anything but the resolution value.
You don't need to do "good memory management" for promises (asynchronous values), your promise library does take care of that itself. It has to "release the previous promise chain" automatically, if it doesn't then that's a bug. Your pattern works totally fine as is.
How do you know when the promise chain has completely finished?
I would take a pure, recursive approach for this:
function extendedChain(p, stream, action) {
    // chains a new action to p on every stream event
    // until the chain ends before the next event comes;
    // resolves with the result of the chain and the advanced stream
    return Promise.race([
        p.then(res => ({res})), // wrap in object to distinguish from event
        stream // a promise that resolves with a .next promise
    ]).then(({next, res}) =>
        next
            ? extendedChain(p.then(action), next, action) // a stream event happened first
            : {res, next: stream} // the chain fulfilled first
    );
}
function rec(stream, action, partDone) {
    return stream.then(({next}) =>
        extendedChain(action(), next, action).then(({res, next}) => {
            partDone(res);
            return rec(next, action, partDone);
        })
    );
}
var v = 1;
rec(getEvents($("#run"), "click"), () => delay(v++), res => {
    console.log("all current done, none waiting");
    console.log("last result", res);
}); // forever
with a helper function for event streams like
function getEvents(emitter, name) {
    var next;
    function get() {
        return new Promise((res) => {
            next = res;
        });
    }
    emitter.on(name, function() {
        next({next: get()});
    });
    return get();
}
(Demo at jsfiddle.net)
It is impossible to detect when no more handlers are added.
It is in fact an undecidable problem; it is not very hard to show a reduction from the halting problem (or the A_TM problem). I can add a formal reduction if you'd like, but handwaving: given an input program, put a promise at its first line and chain to it at every return or throw. If we had a program that solves the problem you describe in this question, we could apply it to the input program and know whether it runs forever or not, solving the halting problem. That is, your problem is at least as hard as the halting problem.
You can detect when a promise is "resolved" and update it on new ones.
This is common in "last" or in "flatMap". A good use case is autocomplete search where you only want the latest results. Here is an implementation by Domenic (https://github.com/domenic/last):
function last(operation) {
    var latestPromise = null; // keep track of the latest
    return function () {
        // call the operation
        var promiseForResult = operation.apply(this, arguments);
        // it is now the latest operation, so set it to that.
        latestPromise = promiseForResult;
        return promiseForResult.then(
            function (value) {
                // if we are _still_ the last call when it resolved
                if (latestPromise === promiseForResult) {
                    return value; // the operation is done, you can set it to Promise.resolve here
                } else {
                    return pending; // a forever-pending promise, to wait for more time
                }
            },
            function (reason) {
                if (latestPromise === promiseForResult) { // same as above
                    throw reason;
                } else {
                    return pending;
                }
            }
        );
    };
}
I adapted Domenic's code and documented it for your problem.
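Hypothetical usage with the click handler from the question:

// Only the most recent click's delay() result is observed; superseded
// calls are left pending (per the adapted code above).
var latestDelay = last(function () {
    return delay(v++);
});
$("#run").click(function () {
    latestDelay().then(function () {
        console.log("latest task finished");
    });
});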
You can safely not optimize this
Sane promise implementations do not keep promises which are "up the chain", so setting it to Promise.resolve() will not save memory. If a promise does not do this it is a memory leak and you should file a bug against it.
I tried to check whether we can see the promise's state in code; apparently that is only possible from the console, not from code, so I used a flag to monitor the status. I'm not sure if there is a loophole somewhere:
var p
  , v = 1
  , promiseFulfilled = true;
function addPromise() {
    if (!p || promiseFulfilled) {
        console.log('resetting promise...');
        p = Promise.resolve();
    }
    p = p.then(function() {
        promiseFulfilled = false;
        return delay(v++);
    }).then(function() {
        promiseFulfilled = true;
    });
}
fiddle demo
You could push the promises onto an array and use Promise.all:
var p = Promise.resolve(),
    promiseArray = [],
    allFinishedPromise;
function cleanup(promise, resolvedValue) {
    // You have to do this funkiness to check if more promises
    // were pushed since you registered the callback, though.
    var wereMorePromisesPushed = allFinishedPromise !== promise;
    if (!wereMorePromisesPushed) {
        // do cleanup
        promiseArray.splice(0, promiseArray.length);
        p = Promise.resolve(); // reset promise
    }
}
$("#run").click(function() {
    p = p.then(function() {
        return delay(v++);
    });
    promiseArray.push(p);
    allFinishedPromise = Promise.all(promiseArray);
    allFinishedPromise.then(cleanup.bind(null, allFinishedPromise));
});
Alternatively, since you know they are executed sequentially, you could have each completion callback remove that promise from the array and just reset the promise when the array is empty.
var p = Promise.resolve(),
    promiseArray = [];
function onPromiseComplete() {
    promiseArray.shift();
    if (!promiseArray.length) {
        p = Promise.resolve();
    }
}
$("#run").click(function() {
    p = p.then(function() {
        onPromiseComplete();
        return delay(v++);
    });
    promiseArray.push(p);
});
Edit: If the array is likely to get very long, though, you should go with the first option, because shifting the array is O(N).
Edit: As you noted, there's no reason to keep the array around. A counter will work just fine.
var p = Promise.resolve(),
    promiseCounter = 0;
function onPromiseComplete() {
    promiseCounter--;
    if (!promiseCounter) {
        p = Promise.resolve();
    }
}
$("#run").click(function() {
    p = p.then(function() {
        onPromiseComplete();
        return delay(v++);
    });
    promiseCounter++;
});