Promise not calling "then" callback - javascript

C#/Dart's async/await feature has spoiled me a little. I noticed that ES7 has a proposal for similar syntax, and that there is a library that adds that feature to Node.js apps.
This library doesn't work in the browser. I thought that trying to write my own mini-solution might help me see why, so I decided to try it, just to educate myself. This is my attempt so far, on a GitHub Gist. I've included snippets below.
In the await function:
function await(promise) {
  /* PromiseState declared here */
  var self = {
    result: null,
    state: PromiseState.pending
  };

  function onPromiseUpdate(context, newState) {
    return function (value) {
      console.log("Promise Updated!");
      context.result = value;
      context.state = newState;
    };
  }

  console.log("awaiting");
  // this never shows the Promise to be pending (using example below)
  console.log(promise);
  promise
    .then(onPromiseUpdate(self, PromiseState.resolved))   // this is never called
    .catch(onPromiseUpdate(self, PromiseState.rejected));

  // Shouldn't this not block the app if it's inside a Promise?
  while (self.state == PromiseState.pending) ;

  console.log("delegating");
  /* just returning the value here */
}
Example:
// is there another way to pass 'await' without a parameter?
unstableFunc = async(function (await) {
  console.log("running unstable");
  if (Math.random() > 0.5) return Math.random() * 15 + 5;
  else throw "fail";
});

expensiveFunc = async(function (await, x, y) {
  result = await(unstableFunc());
  for (var i = y * 8; i >= 0; i--) {
    result *= i ** y / x;
    console.log(result);
  }
  return result;
});

window.addEventListener('load', function () {
  console.log("about to do something expensive");
  // 'expensiveFunc' returns a Promise. Why does this block the webpage?
  expensiveFunc(10, 2).then(function (val) {
    console.log("Result: " + val.toString());
  });
  console.log("called expensive function");
});
When running this, the browser never finishes loading. That's down to the loop I set up to poll the state of the Promise, but that isn't the focus of my question. What I'm wondering is why neither the then nor the catch callback is ever called. When logging, the console never shows the Promise as pending, and I always thought that then and catch execute their callbacks immediately if the promise is not pending. Why isn't that so in this case?

The moment this line of code is reached/executed:
while (self.state == PromiseState.pending) ;
your script becomes blocked forever (or until the tab crashes or the browser kills it). While that loop is running, callbacks cannot run (nor can anything else), and therefore your promise's state can never change from pending, which makes the loop infinite. Having promises involved doesn't change any of the above.
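To make the ordering concrete, here is a minimal sketch of my own (not the asker's code) showing that a .then callback on an already-resolved promise still only runs after the currently executing synchronous code has finished; any busy-wait loop placed before that point therefore starves it forever:
const p = Promise.resolve(42);

p.then(value => console.log("then callback:", value)); // queued as a microtask

console.log("synchronous code still running");
// Output order:
//   synchronous code still running
//   then callback: 42
// The callback only runs once the call stack is empty; a while loop that never
// returns control to the event loop keeps the stack busy, so the callback is
// never reached.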

Related

How do you use JavaScript async/await as an alternative to polling for a state change?

I'd like to accomplish the following using promises: only execute further once the state of something is ready. I.e. like polling for an external state-change.
I've tried using promises and async-await but am not getting the desired outcome. What am I doing wrong here, and how do I fix it?
The MDN docs have something similar, but their setTimeout is called within the promise, which is not exactly what I'm looking for.
I expect the console.log to show "This function is now good to go!" after 5 seconds, but instead execution seems to stop after await promiseForState() is called.
var state = false;

function stateReady() {
  state = true;
}

function promiseForState() {
  var msg = "good to go!";
  var promise = new Promise(function (resolve, reject) {
    if (state) {
      resolve(msg);
    }
  });
  return promise;
}

async function waiting(intro) {
  var result = await promiseForState();
  console.log(intro + result);
}

setTimeout(stateReady, 5000);
waiting("This function is now ");
What you're doing wrong: the promise constructor's executor function runs immediately when the promise is created, and never again. At that point state is false, so nothing happens and the promise never resolves.
Promises (and async/await) are not a replacement for polling. You still need to poll somewhere.
The good news: async functions make it easy to do conditional code with loops and promises.
But don't put code inside promise constructor executor functions, because of their poor error handling characteristics. They are meant to wrap legacy code.
Instead, try this:
var state = false;

function stateReady() {
  state = true;
}

const wait = ms => new Promise(resolve => setTimeout(resolve, ms));

async function promiseForState() {
  while (!state) {
    await wait(1000);
  }
  return "good to go!";
}

async function waiting(intro) {
  var result = await promiseForState();
  console.log(intro + result);
}

setTimeout(stateReady, 5000);
waiting("This function is now ");
Based on your comments that you are waiting for messages from a server, it appears you are trying to solve an X/Y problem. I am therefore going to answer the question of "how do I wait for server messages" instead of "how do I wait for a global variable to change".
If your network API accepts a callback
Plenty of networking APIs such as XMLHttpRequest and Node's http.request() are callback based. If the API you are using is callback or event based then you can do something like this:
function myFunctionToFetchFromServer() {
  // example is jQuery's ajax but it can easily be replaced with other API
  return new Promise(function (resolve, reject) {
    $.ajax('http://some.server/somewhere', {
      success: resolve,
      error: reject
    });
  });
}
async function waiting(intro) {
  var result = await myFunctionToFetchFromServer();
  console.log(intro + result);
}
If your network API is promise based
If on the other hand you are using a more modern promise based networking API such as fetch() you can simply await the promise:
function myFunctionToFetchFromServer() {
  return fetch('http://some.server/somewhere');
}

async function waiting(intro) {
  var result = await myFunctionToFetchFromServer();
  console.log(intro + result);
}
Decoupling network access from your event handler
Note that the following is only my opinion, but it is also standard practice in the JavaScript community:
In either case above, once you have a promise it is possible to decouple your network API from your waiting() event handler. You just need to save the promise somewhere else. Evert's answer shows one way you can do this.
However, in my not-so-humble opinion, you should not do this. In projects of significant size it makes it hard to trace where a state change comes from. This is what we did in the 90s and early 2000s with JavaScript: we had a lot of events in our code like onChange, onReady or onData instead of callbacks passed as function parameters, and the result was that it sometimes took a long time to figure out what code was triggering which event.
Callback parameters and promises force the event generator to be in the same place in the code as the event consumer:
let this_variable_consumes_result_of_a_promise = await generate_a_promise();
this_function_generate_async_event((consume_async_result) => { /* ... */ });
From the wording of your question you seem to want to do this instead:
..somewhere in your code:
this_function_generate_async_event(() => { set_global_state() });
..somewhere else in your code:
let this_variable_consumes_result_of_a_promise = await global_state();
I would consider this an anti-pattern.
Calling asynchronous functions in class constructors
This is not only an anti-pattern but an impossibility (as you've no doubt discovered when you found that you cannot return the asynchronous result).
There are however design patterns that can work around this. The following is an example of exposing a database connection that is created asynchronously:
class MyClass {
  constructor() {
    // constructor logic
  }

  db() {
    if (this.connection) {
      return Promise.resolve(this.connection);
    }
    else {
      var self = this; // capture `this`; the nested callbacks below get their own `this`
      return new Promise(function (resolve, reject) {
        createDbConnection(function (error, conn) {
          if (error) {
            reject(error);
          }
          else {
            self.connection = conn; // cache the connection
            resolve(self.connection);
          }
        });
      });
    }
  }
}
Usage:
const myObj = new MyClass();

async function waiting(intro) {
  const db = await myObj.db();
  db.doSomething(); // you can now use the database connection.
}
You can read more about asynchronous constructors from my answer to this other question: Async/Await Class Constructor
The way I would solve this is as follows. I am not 100% certain it solves your problem, but the assumption here is that you have control over stateReady().
let state = false;
let stateResolver;

const statePromise = new Promise((res, rej) => {
  stateResolver = res;
});

function stateReady() {
  state = true;
  stateResolver();
}

async function promiseForState() {
  await statePromise; // await the shared promise, not a call to the resolver
  const msg = "good to go!";
  return msg;
}

async function waiting(intro) {
  const result = await promiseForState();
  console.log(intro + result);
}

setTimeout(stateReady, 5000);
waiting("This function is now ");
Some key points:
The way this is written currently, the 'state' can only transition to true once. If you want to allow this to be fired many times, some of those const will need to become let and the promise needs to be re-created, as sketched below.
I created the promise once, globally, and always return the same one because it's really just one event that every caller subscribes to.
I needed a stateResolver variable to lift the res argument out of the promise constructor into the global scope.
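As a rough illustration of that first point (my own sketch, not part of the answer above), the promise can be re-armed every time the state fires so it can be awaited again later:
let stateResolver;
let statePromise = createStatePromise(); // note: let, not const

function createStatePromise() {
  return new Promise(res => {
    stateResolver = res;
  });
}

function stateReady() {
  const resolve = stateResolver;
  statePromise = createStatePromise(); // re-create the promise for the next transition
  resolve();                           // wake up everyone awaiting the old promise
}

async function promiseForState() {
  await statePromise;
  return "good to go!";
}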
Here is an alternative using requestAnimationFrame().
It provides a clean interface that is simple to understand.
var serverStuffComplete = false;

// mock the server delay of 5 seconds
setTimeout(() => serverStuffComplete = true, 5000);

// continue until serverStuffComplete is true
function waitForServer(now) {
  if (serverStuffComplete) {
    doSomethingElse();
  } else {
    // place this request on the next tick
    requestAnimationFrame(waitForServer);
  }
}

console.log("Waiting for server...");

// starts the process off
requestAnimationFrame(waitForServer);

// resolve the promise or whatever
function doSomethingElse() {
  console.log('Done baby!');
}

promise chaining over async await - javascript [duplicate]

I know that async/await is the new Promise in town and a new way to write asynchronous code, and I also know that
We didn’t have to write .then, create an anonymous function to handle the response
Async/await makes it finally possible to handle both synchronous and asynchronous errors with the same construct, good old try/catch
The error stack returned from a promise chain gives no clue of where the error happened. However, the error stack from async/await points to the function that contains the error
AND SO ON...
but
here I have done a simple benchmark https://repl.it/repls/FormalAbandonedChimpanzee
In the benchmark I run two loops one million times each.
In the first loop I call a function that returns 1.
In the second loop I call a function that throws 1 as an exception.
The time taken by the first loop (calling the function that returns 1) is almost half that of the loop calling the function that throws 1 as an error,
which shows that the time taken by a throw is almost double the time taken by a return.
node v7.4 linux/amd64
return takes 1.233seconds
1000000
throw takes 2.128seconds
1000000
Benchmark Code Below
function f1() {
  return 1;
}

function f2() {
  throw 1;
}

function parseHrtimeToSeconds(hrtime) {
  var seconds = (hrtime[0] + (hrtime[1] / 1e9)).toFixed(3);
  return seconds;
}

var sum = 0;
var start = 0;
var i = 0;

start = process.hrtime();
for (i = 0; i < 1e6; i++) {
  try {
    sum += f1();
  } catch (e) {
    sum += e;
  }
}
var seconds = parseHrtimeToSeconds(process.hrtime(start));
console.log('return takes ' + seconds + 'seconds');
console.log(sum);

sum = 0;
start = process.hrtime();
for (i = 0; i < 1e6; i++) {
  try {
    sum += f2();
  } catch (e) {
    sum += e;
  }
}
seconds = parseHrtimeToSeconds(process.hrtime(start));
console.log('throw takes ' + seconds + 'seconds');
console.log(sum);
Your benchmark has nothing to do with the performance difference between async/await and raw promises. All I can see is that throwing an error takes longer to compute. This is expected.
Back to the main question: should you use async/await rather than .then with raw promises?
Keep in mind that async/await is merely syntactic sugar over raw promises, so there shouldn't be much impact on the overall performance. However, it does make your code more linear, which removes a lot of cognitive overhead from the developer.
The conclusion is: use what you prefer. Promises can be polyfilled but new syntax cannot, so you might want to keep that in mind when deciding which style to use.
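To illustrate the "syntactic sugar" point, here is a small sketch of my own (using a hypothetical fetchUser function that returns a promise) of the same logic written both ways; the two behave equivalently:
// with raw promises
function getUserNameThen(id) {
  return fetchUser(id)              // assumed: fetchUser returns a promise
    .then(user => user.name)
    .catch(err => {
      console.error("failed:", err);
      return "unknown";
    });
}

// with async/await: same behaviour, linear control flow, try/catch for errors
async function getUserNameAwait(id) {
  try {
    const user = await fetchUser(id);
    return user.name;
  } catch (err) {
    console.error("failed:", err);
    return "unknown";
  }
}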
Some misunderstanding:
The error stack returned from a promise chain gives no clue of where the error happened
That is not true. A quick check with:
function error() {
  return new Promise(function (res, rej) {
    res(undefined()); // uh oh
  });
}

error().then(console.log, e => console.log("Uh oh!", e.stack));
shows the entire error stack including the location.
As most things go, the answer is "it depends".
Before talking about performance, the more important aspects are the maintainability of the code and the limitations of async/await vs raw promises.
async/await is a great way to execute asynchronous code sequentially, while Promise enables you to run asynchronous code concurrently.
async function foo() {
  const a = await backend.doSomething()
  const b = await backend.doAnotherThing()
  return a + b
}

In the code above, backend.doAnotherThing() will not be executed until the promise returned by backend.doSomething() has resolved. On the other hand:
function foo() {
  return Promise.all([backend.doSomething(), backend.doAnotherThing()])
    .then(([a, b]) => {
      return a + b
    })
}
will execute both calls, and wait for both to complete.
As you mentioned about the benefits of async/await, I personally use it extensively. Except for the cases above.
If you need performance, and to you the performance difference between async/await and Promise matters more than the readability benefit of async/await, by all means go ahead.
As long as it is a conscious choice, you should be fine.
UPDATE: as mentioned by Derek 朕會功夫
You can get parallel execution with async/await by:
async function foo() {
  const p1 = backend.doSomething()
  const p2 = backend.doAnotherThing()
  return await p1 + await p2
}
Building on unional's answer:
You can achieve the same behavior as Promise.all with async/await
function foo() {
  return Promise.all([backend.doSomething(), backend.doAnotherThing()])
    .then(([a, b]) => {
      return a + b
    })
}

async function foo() {
  const a = backend.doSomething()
  const b = backend.doAnotherThing()
  return await a + await b
}
Backend tasks happen concurrently and we wait on both to be finished before we return. See also the MDN example I wrote
Based on this I am not sure if there is any performance advantage to directly using Promises over async/await.
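For completeness, a small sketch of my own combining the two styles: awaiting Promise.all inside an async function also runs the backend calls concurrently, and it rejects as soon as either call fails:
async function foo() {
  // start both calls, then wait for both results together
  const [a, b] = await Promise.all([
    backend.doSomething(),
    backend.doAnotherThing()
  ]);
  return a + b;
}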

How to wait for the last promise in a dynamic list of promises?

I have a function F that starts an asynchronous process X. The function returns a promise that is resolved when X ends (which I learn by means of a promise returned by X).
While the (w.l.o.g.) first instance of X, X1, is running, there may be more calls to F. Each of these will spawn a new instance of X, e.g. X2, X3, and so on.
Now, here's the difficulty: When X2 is created, depending on the state of X1, X1 should conclude or be aborted. X2 should start working only once X1 is not active any more. In any case, the unresolved promises returned from all previous calls to F should be resolved only once X2 has concluded, as well - or, any later instance of X, if F gets called again while X2 is running.
So far, the first call to F invokes $q.defer() to create a deferred whose promise is returned by all calls to F until the last X has concluded. (Then, the deferred should be resolved and the field holding it should be reset to null, waiting for the next cluster of calls to F.)
Now, my issue is waiting until all instances of X have finished. I know that I could use $q.all if I had the full list of X instances beforehand, but as I have to consider later calls to F, this is not a solution here. Ideally, I should probably then-chain something to the promise returned by X to resolve the deferred, and "unchain" that function as soon as I chain it to a later instance of X.
I imagine something like this:
var currentDeferred = null;

function F() {
  if (!currentDeferred) {
    currentDeferred = $q.defer();
  }

  // if previous X then "unchain" its promise handler

  X().then(function () {
    var curDef = currentDeferred;
    currentDeferred = null;
    curDef.resolve();
  });

  return currentDeferred.promise;
}
However, I don't know how to perform that "unchaining", if that is even the right solution.
How do I go about this? Am I missing some common pattern or even built-in feature of promises, or am I on the wrong track altogether?
To add a little context: F is called to load data (asynchronously) and update some visual output. F returns a promise that should only be resolved once the visual output has been updated to a stable state again (i.e. with no more updates pending).
F is called to load data (asynchronously) and update some visual output. F returns a promise that should only be resolved once the visual output has been updated to a stable state again (i.e. with no more updates pending).
Since all callers of F will receive a promise they need to consume, but you only want to update the UI when all stacked calls have completed, the simplest thing is to have each promise resolve (or reject) with a value telling the caller not to update the UI if there's another "get more data" call pending; that way, only the caller whose promise resolves last will update the UI. You can do that by keeping track of outstanding calls:
let accumulator = [];
let outstanding = 0;

function F(val) {
  ++outstanding;
  return getData(val)
    .then(data => {
      accumulator.push(data);
      return --outstanding == 0 ? accumulator.slice() : null;
    })
    .catch(error => {
      --outstanding;
      throw error;
    });
}

// Fake data load
function getData(val) {
  return new Promise(resolve => {
    setTimeout(resolve, Math.random() * 500, "data for " + val);
  });
}
// Resolution and rejection handlers for our test calls below
const resolved = data => {
  console.log("chain done:", data ? ("update: " + data.join(", ")) : "don't update");
};
const rejected = error => { // This never gets called, we don't reject
  console.error(error);
};

// A single call:
F("a").then(resolved).catch(rejected);

setTimeout(() => {
  // One subsequent call
  console.log("----");
  F("b1").then(resolved).catch(rejected);
  F("b2").then(resolved).catch(rejected);
}, 600);

setTimeout(() => {
  // Two subsequent calls
  console.log("----");
  F("c1").then(resolved).catch(rejected);
  F("c2").then(resolved).catch(rejected);
  F("c3").then(resolved).catch(rejected);
}, 1200);
(That's with native promises; adjust as necessary for $q.)
To me, "don't update" is different from "failed," so I used a flag value (null) rather than a rejection to signal it. But of course, you can use rejection with a flag value as well, it's up to you. (And that would have the benefit of putting the conditional logic ["Is this a real error or just a "don't update"?] in your catch handler rather than your then [is this real data or not?]... Hmmm, I might go the other way now I think of it. But that's trivial change.)
Obviously accumulator in the above is just a crude placeholder for your real data structures (and it makes no attempt to keep the data received in the order it was requested).
I'm having the promise resolve with a copy of the data in the above (accumulator.slice()) but that may not be necessary in your case.
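As a rough sketch of that rejection-based variant (my own illustration, not from the answer; it reuses the accumulator, outstanding and getData from the snippet above and a hypothetical NO_UPDATE sentinel):
const NO_UPDATE = Symbol("no update pending"); // hypothetical sentinel value

function F(val) {
  ++outstanding;
  return getData(val).then(
    data => {
      accumulator.push(data);
      if (--outstanding > 0) {
        throw NO_UPDATE;            // more calls pending: reject with the sentinel
      }
      return accumulator.slice();   // last one standing: resolve with the data
    },
    error => {
      --outstanding;                // keep the counter honest on real failures
      throw error;
    }
  );
}

// Only the final caller's then-handler receives data; the others land in catch
// with NO_UPDATE, and real errors still propagate.
F("a")
  .then(data => console.log("update:", data))
  .catch(err => {
    if (err !== NO_UPDATE) console.error(err);
  });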

How do you know when an indefinitely long promise chain has completely finished?

I was trying to use promises to force serialization of a series of Ajax calls. These Ajax calls are made one for each time a user presses a button. I can successfully serialize the operations like this:
// sample async function
// real-world this is an Ajax call
function delay(val) {
  log("start: ", val);
  return new Promise(function (resolve) {
    setTimeout(function () {
      log("end: ", val);
      resolve();
    }, 500);
  });
}

// initialize p to a resolved promise
var p = Promise.resolve();
var v = 1;

// each click adds a new task to
// the serially executed queue
$("#run").click(function () {
  // How to detect here that there are no other unresolved .then()
  // handlers on the current value of p?
  p = p.then(function () {
    return delay(v++);
  });
});
Working demo: http://jsfiddle.net/jfriend00/4hfyahs3/
But, this builds a potentially never ending promise chain since the variable p that stores the last promise is never cleared. Every new operation just chains onto the prior promise. So, I was thinking that for good memory management, I should be able to detect when there are no more .then() handlers left to run on the current value of p and I can then reset the value of p, making sure that any objects that the previous chain of promise handlers might have held in closures will be eligible for garbage collection.
So, I was wondering how I would know in a given .then() handler that there are no more .then() handlers to be called in this chain and thus, I can just do p = Promise.resolve() to reset p and release the previous promise chain rather than just continually adding onto it.
I'm being told that a "good" promise implementation would not cause accumulating memory from an indefinitely growing promise chain. But, there is apparently no standard that requires or describes this (other than good programming practices) and we have lots of newbie Promise implementations out there so I have not yet decided if it's wise to rely on this good behavior.
My years of coding experience suggest that when implementations are new, when facts are lacking about whether all implementations behave a certain way, and when there's no specification that says they should behave that way, it might be wise to write your code in as "safe" a way as possible. In fact, it's often less work to just code around an uncertain behavior than it is to go test all relevant implementations to find out how they behave.
In that vein, here's an implementation of my code that seems to be "safe" in this regard. It just saves a local copy of the global last promise variable for each .then() handler and when that .then() handler runs, if the global promise variable still has the same value, then my code has not chained any more items onto it so this must be the currently last .then() handler. It seems to work in this jsFiddle:
// sample async function
// real-world this is an Ajax call
function delay(val) {
  log("start: ", val);
  return new Promise(function (resolve) {
    setTimeout(function () {
      log("end: ", val);
      resolve();
    }, 500);
  });
}

// initialize p to a resolved promise
var p = Promise.resolve();
var v = 1;

// each click adds a new task to
// the serially executed queue
$("#run").click(function () {
  var origP = p = p.then(function () {
    return delay(v++);
  }).then(function () {
    if (p === origP) {
      // no more are chained by my code
      log("no more chained - resetting promise head");
      // set fresh promise head so no chance of GC leaks
      // on prior promises
      p = Promise.resolve();
      v = 1;
    }
    // clear promise reference in case this closure is leaked
    origP = null;
  }, function () {
    origP = null;
  });
});
… so that I can then reset the value of p, making sure that any objects that the previous chain of promise handlers might have held in closures will be eligible for garbage collection.
No. A promise handler that has already been executed (because the promise has settled) is no longer needed and is implicitly eligible for garbage collection. A resolved promise does not hold onto anything but the resolution value.
You don't need to do "good memory management" for promises (asynchronous values), your promise library does take care of that itself. It has to "release the previous promise chain" automatically, if it doesn't then that's a bug. Your pattern works totally fine as is.
How do you know when the promise chain has completely finished?
I would take a pure, recursive approach for this:
function extendedChain(p, stream, action) {
  // chains a new action to p on every stream event
  // until the chain ends before the next event comes
  // resolves with the result of the chain and the advanced stream
  return Promise.race([
    p.then(res => ({res})), // wrap in object to distinguish from event
    stream // a promise that resolves with a .next promise
  ]).then(({next, res}) =>
    next
      ? extendedChain(p.then(action), next, action) // a stream event happened first
      : {res, next: stream} // the chain fulfilled first
  );
}

function rec(stream, action, partDone) {
  return stream.then(({next}) =>
    extendedChain(action(), next, action).then(({res, next}) => {
      partDone(res);
      return rec(next, action, partDone);
    })
  );
}
var v = 1;
rec(getEvents($("#run"), "click"), () => delay(v++), res => {
  console.log("all current done, none waiting");
  console.log("last result", res);
}); // forever
with a helper function for event streams like
function getEvents(emitter, name) {
  var next;
  function get() {
    return new Promise((res) => {
      next = res;
    });
  }
  emitter.on(name, function () {
    next({next: get()});
  });
  return get();
}
(Demo at jsfiddle.net)
It is impossible to detect when no more handlers are added.
It is in fact an undecidable problem. It is not very hard to show a reduction from the halting problem (or the A_TM problem). I can add a formal reduction if you'd like, but handwaving it: given an input program, put a promise at its first line and chain to it at every return or throw. Now assume we have a program that solves the problem you describe in this question and apply it to that input program: we would know whether it runs forever or not, solving the halting problem. That is, your problem is at least as hard as the halting problem.
You can detect when a promise is "resolved" and update it on new ones.
This is common in "last" or in "flatMap". A good use case is autocomplete search where you only want the latest results. Here is an [implementation by Domenic
(https://github.com/domenic/last):
function last(operation) {
  var latestPromise = null; // keep track of the latest
  return function () {
    // call the operation
    var promiseForResult = operation.apply(this, arguments);
    // it is now the latest operation, so set it to that.
    latestPromise = promiseForResult;
    return promiseForResult.then(
      function (value) {
        // if we are _still_ the last value when it resolved
        if (latestPromise === promiseForResult) {
          return value; // the operation is done, you can set it to Promise.resolve here
        } else {
          return pending; // a forever-pending promise: wait for more time
        }
      },
      function (reason) {
        if (latestPromise === promiseForResult) { // same as above
          throw reason;
        } else {
          return pending;
        }
      }
    );
  };
}
I adapted Domenic's code and documented it for your problem.
You can safely not optimize this
Sane promise implementations do not keep promises which are "up the chain", so setting it to Promise.resolve() will not save memory. If a promise does not do this it is a memory leak and you should file a bug against it.
I tried to check if we can see the promise's state in code; apparently that is only possible from the console, not from code, so I used a flag to monitor the status. Not sure if there is a loophole somewhere:
var p,
    v = 1,
    promiseFulfilled = true;

function addPromise() {
  if (!p || promiseFulfilled) {
    console.log('resetting promise...');
    p = Promise.resolve();
  }
  p = p.then(function () {
    promiseFulfilled = false;
    return delay(v++);
  }).then(function () {
    promiseFulfilled = true;
  });
}
fiddle demo
You could push the promises onto an array and use Promise.all:
var p = Promise.resolve(),
    promiseArray = [],
    allFinishedPromise;

function cleanup(promise, resolvedValue) {
  // You have to do this funkiness to check if more promises
  // were pushed since you registered the callback, though.
  var wereMorePromisesPushed = allFinishedPromise !== promise;
  if (!wereMorePromisesPushed) {
    // do cleanup
    promiseArray.splice(0, promiseArray.length);
    p = Promise.resolve(); // reset promise
  }
}

$("#run").click(function () {
  p = p.then(function () {
    return delay(v++);
  });
  promiseArray.push(p);
  allFinishedPromise = Promise.all(promiseArray);
  allFinishedPromise.then(cleanup.bind(null, allFinishedPromise));
});
Alternatively, since you know they are executed sequentially, you could have each completion callback remove that promise from the array and just reset the promise when the array is empty.
var p = Promise.resolve(),
    promiseArray = [];

function onPromiseComplete() {
  promiseArray.shift();
  if (!promiseArray.length) {
    p = Promise.resolve();
  }
}

$("#run").click(function () {
  p = p.then(function () {
    onPromiseComplete();
    return delay(v++);
  });
  promiseArray.push(p);
});
Edit: If the array is likely to get very long, though, you should go with the first option b/c shifting the array is O(N).
Edit: As you noted, there's no reason to keep the array around. A counter will work just fine.
var p = Promise.resolve(),
    promiseCounter = 0;

function onPromiseComplete() {
  promiseCounter--;
  if (!promiseCounter) {
    p = Promise.resolve();
  }
}

$("#run").click(function () {
  p = p.then(function () {
    onPromiseComplete();
    return delay(v++);
  });
  promiseCounter++;
});

How is a promise/defer library implemented? [closed]

How is a promise/defer library like q implemented? I was trying to read the source code but found it pretty hard to understand, so I thought it'd be great if someone could explain to me, from a high level, what are the techniques used to implement promises in single-thread JS environments like Node and browsers.
I find it harder to explain than to show an example, so here is a very simple implementation of what a defer/promise could be.
Disclaimer: This is not a functional implementation, and some parts of the Promises/A specification are missing; this is just to explain the basics of promises.
tl;dr: Go to the Create classes and example section to see full implementation.
Promise:
First we need to create a promise object with an array of callbacks. I'll start working with objects because it's clearer:
var promise = {
  callbacks: []
}
now add callbacks with the method then:
var promise = {
  callbacks: [],
  then: function (callback) {
    this.callbacks.push(callback);
  }
}
And we need the error callbacks too:
var promise = {
  okCallbacks: [],
  koCallbacks: [],
  then: function (okCallback, koCallback) {
    this.okCallbacks.push(okCallback);
    if (koCallback) {
      this.koCallbacks.push(koCallback);
    }
  }
}
Defer:
Now create the defer object that will have a promise:
var defer = {
  promise: promise
};
The defer needs to be resolved:
var defer = {
  promise: promise,
  resolve: function (data) {
    this.promise.okCallbacks.forEach(function (callback) {
      window.setTimeout(function () {
        callback(data);
      }, 0);
    });
  }
};
And needs to reject:
var defer = {
  promise: promise,
  resolve: function (data) {
    this.promise.okCallbacks.forEach(function (callback) {
      window.setTimeout(function () {
        callback(data);
      }, 0);
    });
  },
  reject: function (error) {
    this.promise.koCallbacks.forEach(function (callback) {
      window.setTimeout(function () {
        callback(error);
      }, 0);
    });
  }
};
Note that the callbacks are called in a timeout to ensure the code is always asynchronous.
And that's what a basic defer/promise implementation needs.
Create classes and example:
Now let's convert both objects to classes, first the promise:
var Promise = function () {
  this.okCallbacks = [];
  this.koCallbacks = [];
};

Promise.prototype = {
  okCallbacks: null,
  koCallbacks: null,
  then: function (okCallback, koCallback) {
    this.okCallbacks.push(okCallback);
    if (koCallback) {
      this.koCallbacks.push(koCallback);
    }
  }
};
And now the defer:
var Defer = function () {
  this.promise = new Promise();
};

Defer.prototype = {
  promise: null,
  resolve: function (data) {
    this.promise.okCallbacks.forEach(function (callback) {
      window.setTimeout(function () {
        callback(data);
      }, 0);
    });
  },
  reject: function (error) {
    this.promise.koCallbacks.forEach(function (callback) {
      window.setTimeout(function () {
        callback(error);
      }, 0);
    });
  }
};
And here is an example of use:
function test() {
  var defer = new Defer();

  // an example of an async call
  serverCall(function (request) {
    if (request.status === 200) {
      defer.resolve(request.responseText);
    } else {
      defer.reject(new Error("Status code was " + request.status));
    }
  });

  return defer.promise;
}

test().then(function (text) {
  alert(text);
}, function (error) {
  alert(error.message);
});
As you can see the basic parts are simple and small. It will grow when you add other options, for example multiple promise resolution:
Defer.all(promiseA, promiseB, promiseC).then()
or promise chaining:
getUserById(id).then(getFilesByUser).then(deleteFile).then(promptResult);
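As a rough illustration of the first of those extensions (my own sketch, not part of the original answer), a minimal Defer.all could be layered on top of the toy Defer/Promise classes above: it resolves when every given promise has resolved and rejects on the first rejection.
Defer.all = function () {
  var promises = Array.prototype.slice.call(arguments);
  var defer = new Defer();
  var results = new Array(promises.length);
  var remaining = promises.length;

  promises.forEach(function (promise, index) {
    promise.then(function (data) {
      results[index] = data;       // keep results in argument order
      if (--remaining === 0) {
        defer.resolve(results);    // last one in resolves the combined promise
      }
    }, function (error) {
      defer.reject(error);         // first failure rejects the combined promise
    });
  });

  return defer.promise;
};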
To read more about the specifications: CommonJS Promise Specification. Note that the main libraries (Q, when.js, rsvp.js, node-promise, ...) follow the Promises/A specification.
Hope I was clear enough.
Edit:
As asked in the comments, I've added two things in this version:
The possibility to call then of a promise, no matter what status it has.
The possibility to chain promises.
To be able to call then on an already-settled promise you need to add a status to the promise and, when then is called, check that status. If the status is resolved or rejected, just execute the callback with the stored data or error.
To be able to chain promises you need to generate a new defer for each call to then and, when the promise is resolved/rejected, resolve/reject the new promise with the result of the callback. So when the promise is done, if the callback returns a new promise it is bound to the promise returned by then(). If not, the promise is resolved with the result of the callback.
Here is the promise:
var Promise = function () {
  this.okCallbacks = [];
  this.koCallbacks = [];
};

Promise.prototype = {
  okCallbacks: null,
  koCallbacks: null,
  status: 'pending',
  error: null,

  then: function (okCallback, koCallback) {
    var defer = new Defer();

    // Add callbacks to the arrays with the defer bound to these callbacks
    this.okCallbacks.push({
      func: okCallback,
      defer: defer
    });

    if (koCallback) {
      this.koCallbacks.push({
        func: koCallback,
        defer: defer
      });
    }

    // Check if the promise is not pending. If not, call the callback
    if (this.status === 'resolved') {
      this.executeCallback({
        func: okCallback,
        defer: defer
      }, this.data);
    } else if (this.status === 'rejected') {
      this.executeCallback({
        func: koCallback,
        defer: defer
      }, this.error);
    }

    return defer.promise;
  },

  executeCallback: function (callbackData, result) {
    window.setTimeout(function () {
      var res = callbackData.func(result);
      if (res instanceof Promise) {
        callbackData.defer.bind(res);
      } else {
        callbackData.defer.resolve(res);
      }
    }, 0);
  }
};
And the defer:
var Defer = function () {
  this.promise = new Promise();
};

Defer.prototype = {
  promise: null,

  resolve: function (data) {
    var promise = this.promise;
    promise.data = data;
    promise.status = 'resolved';
    promise.okCallbacks.forEach(function (callbackData) {
      promise.executeCallback(callbackData, data);
    });
  },

  reject: function (error) {
    var promise = this.promise;
    promise.error = error;
    promise.status = 'rejected';
    promise.koCallbacks.forEach(function (callbackData) {
      promise.executeCallback(callbackData, error);
    });
  },

  // Make this promise behave like another promise:
  // when the other promise is resolved/rejected this one is also resolved/rejected
  // with the same data
  bind: function (promise) {
    var that = this;
    promise.then(function (res) {
      that.resolve(res);
    }, function (err) {
      that.reject(err);
    });
  }
};
As you can see, it has grown quite a bit.
Q is a very complex promise library in terms of implementation because it aims to support pipelining and RPC type scenarios. I have my own very bare bones implementation of the Promises/A+ specification here.
In principle it's quite simple. Before the promise is settled/resolved, you keep a record of any callbacks or errbacks by pushing them into an array. When the promise is settled you call the appropriate callbacks or errbacks and record what result the promise was settled with (and whether it was fulfilled or rejected). After it's settled, you just call the callbacks or errbacks with the stored result.
That gives you approximately the semantics of done. To build then you just have to return a new promise that is resolved with the result of calling the callbacks/errbacks.
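A very rough sketch of that idea (my own illustration, not the linked implementation):
function MiniPromise() {
  var settled = false, fulfilled, result;
  var callbacks = [], errbacks = [];

  function settle(ok, value) {
    if (settled) return;               // a promise settles only once
    settled = true; fulfilled = ok; result = value;
    (ok ? callbacks : errbacks).forEach(function (cb) { cb(value); });
    callbacks = errbacks = [];
  }

  this.resolve = function (value) { settle(true, value); };
  this.reject = function (reason) { settle(false, reason); };

  // "done" semantics: register the handlers, or invoke the right one immediately if settled
  this.done = function (onFulfilled, onRejected) {
    if (!settled) {
      callbacks.push(onFulfilled || function () {});
      errbacks.push(onRejected || function () {});
    } else {
      var handler = fulfilled ? onFulfilled : onRejected;
      if (handler) handler(result);
    }
  };

  // "then" semantics: return a new promise resolved with the handler's result
  this.then = function (onFulfilled, onRejected) {
    var next = new MiniPromise();
    this.done(
      function (value) { next.resolve(onFulfilled ? onFulfilled(value) : value); },
      function (reason) { onRejected ? next.resolve(onRejected(reason)) : next.reject(reason); }
    );
    return next;
  };
}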
If you're interested in a full explanation of the reasoning behind the development of a full-blown promise implementation with support for RPC and pipelining like Q, you can read kriskowal's reasoning here. It's a really nice graduated approach that I can't recommend highly enough if you are thinking of implementing promises. It's probably worth a read even if you're just going to be using a promise library.
As Forbes mentions in his answer, I chronicled many of the design decisions involved in making a library like Q, here https://github.com/kriskowal/q/tree/v1/design. Suffice it to say, there are levels of a promise library, and lots of libraries that stop at various levels.
At the first level, captured by the Promises/A+ specification, a promise is a proxy for an eventual result and is suitable for managing "local asynchrony". That is, it is suitable for ensuring that work occurs in the right order, and for ensuring that it is simple and straightforward to listen for the result of an operation regardless of whether it has already settled or will settle in the future. It also makes it just as simple for one or many parties to subscribe to an eventual result.
Q, as I have implemented it, provides promises that are proxies for eventual, remote, or eventual+remote results. To that end, its design is inverted, with different implementations for promises: deferred promises, fulfilled promises, rejected promises, and promises for remote objects (the last being implemented in Q-Connection). They all share the same interface and work by sending and receiving messages like "then" (which is sufficient for Promises/A+) but also "get" and "invoke". So, Q is about "distributed asynchrony", and exists on another layer.
However, Q was actually taken down from a higher layer, where promises are used for managing distributed asynchrony among mutually suspicious parties like you, a merchant, a bank, Facebook, the government—not enemies, maybe even friends, but sometimes with conflicts of interest. The Q that I implemented is designed to be API compatible with hardened security promises (which is the reason for separating promise and resolve), with the hope that it would introduce people to promises, train them in using this API, and allow them to take their code with them if they need to use promises in secure mashups in the future.
Of course, there are trade-offs as you move up the layers, usually in speed. So, promises implementations can also be designed to co-exist. This is where the concept of a “thenable” enters. Promise libraries at each layer can be designed to consume promises from any other layer, so multiple implementations can coexist, and users can buy only what they need.
All this said, there is no excuse for being difficult to read. Domenic and I are working on a version of Q that will be more modular and approachable, with some of its distracting dependencies and work-arounds moved into other modules and packages. Thankfully folks like Forbes, Crockford, and others have filled in the educational gap by making simpler libraries.
First make sure you're understanding how Promises are supposed to work. Have a look at the CommonJs Promises proposals and the Promises/A+ specification for that.
There are two basic concepts that can be implemented each in a few simple lines:
A Promise gets asynchronously resolved with the result. Adding callbacks is a transparent action: independent of whether the promise is already resolved or not, they will get called with the result once it is available.
function Deferred() {
  var callbacks = [], // list of callbacks
      result;         // the resolve arguments or undefined until they're available

  this.resolve = function () {
    if (result) return;  // if already settled, abort
    result = arguments;  // settle the result
    for (var c; c = callbacks.shift(); ) // execute stored callbacks
      c.apply(null, result);
  };

  // create Promise interface with a function to add callbacks:
  this.promise = new Promise(function add(c) {
    if (result)                // when results are available
      c.apply(null, result);   // call it immediately
    else
      callbacks.push(c);       // put it on the list to be executed later
  });
}
// just an interface for inheritance
function Promise(add) {
  this.addCallback = add;
}
Promises have a then method that allows chaining them. It takes a callback and returns a new Promise which will get resolved with the result of that callback after it was invoked with the first promise's result. If the callback returns a Promise, it will get assimilated instead of getting nested.
Promise.prototype.then = function (fn) {
  var dfd = new Deferred();      // create a new result Deferred
  this.addCallback(function () { // when `this` resolves…
    // execute the callback with the results
    var result = fn.apply(null, arguments);
    // check whether it returned a promise
    if (result instanceof Promise)
      result.addCallback(dfd.resolve); // then hook the resolution on it
    else
      dfd.resolve(result); // resolve the new promise immediately
  });
  // and return the new Promise
  return dfd.promise;
};
Further concepts would be maintaining a separate error state (with an extra callback for it) and catching exceptions in the handlers, or guaranteeing asynchronity for the callbacks. Once you add those, you've got a fully functional Promise implementation.
Here is the error handling written out. It unfortunately is pretty repetitive; you can do better by using extra closures, but then it gets really, really hard to understand.
function Deferred() {
  var callbacks = [], // list of callbacks
      errbacks = [],  // list of errbacks
      value,          // the fulfill arguments or undefined until they're available
      reason;         // the error arguments or undefined until they're available

  this.fulfill = function () {
    if (reason || value) return false; // can't change state
    value = arguments; // settle the result
    for (var c; c = callbacks.shift(); )
      c.apply(null, value);
    errbacks.length = 0; // clear stored errbacks
  };

  this.reject = function () {
    if (value || reason) return false; // can't change state
    reason = arguments; // settle the error
    for (var c; c = errbacks.shift(); )
      c.apply(null, reason);
    callbacks.length = 0; // clear stored callbacks
  };

  this.promise = new Promise(function add(c) {
    if (reason) return; // nothing to do
    if (value)
      c.apply(null, value);
    else
      callbacks.push(c);
  }, function add(c) {
    if (value) return; // nothing to do
    if (reason)
      c.apply(null, reason);
    else
      errbacks.push(c);
  });
}
function Promise(addC, addE) {
  this.addCallback = addC;
  this.addErrback = addE;
}

Promise.prototype.then = function (fn, err) {
  var dfd = new Deferred();
  this.addCallback(function () { // when `this` is fulfilled…
    try {
      var result = fn.apply(null, arguments);
      if (result instanceof Promise) {
        result.addCallback(dfd.fulfill);
        result.addErrback(dfd.reject);
      } else
        dfd.fulfill(result);
    } catch (e) { // when an exception was thrown
      dfd.reject(e);
    }
  });
  this.addErrback(err ? function () { // when `this` is rejected…
    try {
      var result = err.apply(null, arguments);
      if (result instanceof Promise) {
        result.addCallback(dfd.fulfill);
        result.addErrback(dfd.reject);
      } else
        dfd.fulfill(result);
    } catch (e) { // when an exception was re-thrown
      dfd.reject(e);
    }
  } : dfd.reject); // when no `err` handler is passed then just propagate
  return dfd.promise;
};
You might want to check out the blog post on Adehun.
Adehun is an extremely lightweight implementation (about 166 LOC) and very useful for learning how to implement the Promise/A+ spec.
Disclaimer: I wrote the blog post but the blog post does explain all about Adehun.
The Transition function – Gatekeeper for State Transition
Gatekeeper function; ensures that state transitions occur when all required conditions are met.
If conditions are met, this function updates the promise’s state and value. It then triggers the process function for further processing.
The process function carries out the right action based on the transition (e.g. pending to fulfilled) and is explained later.
function transition(state, value) {
  if (this.state === state ||
      this.state !== validStates.PENDING ||
      !isValidState(state)) {
    return;
  }
  this.value = value;
  this.state = state;
  this.process();
}
The Then function
The then function takes in two optional arguments (onFulfill and onReject handlers) and must return a new promise. Two major requirements:
The base promise (the one on which then is called) needs to create a new promise using the passed in handlers; the base also stores an internal reference to this created promise so it can be invoked once the base promise is fulfilled/rejected.
If the base promise is settled (i.e. fulfilled or rejected), then the appropriate handler should be called immediately. Adehun.js handles this scenario by calling process in the then function.
function then(onFulfilled, onRejected) {
  var queuedPromise = new Adehun();
  if (Utils.isFunction(onFulfilled)) {
    queuedPromise.handlers.fulfill = onFulfilled;
  }
  if (Utils.isFunction(onRejected)) {
    queuedPromise.handlers.reject = onRejected;
  }

  this.queue.push(queuedPromise);
  this.process();
  return queuedPromise;
}
The Process function – Processing Transitions
This is called after state transitions or when the then function is invoked. Thus it needs to check for pending promises since it might have been invoked from the then function.
Process runs the Promise Resolution procedure on all internally stored promises (i.e. those that were attached to the base promise through the then function) and enforces the following Promise/A+ requirements:
Invoking the handlers asynchronously using the Utils.runAsync helper (a thin wrapper around setTimeout (setImmediate will also work)).
Creating fallback handlers for the onSuccess and onReject handlers if they are missing.
Selecting the correct handler function based on the promise state e.g. fulfilled or rejected.
Applying the handler to the base promise’s value. The value of this operation is passed to the Resolve function to complete the promise processing cycle.
If an error occurs, then the attached promise is immediately rejected.
function process() {
  var that = this,
      fulfillFallBack = function (value) {
        return value;
      },
      rejectFallBack = function (reason) {
        throw reason;
      };

  if (this.state === validStates.PENDING) {
    return;
  }

  Utils.runAsync(function () {
    while (that.queue.length) {
      var queuedP = that.queue.shift(),
          handler = null,
          value;

      if (that.state === validStates.FULFILLED) {
        handler = queuedP.handlers.fulfill || fulfillFallBack;
      }
      if (that.state === validStates.REJECTED) {
        handler = queuedP.handlers.reject || rejectFallBack;
      }

      try {
        value = handler(that.value);
      } catch (e) {
        queuedP.reject(e);
        continue;
      }

      Resolve(queuedP, value);
    }
  });
}
The Resolve function – Resolving Promises
This is probably the most important part of the promise implementation since it handles promise resolution. It accepts two parameters – the promise and its resolution value.
While there are lots of checks for various possible resolution values; the interesting resolution scenarios are two – those involving a promise being passed in and a thenable (an object with a then value).
Passing in a Promise value
If the resolution value is another promise, then the promise must adopt this resolution value’s state. Since this resolution value can be pending or settled, the easiest way to do this is to attach a new then handler to the resolution value and handle the original promise therein. Whenever it settles, then the original promise will be resolved or rejected.
Passing in a thenable value
The catch here is that the thenable value’s then function must be invoked only once (a good use for the once wrapper from functional programming). Likewise, if the retrieval of the then function throws an Exception, the promise is to be rejected immediately.
Like before, the then function is invoked with functions that ultimately resolve or reject the promise, but the difference here is the called flag, which is set on the first call and turns subsequent calls into no-ops.
function Resolve(promise, x) {
  if (promise === x) {
    var msg = "Promise can't be value";
    promise.reject(new TypeError(msg));
  }
  else if (Utils.isPromise(x)) {
    if (x.state === validStates.PENDING) {
      x.then(function (val) {
        Resolve(promise, val);
      }, function (reason) {
        promise.reject(reason);
      });
    } else {
      promise.transition(x.state, x.value);
    }
  }
  else if (Utils.isObject(x) ||
           Utils.isFunction(x)) {
    var called = false,
        thenHandler;
    try {
      thenHandler = x.then;
      if (Utils.isFunction(thenHandler)) {
        thenHandler.call(x,
          function (y) {
            if (!called) {
              Resolve(promise, y);
              called = true;
            }
          }, function (r) {
            if (!called) {
              promise.reject(r);
              called = true;
            }
          });
      } else {
        promise.fulfill(x);
        called = true;
      }
    } catch (e) {
      if (!called) {
        promise.reject(e);
        called = true;
      }
    }
  }
  else {
    promise.fulfill(x);
  }
}
The Promise Constructor
And this is the one that puts it all together. The fulfill and reject functions are syntactic sugar that pass no-op functions to resolve and reject.
var Adehun = function (fn) {
  var that = this;

  this.value = null;
  this.state = validStates.PENDING;
  this.queue = [];
  this.handlers = {
    fulfill: null,
    reject: null
  };

  if (fn) {
    fn(function (value) {
      Resolve(that, value);
    }, function (reason) {
      that.reject(reason);
    });
  }
};
I hope this helped shed more light into the way promises work.
