Question:
Is there an "easy" way to cancel ($q-/$http-)promises in AngularJS or determine the order in which promises were resolved?
Example
I have a long-running calculation and I request the result via $http. Some actions or events require me to restart the calculation (and thus send a new $http request) before the initial promise is resolved. Thus I imagine I can't use a simple implementation like
$http.post().then(function () {
    // apply data to view
})
because I can't ensure that the responses come back in the order in which I sent the requests. After all, I want to show the result of the latest calculation once all promises have resolved properly.
However, I would like to avoid waiting for the first response before sending a new request, like this:
const timeExpensiveCalculation = function () {
    return $http.post().then(function (response) {
        // isNewCalculationChained would be set elsewhere when a newer
        // calculation was requested while this one was still in flight
        if (isNewCalculationChained) { return timeExpensiveCalculation(); }
        else { return response.data; }
    });
};
Thoughts:
When using $http, I can access the config object on the response and use timestamps or other identifiers to manually order the incoming responses. However, I was hoping I could just tell Angular to cancel an outdated promise, so that its .then() function is not run when it gets resolved.
This does not work for plain $q promises (as opposed to $http) without a manual implementation, though.
Maybe just rejecting the promise right away is the way to go? But in both cases it might take a long time until a promise is finally resolved before the next request is generated (which leads to an empty view in the meantime).
Is there some Angular API function that I am missing, or are there robust design patterns or "tricks" with promise chaining or $q.all to handle multiple promises that return the "same" data?
I do it by generating a requestId, and in the promise's then() function I check whether the response belongs to the most recent requestId.
While this approach does not actually cancel the previous promises, it does provide a quick and easy way to ensure that you are handling the most recent request's response.
Something like:
var activeRequest;

function doRequest(params){
    // requestId is the id for the request being made in this function call
    var requestId = angular.toJson(params); // I usually md5 hash this

    // activeRequest will always be the last requestId sent out
    activeRequest = requestId;

    $http.get('/api/something', {data: params})
        .then(function(res){
            if(activeRequest == requestId){
                // this is the response for the last request
                // activeRequest is now handled, so clear it out
                activeRequest = undefined;
            }
            else {
                // response from a previous request (typically gets ignored)
            }
        });
}
Edit:
On a side note, I wanted to add that this concept of tracking requestIds can also be applied to preventing duplicate requests. For example, in my Data service's load(module, id) method, I do a little process like this (a minimal sketch follows the list):
1. Generate the requestId based on the URL + parameters.
2. Check the requests hash-table for that requestId.
3. If the requestId is not found: generate a new request and store its promise in the hash-table.
4. If the requestId is found: simply return the promise from the hash-table.
When the request finishes, remove the requestId's entry from the hash-table.
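A minimal sketch of that deduplication idea, assuming an AngularJS service with $http injected (the names pendingRequests and load are hypothetical, not the answer's actual Data service):

var pendingRequests = {}; // requestId -> in-flight promise

function load(url, params) {
    var requestId = url + angular.toJson(params); // or an md5 hash of this
    if (pendingRequests[requestId]) {
        // duplicate call: reuse the in-flight promise instead of re-requesting
        return pendingRequests[requestId];
    }
    var promise = $http.get(url, { params: params })
        .finally(function () {
            // request settled (success or failure), clear its entry
            delete pendingRequests[requestId];
        });
    pendingRequests[requestId] = promise;
    return promise;
}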
Cancelling a promise just means not invoking its onFulfilled and onRejected functions at the then stage. So, as @user2263572 mentioned, it's always best to leave the promise uncancelled (ES6 native promises cannot be cancelled anyway) and handle this condition within its then stage (like disregarding the task if a global variable is set to 2, as shown in the following snippet), and I am sure you can find tons of other ways to do it. One example:
Sorry that I use v (looks like a check mark) for the resolve and x (obvious) for the reject functions.
var prom1 = new Promise((v, x) => setTimeout(v.bind(null, "You shall not read this"), 2000)),
    prom2,
    validPromise = 1;

prom1.then(val => validPromise === 1 && console.log(val));

// oh what have I done..!?! Now I have to fire a new promise
prom2 = new Promise((v, x) => setTimeout(v.bind(null, "This is what you will see"), 3000));
validPromise = 2;
prom2.then(val => validPromise === 2 && console.log(val));
I'm still trying to figure out a good way to unit test this, but you could try out this kind of strategy:
var canceller = $q.defer();

service.sendCalculationRequest = function () {
    // abort the previous in-flight request, then create a fresh canceller;
    // an already-resolved timeout promise would otherwise cancel every
    // subsequent request immediately
    canceller.resolve();
    canceller = $q.defer();

    return $http({
        method: 'GET',
        url: '/do-calculation',
        timeout: canceller.promise
    });
};
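Hypothetical usage of that service method (applyToView stands in for whatever updates your view): because a superseded request is aborted via its timeout promise, its success handler never runs.

service.sendCalculationRequest().then(function (response) {
    // only the latest, un-cancelled request reaches this point
    applyToView(response.data);
});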
In ECMAScript 6 promises, there is a Promise.race(promiseArray) method. This takes an array of promises as its argument and returns a single promise. The first promise in the array to resolve will hand off its resolved value to the .then of the returned promise, while the other promises in the array, which came in second, third, etc., will not be waited upon.
Example:
var httpCall1 = $http.get('/api/something', {data: params})
    .then(function(val) {
        return {
            id: "httpCall1",
            val: val
        };
    });

var httpCall2 = $http.get('/api/something-else', {data: params})
    .then(function(val) {
        return {
            id: "httpCall2",
            val: val
        };
    });
// Might want to make a reusable function out of the above two, if you use this in Production
Promise.race([httpCall1, httpCall2])
    .then(function(winningPromise) {
        console.log('And the winner is ' + winningPromise.id);
        doSomethingWith(winningPromise.val);
    });
You could either use this with a Promise polyfill, or look into the q.race that someone has developed for Angular (though I haven't tested it).
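If you would rather not add a dependency, a race helper is small enough to sketch directly on top of $q (an assumption-level sketch, not AngularJS's own API; newer AngularJS releases also ship a native $q.race):

function race(promises) {
    var deferred = $q.defer();
    promises.forEach(function (promise) {
        // the first promise to settle wins; a deferred ignores later calls
        promise.then(deferred.resolve, deferred.reject);
    });
    return deferred.promise;
}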
Related
I'm making requests to an API, but their server only allows a certain number of active connections, so I would like to limit the number of ongoing fetches. For my purposes, a fetch is only completed (not ongoing) when the HTTP response body arrives at the client.
I would like to create an abstraction like this:
const fetchLimiter = new FetchLimiter(maxConnections);
fetchLimiter.fetch(url, options); // Returns the same thing as fetch()
This would make things a lot simpler, but there seems to be no way of knowing when a stream being used by other code ends, because streams are locked while they are being read. It is possible to use ReadableStream.tee() to split the stream into two, use one and return the other to the caller (possibly also constructing a Response with it), but this would degrade performance, right?
Since fetch uses promises, you can take advantage of that to make a simple queue system.
This is a method I've used before for queuing promise based stuff. It enqueues items by creating a Promise and then adding its resolver to an array. Of course until that Promise resolves, the await keeps any later promises from being invoked.
And all we have to do to start the next fetch when one finishes is just grab the next resolver and invoke it. The promise resolves, and then the fetch starts!
Best part, since we don't actually consume the fetch result, there's no worries about having to clone or anything...we just pass it on intact, so that you can consume it in a later then or something.
*Edit: since the body is still streaming after the fetch promise resolves, I added a third option so that you can pass in the body type, and have FetchLimiter retrieve and parse the body for you.
These all return a promise that is eventually resolved with the actual content.
That way you can just have FetchLimiter parse the body for you. I made it so it would return an array of [response, data], that way you can still check things like the response code, headers, etc.
For that matter, you could even pass in a callback or something to use in that await if you needed to do something more complex, like different methods of parsing the body depending on response code.
Example
I added comments to indicate where the FetchLimiter code begins and ends...the rest is just demo code.
It's using a fake fetch using a setTimeout, which will resolve between 0.5-1.5 secs. It will start the first three requests immediately, and then the actives will be full, and it will wait for one to resolve.
When that happens, you'll see the comment that the promise has resolved, then the next promise in the queue will start, and then you'll see the then from in the for loop resolve. I added that then just so you could see the order of events.
(function() {
  // fake fetch for demo purposes: resolves between 0.5-1.5 secs
  const fetch = (resource, init) => new Promise((resolve, reject) => {
    console.log('starting ' + resource);
    setTimeout(() => {
      console.log(' - resolving ' + resource);
      resolve(resource);
    }, 500 + 1000 * Math.random());
  });

  // FetchLimiter code begins
  function FetchLimiter() {
    this.queue = [];
    this.active = 0;
    this.maxActive = 3;
    this.fetchFn = fetch;
  }

  FetchLimiter.prototype.fetch = async function(resource, init, respType) {
    // if at max active, enqueue the next request by adding a promise
    // ahead of it, and putting the resolver in the "queue" array.
    if (this.active >= this.maxActive) {
      await new Promise(resolve => {
        this.queue.push(resolve); // push, adds to end of array
      });
    }
    this.active++; // increment active once we're about to start the fetch
    const resp = await this.fetchFn(resource, init);
    let data;
    if (['arrayBuffer', 'blob', 'json', 'text', 'formData'].indexOf(respType) >= 0)
      data = await resp[respType]();
    this.active--; // decrement active once fetch is done
    this.checkQueue(); // time to start the next fetch from queue
    return [resp, data]; // return value from fetch
  };

  FetchLimiter.prototype.checkQueue = function() {
    if (this.active < this.maxActive && this.queue.length) {
      // shift, pulls from start of array. This gives first in, first out.
      const next = this.queue.shift();
      next('resolved'); // resolve promise, value doesn't matter
    }
  };
  // FetchLimiter code ends

  const limiter = new FetchLimiter();
  for (let i = 0; i < 9; i++) {
    limiter.fetch('/mypage/' + i)
      .then(x => console.log(' - .then ' + x));
  }
})();
Caveats:
I'm not 100% sure if the body is still streaming when the promise resolves... that seems to be a concern for you. However, if that's a problem, you could use one of the Body mixin methods like blob or text or json, which don't resolve until the body content has been completely read (see here).
I intentionally kept it very short (about 15 lines of actual code) as a very simple proof of concept. You'd want to add some error handling in production code, so that if the fetch rejects because of a connection error or something, you still decrement the active counter and start the next fetch (a sketch of that follows these caveats).
Of course it's also using async/await syntax, because it's so much easier to read. If you need to support older browsers, you'd want to rewrite it with plain Promises or transpile with Babel or equivalent.
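To illustrate the error-handling caveat, here is a hedged sketch of how the fetch method above could be hardened with try/finally, so a rejected fetch still frees its slot and starts the next queued request:

FetchLimiter.prototype.fetch = async function(resource, init, respType) {
  if (this.active >= this.maxActive) {
    await new Promise(resolve => this.queue.push(resolve));
  }
  this.active++;
  try {
    const resp = await this.fetchFn(resource, init);
    let data;
    if (['arrayBuffer', 'blob', 'json', 'text', 'formData'].indexOf(respType) >= 0)
      data = await resp[respType]();
    return [resp, data];
  } finally {
    this.active--;     // runs on success *and* failure
    this.checkQueue(); // so the queue never stalls on a rejected fetch
  }
};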
I am trying to create an array of Promises, then resolve them with Promise.all(). I am using got, which returns a promise.
My code works, but I don't fully understand how. Here it is:
const got = require('got');
const url = 'myUrl';
const params = ['param1', 'param2', 'param3'];
let promiseArray = [];
for (const param of params) {
    promiseArray.push(got(url + param));
}

// Inspect the promises
for (const promise of promiseArray) {
    console.log(JSON.stringify(promise));
    // Output: promise: {"_pending":true,"_canceled":false,"_promise":{}}
}

Promise.all(promiseArray).then((results) => {
    // Operate on results - works just fine
}).catch((e) => {
    // Error handling logic
});
What throws me off is that the Promises are marked as "pending" when I add them into the array, which means they have already started.
I would think that they should lie inactive in promiseArray, and Promise.all(promiseArray) would both start them and resolve them.
Does this mean I am starting them twice?
You're not starting them twice. Promises start running as soon as they're created - or as soon as the JS engine finds enough resources to start them. You have no control over when they actually start.
All Promise.all() does is wait for all of them to settle (resolve or reject). Promise.all() does not interfere with or influence the order/timing of execution of the promises themselves.
Promises don't run at all. They are simply a notification system for communicating when asynchronous operations are complete.
So, as soon as you ran this:
promiseArray.push(got(url + param));
Your asynchronous operation inside of got() is already started and when it finishes, it will communicate that back through the promise.
All Promise.all() does is monitor all the promises and tell you when the first one rejects or when all of them have completed successfully. It does not "control" the async operations in any way. Instead, you start the async operations and they communicate back through the promises. You control when you started the async operations and the async operations then run themselves from then on.
If you break down your code a bit into pieces, here's what happens in each piece:
let promiseArray = [];
for (const param of params) {
    promiseArray.push(got(url + param));
}
This calls got() a bunch of times starting whatever async operation is in that function. got() presumably returns a promise object which is then put into your promiseArray. So, at this point, the async operations are all started already and running on their own.
// Inspect the promises
for (const promise of promiseArray) {
    console.log(JSON.stringify(promise));
    // Output: promise: {"_pending":true,"_canceled":false,"_promise":{}}
}
This loop just looks at all the promises to see if any of them might already be resolved, though one would not expect them to be, because their underlying async operations were only just started in the prior loop.
Promise.all(promiseArray).then((results) => {
    // Operate on results - works just fine
}).catch((e) => {
    // Error handling logic
});
Then, with Promise.all(), you're just asking to monitor the array of promises so it will tell you when either there's a rejected promise or when all of them complete successfully.
Promises "start" when they are created, i.e. the function that gives you the promise, has already launched the (often asynchronous) operations that will eventually lead into an asynchronous result. For instance, if a function returns a promise for a result of an HTTP request, it has already launched that HTTP request when returning you the promise object.
No matter what you do or not do with that promise object, that function (got) has already created a callback function which it passed on to an asynchronous API, such as a HTTP Request/Response API. In that callback function (which you do not see unless you inspect the source of got) the promise will be resolved as soon as it gets called back by that API. In the HTTP request example, the API calls that particular callback with the HTTP response, and the said callback function then resolve the promise.
Given all this, it is a bit strange to think of promises as things that "start" or "run". They are merely created in a pending state. The remaining thing is a pending callback from some API that will hopefully occur and then will change the state of the promise object, triggering then callbacks.
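A tiny demonstration of that point (nothing here is specific to got; any promise executor behaves the same way):

// The executor runs synchronously, at creation time:
const p = new Promise(resolve => {
    console.log('executor runs immediately'); // logs first
    setTimeout(() => resolve(42), 1000);      // async work is already scheduled
});
console.log('promise created');               // logs second
p.then(v => console.log('resolved with', v)); // logs third, about a second later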
Please note that fetching an array of urls with Promise.all has some possible problems:
1. If any of the urls fails to fetch, the promise returned by Promise.all rejects, so your resolve handler is never called (one failure means your resolve function never runs).
2. If your array is very large, you will clobber the site and your network with requests; you may want to throttle the maximum open requests and/or the requests made in a certain timeframe.
The first problem is easily solved: you process the failed requests and add them to the results. In the resolve handler you can then decide what to do with the failed requests:
const got = require('got');
const url = 'myUrl';
const params = ['param1', 'param2', 'param3'];
const Fail = function(details){ this.details = details; };

Promise.all(
    params.map(
        param =>
            got(url + param)
            .then(
                x => x, // if resolved just pass along the value
                reject => new Fail([reject, url + param])
            )
    )
).then((results) => {
    const successes = results.filter(result => (result && result.constructor) !== Fail);
    const failedItems = results.filter(result => (result && result.constructor) === Fail);
}).catch((e) => {
    // Error handling logic (rejections were already mapped to Fail values above)
});
Point 2 is a bit more complicated; throttling can be done with a helper function (one possible implementation is sketched after the snippet) and would look something like this:
... other code
const max5 = throttle(5);

Promise.all(
    params.map(
        param =>
            max5(got)(url + param)
            .then(
                x => x, // if resolved just pass along the value
                reject => new Fail([reject, url + param])
            )
    )
)
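The original answer links to its own throttle helper, which is not reproduced here; below is a hedged sketch of one possible implementation that matches the throttle(5) / max5(got)(url + param) usage above:

function throttle(max) {
    var active = 0; // calls currently running
    var queue = []; // calls waiting for a free slot
    function next() {
        if (active >= max || queue.length === 0) { return; }
        active++;
        var job = queue.shift();
        Promise.resolve(job.fn.apply(null, job.args))
            .then(job.resolve, job.reject)
            .finally(function () { active--; next(); });
    }
    // throttle(n) returns a wrapper factory: max5(fn) is a rate-limited fn
    return function (fn) {
        return function () {
            var args = arguments;
            return new Promise(function (resolve, reject) {
                queue.push({ fn: fn, args: args, resolve: resolve, reject: reject });
                next();
            });
        };
    };
}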
This is more of a conceptual question. I understand the Promise design pattern, but couldn't find a reliable source to answer my question about Promise.all():
What is (are) the correct scenario(s) to use Promise.all()?
OR
Are there any best practices for using Promise.all()? Should it ideally be used only if all of the promise objects are of the same or a similar type?
The only one I could think of is:
Use Promise.all() if you want the combined promise to resolve only if all of the promise objects resolve, and to reject if even one rejects.
I'm not sure anyone has really given the most general purpose explanation for when to use Promise.all() (and when not to use it):
What is (are) the correct scenario(s) to use Promise.all()?
Promise.all() is useful anytime you have more than one promise and your code wants to know when all the operations that those promises represent have finished successfully. It does not matter what the individual async operations are. If they are async, are represented by promises and your code wants to know when they have all completed successfully, then Promise.all() is built to do exactly that.
For example, suppose you need to gather information from three separate remote API calls, and when you have the results from all three API calls, you then need to run some further code using all three results. That situation would be perfect for Promise.all(). You could do something like this:
Promise.all([apiRequest(...), apiRequest(...), apiRequest(...)]).then(function(results) {
    // API results are in the results array here
    // processing can continue using the results of all three API requests
}, function(err) {
    // an error occurred, process the error here
});
Promise.all() is probably most commonly used with similar types of requests (as in the above example), but there is no reason that it needs to be. If you had a different case where you needed to make a remote API request, read a local file and read a local temperature probe and then when you had data from all three async operations, you wanted to then do some processing with the data from all three, you would again use Promise.all():
Promise.all([apiRequest(...), fs.promises.readFile(...), readTemperature(...)]).then(function(results) {
    // all results are in the results array here
    // processing can continue using the results of all three async operations
}, function(err) {
    // an error occurred, process the error here
});
On the flip side, if you don't need to coordinate among them and can just handle each async operation individually, then you don't need Promise.all(). You can just fire each of your separate async operations with their own .then() handlers and no coordination between them is needed.
In addition Promise.all() has what is called a "fast fail" implementation. It returns a master promise that will reject as soon as the first promise you passed it rejects or it will resolve when all the promises have resolved. So, to use Promise.all() that type of implementation needs to work for your situation. There are other situations where you want to run multiple async operations and you need all the results, even if some of them failed. Promise.all() will not do that for you directly. Instead, you would likely use something like Promise.settle() for that situation. You can see an implementation of .settle() here which gives you access to all the results, even if some failed. This is particularly useful when you expect that some operations might fail and you have a useful task to pursue with the results from whatever operations succeeded or you want to examine the failure reasons for all the operations that failed to make decisions based on that.
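For illustration, the settle idea can be sketched in a few lines for environments that predate Promise.allSettled() (an assumption-level sketch, not the implementation linked above):

function settleAll(promises) {
    return Promise.all(promises.map(function (p) {
        return Promise.resolve(p).then(
            function (value) { return { state: 'fulfilled', value: value }; },
            function (reason) { return { state: 'rejected', reason: reason }; }
        );
    }));
}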
Are there any best practices to use Promise.all()? Should it ideally be used only if all of the promise objects are of the same or similar types?
As explained above, it does not matter what the individual async operations are or if they are the same type. It only matters whether your code needs to coordinate them and know when they all succeed.
It's also useful to list some situations when you would not use Promise.all():
When you only have one async operation. With only one operation, you can just use a .then() handler on the one promise and there is no reason for Promise.all().
When you don't need to coordinate among multiple async operations.
When a fast fail implementation is not appropriate. If you need all results, even if some fail, then Promise.all() will not do that by itself. You will probably want something like Promise.allSettled() instead.
If your async operations do not all return promises, Promise.all() cannot track an async operation that is not managed through a promise.
Promise.all is for waiting for several Promises to resolve in parallel (at the same time). It returns a Promise that resolves when all of the input Promises have resolved:
// p1, p2, p3 are Promises
Promise.all([p1, p2, p3])
    .then(([p1Result, p2Result, p3Result]) => {
        // This function is called when p1, p2 and p3 have all resolved.
        // The arguments are the resolved values.
    })
If any of the input Promises is rejected, the Promise returned by Promise.all is also rejected.
A common scenario is waiting for several API requests to finish so you can combine their results:
const contentPromise = requestUser();
const commentsPromise = requestComments();

const combinedContent = Promise.all([contentPromise, commentsPromise])
    .then(([content, comments]) => {
        // content and comments have both finished loading.
    })
You can use Promise.all with any Promise instances.
It's hard to answer these questions as they are the type that tend to answer themselves as one uses the available APIs of a language feature. Basically, it's fine to use Promises any way that suits your use case, so long as you avoid their anti-patterns.
What is(are) the correct scenario(s) to use promise.all()
Any situation in which an operation depends on the successful resolution of multiple promises.
Are there any best practices to use promise.all()? Should it be ideally used only if all of the promise objects are of the same or similar types?
Generally, no and no.
I use Promise.all() when I have to make several requests to my API and I don't want to display anything before the application has loaded all the requested data, so I delay the execution flow until I have all the data I need.
Example:
What I want to do: I want to load the users of my app and their products (imagine you have to make multiple requests) before displaying a table in my app with the user emails and the product names of each user.
What I do next: I send the requests to my API, creating the promises and passing them to Promise.all().
What I do when all the data has been loaded: Once the data arrives at my app, the Promise.all() callback executes, and I can then make the table with the users visible.
I hope this helps you see in which scenarios it makes sense to use Promise.all(); a minimal sketch follows.
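A hedged sketch of that flow (requestUsers, requestProducts, renderTable and showLoadError are hypothetical helpers, not part of the answer above):

Promise.all([requestUsers(), requestProducts()])
    .then(([users, products]) => {
        // all the data is here; it is now safe to show the table
        renderTable(users, products);
    })
    .catch(showLoadError); // keep the table hidden and report the failure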
As #joews mentioned, probably one of the important features of Promise.all that should be explicitly indicated is that it makes your async code much faster.
This makes it ideal in any code that contains independent calls (that we want to return/finish before the rest of the code continues), but especially when we make frontend calls and want the user's experience to be as smooth as possible.
async function waitSecond() {
  return new Promise((res, rej) => {
    setTimeout(res, 1000);
  });
}

function runSeries() {
  console.time('series');
  waitSecond().then(() => {
    waitSecond().then(() => {
      waitSecond().then(() => {
        console.timeEnd('series');
      });
    });
  });
}

function runParallel() {
  console.time('parallel');
  Promise.all([
    waitSecond(),
    waitSecond(),
    waitSecond(),
  ]).then(() => {
    console.timeEnd('parallel');
  });
}

runSeries();
runParallel();
I tend to use Promise.all for something like this:
myService.getUsers()
    .then(users => {
        this.users = users;
        var profileRequests = users.map(user => {
            return myService.getProfile(user.Id); // returns a promise
        });
        return Promise.all(profileRequests);
    })
    .then(userProfilesRequest => {
        // do something here with all the user profiles, like assign them back to the users.
        this.users.forEach((user, index) => {
            user.profile = userProfilesRequest[index];
        });
    });
Here, for each user, we're going off and getting their profile. I don't want my promise chain to get out of hand now that I have x promises to resolve.
So Promise.all() will basically aggregate all my promises back into one, and I can manage that through the next then. I can keep doing this for as long as I like, say for each profile I want to get related settings, etc. Each time I create tonnes more promises, I can aggregate them all back into one.
1. Promise.all: this method is useful when you want to wait for more than one promise to complete. The Promise.all(iterable) method returns a promise that resolves when all of the promises in the iterable argument have resolved, or rejects with the reason of the first passed promise that rejects.
2. Just use Promise.all(files).catch(err => { }). The catch fires if ANY of the promises is rejected.
3. Use .reflect (from Bluebird) on the promises before .all if you want to wait for all promises to reject or fulfill (a sketch follows the example below).
Syntax: Promise.all(iterable);
Promise.all passes along an array of values from all the promises in the iterable object that it was given.
https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Promise/all
var isCallFailed = false;

function myEndpoint1() {
    return isCallFailed ? Promise.reject("Bohoo!") : Promise.resolve({"a":"a"});
}

function myEndpoint2() {
    return Promise.resolve({"b":"b"});
}

Promise.all([myEndpoint1(), myEndpoint2()])
    .then(values => {
        var data1 = values[0];
        var data2 = values[1];
        alert("SUCCESS... data1: " + JSON.stringify(data1) + "; data2: " + JSON.stringify(data2));
    })
    .catch(error => {
        alert("ERROR... " + error);
    });
You can try the other case by setting isCallFailed = true.
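For point 3 above, a sketch of the Bluebird .reflect pattern (this assumes the Bluebird library; .reflect is not part of native promises):

const Promise = require('bluebird'); // Bluebird, not the native Promise

Promise.all(files.map(p => Promise.resolve(p).reflect()))
    .each(inspection => {
        // every promise is waited on, whether it fulfilled or rejected
        if (inspection.isFulfilled()) {
            console.log('fulfilled with', inspection.value());
        } else {
            console.log('rejected because of', inspection.reason());
        }
    });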
Use Promise.all only when you need to run code that depends on the results of more than one asynchronous operation using promises.
For example:
You have a scenario like this: you need to download a 2000 MB file from the server, and at the same time you want to free up the user's storage to make sure it can hold the downloaded file.
And you need to save the file only if it is downloaded successfully and the storage space has been created successfully.
You would do it like this.
Your first asynchronous operation:
var p1 = new Promise(function(resolve, reject) {
    // you need to download a 2000 MB file and call resolve
    // once you have successfully downloaded the file
})
And your second asynchronous operation:
var p2 = new Promise(function(resolve, reject) {
    // you need to clear 2000 MB of the user's storage,
    // which can take some time
})
Now you want to save only when both of the promises have resolved successfully; otherwise not.
You use Promise.all like this:
Promise.all([p1, p2]).then((result) => {
    // you will be here only if both p1 and p2 resolved successfully.
    // your code to save the downloaded file goes here
})
.catch((error) => {
    // you will be here if at least one promise in p1, p2 was rejected.
    // show an error to the user
    // take some other action
})
Promise.all can be used in a scenario where there is a routine validating multiple rules based on particular criteria, and you have to execute them all in parallel and see the results of those rules at one point. Promise.all returns the results as an array, one entry per rule resolved in your rule validator routine.
E.g.
const results = await Promise.all([validateRule1(), validateRule2(), validateRule3(), ...]);
Then the results array may look like (depending on the conditions), for example: [true, false, false].
Now you can reject/accept the overall outcome based on the returned values. This way you won't have to apply multiple conditions with if-then-else. A runnable sketch follows.
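A runnable sketch of that idea; the rule functions here are hypothetical, and each returns a promise that resolves to a boolean:

const validateRule1 = async () => true;
const validateRule2 = async () => false;
const validateRule3 = async () => false;

async function validateAll() {
    const results = await Promise.all([validateRule1(), validateRule2(), validateRule3()]);
    console.log(results);          // e.g. [true, false, false]
    return results.every(Boolean); // accept only if every rule passed
}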
If you are interested only in Promise.all, skip down to the Promise.all section below.
Promises provide a convenient way to organize asynchronous code.
A promise is a special object that contains its state. Initially it is pending ("waiting"), and then it becomes one of: fulfilled ("completed successfully") or rejected ("completed with an error").
Two types of callbacks can be attached to a promise:
onFulfilled - triggered when the promise reaches the "completed successfully" state.
onRejected - triggered when the promise reaches the "completed with an error" state.
The syntax for creating the Promise:
var promise = new Promise(function(resolve, reject) {
    // This function will be called automatically.
    // It is possible to run any asynchronous operations here,
    // and when they end you need to call one of:
    // resolve(result) on success
    // reject(error) on error
})
The universal method for attaching handlers:
promise.then(onFulfilled, onRejected)
onFulfilled - a function that will be called with the result passed to resolve.
onRejected - a function that will be called with the error passed to reject.
With it you can assign both handlers at once, or only one:
// onFulfilled runs on success
promise.then(onFulfilled)
// onRejected runs on error
promise.then(null, onRejected)
A synchronous throw is the same as a reject:
'use strict';

let p = new Promise((resolve, reject) => {
    // same as reject(new Error("o_O"))
    throw new Error("o_O");
});

p.catch(alert); // Error: o_O
Promisification
Promisification is taking asynchronous functionality and wrapping it so that it returns a promise.
After promisification, the functionality is often much more convenient to use.
As an example, here is a wrapper for making XMLHttpRequest requests.
The function httpGet(url) returns a promise which, upon successful loading of the data from url, becomes fulfilled with that data, and in case of error becomes rejected with the error information:
function httpGet(url) {
    return new Promise(function(resolve, reject) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);

        xhr.onload = function() {
            if (this.status == 200) {
                resolve(this.response);
            } else {
                var error = new Error(this.statusText);
                error.code = this.status;
                reject(error);
            }
        };

        xhr.onerror = function() {
            reject(new Error("Network Error"));
        };

        xhr.send();
    });
}
As you can see, inside the function an XMLHttpRequest object is created and sent as usual; when onload/onerror are called, they call resolve (on status 200) or reject, respectively.
Usage:
httpGet("/article/promise/user.json")
.then(
response => alert(`Fulfilled: ${response}`),
error => alert(`Rejected: ${error}`)
);
Parallel execution
What if we want to run multiple asynchronous processes simultaneously and process their results?
The Promise class has the following static method.
Promise.all(iterable)
The call Promise.all(iterable) receives an array (or other iterable object) of promises and returns a promise that waits until all of the passed promises have completed, then changes to the "fulfilled" state with an array of their results.
For example:
Promise.all([
    httpGet('/article/promise/user.json'),
    httpGet('/article/promise/guest.json')
]).then(results => {
    alert(results);
});
Let's say we have an array of URLs:
let urls = [
    '/article/promise/user.json',
    '/article/promise/guest.json'
];
To download them in parallel, you need to:
1. Create a promise for each URL.
2. Wrap the array of promises in Promise.all.
We obtain this:
'use strict';

let urls = [
    '/article/promise/user.json',
    '/article/promise/guest.json'
];

Promise.all( urls.map(httpGet) )
    .then(results => {
        alert(results);
    });
Note that if any of the promises ends with an error, that error becomes the result of Promise.all.
The results of the remaining promises are ignored.
For example:
Promise.all([
    httpGet('/article/promise/user.json'),
    httpGet('/article/promise/guest.json'),
    httpGet('/article/promise/no-such-page.json') // (no such page)
]).then(
    result => alert("won't run"),
    error => alert("Error: " + error.message) // Error: Not Found
)
In summary:
A promise is a special object that stores its state, the current result (if any), and callbacks.
When you create new Promise((resolve, reject) => ...), the function argument starts automatically; it should call resolve(result) on success and reject(error) on error.
Only the first argument of resolve/reject is passed on to the handlers on this promise (the rest are ignored).
Handlers are attached by calling .then/.catch.
To pass results from one handler to the next, use chaining.
https://www.promisejs.org/patterns/
I have a promise that returns data and I want to save that in variables. Is this impossible in JavaScript because of the async nature and do I need to use onResolve as a callback?
Can I somehow use this (e.g. wrap it with async/await):
const { foo, bar } = Promise.then(result => result.data, errorHandler);
// rest of script
instead of this?
Promise.then(result => {
const { foo, bar } = result.data;
// rest of script
}, errorHandler);
Note: the Bluebird library is used instead of the native implementation, and I can't change from Promise to async/await or generators.
NO, you can't get the data synchronously out of a promise like you suggest in your example. The data must be used within a callback function. Alternatively, in functional programming style, the promise data could be map()ed over.
If you are OK using async/await (you should, it's awesome), then you can write code that looks synchronous yet retains the asynchronicity of a promise (see @loganfsmyth's comments):
const { foo, bar } = await iAmAPromise.then(result => result.data);
Overall, since you are already using ES6, I assume you are also using a transpiler, in which case you should definitely give async/await a try.
Just be sure to weigh into that decision that, as of today, async/await is not yet a ratified specification.
While you can get a value from an awaited Promise inside an async function (simply because it pauses the function to await a result), you can't ever get a value directly "out" of a Promise and back into the same scope as the code that created the Promise itself.
That's because "out of" would mean trying to take something that exists in the future (the eventually resolved value) and putting it into a context (synchronous variable assignment) that already happened in the past.
That is, time travel. And even if time travel were possible, it probably wouldn't be a good coding practice, because time travel can be very confusing. :)
In general, if you find yourself feeling like you need to do this, it's good sign that you need to refactor something. Note that what you're doing with "result => result.data" here:
Promise.then(result => result.data, errorHandler);
// rest of script
..is already a case of you working with (literally, mapping over) the value by passing it to a function. But, assuming that "// rest of script" does something important related to this value, you probably want to continue mapping over the now updated value with yet another function that then does something side-effect-y with the value (like display the data on the screen).
Promise
    .then(({ data }) => data)
    .then(data => doSomethingWithData(data)) // rest of script
    .catch(errorHandler);
"doSomethingWithData" will be called (if it's ever called) at some unknown point in the future. Which is why it's a good practice to clearly encapsulate all that behavior into a specific function and then hook that function up to the Promise chain.
It's honestly better this way, because it requires you to clearly declare a particular sequence of events that will happen, explicitly separated out from the first run through all of your application code's execution.
To put it another way, imagine this scenario, hypothetically executed in the global, top-level scope:
const { foo, bar } = Promise.then(result => result.data, errorHandler);
console.log(foo);
//...more program
What would you expect to happen there? There are two possibilities, and both of them are bad.
1. Your entire program would have to halt and wait for the Promise to execute before it could know what foo and bar would... nay, might be. (This is what await, inside an async function, does in fact do: it pauses the entire function execution until the value is available or an error is thrown.)
2. foo and bar would just be undefined (this is what actually happens), since, as executed synchronously, they'd just be non-existent properties of the top-level Promise object (which is not itself a "value", but rather a quasi-monadic wrapper around getting an eventual value OR an error) which most likely doesn't even contain a value yet.
let out;
mypromise.then(x => out = x);
console.log(out);
Only use this code when
you are debugging by hand,
and you know the promise has already succeeded
Behaviour of this code:
While the promise has not resolved yet, out will be undefined.
If the promise resolves to a failure, you get an unhandled rejection (there is no handler to catch the error).
When the promise resolves to success (which may be after the console.log), the value of out will change from undefined to the Promise result - maybe in the middle of whatever you were doing.
In production code, or really any code that runs without you, the code that uses the Promise result should be inside the .then() callback, and not use some out variable. That way:
your code won't be run too early (when the result is still undefined),
and won't run too late (because you don't need 'I think sleeping for 10 seconds should do it' workarounds),
and won't erroneously run when the promise fails.
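In other words, a minimal sketch of the production-style version of the snippet above:

mypromise
    .then(x => {
        console.log(x); // runs exactly when the result exists
        // ...do the real work with x here
    })
    .catch(err => console.error(err)); // failures land here instead of
                                       // becoming unhandled rejections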
I have a solution for getting this value "out", if you will. This is a backend method for uploading multiple files to AWS S3, which must be handled asynchronously. I also need the responses from S3, so I need the values out of the Promise:
async function uploadMultipleFiles(files) {
    const promises = []; // Creating an array to store promises
    let result;          // declared here so it is visible outside the .then below

    for (let i = 0; i < files.length; i++) {
        const fileStream = fs.createReadStream(files[i].path)
        const uploadParams = {
            Bucket: bucketName,
            Body: fileStream,
            Key: files[i].filename
        }
        promises.push(s3.upload(uploadParams).promise()) // pushing each promise instead
                                                         // of awaiting, to enable concurrent uploads
    }

    await Promise.all(promises).then(values => {
        console.log("values: ", values) // just checking values
        result = values; // storing in a different variable
    });
    return result; // returning that variable
}
The key lines in context with the issue being discussed here are these :
await Promise.all(promises).then(values => {
    console.log("values: ", values) // just checking values
    result = values; // storing in a different variable
});
return result; // returning that variable
But of course we have to also await in the function that will be calling this :
const result = await uploadMultipleFiles(files);
All you need to do is extract what you have in your promise by using .then:
yourFunction().then( resp => {
    // do what you require here
    let var1 = resp.var1;
    let var2 = resp.var2;
    // ...
})
yourFunction() should return a Promise
How to Get A Value From A Promise
YES! You can extract value out of a promise!
Do NOT let anyone here say you cannot. Just realize that any variable storing your returned promise value will likely have a short delay before it is populated. So if you have a JavaScript script page that needs that data outside of the Promise or async/await functions, you may have to create loops, interval timers, or event listeners to wait and grab the value after some time. Because most async/await promises are REST calls and very fast, that wait may require just a quick while loop!
It is easy! Just set a variable (or create a function) that can access the value inside your async or promise code and store the value in an outside variable, object, array, etc. that you can check on. Here is a primitive example:
// I just created a simple global variable to store my promise message.
var myDelayedData = '';

// This function is only used to go get data.
// Note I set the delay for 5 seconds below so you can test the delay
const getData = () => {
  return new Promise((resolve, reject) => {
    setTimeout(() => resolve('my promise data'), 5000);
  });
}

// I like to create a second async function to get the data
// from the promise object and save the data to my global variable.
const processData = async () => {
  let data = await getData();
  // Save the delayed data to my global variable
  myDelayedData = data;
}

// Start the data call from the promise.
processData();

// Open up your browser, hit F12 to pull up the browser devtools,
// click the "console" tab, and watch the script print out
// the value of the variable with an empty message until after
// 5 seconds the variable is assigned the resolved promise
// and appears in the message!

// THAT IS IT! Your variable is assigned the promise value
// after the delay I set above!

// TEST: But let's test it and see...
var end = setInterval(function(){
  console.log("My Result: " + myDelayedData);
  if(myDelayedData !== ''){
    clearInterval(end);
  }
}, 1000);

// You should see this in the devtools console.
// Each line below represents a 1 second delay.
My Result:
My Result:
My Result:
My Result: my promise data
Most people seeing this code will say, "Then why use a Promise? Just make a call for the data, pause, and update your application." True: the whole point of a Promise is to encapsulate data processes inside the promise and take actions while the rest of the script continues.
But... you may need to output a result outside the Promise. You may have other global processes that need that data because it changes the state of the global application, for example. But at least you know you can get to that Promise data if you needed it.
I'm using jQuery to make various ajax POST requests. I need to keep track of the success or failure of each one of them, along with the overall progress of the complete batch, so that I can update the UI with a progress bar and info about how many requests have succeeded, and how many have failed, out of the total.
Before attempting to implement the feature in my app, I've been playing with some code in jsfiddle as a proof of concept, with no luck so far. This is what I've got:
// an alternative to console.log to see the log in the web page
var fnLog = function(message) {
    $('#console').append($("<p>" + message + "</p>"));
};

// keeping track of how many ajax calls have been finished (successfully or not)
var count = 0;

// a dummy ajax call that succeeds by default
var fn = function(shouldFail) {
    return $.get(shouldFail ? '/echo/fail/' : '/echo/json/')
        .done(function() { fnLog("done") })
        .fail(function() { fnLog("FAIL") });
};

// a set of different asynchronous ajax calls
var calls = [fn(), fn(), fn(), fn(true), fn(), fn()];

// an attempt to make a collective promise out of all the calls above
$.when.apply($, calls)
    .done(function() { fnLog("all done") })
    .fail(function() { fnLog("ALL FAIL") })
    .always(function() { fnLog("always") })
    .progress(function(arg) { fnLog("progress" + arg) })
    .then(function() { fnLog("finished") });
It's all in this fiddle: http://jsfiddle.net/mmtbo7v6/1/
What I need is the ability to provide a callback that ought to be called after all promises are resolved (either successfully or not).
When all calls above are set to succeed (by removing the true argument to the fourth fn call in the array) it works fine. The output prints the following:
done
done
done
done
done
done
all done
always
finished
But when even a single call is set to fail (as it is by default in the jsfiddle), the output is the following:
done
FAIL
ALL FAIL
always
done
done
done
done
So none of the collective promise callbacks (the one generated by the $.when call) is called after all promises are resolved. The final .then is not called at all if a single ajax call fails.
Additionally, I would appreciate some insight on how to keep track of the progress of this batch of ajax calls, to update a progress bar in the UI.
Well... I'm going to be unfair. jQuery actually comes bundled with progression events, but I hate them myself because I don't think they compose or aggregate well - so I'll show a simpler alternative approach for that progress bar that I believe is superior.
First things first:
The 'all promises resolved but some possibly rejected' issue is typically called 'settle'. I've provided an answer to a similar question here just giving the results, and here an implementation that gives you access to all results, even rejected ones.
function settle(promises){
    var d = $.Deferred();
    var counter = 0;
    var results = Array(promises.length);
    promises.forEach(function(p, i){
        p.then(function(v){ // add as fulfilled
            results[i] = {state: "fulfilled", promise: p, value: v};
        }).catch(function(r){ // add as rejected
            results[i] = {state: "rejected", promise: p, reason: r};
        }).always(function(){ // when any promise resolves or fails
            counter++; // notify the counter
            if (counter === promises.length) {
                d.resolve(results); // resolve the deferred.
            }
        });
    });
    return d.promise();
}
You'd use settle in place of $.when to get your desired results.
As for progression - I personally recommend passing a progression callback to the method itself. The pattern goes something like this:
function settle(promises, progress){
    progress = progress || function(){}; // in case omitted
    var d = $.Deferred();
    var counter = 0;
    var results = Array(promises.length);
    promises.forEach(function(p, i){
        p.then(function(v){ // add as fulfilled
            results[i] = {state: "fulfilled", promise: p, value: v};
        }).catch(function(r){ // add as rejected
            results[i] = {state: "rejected", promise: p, reason: r};
        }).always(function(){ // when any promise resolves or fails
            counter++; // notify the counter
            progress(counter / promises.length); // fraction settled so far
            if (counter === promises.length) {
                d.resolve(results); // resolve the deferred.
            }
        });
    });
    return d.promise();
}
Which would let you do something like:
settle([url1, url2, ... url100].map(function(url){ return $.get(url); }), function(soFar){
    $("#myProgressBar").css("width", (soFar * 100) + "%");
}).then(function(results){
    console.log("All settled", results);
});
It turns out there's a much better alternative to this problem, one which shadows the promises approach. Behold the combination of two patterns: Observables + Iterables = Reactive programming.
Reactive Programming is programming with asynchronous data streams, that is, treating asynchronous data streams as collections that can be traversed and transformed as traditional collection data types. This article is a great introduction.
I won't turn this answer into a tutorial, so let's go straight to the solution, shown below. I'm going to use the RxJS library, but there are other libraries for Reactive Programming in JS (Bacon.js seems to be really popular too).
function settle(promises) {
    return Rx.Observable.from(promises).concatMap(function(promise, index) {
        return Rx.Observable.fromPromise(promise)
            .map(function(response) {
                return { count: index + 1, total: promises.length, state: "fulfilled", promise: promise, value: response };
            })
            .catch(function(reason) {
                return Rx.Observable.of({ count: index + 1, total: promises.length, state: "rejected", promise: promise, reason: reason });
            });
    });
}
The function itself returns an observable, which is a stream of events. Namely, the events of each promise finished, either successfully or not. We can use that returned observable to listen to this stream (or to subscribe to it, if we're to adhere to RxJS terminology).
var results = settle(promises);

results.subscribeOnNext(function(result) {
    // process each result as it arrives
    // progress info can be extracted from result.count and result.total
});

results.subscribeOnCompleted(function() {
    // completion callback
});
And that's it. Much cleaner code, a more functional-programming approach. No need to keep state, everything expressed in a more declarative way. Just what we want to be done, not how it should be done.