I'm using $http interceptors to capture all events following an ajax submission. For some reason, I am not able to throw a requestError. I've set up a test app to try and call requestError, but so far I can only get multiple responseErrors.
From angularjs docs:
requestError: interceptor gets called when a previous interceptor threw an error or resolved with a rejection.
This is my test code.
.factory('httpInterceptor', ['$q', function (q) {
    var interceptor = {};

    var uniqueId = function uniqueId() {
        return new Date().getTime().toString(16) + '.' + (Math.round(Math.random() * 100000)).toString(16);
    };

    interceptor.request = function (config) {
        config.id = uniqueId();
        console.log('request ', config.id, config);
        return config;
    };

    interceptor.response = function (response) {
        console.log('response', response);
        return response;
    };

    interceptor.requestError = function (config) {
        console.log('requestError ', config.id, config);
        return q.reject(config);
    };

    interceptor.responseError = function (response) {
        console.log('responseError ', response.config.id, response);
        return q.reject(response);
    };

    return interceptor;
}])
.config(['$httpProvider', function ($httpProvider) {
    $httpProvider.interceptors.push('httpInterceptor');
}])
.controller('MainCtrl', ['$http', function ($http) {
    var mainCtrl = this;
    mainCtrl.method = null;
    mainCtrl.url = null;

    var testHttp = function testHttp() {
        $http({ method: mainCtrl.method, url: mainCtrl.url }).then(
            (response) => { console.log('ok', response); },
            (response) => { console.log('reject', response); }
        );
    };

    // api
    mainCtrl.testHttp = testHttp;
}])
I've tried multiple ways of creating http errors, and every time only responseError gets called. Things I've tried:
Get the server to return different types of error for every request, e.g. 400 and 500.
Get the server to sleep for random times, so that some later requests respond with an error before earlier requests. Same resource, same server response.
Generate 404 errors by requesting resources which don't exist.
Disconnect from the internet (responseError fires with status -1).
SIMILAR QUESTIONS
1) This question seems to have the answer:
When do functions request, requestError, response, responseError get invoked when intercepting HTTP request and response?
The key paragraph being:
A key point is that any of the above methods can return either an "normal" object/primitive or a promise that will resolve with an appropriate value. In the latter case, the next interceptor in the queue will wait until the returned promise is resolved or rejected.
but I think I'm doing what it stipulates (viz. the random sleep by the server), but no luck. I am getting responseErrors out of order relative to the requests, i.e. as soon as the server responds.
2) A similar question was asked about 1 year ago: Angular and Jasmine: How to test requestError / rejection in HTTP interceptor?
Unfortunately, it only provides an explanation for interceptors. It does not answer my question.
I have tested in Chrome and Firefox. I've done my best to find a solution to this, but I haven't come across one yet.
This happens because the request isn't rejected at any point. It is supposed to be used like this:
app.factory('interceptor1', ['$q', function ($q) {
    return {
        request: function (config) {
            console.log('request', config);
            if (config.url === 'restricted')
                return $q.reject({ error: 'restricted', config: config });
            return config; // always hand the config back for requests we don't reject
        }
    };
}]);

app.factory('interceptor2', ['$q', function ($q) {
    return {
        requestError: function (rejection) {
            console.log('requestError', rejection);
            if (rejection.error === 'restricted')
                return angular.extend(rejection.config, { url: 'allowed' });
            return $q.reject(rejection);
        }
    };
}]);

app.config(['$httpProvider', function ($httpProvider) {
    $httpProvider.interceptors.push('interceptor1');
    $httpProvider.interceptors.push('interceptor2');
}]);
Notice that interceptors work as a stack (starting from the transform* hooks of the $http request), so a request can't be rejected and recovered within a single interceptor.
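For example, with both interceptors above registered, a hypothetical call like the following (the 'restricted' and 'allowed' URLs come from the interceptor code) is first rejected by interceptor1 and then recovered by interceptor2, which rewrites the URL before the request is actually sent:

$http.get('restricted').then(function (response) {
    // interceptor1.request rejected the config, interceptor2.requestError
    // recovered it and rewrote the url, so the request really went to 'allowed'.
    console.log('response from', response.config.url); // logs 'allowed'
});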
I know I'm years late, but I just came across the same problem and I didn't find any of the other answers particularly helpful. So, after spending a number of hours studying AngularJS interceptors, I'd like to share what I learned.
TL;DR
Interceptors are not intuitive, and there are a lot of "gotchas". The thread author and I got caught in a few of them. Problems like this can be fixed by better understanding the execution flow that happens behind the scenes. Most specific to this thread are Gotchas #3 and #6 near the end of this post.
Background
As you know, the $httpProvider has a property called "interceptors", which starts as an empty array and is where one or more interceptors can be stored. An interceptor is an object that has four optional methods: request, requestError, response, and responseError. The documentation says little about these methods, and what it does say is misleading and incomplete. It is not clear when these are called and in what order.
Explanation By Example
As mentioned in other comments/answers, the interceptor methods all end up linked together in a big promise chain. If you aren't familiar with promises, interceptors won't make any sense (and neither will the $http service). Even if you understand promises, interceptors are still a little weird.
Rather than trying to explain the execution flow, I'll show you an example. Let's say that I've added the following three interceptors to my $httpProvider.interceptors array.
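A minimal sketch of three such interceptors (the names are made up and each hook simply logs that it ran; app is assumed to be your module, as in the answer above):

['interceptor1', 'interceptor2', 'interceptor3'].forEach(function (name) {
    app.factory(name, ['$q', function ($q) {
        return {
            request:       function (config)    { console.log(name + '.request');       return config; },
            requestError:  function (rejection) { console.log(name + '.requestError');  return $q.reject(rejection); },
            response:      function (response)  { console.log(name + '.response');      return response; },
            responseError: function (rejection) { console.log(name + '.responseError'); return $q.reject(rejection); }
        };
    }]);
});

app.config(['$httpProvider', function ($httpProvider) {
    $httpProvider.interceptors.push('interceptor1', 'interceptor2', 'interceptor3');
}]);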
When I make a request via $http, the execution flow that happens behind the scenes links every one of these methods into a single chain. Each method either resolves, passing its return value down the chain to the next success handler, or rejects (which happens automatically if an error is thrown), passing the rejection down the chain to the next error handler.
Wow, that's super complicated! I won't go through it step by step, but I want to point out a few things that might leave a programmer scratching their head.
Notable Bug-Causing Weirdness ("Gotchas")
Gotcha #1: The first thing to note is that, contrary to popular belief, passing a bad config object to $http() will not trigger a requestError function; it won't trigger any of the interceptor methods. It will result in a plain old error and execution will stop.
Gotcha #2: There is no sideways movement in the flow; every resolve or reject moves execution down the chain. If an error occurs in one of the success handlers (request/response), the error handler (requestError/responseError) in the same interceptor is not called; the one on the next level down is called, if it exists. This leads to Gotcha #3...
Gotcha #3: If you define a requestError method in the first interceptor, it will never be called. Unless I'm reading the AngularJS library code incorrectly, such a function is completely unreachable in the execution flow (see the sketch after these gotchas). This was what caused me so much confusion, and it seems it may have been part of the problem in the original question as well.
Gotcha #4: If the request or requestError method of the last interceptor rejects, the request will not be sent. Only if they resolve will Angular actually attempt to send the request.
Gotcha #5: If the request fails to send OR the response status is not 2xx, the chain rejects and triggers the first responseError. That means your first responseError method has to be able to handle two different kinds of input: if the "send" step failed, the input will be an error; but if the response was, say, a 401, the input will be a response object.
Gotcha #6: There is no way to break out of the chain once it starts. This also seemed to be part of the problem in the original question. When the last requestError rejects, it skips sending the request, but then the first responseError is immediately called. Execution doesn't stop until the chain is complete, even if something fails early on.
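To see why Gotchas #3 and #6 happen, it helps to look at how $http wires the chain. The following is a simplified paraphrase of the AngularJS 1.x source, not the exact library code; serverRequest, reversedInterceptors, config and $q all belong to $http's internals:

// Simplified paraphrase of $http's interceptor chaining (AngularJS 1.x).
var chain = [serverRequest, undefined];   // the actual XHR call sits in the middle
var promise = $q.when(config);            // an already-resolved promise

// Request hooks are unshifted in reverse registration order, so the first
// registered interceptor's request/requestError pair ends up first in the
// chain, while response hooks are pushed after the server request.
angular.forEach(reversedInterceptors, function (interceptor) {
    if (interceptor.request || interceptor.requestError) {
        chain.unshift(interceptor.request, interceptor.requestError);
    }
    if (interceptor.response || interceptor.responseError) {
        chain.push(interceptor.response, interceptor.responseError);
    }
});

// The entire chain is wired up front; each pair becomes one .then() link.
while (chain.length) {
    var thenFn = chain.shift();
    var rejectFn = chain.shift();
    promise = promise.then(thenFn, rejectFn);
}

The first interceptor's requestError is the rejection handler of the very first .then() link, and the only thing before that link is $q.when(config), which never rejects; that is why it is unreachable (Gotcha #3). And because every link is attached before anything runs, a rejection anywhere simply flows down to the next rejection handler until the chain is exhausted (Gotcha #6).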
Conclusion
I assume the author of this thread resolved (no pun intended) their problem long ago, but I hope this helps someone else down the line.
You are triggering responseError because all of your examples produce errors in their responses. You can get a request error by trying to send invalid JSON in your request or by improperly formatting your request.
Related
I have a cloud function in Firebase that, among a chain of promise invocations, ends with a call to this function:
function sendEmail() {
    return new Promise((accept) => {
        const Email = require('email-templates');
        const email = new Email({...});
        email.send({...}).then(() => {
            console.log('Email sent');
        }).catch((e) => {
            console.error(e);
        });
        accept();
    });
}
I am well aware that email.send() returns a promise. There is a problem, however, with simply returning that promise; that is, if I were to change the function to this:
function sendEmail() {
    const Email = require('email-templates');
    const email = new Email({...});
    return email.send({...});
}
It usually results in the UI hanging for a significant amount of time (10+ seconds), because the time it takes for the promise to resolve equals the time it takes for the email to send.
That's why I figured the first approach would be better: just fire off email.send() asynchronously, let it send the email eventually, and return a response to the client whether or not the email has finished its round trip.
The first approach is giving me problems, though. The cloud function finishes execution much faster, and thus ends up being a better experience for the user; however, the email doesn't get sent for another 15+ minutes.
I am considering another approach where we have a separate cloud function hook that handles the email sending, but I wanted to ask StackOverflow first.
I think there are two aspects being mixed here.
One side of the question deals with promises in the context of Cloud Functions. Promises in Cloud Functions need to be resolved before you call res.send(), because right after this call the function will be shut down and there's no guarantee that unresolved promises will complete before the function instance is terminated; see this question. Alternatively, you might never call res.send() and instead return the result of a promise, as shown in the Firebase documentation; the key here is to ensure the promise is resolved properly, for example using an idiom like return myPromise().then(console.log);, which will force the promise resolution.
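As a minimal illustration of that first point (a hypothetical onRequest function; the export name and response shape are made up, and sendEmail is the function from the question):

const functions = require('firebase-functions');

exports.submit = functions.https.onRequest((req, res) => {
    // res.send() only runs once sendEmail() has settled, so all of the work
    // completes before the function is allowed to terminate.
    return sendEmail()
        .then(() => res.send({ saved: true }))
        .catch((err) => {
            console.error(err);
            res.status(500).send({ saved: false });
        });
});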
Separately, as Bergi pointed out in the comments, the first snippet uses an anti-pattern with promises, and the second one is far more concise and clear. If you're experiencing a delay in the UI, it's likely that execution is frozen waiting for the function's response, and you might consider whether this could be avoided in your particular use case.
All that said, your last idea of creating a separate function to deal with the email send process would also likely reduce the response time, and could even make more sense from a separation-of-concerns point of view. To go this route I would suggest sending a Pub/Sub message from the main function so that a second one sends the email. Moreover, Pub/Sub-triggered functions let you configure retry policies, which may be useful for ensuring the mail gets sent even when occasional errors occur. This approach is also suggested in the question linked above.
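A rough sketch of that split, assuming the @google-cloud/pubsub client (a version that has publishMessage) and the firebase-functions v1 API; the topic name send-email, the export names, and the payload shape are all made up:

const functions = require('firebase-functions');
const { PubSub } = require('@google-cloud/pubsub');
const pubsub = new PubSub();

// Main function: respond quickly and hand the slow email work off to Pub/Sub.
exports.submitAnswer = functions.https.onRequest((req, res) => {
    return pubsub
        .topic('send-email')
        .publishMessage({ json: { to: req.body.email } })
        .then(() => res.send({ saved: true }));
});

// Second function: triggered by the Pub/Sub message, actually sends the email.
exports.sendEmailWorker = functions.pubsub.topic('send-email').onPublish((message) => {
    const Email = require('email-templates');
    const email = new Email({ /* ... */ });
    // Returning the promise keeps this function alive until the email is sent,
    // and lets a retry policy kick in if it rejects.
    return email.send({ /* ... */ });
});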
I'm building a blog site w/Express, and using Q for the first time, and I was hoping to tap into the knowledge of veteran Q users.
I'm making one request to my DB to load post data, and another request that hits the Instagram API (unless it's already cached) and returns some json. So I have something like:
Q.all([blogPromise, instagramPromise]).then(good, bad);
The issue I'm running into is this: say my request fails in my instagramPromise and I call deferred.reject(); my bad function is then called. However, I still want to load the page with the blog post data if my blogPromise resolves, but it seems I don't get any arguments when my bad function is called (e.g. I don't get the blogPromise data that was successfully fetched).
Given this, it seems my only option is to not call deferred.reject() when I have an error, and instead call deferred.resolve() with something like deferred.resolve({error: true}) which I can then use in my good function to handle what gets passed to my view.
So my question is, does this sound right? Is this not a misuse of Q using resolve when in fact I'm running into an error and should be using reject? Or am I missing something with Q that would allow a better approach?
If you want your promise to resolve once both blogPromise and instagramPromise have settled (either resolved or rejected), you need to use the allSettled method. Here is an example from the documentation:
Q.allSettled([blogPromise, instagramPromise])
    .then(function (results) {
        var loaded = results.filter(function (result) {
            return result.state === "fulfilled";
        });
        good(loaded);
    });
Inside allSettled's then callback you can filter out the successfully loaded results and pass them to the good function, or handle the failed results somehow with the bad one.
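Note that each entry in results is a wrapper object with state and value (or reason) properties, so if good expects the raw data you would unwrap it first; a small sketch extending the snippet above:

Q.allSettled([blogPromise, instagramPromise]).then(function (results) {
    var loadedValues = results
        .filter(function (result) { return result.state === "fulfilled"; })
        .map(function (result) { return result.value; });   // unwrap the data

    results
        .filter(function (result) { return result.state === "rejected"; })
        .forEach(function (result) {
            console.error(result.reason);                    // e.g. the Instagram error
        });

    good(loadedValues);
});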
Something like this perhaps?
Q.all([
    blogPromise,
    instagramPromise.catch(function () { return { error: true }; })
]).then(good, bad);
It's similar to the approach you mention, with the difference that the error suppression is done in the place where it's used, rather than in the place where the error originates.
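With this, good always receives both results; a hypothetical good that checks the fallback flag might look like this (renderPage is a made-up helper):

function good(results) {
    var blogData = results[0];
    var instagram = results[1];

    if (instagram && instagram.error) {
        // Instagram failed: render the page with the blog post data only.
        renderPage(blogData, null);
    } else {
        renderPage(blogData, instagram);
    }
}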
In my controller, I use a method from a factory to update some data. For example, I'm trying to fetch an updated array of users. Should I be returning the promise itself from the factory? Or should I be returning the data from the promise (not sure if I phrased that correctly)?
I ask because I've seen it both ways, and some people say that the controller shouldn't have to handle whether the request was successful or if it failed. I'm specifically talking about the promise returned from $http in this case.
Or maybe in other words, should I be using the then() method inside the factory, or should I be using it in the controller after returning from the factory?
I've tried to handle the success and error callbacks (using the then() method) from within the service, but when I return the data to the controller, the users array is not properly updated. I'm assuming that's because the request is async. So in the controller, it would look something like this:
vm.users = userFactory.getUsers();
If I handle the promise within the controller, and set the users array inside the then() method, it works fine. But this goes back to the question of where I should be using then():
userFactory.getUsers().then(
    function (data) {
        vm.users = data;
    }, ...
Hopefully someone would be able to shed some light on this or provide some input. Thanks!
There's no way you can return the data from the factory (since it's an async call) without using either a callback approach (discouraged):
userFactory.prototype.getUsers = function (callback) {
    $http.get('users').then(function (response) {
        callback(response.data);
    });
};
Or the promise approach.
If you're worried about handling the errors on the controller, then worry not! You can handle errors on the service:
userFactory.prototype.getUsers = function () {
    return $http.get('users').then(function (response) {
        return response.data;
    }, function (error) {
        // Handle your error here
        return [];
    });
};
You can return the result of then() and it will be chained, so the service's handlers execute first and the controller's handlers run afterwards.
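In the controller, that chaining might look like this (a small sketch following the question's vm/userFactory usage):

userFactory.getUsers().then(function (users) {
    // users is already response.data (or [] on error), because the service's
    // then() handlers ran first and their return values were passed down the chain.
    vm.users = users;
});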
I have no problem with the controller deciding what to do based on the response failing or succeeding. In fact it lets you easily handle different cases and doesn't add a lot of overhead to the controller (the controller should be as small as possible and focused on the current task, but for me, choosing a different path when the request fails is part of that task).
Anyway, in Angular, HTTP requests are wrapped in promises internally (I remember that in previous versions there was a way to automatically unwrap them), so returning from a service/factory always returns a promise, which has to be resolved.
I prefer returning a promise from a service/factory because I tend to let other classes decide what to do with the response.
WebDriverJS and Protractor itself are entirely based on the concept of promises:
WebDriverJS (and thus, Protractor) APIs are entirely asynchronous. All functions return promises.
WebDriverJS maintains a queue of pending promises, called the control flow, to keep execution organized.
And, according to the definition:
A promise is an object that represents a value, or the eventual computation of a value. Every promise starts in a pending state and may either be successfully resolved with a value or it may be rejected to designate an error.
The last part about the promise rejection is something I don't entirely understand and haven't dealt with in Protractor. A common pattern we've seen and written is using then() and providing a function for a successfully resolved promise:
element(by.css("#myid")).getAttribute("value").then(function (value) {
    // do smth with the value
});
The Question:
Is it possible that a promise returned by any of the Protractor/WebDriverJS functions would not be successfully resolved and would be rejected? Should we actually worry about it and handle it?
I've experienced a use-case of promise rejection while using browser.wait(). Here is an example:
var EC = protractor.ExpectedConditions;

function isElementVisible() {
    var el = element(by.css('#myel'));

    // return promise
    return browser.wait(EC.visibilityOf(el), 1000)
        .then(function success() {
            return true;   // return if promise resolved
        }, function fail() {
            return false;  // return if promise rejected
        });
}
expect(isElementVisible()).toBe(true);
expect(isElementVisible()).toBe(false);
Here, if the element is on the page, success will be executed; otherwise, if it is not found after 1 second passes, fail will be called. My first point is that providing a callback for the rejection lets you stay consistent with what you expect: in this case I'm fairly sure the promise will always resolve to true or false, so I can build a suite relying on it.
If I do not provide a fail callback, I'll get an uncaught exception because of the timeout, which will still fail this particular spec and still run the rest of the specs. It won't actually be uncaught, by the way; Protractor is going to catch it. But this brings me to a second point: Protractor is the tool you use to write and run your code, and if an exception is caught by Protractor, then that exception has left your code unhandled and your code has a leak.
At the same time, I don't think one should waste time trying to catch everything in tests: if there is no element on a page or a click has failed, then the respective spec will obviously fail too, which is fine in most cases, unless you want to use the result of the failure to build some code on top of it, as in my sample.
That is the great thing about promises: you are going to get a response, either data or an error message. Extended to a series of promises, as WebDriver uses them, you are going to get an array of responses, or the failure response of the first one that fails. How you handle the failed response is up to you; I usually just dump it into a log for the console to see what failed. The only thing you need to figure out is whether you abort the rest of your tests or continue on.
Here is a decent article on the subject as well. http://www.toolsqa.com/selenium-webdriver/exception-handling-selenium-webdriver/
FYI, what you are doing is fine; you are just never bothering to catch any of the errors. I am not sure if that matters to you or not, but you could also abstract the call in a function to handle the errors for you automatically if you wanted to log them somewhere.
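A small sketch of that idea (the withLogging helper is made up, not part of Protractor):

// Wrap any WebDriverJS promise so rejections get logged before failing the spec.
function withLogging(promise, label) {
    return promise.then(null, function (err) {
        console.error(label + ' failed:', err);
        throw err;   // re-throw so the spec still fails as it normally would
    });
}

withLogging(element(by.css('#myid')).getAttribute('value'), 'read value')
    .then(function (value) {
        // do something with the value
    });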
I concede that, despite hours of reading and attempting, I am fundamentally unable to grasp something about Deferred promises and asynchrony in general.
The goal on my end is real, real simple: send some data to the server, and react to the contents of the response conditionally.
The response will always be a JSON object with save and error keys:
{ "save": true, "error":false}
// or
{ "save" : false,
"error" : "The server has run off again; authorities have been notifed."}
I have tried dozens and dozens of variations from the jQuery API, from other StackExchange answers, from tutorials, etc. The examples all seem concerned with local asynchronous activity. What I need is some way to be made aware when the AJAX request has either finished and returned a response I can inspect and make decisions about, or else to know that it has failed. Below, I've used comments to explain what I think is happening, so someone can show me where I'm failing.
I know this is a repost; I am, apparently, worse than average at grasping this.
var postData = {"id": 7, "answer": "Ever since I went to Disneyland..."};

/* when(), as I understand it, should fire an event to be
   responded to by then() when it's contents have run their course */
var result = $.when(
    /* here I believe I'm supposed to assert what must complete
       before the when() event has fired and before any chained
       functions are subsequently called */

    /* this should return a jqXHR object to then(), which is,
       I'd thought, a queue of functions to call, in order,
       UPON COMPLETION of the asynchronous bit */
    $.post("my/restful/url", postData))
    .then(function () {
        /* since "this" is the jqXHR object generated in the $.post()
           call above, and since it's supposed to be completed by now,
           it's data key should be populated by the server's response—right? */
        return this.data;
    });

// alas, it isn't
console.log(result.data);
// >> undefined
Most examples I can find discuss a timeout function; but this seems, as I understand, to be a failsafe put in place to arbitrarily decide when the asynchronous part is said to have failed, rather than a means of stalling for time so the request can complete. Indeed, if all we can do is just wait it out, how's that any different from a synchronous request?
I'll even take links to new read-mes, tutorials, etc. if they cover the material in a different way, use something other than modified examples from the jQuery API, or otherwise help this drooling idiot through the asynchronous murk; here's where I've been reading to date:
jQuery API: Deferred
JQuery Fundamentals
jQuery Deferreds promises asynchronous bliss (blog)
StackOverflow: timeout for function (jQuery)
Update
This is in response to @Kevin B below:
I tried this:
var moduleA = {
    moduleB: {
        postData: {"id": 7, "answer": "Ever since I went to Disneyland..."},
        save: function () {
            return $.post("path/to/service", this.postData, null, "JSON");
        }
    },

    init: function () {
        var result = this.moduleB.save();
        result.done(function (resp) {
            if (resp.saved == true) {
                // never reached before completion
                console.log("yahoo");
            } else {
                console.log("Error: " + resp.error);
                // >> undefined
            }
        });
    }
};
You are over-complicating your code. You cannot get the data outside of the callback, no matter how many deferreds/promises you create or use (your sample creates three different deferred objects!).
Use the done callback.
var postData = {"id": 7, "answer": "Ever since I went to Disneyland..."};

$.post("my/restful/url", postData).done(function (result) {
    console.log(result.save, result.error);
});
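If you also want to react when the request itself fails (network error, non-2xx status, and so on), you can chain a fail handler as well; a small sketch:

$.post("my/restful/url", postData)
    .done(function (result) {
        console.log(result.save, result.error);
    })
    .fail(function (jqXHR, textStatus) {
        // the request never produced a usable response
        console.log("request failed:", textStatus);
    });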
You seem to have a misunderstanding of asynchronous requests, the Promise pattern, and JavaScript's mechanism of passing functions as arguments.
To understand what's really happening in your code, I suggest you use a debugger and set some breakpoints, or, alternatively, add some console.logs to your code. This way you can see the flow of the program and might understand it better. Also be sure to log the arguments of the function you pass to the then() method, so you understand what is passed.
OK, you got it half right. The problem is that when you execute the console.log, the promise is not yet fulfilled; the asynchronous nature of promises allows the following code to execute before the AJAX operation is done. Also, result is a deferred, not a value; you need to handle your promise with .done instead of .then if you wish to work with the value, otherwise you'll keep passing promises around.
So, that said:
var result = {};

$.when(
    $.post("my/restful/url", postData)
).done(function (data) {
    result.data = data;
});

// here result is an object but result.data is undefined, since the promise has not yet been resolved
console.log(result.data);