I'm working on a web application using Vue 2 and axios. After a certain event, I want to cancel every request I've made up to that point. For this, I'm using an AbortController declared in my store:
Example of a request:
await axios.post('...', this.dataToSend, {signal: this.controller.signal});
In my store, I have an abort action:
const state = {
  controller: new AbortController()
}

const actions = {
  abort ({state}) {
    state.controller.abort();
  }
}
I have multiple requests sharing the same controller signal. Is there a way to see, directly from my controller object in the abort method, all the requests that have been aborted?
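One way to get that visibility is to keep the URLs of in-flight requests next to the controller and have the abort action read them. A sketch, assuming a plain module-level store (the track/abortAll names are invented for illustration, not axios or Vuex API):

```javascript
// Sketch (store shape assumed): record the URL of each in-flight request
// next to the controller, so aborting can report exactly what it cancelled.
const state = {
  controller: new AbortController(),
  pending: new Set()
};

// Call when a request starts; call the returned cleanup when it settles, e.g.
// axios.post(url, data, { signal: state.controller.signal }).finally(done)
function track(url) {
  state.pending.add(url);
  return () => state.pending.delete(url);
}

function abortAll() {
  const aborted = [...state.pending];  // everything still in flight
  state.controller.abort();
  state.pending.clear();
  // An aborted signal stays aborted forever, so hand out a fresh
  // controller for any requests made after this point.
  state.controller = new AbortController();
  return aborted;
}
```

Note that replacing the controller after aborting is important: requests started later with the old signal would be rejected immediately.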
Basically I am working on an online/offline app. I developed a custom hook to detect whether the user has a connection. For this I am sending a random fetch request. However, the service worker intercepts the request and responds with a 200 even though the user is clearly offline. My question is: can I ignore a specific endpoint in the service worker?
const checkOnline = async () => {
  try {
    const response = await fetch('/test.test');
    setOnline(response.ok);
    console.log('response.source', response.url)
  } catch {
    setOnline(false);
  }
};
You can detect the connection status via window.navigator.onLine; there is no need to make a fake request.
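A sketch of that approach using the browser's online/offline events. The event target is injected here so the helper is easy to test; in the browser you would pass window and read the initial flag from navigator.onLine (the watchOnline name is invented, and note that onLine being true only means the device has a network interface, not that the internet is actually reachable):

```javascript
// Sketch: watch connectivity via the online/offline events instead of a
// probe request. `target` is injected for testability; in the browser it
// would be window, with the initial flag read from navigator.onLine.
function watchOnline(target, initialOnLine, onChange) {
  onChange(initialOnLine);
  target.addEventListener('online', () => onChange(true));
  target.addEventListener('offline', () => onChange(false));
}

// Browser usage (setOnline as in the question):
// watchOnline(window, navigator.onLine, setOnline);
```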
I'm cancelling axios requests when the component unmounts as follows:
const CancelToken = axios.CancelToken
const source = CancelToken.source()

componentWillUnmount () {
  source.cancel('message')
}
When the component is unmounted, the network requests happening in the background are flagged as 'stalled'. The problem is that when the user navigates to that component again, the network requests that are supposed to happen in componentDidMount do not start.
axios#cancellation describes two ways to use the cancelToken. You used the first way, with source.token/source.cancel. I started out that way and had a similar problem to yours: once a request to a particular URL was canceled, I could never get a successful response from that URL again. I switched to the second method, using an executor function, and the problem went away. I guess I was sharing the same cancelToken across multiple requests; with the executor-function method each request gets its own token. Anyway, maybe that would work for you too.
After canceling, create a new token.
let uploadCancelTokenSource = axios.CancelToken.source();

upload() {
  const config = {
    cancelToken: uploadCancelTokenSource.token,
  };
  // ...
}

cancel() {
  uploadCancelTokenSource.cancel('Operation canceled by the user');
  uploadCancelTokenSource = axios.CancelToken.source();
}
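The "one token per attempt" mechanics can be shown with a tiny stand-in for axios.CancelToken.source() (the real class does more; this is only to make the point that cancelling the old token must not affect a token created afterwards, so the names source/cancelUpload here are illustrative):

```javascript
// Minimal stand-in for axios.CancelToken.source(), for illustration only:
// each call produces an independent token plus the function that cancels it.
function source() {
  const token = { reason: null };
  return {
    token,
    cancel(reason) { token.reason = reason || 'cancelled'; }
  };
}

let uploadSource = source();

function cancelUpload() {
  uploadSource.cancel('Operation canceled by the user');
  uploadSource = source();  // fresh token for the next upload attempt
}
```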
I've written a simple service worker to defer requests that fail for my JS application (following this example), and it works well.
But I still have a problem when requests succeed: the requests are made twice. Once normally, and once by the service worker, presumably because of the fetch() call.
It's a real problem, because when the client wants to save data, it is saved twice...
Here is the code:
const queue = new workbox.backgroundSync.Queue('deferredRequestsQueue');

const requestsToDefer = [
  { urlPattern: /\/sf\/observation$/, method: 'POST' }
]

function isRequestAllowedToBeDeferred (request) {
  for (let i = 0; i < requestsToDefer.length; i++) {
    if (request.method && request.method.toLowerCase() === requestsToDefer[i].method.toLowerCase()
        && requestsToDefer[i].urlPattern.test(request.url)) {
      return true
    }
  }
  return false
}

self.addEventListener('fetch', (event) => {
  if (isRequestAllowedToBeDeferred(event.request)) {
    const requestClone = event.request.clone()
    const promiseChain = fetch(requestClone)
      .catch((err) => {
        console.log(`Request added to queue: ${event.request.url}`)
        queue.addRequest(event.request)
        event.respondWith(new Response({ deferred: true, request: requestClone }))
      })
    event.waitUntil(promiseChain)
  }
})
How can I do this properly?
EDIT:
I think I shouldn't re-fetch() the request (because THAT is the cause of the second request) but instead wait for the response of the initial request that triggered the fetchEvent, though I have no idea how to do that. The fetchEvent doesn't seem to offer a way to wait for (and read) the response.
Am I on the right track? How can I tell when the request that triggered the fetchEvent has received a response?
You're calling event.respondWith(...) asynchronously, inside of promiseChain.
You need to call event.respondWith() synchronously, during the initial execution of the fetch event handler. That's the "signal" to the service worker that it's your fetch handler, and not another registered fetch handler (or the browser default) that will provide the response to the incoming request.
(While you're calling event.waitUntil(promiseChain) synchronously during the initial execution, that doesn't actually do anything with regards to responding to the request—it just ensures that the service worker isn't automatically killed while promiseChain is executing.)
Taking a step back, I think you might have better luck accomplishing what you're trying to do if you use the workbox.backgroundSync.Plugin along with workbox.routing.registerRoute(), following the example from the docs:
workbox.routing.registerRoute(
  /\/sf\/observation$/,
  workbox.strategies.networkOnly({
    plugins: [new workbox.backgroundSync.Plugin('deferredRequestsQueue')]
  }),
  'POST'
);
That will tell Workbox to intercept any POST requests that match your RegExp, attempt to make those requests using the network, and if it fails, to automatically queue up and retry them via the Background Sync API.
Piggybacking on Jeff Posnick's answer: you need to call event.respondWith() and include the fetch() call inside its async function().
For example:
self.addEventListener('fetch', function(event) {
  if (isRequestAllowedToBeDeferred(event.request)) {
    event.respondWith(async function() {
      const promiseChain = fetch(event.request.clone())
        .catch(function(err) {
          return queue.addRequest(event.request);
        });
      event.waitUntil(promiseChain);
      return promiseChain;
    }());
  }
});
This will avoid the issue you're having with the second ajax call.
I'm trying to improve my app performance on mobile devices with laggy networks.
The first step was adding a "global" timeout for all HTTP requests; I used a simple request interceptor for that: request(config) { return angular.extend({ timeout: 30000 }, config) }. It worked fine: instead of waiting indefinitely for a response, I could display a warning about the laggy network (in the responseError interceptor); moreover, the slow request is canceled, so I can expect it to free up some bandwidth for the other requests.
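The merge used by that interceptor, pulled out as a pure function so its semantics are clear (withDefaultTimeout is my name, not from the original; the behaviour matches angular.extend({ timeout: 30000 }, config)):

```javascript
// The 30 s default applies only when the caller didn't set a timeout of
// their own: properties of `config` override those of the defaults object.
const DEFAULT_TIMEOUT = 30000;

function withDefaultTimeout(config) {
  return Object.assign({ timeout: DEFAULT_TIMEOUT }, config);
}

// $http request interceptor shape:
// { request: withDefaultTimeout }
```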
Now I'm trying to implement another optimization: cancelling pending HTTP requests before a uiRouter state change, since there's no reason to load resources for a state that is no longer going to be displayed. For instance: a user navigates to state A, A's resolvables are pending, and the bored user changes his mind and navigates to state B.
My current implementation is based on the $q and $timeout services plus a custom service that collects the timeouts for all HTTP requests and cancels them in a batch when necessary. Enough words, here's the code:
const REQUEST_TIMEOUT = 30000; // 30 secs

function httpRequestsCancellerService($timeout, $transitions, $q) {
  const cancelers = [];

  function cancelAll() {
    while (cancelers.length > 0) {
      const canceler = cancelers.pop();
      canceler.resolve();
    }
  }

  // Cancel all pending http requests before the next transition
  $transitions.onStart({}, cancelAll);

  return {
    createTimeout() {
      const canceler = $q.defer();
      // TODO it will keep running even when the request is completed or failed
      $timeout(canceler.resolve, REQUEST_TIMEOUT);
      cancelers.push(canceler);
      return canceler.promise;
    }
  };
}
function timeoutInterceptor(httpRequestsCanceller) {
  return {
    request(config) {
      const timeout = httpRequestsCanceller.createTimeout();
      return angular.extend({ timeout }, config);
    }
  };
}

module.exports = function($httpProvider, $provide) {
  'ngInject';

  $provide.service('httpRequestsCanceller', httpRequestsCancellerService);
  $httpProvider.interceptors.push(timeoutInterceptor);
};
It works perfectly now, but it has a small drawback: the $timeout started in the request interceptor will keep running, and it will eventually resolve the canceller even if the request has already completed or failed.
The question is: should I care about these pending $timeouts? Is it necessary to $timeout.cancel them in order to free resources or avoid strange side effects?
My use case is:
1. The user requests an asset from our API, which fails because the JWT (passed as an httpOnly cookie) has expired; the API returns a 401 status code.
2. We authenticate them again (without the user doing anything), using a refresh_token to retrieve a new JWT with a request from our client to auth0.
3. We send that new JWT to our API to be set as an httpOnly cookie, replacing the expired one.
4. We then want to retry the original request the user made to the API in step 1.
I'm trying to use Observables within my Redux app with redux-observable. If you can think of another way of making the above user flow work, I'd be happy to hear it.
NB: I'm using RxJS v5.
export const fetchAssetListEpic = (action$, store) => {
  return action$.ofType('FETCH_ASSET_LIST')
    .switchMap(action => {
      const options = {
        crossDomain: true,
        withCredentials: true,
        url: uriGenerator('assetList', action.payload)
      };
      return ajax(options);
    })
    .map(fetchAssetListSuccess)
    .retryWhen(handleError)
    .catch(redirectToSignIn);
};

function handleError(err) {
  return (err.status === 401) ?
    /* Authenticate here [Step 2] */
    /* Send new JWT to API [Step 3] */
    /* If successful make original request again [Step 4] */
    :
    Observable.throw(err);
}

function redirectToSignIn() {
  /* I will redirect here */
}
So far I've been able to complete steps 1, 2 and 3, but I'm not sure how to add step 4. I may be completely off the mark, but any help would be great!
Well, one thing you probably won't want to do is allow the error to reach the top-level stream. Even if you do a catch, you have effectively killed the top-level stream. So unless your redirect does a hard redirect rather than a soft one via something like react-router, you won't be able to use this epic any more.
Thus I would say that you want most of the logic to be encapsulated within the switchMap:
function withAuthorizedFlow(source) {
  return source
    .map(fetchAssetListSuccess)
    // retryWhen takes a callback which accepts an Observable of errors;
    // emitting a next causes a retry, while an error or complete will
    // stop retrying
    .retryWhen(e => e.flatMap(err =>
      Observable.if(
        // Returns the first stream if true, second if false
        () => err.status === 401,
        reauthenticate,        // A stream that will emit once authenticated
        Observable.throw(err)  // Rethrow the error
      ))
    )
    .catch(redirectToSignIn);
}
/** Within the epic **/
.switchMap(({payload}) => {
  const options = {
    crossDomain: true,
    withCredentials: true,
    url: uriGenerator('assetList', payload)
  };
  // Invoke the ajax request
  return ajax(options)
    // Attach a custom pipeline here.
    // Not strictly necessary but it keeps this method clean looking.
    .let(withAuthorizedFlow);
})
The use of let above is completely optional; I threw it in to clean up the function. Essentially, though, you want to contain the error within the inner stream so that it can't halt the outer one. I'm not sure which ajax library you're using, but you should also confirm that it does in fact return a cold Observable; otherwise you'll need to wrap it in a defer block in order for the retryWhen to work.
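The cold-Observable caveat, stripped of RxJS for clarity: a retry mechanism can only work if each attempt re-invokes the underlying request, which is exactly what wrapping the call in defer guarantees (every subscription re-runs the factory). A plain-function sketch of that idea; retrySync and the names here are illustrative, not part of the answer's code:

```javascript
// Retry only works when each attempt is a fresh invocation: `makeAttempt`
// is a factory (like the function given to Observable.defer), so every
// retry re-runs the request instead of replaying a stale result.
function retrySync(makeAttempt, retries) {
  try {
    return makeAttempt();
  } catch (err) {
    if (retries > 0) return retrySync(makeAttempt, retries - 1);
    throw err;
  }
}
```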