What's the best way to write this short polling routine using RxJS?
1. Call this.dataService.getRowsByAccountId(id), which returns an Observable<Row[]> from the back-end.
2. Pass the received data to this.refreshGrid(data).
3. If any of the items in the data meets the criterion r.stillProcessing === true,
4. then wait 2 seconds and start again from step 1.
5. If another call is made to this routine while a timer is already pending, don't schedule another one, because I don't want multiple timers running.
I think the best solution would be to use retryWhen.
I don't know if this will work out of the box with your code, but based on your comment, try tweaking this:
this.dataService.getRowsByAccountId(id)
  .pipe(
    tap((data: Row[]) => this.refreshGrid(data)),
    map((data: Row[]) => {
      if (data.some((r: Row) => r.stillProcessing)) {
        // the error will be picked up by retryWhen
        throw new Error('Some rows are still processing');
      }
      return data;
    }),
    retryWhen(errors =>
      errors.pipe(
        // log the error message
        tap((err: Error) => console.log(`${err.message} - retrying in 2 seconds`)),
        // resubscribe (i.e. call the service again) after 2 seconds
        delayWhen(() => timer(2000))
      )
    )
  )
  .subscribe();
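The snippet above covers steps 1-4 but not step 5 (no overlapping timers). One way to handle that is to keep a reference to the in-flight subscription and bail out while it is still open. This is only a sketch, assuming the chain lives in a component method; pollSub and startPolling are made-up names, and the id type is assumed:

// Assumed imports at the top of the file:
// import { Subscription, timer } from 'rxjs';
// import { tap, map, retryWhen, delayWhen } from 'rxjs/operators';

// Inside the component class:
private pollSub?: Subscription;

startPolling(id: number): void {
  // Step 5: if a previous poll (including its 2-second retry timer) is still
  // active, don't schedule another one.
  if (this.pollSub && !this.pollSub.closed) {
    return;
  }
  this.pollSub = this.dataService.getRowsByAccountId(id)
    .pipe(
      tap((data: Row[]) => this.refreshGrid(data)),
      map((data: Row[]) => {
        if (data.some((r: Row) => r.stillProcessing)) {
          throw new Error('Some rows are still processing');
        }
        return data;
      }),
      retryWhen(errors => errors.pipe(delayWhen(() => timer(2000))))
    )
    .subscribe();
}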
I have a piece of code that executes a long list of HTTP requests, and I've written the code in such a way that it always has 4 requests running in parallel. It just so happens that the server handles 4 parallel requests the fastest: with fewer, the code runs slower, and with more, the individual requests take longer to finish. Anyway, here is the code:
const itemsToRemove = items.filter(
// ...
)
const removeItem = (item: Item) => item && // first it checks if item isn't undefined
// Then it creates a DELETE request
itemsApi.remove(item).then(
// And then whenever a request finishes,
// it adds the next request to the queue.
// This ensures that there will always
// be 4 requests running parallel.
() => removeItem(itemsToRemove.shift())
)
// I start with a chunk of the first 4 items.
const firstChunk = itemsToRemove.splice(0, 4)
await Promise.allSettled(
firstChunk.map(removeItem)
)
Now the problem with this code is that if the list is very long (as in thousands of items), at some point the browser tab just crashes. Which is a little unhelpful, because I don't get to see a specific error message that tells me what went wrong.
But my guess is that this part of the code:
itemsApi.remove(item).then(
() => removeItem(itemsToRemove.shift())
)
may be causing a "Maximum call stack size exceeded" issue? Because in a way I'm constantly adding to the call stack, aren't I?
Do you think my guess is correct? And regardless of if your answer is yes or no, do you have an idea how I could achieve the same goal without crashing the browser tab? Can I refactor this code in a way that doesn't add to the call stack? (If I'm indeed doing that?)
The issue with your code is in
await Promise.allSettled(firstChunk.map(removeItem))
The argument passed to Promise.allSettled needs to be an array of Promises as per the documentation:
The Promise.allSettled() method returns a promise that fulfills after all of the given promises have either fulfilled or rejected, with an array of objects that each describes the outcome of each promise.
Your recursive function then runs all of the requests one after the other, eventually throwing a "Maximum call stack size exceeded" error and crashing your browser.
The solution I came up with (it could probably be shortened) is like so:
let items = []
while (items.length < 20) {
items = [...items, `item-${items.length + 1}`]
}
// A mockup of the API function that executes an asynchronous task and returns once it is resolved
async function itemsApi(item) {
await new Promise((resolve) => setTimeout(() => {resolve(item)}, 1000))
}
async function f(items) {
const itemsToRemove = items
// call the itemsApi and resolve the promise after the itemsApi function finishes
const removeItem = (item) => item &&
new Promise((resolve, reject) =>
itemsApi(item)
.then((res) => resolve(res))
.catch(e => reject(e))
)
// Recursive function that removes a chunk of 4 items after the previous chunk has been removed
function removeChunk(chunk) {
// exit once there are no more items left to process
if (chunk.length === 0) return
console.log(itemsToRemove)
// after the current 4 requests finish, keep making new chunks of 4 until the itemsToRemove array is empty
Promise.allSettled(chunk.map(removeItem))
.then(() => removeChunk(itemsToRemove.splice(0, 4)))
}
const firstChunk = itemsToRemove.splice(0, 4)
// initiate the recursive function
removeChunk(firstChunk)
}
f(items)
I hope this answers your question
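As an alternative that avoids recursion entirely, you can start a fixed number of async workers that each pull items off a shared queue in a plain loop. This is only a sketch under the same assumptions as the question (Item and itemsApi.remove come from there; removeAllItems and the worker helper are made-up names):

// Sketch: keep at most `concurrency` requests in flight using plain loops,
// so nothing recursive ever grows the call stack or the promise chain.
async function removeAllItems(itemsToRemove: Item[], concurrency = 4): Promise<void> {
  const queue = [...itemsToRemove];

  // Each worker repeatedly pulls the next item off the shared queue.
  const worker = async () => {
    while (queue.length > 0) {
      const item = queue.shift();
      if (!item) continue;
      try {
        await itemsApi.remove(item); // the DELETE request from the question
      } catch (err) {
        console.error('Failed to remove item', item, err);
      }
    }
  };

  // Start the workers and wait until all of them have drained the queue.
  await Promise.all(Array.from({ length: concurrency }, () => worker()));
}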
I'm using a concatMap to handle multiple requests to an API, where I want each request batch to be completed before the next batch is processed. The concatMap works as expected when triggering the flow with callSubject.next(requestData)
The problem: for certain types of requestData I want to cancel any in-flight HTTP calls and reset the concatMap. Cancelling the httpClient calls happening inside the getAll function is easy enough (I have a takeUntil that does that, not shown), but the concatMap may still have a number of queued-up requests that will then be processed.
Is there a way to reset the concatMap without completing the callSubject Subject?
Note: if I trigger unsubscribeCallSubject$.next(), this clears the concatMap, but it also completes the callSubject, which means it can no longer be used with callSubject.next(reqData).
// callSubject is a Subject which can be triggered multiple times
callSubject
.pipe(
concatMap((req) => {
// getAll makes multiple httpClient calls in sequence
return getAll(req).pipe(
catchError((err) => {
// prevent callSubject completing on http error
return of(err);
})
);
}),
takeUntil(unsubscribeCallSubject$)
)
.subscribe(
(v) => log("callSubject: next handler", v),
(e) => log("callSubject: error", e),
() => log("callSubject: complete")
);
If I understand the problem right, you could try an approach which uses switchMap any time unsubscribeCallSubject$ emits.
The code would look something like this
unsubscribeCallSubject$.pipe(
// use startWith to start the stream with something
startWith('anything'),
// switchMap any time unsubscribeCallSubject$ emits, which will unsubscribe
// any Observable within the following concatMap
switchMap(() => callSubject$),
// concatMap as in your example
concatMap((req) => {
// getAll makes multiple httpClient calls in sequence
return getAll(req).pipe(
catchError((err) => {
// prevent callSubject completing on http error
return of(err);
})
);
}),
)
.subscribe(
(v) => log("callSubject: next handler", v),
(e) => log("callSubject: error", e),
() => log("callSubject: complete")
);
To be honest I have not tested this approach and so I am not sure whether it solves your problem, but if I have understood your problem right, this could work.
Use case: call an endpoint every 3 minutes to update the status of a certain service across the application.
My current code:
interval(180000)
.subscribe(() => this.doRequest
.pipe(catchError(() => {
this.applicationFlag = false;
return EMPTY;
}))
.subscribe(result => this.applicationFlag = result));
My current problem is that sometimes the next interval fires and starts another request before the previous request has completed.
Is there a way to wait for the previous request, or to skip an interval tick while the previous request has not completed yet?
When you have one subscribe inside another subscribe there's no way the outer chain can be notified that the inner chain has completed. You have to restructure your chain and use operators such as concatMap, mergeMap or switchMap. For example like the following:
interval(180000)
  .pipe(
    concatMap(() => this.doRequest.pipe(
      catchError(() => {
        this.applicationFlag = false;
        return EMPTY;
      }),
    )),
  )
  .subscribe(result => this.applicationFlag = result);
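If you would rather skip a tick entirely while the previous request is still running (instead of queueing it, which is what concatMap does), exhaustMap is another option. This is an untested sketch against the same doRequest/applicationFlag code from the question:

import { interval, EMPTY } from 'rxjs';
import { catchError, exhaustMap } from 'rxjs/operators';

interval(180000)
  .pipe(
    // exhaustMap ignores interval ticks that arrive while the previous
    // request is still in flight, instead of queueing them
    exhaustMap(() => this.doRequest.pipe(
      catchError(() => {
        this.applicationFlag = false;
        return EMPTY;
      }),
    )),
  )
  .subscribe(result => this.applicationFlag = result);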
I have an array of objects. For each object I need to trigger an asynchronous request (an HTTP call). But I only want a certain maximum number of requests running at the same time. Also, it would be nice (but not necessary) if I could have a single synchronization point after all requests have finished, to execute some code.
I've tried suggestions from:
Limit number of requests at a time with RxJS
How to limit the concurrency of flatMap?
Fire async request in parallel but get result in order using rxjs
and many more... I even tried making my own operators.
Either the answers on those pages are too old to work with my code or I can't figure out how to put everything together so all types fit nicely.
This is what I have so far:
for (const obj of objects) {
this.myService.updateObject(obj).subscribe(value => {
this.anotherService.set(obj);
});
}
EDIT 1:
Ok, I think we're getting there! With the answers from Julius and pschild (both seem to work equally well), I managed to limit the number of requests. But now it only fires the first batch of 4 and never fires the rest. So now I have:
const concurrentRequests = 4;
from(objects)
.pipe(
mergeMap(obj => this.myService.updateObject(obj), concurrentRequests),
tap(result => this.anotherService.set(result))
).subscribe();
Am I doing something wrong with the subscribe()?
Btw: mergeMap with the resultSelector parameter is deprecated, so I used mergeMap without it.
Also, the obj from the mergeMap is not visible inside the tap, so I had to use tap's own parameter.
EDIT 2:
Make sure your observables complete! (It cost me a whole day)
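A hypothetical illustration of that note (this is an assumption about why only the first batch fired, not something stated in the question): if updateObject returns an observable that never completes, mergeMap never frees its concurrency slots, so only the first 4 requests ever run. Forcing each inner observable to complete, for example with take(1), frees the slots:

import { from } from 'rxjs';
import { mergeMap, take, tap, finalize } from 'rxjs/operators';

const concurrentRequests = 4;

from(objects)
  .pipe(
    // take(1) makes each inner observable complete after its first emission,
    // so mergeMap can move on to the next queued object
    mergeMap(obj => this.myService.updateObject(obj).pipe(take(1)), concurrentRequests),
    tap(result => this.anotherService.set(result)),
    finalize(() => console.log('all requests are done'))
  )
  .subscribe();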
You can use the third parameter of mergeMap to limit the number of concurrent inner subscriptions. Use finalize to execute something after all requests finished:
const concurrentRequests = 5;
from(objects)
  .pipe(
    mergeMap(obj => this.myService.updateObject(obj), concurrentRequests),
    tap(res => this.anotherService.set(res)),
    finalize(() => console.log('Sequence complete'))
  )
  .subscribe();
See the example on Stackblitz.
from(objects).pipe(
bufferCount(10),
concatMap(objs => forkJoin(objs.map(obj =>
this.myService.updateObject(obj).pipe(
tap(value => this.anotherService.set(obj))
)))),
finalize(() => console.log('all requests are done'))
).subscribe();
Code is not tested, but you get the idea. Let me know if there are any errors or if more explanation is needed.
I had the same issue once, when I tried to load multiple images from a server. I had to send HTTP requests one after another. I achieved the desired outcome using awaited promises. Here is the sample code:
async ngOnInit() {
for (const number of this.numbers) {
await new Promise(resolve => {
this.http.get(`https://jsonplaceholder.typicode.com/todos/${number}`).subscribe(
data => {
this.responses.push(data);
console.log(data);
resolve();
}
);
});
}
}
The main idea here is to resolve the promise once you get the response.
With this technique you can add custom logic to execute a method once all the requests have finished.
Here is the stackblitz. Open up the console to see it in action. :)
In the following code, I create a simple observable that produces one value and then completes. Then I share that observable, replaying the last item, and subscribe 3 times: the first right away, the second before the value is produced, and the third after the value is produced and the observable has completed.
let i = 0;
let obs$ = Rx.Observable.create(obs => {
console.log('Creating observable');
i++;
setTimeout(() => {
obs.onNext(i);
obs.onCompleted();
}, 2000);
}).shareReplay(1);
obs$.subscribe(
data => console.log(`s1: data = ${data}`),
() => {},
() => console.log('finish s1')
);
setTimeout( () => {
obs$.subscribe(
data => console.log(`s2: data = ${data}`),
() => {},
() => console.log('finish s2')
);
}, 1000);
setTimeout( () => {
obs$.subscribe(
data => console.log(`s3: data = ${data}`),
() => {},
() => console.log('finish s3')
);
}, 6000);
You can execute this on jsbin
This results in the following marble diagram
Actual
s1: -----1$
s2: \--1$
s3: \1$
But I would expect
Expected
s1: -----1$
s2: \--1$
s3: \----2$
I can understand why someone would want the first behaviour, but my reasoning is that, unlike this example where I'm returning a number, I could be returning an object with real unsubscribe behaviour, for example a database connection. If the marble diagram above represented a database connection whose dispose method calls db.close(), the third subscription would throw an exception, because it would receive a database handle that had already been released (when the second subscription finished, refCount dropped to 0 and the source was disposed).
Another odd thing about this example is that even though it resolves with the first value and completes just after, it subscribes to the source twice (as you can see from the duplicated "Creating observable" log).
I know this GitHub issue talks about this, but what I'm missing is:
How can I achieve (in both RxJS 4 and 5) a shared observable that replays the last item if the source observable hasn't completed, and that recreates the observable once it is done (refCount = 0)?
In RxJS 5 I think the share method solves the reconnecting part of my problem, but not the replay part.
In RxJS 4 I'm clueless.
If possible I would like to solve this using existing operators or subjects. My intuition tells me I would have to create a different Subject with such logic, but I'm not quite there yet.
A bit on shareReplay:
shareReplay keeps the same underlying ReplaySubject instance for the rest of the lifetime of the returned observable.
Once ReplaySubject completes, you can't put any more values into it, but it will still replay. So...
1. You subscribe to the observable the first time and the timeout starts. This increments i from 0 to 1.
2. You subscribe to the observable the second time and the timeout is already going.
3. The timeout callback fires and sends out onNext(i), then onCompleted().
4. The onCompleted() signal completes the ReplaySubject inside the shareReplay, meaning that from now on, that shared observable will simply replay the value it has (which is 1) and complete.
A bit on shared observables in general:
Another, separate issue is that since you shared the observable, it's only ever going to call the subscriber function one time. That means that i will only ever be incremented once. So even if you didn't call onCompleted() and kill your underlying ReplaySubject, you would still never increment it to 2.
This isn't RxJS 5
A quick way to tell is onNext vs next. You're currently using RxJS 4 in your example, but you've tagged this with RxJS 5, and you've cited an issue in RxJS 5. RxJS 5 is in beta and is a complete rewrite of RxJS 4. The API changes were made mostly to match the es-observable proposal, which is currently at stage 1.
Updated example
I've updated your example to give you your expected results
Basically, you want to use a shared version of the observable for the first two calls, and the original observable for the third one.
let i = 0;
let obs$ = Rx.Observable.create(obs => {
console.log('Creating observable');
i++;
setTimeout(() => {
obs.onNext(i);
obs.onCompleted();
}, 2000);
})
let shared$ = obs$.shareReplay(1);
shared$.subscribe(
data => console.log(`s1: data = ${data}`),
() => {},
() => console.log('finish s1')
);
setTimeout( () => {
shared$.subscribe(
data => console.log(`s2: data = ${data}`),
() => {},
() => console.log('finish s2')
);
}, 1000);
setTimeout( () => {
obs$.subscribe(
data => console.log(`s3: data = ${data}`),
() => {},
() => console.log('finish s3')
);
}, 6000);
Unrelated
Also, protip: be sure to return a cancellation semantic for your custom observable that calls clearTimeout.
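A minimal sketch of what that could look like for the snippet above (same RxJS 4 style, returning a dispose function from the subscriber):

let i = 0;
let obs$ = Rx.Observable.create(obs => {
  console.log('Creating observable');
  i++;
  const id = setTimeout(() => {
    obs.onNext(i);
    obs.onCompleted();
  }, 2000);
  // Dispose logic: if a subscriber unsubscribes before the timeout fires,
  // cancel it so the observable doesn't do work nobody is listening to.
  return () => clearTimeout(id);
});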