Caching observables causing problem with mergeMap - javascript

I have a caching method in a container:
get(): Observable<T[]> {
if (!this.get$) {
this.get$ = merge(
this.behaviorSubject.asObservable(),
this._config.get().pipe(shareReplay(1), tap(x => this.behaviorSubject.next(x))));
}
return this.get$;
}
This works fine with normal observables. However, it breaks when I cache the method below in a second container, myContainer2 (i.e. use the cached observable's result to create another cached observable), like:
// get is assigned to _config.get in the above function
const myContainer2 = new Container({get: () => myContainer1.get().pipe(mergeMap(res1 => getObs2(res1)))});
// please note, the end goal is to resolve the first observable on the first subscription
// and not when caching it in the above method (using cold observables)
myContainer2.get().subscribe(...) // getObs2 gets called
myContainer2.get().subscribe(...) // getObs2 gets called again
myContainer2.get().subscribe(...) // getObs2 gets called for a third time, and so on
Every time the second cache is subscribed to, getObs2 gets called again (it caches nothing).
I suspect my implementation of get is faulty, since I am merging a BehaviorSubject (which emits immediately on subscription), but I can't think of any other way to implement it (in order to use cold observables).
Please note that if I use a normal observable instead of myContainer1.get(), everything works as expected.
Do you know where the problem lies?

Using a declarative approach, you can handle caching as follows:
// Declare the Observable that retrieves the set of
// configuration data and shares it.
config$ = this._config.get().pipe(shareReplay(1));
When config$ is subscribed to, the code above will automatically fetch the configuration if it hasn't already been retrieved, or replay the already-retrieved configuration.
I'm not clear on what the BehaviorSubject code is for in your example. If it was there to hold the emitted config data, it's not necessary, as config$ will provide it.
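If the derived stream from the question needs the same caching behaviour, the pattern can be repeated on the second observable. A minimal sketch, assuming getObs2 returns an observable and that replaying its latest result to late subscribers is acceptable:
// Declare the derived Observable; nothing runs until the first subscription.
config2$ = this.config$.pipe(
  mergeMap(res1 => getObs2(res1)), // runs when config2$ is first subscribed
  shareReplay(1)                   // later subscribers replay the cached result instead of re-running getObs2
);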

Related

Use effect wait for Data from previous page

I'm kinda new to the React framework.
As per my requirement, I want to wait until the data arrives and binds to my constants in my useEffect() method.
The data is sent encrypted from the main page and is decrypted as follows :
useEffect(() => {
const DecryptedGroupID = atob(groupID.toString());
const DecryptedFactoryID = atob(factoryID.toString());
setProfitAndLossDetails({
...ProfitAndLossDetails,
groupID: DecryptedGroupID,
factoryID: DecryptedFactoryID
});
}, []);
I want to add a waiter/timer/delay to wait until the data gets bound to my const variables (atob is a decrypting function).
The groupID is required to generate the factoryIDs for the specific group. During a page reload it takes time for the hooks to bind, so the factoryIDs sometimes fail to load (though they occasionally appear after refreshing). I think adding a delay and giving it time to bind might fix this issue.
Just add groupID and factoryID as your useEffect dependencies. The hook will be called automatically whenever they change. Inside the hook you can check that groupID and factoryID are not empty, and then call your setter function.
Read more about how this hook works:
https://reactjs.org/docs/hooks-reference.html#useeffect
You need to:
Test in the hook if they are defined or not
Call the effect hook when the values you are depending on change (so the hook doesn't run once when they aren't defined and then never run again) — add them to the dependency array.
Such:
useEffect(() => {
if (!groupID || !factoryID) return;
// Otherwise don't return and use them with their values
}, [groupID, factoryID]);
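Put together with the setter from the question, the whole effect might look roughly like this (a sketch using the names from the question; the functional form of the setter avoids depending on a stale copy of the previous state):
useEffect(() => {
  if (!groupID || !factoryID) return; // wait until both values have arrived
  const DecryptedGroupID = atob(groupID.toString());
  const DecryptedFactoryID = atob(factoryID.toString());
  setProfitAndLossDetails(prev => ({
    ...prev,
    groupID: DecryptedGroupID,
    factoryID: DecryptedFactoryID
  }));
}, [groupID, factoryID]);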

Angular subscribes not working how I expect

I'm at a loose end here, trying to understand the flow of how Angular subscriptions work.
I make a call to an API and in the response I set the data in a BehaviorSubject, so I can then subscribe to that data in my application.
Normally I would use async pipes in my templates because it's cleaner and it takes care of the subscriptions for me.
All methods are a part of the same class.
my first try.....
exportedData: BehaviorSubject<any[]> = new BehaviorSubject([]);
exportApiCall(id) {
this.loadingSubject.next(true)
this.api.getReport(id).pipe(
catchError((err, caught) => this.errorHandler.errorHandler(err, caught)),
finalize(() => this.loadingSubject.next(false))
).subscribe(res => {
this.exportedData.next(res)
})
}
export(collection) {
let x = []
this.exportCollection(collection.id); /// calls api
this.exportedData.subscribe(exportData => {
if(exportData){
x = exportData
}
})
}
console.log(x) //// first time it's empty, then it's populated with the last click of data
/// in the template
<button (click)="export(data)">Export</button>
My problem is....
There is a list of buttons with different IDs. Each ID goes to the API and gives back certain data. When I click, the console log first gives a blank array. Thereafter I get the previous set of data (from the button I originally clicked).
I'm obviously not understanding subscriptions, pipes and BehaviorSubjects correctly. I understand I'm getting a blank array because I'm initialising the BehaviorSubject with a blank array.
my other try
export(collection) {
let x = []
this.exportCollection(collection.id).pipe(tap(res => x = res)).subscribe()
console.log(x) //// get blank array
}
exportApiCall(id) {
return this.api.getReport(id).pipe(
catchError((err, caught) => this.errorHandler.errorHandler(err, caught))
)
}
Not sure about the first example - the placement of console.log() and what the method assigned to the button click does - but for the second example, you're getting an empty array because your observable resolves asynchronously and JavaScript doesn't wait for it to complete before running the console.log().
You will most likely see that you always receive the previous result in your console.log() (after the response from the API has been updated).
To get the initial results, you can update to such:
public exportReport(collection): void {
this.exportCollection(collection.id).pipe(take(1)).subscribe(res => {
const x: any = res;
console.log(x);
});
}
This will print the current iteration's values. You also forgot to stop listening to the subscription (either by unsubscribing or by using operators such as take()). Without that, you might get unexpected results later on, or the application could end up carrying a heavy load of open subscriptions.
Make sure of the following steps:
Add console.log inside your functions and check whether values are coming through or not.
Open your Chrome browser's network tab and see whether the service endpoint is being hit or not.
Check whether any response is coming back from the endpoints.
If it is still not identifiable, use the snippet below to check whether you are getting a response or not:
public exportReport(collection): void {
this.http.get(url+"/"+collection.id).subscribe(res=> {console.log(res)});
}
You would use a BehaviorSubject if there needs to be an initial/default value. If not, you can replace it with a Subject. This is why the initial value is an empty array: the BehaviorSubject emits once by default. If you use a Subject, it won't emit before the API call, so you won't get the initial empty array.
exportedData: BehaviorSubject<any[]> = new BehaviorSubject([]);
Also, you might not need to subscribe here; instead, return the observable directly, and by doing so you can avoid using the above subject.
exportApiCall(id) {
this.loadingSubject.next(true);
return this.api.getReport(id).pipe(
catchError((err, caught) => this.errorHandler.errorHandler(err, caught)),
finalize(() => this.loadingSubject.next(false))
);
}
console.log(x) needs to be inside the subscription, as subscribe is asynchronous and we don't know when it might complete. And since you need this data, you might want to declare x in a wider scope (e.g. as a class property).
export(collection) {
// call api
this.exportApiCall(collection.id).subscribe(exportData => {
if (exportData) {
this.x = exportData; // or maybe this.x.push(exportData) ?
console.log(this.x);
}
});
}
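Since the question mentions normally preferring async pipes, a further option (a sketch, assuming the component keeps the returned observable in a field) is to skip the manual subscribe entirely and let the template do the subscribing and unsubscribing:
exportedData$: Observable<any>;
export(collection) {
  this.exportedData$ = this.exportApiCall(collection.id);
}
/// in the template
<button (click)="export(data)">Export</button>
<pre *ngIf="exportedData$ | async as exported">{{ exported | json }}</pre>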

RxJS - initial state and updates

I need to obtain data from websocket and I want to use RxJS to do so.
There is a websocket 1 for the latest initial data (~1000 records) and websocket 2 for the incremental updates.
I have created two observables:
initialState$ that goes to websocket 1, fetches the initial data and then completes.
updateEvent$ that goes to websocket 2 and continuously receives updates.
My initial implementation was:
initialState.subscribe(initialData=> {
console.log(initialData);
updateEvent.subscribe(updateEvent => {
console.log(updateEvent);
});
});
The issue I'm facing is that there is a gap between fetching the initialState and receiving the first update (updateEvent).
(I might lose an update that happens after I fetch the initial data and before the inner subscribe.)
Is there some practical way to create a new observable that subscribes to both of my observables at the same time, buffers the updateEvent stream until initialState$ completes, and then delivers them in the right order, "initial data first" then "updates"?
Basically making the initialState just the "first" update, but making sure there aren't any missing updates after that.
It looks like you could achieve what you need by buffering the second websocket stream until the first one emits. The chain gets a little more complicated, though, because you want to start receiving values only after the first stream emits.
const initialStateShared = initialState.pipe(share());
const updateEventShared = updateEvent.pipe(share());
merge(
initialStateShared,
updateEventShared.pipe( // Buffer the second stream but only once
buffer(initialStateShared),
take(1),
),
updateEventShared.pipe( // Updates from the second stream will be buffered first and then continue coming from here
skipUntil(initialStateShared),
)
).subscribe(...);
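One detail worth noting: buffer emits the collected updates as a single array, so if the subscriber expects individual update events rather than an array, a flattening operator can be appended to that branch. A sketch in the same pipeable style (mergeAll treats the emitted array as a sequence of values):
updateEventShared.pipe(
  buffer(initialStateShared), // collect updates until the initial state arrives, emitted as one array
  take(1),
  mergeAll(),                 // flatten that array back into individual update events
),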
If I am understanding it correctly what you want is to trigger both of the requests simultaneously and subscribe to them only if both are already available. I think you are looking for the combineLatest operator.
combineLatest([initialState$, updateEvent$]).subscribe(([initialState, updateEvent]) => {
console.log({initialState, updateEvent});
});
This way the combined observable will wait for both initialState$ and updateEvent$ to have emitted something, and after that it will emit whenever either of the combined observables emits. See https://www.learnrxjs.io/operators/combination/combinelatest.html for more information.
Note: You should avoid doing a subscribe inside another subscribe. It is often a code smell that something is wrong.

Angular4 - how to ensure ngOnDestroy finishes before navigating away

I have a list of objects. The user can click on one, which then loads a child component to edit that component.
The problem I have is that when the user goes back to the list component, the child component has to do some cleanup in the ngOnDestroy method - which requires making a call to the server to do a final 'patch' of the object. Sometimes this processing can be a bit slow.
Of course what happens is the user arrives back on the list, and that api call completes before the database transaction from the ngOnDestroy completes, and thus the user sees stale data.
ngOnDestroy(){
this.destroy$.next();
this.template.template_items.forEach((item, index) => {
// mark uncompleted items for deletion
if (!item.is_completed) {
this.template.template_items[index]['_destroy'] = true;
};
});
// NOTE
// We don't care about result, this is a 'silent' save to remove empty items,
// but also to ensure the final sorted order is saved to the server
this._templateService.patchTemplate(this.template).subscribe();
this._templateService.selectedTemplate = null;
}
I understand that doing synchronous calls is not recommended as it blocks the UI/whole browser, which is not great.
I am sure there are multiple ways to solve this but really don't know which is the best (especially since Angular does not support sync requests so I would have to fall back to standard ajax to do that).
One idea I did think of was that the ngOnDestroy could pass a 'marker' to the API, and it could then mark that object as 'processing'. When the list component does its call, it could inspect each object to see if it has that marker and show a 'refresh stale data' button for any object in that state (which 99% of the time would only be a single item anyway, the most recent one the user edited). Seems a bit of a crap workaround and requires a ton of extra code compared to just changing an async call to a sync call.
Others must have encountered similar issues, but I cannot seem to find any clear examples except this sync one.
EDIT
Note that this child component already has a CanDeactivate guard on it. It asks the user to confirm (i.e. discard changes). So if they click to confirm, then this cleanup code in ngOnDestroy is executed. But note this is not a typical Angular form where the user is really 'discarding' changes. Essentially, before leaving this page the server has to do some processing on the final set of data. So ideally I don't want the user to leave until ngOnDestroy has finished - how can I force it to wait until that API call is done?
My CanDeactivate guard is implemented almost the same as in the official docs for the Hero app, hooking into a general-purpose dialog service that prompts the user whether they wish to stay on the page or navigate away. Here it is:
canDeactivate(): Observable<boolean> | boolean {
console.log('deactivating');
if (this.template.template_items.filter((obj) => { return !obj.is_completed}).length < 2)
return true;
// Otherwise ask the user with the dialog service and return its
// observable which resolves to true or false when the user decides
return this._dialogService.confirm('You have some empty items. Is it OK if I delete them?');
}
The docs do not make it clear for my situation though - even if I move my cleanup code from ngOnDestroy to a "YES" method handler to the dialog, it STILL has to call the api, so the YES handler would still complete before the API did and I'm back with the same problem.
UPDATE
After reading all the comments I am guessing the solution is something like this. Change the guard from:
return this._dialogService.confirm('You have some empty items.
Is it OK if I delete them?');
to
return this._dialogService.confirm('You have some empty items.
Is it OK if I delete them?').subscribe(result => {
...if yes then call my api and return true...
...if no return false...
});
As you said, there are many ways, and they depend on details of how your whole app, data flow and UX flow are set up, but it feels like you might want to take a look at the CanDeactivate guard method, which ensures the user cannot leave the route until your Observable<boolean> | Promise<boolean> resolves to true.
So it's a way of waiting asynchronously until your service confirms things have changed on the server.
[UPDATE]
it depends on your user confirmation implementation but something along these lines...
waitForServiceToConfirmWhatever(): Observable<boolean> {
return yourService.call(); //this should return Observable<boolean> with true emitted when your server work is done
}
canDeactivate(): Observable<boolean> {
if(confirm('do you want to leave?') == true)
return this.waitForServiceToConfirmWhatever();
else
return Observable.of(false);
}
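Applied to the guard from the question, that could look roughly like this (a sketch in the same RxJS 5 style, assuming the dialog emits a boolean, that patchTemplate returns an observable which completes once the server has finished, and that the item-marking loop from ngOnDestroy is moved ahead of the patch call):
canDeactivate(): Observable<boolean> | boolean {
  if (this.template.template_items.filter(obj => !obj.is_completed).length < 2)
    return true;
  return this._dialogService.confirm('You have some empty items. Is it OK if I delete them?')
    .switchMap(confirmed => confirmed
      ? this._templateService.patchTemplate(this.template).mapTo(true) // hold navigation until the patch completes
      : Observable.of(false));                                         // user chose to stay on the page
}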
One "workaround" I can think of is to have your list based in client. You have the list as a JS array or object and show the UI based on that. After editing in the details screen, have a stale flag on the item which the service called on ngOnDestroy clears while updating the other related data.

Shortest code to cache Rxjs http request while not complete?

I'm trying to create an observable flow that fulfills the following requirements:
Loads data from storage at subscribe time
If the data has not yet expired, return an observable of the stored value
If the data has expired, return an HTTP request observable that uses the refresh token to get a new value and store it
If this code is reached again before the request has completed, return the same request observable
If this code is reached after the previous request completed or with a different refresh token, start a new request
I'm aware that there are many different answers on how to perform step (3), but as I'm trying to perform these steps together I am looking for guidance on whether the solution I've come up with is the most succinct it can be (which I doubt!).
Here's a sample demonstrating my current approach:
var cachedRequestToken;
var cachedRequest;
function getOrUpdateValue() {
return loadFromStorage()
.flatMap(data => {
// data doesn't exist, shortcut out
if (!data || !data.refreshtoken)
return Rx.Observable.empty();
// data still valid, return the existing value
if (data.expires > new Date().getTime())
return Rx.Observable.return(data.value);
// if the refresh token is different or the previous request is
// complete, start a new request, otherwise return the cached request
if (!cachedRequest || cachedRequestToken !== data.refreshtoken) {
cachedRequestToken = data.refreshtoken;
var pretendHttpBody = {
value: Math.random(),
refreshToken: Math.random(),
expires: new Date().getTime() + (10 * 60 * 1000) // set by server, expires in ten minutes
};
cachedRequest = Rx.Observable.create(ob => {
// this would really be a http request that exchanges
// the one use refreshtoken for new data, then saves it
// to storage for later use before passing on the value
window.setTimeout(() => { // emulate slow response
saveToStorage(pretendHttpBody);
ob.next(pretendHttpBody.value);
ob.complete();
cachedRequest = null; // clear the request now we're complete
}, 2500);
});
}
return cachedRequest;
});
}
function loadFromStorage() {
return Rx.Observable.create(ob => {
var storedData = { // loading from storage goes here
value: 15, // wrapped in observable to delay loading until subscribed
refreshtoken: 63, // other process may have updated this between requests
expires: new Date().getTime() - (60 * 1000) // pretend to have already expired
};
ob.next(storedData);
ob.complete();
})
}
function saveToStorage(data) {
// save goes here
}
// first request
getOrUpdateValue().subscribe(function(v) { console.log('sub1: ' + v); });
// second request, can occur before or after first request finishes
window.setTimeout(
() => getOrUpdateValue().subscribe(function(v) { console.log('sub2: ' + v); }),
1500);
First, have a look at a working jsbin example.
The solution is a tad different than your initial code, and I'd like to explain why. The need to keep returning to your local storage, saving it, and saving flags (cache and token) didn't fit, for me, with a reactive, functional approach. The heart of the solution I gave is:
var data$ = new Rx.BehaviorSubject(storageMock);
var request$ = new Rx.Subject();
request$.flatMapFirst(loadFromServer).share().startWith(storageMock).subscribe(data$);
data$.subscribe(saveToStorage);
function getOrUpdateValue() {
return data$.take(1)
.filter(data => (data && data.refreshtoken))
.switchMap(data => (data.expires > new Date().getTime()
? data$.take(1)
: (console.log('expired ...'), request$.onNext(true) ,data$.skip(1).take(1))));
}
The key is that data$ holds your latest data and is always up to date; it is easily accessible by doing a data$.take(1). The take(1) is important to make sure your subscription gets a single value and terminates (because you attempt to work in a procedural, as opposed to functional, manner). Without the take(1) your subscription would stay active and you would have multiple handlers out there; that is, you'd handle future updates as well in code that was meant only for the current update.
In addition, I hold a request$ subject which is your way to start fetching new data from the server. The function works like so:
The filter ensures that if your data is empty or has no token, nothing passes through, similar to the return Rx.Observable.empty() you had.
If the data is up to date, it returns data$.take(1) which is a single element sequence you can subscribe to.
If not, it needs a refresh. To do so, it triggers request$.onNext(true) and returns data$.skip(1).take(1). The skip(1) is to avoid the current, outdated value.
For brevity I used (console.log('expired ...'), request$.onNext(true), data$.skip(1).take(1)). This might look a bit cryptic. It uses the JS comma operator, which is common in minified/uglified code: it evaluates all the expressions and returns the result of the last one. If you want more readable code, you could rewrite it like so:
.switchMap(data => {
if(data.expires > new Date().getTime()){
return data$.take(1);
} else {
console.log('expired ...');
request$.onNext(true);
return data$.skip(1).take(1);
}
});
The last part is the usage of flatMapFirst. This ensures that once a request is in progress, all following requests are dropped. You can see it works in the console printout. The 'load from server' is printed several times, yet the actual sequence is invoked only once and you get a single 'loading from server done' printout. This is a more reactive oriented solution to your original refreshtoken flag checking.
Though I didn't need the saved data, it is saved because you mentioned that you might want to read it on future sessions.
A few tips on rxjs:
Instead of using the setTimeout, which can cause many problems, you can simply do Rx.Observable.timer(time_out_value).subscribe(...).
Creating an observable manually is cumbersome (you even had to call next(...) and complete()). You have a much cleaner way to do this using Rx.Subject. Note that there are specializations of this class, BehaviorSubject and ReplaySubject. These classes are worth knowing and can help a lot.
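For instance, the first tip applied to the question's code: the setTimeout that emulates the slow response could be written as (a sketch in the RxJS 4 style used above, leaving out the cache-clearing step for brevity):
cachedRequest = Rx.Observable.timer(2500)   // emulate the slow HTTP round trip
  .map(() => pretendHttpBody)
  .do(body => saveToStorage(body))          // persist before passing the value on
  .map(body => body.value);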
One last note. This was quite a challenge :-) I'm not familiar with your server-side code and design considerations, yet the need to suppress calls felt uncomfortable to me. Unless there is a very good reason related to your backend, my natural approach would be to use flatMap and let the last request 'win', i.e. drop previous unterminated calls and set the value.
The code is rxjs 4 based (so it can run in jsbin), if you're using angular2 (hence rxjs 5), you'll need to adapt it. Have a look at the migration guide.
================ answers to Steve's other questions (in comments below) =======
There is one article I can recommend. Its title says it all :-)
As for the procedural vs. functional approach, I'd add another variable to the service:
let token$ = data$.pluck('refreshtoken');
and then consume it when needed.
My general approach is to first map my data flows and relations and then like a good "keyboard plumber" (like we all are), build the piping. My top level draft for a service would be (skipping the angular2 formalities and provider for brevity):
class UserService {
data$: <as above>;
token$ = this.data$.pluck('refreshtoken');
private request$: <as above>;
refresh(){
this.request$.onNext(true);
}
}
You might need to do some checking so the pluck does not fail.
Then, each component that needs the data or the token can access it directly.
Now lets suppose you have a service that needs to act on a change to the data or the token:
class SomeService {
constructor(private userSvc: UserService){
this.userSvc.token$.subscribe(() => this.doMyUpdates());
}
}
If you need to synthesize data, meaning use the data/token together with some local data:
Rx.Observable.combineLatest(this.userSvc.data$, this.myRelevantData$)
.subscribe(([data, myData]) => this.doMyUpdates(data.someField, myData.someField));
Again, the philosophy is that you build the data flow and pipes, wire them up and then all you have to do is trigger stuff.
The 'mini pattern' I've come up with is to pass my trigger sequence to a service once and subscribe to the result. Let's take autocomplete as an example:
class ACService {
fetch(text: string): Observable<Array<string>> {
return http.get(text).map(response => response.json().data);
}
}
Then you have to call it every time your text changes and assign the result to your component:
<div class="suggestions" *ngFor="let suggestion; of suggestions | async;">
<div>{{suggestion}}</div>
</div>
and in your component:
onTextChange(text) {
this.suggestions = this.acSVC.fetch(text);
}
but this could be done like this as well:
class ACService {
createFetcher(textStream: Observable<string>): Observable<Array<string>> {
return textStream.flatMap(text => http.get(text))
.map(response => response.json().data);
}
}
And then in your component:
textStream: Subject<string> = new Subject<string>();
suggestions: Observable<Array<string>>;
constructor(private acSVC: ACService){
this.suggestions = acSVC.createFetcher(textStream);
}
onTextChange(text) {
this.textStream.next(text);
}
template code stays the same.
It seems like a small thing here, but once the app grows bigger and the data flow gets complicated, this works much better. You have a sequence that holds your data and you can use it around the component wherever you need it; you can even transform it further. For example, let's say you need to know the number of suggestions. In the first method, once you get the result, you need to query it further to get that, thus:
onTextChange(text) {
this.suggestions = this.acSVC.fetch(text);
this.suggestionsCount = this.suggestions.pluck('length'); // in a sequence
// or
this.suggestions.subscribe(suggestions => this.suggestionsCount = suggestions.length); // in a numeric variable.
}
Now in the second method, you just define:
constructor(private acSVC: ACService){
this.suggestions = acSVC.createFetcher(textStream);
this.suggestionsCount = this.suggestions.pluck('length');
}
Hope this helps :-)
While writing, I tried to reflect on the path I took to using reactive code like this. Needless to say, ongoing experimentation, numerous jsbins and strange failures are a big part of it. Another thing that I think helped shape my approach (though I'm not currently using it) is learning redux and reading/trying a bit of ngrx (Angular's redux port). The philosophy and the approach do not let you even think procedurally, so you have to tune in to a functional mindset based on data, relations and flows.
