Asynchronously add epic to middleware in redux-observable (JavaScript)

I'm evaluating redux-observable. While going through the docs I'm trying to get asynchronous epic loading working. I created a fork of the jsbin from the docs that basically attempts to add the async usage of the BehaviorSubject described there.
http://jsbin.com/bazoqemiqu/edit?html,js,output
In that 'PING PONG' example, I added an 'OTHER' action and then used BehaviorSubject.next (as described in the docs) to add the epic for it. However, when I run the example, the PING action is fired, followed by an endless stream of 'OTHER' actions, but never the PONG action. To see this, I added redux-logger; view the output in the dev tools console, as the jsbin console doesn't render it correctly.
My question is what am I doing wrong? Why does the PONG action never get dispatched?

Your otherEpic is an infinite "loop" (over time)
const otherEpic$ = action$ =>
  action$
    .delay(1000)
    .mapTo({ type: OTHER });
This epic has the behavior "when any action at all is received, wait 1000ms and then emit another action of type OTHER". And since the actions your epics emit go through the normal store.dispatch cycle like any other action, after the first PING is received it will emit an OTHER after 1000ms, which will then be recursively received by the same epic, wait another 1000ms, emit another OTHER, and repeat forever.
I'm not sure if this was known, but wanted to point it out.
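For what it's worth, the usual way to break that cycle is to make the epic react only to a specific action type instead of every action. A minimal sketch using the PING type from the example:

const otherEpic$ = action$ =>
  action$.ofType(PING) // only react to PING, so the emitted OTHER actions aren't fed back in
    .delay(1000)
    .mapTo({ type: OTHER });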
You next() into the BehaviorSubject of epic$ before your rootEpic has started running/been subscribed to it.
BehaviorSubjects keep the last value emitted and provide it immediately to anyone who subscribes. Since your rootEpic has not yet been called and subscribed to by the middleware, you're replacing the initial value, so only the otherEpic is emitted and run through the epic$.mergeMap stream.
In a real application with async/bundle splitting, the call to epic$.next(newEpic) should always happen after the middleware has subscribed to your rootEpic and received the initial epic you provided to your BehaviorSubject.
Here's a demo of that working: http://jsbin.com/zaniviz/edit?js,output
const epic$ = new BehaviorSubject(combineEpics(epic1, epic2, ...etc));

const rootEpic = (action$, store) =>
  epic$.mergeMap(epic => {
    console.log(epic);
    return epic(action$, store);
  });

const otherEpic = action$ =>
  action$.ofType(PONG)
    .delay(1000)
    .mapTo({ type: OTHER });

const epicMiddleware = createEpicMiddleware(rootEpic);
const store = createStore(rootReducer,
  applyMiddleware(loggerMiddleware, epicMiddleware)
);

// any time AFTER the epicMiddleware
// has received the rootEpic
epic$.next(otherEpic);
The documentation says "sometime later" in the example, which I now see isn't clear enough. I'll try and clarify this further.
You may also find this other question on async loading of Epics useful if you're using react-router with Webpack's require.ensure() splitting.
Let me know if I can clarify any of these further 🖖

Related

RxJS - initial state and updates

I need to obtain data from a websocket and I want to use RxJS to do so.
There is websocket 1 for the latest initial data (~1000 records) and websocket 2 for the incremental updates.
I have created two observables:
initialState$, which goes to websocket 1, fetches the initial data, and then completes.
updateEvent$, which goes to websocket 2 and continuously receives updates.
My initial implementation was:
initialState$.subscribe(initialData => {
  console.log(initialData);
  updateEvent$.subscribe(updateEvent => {
    console.log(updateEvent);
  });
});
The issue I'm facing is that there is a gap between fetching the initialState and receiving the first update (updateEvent).
(I might lose an update that happens after I fetch the initial data and before I subscribe.)
Is there a practical way to create a new observable that subscribes to both of my observables at the same time, buffers the updateEvent$ stream until initialState$ completes, and then emits everything in the right order: "initial data first", then "updates"?
Basically, this makes the initial state just the "first" update, while making sure no updates after it are missed.
It looks like you could achieve what you need by buffering the second websocket stream until the first one emits. The chain gets a little more complicated, though, because you want to start receiving values from the second stream only after the first one emits.
import { merge } from 'rxjs';
import { share, buffer, take, skipUntil } from 'rxjs/operators';

const initialStateShared = initialState$.pipe(share());
const updateEventShared = updateEvent$.pipe(share());

merge(
  initialStateShared,
  updateEventShared.pipe( // Buffer the second stream, but only once
    buffer(initialStateShared),
    take(1),
  ),
  updateEventShared.pipe( // The buffered updates arrive first, then updates continue coming from here
    skipUntil(initialStateShared),
  ),
).subscribe(...);
If I understand correctly, you want to trigger both requests simultaneously and subscribe only once both have produced a value. I think you are looking for the combineLatest operator.
combineLatest([initialState$, updateEvent$]).subscribe(([initialState, updateEvent]) => {
  console.log({ initialState, updateEvent });
});
This way the combined observable waits until both initialState$ and updateEvent$ have emitted something, and after that it emits whenever either of the combined observables emits. See https://www.learnrxjs.io/operators/combination/combinelatest.html for more information.
Note: you should avoid subscribing inside another subscribe; it is often a code smell that something is being done wrong.
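For reference, a sketch of how the nested subscribe from the question could be flattened (assuming RxJS 6 pipeable operators):

import { tap, mergeMap } from 'rxjs/operators';

initialState$.pipe(
  tap(initialData => console.log(initialData)), // handle the initial snapshot first
  mergeMap(() => updateEvent$),                 // then switch over to the update stream
).subscribe(updateEvent => console.log(updateEvent));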

Redux-Observable: modify state and trigger follow-up action

I have the following scenario in redux-observable. I have a component which detects which backend to use and should set the backend URL used by the api-client. Both the client and URL are held in the global state object.
The order of execution should be:
1. check backend
2. on error replace backend URL held in state
3. trigger 3 actions to load resources using new backend state URL
What I did so far: in step 1, I access the state$ object from within my epic and modify the backend URL. This only half works: the state is updated, but the actions triggered in step 3 still see the old state and use the wrong backend.
What is the standard way to update state in between actions if you depend on the order of execution?
My API-Epic looks like this:
export const authenticate = (action$, state$) => action$.pipe(
  ofType(actions.API_AUTHENTICATE),
  mergeMap(action =>
    from(state$.value.apiState.apiClient.authenticate(state$.value.apiState.bearer)).pipe(
      map(bearer => apiActions.authenticatedSuccess(bearer))
    )
  )
)

export const authenticatedSuccess = (action$, state$) => action$.pipe(
  ofType(actions.API_AUTHENTICATED_SUCCESS),
  concatMap(action => concat(
    of(resourceActions.doLoadAResource()),
    of(resourceActions.doLoadOtherResource()),
    of(resourceActions.doLoadSomethingElse())
  ))
)
A common approach I've found users discussing on GitHub and StackOverflow is chaining multiple epics, much like what I believe your example tries to demonstrate. The first epic dispatches an action when it's "done". A reducer listens for this action and updates the store's state. A second epic (or several additional epics, if you want concurrent operations) listens for this same action and kicks off the next sequence of the workflow. The secondary epics run after the reducers and thus see the updated state. From the docs:
Epics run alongside the normal Redux dispatch channel, after the reducers have already received them...
I have found the chaining approach works well to decouple phases of a larger workflow. You may want the decoupling for design reasons (such as separation of concerns), to reuse smaller portions of the larger workflow, or to make smaller units for easier testing. It's an easy approach to implement when your epic is dispatching actions in between the different phases of the larger workflow.
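For illustration, a sketch of the reducer piece of that chain (the action type and state shape are assumed from your example):

// This reducer handles API_AUTHENTICATED_SUCCESS *before* the chained
// authenticatedSuccess epic sees the action, so the epic reads the
// already-updated state
export function apiReducer(state = initialApiState, action) {
  switch (action.type) {
    case actions.API_AUTHENTICATED_SUCCESS:
      return { ...state, bearer: action.payload };
    default:
      return state;
  }
}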
However, keep in mind that state$ is an observable. You can use it to get the current value at any point in time -- including between dispatching different actions inside a single epic. For example, consider the following and assume our store keeps a simple counter:
export const workflow = (action$, state$) => action$.pipe(
  ofType(constants.START),
  withLatestFrom(state$),
  mergeMap(([action, state]) => // "state" is the value when the START action was dispatched
    concat(
      of(actions.increment()),
      state$.pipe(
        first(),
        map(state => // this new "state" is the _incremented_ value!
          actions.decrement()),
      ),
      defer(() => {
        const state = state$.value // this new "state" is now the _decremented_ value!
        return empty()
      }),
    ),
  ),
)
There are lots of ways to get the current state from the observable!
Regarding the following line of code in your example:
state$.value.apiState.apiClient.authenticate(state$.value.apiState.bearer)
First, passing an API client around using the state is not a common/recommended pattern. You may want to look at injecting the API client as a dependency to your epics (this makes unit testing much easier!). Second, it's not clear how the API client is getting the current backend URL from the state. Is it possible the API client is using a cached version of the state? If yes, you may want to refactor your authenticate method and pass in the current backend URL.
Here's an example that handles errors and incorporates the above:
/**
 * Let's assume the state looks like the following:
 * state: {
 *   apiState: {
 *     backend: "URL",
 *     bearer: "token"
 *   }
 * }
 */

// Note how the API client is injected as a dependency
export const authenticate = (action$, state$, { apiClient }) => action$.pipe(
  ofType(actions.API_AUTHENTICATE),
  withLatestFrom(state$),
  mergeMap(([action, state]) =>
    // Try to authenticate against the current backend URL
    from(apiClient.authenticate(state.apiState.backend, state.apiState.bearer)).pipe(
      // On success, dispatch an action to kick off the chained epic(s)
      map(bearer => apiActions.authenticatedSuccess(bearer)),
      // On failure, dispatch two actions:
      //   1) an action that replaces the backend URL in the state
      //   2) an action that restarts _this_ epic using the new/replaced backend URL
      catchError(error$ => of(apiActions.authenticatedFailed(), apiActions.authenticate())),
    ),
  ),
)
export const authenticatedSuccess = (action$, state$) => action$.pipe(
  ofType(actions.API_AUTHENTICATED_SUCCESS),
  ...
)
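The injection itself is wired up when the middleware is created. A sketch assuming redux-observable 1.x, where './apiClient' is a hypothetical module of yours:

import { createEpicMiddleware } from 'redux-observable';
import apiClient from './apiClient'; // hypothetical API client module

const epicMiddleware = createEpicMiddleware({
  dependencies: { apiClient }, // passed as the third argument to every epic
});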
Additionally, keep in mind when chaining epics that constructs like concat will not wait for the chained epics to "finish". For example:
concat(
  of(resourceActions.doLoadAResource()),
  of(resourceActions.doLoadOtherResource()),
  of(resourceActions.doLoadSomethingElse())
)
If each of these doLoadXXX actions "starts" an epic, all three will likely run concurrently. Each action will be dispatched one after another, and each epic will "start" running one after another without waiting for the previous one to "finish". This is because epics never really complete; they're long-lived, never-ending streams. You will need to explicitly wait on some signal that identifies when doLoadAResource completes if you want doLoadOtherResource to run after it; see the sketch below.
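One way to do that is to dispatch each follow-up "load" action only after observing a completion signal on the action stream. A sketch, where the *_SUCCESS action types are hypothetical names your resource epics would need to dispatch when they finish:

export const loadInSequence = (action$, state$) => action$.pipe(
  ofType(actions.API_AUTHENTICATED_SUCCESS),
  mergeMap(() => concat(
    of(resourceActions.doLoadAResource()),
    // wait until the epic handling doLoadAResource signals completion...
    action$.pipe(
      ofType('LOAD_A_RESOURCE_SUCCESS'),
      take(1),
      // ...then kick off the next load
      map(() => resourceActions.doLoadOtherResource()),
    ),
    action$.pipe(
      ofType('LOAD_OTHER_RESOURCE_SUCCESS'),
      take(1),
      map(() => resourceActions.doLoadSomethingElse()),
    ),
  )),
)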

ngRx state update and Effects execution order

I have my own opinion on this question, but it's better to double check and know for sure. Thanks for paying attention and trying to help. Here it is:
Imagine that we're dispatching an action which triggers some state changes and also has some Effects attached to it. So our code has to do two things: change state and perform some side effects. But what is the order of these tasks? Are they done synchronously? I believe that first we change state and then run the side effect, but is there a possibility that something else happens between these two tasks? Like this: we change state, then receive and handle the response to some HTTP request we made previously, and only then run the side effects.
[edit:] I've decided to add some code here. And also I simplified it a lot.
State:
export interface ApplicationState {
  loadingItemId: string;
  items: {[itemId: string]: ItemModel};
}
Actions:
export class FetchItemAction implements Action {
  readonly type = 'FETCH_ITEM';
  constructor(public payload: string) {}
}

export class FetchItemSuccessAction implements Action {
  readonly type = 'FETCH_ITEM_SUCCESS';
  constructor(public payload: ItemModel) {}
}
Reducer:
export function reducer(state: ApplicationState, action: any) {
  const newState = _.cloneDeep(state);
  switch (action.type) {
    case 'FETCH_ITEM':
      newState.loadingItemId = action.payload;
      return newState;
    case 'FETCH_ITEM_SUCCESS':
      newState.items[newState.loadingItemId] = action.payload;
      newState.loadingItemId = null;
      return newState;
    default:
      return state;
  }
}
Effect:
@Effect()
FetchItemAction$: Observable<Action> = this.actions$
  .ofType('FETCH_ITEM')
  .switchMap((action: FetchItemAction) => this.httpService.fetchItem(action.payload))
  .map((item: ItemModel) => new FetchItemSuccessAction(item));
And this is how we dispatch FetchItemAction:
export class ItemComponent {
  item$: Observable<ItemModel>;
  itemId$: Observable<string>;

  constructor(private route: ActivatedRoute,
              private store: Store<ApplicationState>) {
    this.itemId$ = this.route.params.map(params => params.itemId);
    this.itemId$.subscribe(itemId => this.store.dispatch(new FetchItemAction(itemId)));
    this.item$ = this.store.select(state => state.items)
      .combineLatest(this.itemId$)
      .map(([items, itemId]: [{[itemId: string]: ItemModel}, string]) => items[itemId]);
  }
}
}
Desired scenario:
1. User clicks on itemUrl_1;
2. we store itemId_1 as loadingItemId;
3. make request_1;
4. user clicks on itemUrl_2;
5. we store itemId_2 as loadingItemId;
6. the switchMap operator in our effect cancels the previous request_1 and makes request_2;
7. we get item_2 in the response;
8. store it under key itemId_2 and set loadingItemId = null.
Bad scenario:
1. User clicks on itemUrl_1;
2. we store itemId_1 as loadingItemId;
3. make request_1;
4. user clicks on itemUrl_2;
5. we store itemId_2 as loadingItemId;
6. we receive response_1 before we make the new request_2, but after loadingItemId has changed;
7. we store item_1 from response_1 under the key itemId_2;
8. set loadingItemId = null;
9. only now does our effect run, and we make request_2;
10. we get item_2 in response_2;
11. try to store it under key null and get an error.
So the question is simply if the bad scenario can actually happen or not?
So our code has to do two things: change state and perform some side effects. But what is the order of these tasks? Are they done synchronously?
Let's say we dispatch action A. We have a few reducers that handle action A. Those will get called in the order they are specified in the object that is passed to StoreModule.provideStore(). Then the side effect that listens to action A will fire next. Yes, it is synchronous.
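For instance, a sketch with assumed reducer names (your example uses a single reducer, so take this as illustrative only):

// Reducers run in the order of the keys in this object;
// only after all of them have handled the action do the
// effects registered for that action fire
StoreModule.provideStore({
  loadingItemId: loadingItemIdReducer,
  items: itemsReducer,
})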
I believe that first we change state and then run the side effect, but is there a possibility that something else happens between these two tasks? Like this: we change state, then receive and handle the response to some HTTP request we made previously, and only then run the side effects.
I've been using ngrx since the middle of last year and I've never observed this to be the case. What I've found is that every time an action is dispatched, it goes through the whole cycle of first being handled by the reducers and then by the side effects before the next action is handled.
I think this has to be the case, since redux (which ngrx evolved from) bills itself as a predictable state container on its main page. If unpredictable async interleaving were allowed, you wouldn't be able to predict anything and the redux dev tools wouldn't be very useful.
Edited #1
So I just did a test. I dispatched an action 'LONG' whose side effect ran an operation that takes 10 seconds. In the meantime I was able to keep using the UI and make more dispatches to the store. Finally the effect for 'LONG' finished and dispatched 'LONG_COMPLETE'. I was wrong about the reducers and side effect being a single transaction.
That said, I think it's still easy to predict what's going on, because all state changes are still transactional. And this is a good thing, because we don't want the UI to block while waiting for a long-running api call.
Edited #2
So if I understand correctly, the core of your question is about switchMap and side effects: what happens if the response comes back at the moment the reducer code is running, and the side effect then runs switchMap to cancel the first request?
The test I set up was to create two buttons, one called Quick and one called Long. Quick dispatches 'QUICK' and Long dispatches 'LONG'. The reducer that listens to Quick completes immediately. The reducer that listens to Long takes 10 seconds to complete.
I set up a single side effect that listens to both Quick and Long. It emulates an api call by using 'of', which lets me create an observable from scratch, and then waits 5 seconds (using .delay) before dispatching 'QUICK_LONG_COMPLETE'.
@Effect()
long$: Observable<Action> = this.actions$
  .ofType('QUICK', 'LONG')
  .map(toPayload)
  .switchMap(() => {
    return of('').delay(5000).mapTo({
      type: 'QUICK_LONG_COMPLETE'
    });
  });
During my test I clicked on the quick button and then immediately clicked the long button.
Here is what happened:
1. Quick button clicked.
2. 'QUICK' is dispatched.
3. The side effect starts an observable that will complete in 5 seconds.
4. Long button clicked.
5. 'LONG' is dispatched.
6. The reducer handling 'LONG' takes 10 seconds. At the 5-second mark the original observable from the side effect completes, but does not dispatch 'QUICK_LONG_COMPLETE'. Another 5 seconds pass.
7. The side effect listening to 'LONG' does a switchMap, cancelling my first side effect.
8. 5 seconds pass and 'QUICK_LONG_COMPLETE' is dispatched.
Therefore switchMap does cancel and your bad case shouldn't ever happen.

Making a lazy, cached observable that only executes the source once

I'm trying to use an rxjs observable to delegate, but share, a piece of expensive work across the lifetime of an application.
Essentially, something like:
var work$ = Observable.create((o) => {
  const expensive = doSomethingExpensive();
  o.next(expensive);
  o.complete();
})
  .publishReplay(1)
  .refCount();
Now, this works fine and does exactly what I want, except for one thing: if all subscribers unsubscribe, then when the next one subscribes, my expensive work happens again. I want to keep it.
Now, I could use a subject, or I could remove the refCount() and use connect manually (and never disconnect). But that would make the expensive work happen the moment I connect, not the first time a subscriber tries to consume work$.
Essentially, I want something akin to refCount that only looks at the first subscription to connect, and never disconnects. A "lazy connect".
Is such a thing possible at all?
How does publishReplay() actually work?
It internally creates a ReplaySubject and makes it multicast-compatible. The minimal replay count of a ReplaySubject is 1 emission. This results in the following:
The first subscription will trigger publishReplay(1) to internally subscribe to the source stream and pipe all emissions through the ReplaySubject, effectively caching the last n (=1) emissions.
If a second subscription is started while the source is still active, multicast() connects it to the same ReplaySubject and it will receive all subsequent emissions until the source stream completes.
If a subscription is started after the source has already completed, the ReplaySubject has cached the last n emissions and the subscriber will receive only those before completing.
// requires RxJS 5, e.g. https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.0.3/Rx.js
const source = Rx.Observable.from([1, 2])
  .mergeMap(i => Rx.Observable.of('emission:' + i).delay(i * 100))
  .do(null, null, () => console.log('source stream completed'))
  .publishReplay(1)
  .refCount();

// two subscriptions which both arrive in time, before the stream completes
source.subscribe(val => console.log(`sub1:${val}`), null, () => console.log('sub1 completed'));
source.subscribe(val => console.log(`sub2:${val}`), null, () => console.log('sub2 completed'));

// new subscription after the stream has already completed
setTimeout(() => {
  source.subscribe(val => console.log(`sub_late-to-the-party:${val}`), null, () => console.log('sub_late-to-the-party completed'));
}, 500);
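As for the "lazy connect" the question asks about, one way to approximate it is to trigger connect() on the first subscription and never dispose of the connection. A sketch (RxJS 5; doSomethingExpensive is the question's own placeholder):

const source$ = Rx.Observable.create(o => {
  o.next(doSomethingExpensive());
  o.complete();
});

const connectable$ = source$.publishReplay(1); // ConnectableObservable, not connected yet
let connected = false;

// defer() postpones the connect() until the first subscriber arrives,
// and the connection is never unsubscribed afterwards
const work$ = Rx.Observable.defer(() => {
  if (!connected) {
    connected = true;
    connectable$.connect();
  }
  return connectable$;
});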

How to avoid dispatching in the middle of a dispatch

Within my Flux-architected React application I am retrieving data from a store, and would like to create an action to request that information if it does not exist. However, I am running into an error where the dispatcher is already dispatching.
My desired code is something like:
getAll: function(options) {
  options = options || {};
  var key = JSON.stringify(options);
  var ratings = _data.ratings[key];
  if (!ratings) {
    RatingActions.fetchAll(options);
  }
  return ratings || [];
}
However, this intermittently fails when the dispatcher is already dispatching an action, with the message Invariant Violation: Dispatch.dispatch(...): Cannot dispatch in the middle of a dispatch. I am often making requests in response to a change in application state (e.g. the date range). The component where I make the request, in response to a change event from the AppStore, has the following:
getStateFromStores: function() {
  var dateOptions = {
    startDate: AppStore.getStartISOString(),
    endDate: AppStore.getEndISOString()
  };
  return {
    ratings: RatingStore.getAll(dateOptions),
  };
},
I am aware that event chaining is a Flux antipattern, but I am unsure what architecture is better for retrieving data when it does not yet exist. Currently I am using this terrible hack:
getAll: function(options) {
  options = options || {};
  var key = JSON.stringify(options);
  var ratings = _data.ratings[key];
  if (!ratings) {
    setTimeout(function() {
      if (!RatingActions.dispatcher.isDispatching()) {
        RatingActions.fetchAll(options);
      }
    }, 0);
  }
  return ratings || [];
},
What would be a better architecture, that avoids event chaining or the dispatcher error? Is this really event chaining? I just want to change the data based on the parameters the application has set.
Thanks!
You can use Flux's waitFor() function instead of a setTimeout.
For example, if you have two stores registered with the same dispatcher, you can have one store waitFor the other store to process the action first; the store that waited can then update itself and dispatch its change event afterwards. See the Flux docs example and the sketch below.
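A sketch of that pattern (the store and action names here are hypothetical):

RatingStore.dispatchToken = dispatcher.register(function(action) {
  if (action.type === 'DATE_RANGE_CHANGED') {
    // Block until AppStore has processed this action first
    dispatcher.waitFor([AppStore.dispatchToken]);
    // AppStore's state is now up to date, so it is safe to read from
    RatingStore.updateDateRange(AppStore.getStartISOString(), AppStore.getEndISOString());
    RatingStore.emitChange();
  }
});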
My particular error was occurring because my stores emitted their change event during the action dispatch, while it was still cycling through the listeners. This meant any listeners (i.e. components) that then triggered an action due to a data change in the store would interrupt the dispatch. I fixed it by emitting the change event after the dispatch had completed.
So this:
this.emit(CHANGE_EVENT);
Became
var self = this;
setTimeout(function() { // Run after the dispatcher has finished
  self.emit(CHANGE_EVENT);
}, 0);
Still a little hacky (I will probably rewrite it so it doesn't require a setTimeout). Open to solutions that address the architectural problem rather than this implementation detail.
The reason you get a dispatch in the middle of a previous dispatch is that your store synchronously dispatches an action (invokes an action creator) in the handler for another action. The dispatcher is technically dispatching until all of its registered callbacks have been executed, so if you dispatch a new action from any of the registered callbacks, you'll get that error.
However, if you do some async work, e.g. make an ajax request, you can still dispatch an action from the ajax callbacks, or from an async callback generally. This works because as soon as the async function has been invoked, execution continues immediately and the callback is put on the event queue, so it runs only after the current dispatch has finished.
As pointed out by Amida and in the comments of that answer, it's a matter of choice whether to make ajax requests from the action creators or from the store in response to an action. The key is that a store should only mutate its state in response to an action, not in an ajax/async callback.
In your particular case, if you prefer to make the ajax calls from the store, your store's registered callback would look something like this:
onGetAll: function(options) {
  // ...do some work
  request(ajaxOptions) // example for some promise-based ajax lib
    .then(function(data) {
      getAllSuccessAction(data); // runs after the dispatch
    })
    .error(function(data) {
      getAllFailedAction(data); // runs after the dispatch
    });
  // this will be run immediately, during the getAllAction dispatch
  return this.state[options];
},

onGetAllSuccess: function(data) {
  // update state or something and then trigger change event, or whatever
},

onGetAllFailed: function(data) {
  // handle failure somehow
}
Or you can just put the ajax call in your action creator and dispatch the "success/failed" actions from there.
You can use the "defer" option in the dispatcher.
In your case it would be like:
RatingActions.fetchAll.defer(options);
In my case, I fetch data through the actions/action creators. The store is only a dumb place that receives the payload of an action.
This means that I would "fetchAll" in an action and then pass the result to the store, which does whatever it needs with it and then emits a change event.
Some people use stores the way I do; others think the way you do.
Some people at Facebook use "my" approach:
https://github.com/facebook/flux/blob/19a24975462234ddc583ad740354e115c20b881d/examples/flux-chat/js/utils/ChatWebAPIUtils.js#L51
I think treating your stores like this would probably avoid the dispatch problem, but I may be wrong.
An interesting discussion is this one: https://groups.google.com/forum/#!topic/reactjs/jBPHH4Q-8Sc
where Jing Chen (a Facebook engineer) explains what she thinks about how to use stores.
