How do I sequence actions in RxJS/redux-observable vs redux-saga? - javascript

I've started learning RxJS in depth, partly to master the redux-observable approach to side effects, though I find sagas more convenient and "declarative". I've already learned the mergeMap/flatMap/concatMap/switchMap operators, but that hasn't helped me figure out how to sequence things in RxJS.
Here's an example of what I mean by "sequencing", based on a timer app where the start may be scheduled after some period of time, implemented with redux-saga:
import { take, select, race, delay, call, put } from 'redux-saga/effects';

export function* timerSaga() {
  while (true) {
    yield take('START');
    const { startDelay } = yield select(); // scheduled delay
    const [cancelled] = yield race([
      take('CANCEL_START'),
      delay(startDelay)
    ]);
    if (!cancelled) {
      yield race([
        call(function*() {
          while (true) {
            yield delay(10);
            yield put({ type: 'TICK' });
          }
        }),
        take(['STOP', 'RESET'])
      ]);
    }
  }
}
I find that example logically consistent and clear, but I have no idea how to implement it with redux-observable. Please just give me a piece of code that reproduces the same logic with RxJS operators.

When moving between sagas (generators) and epics (observables), it's important to change the way you think about how events arrive at your code.
Generators satisfy the iterator and iterable protocols, which involve pulling
values/events (in this case, Redux actions) from the source, and blocking
execution until those events arrive.
Observables are push rather than pull. We describe and name streams of events
that we're interested in, and then we subscribe to them. There are no blocking
calls because all of our code is triggered by events when they occur.
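To make the pull/push contrast concrete, here is a minimal sketch of my own (not part of the original answer); take comes from redux-saga, ofType from redux-observable, and the Subject simply stands in for the stream of dispatched actions:
import { Subject } from 'rxjs';
import { take } from 'redux-saga/effects';
import { ofType } from 'redux-observable';

// Pull (generator): execution blocks at the yield until a START action arrives.
function* pullStyle() {
  const action = yield take('START');
  // ...code here runs only after START has been pulled in
}

// Push (observable): declare the stream up front and react whenever it emits.
const action$ = new Subject();
action$.pipe(ofType('START')).subscribe(action => {
  // ...runs each time a START action is pushed to us
});
action$.next({ type: 'START' }); // pushing an action triggers the subscriber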
This code duplicates the behavior in the saga example.
import { interval, timer } from 'rxjs';
import { withLatestFrom, mapTo, exhaustMap, takeUntil } from 'rxjs/operators';
import { ofType } from 'redux-observable';

const myEpic = (action$, state$) => {
  // A stream of all the "cancel start" actions
  const cancelStart$ = action$.pipe(ofType('CANCEL_START'));

  // This observable will emit delayed start events that are not cancelled.
  const delayedCancellableStarts$ = action$.pipe(
    // When a start action occurs...
    ofType('START'),
    // Grab the latest start delay value from state...
    withLatestFrom(state$, (_, { startDelay }) => startDelay),
    exhaustMap(
      // ...and emit an event after our delay, unless our cancel stream
      // emits first, then do nothing until the next start event arrives.
      // exhaustMap means we ignore all other start events while we handle
      // this one.
      (startDelay) => timer(startDelay).pipe(takeUntil(cancelStart$))
    )
  );

  // On subscribe, emit a tick action every 10ms
  const tick$ = interval(10).pipe(mapTo({ type: 'TICK' }));

  // On subscribe, emit only STOP or RESET actions
  const stopTick$ = action$.pipe(ofType('STOP', 'RESET'));

  // When a start event arrives, start ticking until we get a message to
  // stop. Ignore all start events until we stop ticking.
  return delayedCancellableStarts$.pipe(
    exhaustMap(() => tick$.pipe(takeUntil(stopTick$)))
  );
};
Importantly, even though we're creating and naming these observable streams, their behavior is lazy - none of them are 'activated' until subscribed to, and that happens when you provide this epic function to the redux-observable middleware.
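For completeness, here is a minimal wiring sketch, assuming the redux-observable 1.x API and a rootReducer defined elsewhere (with the older 0.x API you would pass the epic directly to createEpicMiddleware(myEpic) instead):
import { createStore, applyMiddleware } from 'redux';
import { createEpicMiddleware } from 'redux-observable';

const epicMiddleware = createEpicMiddleware();
const store = createStore(rootReducer, applyMiddleware(epicMiddleware));

// Nothing inside myEpic runs until this call subscribes it to action$ and state$.
epicMiddleware.run(myEpic);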

I assume take() returns an observable; I haven't tested the code. It can probably be transformed into Rx fashion like below.
The key here is repeat() and takeUntil().
// outer condition for starting the ticker
forkJoin(take('START'), select())
  .pipe(
    switchMap(([, startDelay]) =>
      // inner looping ticker
      timer(10).pipe(
        switchMap(_ => put({ type: 'TICK' })),
        repeat(),
        takeUntil(race(
          take('CANCEL_START'),
          delay(startDelay)
        ))
      )
    )
  );

Related

Why does the rxjs share operator not work as expected in this setTimeout() example?

I don't understand why the rxjs share operator does not work with setTimeout().
I'm trying to understand this blogpost. In this example, the concept of "shared subscription" does not seem to work as expected.
const observable1 = Observable.create(observer => {
  observer.next(`I am alive.`);
  setTimeout(() => {
    observer.next(`I am alive again.`);
  }, 1000);
}).pipe(share());

observable1.subscribe(x => console.log(x));
observable1.subscribe(x => console.log(x));
Expected:
I am alive.
I am alive again.
Actual:
I am alive.
I am alive again.
I am alive again.
Reproducible StackBlitz.
That is the expected output.
From official docs on share() operator:
Returns a new Observable that multicasts (shares) the original Observable. As long as there is at least one Subscriber this Observable will be subscribed and emitting data.
That means as soon as an observer subscribes, the observable starts emitting data.
So when the first subscribe statement observable1.subscribe(x => console.log(x)); executes, an observer subscribes and data is emitted by the observer.next(`I am alive.`); statement.
When the second subscribe statement executes, another observer subscribes and it receives only the data emitted from that point in time. This is the data emitted by observer.next(`I am alive again.`); in the setTimeout() callback.
We can see this clearly in this StackBlitz demo where we are logging Observer1 and Observer2 text along with the received data.
I think the point of confusion is seeing two I am alive again. statements. It is logged twice because we are logging it in each subscriber. Move these log statements to the observable and they will only be logged once. This makes it more evident that the observable is executed only once.
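A minimal sketch of that suggestion (mine, not from the answer): move the logging into the producer and it runs once per shared execution, no matter how many subscribers there are:
const observable1 = Observable.create(observer => {
  console.log('producer: I am alive.');          // logged once
  observer.next(`I am alive.`);
  setTimeout(() => {
    console.log('producer: I am alive again.');  // logged once
    observer.next(`I am alive again.`);
  }, 1000);
}).pipe(share());

observable1.subscribe(() => {}); // first subscriber starts the shared execution
observable1.subscribe(() => {}); // second subscriber reuses it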
This is the intended behaviour of share(): the source is executed once and its emissions are shared among all subscribers. Here is an example taken from learnrxjs.com. As you can see, with share() the tap() side effect runs only once, while every subscriber still receives the mapTo() result.
// RxJS v6+
import { timer } from 'rxjs';
import { tap, mapTo, share } from 'rxjs/operators';

// emit value in 1s
const source = timer(1000);

// log side effect, emit result
const example = source.pipe(
  tap(() => console.log('***SIDE EFFECT***')),
  mapTo('***RESULT***')
);

/*
  ***NOT SHARED, SIDE EFFECT WILL BE EXECUTED TWICE***
  output:
  "***SIDE EFFECT***"
  "***RESULT***"
  "***SIDE EFFECT***"
  "***RESULT***"
*/
const subscribe = example.subscribe(val => console.log(val));
const subscribeTwo = example.subscribe(val => console.log(val));

// share observable among subscribers
const sharedExample = example.pipe(share());

/*
  ***SHARED, SIDE EFFECT EXECUTED ONCE***
  output:
  "***SIDE EFFECT***"
  "***RESULT***"
  "***RESULT***"
*/
const subscribeThree = sharedExample.subscribe(val => console.log(val));
const subscribeFour = sharedExample.subscribe(val => console.log(val));

React - controlling async calls smartly without any side effect in complex applications

The solution proposed by codeslayer1 in the question React - Controlling multiple Ajax Calls has the issue of accessing state directly inside an action creator, which is an anti-pattern.
So, if I don't access the state inside my action creator, what I will do instead is listen to a batchRequestCompleted state in my component. When the component's batchRequestCompleted prop becomes true (meaning the previous request has completed), I will check whether any pending requests remain. If so, I will dispatch an action to process the next ones. So basically the saga dispatches an action which in turn modifies the state, and once the state is modified, another action to process further requests is dispatched from the component. This way, the saga never accesses the state.
The solution above sounds good but comes at the cost of the problem mentioned in Route change before action creator completes: what happens to the requests placed in the queue if someone navigates to a different route before the queue is cleared?
Can I solve the problem mentioned in React - Controlling multiple Ajax Calls without accessing state inside action creators and without bringing the component back into the picture to dispatch an action that clears the pending queue?
Note: I have created a new question because the problem mentioned in React - Controlling multiple Ajax Calls is solved, but with side effects, and this question focuses on reaching a solution that removes that side effect.
I made a little repo github.com/adz5a/so-stream-example to illustrate how I would solve your problem.
This repo uses two libraries, xstream and recompose. The former provides an implementation of observable streams with its operators, and the latter wires it up with React.
One concept is necessary before everything else: ES Observables. They are covered in depth in articles such as this one (I strongly recommend reading and listening to past articles/talks from Ben Lesh on this subject).
Observables are a lazy primitive used to model values over time. In JS we have another primitive for doing async: Promises. Those model an eventual value or error, and are therefore eager rather than lazy. In the case of a React component (or more generally a UI) we are interested in laziness because things can go wrong: the user may want to interrupt a long-running process, it can crash, the route can change, etc.
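To make that eager/lazy distinction concrete, here is a tiny sketch of my own using xstream (the library used in the repo); the logging is only there to show when the producers run:
import xs from 'xstream';

// A Promise starts doing its work the moment it is created (eager).
const promise = new Promise(resolve => {
  console.log('promise producer runs immediately');
  resolve(42);
});

// A stream does nothing until someone listens to it (lazy).
const stream = xs.create({
  start(listener) {
    console.log('stream producer runs only when listened to');
    listener.next(42);
  },
  stop() {}
});

// Nothing has been logged for the stream yet; adding a listener starts it:
stream.addListener({ next: value => console.log('got', value) });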
So, how can we solve your problem: controlling a long-running process (fetching lots of rows) which can be interrupted by user interaction?
First, the UI :
export class AnswerView extends React.Component {
static propTypes = {
// called when the user make a batch
// of request
onStart: PropTypes.func.isRequired,
// called when you want to stop the processing
// of requests ( when unmounting or at the request
// of the user )
onStop: PropTypes.func.isRequired,
// number of requests completed, 0 by default
completedRequests: PropTypes.number.isRequired,
// whether it's working right now or not
processing: PropTypes.bool.isRequired
};
render () {
// displays a form if no work is being done,
// else the number of completed requests
return (
<section>
<Link to="/other">Change Route !</Link>
<header>
Lazy Component Example
</header>
{
this.props.processing ?
<span>{"requests done " + this.props.completedRequests}<button onClick={this.props.onStop}>Stop !</button></span>:
<form onSubmit={e => {
e.preventDefault();
this.props.onStart(parseInt(e.currentTarget.elements.number.value, 10));
}}>
Nb of posts to fetch<input type="number" name="number" placeholder="0"/>
<input type="submit" value="go"/>
</form>
}
</section>
);
}
componentWillMount () {
console.log("mounting");
}
}
Pretty simple: a form with an input for the number of requests to perform (it could just as well be checkboxes on a table component ...).
Its props are as follows:
onStart: fn which takes the desired number
onStop: fn which takes no args and signals that we would like to stop. Can be hooked to a button or, in this case, componentWillUnmount.
completedRequests: integer, counts requests done, starting at 0.
processing: boolean, indicates if work is under way.
This does not do much by itself, so let's introduce recompose. Its purpose is to enhance components via HOCs (higher-order components). We will use the mapPropsStream helper in this example.
Note: in this answer I use stream/Observable interchangeably, but this is not true in the general case. A stream is an Observable with operators that allow transforming the emitted values into a new Observable.
For a React component we can sort of observe its props with the standard API: the first one at componentWillMount, then at componentWillReceiveProps. We can also signal that there will be no more props with componentWillUnmount. We can build the following (marble) diagram: p1--p2--..--pn--| (the pipe indicates the completion of the stream).
The enhancer code is posted below with comments.
What needs to be understood is that everything with streams can be approached like a signal: by modelling everything as a stream we can be sure that by sending the appropriate signal we get the desired behaviour.
export const enhance = mapPropsStream(prop$ => {
  /*
   * createEventHandler will help us generate the callbacks and their
   * corresponding streams.
   * Each callback invocation will dispatch a value to its corresponding
   * stream.
   */

  // models the requested number of requests
  const { handler: onStart, stream: requestCount$ } = createEventHandler();

  // models the *stop* signals
  const { handler: onStop, stream: stop$ } = createEventHandler();

  // models the number of completed requests
  const completedRequestCount$ = requestCount$.map(n => {
    // for each request, generate a dummy url list
    const urls = Array.from({ length: n }, (_, i) => `https://jsonplaceholder.typicode.com/posts/${i + 1}`);

    // this is the trick: we want the process to be aware of itself when
    // doing the next operation. This is a circular invocation so we need to
    // use a *proxy*. Note: another way is to use a *subject* but they are
    // not present in __xstream__, plz look at RxJS for a *subject* overview
    // and implementation.
    const requestProxy$ = xs.create();

    const count$ = requestProxy$
      // a *reduce* operation to follow where we are,
      // it acts like a cursor.
      .fold((n) => n + 5, 0)
      // this will log the current value
      .debug("nb");

    const request$ = count$.map(n => Promise.all(urls.slice(n, n + 5).map(u => fetch(u))))
      .map(xs.fromPromise)
      .flatten()
      .endWhen(xs.merge(
        // this stream completes when stop$ emits,
        // it also completes when the count is above the urls array length
        // and when prop$ has emitted its last value (when unmounting)
        stop$,
        count$.filter(n => n >= urls.length),
        prop$.last()
      ));

    // this effectively activates the proxy
    requestProxy$.imitate(request$);

    return count$;
  })
  .flatten();

  // models the processing prop,
  // will emit 2 values: false immediately,
  // true when the process starts.
  const processing$ = requestCount$.take(1)
    .mapTo(true)
    .startWith(false);

  // combines each stream to generate the props
  return xs.combine(
    // original props
    prop$,
    // completed requests, 0 at start
    completedRequestCount$.startWith(0),
    // boolean indicating if processing is en route
    processing$
  )
    .map(([ props, completedRequests, processing ]) => {
      return {
        ...props,
        completedRequests,
        processing,
        onStart,
        onStop
      };
    })
    // allows us to catch any error generated in the streams,
    // very much equivalent to the new ErrorBoundaries in React
    .replaceError(e => {
      // logs and returns an empty stream which will never emit,
      // effectively blocking the component
      console.error(e);
      return xs.empty();
    });
});
export const Answer = enhance(AnswerView);
I hope this answer is not (too) convoluted; feel free to ask any questions.
As a side note, after a little research you may notice that the processing boolean is not really used in the logic but is merely there to help the UI know what's going on: this is a lot cleaner than having some piece of state attached to the this of a Component.

Making a lazy, cached observable that only execute the source once

I'm trying to use an rxjs observable to delegate, but share, a piece of expensive work across the lifetime of an application.
Essentially, something like:
var work$ = Observable.create((o) => {
  const expensive = doSomethingExpensive();
  o.next(expensive);
  o.complete();
})
  .publishReplay(1)
  .refCount();
Now, this works fine and does exactly what I want, except for one thing: if all subscribers unsubscribe, then when the next one subscribes, my expensive work happens again. I want to keep it.
Now, I could use a subject, or I could remove the refCount() and use connect() manually (and never disconnect). But that would make the expensive work happen the moment I connect, not the first time a subscriber tries to consume work$.
Essentially, I want something akin to refCount that only looks at the first subscription to connect, and never disconnect. A "lazy connect".
Is such a thing possible at all?
How does publishReplay() actually work
It internally creates a ReplaySubject and makes it multicast compatible. The minimal replay value of ReplaySubject is 1 emission. This results in the following:
First subscription will trigger the publishReplay(1) to internally subscribe to the source stream and pipe all emissions through the ReplaySubject, effectively caching the last n(=1) emissions
If a second subscription is started while the source is still active the multicast() will connect us to the same replaySubject and we will receive all next emissions until the source stream completes.
If a subscription is started after the source is already completed the replaySubject has cached the last n emissions and it will only receive those before completing.
const source = Rx.Observable.from([1, 2])
  .mergeMap(i => Rx.Observable.of('emission:' + i).delay(i * 100))
  .do(null, null, () => console.log('source stream completed'))
  .publishReplay(1)
  .refCount();

// two subscriptions which are both in time before the stream completes
source.subscribe(val => console.log(`sub1:${val}`), null, () => console.log('sub1 completed'));
source.subscribe(val => console.log(`sub2:${val}`), null, () => console.log('sub2 completed'));

// new subscription after the stream has completed already
setTimeout(() => {
  source.subscribe(val => console.log(`sub_late-to-the-party:${val}`), null, () => console.log('sub_late-to-the-party completed'));
}, 500);
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.0.3/Rx.js"></script>
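As an aside (not from the original answer): with pipeable operators, shareReplay is commonly used for exactly the "connect lazily on the first subscriber, never disconnect from the source" behaviour asked about above. A minimal sketch, assuming RxJS 6.4+ (where the refCount option exists and defaults to false) and reusing doSomethingExpensive from the question:
import { Observable } from 'rxjs';
import { shareReplay } from 'rxjs/operators';

const work$ = new Observable(o => {
  o.next(doSomethingExpensive()); // runs only for the first subscription
  o.complete();
}).pipe(
  // refCount: false keeps the internal ReplaySubject subscribed to the source,
  // so the cached value survives even after every subscriber unsubscribes.
  shareReplay({ bufferSize: 1, refCount: false })
);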

asynchronously add epic to middleware in redux-observable

I'm trying to evaluate redux-observable. Just looking through the doc and I'm trying to get the async epic loading thing going. I created a fork of the jsbin from the docs which basically attempts to add the async usage of the BehaviorSubject stuff.
http://jsbin.com/bazoqemiqu/edit?html,js,output
In that 'PING PONG' example, I added an 'OTHER' action and then used BehaviorSubject.next (as described in the docs) to add that epic. However, when I run the example, the PING action is fired, followed by an endless stream of 'OTHER' actions, but never the PONG action. To see this, I added reduxLogger; view it in the dev tools console, as the jsbin console doesn't render it correctly.
My question is what am I doing wrong? Why does the PONG action never get dispatched?
Your otherEpic is an infinite "loop" (over time)
const otherEpic$ = action$ =>
  action$
    .delay(1000)
    .mapTo({ type: OTHER });
This epic has the behavior "when any action at all is received, wait 1000ms and then emit another action of type OTHER". And since the actions your Epics emit go through the normal store.dispatch cycle like any other action, that means after the first PING is received, it will emit an OTHER after 1000ms, which will then be recursively received by the same epic again, wait another 1000ms and emit another OTHER, repeat forever.
I'm not sure if this was known, but wanted to point it out.
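A minimal way to break that cycle (the full demo further down does the same thing) is to make the epic react only to a specific action type instead of every action:
const otherEpic = action$ =>
  action$.ofType(PONG)  // only PONG triggers it, so OTHER can no longer re-trigger it
    .delay(1000)
    .mapTo({ type: OTHER });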
You next() into the BehaviorSubject of epic$ before your rootEpic has started running/been subscribed to it.
BehaviorSubjects keep the last value emitted and provide it immediately to anyone who subscribes. Since your rootEpic has not yet been called and subscribed to by the middleware, you're replacing the initial value, so only the otherEpic is emitted and run through the epic$.mergeMap stuff.
In a real application with async/bundle splitting, the point at which you call epic$.next(newEpic) should always be after the middleware has subscribed to your rootEpic and received the initial epic you provided to your BehaviorSubject.
Here's a demo of that working: http://jsbin.com/zaniviz/edit?js,output
const epic$ = new BehaviorSubject(combineEpics(epic1, epic2, ...etc));

const rootEpic = (action$, store) =>
  epic$.mergeMap(epic => {
    console.log(epic);
    return epic(action$, store);
  });

const otherEpic = action$ =>
  action$.ofType(PONG)
    .delay(1000)
    .mapTo({ type: OTHER });

const epicMiddleware = createEpicMiddleware(rootEpic);

const store = createStore(rootReducer,
  applyMiddleware(loggerMiddleware, epicMiddleware)
);

// any time AFTER the epicMiddleware
// has received the rootEpic
epic$.next(otherEpic);
The documentation says "sometime later" in the example, which I now see isn't clear enough. I'll try and clarify this further.
You may also find this other question on async loading of Epics useful if you're using react-router with Webpack's require.ensure() splitting.
Let me know if I can clarify any of these further 🖖

Emitting events from Redux reducers

I'm using redux as a state container in a simple shooter game. State is completely deterministic, the only input the system receives is user input (eg. a weapon was fired, etc).
My problem is that I have to track (and process) certain events, that happen during the game (eg. something was destroyed, etc), and I'm not quite sure how to do that.
My current solution is that the reducer maintains an events array in the current state, and every reducer just appends events to it.
FIRE_WEAPON -+            FIRE_WEAPON -+
             |                         |
             |                         |
  -----------v------------+-----------v------------->
                           |
                           |
                           +-> PLAYER_DESTROYED
Here the reducer receives two FIRE_WEAPON actions, and should "emit" a PLAYER_DESTROYED event (right now, it is used to render an explosion there).
The project is open source, the reducer looks something like this (it's just pseudocode, but here is the relevant game logic):
// combine is just (f, g) => ((s, a) => g(f(s, a), a))
const reducer = combine(
  // simulate world
  (state, action) => {
    while (state.time < action.time) {
      state = evolve(state, delta); // evolve appends the in-game events that happened to the state
    }
    return state;
  },
  // handle actual user input
  (state, action) => {
    return handleUserInput(state, action);
  }
);

const evolve = (state, delta) => {
  const events = [];
  // some game logic that does some `events.push(...)`
  return {
    ...state,
    time: state.time + delta,
    events: state.events.concat(events),
  };
};
We can assume that handleUserInput is a simple x => x identity function (it doesn't touch the events array).
During evolve, I'd like to "emit" events, but since that would make evolve impure, I cannot do that.
As I said, right now I'm doing this by storing the happened events in the state, but there might be a better way. Any suggestions?
These events are used during rendering, which looks like this:
let sprites = [];

// `subscribe`d to store
function onStateChange() {
  // `obsolete` removes old sprites that should not be displayed anymore
  // `toSprite` converts events to sprites, you can assume that they are just simple objects
  sprites = sprites.filter(obsolete).concat(store.getState().events.map(toSprite));
}

function render(state) {
  renderState(state);
  renderSprites(sprites);
}
But later on I'd like use events on the server (the reducer described above runs on the server too), for calculating various stats (eg. enemies destroyed, etc.).
Ps.: these "emitted" events have no influence on the state (they are totally unrelated), so I'm sure that they shouldn't be actions (because they would leave the state unchanged). They are processed after the reducer has finished, after that they can be dropped (the reducer always receives an empty events array).
I'm pretty sure that you can divide it into three parts:
-Actions:
const fireWeapon = () => ({
  type: FIRE_WEAPON
});
You can dispatch actions like fireWeapon. As you said, reducers are pure functions, so you can store in the state how many times you have dispatched that action.
-Reducer (fires):
const initialState = { fireWeapon: 0, fireShotgun: 0 };

// inside the fires reducer's switch statement:
case FIRE_WEAPON:
  return { ...state, fireWeapon: state.fireWeapon + 1 };
And finally, the key part: a library called redux-observable. It's based on RxJS and reactive programming: you can subscribe to the stream of actions and emit new ones.
A really easy example is:
import { ofType } from 'redux-observable';
import { filter, map } from 'rxjs/operators';

export const playerDestroyedEpic = (action$, state$) =>
  action$.pipe(
    ofType(FIRE_WEAPON),
    // every second weapon fire destroys the player
    filter(() => state$.value.fires.fireWeapon % 2 === 0),
    // return the action so the middleware dispatches it
    map(() => playerDestroyed())
  );
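To tie the three parts together, here is a minimal wiring sketch of my own (not from the answer), assuming the counter reducer above is named firesReducer and mounted under the fires key, and assuming the redux-observable 1.x API:
import { createStore, applyMiddleware, combineReducers } from 'redux';
import { createEpicMiddleware, combineEpics } from 'redux-observable';

// firesReducer and playerDestroyedEpic are the pieces sketched above
const rootReducer = combineReducers({ fires: firesReducer });
const rootEpic = combineEpics(playerDestroyedEpic);

const epicMiddleware = createEpicMiddleware();
const store = createStore(rootReducer, applyMiddleware(epicMiddleware));
epicMiddleware.run(rootEpic);

store.dispatch(fireWeapon());
store.dispatch(fireWeapon()); // count reaches 2, the epic emits PLAYER_DESTROYED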
