I'm using Redux as a state container in a simple shooter game. The state is completely deterministic; the only input the system receives is user input (e.g. a weapon was fired).
My problem is that I have to track (and process) certain events that happen during the game (e.g. something was destroyed), and I'm not quite sure how to do that.
My current solution is that the reducer maintains an events array in the current state, and every reducer just appends events to it.
FIRE_WEAPON +-+          FIRE_WEAPON +-+
              |                        |
              |                        |
  +-----------v------------+----------v----------->
                           |
                           |
                           +-> PLAYER_DESTROYED
Here the reducer receives two FIRE_WEAPON actions, and should "emit" a PLAYER_DESTROYED event (right now it is used to render an explosion there).
The project is open source; the reducer looks something like this (it's just pseudocode, but it shows the relevant game logic):
// combine is just (f, g) => ((s, a) => g(f(s, a), a))
const reducer = combine(
  // simulate the world up to the time of the incoming action
  (state, action) => {
    while (state.time < action.time) {
      state = evolve(state, delta); // evolve appends any in-game events that happened to the state
    }
    return state;
  },
  // handle the actual user input
  (state, action) => {
    return handleUserInput(state, action);
  }
);
const evolve = (state, delta) => {
  const events = [];
  // some game logic that does some `events.push(...)`
  return {
    ...state,
    time: state.time + delta,
    events: state.events.concat(events),
  };
};
We can assume that handleUserInput is a simple x => x identity function (it doesn't touch the events array).
During evolve, I'd like to "emit" events, but since that would make evolve impure, I cannot do that.
As I said, right now I'm doing this by storing the events that happened in the state, but there might be a better way. Any suggestions?
These events are used during rendering, which looks like this:
let sprites = [];

// `subscribe`d to the store
function onStateChange() {
  // `obsolete` removes old sprites that should not be displayed anymore
  // `toSprite` converts events to sprites; you can assume they are just simple objects
  sprites = sprites.filter(obsolete).concat(store.getState().events.map(toSprite));
}

function render(state) {
  renderState(state);
  renderSprites(sprites);
}
But later on I'd like to use the events on the server (the reducer described above runs on the server too) for calculating various stats (e.g. enemies destroyed).
P.S.: these "emitted" events have no influence on the state (they are totally unrelated to it), so I'm sure they shouldn't be actions (they would leave the state unchanged). They are processed after the reducer has finished; after that they can be dropped (the reducer always receives an empty events array).
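For reference, this is roughly what I have in mind for the server-side stats (a minimal sketch; the EVENTS_CONSUMED action and the stats shape are made up):

let stats = { playersDestroyed: 0 };

store.subscribe(() => {
  const { events } = store.getState();
  if (events.length === 0) return;

  // fold the emitted events into the stats
  stats = events.reduce((acc, event) => (
    event.type === 'PLAYER_DESTROYED'
      ? { ...acc, playersDestroyed: acc.playersDestroyed + 1 }
      : acc
  ), stats);

  // the reducer would reset `events` to [] when it sees this action
  store.dispatch({ type: 'EVENTS_CONSUMED' });
});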
I'm pretty sure you can divide it into three parts:
- Actions:
const fireWeapon = () => ({
  type: FIRE_WEAPON
});
You can dispatch actions like fireWeapon. As you said, reducers are pure functions, so you can store in the state how many times you have dispatched that action.
- Reducer (fires):
const initialState = { fireWeapon: 0, fireShotgun: 0 };

// inside the fires reducer's switch (action.type):
case FIRE_WEAPON:
  return { ...state, fireWeapon: state.fireWeapon + 1 };
And finally, the key part: a library called redux-observable. It's based on RxJS (reactive programming). You can subscribe to the stream of actions and emit new ones.
A really easy example is:
export const playerDestroyedEpic = (action$, state$) =>
  action$.pipe(
    ofType(FIRE_WEAPON),
    // only every second shot destroys the player
    filter(() => state$.value.fires.fireWeapon % 2 === 0),
    // an epic must return a stream of actions, so map to the new action
    map(() => playerDestroyed())
  );
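For completeness, this is a rough sketch of how such an epic could be wired into the store (assuming redux-observable v1+, and that the reducer above is mounted under the fires key so that state$.value.fires works):

import { createStore, applyMiddleware, combineReducers } from 'redux';
import { createEpicMiddleware, combineEpics } from 'redux-observable';

// `fires` is the reducer sketched above
const rootReducer = combineReducers({ fires });
const rootEpic = combineEpics(playerDestroyedEpic);

const epicMiddleware = createEpicMiddleware();
const store = createStore(rootReducer, applyMiddleware(epicMiddleware));
epicMiddleware.run(rootEpic);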
Related
What is the best way to pass data from one app to another app?
I have two ExtJS apps, let's call them appA and appB.
In my appA I am using some of the views of appB, therefore I need some data to be passed between them.
What is the correct way to pass the data?
Currently I am using:
var myObj = { "key1": "value1", "key2": "value2" };
sessionStorage.setItem('obJectData', myObj);
After use I remove it like this:
sessionStorage.removeItem("obJectData");
Can anyone tell me how and where to store the data in the correct way?
I have also thought about using a global variable.
If you need to have this across several different microapps all loaded on the same page, you might consider a combo approach of:
A singleton that holds the current shared state items
An event bus that can be used for pub/sub (publish-subscribe), so that components/apps can subscribe to events and get updates when changes occur.
A rudimentary example could be seen here:
const appState = {
  count: 0,
};

const eventBus = {
  // just using the built-in DOM handlers, but you could create a custom one
  subscribe: (eventName, handler) => document.addEventListener(eventName, handler),
  publish: (eventName, payload) => document.dispatchEvent(new CustomEvent(eventName, payload)),
};

const incrementCount = () => {
  appState.count = appState.count + 1;
  eventBus.publish('newCount', { detail: { count: appState.count } });
};

const addCountDisplay = () => {
  const newDisplay = document.createElement('div');
  newDisplay.classList.add('countDisplay');
  newDisplay.textContent = appState.count;
  document.body.append(newDisplay);
};

// updateAllCountDisplays
eventBus.subscribe('newCount', (e) => {
  const newCount = e.detail.count;
  const displays = document.querySelectorAll('.countDisplay');
  [...displays].forEach((display) => display.textContent = newCount.toString());
});
<button onclick="incrementCount()">increment count</button>
<button onclick="addCountDisplay()">add count display</button>
In this example, you can see that:
Anytime you create a new count display, it is able to fetch the current count from the singleton appState
Anytime you update the count, a custom event is triggered that can be subscribed to in order to update the UI in reaction to the change in state
This could obviously be improved in many ways, but it serves as an example of how you can combine a globally available singleton in memory serving as a state cache with custom events to give all your components access to the current state as well as a way to subscribe and react to changes in state. This is not the only pattern you could use; it's just one option to consider.
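As a side note on the sessionStorage approach from your question: sessionStorage can only hold strings, so if you do keep using it you need to serialize and parse the object, roughly like this:

const myObj = { key1: 'value1', key2: 'value2' };

// serialize on the way in, since sessionStorage only stores strings
sessionStorage.setItem('obJectData', JSON.stringify(myObj));

// ...and parse on the way out
const restored = JSON.parse(sessionStorage.getItem('obJectData') || '{}');

// clean up when done
sessionStorage.removeItem('obJectData');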
I am trying to see (out of curiosity) how complex it would be to reimplement basic redux / redux-observable behavior with pure RxJS.
Here is my take on it, but it seems incredibly too simple to be right. Can anyone point me at any errors/flaws in my logic?
Thank you very much
// set up store.dispatch functionally through a subject (action$.next() is like store.dispatch())
var action$ = new Rx.Subject()

// Create epics that do nothing interesting
function epic1(action$) {
  return action$.filter(action => action.type == "test").delay(1000).mapTo({
    type: "PONG"
  })
}

function epic2(action$) {
  return action$.filter(action => action.type == "test2").delay(2000).mapTo({
    type: "PING"
  })
}

//....
// Later on, merge all the epics into one observable
function activateAndMergeEpics(action$, ...epics) {
  // give the action$ stream to each epic
  var activatedArray = epics.map(epic => epic(action$))
  // merge them all into one megaObservable
  var merged = Rx.Observable.merge(...activatedArray)
  return merged
}

var merged = activateAndMergeEpics(action$, epic1, epic2)

// Pipe your megaObservable back inside the loop so
// you can process the actions in your reducers
var subscription = merged.subscribe(action$)

function rootReducer(state = {}, action) {
  console.log(action)
  return (state)
}

// Generate your state from your actions
var state$ = action$.scan(rootReducer, {})

// Do whatever you want now, like...
// state$.map(route).map(renderdom)
// Let's just subscribe to nothing to get the stream pumping
state$.subscribe()

// Simulate a dispatch
action$.next({
  type: "test"
})

// Another one
action$.next({ type: "test2" })
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.4.3/Rx.min.js"></script>
Yep, you've totally got the core functionality.
I hope you don't mind some unsolicited advice: If you're doing this just to learn how it works, I applaud you! That's such a great and surprisingly rare trait, even among programmers. I do want to caution against using your own home-rolled redux clone, because then you lose a lot of the huge benefits of redux: devtools, middleware, enhancers. You lose all of its built-in assertions/error checking, which actually is most of the code in redux (some of which is stripped out in production builds). You also lose fixes for edge cases that get shaken out over the years, which is sometimes why a given library might appear unnecessarily complex to anyone without that context.
You could add all those things, but then it would just be redux 🙃
If you do decide to go down that route, checkout some of the existing RxJS-based clones for inspiration (or collaboration) on yours:
https://www.npmjs.com/package/reactive-state
https://www.npmjs.com/package/rxdux
https://www.npmjs.com/package/oddstream
https://www.npmjs.com/package/rstore
The solution proposed by codeslayer1 in the question React - Controlling multiple Ajax Calls has an issue: it accesses state directly inside an action creator, which is an anti-pattern.
So, if I don't access the state inside my action creator, what I will do instead is listen to a batchRequestCompleted state in my component. When the component's batchRequestCompleted prop becomes true (meaning the previous request has completed), I will check whether there are any pending requests. If yes, I will dispatch an action to process those next requests. So basically the saga dispatches an action which in turn modifies the state, and once the state is modified, another action to process further requests is dispatched from the component. In this way, the saga never accesses the state.
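A rough sketch of what I have in mind for the component side (prop and action names like batchRequestCompleted, pendingRequests and processNextBatch are just placeholders):

import React from 'react';
import { connect } from 'react-redux';
import { processNextBatch } from './actions'; // hypothetical action creator

class BatchRunner extends React.Component {
  componentDidUpdate(prevProps) {
    const { batchRequestCompleted, pendingRequests, processNextBatch } = this.props;
    // the previous batch just finished and there is still work queued up
    if (!prevProps.batchRequestCompleted && batchRequestCompleted && pendingRequests.length > 0) {
      processNextBatch();
    }
  }

  render() {
    return null;
  }
}

export default connect(
  state => ({
    batchRequestCompleted: state.batch.batchRequestCompleted,
    pendingRequests: state.batch.pendingRequests,
  }),
  { processNextBatch }
)(BatchRunner);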
The solution above sounds good, but it comes at the cost of the problem mentioned in Route change before action creator completes: what happens to the requests placed in the queue if someone navigates to a different route before the queue is cleared?
Can I solve the problem mentioned in React - Controlling multiple Ajax Calls without accessing state inside action creators and without bringing the component back into the picture to dispatch an action that clears the pending queue?
Note: I have created a new question because the problem mentioned in React - Controlling multiple Ajax Calls is solved, but with side effects, and this question mainly focuses on reaching a solution that removes that side effect.
I made a little repo github.com/adz5a/so-stream-example to illustrate how I would solve your problem.
This repo uses two libraries, xstream and recompose. The former provides an implementation of observable streams with their operators, and the latter wires them up with React.
A concept is necessary before everything else: ES Observables. They are covered in depth in articles such as this (I strongly recommend reading and listening to past articles / talks from Ben Lesh on this subject).
Observables are a lazy primitive used to model values over time. In JS we have another primitive for doing async work: Promises. Those model an eventual value or error and thus are not lazy but eager. In the case of a React component (or more generally a UI) we are interested in laziness because things can go wrong: the user may want to interrupt a long-running process, it can crash, the route can change, etc.
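To make the lazy vs. eager distinction concrete, here is a tiny sketch using xstream (the library used in the rest of this answer): nothing runs until a listener is attached, and the producer is torn down when the listener is removed.

import xs from 'xstream';

// the producer only starts when the first listener arrives
const tick$ = xs.create({
  start(listener) {
    console.log('producer started'); // does NOT run at creation time
    this.id = setInterval(() => listener.next(Date.now()), 1000);
  },
  stop() {
    console.log('producer stopped');
    clearInterval(this.id);
  },
});

// a Promise would already be running at this point; tick$ is still inert
const listener = { next: v => console.log(v), error: console.error, complete: () => {} };
tick$.addListener(listener);                            // producer starts here
setTimeout(() => tick$.removeListener(listener), 3500); // producer stops here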
So, how can we solve your problem: controlling a long-running process (fetching lots of rows) which can be interrupted by user interaction?
First, the UI:
export class AnswerView extends React.Component {
  static propTypes = {
    // called when the user makes a batch of requests
    onStart: PropTypes.func.isRequired,
    // called when you want to stop the processing of requests
    // (when unmounting or at the request of the user)
    onStop: PropTypes.func.isRequired,
    // number of requests completed, 0 by default
    completedRequests: PropTypes.number.isRequired,
    // whether it's working right now or not
    processing: PropTypes.bool.isRequired
  };

  render() {
    // displays a form if no work is being done,
    // else the number of completed requests
    return (
      <section>
        <Link to="/other">Change Route !</Link>
        <header>
          Lazy Component Example
        </header>
        {
          this.props.processing ?
            <span>{"requests done " + this.props.completedRequests}<button onClick={this.props.onStop}>Stop !</button></span> :
            <form onSubmit={e => {
              e.preventDefault();
              this.props.onStart(parseInt(e.currentTarget.elements.number.value, 10));
            }}>
              Nb of posts to fetch<input type="number" name="number" placeholder="0"/>
              <input type="submit" value="go"/>
            </form>
        }
      </section>
    );
  }

  componentWillMount() {
    console.log("mounting");
  }
}
Pretty simple: a form with an input for the number of requests to perform (it could be checkboxes on a table component...).
Its props are as follows:
onStart: fn which takes the desired number of requests
onStop: fn which takes no args and signals we would like to stop. Can be hooked to a button or, in this case, componentWillUnmount.
completedRequests: integer, counts requests done, 0 by default.
processing: boolean, indicates if work is under way.
This does not do much by itself, so let's introduce recompose. Its purpose is to enhance components via HOCs (higher-order components). We will use the mapPropsStream helper in this example.
Note: in this answer I use stream / Observable interchangeably, but this is not true in the general case. A stream is an Observable with operators allowing you to transform the emitted values into a new Observable.
For a React component we can sort of observe its props with the standard API: the first one at componentWillMount, then at componentWillReceiveProps. We can also signal when there will be no more props with componentWillUnmount. We can build the following (marble) diagram: p1--p2--..--pn--| (the pipe indicates the completion of the stream).
The enhancer code is posted below with comments.
What needs to be understood is that everything with streams can be approached like a signal: by modelling everything as a stream we can be sure that by sending the appropriate signal we can get the desired behaviour.
export const enhance = mapPropsStream(prop$ => {
  /*
   * createEventHandler will help us generate the callbacks and their
   * corresponding streams.
   * Each callback invocation will dispatch a value to its corresponding
   * stream.
   */

  // models the requested number of requests
  const { handler: onStart, stream: requestCount$ } = createEventHandler();
  // models the *stop* signals
  const { handler: onStop, stream: stop$ } = createEventHandler();

  // models the number of completed requests
  const completedRequestCount$ = requestCount$
    .map(n => {
      // for each request, generate a dummy url list
      const urls = Array.from({ length: n }, (_, i) => `https://jsonplaceholder.typicode.com/posts/${i + 1}`);

      // this is the trick: we want the process to be aware of itself when
      // doing the next operation. This is a circular invocation so we need to
      // use a *proxy*. Note: another way is to use a *subject*, but they are
      // not present in __xstream__; please look at RxJS for a *subject*
      // overview and implementation.
      const requestProxy$ = xs.create();

      const count$ = requestProxy$
        // a *reduce* operation to follow where we are;
        // it acts like a cursor.
        .fold((n) => n + 5, 0)
        // this will log the current value
        .debug("nb");

      const request$ = count$.map(n => Promise.all(urls.slice(n, n + 5).map(u => fetch(u))))
        .map(xs.fromPromise)
        .flatten()
        .endWhen(xs.merge(
          // this stream completes when stop$ emits;
          // it also completes when the count is above the urls array length,
          // and when prop$ has emitted its last value (when unmounting)
          stop$,
          count$.filter(n => n >= urls.length),
          prop$.last()
        ));

      // this effectively activates the proxy
      requestProxy$.imitate(request$);

      return count$;
    })
    .flatten();

  // models the processing prop:
  // will emit 2 values, false immediately,
  // then true when the process starts.
  const processing$ = requestCount$.take(1)
    .mapTo(true)
    .startWith(false);

  // combines the streams to generate the props
  return xs.combine(
    // original props
    prop$,
    // completed requests, 0 at start
    completedRequestCount$.startWith(0),
    // boolean indicating if processing is en route
    processing$
  )
    .map(([props, completedRequests, processing]) => {
      return {
        ...props,
        completedRequests,
        processing,
        onStart,
        onStop
      };
    })
    // allows us to catch any error generated in the streams,
    // very much equivalent to the new error boundaries in React
    .replaceError(e => {
      // logs and returns an empty stream which will never emit,
      // effectively blocking the component
      console.error(e);
      return xs.empty();
    });
});
export const Answer = enhance(AnswerView);
I hope this answer is not (too) convoluted, feel free to ask any question.
As a side note, after a little research you may notice that the processing boolean is not really used in the logic but is merely there to help the UI know what's going on: this is a lot cleaner than having some piece of state attached to the this of a component.
I am trying to make multiple changes to the store, but not render till all changes are done. I wanted to do this with redux-thunk.
Here is my action creator:
function addProp(name, value) {
  return { type: 'ADD_PROP', name, value };
}

function multiGeoChanges(...changes) {
  // my goal here is to make multiple changes to geo, and make sure that react doesn't re-render until the end
  return async function (dispatch, getState) {
    for (let change of changes) {
      dispatch(change);
      await promiseTimeout(2000);
    }
  };
}
I dispatch my async action creator like this:
store.dispatch(multiGeoChanges(addProp(1, "val1"), addProp(2, "val2"), addProp(3, "val3")));
However, this causes React to render after each dispatch. I am new to redux-thunk and have never used async middleware, but I thought it could help me here.
@Kokovin Vladislav's answer is correct. To add some additional context:
Redux will notify all subscribers after every dispatch. To cut down on re-renders, either dispatch fewer times, or use one of several approaches for "batching" dispatches and notifications. For more info, see the Redux FAQ on update events: http://redux.js.org/docs/faq/Performance.html#performance-update-events .
I also recently wrote a couple of blog posts that relate to this topic. Idiomatic Redux: Thoughts on Thunks, Sagas, Abstraction, and Reusability discusses the pros and cons of using thunks, and summarizes several ways to handle batching of dispatches. Practical Redux Part 6: Connected Lists, Forms, and Performance describes several key aspects to be aware of regarding Redux performance.
Finally, there are several other libraries that can help with batching up store change notifications. See the Store#Store Change Subscriptions section of my Redux addons catalog for a list of relevant addons. In particular, you might be interested in https://github.com/manaflair/redux-batch , which will allow you to dispatch an array of actions with only a single notification event.
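A rough sketch of what that looks like (assuming the reduxBatch store enhancer exported by @manaflair/redux-batch; check that package's README for the exact setup):

import { createStore, applyMiddleware, compose } from 'redux';
import { reduxBatch } from '@manaflair/redux-batch'; // assumed export name

const store = createStore(
  rootReducer,
  compose(reduxBatch, applyMiddleware(/* your middleware */), reduxBatch)
);

// one dispatch, one subscriber notification, several actions applied
store.dispatch([addProp(1, 'val1'), addProp(2, 'val2'), addProp(3, 'val3')]);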
There are ways to achieve the goal:
Classic way:
usually:
Actions describe the fact that something happened, but don't specify how the application's state changes in response. This is the job of reducers.
That also means that actions are not setters.
Thus, you could describe what has happened and accumulate changes, and dispatch one action
something like:
const multipleAddProp = (changedProps) => ({
  type: 'MULTIPLE_ADD_PROP',
  changedProps
});
And then react to the action in the reducer:
const geo = (state, action) => {
  switch (action.type) {
    case 'MULTIPLE_ADD_PROP':
      // apply all the new props in one update
      return { ...state, ...action.changedProps };
    default:
      return state;
  }
};
Another way, when re-rendering is critical:
Then you can consider limiting the components which could be re-rendered on state change.
For example, you can use shouldComponentUpdate to check whether a component should be rendered or not.
You could also use reselect, in order not to re-render connected components after recalculating derived data, for example:
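A minimal sketch of the reselect approach (the state shape here is hypothetical): the selector is memoized, so a connected component that uses it only recomputes, and therefore only re-renders, when its inputs actually change.

import { createSelector } from 'reselect';

// hypothetical state shape: state.geo.props is a plain object of prop name -> value
const selectGeoProps = state => state.geo.props;

// memoized: recomputed only when state.geo.props changes by reference
export const selectPropNames = createSelector(
  [selectGeoProps],
  props => Object.keys(props)
);

// in the connected component:
// const mapStateToProps = state => ({ propNames: selectPropNames(state) });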
Non standard way:
redux-batched-actions
It works something like a transaction.
In this example, the subscribers would be notified once:
import { batchActions } from 'redux-batched-actions';

const multiGeoChanges = (...arrayOfActions) => dispatch => {
  dispatch(batchActions(arrayOfActions));
};
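Note that for the batched action to actually be unpacked, redux-batched-actions also needs its enableBatching higher-order reducer wrapped around the root reducer, roughly like this:

import { createStore } from 'redux';
import { enableBatching } from 'redux-batched-actions';

// the wrapped reducer applies every action in the batch,
// and subscribers are only notified once
const store = createStore(enableBatching(rootReducer), initialState);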
In react-redux 7.0.1+ batching is now built-in. Release notes of 7.0.1:
https://github.com/reduxjs/react-redux/releases/tag/v7.0.1
Batched Updates
React has an unstable_batchedUpdates API that it uses to group
together multiple updates from the same event loop tick. The React
team encouraged us to use this, and we've updated our internal Redux
subscription handling to leverage this API. This should also help
improve performance, by cutting down on the number of distinct renders
caused by a Redux store update.
import { batch } from 'react-redux';

function myThunk() {
  return (dispatch, getState) => {
    // should only result in one combined re-render, not two
    batch(() => {
      dispatch(increment());
      dispatch(increment());
    });
  };
}
By design, when the state held by the store changes, the view should render.
You can avoid multiple renders by updating the state once.
If you are using promises you can use Promise.all to wait for all the promises to resolve and then dispatch a new action to the store with the calculated result. https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Promise/all
Something like this:
Promise.all([p1, p2, p3, p4, p5]).then(changes => {
  dispatch(addManyProps(changes));
}, err => {
  // deal with error
});
Of course you'll need an action that deals with many props, something like addManyProps; this should update the state once, resulting in one render.
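A possible shape for that action and its reducer case (the names and the payload shape are just an illustration):

const addManyProps = changes => ({ type: 'ADD_MANY_PROPS', changes });

const geo = (state = {}, action) => {
  switch (action.type) {
    case 'ADD_MANY_PROPS':
      // merge every { name, value } pair in a single state update -> one render
      return action.changes.reduce(
        (next, { name, value }) => ({ ...next, [name]: value }),
        state
      );
    default:
      return state;
  }
};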
redux-batched-actions
Batching action creator and associated higher order reducer for redux that enables batching subscriber notifications for an array of actions.
Coming to this a bit late, but I think this is a much nicer solution, which enables you to add meta.batch to actions you would like to batch together into a single React update. As a bonus, this approach works with asynchronous actions.
import raf from 'raf'
import { batchedSubscribe } from 'redux-batched-subscribe'

let notify = null
let rafId = null

const shouldBatch = action => action?.meta?.batch

export const batchedSubscribeEnhancer = batchedSubscribe(freshNotify => (notify = freshNotify))

export const batchedSubscribeMiddleware = () => next => action => {
  const resolved = next(action)

  if (notify && rafId === null && !shouldBatch(action)) {
    notify()
  } else if (!rafId) {
    rafId = raf(() => {
      rafId = null
      notify()
    })
  }

  return resolved
}
Then connect up to your store
import { applyMiddleware, compose, createStore } from 'redux'
import { batchedSubscribeMiddleware, batchedSubscribeEnhancer } from './batching'

const store = createStore(
  reducer,
  initialState,
  compose(
    batchedSubscribeEnhancer,
    applyMiddleware(batchedSubscribeMiddleware)
  )
)
I'm trying to use React with the Flux architecture and stumbled on one restriction which I can't handle.
The problem is as follows:
There's a store which listens to an event. The event has an object id. We need to fetch the object if needed and make it selected.
If the store doesn't have an object with this id, it's queried. In the callback we dispatch another event to the store which is responsible for selection.
If the store has the object, I'd like to dispatch the selection event, but I can't because a dispatch is already in progress.
The best solution I've come up with so far is wrapping the inner dispatch in setTimeout(f, 0), but it looks scary.
Actually the problem is quite general: how should I organize a dispatch chain without nesting dispatches (without violating the current Flux restrictions) if each new dispatch is based on the result of handling the previous one?
Does anybody have any good approaches to solve such problems?
function selectItem(item) {
  AppDispatcher.dispatch({
    actionType: AppConstants.ITEM_SELECT,
    item: item
  });
}
// Item must be requested and selected.
// If it's in store - select it.
// Otherwise fetch and then select it.
SomeStore.dispatchToken = AppDispatcher.register((action) => {
  switch (action.actionType) {
    case AppConstants.ITEM_REQUESTED:
      var item = SomeStore.getItem(action.itemId);
      if (item) {
        // Won't work because we can't dispatch in the middle of a dispatch
        selectItem(item);
      } else {
        // Will work
        $.getJSON(`some/${action.itemId}`, (item) => selectItem(item));
      }
  }
});
Are you writing your own dispatcher? setTimeout(f, 0) is a fine trick. I do the same thing in my minimal flux here. Nothing scary there. JavaScript's concurrency model is pretty simple.
More robust flux dispatcher implementations should handle that for you.
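In other words, the only change needed in your ITEM_REQUESTED case is to defer the nested dispatch until the current dispatch cycle has finished (a minimal sketch based on your snippet):

case AppConstants.ITEM_REQUESTED:
  var item = SomeStore.getItem(action.itemId);
  if (item) {
    // defer the nested dispatch to the next tick,
    // after the current dispatch has completely finished
    setTimeout(() => selectItem(item), 0);
  } else {
    // the ajax callback already runs outside the dispatch, so this is fine
    $.getJSON(`some/${action.itemId}`, (item) => selectItem(item));
  }
  break;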
If ITEM_SELECT is an event that another Store is going to handle:
You are looking for dispatcher.waitFor(array<string> ids): void, which lets you use the SomeStore.dispatchToken that register() returns to enforce the order in which Stores handle an event.
The store, say we call it OtherStore, that would handle the ITEM_SELECT event, should instead handle ITEM_REQUEST event, but call dispatcher.waitFor( [ SomeStore.dispatchToken ] ) first, and then get whatever result is interesting from SomeStore via a public method, like SomeStore.getItem().
But from your example, it seems like SomeStore doesn't do anything to its internal state with ITEM_REQUEST, so you just need to move the following lines into OtherStore with a few minor changes:
// OtherStore.js
case AppConstants.ITEM_REQUESTED:
  // don't even do this if SomeStore isn't doing anything with ITEM_REQUEST
  dispatcher.waitFor([ SomeStore.dispatchToken ]);
  var item = SomeStore.getItem(action.itemId);
  if (item) {
    // Don't dispatch an event, let other stores handle this event, if necessary
    OtherStore.doSomethingWith(item);
  } else {
    // Will work
    $.getJSON(`some/${action.itemId}`, (item) => OtherStore.doSomethingWith(item));
  }
And again, if another store needs to handle the result of OtherStore.doSomethingWith(item), they can also handle ITEM_REQUESTED, but call dispatcher.waitFor( [ OtherStore.dispatchToken ] ) before proceeding.
So, in looking at your code, are you setting a "selected" property on the item so it will be checked/selected in your UI/Component? If so, just make that part of the function you are already in.
if (item) {
  item.selected = true;
  // we're done now, no need to create another Action at this point,
  // we have changed the state of our data, now alert the components
  // via emitChange()
  emitChange();
}
If you want to track the currently selected item in the Store, just have an ID or an object as a private var up there, and set it similarly.
var Store = (function(){
  var _currentItem = {};
  var _currentItemID = 1;

  function selectItem(item) {
    _currentItem = item;
    _currentItemID = item.id;
    emitChange();
  }

  (function() {
    Dispatcher.register(function(action) {
      switch (action.actionType) {
        case AppConstants.ITEM_REQUESTED:
          var item = SomeStore.getItem(action.itemId);
          if (item) {
            selectItem(item);
          } else {
            $.getJSON(`some/${action.itemId}`, (item) => selectItem(item));
          }
      }
    });
  })();

  return {
    getCurrentlySelectedItem: function() {
      return _currentItem;
    },
    getCurrentlySelectedItemID: function() {
      return _currentItemID;
    }
  };
})();
Ultimately, you don't have to create Actions for everything. Whatever the item is that you're operating on, it should be some domain entity, and it is your Store's job to manage the state of that specific entity. Having other internal functions is often a necessity, so just make selectItem(item) an internal function of your Store so you don't have to create a new Action to access it or use it.
Now, if you have cross-store concerns, and another Store cares about some specific change to some data in your initial Store, this is where the waitFor(ids) function will come in. It effectively blocks execution until the first Store is updated, then the other can continue executing, assured that the other Store's data is in a valid state.
I hope this makes sense and solves your problem; if not, let me know and hopefully I can zero in better.