We are building a new app using React/Redux which is rendered server side.
We wish to follow best practice for Redux and normalize our data on the server before it's passed into the initial state for the store.
For this example, let's say we have a generic 'Products' entity that can be quite complex. It is normalized at the root of our store, with page-level state kept in another object at the root. The structure and reducers follow the typical 'slice reducer' pattern and look like this:
{
page_x_state: PageReducer
products: ProductsReducer
}
We are using combineReducers to merge the reducers before passing them into the store.
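For illustration, a minimal sketch of that wiring (the import paths and the __PRELOADED_STATE__ global are placeholders; combineReducers/createStore are the standard Redux APIs):
import { combineReducers, createStore } from 'redux';
import PageReducer from './reducers/pageXState';   // placeholder path
import ProductsReducer from './reducers/products';  // placeholder path

const rootReducer = combineReducers({
  page_x_state: PageReducer,
  products: ProductsReducer
});

// preloadedState is the normalized data serialized by the server
const store = createStore(rootReducer, window.__PRELOADED_STATE__);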
Theoretical use case: We have a 'products' page that shows a list of basic product info. A user can click on a product to show a modal which then loads and shows the complete product data.
For the above example, the state sent from the server will contain only basic product models (3 or 4 fields). This is enough to render the table; fetching all product information at this point would be wasteful and not very performant.
When a user clicks a product we will do an AJAX call to fetch all data for that product. Once we have all data for the single product, should we update the instance in the products store with the full model? If so, we would then end up with a set of objects that could be in different states (some could have minimal fields, while some are full-blown objects with tens of fields). Is this the best way to handle it?
Also, I would be interested to hear any thoughts on managing different representations of the same underlying model on the server and how to map them to the Redux store (in Java, ideally).
EDIT:
To answer your first question explicitly: if your reducers are built correctly, your whole state tree should initialize with absolutely no data in it, but it should have the correct shape. Your reducers should always have a default return value; when rendering server side, Redux should only render the initial state.
After server-side rendering, when the store (now on the client) needs updating because of a user action, the state shape for all of your product data is already there (it's just that some of it will probably be default values). Rather than overwriting an object, you're just filling in the blanks, so to speak.
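As a minimal sketch (the shape is illustrative), a slice reducer whose default value defines that shape:
const initialProductsState = { by_id: {}, all_ids: [] };

function ProductsReducer(state = initialProductsState, action) {
  switch (action.type) {
    // real action handlers go here; the default keeps the shape intact
    default:
      return state;
  }
}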
Let's say your second-level view needs name, photo_url, price and brand, and the initial view has 4 products on it; your rendered store would look something like this:
{
products: {
by_id: {
"1": {
id: "1",
name: "Cool Product",
tags: [],
brand: "Nike",
price: 1.99,
photo_url: "http://url.com",
category: "",
product_state: 0,
is_fetching: 0,
etc: ""
},
"2": {
id: "2",
name: "Another Cool Product",
tags: [],
brand: "Adidas",
price: 3.99,
photo_url: "http://url2.com",
category: "",
product_state: 0,
is_fetching: 0,
etc: ""
},
"3": {
id: "3",
name: "Crappy Product",
tags: [],
brand: "Badidas",
price: 0.99,
photo_url: "http://urlbad.com",
category: "",
product_state: 0,
is_fetching: 0,
etc: ""
},
"4": {
id: "4",
name: "Expensive product",
tags: [],
brand: "Rolex",
price: 199.99,
photo_url: "http://url4.com",
category: "",
product_state: 0,
is_fetching: 0,
etc: ""
}
},
all_ids: ["1", "2", "3", "4"]
}
}
You can see in the above data that some keys are just empty strings or empty arrays, but we have the data we need for the actual initial rendering of the page.
We could then make asynchronous calls on the client in the background immediately after the server has rendered and the document is ready; chances are the server will return those initial calls before the user tries to get the data anyway. We can then load subsequent products on user request. I don't think that's the best approach, but it's the one that makes the most sense to me. Other people might have other ideas; it entirely depends on your app and use case.
I would only keep one products object in state though and keep ALL the data pertaining to products in there.
I recently deployed an app into production and I'll share some of my insights. The app, whilst not too large in size, had a complex data structure, and having gone through the whole process as a newbie to Redux in production (with guidance from my architect), these are some of our takeaways. There's no single right way in terms of architecture, but there certainly are some things to do or avoid.
1. Before firing into writing your reducers, design a 'static' state
If you don't know where you are going, you can't get there. Writing the whole structure of your state out flat will help you reason about how your state will change over time. We found this saved us time because we didn't have to rewrite large sections.
2. Designing your state
Keep it simple. The whole point of Redux is to simplify state management. We used a lot of the tips from the egghead.io tutorials on Redux created by Dan Abramov. They are clear and really helped solve a lot of issues we were encountering. I'm sure you've read the docs about normalising state, but the simple examples they give actually carried through to most data patterns we implemented.
Rather than creating complex webs of data, each chunk of data only held its own data; if it needed to reference another piece of data, it referenced it by id. We found this simple pattern covered most of our needs.
{
products: {
by_id: {
"1": {
id: "1",
name: "Cool Product",
tags: ["tag1", "tag2"],
product_state: 0,
is_fetching: 0,
etc: "etc"
}
},
all_ids: ["1"]
}
}
In the example above, tags might be another chunk of data with a similar structure using by_id and all_ids. All over the docs and tutorials, Abramov keeps referencing relational data and relational databases; this was actually key for us. At first we kept looking at the UI and designing our state around how we thought we were going to show it. When this clicked and we started grouping the data based on its relationship to other pieces of data, things started to fall into place.
Quickly flipping to your question: I would avoid duplicating any data. As mentioned in another comment, personally I'd simply create a key in the state object called product_modal and let the modal take care of its own state...
{
products: {
...
},
product_modal: {
current_product_id: "1",
is_fetching: true,
is_open: true
}
}
We found following this pattern with page state worked really well as well...we just treated it like any other piece of data with an id/name etc.
3. Reducer Logic
Make sure reducers keep track of their own state. A lot of our reducers looked quite similar; at first this felt like DRY hell, but then we quickly realised the power of more reducers. Say an action is dispatched and you want to update a whole chunk of state: no problem, just check in your reducer for the action and return the new state. If you only want to update one or two fields in the same state, you do the same thing but only change the fields you want (as in the sketch below). Most of our reducers were simply a switch statement with an occasional nested if statement.
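A rough sketch of what that looks like (action names and fields are illustrative, not our real ones):
function productReducer(state = {}, action) {
  switch (action.type) {
    case 'PRODUCT_RECEIVED':
      // replace the whole chunk for this id
      return { ...state, [action.product.id]: action.product };
    case 'PRODUCT_PRICE_CHANGED':
      // only touch the fields that changed
      return {
        ...state,
        [action.id]: { ...state[action.id], price: action.price }
      };
    default:
      return state;
  }
}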
Combining Reducers
We didn't use combineReducers; we wrote our own. It wasn't hard, it helped us understand what was going on in Redux, and it allowed us to get a little smarter with our state. This tutorial was invaluable.
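For reference, a hand-rolled version boils down to roughly this (a simplified sketch, not the exact code from the tutorial):
function combineReducers(reducers) {
  return function rootReducer(state = {}, action) {
    const nextState = {};
    // run every slice reducer against its own slice of state
    Object.keys(reducers).forEach(key => {
      nextState[key] = reducers[key](state[key], action);
    });
    return nextState;
  };
}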
Actions
Middleware is your friend. We used the fetch API with redux-thunk to make RESTful requests. We split the required data requests into separate actions that called store.dispatch() for each data chunk that needed updating for the call. Each dispatch dispatched another action to update state. This kept our state updates modular and allowed us to update large sections, or granularly, as needed (see the sketch below).
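A sketch of that pattern with redux-thunk (the endpoint, response shape and action names are made up for illustration):
const fetchProductsPage = () => async (dispatch) => {
  dispatch({ type: 'PRODUCTS_REQUEST' });
  try {
    const response = await fetch('/api/products');       // illustrative endpoint
    const { products, tags } = await response.json();    // illustrative shape
    dispatch({ type: 'PRODUCTS_RECEIVED', products });   // update the products chunk
    dispatch({ type: 'TAGS_RECEIVED', tags });           // update the tags chunk
  } catch (error) {
    dispatch({ type: 'PRODUCTS_REQUEST_FAILED', error });
  }
};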
Dealing with an API
OK, so there's way too much to deal with here. I'm not saying our way is the best, but it has worked for us. Cut short: we have an internal API in Java with publicly exposed endpoints. The calls from this API didn't always map to the front end easily. We haven't implemented this, but ideally an initial init endpoint could have been written on their end to return the lump of initial data needed to get things rolling on the front end, for speed's sake.
We created a public API on the same server as the app, written in PHP. This API abstracted the internal API's endpoints (and in some cases the data too) away from the front end and the browser.
When the app would make a GET request to /api/projects/all the PHP API would then call our internal API, get the necessary data (sometimes across a couple of requests) and return that data in a usable format that redux could consume.
This might not be the ideal approach for a JavaScript app, but we didn't have the option to create a new internal API structure; we needed to use one that has existed for several years, and we have found the performance acceptable.
should we update the instance in the products store with a full model
It should be noted that Java and ReactJS+Redux don't have much conceptual overlap. Everything is a JavaScript object, not an object with a class.
Generally, storing all the data you receive in the Redux store state is the way to go. To work around the fact that some of the data will be minimal and some will be fully loaded, you can make a conditional AJAX call in the componentDidMount method of the individual product display container.
import React from 'react';
import { connect } from 'react-redux';

class MyGreatProduct extends React.Component {
  componentDidMount() {
    // only fetch the full model if we currently hold the minimal representation
    if (!this.props.thisProduct.hasOwnProperty('somethingOnlyPresentInFullData')) {
      doAjaxCall(this.props.thisProduct.id).then((result) => {
        this.props.storeNewResult(result.data);
      }).catch(error => { /* ... */ });
    }
  }

  // the rest of the component container code
}

const mapStateToProps = (state, ownProps) => {
  return {
    thisProduct: state.products.productInfo[ownProps.selectedId] || { id: ownProps.selectedId }
  };
};

const mapDispatchToProps = (dispatch, ownProps) => {
  return {
    storeNewResult: (data) => { dispatch(productDataActions.fullProductData(data)); }
  };
};

export default connect(mapStateToProps, mapDispatchToProps)(MyGreatProduct);
With this code, it should be somewhat clear how agnostic the components and containers can be regarding the exact data available in the Store at any given time.
Edit: In terms of managing different representations of the same underlying model on the server and how to map it to the Redux store, I'd try to use the same relative looseness you are dealing with once you have JSON. This should eliminate some coupling.
What I mean by this is: just add the data you have to a JS object to be consumed by React + Redux, without worrying too much about what values could potentially be stored in the Redux state during the execution of the application.
There's probably no right answer, just which strategy you prefer:
The simplest strategy is to add another piece to your reducer called selectedProduct and always overwrite it with the full object of the currently selected product. Your modal would always display the details of the selectedProduct. The downfalls of this strategy are that you aren't caching data in the case when a user selects the same product a second time, and your minimal fields aren't normalized.
Or you could update the instance in your products store like you said; you'll just need logic to handle it. When you select a product, if it's fully loaded, render it. If not, make the AJAX call and show a spinner until it's fully loaded.
If you don't have a concern with storing that extra data in the Redux store, it's not actually going to hit your performance very much if you use a normalized state. So on that front I would recommend caching as much as you can without risking security.
I think the best solution for you would be to use some Redux middleware so your front end doesn't care how it gets the data. It will dispatch an action to the Redux store, and the middleware can determine whether or not it needs an AJAX call to get the new data. If it does need to fetch the data, the middleware can update the state when the AJAX call resolves; if it doesn't, it can just discard the action because you already have the data. This way you isolate the issue of having two different representations of the data in the middleware and implement a resolution there, so your front end just asks for data and doesn't care how it gets it (a sketch is below).
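Sketch of what such middleware could look like (the action types, state shape and endpoint are assumptions, not something from the question):
const productFetchMiddleware = store => next => action => {
  if (action.type !== 'PRODUCT_SELECTED') {
    return next(action);
  }
  const cached = store.getState().products.by_id[action.id];
  if (cached && cached.isFullyLoaded) {
    return next(action); // we already have the full representation, no AJAX needed
  }
  // hypothetical endpoint; fetch the full model, then dispatch the result
  return fetch(`/api/products/${action.id}`)
    .then(res => res.json())
    .then(product => store.dispatch({ type: 'PRODUCT_RECEIVED', product }));
};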
I don't know all the implementation details, so as Jeff said it's probably more about what you prefer, but I would definitely recommend adding some middleware to handle your AJAX calls if you haven't already; it should make interfacing with the store much simpler.
If you want to read more on middleware the Redux documentation is pretty good.
https://redux.js.org/docs/advanced/Middleware.html
You could store each entity as an object of its various representations. In the action creator that updates the entity, include the representation as an argument:
const receiveProducts = (payload = [], representation = 'summary') => ({
type: 'PRODUCTS_RECEIVED',
payload, representation
});
const productReducer = (state = {}, action) => {
  switch (action.type) {
    case 'PRODUCTS_RECEIVED': {
      const { payload, representation } = action;
      return {
        ...state,
        ...payload.reduce((next, entity) => {
          next[entity.id] = {
            ...next[entity.id],
            [representation]: entity
          };
          return next;
        }, {})
      };
    }
    default:
      return state;
  }
};
This means that whoever is calling receiveProducts() needs to know which representation is returned.
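For example, a caller might do something like this (the list variables and the selector are just one possible way to read a preferred representation back out):
// dispatching both representations
store.dispatch(receiveProducts(summaryList, 'summary'));
store.dispatch(receiveProducts([fullProduct], 'detail'));

// prefer the detail representation, fall back to the summary
const selectProduct = (productsState, id) =>
  productsState[id] && (productsState[id].detail || productsState[id].summary);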
Related
I'm using the Normalizing State Shape approach for my Redux store, and it looks like this:
entities: {
users: {
list: [], // <-- list of all my users
loading: false,
lastFetch: null,
},
}
I got stuck on what I should do if someone opens the website directly on a user's detail page, for example {WEBSITE_URL}/users/1. The Redux store is empty and I need to request only one entity. Should I:
fetch the whole list, put it in the Store and select one requested entity?
fetch only user #1, put it in the Store's user list (entities.users.list), set lastFetch to null (because if someone navigates to the list next, they will fetch the whole list again; clearly the previous list didn't have all users), and display user #1 from the list.
fetch only user #1, put it in the Store in a separate place, for example in a selected field of users:
entities: {
users: {
list: [],
loading: false,
lastFetch: null,
selected: null // <--- HERE
},
}
Which solution do you think is best? Do I need the selected field at all? The tutorials and courses I've seen don't mention this scenario, only how to fetch the list.
I'm having the same dilemma.
My approach is always option 3. I create a selected/single state to load data into, and one additional action (e.g. clearSelectedUser, clearSelectedPost) to clear data from the store on component unmount (a sketch is below).
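A minimal sketch of that slice, with illustrative action names:
const selectedUserReducer = (state = null, action) => {
  switch (action.type) {
    case 'USER_FETCH_SUCCEEDED':
      return action.user;
    case 'CLEAR_SELECTED_USER': // dispatched on component unmount
      return null;
    default:
      return state;
  }
};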
I'm also using Redux Saga to fetch data (do async operations), and this works well as a combo. I really like the idea of having neat components without async calls in them.
However, I also found it acceptable to use component state (with useState hook) and do data fetching from a component directly (without Redux Saga or the store) in this particular case (entity single page/screen).
Option 1 will not work if you get paginated data from your API. You'll just complicate things.
Option 2: I agree with you on that one.
My situation has 4 components nested within each other in this order: Products (page), ProductList, ProductListItem, and CrossSellForm.
Products executes a graphql query (using urql) as such:
const productsQuery = `
query {
products {
id
title
imageSrc
crossSells {
id
type
title
}
}
}
`;
...
const [response] = useQuery({
query: productsQuery,
});
const { data: { products = [] } = {}, fetching, error } = response;
...
<ProductList products={products} />
products returns an array of Product objects, each containing a field, crossSells, that returns an array of CrossSells. products is propagated downwards to CrossSellForm, which contains a mutation query that returns an array of CrossSells.
The problem is that when I submit the crossSellForm the request goes through successfully but the crossSells up in Products does not update, and the UI reflects stale data. This only happens when the initial fetch up in Products contains no crossSells, so the initial response looks something like this:
{
data: {
products: [
{
id: '123123',
title: 'Nice',
imageSrc: 'https://image.com',
crossSells: [],
__typename: "Product"
},
...
]
}
}
If there is an existing crossSell, there is no problem, the ui updates properly and the response looks like this:
{
data: {
products: [
{
id: '123123',
title: 'Nice',
imageSrc: 'https://image.com',
crossSells: [
{
id: 40,
title: 'Nice Stuff',
type: 'byVendor',
__typename: 'CrossSell'
}
],
__typename: "Product"
},
...
]
}
}
I read up a bit on urql's caching mechanism at https://formidable.com/open-source/urql/docs/basics/ and from what I understand it uses a document cache, so it caches the document based on __typename. If a query requests something with the same __typename it will pull it from the cache. If a mutation occurs with the same __typename it will invalidate all objects in the cache with that __typename, so the next time the user fetches an object with that __typename it will execute a network request instead of using the cache.
What I think is going on: in the initial situation, where there are products but no crossSells, the form submission is successful but the Products page does not update, because there is no reference to an object with a __typename of CrossSell. In the second situation there is, so it busts the cache, executes the query again, refreshes products and cross-sells, and the UI is properly updated.
I've really enjoyed the experience of using urql hooks with React components and want to continue but I'm not sure how I can fix this problem without reaching for another tool.
I've tried to force a re-render upon form submission using tips from "How can I force component to re-render with hooks in React?", but it runs into the same problem where Products fetches from the cache again and crossSells returns an empty array. I thought about changing urql's request policy to network-only, along with the forced re-render, but that seemed unnecessarily expensive, re-fetching every single time. The solution I'm trying out now is to move all the state into Redux, a single source of truth, so that any update to crossSells propagates properly. Although I'm sure it will work, it also means I'll trade in a lot of the convenience I had with hooks for standard Redux boilerplate.
How can I gracefully update Products with crossSells upon submitting the form within CrossSellForm, while still using urql and hooks?
core contributor here 👋
As you've already discovered, there's an open issue for this that details the inherent problem of our simple, default cache. It's a document cache, so it's kind of unsuitable for more complex tasks where normalisation can help.
When we have an empty array of data, there's no indication that a specific result needs to be refetched.
Instead of using the network-only policy you could try cache-and-network, but that doesn't solve the underlying issue that the operation (your query) is not invalidated by the mutation. So no refetch will be triggered.
I'd very much recommend Graphcache, our normalised cache, which you've also already discovered. At its minimum, with no configuration (!), it's actually a drop-in replacement that's already quite a bit smarter. https://github.com/FormidableLabs/urql-exchange-graphcache
The configuration for it is really just add-ons to teach it how to handle more tasks automatically. I'd be happy to help you in issues, here, or via Spectrum if you need to customise it. But my advice would be: give it a shot, because in the best case you'll have all your edge cases just working without any changes ✨
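For reference, a minimal no-configuration setup might look like this (assuming the urql v1-era exchange APIs):
import { createClient, dedupExchange, fetchExchange } from 'urql';
import { cacheExchange } from '@urql/exchange-graphcache';

const client = createClient({
  url: '/graphql', // your GraphQL endpoint
  // swap the default document cache for the normalised Graphcache
  exchanges: [dedupExchange, cacheExchange({}), fetchExchange],
});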
I have a simple chat application going on and the following stores:
MessageStore - messages of all users/chat groups
ChatGroupStore - all chat groups
UserStore - all users in general
I'm using immutable.js to store data. The thing is, MessageStore needs to use data from ChatGroupStore and UserStore; each message is constructed like this:
{
id: 10,
body: 'message body',
peer: {...} // UserStore or ChatGroupStore item - destination
author: {...} // UserStore or ChatGroupStore item - creator of the message
}
How am I supposed to update MessageStore items when ChatGroupStore and UserStore update?
I was using AppDispatcher.waitFor() like this:
MessageStore.dispatchToken = AppDispatcher.register(function(action) {
switch(action.actionType) {
case UserConstants.USER_UPDATE:
AppDispatcher.waitFor([
UserStore.dispatchToken
]);
// update message logic
break;
}
});
From my point of view I would have to wait for the UserStore to update, then find all the messages with the updated user and update them. But how do I find the updated peer? A search in UserStore by reference wouldn't be enough, since immutable data doesn't keep the same reference when data changes, so I would have to apply more queries. But then I would have to apply the query logic of other stores inside the MessageStore handler.
Currently I'm storing peers as a reference inside each message; maybe I should change to just:
{
id: 10,
peer: {
peerType: 'user', // chatGroup
peerId: 20
}
}
It would be great if anybody could shed some light on this. I'm really confused.
The best option I can see as a solution in all cases is not to keep related data nested and to avoid transformations on data that comes from the server; this will reduce the amount of work needed to keep the data up to date at all times. Then in your view, all you have to do is subscribe to changes and put together the necessary data.
Alternative to Flux
There's also a good and well-maintained state container called Redux, which I suggest everyone at least try. It has only one store and combines the whole state into a single deep object, although you can create each reducer separately. It also has a good way to integrate with React; see Usage with React.
I know this is a bit of an opinion question, and it's long, but I'm having trouble coming up with a good solution in Redux.
I'm building a level editor and I want to show the user whether or not the data has been modified since it was persisted to the server. First, consider the data:
chapters: [{
id: 1,
levelIds: [ 2 ]
}],
levels: [{
id: 2,
entityIds: [ 4, 5 ]
}],
entities: [{
id: 4, position:...
}, {
id: 5, position:...
}]
A chapter has multiple levels and a level has multiple entities. In the level editor, you edit full chapters as one item, so if any entity or level changes, the whole chapter is considered unsaved.
I want to track if the user has made any changes to the data since it was last persisted to the server. I want to show a * for example next to the chapter name if something has changed. Criteria:
Track unsaved (not persisted to server) status
Status must work with an undo/redo system
If some "nested" data is changed (like an entity position), the top level chapter must know it is unsaved, not the entity itself
I've explored a few options and I'll try to illustrate why I'm not sure if any solution is better than the others.
Option 1: Store an "unsaved" flag on each chapter
This solution involves storing an "unsaved" flag, possibly in a separate reducer, that's set to true on any modifications, and false when the chapter is saved to the server.
Problems
There are many actions I need to track, so this is a bit verbose. I also need to manually track which actions actually modify the chapter. It may look something like:
function isUnsavedReducer( state = {}, action ): Object {
  switch( action.type ) {
    case CHANGE_ENTITY_POSITION:
    case CHANGE_ENTITY_NAME:
    // ...etc
    case CHANGE_LEVEL_TITLE: {
      return {
        ...state,
        [ action.chapterId ]: true
      };
    }
    default:
      return state;
  }
}
Most of the actions don't know the chapterId. For example if I move an entity, the action looks like { entityId: 2, position: newPosition }. If I went this route I think I'd have to add the chapterId to all actions, even though they don't modify the chapter?
Option 2: Track the last chapter object saved
On the surface this looks simpler. Whenever the data is persisted, simply store the current in-memory chapter object:
function lastSavedReducer( state = {}, action ): Object {
  switch( action.type ) {
    case SAVE_CHAPTER: {
      return {
        ...state,
        [ action.chapterId ]: action.chapter
      };
    }
    default:
      return state;
  }
}
Then in the view to check if the current data is unsaved it's a strict equality check:
{ lastSaved[ currentChapterId ] === this.props.chapter ? 'Saved' : 'Unsaved' }
Problems:
The same as problem #2 from above. When I modify an entity position with a redux action, I don't modify the top level chapter itself. I'd have to modify all of my reducers like chapterReducer to return a new object (even though nothing actually changed). I could also store the "last persisted entities" object, but since all entities are held in one store, I couldn't track which chapters were unsaved, just that something was unsaved.
Is there an obvious solution I'm missing? Should I modify how my data is stored or my reducer setup? The normalized data in my reducers, and the many possible actions that can set "unsaved", make me unsure of the best way forward. Have you implemented something similar and already know the pitfalls or best way forward?
The problem is that you have normalized your data, and this data acts as the source of truth of your application.
Why do you want the edition events, that are not saved yet, to modify directly this normalized data?
It makes it hard to revert to saved state without refetching saved state from the backend (it shouldn't be required)
It is confusing because we don't know if the normalized data is saved or not (your current problem)
Best solution: keep unsaved actions in a list, but don't run them before save
Instead of modifying directly the normalized data, you could accumulate the edition actions in a list of your store.
At render time, you have the ability to project your Redux store state with this list, so your store state does not change, but you can still connect the current edition state to components. This technique is very similar to event sourcing and snapshots.
Imagine your store state looks like that:
const storeState = {
unsavedEditionActions: [a1,a2,a3],
normalizedData: {
chapters: [{
id: 1,
levelIds: [ 2 ]
}],
levels: [{
id: 2,
entityIds: [ 4, 5 ]
}],
entities: [{
id: 4, position:...
}, {
id: 5, position:...
}]
}
}
If a component needs to get the edition state, and the original state, you can do the reducing directly into mapStateToProps:
let Component = ...
Component = connect(state => {
return {
normalizedData: state.normalizedData,
normalizedDataEdited: state.unsavedEditionActions.reduce(myNormalizedDataReducer,state.normalizedData)
}
})(Component)
Your component will now receive both the currently saved normalized data and the current draft edition data. From there you can do your magic and shallow-compare the two data sets to know what has been edited.
On save, you empty the list and apply it to your normalized data.
On cancel, you just empty this list (no need to refetch, because you kept the pre-edition state intact).
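A sketch of that action-list slice under those rules (action types are illustrative):
function unsavedEditionActionsReducer(state = [], action) {
  switch (action.type) {
    case 'EDITION_ACTION_ADDED':
      return [...state, action.editionAction];
    case 'CHAPTER_SAVED':       // normalizedData's own reducer applies the edits here
    case 'EDITION_CANCELLED':
      return [];                // either way, the draft list is emptied
    default:
      return state;
  }
}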
I'd recommend using Reselect and ImmutableJS for better performance, but it should also work fine without them.
Note that this technique of maintaining a list of events is also what many people use for optimistic updates (i.e. giving immediate feedback after a user action, without waiting for server approval, but with the ability to revert to the previous state if the server does not approve the change). Some useful links:
Lee Byron's Render 2016 talk (24")
Redux-optimistic-ui
Note you are not forced to do the reducing of your action list in connect; you can also hold two entries directly in your store: normalizedData / normalizedDataEdited.
Other solution: build a graph to edit from normalized data, and compare initialGraph != editedGraph
Instead of modifying the normalized data directly on edition, you could build a temporary denormalized graph. You could then keep a copy of this graph, edit the copy, and shallow-compare the copy to the original graph to see if it was modified.
function levelEditionReducer( state = {}, action ) {
  switch( action.type ) {
    case LEVEL_DATA_LOADED: {
      const levelData = action.payload.levelData;
      return {
        initialGraph: levelData,
        editedGraph: levelData,
      };
    }
    case CHANGE_ENTITY_NAME: {
      const newEditedGraph = someFunctionToEditEntityName(state.editedGraph, action);
      return (newEditedGraph === state.editedGraph) ? state : { ...state, editedGraph: newEditedGraph };
    }
    // ...other edition actions
    default:
      return state;
  }
}
Then after edition, you can see that state.initialGraph != state.editedGraph and display a save button and a draft status.
Note this shallow compare only works if you deal very carefully with immutable data structures and make sure you don't update anything that does not have to be updated. If someFunctionToEditEntityName returns the same state as the input, it is important that the reducer returns the exact same state, because the action did not actually change anything!
Writing this kind of code is not simple with plain ES6, but if you use libraries like Immutable or updeep (or something similar), then when you try to set an object attribute to a value it already has, the library may short-circuit the operation and return you the original object.
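For example, with Immutable.js (as far as I know, set returns the same reference when the value is unchanged):
import { Map } from 'immutable';

const entity = Map({ id: 4, name: 'Door' });

const unchanged = entity.set('name', 'Door');
console.log(unchanged === entity); // true — no new object, the shallow compare stays equal

const changed = entity.set('name', 'Window');
console.log(changed === entity);   // false — the edit produced a new object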
Also note that you will detect your lack of care early, because the Save button would be displayed while it should not :) so you probably won't miss that.
Once the data is finally saved, you can merge the saved graph into your normalized data.
I am new to React and I don't know the best way to do this.
I have a list of cars, and clicking a row should slide to a full-page details view of that car.
My code structure is:
I have App, which renders two components: CarList and CarDetails. CarDetails is hidden initially. The reason I rendered CarDetails in App is that it's a massive fixed template, so I would like to render it once when the app is loaded and only update its data when a row is clicked.
CarList also renders CarRow component which is fine.
Now my problem is that I have a getDetails function on the CarRow component which makes a call to get the details based on the car id. How do I get the CarDetails component's data updated? I used
this.setState({itemDetails:data});
but it seems the state of CarRow is not the same object as the state in CarDetails.
Any help?
This is a fundamental issue that lots of thought and man-hours have gone into in order to try and solve. It probably can't be answered, except on a surface level, in a StackOverflow post. It's not React-centric, either. This is an issue across most applications, regardless of the framework you're using.
Since you asked in the context of React, you might consider reading into flux, which is the de-facto implementation of this one-way data-flow idea in concert with React. However, that architecture is by no means "the best". There are simply advantages and disadvantages to it like everything else.
Some people don't like the idea of the global "event bus" that flux proposes. If that's the case, you can simply implement your own intermediate data layer API that collects query callbacks and A) invokes the callbacks on any calls to save data and B) refreshes any appropriate queries to the server. For now, though, I'd stick with flux as it will give you an idea of the general principles involved in having the things that most people consider to be "good", like a single source of truth for your data, one way flow, etc.
To give a concrete example of the callback idea:
// data layer
const listeners = [];
const data = {
  save: save,
  query: query
};

function save(someData) {
  // save data to the server (saveToServer is a placeholder for your API call), and then...
  saveToServer(someData).then(saved => {
    // ...notify every registered listener so their queries "refresh"
    listeners.forEach(listener => listener(saved));
  });
}

function query(params, callback) {
  // remember the callback so future saves refresh it, then query the server with the params
  listeners.push(callback);
  queryServer(params).then(result => callback(result)); // queryServer is a placeholder too
}

// component
componentWillMount() {
  data.query(params, data => this.setState({ myData: data }));
},

save() {
  // when the save operation is complete, it will "refresh" the query above
  data.save(someData);
}
This is a very distilled example and doesn't address optimization, such as potential for memory leaks when moving to different views and invoking "stale" callbacks, however it should give you a general idea of another approach.
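One possible way to handle that cleanup, sticking with the same sketch (queryServer remains a placeholder), is to have query return an unsubscribe function and call it when the component unmounts:
function query(params, callback) {
  listeners.push(callback);
  queryServer(params).then(result => callback(result));
  return function unsubscribe() {
    // drop the callback so it is never invoked for a view that has gone away
    const index = listeners.indexOf(callback);
    if (index !== -1) listeners.splice(index, 1);
  };
}

// component
componentWillMount() {
  this.unsubscribe = data.query(params, data => this.setState({ myData: data }));
},

componentWillUnmount() {
  this.unsubscribe();
}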
The two approaches have the same policy (a single source of truth for data and one way data flow) but different implementations (global "event bus" which necessitates keeping track of events, or the simple callback method, which can necessitate a form of memory management).