How to feed dependencies between Flux stores using Immutable.js? - javascript

I have a simple chat application going on and the following stores:
MessageStore - messages of all users/chat groups
ChatGroupStore - all chat groups
UserStore - all users in general
I'm using immutable.js to store data. The thing is, MessageStore needs to use data from ChatGroupStore and UserStore, each message is constructed like this:
{
  id: 10,
  body: 'message body',
  peer: {...},  // UserStore or ChatGroupStore item - destination
  author: {...} // UserStore or ChatGroupStore item - creator of the message
}
How am I supposed to update MessageStore items when ChatGroupStore and UserStore update?
I was using AppDispatcher.waitFor() like this:
MessageStore.dispatchToken = AppDispatcher.register(function(action) {
  switch (action.actionType) {
    case UserConstants.USER_UPDATE:
      AppDispatcher.waitFor([
        UserStore.dispatchToken
      ]);
      // update message logic
      break;
  }
});
From my point of view I would have to wait for the UserStore to update, then find all the messages with the updated user and update them. But how do I find the updated peer? A search in UserStore by reference wouldn't be enough, since immutable data doesn't keep the same reference when it changes, so I would have to run real queries. But then I would be applying the query logic of other stores inside the MessageStore handler.
Currently I'm storing peers as a reference inside each message; maybe I should change to just:
{
  id: 10,
  peer: {
    peerType: 'user', // or 'chatGroup'
    peerId: 20
  }
}
It would be great if anybody could shed some light on this. I'm really confused.

The best solution I can see for all cases is not to keep related data nested, and to avoid transforming the data that comes from the server; this reduces the amount of work needed to keep the data up to date at all times. Then, in your view, all you have to do is subscribe to changes and put the necessary data together.
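In practice that means each message keeps only a peer reference (type plus id) and the view looks the peer up when it renders. A minimal sketch, assuming hypothetical get() helpers on each store:
// Hypothetical helper: resolve a peer reference against the right store
function resolvePeer(peerRef) {
  return peerRef.get('peerType') === 'user'
    ? UserStore.get(peerRef.get('peerId'))       // assumed getter
    : ChatGroupStore.get(peerRef.get('peerId')); // assumed getter
}

// In the message view, subscribe to MessageStore, UserStore and ChatGroupStore,
// then compose the data at render time:
render() {
  const message = MessageStore.get(this.props.messageId); // assumed getter, returns an Immutable Map
  const peer = resolvePeer(message.get('peer'));
  return <div>{message.get('body')} (to: {peer.get('name')})</div>;
}
With this shape a USER_UPDATE never needs to touch MessageStore at all; the views showing that user simply re-render with the fresh data.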
Alternative to Flux
There's also a good and well-maintained state container called Redux, which I suggest everyone at least try. It has only one store and combines the whole state into a single deep object, although you can still write each reducer separately. It also integrates well with React; see Usage with React.

Related

How do I update the graphql cache with urql upon a mutation, where the initial query response does not include the required __typename?

My situation has 4 components nested within each other in this order: Products (page), ProductList, ProductListItem, and CrossSellForm.
Products executes a graphql query (using urql) as such:
const productsQuery = `
  query {
    products {
      id
      title
      imageSrc
      crossSells {
        id
        type
        title
      }
    }
  }
`;
...
const [response] = useQuery({
  query: productsQuery,
});
const { data: { products = [] } = {}, fetching, error } = response;
...
<ProductList products={products} />
The products query returns an array of Product objects, each with a crossSells field holding an array of CrossSell objects. products is propagated down to CrossSellForm, which contains a mutation that returns an array of CrossSells.
The problem is that when I submit the CrossSellForm, the request goes through successfully, but the crossSells up in Products do not update and the UI reflects stale data. This only happens when the initial fetch up in Products contains no crossSells, i.e. the initial response looks something like this:
{
  data: {
    products: [
      {
        id: '123123',
        title: 'Nice',
        imageSrc: 'https://image.com',
        crossSells: [],
        __typename: "Product"
      },
      ...
    ]
  }
}
If there is an existing crossSell, there is no problem; the UI updates properly and the response looks like this:
{
  data: {
    products: [
      {
        id: '123123',
        title: 'Nice',
        imageSrc: 'https://image.com',
        crossSells: [
          {
            id: 40,
            title: 'Nice Stuff',
            type: 'byVendor',
            __typename: 'CrossSell'
          }
        ],
        __typename: "Product"
      },
      ...
    ]
  }
}
I read up a bit on urql's caching mechanism at https://formidable.com/open-source/urql/docs/basics/ and from what I understand it uses a document cache: results are cached based on the __typename values they contain. If a query requests something with the same __typename, it is served from the cache. If a mutation touches that __typename, all cached results containing it are invalidated, so the next time the user fetches an object with that __typename it executes a network request instead of reading the cache.
What I think is going on: in the initial situation there are products but no crossSells, so the form submission is successful but the Products page does not update, because the cached query result holds no object with __typename CrossSell. In the second situation it does, so the mutation busts the cache, the query executes again, products and cross-sells are refreshed, and the UI updates properly.
I've really enjoyed the experience of using urql hooks with React components and want to continue but I'm not sure how I can fix this problem without reaching for another tool.
I've tried to force a re-render upon form submission using tips from How can I force component to re-render with hooks in React?, but it runs into the same problem: Products fetches from the cache again and crossSells comes back as an empty array. I thought about switching urql's requestPolicy to network-only along with the forced re-render, but re-fetching every single time seemed unnecessarily expensive. The solution I'm trying out now is to move all the state into Redux, a single source of truth, so that any update to crossSells propagates properly. I'm sure it will work, but it also means trading in a lot of the convenience of hooks for standard Redux boilerplate.
How can I gracefully update Products with crossSells upon submitting the form within CrossSellForm, while still using urql and hooks?
core contributor here 👋
As you've already discovered, there's an open issue for this that details the inherent problem of our simple, default cache. It's a document cache, so it's somewhat unsuitable for more complex tasks where normalisation can help.
When we have an empty array of data, there's no indication that a specific result needs to be refetched.
Instead of using the network-only policy you could try cache-and-network, but that doesn't solve the underlying issue: the operation (your query) is not invalidated by the mutation, so no refetch will be triggered.
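For reference, the request policy can be set per query; a sketch using the existing productsQuery:
const [response] = useQuery({
  query: productsQuery,
  requestPolicy: 'cache-and-network', // serve cached data, then refetch in the background
});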
I'd very much recommend Graphcache, our normalised cache, which you've also already discovered. At its minimum, with no configuration (!), it's actually a drop-in replacement that's already quite a bit smarter. https://github.com/FormidableLabs/urql-exchange-graphcache
The configuration for it is really just add-ons that teach it how to handle more tasks automatically! I'd be happy to help you in issues, here, or via Spectrum if you need to customise it. But my advice would be: give it a shot, because in the best case all your edge cases will just work without any changes ✨
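The no-configuration drop-in looks roughly like this (a sketch; the endpoint URL is a placeholder):
import { createClient, dedupExchange, fetchExchange } from 'urql';
import { cacheExchange } from '@urql/exchange-graphcache';

const client = createClient({
  url: 'https://example.com/graphql', // placeholder endpoint
  // replace the default document cache with the normalised Graphcache
  exchanges: [dedupExchange, cacheExchange({}), fetchExchange],
});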

Redux/Java: Managing normalized data & multiple model representations per entity

We are building a new app using React/Redux which is rendered server side.
We wish to follow best practice for Redux and normalize our data on the server before it's passed into the initial state for the store.
For this example, let's say we have a generic 'Products' entity that can be quite complex and is normalized at the root of our store, with page-level state kept in another object at the root of the store. So the structure and reducers follow the typical 'slice reducer' pattern and look like this:
{
  page_x_state: PageReducer,
  products: ProductsReducer
}
We are using combineReducers to merge the reducers before passing them into the store.
Theoretical use case: We have a 'products' page that shows a list of basic product info. A user can click on a product to show a modal which then loads and shows the complete product data.
For the above example, the state sent from the server will contain only basic product models (3 or 4 fields); this is enough to render the table, and fetching all product information at this point would be wasteful and not very performant.
When a user clicks a product we will make an AJAX call to fetch all the data for that product. Once we have all the data for the single product, should we update its instance in the products store with the full model? If so, we would end up with a set of objects in different states (some with minimal fields, some full-blown objects with tens of fields). Is this the best way to handle it?
Also, I would be interested to hear any thoughts on managing different representations of the same underlying model on the server and how to map them to the Redux store (in Java, ideally).
EDIT:
Explicitly answering your first question: if your reducers are built up correctly, your whole state tree should initialize with absolutely no data in it, but it should have the correct shape. Your reducers should always have a default return value; when rendering server side, Redux should only render the initial state.
After server-side rendering, when the store (now on the client) needs updating because of a user action, the state shape for all of your product data is already there (it's just that some of it will probably hold default values). Rather than overwriting an object, you're just filling in the blanks, so to speak.
Let's say your second-level view needs name, photo_url, price and brand, and the initial view has 4 products on it; your rendered store would look something like this:
{
  products: {
    by_id: {
      "1": {
        id: "1",
        name: "Cool Product",
        tags: [],
        brand: "Nike",
        price: 1.99,
        photo_url: "http://url.com",
        category: "",
        product_state: 0,
        is_fetching: 0,
        etc: ""
      },
      "2": {
        id: "2",
        name: "Another Cool Product",
        tags: [],
        brand: "Adidas",
        price: 3.99,
        photo_url: "http://url2.com",
        category: "",
        product_state: 0,
        is_fetching: 0,
        etc: ""
      },
      "3": {
        id: "3",
        name: "Crappy Product",
        tags: [],
        brand: "Badidas",
        price: 0.99,
        photo_url: "http://urlbad.com",
        category: "",
        product_state: 0,
        is_fetching: 0,
        etc: ""
      },
      "4": {
        id: "4",
        name: "Expensive product",
        tags: [],
        brand: "Rolex",
        price: 199.99,
        photo_url: "http://url4.com",
        category: "",
        product_state: 0,
        is_fetching: 0,
        etc: ""
      }
    },
    all_ids: ["1", "2", "3", "4"]
  }
}
You can see in the above data that some keys are just empty strings or an empty array, but we have the data we need for the actual initial rendering of the page.
We could then make asynchronous calls on the client in the background, immediately after the server has rendered and the document is ready; chances are the server will return those initial calls before the user tries to get the data anyway. We can then load subsequent products on user request. I don't think that's the best approach, but it's the one that makes most sense to me. Other people might have other ideas; it entirely depends on your app and use case.
I would only keep one products object in state though and keep ALL the data pertaining to products in there.
I recently deployed an app into production and I'll share some of my insights. The app, whilst not too large in size, had a complex data structure, and I went through the whole process as a newbie to Redux in production (with guidance from my architect). These are some of our takeaways. There's no right way in terms of architecture, but there certainly are some things to do or avoid.
1. Before firing into writing your reducers, design a 'static' state
If you don't know where you are going, you can't get there. Writing the whole structure of your state out flat will help you reason about how your state will change over time. We found this saved us time because we didn't have to really rewrite large sections.
2. Designing your state
Keep it simple. The whole point of Redux is to simplify state management. We used a lot of the tips from the egghead.io tutorials on Redux created by Dan Abramov. They are clear and really helped solve a lot of issues we were encountering. I'm sure you've read the docs about normalising state, but the simple examples they give actually carried through to most data patterns we implemented.
Rather than creating complex webs of data, each chunk of data only holds its own data; if it needs to reference another piece of data, it references it by id. We found this simple pattern covered most of our needs.
{
  products: {
    by_id: {
      "1": {
        id: "1",
        name: "Cool Product",
        tags: ["tag1", "tag2"],
        product_state: 0,
        is_fetching: 0,
        etc: "etc"
      }
    },
    all_ids: ["1"]
  }
}
In the example above, tags might be another chunk of data with a similar structure using by_id and all_ids. All over the docs and tutorials, Abramov keeps referencing relational data and relational databases, and this was actually key for us. At first we kept looking at the UI and designing our state around how we thought we were going to show it. When this clicked and we started grouping the data based on its relationship to other pieces of data, things started to fall into place.
Quickly flipping to your question: I would avoid duplicating any data. As mentioned in another comment, personally I'd simply create a key in the state object called product_modal and let the modal take care of its own state...
{
  products: {
    ...
  },
  product_modal: {
    current_product_id: "1",
    is_fetching: true,
    is_open: true
  }
}
We found that following this pattern for page state worked really well too; we just treated it like any other piece of data with an id/name etc.
3. Reducer Logic
Make sure reducers keep track of their own state. A lot of our reducers looked quite similar; at first this felt like DRY hell, but then we quickly realised the power of having more reducers. Say an action is dispatched and you want to update a whole chunk of state: no problem, just check for the action in your reducer and return the new state. If you only want to update one or two fields in the same state, you do the same thing but change only the fields you want. Most of our reducers were simply a switch statement with an occasional nested if statement.
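As an illustration of that shape (a sketch with hypothetical action types, following the by_id / all_ids structure above):
// Hypothetical products slice reducer
const initialState = { by_id: {}, all_ids: [] };

function productsReducer(state = initialState, action) {
  switch (action.type) {
    case 'PRODUCTS_RECEIVED':
      // replace the whole chunk at once
      return { ...state, ...action.payload };
    case 'PRODUCT_FIELD_UPDATED': {
      // update only one or two fields on a single product
      const { id, fields } = action.payload;
      return {
        ...state,
        by_id: { ...state.by_id, [id]: { ...state.by_id[id], ...fields } }
      };
    }
    default:
      return state;
  }
}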
Combining Reducers
We didn't use combineReducers; we wrote our own. It wasn't hard, it helped us understand what was going on in Redux, and it allowed us to get a little smarter with our state. This tut was invaluable.
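A hand-rolled combineReducers is only a few lines; a minimal sketch:
// Minimal hand-rolled equivalent of combineReducers
function combineReducers(reducers) {
  return function rootReducer(state = {}, action) {
    const nextState = {};
    Object.keys(reducers).forEach(key => {
      // each slice reducer only ever sees its own slice of state
      nextState[key] = reducers[key](state[key], action);
    });
    return nextState;
  };
}

// Usage:
// const rootReducer = combineReducers({ page_x_state: PageReducer, products: ProductsReducer });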
Actions
Middleware is your friend. We used the fetch API with redux-thunk to make RESTful requests. We split the required data requests into separate actions which called store.dispatch() for each data chunk that needed updating for the call. Each dispatch dispatched another action to update state. This kept our state updates modular and allowed us to update large sections, or update granularly, as needed.
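A sketch of that pattern, assuming redux-thunk is installed; the endpoint and action types are hypothetical:
// Hypothetical thunk action creator using fetch + redux-thunk
function fetchProduct(id) {
  return function (dispatch) {
    dispatch({ type: 'PRODUCT_FETCH_STARTED', payload: id });
    return fetch(`/api/products/${id}`) // placeholder endpoint
      .then(res => res.json())
      .then(data => dispatch({ type: 'PRODUCT_FETCH_SUCCEEDED', payload: data }))
      .catch(error => dispatch({ type: 'PRODUCT_FETCH_FAILED', payload: { id, error } }));
  };
}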
Dealing with an API
Ok, so there's way too much to deal with here. I'm not saying our way is the best, but it has worked for us. Cut short: we have an internal API in Java with publicly exposed endpoints. The calls from this API didn't always map to the front end easily. We haven't implemented this, but ideally an initial init endpoint could have been written on their end to return the lump of initial data needed to get things rolling on the front end, for speed's sake.
We created a public API on the same server as the app, written in PHP. This API abstracted the internal API's endpoints (and in some cases the data too) away from the front end and the browser.
When the app makes a GET request to /api/projects/all, the PHP API calls our internal API, gets the necessary data (sometimes across a couple of requests) and returns it in a format Redux can consume.
This might not be the ideal approach for a JavaScript app, but we didn't have the option of creating a new internal API structure; we needed to use one that has existed for several years, and we have found the performance acceptable.
should we update the instance in the products store with a full model
It should be noted that Java and ReactJS+Redux don't have much conceptual overlap: everything is a JavaScript object, not an object with a class.
Generally, storing all the data you receive in the Redux store state is the way to go. To work around the fact that some of the data will be minimal and some fully loaded, you can make a conditional AJAX call in the componentWillMount method of the individual product display container.
class MyGreatProduct extends React.Component {
  componentWillMount() {
    // only fetch the full model if this product is still the minimal version
    if (!this.props.thisProduct.hasOwnProperty('somethingOnlyPresentInFullData')) {
      doAjaxCall(this.props.thisProduct.id).then((result) => {
        this.props.storeNewResult(result.data);
      }).catch(error => { /* ... */ });
    }
  }
  // the rest of the component container code
}

const mapStateToProps = (state, ownProps) => {
  return {
    thisProduct: state.products.productInfo[ownProps.selectedId] || { id: ownProps.selectedId }
  };
};

const mapDispatchToProps = (dispatch, ownProps) => {
  return {
    storeNewResult: (data) => { dispatch(productDataActions.fullProductData(data)) }
  };
};

export default connect(mapStateToProps, mapDispatchToProps)(MyGreatProduct);
With this code, it should be somewhat clear how agnostic the components and containers can be regarding the exact data available in the Store at any given time.
Edit: In terms of managing different representations of the same underlying model on the server and how to map it to the Redux store, I'd try to use the same relative looseness you are dealing with once you have JSON. This should eliminate some coupling.
What I mean by this is: just add the data you have to a JS object to be consumed by React + Redux, without worrying too much about what values could potentially be stored in the Redux state during the execution of the application.
There's probably no right answer, just which strategy you prefer:
The simplest strategy is to add another piece to your reducer called selectedProduct and always overwrite it with the full object of the currently selected product. Your modal would always display the details of the selectedProduct. The downsides of this strategy are that you aren't caching data for the case when a user selects the same product a second time, and your minimal fields aren't normalized.
Or you could update the instance in your products store, as you said; you'll just need logic to handle it. When you select a product, if it's fully loaded, render it. If not, make the AJAX call and show a spinner until it's fully loaded.
If you don't have a concern with storing extra data in the Redux store, it won't actually hurt your performance much if you use a normalized state. So on that front I would recommend caching as much as you can without risking security.
I think the best solution for you would be to use some Redux middleware, so your front end doesn't care how it gets the data. It dispatches an action to the Redux store, and the middleware determines whether or not an AJAX call is needed to get the new data. If it does need to fetch the data, the middleware updates the state when the AJAX call resolves; if it doesn't, it can just discard the action because you already have the data. This way you isolate the issue of having two different representations of the data to the middleware and implement a resolution for it there, so your front end just asks for data and doesn't care how it gets it.
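A rough sketch of such a middleware, assuming the by_id shape shown earlier in this thread; the action types, endpoint and fullyLoaded marker are hypothetical:
// Hypothetical "fetch if missing" middleware
const productLoader = store => next => action => {
  if (action.type !== 'PRODUCT_SELECTED') return next(action);

  const cached = store.getState().products.by_id[action.payload.id];
  if (cached && cached.fullyLoaded) {
    // already have the full model, so just let the action through unchanged
    return next(action);
  }

  // otherwise fetch the full model, then update the store when it resolves
  fetch(`/api/products/${action.payload.id}`) // placeholder endpoint
    .then(res => res.json())
    .then(data => store.dispatch({ type: 'PRODUCT_FETCH_SUCCEEDED', payload: data }));

  return next(action);
};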
I don't know all the implementation details, so as Jeff said it's probably more about what you prefer, but I would definitely recommend adding some middleware to handle your AJAX calls if you haven't already; it should make interfacing with the store much simpler.
If you want to read more on middleware the Redux documentation is pretty good.
https://redux.js.org/docs/advanced/Middleware.html
You could store each entity as an object of its various representations. In the action creator that updates the entity, include the representation as an argument:
const receiveProducts = (payload = [], representation = 'summary') => ({
  type: 'PRODUCTS_RECEIVED',
  payload,
  representation
});

const productReducer = (state = {}, action) => {
  switch (action.type) {
    case 'PRODUCTS_RECEIVED': {
      const { payload, representation } = action;
      return {
        ...state,
        ...payload.reduce((next, entity) => {
          // keep any representations already stored for this entity
          next[entity.id] = {
            ...state[entity.id],
            [representation]: entity
          };
          return next;
        }, {})
      };
    }
    default:
      return state;
  }
};
This means that whoever is calling receiveProducts() needs to know which representation is returned.
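For instance, usage might look like this (hypothetical calls from a thunk or connected component):
// After fetching the lightweight list for the products page:
dispatch(receiveProducts(summaryList));             // stored under the default 'summary' key

// After fetching one full product for the modal:
dispatch(receiveProducts([fullProduct], 'detail')); // stored under 'detail' for that id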

What is the right/preferred way to make a "Edit Detail" component in React?

I'm working on a page whose 'Data Model' is a collection, for example, an array of people. They are packed into React Components and tiled on the page. Essentially it's like:
class App extends React.Component {
  constructor() {
    super();
    this.state = { people: /* some data */ };
  }
  render() {
    return (
      <div>
        {this.state.people.map((person) =>
          <People data={person}></People>)}
      </div>
    );
  }
}
Now I want to attach an edit section for each entry in <People> component, which allows the user to update the name, age ... all kinds of information for a specific entry.
Since React does not support mutating props inside components, I searched and found that adding callbacks as props can solve the problem of passing data back to the parent. But since there are many fields to update, there would be many callbacks such as onNameChanged, onEmailChanged..., which could get very ugly (and more and more verbose as the number of fields grows).
So what is the right way for it?
Honestly? The best way is Flux (back to that in a minute).
If you start to get into the process of passing data down the tree in the form of props, then passing it back up to be edited using callbacks, then you're breaking the unidirectional data flow that React is built around.
However, not all projects need to be written to ideal standards and it is possible to build this without Flux (and sometimes it might even be the right solution).
Without Flux
You can implement this without a mass of callbacks by passing down a single edit function as a prop. This function should take an id and a new person object, then update the state inside the parent component whenever it runs. Here's an example.
editPerson(id, editedPerson) {
  const people = this.state.people;
  const newFragment = { [id]: editedPerson };
  // create a new list of people, with the updated person in
  this.setState({
    people: Object.assign([], people, newFragment)
  });
},
render() {
  // ...
  {this.state.people.map((person, index) => {
    const edit = this.editPerson.bind(this, index);
    return (
      <People data={person} edit={edit}></People>
    );
  })}
  // ...
}
Then inside your Person component, any time you make a change to the person, simply pass the updated person back up to the parent with that callback.
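A sketch of the child's side, showing just one field; the shape of the change handler is an assumption:
// Inside the People component
handleNameChange(event) {
  // build the edited person and hand it back up; the parent's editPerson
  // was bound to this row's index, so only the person needs to be passed
  const editedPerson = { ...this.props.data, name: event.target.value };
  this.props.edit(editedPerson);
}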
However, if you visualize the flow of data through your application, you've now created a cycle that looks something like this.
App
 ^
 |
 v
Person
It's no longer trivial to work out where the data in App came from (it is still quite simple in such a small app, but obviously the bigger it gets, the harder it is to tell).
With Flux
In the beginning, Facebook developers wrote React applications with unidirectional data flows and they saw that it was good. However, a need arose for data to go up the tree, which resulted in a crisis. How shall our data flow be unidirectional and still return to the top of the tree? And on the seventh day, they created Flux(1) and saw that it was very good.
Flux allows you to describe your changes as actions and pass them out of your components to stores (self-contained state boxes), which understand how to manipulate their state based on the action. Then the store tells all the components that care about it that something has changed, at which point the components can fetch new data to render.
You regain your unidirectional data flow, with an architecture that looks like this.
App  <----  [Stores]
 |             ^
 v             |
Person  -->  Dispatcher
Stores
Rather than keeping your state in your <App /> component, you would probably want to create a People store to keep track of your list of people.
Maybe it would look something like this.
// stores/people-store.js
const people = [];

export function getPeople() {
  return people;
}

function editPerson(id, person) {
  // ...
}

function addPerson(person) {
  // ...
}

function removePerson(id) {
  // ...
}
Now, we could export these functions and let our components call them directly, but that's bad because it means that our components have to have knowledge of the design of the store and we want to keep them as dumb as possible.
Actions
Instead, our components create simple, serializable actions that our stores can understand. Here are some examples:
// remove person with id 53
{ type: 'PEOPLE_REMOVE', payload: 53 }
// create a new person called John Foo
{ type: 'PEOPLE_ADD', payload: { name: 'John Foo' } }
// edit person 13
{
  type: 'PEOPLE_EDIT',
  payload: {
    id: 13,
    person: { name: 'Unlucky Bill' }
  }
}
These actions don't have to have these specific keys, and they don't even have to be objects; this is just the convention from Flux Standard Actions.
Dispatcher
Now we have to tell our store how to deal with these actions when they arrive.
// stores/people-store.js
// ...
dispatcher.register(function(action) {
  switch (action.type) {
    case 'PEOPLE_REMOVE':
      removePerson(action.payload);
      break;
    case 'PEOPLE_ADD':
      addPerson(action.payload);
      break;
    case 'PEOPLE_EDIT':
      editPerson(action.payload.id, action.payload.person);
      break;
  }
});
Phew. Lot of work so far, nearly there.
Now we can start to dispatch these actions from our components.
// components/people.js
// ...
onEdit(editedPerson) {
  dispatcher.dispatch({
    type: 'PEOPLE_EDIT',
    payload: {
      id: this.props.id,
      person: editedPerson
    }
  });
}

onRemove() {
  dispatcher.dispatch({
    type: 'PEOPLE_REMOVE',
    payload: this.props.id
  });
}
// ...
When you edit the person, call the this.onEdit method and it will dispatch the appropriate action to your stores. The same goes for removing a person. Normally you'd move this stuff into action creators, but that's a topic for another time.
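For reference, extracting those dispatches into action creators is a small step; a sketch using the same action types (the module path is hypothetical):
// actions/people-actions.js (hypothetical module)
import dispatcher from '../dispatcher';

export function editPerson(id, person) {
  dispatcher.dispatch({ type: 'PEOPLE_EDIT', payload: { id, person } });
}

export function removePerson(id) {
  dispatcher.dispatch({ type: 'PEOPLE_REMOVE', payload: id });
}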
Ok, finally getting somewhere! Now our components can create actions that update the data in our stores. How do we get that data back into our components?
Initially, it's very simple. We can require the store in our top level component and simply ask for the data.
// components/app.js
import { getPeople } from './stores/people-store';
// ...
constructor() {
  super();
  this.state = { people: getPeople() };
}
We can pass this data down in exactly the same way, but what happens when the data changes?
The official stance from Flux is basically "not our problem". Their examples use Node's EventEmitter class to allow stores to accept callback functions that are called when the store updates.
This allows you to write code that looks something like this:
componentWillMount() {
  peopleStore.addListener(this.peopleUpdated);
},
componentWillUnmount() {
  peopleStore.removeListener(this.peopleUpdated);
},
peopleUpdated() {
  this.setState({ people: getPeople() });
}
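For completeness, a sketch of the store side of that contract, assuming Node's events module; addListener/removeListener are thin wrappers around a change event:
// stores/people-store.js - emitter side (sketch)
import { EventEmitter } from 'events';

const emitter = new EventEmitter();

export function addListener(callback) {
  emitter.on('change', callback);
}

export function removeListener(callback) {
  emitter.removeListener('change', callback);
}

// call this at the end of each dispatcher case, once the store's state has changed
function emitChange() {
  emitter.emit('change');
}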
Really, the ball is in your court on this one. There are many other strategies for getting the data back into your components. Reflux creates the listen method for you automatically; Redux lets you declaratively specify which components receive which parts of the store as props, then handles the updating. Spend enough time with Flux and you'll find a preference.
Now, you're probably thinking: blimey, this seems like a lot of effort to go to just to add edit functionality to a component. And you're right, it is!
For small applications, you probably don't need Flux.
Sure there are lots of benefits, but the additional complexity just isn't always warranted. As your application grows, you'll find that if you've fluxed it up, it will be much easier to manage, maintain and debug.
The trick is to know when it's appropriate to use the Flux architecture and hopefully when the time comes, this overly long, rambling answer will have cleared things up for you.
(1) This isn't actually true.

Flux store dependency with async actions

I'm having problems understanding the best way to do this using the Flux pattern. Say, for example, that I have a userStore and I listen to it. Once it has changed, I need to get user.name and access colors[user.name], but the colors object comes from another store, colorsStore. Here's the essence of it:
var self = {};

userStore.addListener(function(user) {
  // dependency with the colors store
  var color = self.colors[user.name];
});

colorsStore.addListener(function(colors) {
  self.colors = colors;
});

actions.getUser();   // modifies userStore
actions.getColors(); // modifies colorsStore
The problem is that the two actions are async (they get the data from an AJAX call for instance). With this in mind, the userStore might change before the self.colors variable is populated from the other store.
How is this handled using the Flux pattern? Does the Dispatcher help with this somehow? Sorry, I'm new to the Flux pattern. Intuitively I would simply call the async actions in the appropriate order, such as:
actions.getColors() // need to populate self.colors before running getUser()
  .then(() => actions.getUser());
But was wondering if there was a more Flux-way of doing this.
Your setup is fine from a Flux perspective.
Your component needs to be able to handle the different possible (store) states generated by your actions, which could include:
user store has old/no data, colors store already has the newest data
user store has the newest user data, colors store still has old data
If you want any of these states to be visible to the user in some way (e.g. show a loading indicator, or show the old/default color while waiting for the newest color), then the React/Flux way is to deal with these states inside your component.
If you do not want to show anything about these states to the user, you have two options:
inside your component, fire actions.getUser() from inside the colorsStore listener function (quick and dirty solution)
change the setup to prevent the unwanted store state from triggering a component update
For the second solution, you could typically:
have your component fire both actions
have both listeners trigger the same function, getStateFromStores()
have this function fetch state from both stores, and only update the component (setState()) if user and colors match (see the sketch below)
That way, your async calls can come back in any order.
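A minimal sketch of that getStateFromStores() idea, assuming hypothetical getters on both stores:
// inside the component; both store listeners call this
getStateFromStores: function() {
  var user = userStore.getUser();       // assumed getter
  var colors = colorsStore.getColors(); // assumed getter
  // only update the component once both pieces are present and match up
  if (user && colors && colors[user.name]) {
    this.setState({ user: user, color: colors[user.name] });
  }
}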
If I understand your problem correctly, you can use waitFor for this case. There's also the discussion about "waitFor vs combining stores into one", so a combined store could solve your problem as well.

Updating Data between two components in React

I am new to React and I don't know the best way to do this.
I have a list of cars, and clicking each row should slide to a full-page details view of that car.
My code structure is:
I have App, which renders two components: CarList and CarDetails. CarDetails is hidden initially. The reason I render CarDetails in App is that it's a massive fixed template, so I would like to render it once when App is loaded and only update its data when a row is clicked.
CarList also renders the CarRow component, which is fine.
Now my problem is: I have a getDetails function on the CarRow component which makes a call to get the details based on the car id. How do I get the CarDetails component's data to update? I used
this.setState({itemDetails:data});
but it seems the state of CarRow is not the same reference as the state in CarDetails.
Any help?
This is a fundamental issue that lots of thought and man-hours have gone into trying to solve. It probably can't be answered, except at a surface level, in a StackOverflow post. It's not React-specific, either; this is an issue across most applications, regardless of the framework you're using.
Since you asked in the context of React, you might consider reading into Flux, which is the de facto implementation of this one-way data-flow idea in concert with React. However, that architecture is by no means "the best"; there are simply advantages and disadvantages to it, like everything else.
Some people don't like the idea of the global "event bus" that Flux proposes. If that's the case, you can implement your own intermediate data-layer API that collects query callbacks and A) invokes the callbacks on any call to save data and B) refreshes any appropriate queries to the server. For now, though, I'd stick with Flux, as it will give you an idea of the general principles involved in having the things most people consider "good", like a single source of truth for your data, one-way flow, etc.
To give a concrete example of the callback idea:
// data layer
const listeners = [];
const data = {
  save: save,
  query: query
};

function save(someData) {
  // save data to the server (saveToServer is a stand-in for the actual AJAX call),
  // then notify every registered query callback with the fresh data
  saveToServer(someData).then(data => {
    listeners.forEach(listener => listener(data));
  });
}

function query(params, callback) {
  // query the server with the params (queryServer is a stand-in for the actual AJAX call),
  // call back with the result, and remember the callback so later saves can "refresh" it
  listeners.push(callback);
  queryServer(params).then(result => callback(result));
}

// component
componentWillMount() {
  data.query(params, data => this.setState({ myData: data }));
},
save() {
  // when the save operation is complete, it will "refresh" the query above
  data.save(someData);
}
This is a very distilled example and doesn't address optimization, such as the potential for memory leaks when moving to different views and invoking "stale" callbacks; however, it should give you a general idea of another approach.
The two approaches have the same policy (a single source of truth for data and one-way data flow) but different implementations (a global "event bus", which necessitates keeping track of events, or the simple callback method, which can necessitate a form of memory management).
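One way to address the stale-callback concern, sketched under the same assumptions as above: have query return an unsubscribe function that the component calls in componentWillUnmount.
function query(params, callback) {
  listeners.push(callback);
  queryServer(params).then(result => callback(result)); // same stand-in as above
  // let the caller deregister, so unmounted components stop receiving refreshes
  return function unsubscribe() {
    const index = listeners.indexOf(callback);
    if (index !== -1) listeners.splice(index, 1);
  };
}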
