Where do sockets fit into the Flux unidirectional data flow? I have read about two schools of thought on where remote data should enter the Flux unidirectional data flow. The way I have seen remote data fetched for a Flux app is that a server-side call is made, for example, in a promise that is then resolved or rejected. Three possible actions could fire during this process:
An initial action for optimistically updating the view (FooActions.BAR)
A success action for when an asynchronous promise is resolved (FooActions.BAR_SUCCESS)
An error action for when an asynchronous promise is rejected (FooActions.BAR_ERROR)
The stores will listen for the actions and update the necessary data. I have seen the server-side calls made both from action creators and from within the stores themselves. I use action creators for the process described above, but I'm not sure whether data fetching via a WebSocket should be treated similarly. I was wondering where sockets fit into the standard Flux diagram (action → dispatcher → store → view).
There's really no difference in how you use Flux with WebSockets or plain old HTTP requests/polling. Your stores are responsible for emitting a change event when the application state changes, and it shouldn't be visible from the outside of the store if that change came from a UI interaction, from a WebSocket, or from making an HTTP request. That's really one of the main benefits of Flux in that no matter where the application state was changed, it goes through the same code paths.
Some Flux implementations tend to use actions/action creators for fetching data, but I don't really agree with that.
Actions are things that happen that modify your application state. They're things like "the user changed some text and hit save" or "the user deleted an item". Think of actions as the transaction log of a database. If you lost your database, but you had saved and serialized every action that ever happened, you could just replay all those actions and end up with the same state/database that you lost.
So things like "give me item with id X" and "give me all the items" aren't actions, they're questions, questions about that application state. And in my view, it's the stores that should respond to those questions via methods that you expose on those stores.
It's tempting to use actions/action creators for fetching because fetching needs to be async. And by wrapping the async stuff in actions, your components and stores can be completely synchronous. But if you do that, you blur the definition of what an action is, and it also forces you to assume that you can fit your entire application state in memory (because you can only respond synchronously if you have the answer in memory).
So here's how I view Flux and the different concepts.
Stores
This is obviously where your application state lives. The store encapsulates and manages the state and is the only place where mutation of that state actually happens. It's also where events are emitted when that state changes.
The stores are also responsible for communicating with the backend. The store communicates with the backend when the state has changed and that needs to be synced with the server, and it also communicates with the server when it needs data that it doesn't have in memory. It has methods like get(id), search(parameters) etc. Those methods are for your questions, and they all return promises, even if the state can fit into memory. That's important because you might end up with use cases where the state no longer fits in memory, or where it's not possible to filter in memory or do advanced searching. By returning promises from your question methods, you can switch between returning from memory or asking the backend without having to change anything outside of the store.
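As an illustration, here's a minimal sketch of such a store (the api client, the URLs, and the EventEmitter-based change event are assumptions for the example, not part of any particular Flux library):

var EventEmitter = require('events').EventEmitter;

var _todosById = {};              // in-memory cache of the application state
var emitter = new EventEmitter();

var TodoStore = {
  // "Question" methods always return promises, so callers never need to know
  // whether the answer came from memory or from the server.
  get: function (id) {
    if (_todosById[id]) {
      return Promise.resolve(_todosById[id]);
    }
    return api.get('/todos/' + id).then(function (todo) {
      _todosById[id] = todo;
      return todo;
    });
  },
  getAll: function () {
    return api.get('/todos').then(function (todos) {
      todos.forEach(function (todo) { _todosById[todo.id] = todo; });
      return todos;
    });
  },
  // Called when a mutating action arrives; the store persists the change,
  // updates its in-memory state, and emits a change event.
  createTodo: function (todo) {
    return api.post('/todos', todo).then(function (saved) {
      _todosById[saved.id] = saved;
      emitter.emit('change', Object.keys(_todosById).map(function (id) {
        return _todosById[id];
      }));
    });
  },
  onChange: function (listener) {
    emitter.on('change', listener);
  }
};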
Actions
My actions are very lightweight, and they don't know anything about persisting the mutation that they encapsulate. They simply carry the intention to mutate from the component to the store. For larger applications, they can contain some logic, but never things like server communication.
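For example, an action creator here does little more than forward the intent from the component to the store via the dispatcher (the dispatcher module and payload shape below are assumptions; Flux implementations differ):

var AppDispatcher = require('./AppDispatcher'); // your Flux dispatcher (assumed)

var TodoActions = {
  createNew: function (todo) {
    // No server communication here; the store decides how and when to persist it.
    AppDispatcher.dispatch({
      type: 'TODO_CREATE_NEW',
      todo: todo
    });
  }
};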
Components
These are your React components. They interact with stores by calling the question methods on the stores and rendering the return value of those methods. They also subscribe to the change event that the store exposes. I like using higher order components, which are components that just wrap another component and pass props to it. An example would be:
var TodoItemsComponent = React.createClass({
  getInitialState: function () {
    return {
      todoItems: null
    };
  },
  componentDidMount: function () {
    var self = this;
    TodoStore.getAll().then(function (todoItems) {
      self.setState({todoItems: todoItems});
    });
    TodoStore.onChange(function (todoItems) {
      self.setState({todoItems: todoItems});
    });
  },
  render: function () {
    if (this.state.todoItems) {
      return <TodoListComponent todoItems={this.state.todoItems} />;
    } else {
      return <Spinner />;
    }
  }
});
var TodoListComponent = React.createClass({
  createNewTodo: function () {
    TodoActions.createNew({
      text: 'A new todo!'
    });
  },
  render: function () {
    // JSX must return a single root element, so the list and the button
    // are wrapped in a <div>.
    return (
      <div>
        <ul>
          {this.props.todoItems.map(function (todo, index) {
            return <li key={index}>{todo.text}</li>;
          })}
        </ul>
        <button onClick={this.createNewTodo}>Create new todo</button>
      </div>
    );
  }
});
In this example the TodoItemsComponent is the higher order component and it wraps the nitty-gritty details of communicating with the store. It renders the TodoListComponent when it has fetched the todos, and renders a spinner before that. Since it passes the todo items as props to TodoListComponent that component only has to focus on rendering, and it will be re-rendered as soon as anything changes in the store. And the rendering component is kept completely synchronous. Another benefit is that TodoItemsComponent is only focused on fetching data and passing it on, making it very reusable for any rendering component that needs the todos.
Higher order components
The term higher order components comes from the term higher order functions. Higher order functions are functions that take other functions as arguments and/or return new functions. So a higher order component is, analogously, a component that just wraps another component and returns its output.
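A plain JavaScript illustration of a higher order function, unrelated to React:

// withLogging is a higher order function: it takes a function and returns a new one.
function withLogging(fn) {
  return function () {
    console.log('called with', arguments);
    return fn.apply(null, arguments);
  };
}

var add = function (a, b) { return a + b; };
var loggedAdd = withLogging(add);
loggedAdd(1, 2); // logs the arguments, then returns 3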
Related
I have a project where I use react-router v3 for only one reason: the need for server-side rendering with data prefetching. The most convenient way to do this is to keep a centralized route config in an object or array, loop over the matching elements, and fetch the data from the API on the server side. The data is later passed to the client with the response HTML and stored in a variable as a JSON string.
The application also uses code splitting; however, with babel-plugin-transform-ensure-ignore on the server side I can import the components directly instead of lazy loading them, and the native import method is used only on the client side.
Nevertheless, the above-mentioned structure doesn't work with react-router v5, and things get a little more difficult, since I can't use @loadable/component the way the official react-router documentation suggests. From my observation, @loadable/component just generates the HTML on the server side instead of giving me the components in which I implement the fetch method responsible for the server-side logic.
Therefore, I would like to ask for a good structure for webpack + react-router v5 + SSR + data prefetching + redux + code splitting.
I see it's quite complicated and there is no universal solution, but I may be wrong.
Any direction or suggestion is appreciated.
I have never tried @loadable/component, but I do similar stuff (SSR + code splitting + data pre-fetching) with a custom implementation of code splitting, and I believe you should change your data pre-fetching approach.
If I understand you correctly, your problem is that you are trying to intervene in the normal React rendering process, deducing in advance which components will be used in your render, and thus which data should be pre-fetched. Such intervention/deduction is simply not part of the React API, and although I have seen people use undocumented internal React machinery to achieve it, it is all fragile in the long run and prone to issues like the one you have.
I believe a much better, bullet-proof approach is to perform SSR as a few normal rendering passes, collecting in each pass the list of data to be pre-fetched, fetching it, and then repeating the render from the very beginning with the updated state. Let me try to explain with an example.
Say, a component <A> somewhere in your app tree depends on async-fetched data, which are supposed to be stored at some.path of your Redux store. Consider this:
1. Say you start with an empty Redux store, and you also have your SSR context (for that you may reuse StaticRouter's context, or create a separate one with React's Context API).
2. You do the very basic SSR of the entire app with ReactDOMServer.renderToString(..).
3. When the renderer arrives at the component <A> somewhere in your app's tree, no matter whether it is code-split or not, if everything is set up correctly that component will have access both to the Redux store and to the SSR context. So, if <A> sees that the current rendering happens on the server, and there is no data pre-fetched at some.path of the Redux store, <A> saves into the SSR context "a request to load that data" and renders some placeholder (or whatever makes sense to render without that data). By "a request to load that data" I mean that <A> can actually fire an async function which fetches the data, and push the corresponding data promise into a dedicated array in the context.
4. Once ReactDOMServer.renderToString(..) completes you'll have: a current version of the rendered HTML markup, and an array of data-fetching promises collected in the SSR context object. Here you do one of the following:
- If no promises were collected into the SSR context, your rendered HTML markup is final, and you can send it to the client along with the Redux store content.
- If there are pending promises, but SSR is already taking too long (counting from (1)), you can still send the current HTML and the current Redux store content, and rely on the client side to fetch any missing data and finish the render (thus trading SSR completeness for server latency).
- If you can wait, wait for all pending promises; add the fetched data to the correct locations of your Redux store; reset the SSR context; and then go back to (2), repeating the render from the beginning, but with the updated Redux store content.
You should see that, if implemented correctly, this works great with any number of different components relying on async data, no matter whether they are nested, and regardless of how exactly you implement code splitting, routing, etc. There is some overhead from the repeated render passes, but I believe it is acceptable.
A small code example, based on pieces of code I use:
SSR loop (original code):
const ssrContext = {
  // That's the initial content of "Global State". I use a custom library
  // to manage it with Context API; but similar stuff can be done with Redux.
  state: {},
};

let markup;
const ssrStart = Date.now();
for (let round = 0; round < options.maxSsrRounds; ++round) {
  // These resets are not in my original code, as they are done in my global
  // state management library.
  ssrContext.dirty = false;
  ssrContext.pending = [];

  markup = ReactDOM.renderToString((
    // With Redux, you'll have Redux store provider here.
    <GlobalStateProvider
      initialState={ssrContext.state}
      ssrContext={ssrContext}
    >
      <StaticRouter
        context={ssrContext}
        location={req.url}
      >
        <App />
      </StaticRouter>
    </GlobalStateProvider>
  ));

  if (!ssrContext.dirty) break;

  const timeout = options.ssrTimeout + ssrStart - Date.now();
  const ok = timeout > 0 && await Promise.race([
    Promise.allSettled(ssrContext.pending),
    time.timer(timeout).then(() => false),
  ]);
  if (!ok) break;

  // Here you should take data resolved by "ssrContext.pending" promises,
  // and place it into the correct paths of "ssrContext.state", before going
  // to the next SSR iteration. In my case, my global state management library
  // takes care of it, so I don't have to do it explicitly here.
}
// Here "ssrContext.state" should contain the Redux store content to send to
// the client side, and "markup" is the corresponding rendered HTML.
And the logic inside a component which relies on async data will look something like this:
function Component() {
  // Try to get the necessary async data from the Redux store.
  const data = useSelector(..);

  // react-router does not provide a hook for accessing the context,
  // and in my case I am getting it via my <GlobalStateProvider>, but
  // one way or another it should not be a problem to get it.
  const ssrContext = useSsrContext();

  // No necessary data in the Redux store.
  if (!data) {
    // We are at the server.
    if (ssrContext) {
      ssrContext.dirty = true;
      ssrContext.pending.push(
        // A promise which resolves to the data we need here.
      );
    // We are at the client side.
    } else {
      // Dispatch an action to load data into the Redux store,
      // as appropriate for your setup.
    }
  }

  return data ? (
    // Return the complete component render, which requires "data"
    // for rendering.
  ) : (
    // Return an appropriate placeholder (e.g. a "loading" indicator).
  );
}
First off, some description of what I need to achieve. I show information in the front-end (React) that mostly corresponds to database rows, and the user can do regular CRUD operations on those objects. However, I also add some dummy rows to the JSON that I send to the front-end, because some objects are "defaults" and should not be inserted into the database unless the user actually wants to modify them. So once a user wants to modify a "default" object in the front-end, I first have to POST that object to a specific endpoint that copies it from constants into the database, and only then follow up with the request to modify that row.
Secondly, the architecture around this. For storing the state of the rows in the front-end I'm using Redux via easy-peasy, and I have a thunk for doing the initial save before modifying. Once a user wants to edit a "default" object anywhere in the UI (there are about 20 different ways of editing an object), the flow of the program looks something like this:
1. User edits something and presses "save"
2. The thunk is called in the save function and awaited
3. The thunk POSTs to the backend to insert the object into the database and return the corresponding row
4. The backend responds with the IDs of the rows
5. The thunk calls an action and updates these objects in the store with the correct IDs
6. The thunk returns and control moves back to the modifying function
7. The modifying function makes another request with the correct IDs
8. The modifying function updates the store with the modified values
Now, the problem I run into is from step 5 to 7, because the component looks basically like this:
const Foo = () => {
  const insertToDatabaseIfNecessary = useStoreActions((actions) => actions.baz.insertIfNecessary);
  const items = useStoreState((state) => state.baz.items);

  const onSave = async () => {
    await insertToDatabaseIfNecessary();
    // do the actual modifying thing here
    axios.post(...items);
  };

  return (
    <button onClick={onSave}>Save!</button>
  );
};
If you know functional components better than I do, then you know that in onSave() the insertToDatabaseIfNecessary() call will update the values in the Redux store, but when we get to the actual modifying post(...items), the values that are POSTed are not the updated ones, because they only become available on the component's next render. They would be up to date if this were a class-based component, but easy-peasy has no support for class-based components. I guess one way would be to use class-based components and Redux directly, but I have a feeling there might be a different pattern I could use to solve my issue without resorting to class-based components.
The question: Is there a sane way of doing this with functional components?
Thunks in easy-peasy can handle asynchronous events, so you should put your axios post in there e.g.
insertToDatabaseIfNecessary: thunk(async (actions, payload) => {
  // First update the data on the server
  await axios.post(payload.items);
  // Assuming that the post succeeds, now dispatch an action to update your store.
  // (You'd want to check your post succeeded before doing this...)
  actions.updateYourStoreData(payload);
})
This easy-peasy thunk will wait for the async post to finish, so you can use the action as follows in your Foo component:
insertToDatabaseIfNecessary({ items });
You will not need to await it or use the onSave function in your Foo component.
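If I understand your model shape correctly, the component then reduces to something like this (the model paths and the thunk name are assumptions taken from your snippet and the thunk above):

import React from 'react';
import { useStoreActions, useStoreState } from 'easy-peasy';

const Foo = () => {
  // The thunk defined on the store model.
  const insertToDatabaseIfNecessary = useStoreActions(
    (actions) => actions.baz.insertToDatabaseIfNecessary
  );
  const items = useStoreState((state) => state.baz.items);

  return (
    // Pass the current items as the thunk payload. All server calls and the
    // follow-up store updates happen inside the thunk, so the component never
    // has to read freshly-updated state from a stale closure.
    <button onClick={() => insertToDatabaseIfNecessary({ items })}>Save!</button>
  );
};

export default Foo;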
I am learning how redux works, but it's a lot of code to do simple things. For example, I want to load some data from the server before displaying it. For editing reasons, I can't simply use the incoming props; I have to copy the props data into the local state.
As far as I've learned, I have to dispatch a FETCH_REQUEST action. If successful, a FETCH_SUCCESS action will update the store with the new item. The updated item will then cause my component to re-render.
In component
componentWillMount() {
  this.props.FETCH_REQUEST(this.props.match.params.id);
}
...
In actions
export function FETCH_REQUEST(id) {
  api.get(...)
    .then(d => FETCH_SUCCESS(d))
    .catch(e => FETCH_FAILURE(e));
}
...
In reducer
export function FETCH_REDUCER(state = {}, action = {}) {
  switch (action.type) {
    case 'FETCH_SUCCESS':
      return { ...state, [action.payload.id]: { ...action.payload } }
    ...
  }
}
Back in component
this.props.FETCH_REDUCER
// extra code for state, getting desired item from...
Instead, can I call a redux-thunk function and pass some callback functions? The thunk can update the store and the callbacks can change the component's local state.
In component
componentWillMount() {
  this.props.FETCH_REQUEST(this.props.match.params.id, this.cbSuccess, this.cbFailure);
}
cbSuccess(data) {
  // do something
}
cbFailure(error) {
  // do something
}
...
In action
export function FETCH_REQUEST(id, cbSuccess, cbFailure) {
  api.get(...)
    .then(d => {
      cbSuccess(d);
      FETCH_SUCCESS(d);
    }).catch(e => {
      cbFailure(e);
      FETCH_FAILURE(e);
    });
}
...
Is this improper? Can I do the same thing with redux-observable?
UPDATE 1
I moved nearly everything to the redux store, even for edits (i.e. replaced this.setState with this.props.setState). It eases state management. However, every time any input's onChange fires, a new state object is created. Can someone confirm whether this is okay? I'm worried about the app's memory management, since redux keeps a reference to each state.
First of all, you should call your API in componentDidMount instead of componentWillMount. More on this at : what is right way to do API call in react js?
When you use a redux store, your components subscribe to state changes using the mapStateToProps function, and they change state by dispatching the actions added as props through the mapDispatchToProps function (assuming you are using these functions in your connect call).
So you already are subscribing to state changes using your props. Using a callback would be similar to having the callback tell you of a change which your component already knows about because of a change in its props. And the change in props would trigger a re-render of the component to show the new state.
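Wiring that up typically looks something like this (a rough sketch; the action creator, state shape, and component names are placeholders, not from your code):

import React from 'react';
import { connect } from 'react-redux';
import { fetchItem } from './actions'; // a redux-thunk action creator (placeholder)

class ItemPage extends React.Component {
  componentDidMount() {
    // Kick off the fetch; the result arrives through props via mapStateToProps.
    this.props.fetchItem(this.props.match.params.id);
  }

  render() {
    const { item } = this.props;
    return item ? <div>{item.name}</div> : <div>Loading...</div>;
  }
}

// Subscribe to the slice of state this component cares about.
const mapStateToProps = (state, ownProps) => ({
  item: state.items[ownProps.match.params.id],
});

// Expose the thunk as a prop that dispatches when called.
const mapDispatchToProps = (dispatch) => ({
  fetchItem: (id) => dispatch(fetchItem(id)),
});

export default connect(mapStateToProps, mapDispatchToProps)(ItemPage);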
UPDATE:
The case you refer to, of an input field firing an onChange event at the change of every character, can cause a lot of updates to the store. As mentioned in my comments, you can use an API like _.debounce to throttle the updates to the store and reduce the number of state changes in such cases. More on handling this at Perform debounce in React.js.
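For example, something along these lines (lodash's debounce plus a placeholder updateDraft action prop; tune the wait time to your case):

import React from 'react';
import debounce from 'lodash/debounce';

class DraftInput extends React.Component {
  constructor(props) {
    super(props);
    this.state = { value: '' };
    // Push to the Redux store at most once every 300ms instead of on every keystroke.
    // this.props.updateDraft is assumed to be an action creator bound via mapDispatchToProps.
    this.pushToStore = debounce(this.props.updateDraft, 300);
    this.handleChange = this.handleChange.bind(this);
  }

  handleChange(e) {
    this.setState({ value: e.target.value }); // cheap local update keeps the input responsive
    this.pushToStore(e.target.value);         // debounced store update
  }

  render() {
    return <input value={this.state.value} onChange={this.handleChange} />;
  }
}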
The issue of memory management is a real issue in real world applications when using Redux. The way to reduce the effect of repeated updates to the state is to
Normalize the shape of state : http://redux.js.org/docs/recipes/reducers/NormalizingStateShape.html
Create memoized selectors using Reselect (https://github.com/reactjs/reselect)
Follow the advice provided in the articles regarding performance in Redux github pages (https://github.com/reactjs/redux/blob/master/docs/faq/Performance.md)
Also remember that although the whole state object should be copied rather than mutated, only the slice of state that changes needs new references. For example, if your state holds 10 objects and only one of them changes, you need to update the reference of the changed object in the state, but the remaining 9 unchanged objects still point to the old references, so the total number of objects in memory is 11, not 20 (excluding the encompassing state object).
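In reducer terms, that looks roughly like this (a toy sketch with items keyed by id):

function itemsReducer(state = {}, action) {
  switch (action.type) {
    case 'ITEM_UPDATED':
      return {
        ...state,                             // the unchanged items keep their old references
        [action.item.id]: { ...action.item }, // only the changed item gets a new object
      };
    default:
      return state;
  }
}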
I'm learning from the redux docs on middleware and I have trouble understanding the purpose of the didInvalidate property in the reddit example. It seems like the example goes through the middleware to let the store know about the process of making an API call, starting with INVALIDATE_SUBREDDIT, then REQUEST_POSTS, then RECEIVE_POSTS. Why is INVALIDATE_SUBREDDIT necessary? Looking at the actions below, I can only guess that it prevents multiple fetches from happening in case the user clicks 'refresh' very rapidly. Is that the only purpose of this property?
function shouldFetchPosts(state, subreddit) {
  const posts = state.postsBySubreddit[subreddit]
  if (!posts) {
    return true
  } else if (posts.isFetching) {
    return false
  } else {
    return posts.didInvalidate
  }
}
export function fetchPostsIfNeeded(subreddit) {
  return (dispatch, getState) => {
    if (shouldFetchPosts(getState(), subreddit)) {
      return dispatch(fetchPosts(subreddit))
    }
  }
}
You are close: didInvalidate is related to reducing server requests, but it is kind of the opposite of preventing fetches. It informs the app that it should go and fetch new data; the current data did 'invalidate'.
Knowing a bit about the lifecycle will help explain further. react-redux uses mapStateToProps to help decide whether to redraw a Component when the global state changes.
When a Component is about to be redrawn, because the state (mapped to the props) changes for instance, componentDidMount is called. Typically if the state depends on remote data componentDidMount checks to see if the state contains a current representation of the remote data (e.g. via shouldFetchPosts).
You are correct that it is inefficient to keep making the remote call but it is shouldFetchPosts that guards against this. Once the required data has been fetched (!posts is false) or it is in the process of being fetched (isFetching is true) then the check shouldFetchPosts returns false.
Once there is a set of posts in the state then the app will never fetch another set from the server.
But what happens when the server side data changes? The app will typically provide a refresh button, which (as components should not change the state) issues an 'Action' (INVALIDATE_SUBREDDIT for example) which is reduced into setting a flag (posts.didInvalidate) in the state that indicates that the data is now invalid.
The change in state triggers the component redraw which, as mentioned, checks shouldFetchPosts. That check now falls into the clause that returns posts.didInvalidate, which is true, so the REQUEST_POSTS action fires and the current server-side data is fetched.
So to reiterate: didInvalidate suggests a fetch of the current server side data is needed.
The most up-voted answer isn't entirely correct.
didInvalidate is used to tell the app whether the data is stale or not. If true, the data should be re-fetched from the server. If false, we will use the data we already have.
In the official examples, firing INVALIDATE_SUBREDDIT will set didInvalidate to true. This Redux action can be dispatched as a result of a user action (clicking a refresh button), or something else (a countdown, a server push etc.)
However, firing INVALIDATE_SUBREDDIT alone will not initiate a new request to the server. It is simply used to determine whether we should re-fetch the data or use the existing data when we call fetchPostsIfNeeded().
Unless didInvalidate is set to true, the app will not let us fetch the data more than once. So, to refresh our data (e.g. after clicking a refresh button) we need to:
dispatch(invalidateSubreddit(selectedSubreddit))
dispatch(fetchPostsIfNeeded(selectedSubreddit))
Because we called invalidateSubreddit(), didInvalidate is set to true and fetchPostsIfNeeded() will initiate a re-fetch.
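The reducer in the official example handles these flags roughly like this (paraphrased from the Redux async tutorial; the full version also tracks lastUpdated):

function posts(state = { isFetching: false, didInvalidate: false, items: [] }, action) {
  switch (action.type) {
    case 'INVALIDATE_SUBREDDIT':
      // Mark the cached data as stale; nothing is fetched yet.
      return Object.assign({}, state, { didInvalidate: true })
    case 'REQUEST_POSTS':
      return Object.assign({}, state, { isFetching: true, didInvalidate: false })
    case 'RECEIVE_POSTS':
      return Object.assign({}, state, {
        isFetching: false,
        didInvalidate: false,
        items: action.posts
      })
    default:
      return state
  }
}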
(This is why danmux's answer isn't entirely correct. The life cycle method componentDidMount will not be called when the state (which is mapped to the props) changes; componentDidMount is only called when the component mounts for the first time. So, the effect of hitting the refresh button will not appear until the component has been remounted, e.g. from a route change.)
I am new to React and I don't know the best way to do this.
I have a list of cars, and clicking a row should slide to a full-page details view of that car.
My code structure is:
I have an App which renders two components: CarList and CarDetails. CarDetails is hidden initially. The reason I render CarDetails in App is that it's a massive fixed template, so I would like to render it once when the app loads and only update its data when a row is clicked.
CarList also renders a CarRow component, which is fine.
Now my problem is that I have a getDetails function on the CarRow component which makes a call to get the details based on the car id. How do I get the CarDetails component's data updated? I used
this.setState({itemDetails:data});
but it seems the state of CarRow is not the same state object as the state in CarDetails.
Any help?
This is a fundamental issue that a lot of thought and man-hours have gone into trying to solve. It probably can't be answered, except at a surface level, in a StackOverflow post. It's not React-centric, either; this is an issue across most applications, regardless of the framework you're using.
Since you asked in the context of React, you might consider reading into flux, which is the de-facto implementation of this one-way data-flow idea in concert with React. However, that architecture is by no means "the best". There are simply advantages and disadvantages to it like everything else.
Some people don't like the idea of the global "event bus" that flux proposes. If that's the case, you can simply implement your own intermediate data layer API that collects query callbacks and A) invokes the callbacks on any calls to save data and B) refreshes any appropriate queries to the server. For now, though, I'd stick with flux as it will give you an idea of the general principles involved in having the things that most people consider to be "good", like a single source of truth for your data, one way flow, etc.
To give a concrete example of the callback idea:
// data layer
const listeners = [];

const data = {
  save: save,
  query: query
};

function save(someData) {
  // Save the data to the server (saveToServer is a stand-in for your actual
  // HTTP call), and then notify every registered query callback so that the
  // components re-render with fresh data.
  saveToServer(someData).then(data => {
    listeners.forEach(listener => listener(data));
  });
}

function query(params, callback) {
  // Remember the callback so later saves can "refresh" this query, then
  // query the server with the params and hand the result to the callback.
  listeners.push(callback);
  queryServer(params).then(result => callback(result));
}

// component
componentWillMount() {
  data.query(params, data => this.setState({ myData: data }));
},

save() {
  // when the save operation is complete, it will "refresh" the query above
  data.save(someData);
}
This is a very distilled example and doesn't address optimization, such as the potential for memory leaks when moving to different views and invoking "stale" callbacks, but it should give you a general idea of another approach.
The two approaches have the same policy (a single source of truth for data and one way data flow) but different implementations (global "event bus" which necessitates keeping track of events, or the simple callback method, which can necessitate a form of memory management).