Angular 6: control data sharing inside an application - javascript

In my Angular application, every component subscribes to a service in order to watch for changes to the user's Company. On app init I download the user's Company, so the subscription fires once in every component that subscribes to the company state (this is necessary because I use the Company data in most of them). One of my components has a subscription to Company and downloads data once on init. When I change the view, the subscription no longer fires, so I need to download the data again. The code looks like this:
this.subscription = this
  .companyService
  .CompanyState
  .subscribe((company: Company) => {
    this.getSomeData()
  })

this.getSomeData()
I've tried adding a flag like needDownload with a default value of true, and setting it to false when the subscription fires this.getSomeData(), but this is async and doesn't work very well.
If I remove the subscription from this component, I stop watching changes on the Company state. If I remove the this.getSomeData() call at the end of this code, I don't get data when the component is initiated without a default emission on subscription.
The problem is that I'm downloading the data twice, and I feel like it should be possible to do it once.

In your service, you can define companySubject as a ReplaySubject instead of a Subject. The buffer size can be set to 1, so that it "replays" only the last emitted value.
private companySubject = new ReplaySubject<Company>(1);
A new view will be notified as soon as it subscribes to the observable, if CompanyState has already emitted a value. As a consequence, you can remove the direct call to getSomeData() in your component initialization code.
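A minimal sketch of what that service change could look like (the class, method, and Company names here are assumptions based on the question):

// company.service.ts -- sketch only
import { Injectable } from '@angular/core';
import { ReplaySubject } from 'rxjs';

export interface Company { name?: string; }   // placeholder for the app's Company model

@Injectable({ providedIn: 'root' })
export class CompanyService {
  // Buffer size 1: late subscribers immediately receive the last emitted Company
  private companySubject = new ReplaySubject<Company>(1);
  CompanyState = this.companySubject.asObservable();

  setCompany(company: Company): void {
    this.companySubject.next(company);
  }
}

The component then keeps only the subscription from the question; the trailing this.getSomeData() call can be dropped because the ReplaySubject delivers the last Company as soon as the component subscribes.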
See this answer for details about the various Subject classes.

Related

What is the proper way to control react components based on redux state

I think this is more of a conceptual/architecture question but I will include code samples to help explain the question. I have a normalized redux state where the entities state slice looks like this:
entities: {
  projects: {
    [id]: {...},
    [id]: {...},
    ...
  },
  assignments: {
    [id]: {...},
    [id]: {...},
    ...
  }
}
And an individual assignment looks like this:
{
  id: 1,
  name: 'assignment 1',
  status: 'active',
  deadline: '01-01-2020'
}
I fetch this data from a backend DB. I am trying to figure out the proper way to handle the process of updating this data, keeping the UI responsive, and keeping my redux state in sync with the backend.
A specific example is a react component for displaying an individual assignment that has a picker/radio buttons to change the status between:
const statusOptions = [
  'active',
  'pending',
  'complete'
]
The options I can see are:
1) Set the value of the picker to props.assignment.status, and in the picker's onChange dispatch an updateAssignment() action, where a saga/thunk sends the POST request and immediately triggers a fetchAssignment() action, which sends a GET request, updates the redux state, and in turn re-renders the component.
The problem with this is that the redux update takes too long, so the UI appears laggy and the controlled input reverts to the old selection until the new props come in.
2) Set the local component state based on the redux state like this:
state = { status: this.props.assignment.status }
And then set the value of the picker based on the local state, which would provide near instant UI updates on a value change.
The problem I see here is that I'm pretty sure this is a react anti-pattern, and I would have to use getDerivedStateFromProps() or something similar to make sure the local state stays in sync with the redux state. Plus, I really like the 'single source of truth' idea, and I feel like this option would invalidate that.
3) Set the value of the picker based on props.assignment.status, and in the picker's onChange handler clone the assignment object, update its status attribute, and then immediately dispatch an updateAssignment() action that merges the locally created assignment object into the state.
After that, send the POST request to the server and, if it fails, somehow revert the redux state to the prior state, basically removing the locally merged assignment object (rough sketch below). This seems kind of hacky, though, maybe?
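Roughly, what I imagine option 3 looking like as a thunk (everything here, action types and endpoint included, is made up):

// Sketch of option 3: optimistic merge, then POST, revert on failure.
import type { Dispatch } from 'redux';

interface Assignment {
  id: number;
  name: string;
  status: 'active' | 'pending' | 'complete';
  deadline: string;
}

export const updateAssignmentStatus =
  (assignment: Assignment, status: Assignment['status']) =>
  async (dispatch: Dispatch) => {
    const previous = assignment;
    const optimistic = { ...assignment, status };

    // 1) Optimistically merge the updated assignment into the redux state
    dispatch({ type: 'ASSIGNMENT_MERGE', payload: optimistic });

    try {
      // 2) Persist the change to the backend (endpoint is an assumption)
      const res = await fetch(`/api/assignments/${assignment.id}`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(optimistic),
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
    } catch {
      // 3) Revert to the previous object if the request fails
      dispatch({ type: 'ASSIGNMENT_MERGE', payload: previous });
    }
  };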
Is there any agreed upon best practices for updating redux data while maintaining a single source of truth, snappy UI, and clean code?
The first part of (2) seems to me the right way.
In componentDidMount (or, even better, in App.js when the app is starting) you fetch the data from the database into the redux state, and set the local state from it.
Then you maintain the data locally, and dispatch the proper action that will update the redux state and the database.
In shouldComponentUpdate you need to prevent updates that happen following this redux update: you will check if the values of the props have changed.
In componentDidUpdate you will update the local state if the props change (see the sketch below).
The last thing to take care of is picking up data updates caused by database changes made by other instances of the app (running on other phones) or by other data sources, if that can happen. In Firebase, for example, you do that by listening to the relevant changes. I don't know if this is relevant here.
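A rough sketch of that wiring (component, prop, and action names are assumptions):

// Sketch: the picker value lives in local state for snappy updates and is
// re-synced whenever the redux-mapped prop changes.
import * as React from 'react';

interface Assignment { id: number; status: string; }
interface Props {
  assignment: Assignment;
  updateAssignment: (a: Assignment) => void; // dispatches the redux action
}

class AssignmentStatusPicker extends React.Component<Props, { status: string }> {
  state = { status: this.props.assignment.status };

  componentDidUpdate(prevProps: Props) {
    // Re-sync local state when the redux state (via props) changes
    if (prevProps.assignment.status !== this.props.assignment.status) {
      this.setState({ status: this.props.assignment.status });
    }
  }

  handleChange = (status: string) => {
    this.setState({ status });                                         // instant UI feedback
    this.props.updateAssignment({ ...this.props.assignment, status }); // persist via redux/backend
  };

  render() {
    // Render the radio buttons / picker bound to this.state.status here
    return null;
  }
}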

What is the purpose of having a didInvalidate property in the data structure of a react-redux app's state?

I'm learning from the react-redux docs on middleware and have trouble understanding the purpose of the didInvalidate property in the reddit example. It seems like the example goes through the middleware to let the store know about the process of making an API call, starting with INVALIDATE_SUBREDDIT, then REQUEST_POSTS, then RECEIVE_POSTS. Why is INVALIDATE_SUBREDDIT necessary? Looking at the actions below, I can only guess that it prevents multiple fetches from happening in case the user clicks 'refresh' very rapidly. Is that the only purpose of this property?
function shouldFetchPosts(state, subreddit) {
  const posts = state.postsBySubreddit[subreddit]
  if (!posts) {
    return true
  } else if (posts.isFetching) {
    return false
  } else {
    return posts.didInvalidate
  }
}

export function fetchPostsIfNeeded(subreddit) {
  return (dispatch, getState) => {
    if (shouldFetchPosts(getState(), subreddit)) {
      return dispatch(fetchPosts(subreddit))
    }
  }
}
You are close: didInvalidate is related to reducing server requests, but it is kind of the opposite of preventing fetches. It informs the app that it should go and fetch new data; the current data did 'invalidate'.
Knowing a bit about the lifecycle will help explain further. Redux uses mapStateToProps to help decide whether to redraw a Component when the global state changes.
When a Component is about to be redrawn, for instance because the state (mapped to the props) changes, componentDidMount is called. Typically, if the state depends on remote data, componentDidMount checks whether the state contains a current representation of the remote data (e.g. via shouldFetchPosts).
You are correct that it is inefficient to keep making the remote call but it is shouldFetchPosts that guards against this. Once the required data has been fetched (!posts is false) or it is in the process of being fetched (isFetching is true) then the check shouldFetchPosts returns false.
Once there is a set of posts in the state then the app will never fetch another set from the server.
But what happens when the server side data changes? The app will typically provide a refresh button, which (as components should not change the state) issues an 'Action' (INVALIDATE_SUBREDDIT for example) which is reduced into setting a flag (posts.didInvalidate) in the state that indicates that the data is now invalid.
The change in state triggers the component redraw which, as mentioned, checks shouldFetchPosts which falls into the clause that executes return posts.didInvalidate which is now true, therefore firing the action to REQUEST_POSTS and fetching the current server side data.
So to reiterate: didInvalidate suggests a fetch of the current server side data is needed.
The most up-voted answer isn't entirely correct.
didInvalidate is used to tell the app whether the data is stale or not. If true, the data should be re-fetched from the server. If false, we will use the data we already have.
In the official examples, firing INVALIDATE_SUBREDDIT will set didInvalidate to true. This Redux action can be dispatched as a result of a user action (clicking a refresh button), or something else (a countdown, a server push etc.)
However, firing INVALIDATE_SUBREDDIT alone will not initiate a new request to the server. It is simply used to determine whether we should re-fetch the data or use the existing data when we call fetchPostsIfNeeded().
Because didInvalidate is reset to false once the data has been received, the app will not fetch the data more than once on its own. To refresh our data (e.g. after clicking a refresh button) we need to:
dispatch(invalidateSubreddit(selectedSubreddit))
dispatch(fetchPostsIfNeeded(selectedSubreddit))
Because we called invalidateSubreddit(), didInvalidate is set to true and fetchPostsIfNeeded() will initiate a re-fetch.
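For reference, a condensed sketch of the posts reducer from the official example (reconstructed from memory, so treat the details as approximate): it is the reducer that flips didInvalidate, and RECEIVE_POSTS clears it again.

function posts(
  state = { isFetching: false, didInvalidate: false, items: [] as unknown[] },
  action: { type: string; posts?: unknown[]; receivedAt?: number }
) {
  switch (action.type) {
    case 'INVALIDATE_SUBREDDIT':
      return { ...state, didInvalidate: true };          // data is now stale
    case 'REQUEST_POSTS':
      return { ...state, isFetching: true, didInvalidate: false };
    case 'RECEIVE_POSTS':
      return {
        ...state,
        isFetching: false,
        didInvalidate: false,                            // fresh again
        items: action.posts ?? [],
        lastUpdated: action.receivedAt,
      };
    default:
      return state;
  }
}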
(This is why danmux's answer isn't entirely correct. The life cycle method componentDidMount will not be called when the state (which is mapped to the props) changes; componentDidMount is only called when the component mounts for the first time. So, the effect of hitting the refresh button will not appear until the component has been remounted, e.g. from a route change.)

Flux store dependency with async actions

I'm having problems understanding the best way to do this using the Flux pattern. Say, for example, that I have a userStore and I listen to it. Once it changes, I need to get the user.name and access colors[user.name], but the colors object comes from another store I have, colorsStore. Here's the essence of it:
var self = {};

userStore.addListener(function(user) {
  // dependency with the colors store
  var color = self.colors[user.name]
})

colorsStore.addListener(function(colors) {
  self.colors = colors;
})

actions.getUser()   // modifies userStore
actions.getColors() // modifies colorsStore
The problem is that the two actions are async (they get the data from an AJAX call for instance). With this in mind, the userStore might change before the self.colors variable is populated from the other store.
How is this handled using the Flux pattern? Does the Dispatcher help with this somewhat? Sorry but I'm new to the Flux pattern. Intuitively I would simply call the async actions in the appropriate order such as:
actions.getColors() // need to populate self.colors before running getUser()
  .then(() => actions.getUser())
But was wondering if there was a more Flux-way of doing this.
Your setup is fine from a Flux perspective.
Your component needs to be able to handle the different possible (store) states generated by your actions, which could include:
user store has old/no data, colors store already has newest data
user store has newest user data, colors store still has old data
If you want any of these states to be visible to the user in some way (e.g. show a loading indicator, or show the old/default color while waiting for the newest color), then the react-flux way is to deal with these states inside your component.
If you do not want to show anything about these states to user, you have two options:
inside your component, fire actions.getUser() from inside the colorsStore listener function (quick and dirty solution)
change the setup to prevent the unwanted store state to trigger component update
For the second solution, you could typically do:
have your component fire both actions
both listeners trigger the same function, getStateFromStores()
this function reads the state from both stores, and only updates the component (setState()) if the user and colors match
That way, your async calls can come back in any order (see the sketch below).
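In code, the second solution could be sketched like this (the store getters and listener signatures are assumptions; only the shape of the idea matters here):

import * as React from 'react';

// The stores and actions come from the question; only their shapes are assumed.
declare const userStore: {
  addListener(cb: () => void): void;
  getUser(): { name: string } | null;
};
declare const colorsStore: {
  addListener(cb: () => void): void;
  getColors(): Record<string, string> | null;
};
declare const actions: { getUser(): void; getColors(): void };

class UserColor extends React.Component<{}, { color?: string }> {
  state: { color?: string } = {};

  componentDidMount() {
    // Both listeners funnel into the same handler
    userStore.addListener(this.getStateFromStores);
    colorsStore.addListener(this.getStateFromStores);
    actions.getUser();
    actions.getColors();
  }

  getStateFromStores = () => {
    const user = userStore.getUser();
    const colors = colorsStore.getColors();
    // Only update once both async results are in and they match up
    if (!user || !colors || !(user.name in colors)) {
      return;
    }
    this.setState({ color: colors[user.name] });
  };

  render() {
    return this.state.color ?? null;
  }
}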
If I understand your problem correctly, you can use waitFor for this case. There is also a discussion about "waitFor vs combining stores into one", so a combined store can solve your problem as well.
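A bare-bones illustration of waitFor with Facebook's flux Dispatcher (action types and store internals are made up; note that waitFor only orders store callbacks within a single dispatch):

import { Dispatcher } from 'flux';

type Action = { type: string; payload?: any };
const dispatcher = new Dispatcher<Action>();

let colors: Record<string, string> = {};
let userColor: string | undefined;

const colorsStoreToken = dispatcher.register((action) => {
  if (action.type === 'RECEIVE_COLORS') {
    colors = action.payload; // update the colors store's internal map
  }
});

dispatcher.register((action) => {
  if (action.type === 'RECEIVE_USER') {
    // Make sure the colors store has handled this dispatch before reading it
    dispatcher.waitFor([colorsStoreToken]);
    userColor = colors[action.payload.name];
  }
});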

Where do sockets fit into the Flux unidirectional data flow?

Where do sockets fit into the Flux unidirectional data flow? I have read two schools of thought on where remote data should enter the flow. The way I have seen remote data fetched for a Flux app is when a server-side call is made, for example in a promise that is then resolved or rejected. Three possible actions could fire during this process:
An initial action for optimistically updating the view (FooActions.BAR)
A success action for when an asynchronous promise is resolved (FooActions.BAR_SUCCESS)
An error action for when an asynchronous promise is rejected (FooActions.BAR_ERROR)
The stores will listen for the actions and update the necessary data. I have seen the server-side calls made both from action creators and from within the stores themselves. I use action creators for the process described above, but I'm not sure whether data fetching via a WebSocket should be treated similarly. I was wondering where sockets fit into the diagram below.
There's really no difference in how you use Flux with WebSockets or plain old HTTP requests/polling. Your stores are responsible for emitting a change event when the application state changes, and it shouldn't be visible from outside the store whether that change came from a UI interaction, from a WebSocket, or from making an HTTP request. That's really one of the main benefits of Flux: no matter where the application state was changed, it goes through the same code paths.
Some Flux implementations tend to use actions/action creators for fetching data, but I don't really agree with that.
Actions are things that happen that modifies your application state. It's things like "the user changed some text and hit save" or "the user deleted an item". Think of actions like the transaction log of a database. If you lost your database, but you saved and serialized all actions that ever happened, you could just replay all those actions and end up with the same state/database that you lost.
So things like "give me item with id X" and "give me all the items" aren't actions, they're questions, questions about that application state. And in my view, it's the stores that should respond to those questions via methods that you expose on those stores.
It's tempting to use actions/action creators for fetching because fetching needs to be async. And by wrapping the async stuff in actions, your components and stores can be completely synchronous. But if you do that, you blur the definition of what an action is, and it also forces you to assume that you can fit your entire application state in memory (because you can only respond synchronously if you have the answer in memory).
So here's how I view Flux and the different concepts.
Stores
This is obviously where your application state lives. The store encapsulates and manages the state and is the only place where mutation of that state actually happens. It's also where events are emitted when that state changes.
The stores are also responsible for communicating with the backend. The store communicates with the backend when the state has changed and that needs to be synced with the server, and it also communicates with the server when it needs data that it doesn't have in memory. It has methods like get(id), search(parameters) etc. Those methods are for your questions, and they all return promises, even if the state can fit into memory. That's important because you might end up with use cases where the state no longer fits in memory, or where it's not possible to filter in memory or do advanced searching. By returning promises from your question methods, you can switch between returning from memory or asking the backend without having to change anything outside of the store.
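To make that concrete, one of those question methods might be sketched like this (the endpoint and data shape are assumptions):

// Sketch of a "question method" that always returns a promise, whether the
// answer is already in memory or has to come from the backend.
interface Todo { id: string; text: string; }

class TodoStore {
  private todosById = new Map<string, Todo>();

  get(id: string): Promise<Todo> {
    const cached = this.todosById.get(id);
    if (cached) {
      return Promise.resolve(cached);   // answered from memory
    }
    return fetch(`/api/todos/${id}`)    // answered by the backend
      .then((res) => res.json())
      .then((todo: Todo) => {
        this.todosById.set(id, todo);
        return todo;
      });
  }
}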
Actions
My actions are very lightweight, and they don't know anything about persisting the mutation that they encapsulate. They simply carry the intention to mutate from the component to the store. For larger applications, they can contain some logic, but never things like server communication.
Components
These are your React components. They interact with stores by calling the question methods on the stores and rendering the return value of those methods. They also subscribe to the change event that the store exposes. I like using higher order components, which are components that just wrap another component and pass props to it. An example would be:
var TodoItemsComponent = React.createClass({
  getInitialState: function () {
    return {
      todoItems: null
    };
  },
  componentDidMount: function () {
    var self = this;
    TodoStore.getAll().then(function (todoItems) {
      self.setState({todoItems: todoItems});
    });
    TodoStore.onChange(function (todoItems) {
      self.setState({todoItems: todoItems});
    });
  },
  render: function () {
    if (this.state.todoItems) {
      return <TodoListComponent todoItems={this.state.todoItems} />;
    } else {
      return <Spinner />;
    }
  }
});

var TodoListComponent = React.createClass({
  createNewTodo: function () {
    TodoActions.createNew({
      text: 'A new todo!'
    });
  },
  render: function () {
    // JSX needs a single root element, so wrap the list and the button
    return (
      <div>
        <ul>
          {this.props.todoItems.map(function (todo) {
            return <li>{todo.text}</li>;
          })}
        </ul>
        <button onClick={this.createNewTodo}>Create new todo</button>
      </div>
    );
  }
});
In this example the TodoItemsComponent is the higher order component and it wraps the nitty-gritty details of communicating with the store. It renders the TodoListComponent when it has fetched the todos, and renders a spinner before that. Since it passes the todo items as props to TodoListComponent that component only has to focus on rendering, and it will be re-rendered as soon as anything changes in the store. And the rendering component is kept completely synchronous. Another benefit is that TodoItemsComponent is only focused on fetching data and passing it on, making it very reusable for any rendering component that needs the todos.
higher order components
The term higher order components comes from the term higher order functions. Higher order functions are functions that take or return other functions. So a higher order component is a component that just wraps another component and returns its output.

In Flux what is responsible for direct talking to API

I'm trying to learn Flux, and having watched and read these amazing resources
https://egghead.io/technologies/react
http://facebook.github.io/flux/
https://scotch.io/tutorials/getting-to-know-flux-the-react-js-architecture
I still don't understand which part of the Flux architecture (Action, Dispatcher or Store) is responsible for talking to the API, given that my API is asynchronous and is able to push data, i.e. I get an event when new data becomes available.
This image suggests that an Action talks to the API, however multiple code examples show the Action only triggering the Dispatcher.
If you look at the role of Actions as informing Stores of updated state data, it seems sensible that the API calls that actually get the new data should come before the Action is called (e.g. in the event handlers of the component). However, you may not want API-related logic scattered throughout your Views. To avoid this, a module of ActionCreators is sometimes introduced between the View and the Action in the above diagram.
Methods for making API calls and handling the returned data by calling appropriate Actions can be collected in ActionCreators, so they will be loosely coupled to your Views. For example,
user clicks login ->
click handler calls ActionCreator.login(), which makes the API call ->
result is passed to Stores by calling Actions ->
Stores update their state accordingly
If your server can push updates through something like websockets, the corresponding event listeners can call methods defined in ActionCreators as well, so all your actions are emitted from one place. Or you could split up user-initiated ActionCreators and server-initiated ActionCreators into separate modules. Either way, I think this achieves a good separation of concerns.
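A minimal sketch of such an ActionCreators module (the endpoint, action types, and dispatcher wiring are assumptions):

// ActionCreators: the only place that talks to the API; Views just call login().
declare const dispatcher: { dispatch(action: { type: string; [key: string]: unknown }): void };

export const ActionCreators = {
  login(username: string, password: string): void {
    fetch('/api/login', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ username, password }),
    })
      .then((res) => res.json())
      // The result is passed on to the Stores via plain actions
      .then((user) => dispatcher.dispatch({ type: 'LOGIN_SUCCESS', user }))
      .catch((error) => dispatcher.dispatch({ type: 'LOGIN_FAILED', error }));
  },
};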
After a couple months working with React + Flux, I've faced the same question and have tried some different approaches.
I've reached the conclusion that the best way is to have the actions deal with data updates, both remote and local:
# COMPONENT
TodoItems = React.createClass
  componentDidMount: ->
    TodoStore.addListener("CHANGE", @_onChange)

  _onChange: ->
    @setState({todos: TodoStore.get()})

  _onKeyDown: (event) ->
    if event.keyCode == ENTER_KEY_CODE
      content = event.target.value.trim()
      TodoActions.add(content)

  render: ->
    React.DOM.textarea {onKeyDown: @_onKeyDown}

# ACTIONS
class TodoActions
  @add: (content) ->
    Dispatcher.handleAction({type: "OPTIMISTIC_TODO_ADD", todo: {content: content}})
    APICall.addTodo({content: content})

# STORE
class TodoStore extends EventEmitter
  constructor: ->
    @todos = [] # this is a nice way of retrieving from localStore
    @dispatchToken = @registerToDispatcher()

  get: ->
    return @todos

  registerToDispatcher: ->
    Dispatcher.register (payload) =>
      type = payload.type
      todo = payload.todo
      response = payload.response

      switch type
        when "OPTIMISTIC_TODO_ADD"
          @todos.push(todo)
          @emit("CHANGE")
        when "TODO_ADD"
          # act according to server response
          @emit("CHANGE") # or whatever you like

# APICall
class APICall # what can be called an 'action creator'
  @addTodo: (todo) ->
    response = http.post(todo) # I guess you get the idea
    Dispatcher.handleAction({type: "TODO_ADD", response: response})
As you can see, the "juice" is within TodoActions. When a todo gets added, TodoActions.add() triggers an optimistic UI update via OPTIMISTIC_TODO_ADD, which inserts the new todo into TodoStore.todos. In parallel, it knows that the change must be communicated to the server.
An external entity, APICall (which can be considered an action creator), is responsible for the remote part of this action, and when the response comes back it follows its normal course to TodoStore, which can act accordingly.
If you make the stores directly responsible for remote content management, you add an extra layer of complexity to them, which made me less confident about the data state at any given point.
Let's imagine it:
class TodoActions
  # TodoActions is `dumb`, only passes data and action types to Dispatcher
  @add: (content) ->
    Dispatcher.handleAction({type: "TODO_ADD", todo: {content: content}})
    # APICall.addTodo({content: content})

class TodoStore extends EventEmitter
  # ...
  registerToDispatcher: ->
    # ...
      when "TODO_ADD"
        @todos.push(todo)
        # now the store has to push it to the server,
        # which means that it will have to call actions or the API directly = BAD
        # let's assume:
        APICall.addTodo({content: content})
        # it also generates some uncertainty about the nature of the event emit:
        # this emit cannot guarantee that the data was persisted on the server.
        @emit("CHANGE")
The solution I presented first offers a nice way of doing optimistic updates to the UI, handling errors, and displaying loading indicators, in my experience.
Reto Schläpfer explains how he approaches this same problem with great clarity:
The smarter way is to call the Web Api directly from an Action Creator and then make the Api dispatch an event with the request result as a payload. The Store(s) can choose to listen on those request actions and change their state accordingly.
Before I show some updated code snippets, let me explain why this is superior:
There should be only one channel for all state changes: the Dispatcher. This makes debugging easy because it just requires a single console.log in the dispatcher to observe every single state change trigger.
Asynchronously executed callbacks should not leak into Stores. The consequences of that are just too hard to fully foresee. This leads to elusive bugs. Stores should only execute synchronous code. Otherwise they are too hard to understand.
Avoiding actions firing other actions makes your app simple. We use the newest Dispatcher implementation from Facebook that does not allow a new dispatch while dispatching. It forces you to do things right.
Full article:
http://www.code-experience.com/the-code-experience/
