In Flux, what is responsible for talking directly to the API? - javascript

I'm trying to learn Flux, and having watched and read these amazing resources
https://egghead.io/technologies/react
http://facebook.github.io/flux/
https://scotch.io/tutorials/getting-to-know-flux-the-react-js-architecture
I still don't understand which part of the Flux architecture (Action, Dispatcher or Store) is responsible for talking to the API, given that my API is asynchronous and able to push data - i.e. I get an event when new data becomes available.
This image suggests that an Action talks to the API; however, multiple code examples show the Action only triggering the Dispatcher.

If you look at the role of Actions as informing Stores of updated state data, it seems sensible that API calls that actually get the new data should come before the Action is called (e.g. in the event handlers of the component). However, you may not want API-related logic scattered throughout your Views. To avoid this, a module of ActionCreators is sometimes introduced between View and Action in the above diagram.
Methods that make API calls and handle the returned data by calling the appropriate Actions can be collected in ActionCreators, so they stay loosely coupled to your Views. For example:
user clicks login ->
click handler calls ActionCreator.login(), which makes the API call ->
result is passed to Stores by calling Actions ->
Stores update their state accordingly
If your server can push updates through something like websockets, the corresponding event listeners can call methods defined in ActionCreators as well, so all your actions are emitted from one place. Or you could split up user-initiated ActionCreators and server-initiated ActionCreators into separate modules. Either way, I think this achieves a good separation of concerns.
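To make that concrete, here is a minimal ActionCreators sketch. The api and socket modules, AppDispatcher, and the action type strings are hypothetical names used only for illustration; they are not from the original answer.

var AppDispatcher = require('./AppDispatcher');
var api = require('./api');       // assumed thin wrapper around your HTTP client
var socket = require('./socket'); // assumed socket.io-style client

var ActionCreators = {
  // user-initiated: the click handler calls ActionCreators.login(credentials)
  login: function (credentials) {
    api.login(credentials)
      .then(function (user) {
        // result is passed to the Stores by dispatching an action
        AppDispatcher.dispatch({ type: 'LOGIN_SUCCESS', user: user });
      })
      .catch(function (error) {
        AppDispatcher.dispatch({ type: 'LOGIN_FAILED', error: error });
      });
  },

  // server-initiated: pushed updates funnel through the same module
  userUpdated: function (user) {
    AppDispatcher.dispatch({ type: 'USER_UPDATED', user: user });
  }
};

socket.on('user:updated', ActionCreators.userUpdated);

module.exports = ActionCreators;

This way the Views only ever call ActionCreators methods, and every state change, whether user- or server-initiated, flows through the Dispatcher.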

After a couple months working with React + Flux, I've faced the same question and have tried some different approaches.
I've reached the conclusion that the best way is to have the actions deal with data updates, both remote and local:
# COMPONENT
TodoItems = React.createClass
  componentDidMount: ->
    TodoStore.addListener("CHANGE", @_onChange)

  _onChange: ->
    @setState {
      todos: TodoStore.get()
    }

  _onKeyDown: (event) ->
    if event.keyCode == ENTER_KEY_CODE
      content = event.target.value.trim()
      TodoActions.add(content)

  render: ->
    React.DOM.textarea {onKeyDown: @_onKeyDown}

# ACTIONS
class TodoActions
  @add: (content) ->
    Dispatcher.handleAction({type: "OPTIMISTIC_TODO_ADD", todo: {content: content}})
    APICall.addTodo({content: content})

# STORE
class TodoStore extends EventEmitter
  constructor: ->
    @todos = [] # this is a nice way of retrieving from localStore
    @dispatchToken = @registerToDispatcher()

  get: ->
    return @todos

  registerToDispatcher: ->
    Dispatcher.register (payload) =>
      type = payload.type
      todo = payload.todo
      response = payload.response

      switch type
        when "OPTIMISTIC_TODO_ADD"
          @todos.push(todo)
          @emit("CHANGE")
        when "TODO_ADD"
          # act according to server response
          @emit("CHANGE") # or whatever you like

# APICall
class APICall # what can be called an 'action creator'
  @addTodo: (todo) ->
    response = http.post(todo) # I guess you get the idea
    Dispatcher.handleAction({type: "TODO_ADD", response: response})
As you can see, the "juice" is within TodoActions. When a todo gets added, TodoActions.add() triggers an optimistic UI update via OPTIMISTIC_TODO_ADD, which inserts the todo into TodoStore.todos. In parallel, it knows that the change must be communicated to the server.
An external entity - APICall (which can be considered an action creator) - is responsible for the remote part of this action; when the response arrives, it follows its normal course to TodoStore, which can act accordingly.
If you make the stores directly responsible for remote content management, you add an extra layer of complexity to them, which made me less confident about the data state at any given point.
Let's imagine it:
class TodoActions
  # TodoActions is `dumb`, only passes data and action types to Dispatcher
  @add: (content) ->
    Dispatcher.handleAction({type: "TODO_ADD", todo: {content: content}})
    # APICall.addTodo({content: content})

class TodoStore extends EventEmitter
  # ...
  registerToDispatcher: ->
    # ...
      when "TODO_ADD"
        @todos.push(todo)
        # now the store has to push it to the server,
        # which means that it will have to call actions or the API directly = BAD
        # let's assume:
        APICall.addTodo({content: content})

        # it also generates some uncertainty about the nature of the event emit:
        # can this change guarantee that data was persisted on the server?
        @emit("CHANGE")
In my experience, the solution I presented first offers a nice way of doing optimistic UI updates, handling errors and displaying loading indicators.

Reto Schläpfer explains how he approaches this same problem with great clarity:
The smarter way is to call the Web Api directly from an Action Creator and then make the Api dispatch an event with the request result as a payload. The Store(s) can choose to listen on those request actions and change their state accordingly.
Before I show some updated code snippets, let me explain why this is superior:
There should be only one channel for all state changes: The Dispatcher. This makes debugging easy because it just requires a single console.log in the dispatcher to observe every single state change trigger.
Asynchronously executed callbacks should not leak into Stores. The consequences of it are just too hard to fully foresee. This leads to elusive bugs. Stores should only execute synchronous code. Otherwise they are too hard to understand.
Avoiding actions firing other actions makes your app simple. We use the newest Dispatcher implementation from Facebook that does not allow a new dispatch while dispatching. It forces you to do things right.
Full article:
http://www.code-experience.com/the-code-experience/

Related

Angular6: control data sharing inside application

In my Angular application, every single component subscribes to watch whether the user's Company changes. On app init I download the user's Company, so the subscription fires once in every component that subscribes to changes in the company state (this is necessary because I use the Company data in most of them). One of my components has a subscription to Company, and it downloads data once on init. When I change the view, the subscription is no longer fired, so I need to download the data again. The code looks like:
this.subscription = this
  .companyService
  .CompanyState
  .subscribe((company: Company) => {
    this.getSomeData()
  })
this.getSomeData()
I've tried adding a flag like needDownload with a default value of true, and setting it to false when the subscription fires this.getSomeData(), but it's async and doesn't work very well.
If I remove the subscription from this component, I stop watching changes on the Company state. If I remove this.getSomeData() from the end of this code, I won't get data when the component is initialized without a default emission on the subscription.
The problem is that I'm downloading the data twice, and I feel it should be possible to do it once.
In your service, you can define companySubject as a ReplaySubject instead of a Subject. The buffer size can be set to 1, so that it "replays" only the last emitted value.
private companySubject = new ReplaySubject<Company>(1);
A new view will be notified as soon as it subscribes to the observable, if CompanyState has already emitted a value. As a consequence, you can remove the direct call to getSomeData() in your component initialization code.
See this answer for details about the various Subject classes.
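As a minimal sketch of the service side (assuming RxJS 6+; CompanyService, loadCompany and the endpoint are hypothetical names, not from the question):

import { ReplaySubject } from 'rxjs';

export class CompanyService {
  constructor(http) {                 // Angular injects HttpClient here
    this.http = http;
    // buffer size 1: replay only the last emitted Company to late subscribers
    this.companySubject = new ReplaySubject(1);
    this.CompanyState = this.companySubject.asObservable();
  }

  loadCompany() {
    // called once on app init; later subscribers still receive the value
    this.http.get('/api/company')
      .subscribe(company => this.companySubject.next(company));
  }
}

Each component then only subscribes to CompanyState and can drop the trailing this.getSomeData() call, so the data is downloaded once.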

What is the purpose of having a didInvalidate property in the data structure of a react-redux app's state?

I'm learning from the react-redux docs on middleware and have trouble understanding the purpose of the didInvalidate property in the reddit example. It seems like the example goes through the middleware to let the store know about the process of making an API call, starting with INVALIDATE_SUBREDDIT, then REQUEST_POSTS, then RECEIVE_POSTS. Why is INVALIDATE_SUBREDDIT necessary? Looking at the actions below, I can only guess that it prevents multiple fetches from happening in case the user clicks 'refresh' very rapidly. Is that the only purpose of this property?
function shouldFetchPosts(state, subreddit) {
  const posts = state.postsBySubreddit[subreddit]
  if (!posts) {
    return true
  } else if (posts.isFetching) {
    return false
  } else {
    return posts.didInvalidate
  }
}

export function fetchPostsIfNeeded(subreddit) {
  return (dispatch, getState) => {
    if (shouldFetchPosts(getState(), subreddit)) {
      return dispatch(fetchPosts(subreddit))
    }
  }
}
You are close: didInvalidate is related to reducing server requests, but it is kind of the opposite of preventing fetches. It informs the app that it should go and fetch new data; the current data did 'invalidate'.
Knowing a bit about the lifecycle will help explain further. Redux uses mapStateToProps to help decide whether to redraw a Component when the global state changes.
When a Component is about to be redrawn, because the state (mapped to the props) changes for instance, componentDidMount is called. Typically if the state depends on remote data componentDidMount checks to see if the state contains a current representation of the remote data (e.g. via shouldFetchPosts).
You are correct that it is inefficient to keep making the remote call but it is shouldFetchPosts that guards against this. Once the required data has been fetched (!posts is false) or it is in the process of being fetched (isFetching is true) then the check shouldFetchPosts returns false.
Once there is a set of posts in the state then the app will never fetch another set from the server.
But what happens when the server side data changes? The app will typically provide a refresh button, which (as components should not change the state) issues an 'Action' (INVALIDATE_SUBREDDIT for example) which is reduced into setting a flag (posts.didInvalidate) in the state that indicates that the data is now invalid.
The change in state triggers the component redraw which, as mentioned, checks shouldFetchPosts which falls into the clause that executes return posts.didInvalidate which is now true, therefore firing the action to REQUEST_POSTS and fetching the current server side data.
So to reiterate: didInvalidate suggests a fetch of the current server side data is needed.
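A condensed reducer sketch in the spirit of the official example shows how these flags are flipped (action type strings inlined here for brevity):

function posts(state = { isFetching: false, didInvalidate: false, items: [] }, action) {
  switch (action.type) {
    case 'INVALIDATE_SUBREDDIT':
      // mark the cached data as stale; nothing is fetched yet
      return Object.assign({}, state, { didInvalidate: true })
    case 'REQUEST_POSTS':
      // a fetch is in flight; shouldFetchPosts will now return false
      return Object.assign({}, state, { isFetching: true, didInvalidate: false })
    case 'RECEIVE_POSTS':
      // fresh data arrived; it is no longer invalid
      return Object.assign({}, state, {
        isFetching: false,
        didInvalidate: false,
        items: action.posts
      })
    default:
      return state
  }
}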
The most up-voted answer isn't entirely correct.
didInvalidate is used to tell the app whether the data is stale or not. If true, the data should be re-fetched from the server. If false, we will use the data we already have.
In the official examples, firing INVALIDATE_SUBREDDIT will set didInvalidate to true. This Redux action can be dispatched as a result of a user action (clicking a refresh button), or something else (a countdown, a server push etc.)
However, firing INVALIDATE_SUBREDDIT alone will not initiate a new request to the server. It is simply used to determine whether we should re-fetch the data or use the existing data when we call fetchPostsIfNeeded().
Unless didInvalidate is set to true, the app will not let us fetch the data more than once. To refresh our data (e.g. after clicking a refresh button) we need to:
dispatch(invalidateSubreddit(selectedSubreddit))
dispatch(fetchPostsIfNeeded(selectedSubreddit))
Because we called invalidateSubreddit(), didInvalidate is set to true and fetchPostsIfNeeded() will initiate a re-fetch.
(This is why danmux's answer isn't entirely correct. The life cycle method componentDidMount will not be called when the state (which is mapped to the props) changes; componentDidMount is only called when the component mounts for the first time. So, the effect of hitting the refresh button will not appear until the component has been remounted, e.g. from a route change.)
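In component terms, this is roughly how the official example wires the refresh button inside a connected component (a sketch; prop names follow the docs example):

handleRefreshClick(e) {
  e.preventDefault()
  const { dispatch, selectedSubreddit } = this.props
  // mark the data stale, then let the thunk decide to re-fetch
  dispatch(invalidateSubreddit(selectedSubreddit))
  dispatch(fetchPostsIfNeeded(selectedSubreddit))
}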

Where do sockets fit into the Flux unidirectional data flow?

Where do sockets fit into the Flux unidirectional data flow? I have read 2 schools of thought for where remote data should enter the Flux unidirectional data flow. The way I have seen remote data for a Flux app fetched is when a server-side call is made, for example, in a promise that is then resolved or rejected. Three possible actions could fire during this process:
An initial action for optimistically updating the view (FooActions.BAR)
A success action for when an asynchronous promise is resolved (FooActions.BAR_SUCCESS)
An error action for when an asynchronous promise is rejected (FooActions.BAR_ERROR)
The stores will listen for the actions and update the necessary data. I have seen the server-side calls made from both action creators and from within the stores themselves. I use action creators for the process described above, but I'm not sure if data fetching via a web socket should be treated similarly. I was wondering where sockets fit into the diagram below.
There's really no difference in how you use Flux with WebSockets or plain old HTTP requests/polling. Your stores are responsible for emitting a change event when the application state changes, and it shouldn't be visible from the outside of the store if that change came from a UI interaction, from a WebSocket, or from making an HTTP request. That's really one of the main benefits of Flux in that no matter where the application state was changed, it goes through the same code paths.
Some Flux implementations tend to use actions/action creators for fetching data, but I don't really agree with that.
Actions are things that happen that modifies your application state. It's things like "the user changed some text and hit save" or "the user deleted an item". Think of actions like the transaction log of a database. If you lost your database, but you saved and serialized all actions that ever happened, you could just replay all those actions and end up with the same state/database that you lost.
So things like "give me item with id X" and "give me all the items" aren't actions, they're questions, questions about that application state. And in my view, it's the stores that should respond to those questions via methods that you expose on those stores.
It's tempting to use actions/action creators for fetching because fetching needs to be async. And by wrapping the async stuff in actions, your components and stores can be completely synchronous. But if you do that, you blur the definition of what an action is, and it also forces you to assume that you can fit your entire application state in memory (because you can only respond synchronously if you have the answer in memory).
So here's how I view Flux and the different concepts.
Stores
This is obviously where your application state lives. The store encapsulates and manages the state and is the only place where mutation of that state actually happens. It's also where events are emitted when that state changes.
The stores are also responsible for communicating with the backend. The store communicates with the backend when the state has changed and that needs to be synced with the server, and it also communicates with the server when it needs data that it doesn't have in memory. It has methods like get(id), search(parameters) etc. Those methods are for your questions, and they all return promises, even if the state can fit into memory. That's important because you might end up with use cases where the state no longer fits in memory, or where it's not possible to filter in memory or do advanced searching. By returning promises from your question methods, you can switch between returning from memory or asking the backend without having to change anything outside of the store.
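A minimal sketch of a store in this style (the api module and its methods are hypothetical placeholders for your backend wrapper):

var EventEmitter = require('events').EventEmitter;
var api = require('./api'); // hypothetical backend wrapper

var _todos = null; // in-memory cache, may be missing or incomplete

var TodoStore = Object.assign(new EventEmitter(), {
  // "question" method: always returns a promise, whether or not the
  // answer is already in memory
  getAll: function () {
    if (_todos) {
      return Promise.resolve(_todos);        // answer from memory
    }
    return api.fetchTodos().then(function (todos) {
      _todos = todos;                         // cache for next time
      return todos;
    });
  },

  onChange: function (listener) {
    this.on('change', listener);
  },

  // called from the dispatcher when an action mutates state
  _handleCreate: function (todo) {
    _todos = (_todos || []).concat([todo]);
    api.saveTodo(todo);                       // sync the mutation to the backend
    this.emit('change', _todos);
  }
});

module.exports = TodoStore;

Because getAll() returns a promise either way, you can later move filtering or searching to the server without touching any component.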
Actions
My actions are very lightweight, and they don't know anything about persisting the mutation that they encapsulate. They simply carry the intention to mutate from the component to the store. For larger applications, they can contain some logic, but never things like server communication.
Components
These are your React components. They interact with stores by calling the question methods on the stores and rendering the return value of those methods. They also subscribe to the change event that the store exposes. I like using higher order components which are components that just wrap another component and passes props to it. An example would be:
var TodoItemsComponent = React.createClass({
  getInitialState: function () {
    return {
      todoItems: null
    };
  },
  componentDidMount: function () {
    var self = this;
    TodoStore.getAll().then(function (todoItems) {
      self.setState({todoItems: todoItems});
    });
    TodoStore.onChange(function (todoItems) {
      self.setState({todoItems: todoItems});
    });
  },
  render: function () {
    if (this.state.todoItems) {
      return <TodoListComponent todoItems={this.state.todoItems} />;
    } else {
      return <Spinner />;
    }
  }
});

var TodoListComponent = React.createClass({
  createNewTodo: function () {
    TodoActions.createNew({
      text: 'A new todo!'
    });
  },
  render: function () {
    return (
      <div>
        <ul>
          {this.props.todoItems.map(function (todo) {
            return <li>{todo.text}</li>;
          })}
        </ul>
        <button onClick={this.createNewTodo}>Create new todo</button>
      </div>
    );
  }
});
In this example the TodoItemsComponent is the higher order component and it wraps the nitty-gritty details of communicating with the store. It renders the TodoListComponent when it has fetched the todos, and renders a spinner before that. Since it passes the todo items as props to TodoListComponent that component only has to focus on rendering, and it will be re-rendered as soon as anything changes in the store. And the rendering component is kept completely synchronous. Another benefit is that TodoItemsComponent is only focused on fetching data and passing it on, making it very reusable for any rendering component that needs the todos.
Higher order components
The term higher order components comes from the term higher order functions. Higher order functions are functions that return other functions. So a higher order component is a component that just wraps another component and returns its output.
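As a quick illustration of the function analogy (my example, not the answer's):

// a higher order function: it returns another function
function multiplier(factor) {
  return function (x) {
    return x * factor;
  };
}
var double = multiplier(2);
double(3); // 6

In the same way, TodoItemsComponent above wraps TodoListComponent and simply renders its output, adding the data-fetching concern around it.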

Service call in Fluxxor / React.JS

I'm very very new to react.js and Fluxxor and I haven't done web development for a while. :)
I was wondering where to put server calls (JQuery $.ajax()) in my code?
My actions are only dispatching calls like:
var actions = {
  onBlubb: function (data) {
    this.dispatch(cmd.BLUBB, data);
  }
};
Then I have one store which does some changes and calls the emit function to update the view. The whole cycle works fine (view, action, dispatcher, store).
Now I guess I should put my ajax call in my store class. Let's say I call my store "blubbStore".
But I want my store classes to be testable. That means I have to put the ajax call in another store class which basically does the server call and ...
Approach 1) ... triggers a success/failed action. This action is handled in blubbStore
Approach 2) ... stores the service call response in properties. Then blubbStore calls "WaitFor" and reads the data from this "service-caller-store" once the service call is done.
I guess approach 2 is not possible, because the WaitFor does not await asynchronous calls? That means approach 1 would be the solution?
(And the actions should dispatch only messages. right?)
Thanks
In my personal view and experience, it's better to put the async call in actions, following the logic in this image.
That way you can dispatch an event to show a loading screen, for example, and then, when the data is received, dispatch a new change with the data.
In the end I believe it's a personal choice; aim for the method that helps you handle your code better.
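For example, here is a sketch of approach 1 expressed as a Fluxxor action, reusing the question's cmd object and jQuery. The URL and the BLUBB_SUCCESS/BLUBB_FAILED constants are made up for illustration.

var actions = {
  onBlubb: function (data) {
    // dispatch immediately so blubbStore can show a loading state
    this.dispatch(cmd.BLUBB, data);

    var self = this;
    $.ajax({ url: '/api/blubb', method: 'POST', data: data })
      .done(function (response) {
        self.dispatch(cmd.BLUBB_SUCCESS, response);          // success action handled in blubbStore
      })
      .fail(function (xhr) {
        self.dispatch(cmd.BLUBB_FAILED, { status: xhr.status }); // error action
      });
  }
};

This keeps blubbStore synchronous and easy to test: it only reacts to dispatched action types.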

What is the purpose of the React.addons.batchedUpdates API?

The React v0.12 release announcement included the following:
New Features:
* React.addons.batchedUpdates added to API for hooking into update cycle
However I cannot find any documentation for this API. What is its purpose?
Specifically, any chance that it has an equivalent of Ember.run()?
When responding to synthetic events like onClick and so on, component state changes are batched so lots of calls to this.setState for the same component will only result in one render.
If you are changing state in response to some other async callback (e.g. AJAX or setTimeout) then every call to this.setState will result in a render. You can wrap your work in batchedUpdates(..) to avoid this.
var React = require('react/addons');
var batchedUpdates = React.addons.batchedUpdates;
var request = require('superagent'); // AJAX lib

var req = request('GET', ...).end(function (err, res) {
  // invoked when AJAX call is done
  batchedUpdates(function () {
    // ... all setState calls are batched and only one render is done ...
  });
});
The default batched update strategy is great for your average website. Sometimes you have extra requirements and need to deviate from that.
The initial reason this was made public is for a requestAnimationFrame batching strategy, which is better for games and sites that need to update often and in many places.
It's just an extensibility point to solve edge case issues.
