Flux: common practices to separate actions between action creators - javascript

If you look at the flux-chat example you'll see three action creators, which is quite a lot for such a small hello-world app. I'm writing a React+Flux application and wondering what the common practice is in bigger apps for separating actions between action creators. Should I create only one, or a separate one for each module? If I create a separate one for each module, then they will depend on each other; is that acceptable?

Action creators should all be on the same level of abstraction, meaning they should never depend on each other (notice that none of them depend on each other here: https://github.com/facebook/flux/tree/master/examples/flux-chat/js/actions). And while action creators shouldn't depend on each other, they can depend on multiple stores, and multiple actions can depend on the same store. This is what separates actions from action creators, since the actions are really defined inside a store. So if you need to access data from more than one store, you can do that in an action creator. (The example uses the thread store inside the message store, and that's bad practice: the thread data should be passed into the action rather than pulled in via the store.)
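For instance, here is a minimal sketch of an action creator that reads from a store and passes that data along in the action; the module paths and the CREATE_MESSAGE type are invented for illustration, not taken from flux-chat:

// Hypothetical action creator: it reads the current thread from ThreadStore
// and passes it along in the action payload, so MessageStore never has to
// reach into ThreadStore itself.
import AppDispatcher from '../dispatcher/AppDispatcher';
import ThreadStore from '../stores/ThreadStore';

const MessageActionCreators = {
  createMessage(text) {
    AppDispatcher.dispatch({
      type: 'CREATE_MESSAGE',
      text,
      threadID: ThreadStore.getCurrentID(), // store data enters via the action
    });
  },
};

export default MessageActionCreators;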
In the flux chat example, they have another concept called Utils (very general) which doesn't emit events or do anything to the application state. In fact, it doesn't really add to the flux architecture. It is just a concept born out of the DRY principle.
EDIT:
The reason action creators depend on stores is that this separation in abstraction specifies where logic belongs. As an analogy, you wouldn't worry about allocating memory when writing a function that just adds two integers; you want only the code relevant to the calculation, and you delegate low-level details to other methods. The stores and actions provide the same kind of difference in abstraction: actions work to connect separate services/stores together, while stores maintain how to interact with HTTP, websockets, and application state.

Related

Redux/sagas: Approach for larger APIs

I've worked with Redux/sagas workflows on small projects based off of this real-world example, but the logic of those is not nearly as complex. How should I approach working with a more comprehensive API (e.g., Reddit's API) without making things overly verbose?
Do I make a const for every endpoint? For example:
export const fetchUser = login => callApi(`users/${login}`, userSchema)
Should I be worried about managing the entity cache?
Is there a way to further reduce complexity/boilerplate (e.g. further grouping request types with get/put/post/delete for the same endpoint)?
Are there any examples out there that deal with something bigger/more complex than the real-world example?
I think the answer depends on how fluid you want your components to be.
I'm working on a large codebase using sagas; our pages are separated into "types", for example a "list" type, a "form" type, etc.
We have one saga responsible for fetching content, while each page component, when rendered, is responsible for supplying the endpoints.
This allows a very modular approach: to add a component you only need to deal with one subsection of your file system.
Our pages are mostly a configuration file that contains all this information, and we use this configuration to render a "generic" component with the correct data.
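As a rough sketch of that setup (all names here are invented for illustration, and callApi is an assumed helper), the shared fetching saga might look like this, with each page supplying its own endpoint in the dispatched action:

// One generic saga fetches content for any page type; the page component
// dispatches FETCH_CONTENT with the endpoint taken from its config file.
import { takeEvery, call, put } from 'redux-saga/effects';
import { callApi } from '../services/api'; // hypothetical API helper

function* fetchContent(action) {
  const { endpoint, pageId } = action.payload;
  try {
    const data = yield call(callApi, endpoint);
    yield put({ type: 'FETCH_CONTENT_SUCCESS', payload: { pageId, data } });
  } catch (error) {
    yield put({ type: 'FETCH_CONTENT_FAILURE', payload: { pageId, error } });
  }
}

export default function* contentSaga() {
  yield takeEvery('FETCH_CONTENT', fetchContent);
}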
Saga reusability
I see sagas as sequential processes: they can be used for fetching data asynchronously, but they're also useful for anything that needs to be dealt with in sequence.
These "flows" are sometimes very similar across a codebase, and those are the ones you want to generalize.
Like you said, the most common operations are CRUD for any endpoint, and those can easily be grouped together.
Login is very different from loadUserList, and different things need to happen afterwards; however, loadUserList and loadRepoList are extremely similar.
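A hedged sketch of that kind of grouping, again assuming a small callApi helper and invented action types, might be a factory that builds the nearly identical "load a collection" sagas:

// loadUserList and loadRepoList only differ by endpoint and action prefix,
// so one factory can produce both workers.
import { call, put, takeLatest } from 'redux-saga/effects';
import { callApi } from '../services/api'; // hypothetical API helper

function makeListSaga(prefix, endpoint) {
  return function* loadList() {
    try {
      const items = yield call(callApi, endpoint);
      yield put({ type: `${prefix}_SUCCESS`, payload: items });
    } catch (error) {
      yield put({ type: `${prefix}_FAILURE`, error });
    }
  };
}

export function* watchLists() {
  yield takeLatest('LOAD_USER_LIST', makeListSaga('LOAD_USER_LIST', 'users'));
  yield takeLatest('LOAD_REPO_LIST', makeListSaga('LOAD_REPO_LIST', 'repos'));
}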
Things that impact reusability
Your ability to control your APIs: if you can dictate the shape of the API you consume, you can get away with even more generalization on the front end.
The shape of the application (front-end-wise): are your pages strangely dependent on one another's state? For example, it's not uncommon for insurance applications to have forms that link to one another; you can fill in the first three forms in any order you want, but once all three are complete the fourth unlocks.
Each of these dependencies will normally have its own saga that controls the flow of your user story.
Does your application require syncing? You can easily create sagas that automatically sync data with your different endpoints and update your Redux state. There's much to consider here, including whether to interrupt the user with new data (we might want to let them know that the form they're editing has outdated data). Syncs require a distinct saga, as there are usually various business rules about when to sync what data; if the rules are very different, this can force you to create multiple sagas.
Common Sagas that can be unified
UserSagas - login, logout.
FetchData - fetch a single record, or a collection.
DeleteData - delete a single record, or a collection of IDs.
Data Syncing - updating your local data from a remote source periodically (see the sketch below).
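For the data-syncing case, a minimal polling sketch (the interval, endpoint, action types, and callApi helper are illustrative assumptions; delay is the redux-saga v1 effect) could look like:

// Refetch an endpoint periodically and push the result into the Redux state.
import { call, put, delay } from 'redux-saga/effects';
import { callApi } from '../services/api'; // hypothetical API helper

export function* syncSaga(endpoint, intervalMs = 30000) {
  while (true) {
    try {
      const data = yield call(callApi, endpoint);
      yield put({ type: 'SYNC_SUCCESS', payload: { endpoint, data } });
    } catch (error) {
      yield put({ type: 'SYNC_FAILURE', payload: { endpoint, error } });
    }
    yield delay(intervalMs); // wait before the next sync cycle
  }
}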
Regarding the entity cache
Entity cache is just a name they picked, but this goes back to the points mentioned before.
Does your application run on stale data, or do you fetch from the server every time your component is loaded?
If the data is only fetched once and you then display it even when it may be stale, you'll store it in a kind of cache (which is basically the Redux store).
If showing stale data is acceptable for your app, this is the way to go.
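As a rough illustration of that kind of cache (the slice names and payload shape are assumptions, loosely modeled on the real-world example's normalized entities), a reducer can simply merge entities by id so previously fetched data stays available between requests:

// Merge any normalized entities carried by an action into the store.
export default function entities(state = { users: {}, repos: {} }, action) {
  if (action.payload && action.payload.entities) {
    return {
      users: { ...state.users, ...action.payload.entities.users },
      repos: { ...state.repos, ...action.payload.entities.repos },
    };
  }
  return state;
}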

Redux: Why not put actions and reducer in same file?

I'm creating an app with Redux and am scratching my head as to why it is best to place actions and reducers in separate files. At least, that's the impression I'm getting from all the examples.
Each action, or action creator, appears to map to a single function that is called by a reducer (inside a switch statement). Wouldn't it be logical to keep these together in the same file? It also makes using the same constant for the action type and switch case easier, as it doesn't have to be exported/imported between files.
From Redux creator Dan Abramov:
Many reducers may handle one action. One reducer may handle many actions. Putting them together negates many benefits of how Flux and Redux applications scale. This leads to code bloat and unnecessary coupling. You lose the flexibility of reacting to the same action from different places, and your action creators start to act like "setters", coupled to a specific state shape, thus coupling the components to it as well.
From the Redux docs:
We suggest you write independent small reducer functions that are each responsible for updates to a specific slice of state. We call this pattern "reducer composition". A given action could be handled by all, some, or none of them. This keeps components decoupled from the actual data changes, as one action may affect different parts of the state tree, and there is no need for the component to be aware of this.
See this conversation on Twitter and this issue on GitHub for more information.
Keeping Actions and Reducers in separate files helps keep the code modular.
It can be easier to find bugs, extend the code, and in general work on the smallest piece possible.
Example:
Saving API error messages to the Redux store can be helpful.
If I forgot to update the store with the incoming error on one of the Reducers, that could be tough to find across multiple files.
If I'm looking at multiple Reducers in the same file, it'll be easier to see that one of them is missing the error: action.payload line.
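To make the "many reducers may handle one action" point concrete, here's an illustrative sketch (action types and payload shape are invented) of one failure action handled by two independent slice reducers, each of which could live in its own file:

// errorReducer.js
export function error(state = null, action) {
  switch (action.type) {
    case 'FETCH_USER_FAILURE':
      return action.payload; // the line that's easy to forget
    default:
      return state;
  }
}

// loadingReducer.js
export function isLoading(state = false, action) {
  switch (action.type) {
    case 'FETCH_USER_REQUEST':
      return true;
    case 'FETCH_USER_SUCCESS':
    case 'FETCH_USER_FAILURE':
      return false;
    default:
      return state;
  }
}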

Is it common practice to grab store data from an action creator in flux?

In the flux architecture, is it common practice to grab data from a store in an action creator? If not, would that mean that it's better to pass all needed data for network calls in through the component params?
I have an application with components nested three levels deep, and I'm just wondering how realistic it is to pass data from level 1 down to level 3.
Any explanation would be greatly appreciated.
It's fine for the stores' getters to be called in the action creators, but usually the action creator will call a WebAPIUtils module, where the actual call to the stores' getters will be found.
I would question the practice of passing anything through the view layer that isn't actually used by views (usually React components).
Network calls are usually made within a dedicated utility module. These are sometimes called DataLoaders or WebAPIUtils modules. They differ from other utility modules in that they often pull data out of stores before making the network calls.
Other utility modules should be libraries of pure functions, with very few dependencies, if any. This keeps them very portable.
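Here is a hedged sketch of that layering (the module paths, getter, and endpoint are hypothetical): the action creator delegates to a WebAPIUtils module, and that module reads the store's getter before making the network call:

// MessageActionCreators.js (hypothetical)
import WebAPIUtils from '../utils/WebAPIUtils';

export function sendMessage(text) {
  WebAPIUtils.createMessage(text); // the action creator just delegates
}

// WebAPIUtils.js (hypothetical)
import ThreadStore from '../stores/ThreadStore';

export default {
  createMessage(text) {
    const threadID = ThreadStore.getCurrentID(); // store getter is read here
    return fetch(`/api/threads/${threadID}/messages`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text }),
    });
  },
};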

Flux architecture misunderstanding in example chat app

I'm trying to understand the Flux example chat app. The authors mention this unidirectional data flow (diagram omitted).
However, in the example app there are dependencies between Action Creators (ChatMessageActionCreators) and Stores (MessageStore), and between Stores (MessageStore, ThreadStore) and Web API Utils (ChatMessageUtils), which seems to go against the unidirectional data flow rule.
Is it recommended to follow the given example, or should one design a better pattern?
Update
I figured out that ChatMessageUtils doesn't belong to the Web API Utils, so the two arrows from the stores shouldn't point there, and therefore maybe they're okay.
However, the connection between the ActionCreators and the Store still seems strange.
The example is a bit forced, and it was created with the purpose of trying to show how waitFor() works. The WebAPI aspect of the example is pretty half-baked and really should be revised.
However, even though MessageStore.getCreatedMessageData(text) passes a value to the store, it's still a getter. It's not setting data on the store. It's really being used as a utility method, and a good revision (pull request?) would be to move that method to a Utils module.
To improve upon the example for the real world, you might do a couple things:
Call the WebAPIUtils from the store, instead of from the ActionCreators. This is fine as long as the response calls another ActionCreator, and is not handled by setting new data directly on the store. The important thing is for new data to originate with an action. It matters more how data enters the system than how data exits the system.
Alternatively, you might want to have separate client-side vs. server-side IDs for the messages. There might be a few advantages to this, like managing optimistic rendering. In that case, you might want to generate a client-side id in a Utils module, and pass that id along with the text to both the dispatched action and the WebAPIUtils.
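A rough sketch of the first suggestion (ServerActionCreators and the endpoint are invented names): the store triggers the API call, but the server's response re-enters the system through a new action rather than being set on the store directly:

// WebAPIUtils.js (hypothetical): called by the store after it handles the
// CREATE_MESSAGE action; the response comes back in through a new action.
import ServerActionCreators from '../actions/ServerActionCreators';

const WebAPIUtils = {
  createMessage(text, threadID) {
    return fetch(`/api/threads/${threadID}/messages`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text }),
    })
      .then(response => response.json())
      // new data originates with an action, not by setting the store directly
      .then(message => ServerActionCreators.receiveCreatedMessage(message));
  },
};

export default WebAPIUtils;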
All that said, yes the example needs revision.

React Flux: Store dependencies

So I have recently been playing with React and the Flux architecture.
Let's say there are 2 stores A and B. A has a dependency on B, because it needs some value from B. So every time the dispatcher dispatches an action, first B.MethodOfB gets executed, then A.MethodOfA.
I am wondering what the advantages of this architecture are over registering A as a listener of B and just executing A.MethodOfA every time B emits a change event?
By the way: think about a Flux implementation without the "switch case" of the example dispatcher from Facebook!
The problem with an evented approach is that you don't have a guarantee as to which handlers will handle a given event first. So in a very large, complex app, this can turn into a tangled web where you're not really sure what is happening when, which makes dependency management between stores very difficult.
The benefits of the callback-based dispatcher are twofold: the order in which stores update themselves is declared in the stores that need this ordering, and it is also guaranteed to work exactly as intended. And this is one of the primary purposes of Flux -- getting the state of an app to be predictable, consistent and stable.
In a very small app that is guaranteed to not grow or change over time, I can't argue with what you are suggesting. But small apps have a tendency to grow into large ones, eventually. This often happens before anyone realizes it's happening.
There are certainly other approaches to Flux. Quite a few different implementations have been created and they have different approaches to this problem. However, I'm not sure which of these experiments scale well. On the other hand, the dispatcher in Facebook's Flux repo and the approach described in the documentation has scaled to truly gigantic applications and is quite battle tested.
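Here is a small sketch of that ordering guarantee using the Dispatcher from Facebook's flux package; store A declares that it must wait for store B, so the update order is explicit and guaranteed (the store logic itself is made up):

import { Dispatcher } from 'flux';

const dispatcher = new Dispatcher();

let bState = null;
const storeBToken = dispatcher.register(action => {
  if (action.type === 'SOME_ACTION') {
    bState = action.value; // B updates itself first
  }
});

dispatcher.register(action => {
  if (action.type === 'SOME_ACTION') {
    dispatcher.waitFor([storeBToken]); // block until B has handled the action
    console.log('A can now safely read the value B derived:', bState);
  }
});

dispatcher.dispatch({ type: 'SOME_ACTION', value: 42 });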
In my opinion, this dispatcher is something of an anti-pattern.
In distributed architectures based on event sourcing or CQRS, autonomous components do not have to depend on each other, as they share the same event log.
It's not because you are on the same host (the browser or a mobile device) that you can't apply these concepts. However, having autonomous stores (no store dependencies) means that two stores in the same browser will probably hold duplicate data, as the same data may be needed by two different stores. This is a cost to pay, but I think in the long term it has benefits, as it removes the store dependencies. It means you can refactor one store entirely without any impact on components that do not use that store.
In my case, I use such a pattern and create some kind of autonomous widgets. An autonomous widget is:
A store that listens to an event stream
A component
A LESS file (maybe useless now because of React Styles?)
The advantage of this is that if there is a bug in a given widget, the bug almost never involves any file other than the three mentioned above ;)
The drawback is that stores that host the same data must also maintain it. On some events, many stores may have to do the same action on their local data.
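As an illustrative sketch (all names invented), two autonomous stores might each keep their own copy of user names and each handle the same event independently, instead of one store depending on the other:

// Each store duplicates the data it needs and maintains it on its own.
const userListStore = {
  users: [],
  handle(event) {
    if (event.type === 'USER_RENAMED') {
      this.users = this.users.map(u =>
        u.id === event.userId ? { ...u, name: event.newName } : u);
    }
  },
};

const chatStore = {
  messages: [], // each message carries its author's name (duplicated data)
  handle(event) {
    if (event.type === 'USER_RENAMED') {
      this.messages = this.messages.map(m =>
        m.authorId === event.userId ? { ...m, authorName: event.newName } : m);
    }
  },
};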
I think this approach scales better to larger projects.
See my insights here: Om but in javascript
