So I have recently been playing with React and the Flux architecture.
Let's say there are two stores, A and B. A has a dependency on B, because it needs some value from B. So every time the dispatcher dispatches an action, first B.MethodOfB gets executed, then A.MethodOfA.
I am wondering what the advantages of this architecture are over registering A as a listener of B and just executing A.MethodOfA every time B emits a change event?
Btw: think of a Flux implementation without the "switch case" of Facebook's example dispatcher!
The problem with an evented approach is that you don't have a guarantee as to which handlers will handle a given event first. So in a very large, complex app, this can turn into a tangled web where you're not really sure what is happening when, which makes dependency management between stores very difficult.
The benefits of the callback-based dispatcher are twofold: the ordering between stores is declared in the stores that actually need it, and that ordering is guaranteed to hold exactly as intended. This serves one of the primary purposes of Flux -- making the state of an app predictable, consistent and stable.
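To make the ordering concrete, here is a minimal sketch using the Dispatcher from Facebook's flux package; the store bodies are left as stubs:

```js
import { Dispatcher } from 'flux';

const dispatcher = new Dispatcher();

// Store B registers first and keeps its registration token.
const storeBToken = dispatcher.register((action) => {
  // ...B updates itself for this action...
});

// Store A declares its dependency on B explicitly and declaratively:
dispatcher.register((action) => {
  // Blocks until B's callback has finished handling this action.
  dispatcher.waitFor([storeBToken]);
  // A can now safely read the values it needs from B.
});
```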
In a very small app that is guaranteed to not grow or change over time, I can't argue with what you are suggesting. But small apps have a tendency to grow into large ones, eventually. This often happens before anyone realizes it's happening.
There are certainly other approaches to Flux. Quite a few different implementations have been created, and they take different approaches to this problem. However, I'm not sure which of those experiments scale well. On the other hand, the dispatcher in Facebook's Flux repo and the approach described in its documentation have scaled to truly gigantic applications and are quite battle-tested.
In my opinion, this dispatcher is something of an anti-pattern.
In distributed architectures based on event sourcing or CQRS, autonomous components do not have to depend on each other, as they share the same event log.
Being on a single host (the browser or a mobile device) doesn't mean you can't apply these concepts. However, having autonomous stores (no store dependencies) means that two stores in the same browser will probably hold duplicate data, since the same data may be needed by two different stores. That is a cost to pay, but I think it has long-term benefits because it removes the store dependencies: you can refactor one store entirely without any impact on components that do not use that store.
In my case, I use such a pattern to create a kind of autonomous widget. An autonomous widget is:
A store that listens to an event stream
A component
A LESS file (maybe useless now because of React Styles?)
The advantage of this is that if there is a bug in a given widget, it almost never involves any file other than the three mentioned above ;)
The drawback is that stores hosting the same data must each maintain it: on some events, many stores may have to apply the same change to their local data.
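A hedged sketch of what such an autonomous store might look like; the `eventLog` API and the event names are invented for illustration:

```js
import { EventEmitter } from 'events';

// Hypothetical: `eventLog` is the shared event stream all stores consume.
class UserBadgeStore extends EventEmitter {
  constructor(eventLog) {
    super();
    // Local copy of user data; another store needing users keeps its own.
    this.usersById = {};
    eventLog.subscribe((event) => {
      if (event.type === 'USER_RENAMED') {
        this.usersById[event.userId] = {
          ...this.usersById[event.userId],
          name: event.name,
        };
        this.emit('change');
      }
    });
  }
}
```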
I think this approach scales better to larger projects.
See my insights here: Om but in javascript
I have recently started working in VueJS, and I have been instructed by one of the lead devs never to combine emitted events and the Vuex store. Basically: if the project uses a store, take all the events/state through the store.
From one point of view I can understand this, but there are a lot of scenarios in which emitting an event is much faster than taking everything through the store.
Is it really best practice not to combine Vuex and emitted events?
As a lead developer using Vue myself, I find this arbitrary rule simply narrow-minded.
When using Vuex and deciding to use an emit or not, I look at the relationship. If I have a component that only needs to interact with its parent, then I use an emit. It keeps the store cleaner and the relationships clearer. Your lead is not making scalable or maintainable code.
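As a rough sketch of where I draw the line (component and event names are invented; this assumes Vue 3 with Vuex installed):

```js
export default {
  name: 'RatingPicker',
  emits: ['rating-changed'], // Vue 3; omit this option on Vue 2
  methods: {
    pick(value) {
      // Parent-child concern: emit, and keep the store out of it.
      this.$emit('rating-changed', value);
      // A truly global concern would go through the store instead:
      // this.$store.commit('setCurrentRating', value);
    },
  },
};
```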
If he/she argues you shouldn't use emits when you have a store, then following that logic, you shouldn't use props ever either. That is equally nonsensical.
Once you start working with applications whose component trees run several levels deep, you'll realize that jamming the store with every variable needed by just a few components far down the hierarchy creates a horrible mess of things.
I disagree with your lead. Vuex should only be used for data that is truly global. Any data/events shared only between a component and its child should go through emit/props.
While there can/should be debate about what should use the store vs props and emit, a blanket "always use store" is almost certainly wrong and will lead to a needlessly bloated store.
I am considering moving my reducers from plain JS to Immutable.js.
It will take a few days to understand the Immutable.js API and do the refactor along with the tests, so I want to give some thought to whether this transition is necessary. My motivation is the fact that I am currently duplicating the state on every change:
let newState = {...state};
This is expensive, and it leads me to forget to clone deep objects from time to time.
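To illustrate what I mean by forgetting to clone deep objects (field names invented):

```js
const state = { filters: { query: 'abc' } };

const newState = { ...state };     // copies only the top level
newState.filters.query = 'xyz';    // mutates state.filters as well!

console.log(state.filters.query);  // 'xyz' (the old state was corrupted)
```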
After googling the issue for the last few days, I still don't understand whether moving my reducers to Immutable.js will result in a performance hit, and whether I would need to go through my components and containers and use state.toJS() in each one.
What is the performance hit of moving to Immutable.js, especially when I use undo and keep multiple steps? Will I have to use .toJS() every time I need data in my components/containers?
Short answer about performance: it depends. To quote Dan Abramov here,
In short: create a sample app that imitates the kind of data size and change speed you expect in your app, and profile it. No other way to tell if dropping Immutable is going to work for you.
One of the major benefits of using a library like Immutable.js is, as you mentioned, that it prevents you from forgetting to clone deep objects, which can lead to really nasty bugs that are hard to track down. Likewise, undo can be much easier with Immutable if you keep track of previous states, whereas it is much more involved without an immutable library, since you basically have to deep-clone the state before creating the new state.
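For example, here's a rough sketch of undo with Immutable.js; the state shape is invented, and it's only meant to show that history becomes a list of cheap references rather than deep clones:

```js
import { Map, List } from 'immutable';

let present = Map({ count: 0 });
let past = List();

function update(fn) {
  past = past.push(present); // stores a reference, not a deep clone
  present = fn(present);
}

function undo() {
  if (!past.isEmpty()) {
    present = past.last();
    past = past.pop();
  }
}

update((s) => s.set('count', 1));
undo(); // present is the original Map again
```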
I think it's worth a try, but you can always move a few reducers at a time to using immutable instead of migrating your entire app. That way you can profile performance impacts and see if it's worth it to migrate the entire app.
My own personal opinion is that use of Immutable.js is mostly overrated for most situations. I wrote an extended comment on Reddit describing my concerns, at (Dan Abramov: Redux is not an architecture or design pattern, it is just a library). I'll paste the TL;DR here:
Overall, my impression is that the performance benefits are overrated, it's too easy to make mistakes in usage that are actually a net performance negative, and you either have to go all-in on the API everywhere in your codebase or be very sure you know when you're using Immutable types vs plain JS and do conversions all over the place.
So yes, you generally either have to use toJS(), or explicitly call state.getIn() to extract pieces of data.
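Roughly, the two options look like this in a mapStateToProps function (field names invented; this assumes the whole state is an Immutable Map):

```js
const mapStateToProps = (state) => ({
  // Cheap: extracts a leaf value, easy to compare.
  userName: state.getIn(['user', 'name']),

  // Expensive trap: allocates a fresh plain object on every call,
  // defeating shallow-equality checks in connect():
  // user: state.get('user').toJS(),
});
```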
My React/Redux links list has a section on React performance, which includes several articles regarding Immutable.js performance (including some pitfalls, like overuse of toJS()): react-redux-links.
I have a react/redux application which has become large enough to need some performance optimizations.
There are roughly 100 unique components, which are updated via websocket events. When many events occur (say ~5/second), the browser starts to slow down significantly.
Most of the state is kept in a redux store as Immutable.js objects. The entire store is converted to a plain JS object and passed down as props through the component tree.
The problem is that when one field updates, the entire tree updates, and I believe this is where there is the most room for improvement.
My question:
If the entire store is passed through all components, is there an intelligent way to prevent components from updating, or do I need a custom shouldComponentUpdate method for each component, based on which props it (and its children) actually use?
You really don't want to do things that way. First, as I understand it, Immutable's toJS() is fairly expensive. If you're doing that for the entire state every time, that's not going to help.
Second, calling toJS() right away wastes almost the entire benefit of using Immutable.js types in the first place. You really want to keep your data in Immutable-wrapped form all the way down to your render functions, so that you get the benefit of the fast reference checks in shouldComponentUpdate.
Third, doing things entirely top-down generally causes a lot of unnecessary re-rendering. You can get around that if you stick shouldComponentUpdate on just about everything in your component tree, but that seems excessive.
The recommended pattern for Redux is to use connect() on multiple components, at various levels in your component tree, as appropriate. That will simplify the amount of work being done, on several levels.
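A hedged sketch of that pattern (the component and state shape are invented; this assumes the store holds Immutable data, as described above):

```js
import { connect } from 'react-redux';
import TickerRow from './TickerRow'; // hypothetical presentational component

const mapStateToProps = (state, ownProps) => ({
  // Only this row re-renders when its own ticker's Immutable value changes.
  ticker: state.getIn(['tickers', ownProps.tickerId]),
});

export default connect(mapStateToProps)(TickerRow);
```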
You might want to read through some of the articles I've gathered on React and Redux Performance. In particular, the recent slideshow on "High Performance Redux" is excellent.
Update:
I had a good debate with another Redux user a couple days before this question was asked, over in Reactiflux's #redux channel, on top-down vs multiple connections. I've copied that discussion and pasted it in a gist: top-down single connect vs multiple lower connects.
Also, yesterday there was an article posted that conveniently covers exactly this topic of overuse of Immutable.js's toJS() function: https://medium.com/@AlexFaunt/immutablejs-worth-the-price-66391b8742d4. Very well-written article.
Is there any practical difference between keeping several simple (plain) subscriptions and keeping a single complex (many levels) one? (with publish-composite, for example)
Seems to me that there shouldn't be any difference, but I wanted to be sure. I prefer sticking to plain subs as it seems to make the code clearer in highly modular projects, but only if that wouldn't bring any performance or scalability issues.
So, can someone help me?
There are two key differences between keeping several plain subscriptions and keeping a complex composite subscription:
1) Exposure/Privacy
A composite subscription allows you to perform joins/filters on the server side to ensure that you only send data that the current user has authority to see. You don't want to expose your entire database to the client. Keep in mind that even if your UI is not showing the data, the user can go into the console and grab all the data that your server publishes.
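For example, a minimal publication that filters server-side by the logged-in user (collection and field names invented):

```js
Meteor.publish('myComments', function () {
  // `this.userId` is only available with a classic function, not an arrow.
  if (!this.userId) {
    return this.ready();
  }
  return Comments.find({ authorId: this.userId });
});
```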
2) Client performance
Performing joins/filters on the client can be expensive if you have a large dataset; this is of course dependent on your application. Additionally, if the database is constantly being updated with changes that should not be visible to the user, you will constantly be transferring those updates to the client without deriving any benefit for the network expense.
I think this question can't be given a precise answer without more details specific to your application. That being said, I think it's an important question, so I'll outline some things to consider.
To be clear, the focus of this answer will be debating the relative merits of server-side and client-side reactive joins.
decide if you need reactivity
You can produce a simple join of multiple collections without any reactivity in the publisher (see the first example from the article above). Depending on the nature of the problem, it may be that you don't really need a reactive join. Imagine you are joining comments and authors, but your app always has all of the possible authors published already. In that case the fundamental flaw in non-reactive joins (missing child documents after a new parent) won't exist, so a reactive publication is redundant.
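As a sketch, a simple non-reactive join might look like this (collection names invented); the author list is computed once when the subscription starts, which is exactly the flaw described above:

```js
Meteor.publish('commentsWithAuthors', function (postId) {
  // Computed once at subscribe time: an author who appears later
  // will not be pulled in until the client resubscribes.
  const authorIds = Comments.find({ postId })
    .fetch()
    .map((comment) => comment.authorId);

  return [
    Comments.find({ postId }),
    Meteor.users.find({ _id: { $in: authorIds } }),
  ];
});
```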
consider your security model
As I mention in my article on template joins, server-side joins have the advantage of bundling all of your data together, whereas client-joins require more granular publishers. Consider the security implications of having a publisher like commentsAndAuthors vs two generic implementations of comments and users. The latter suggests that anyone could request an array of user documents without context.
server joins can be CPU and memory hogs
Look carefully at the implementation of the library you are considering for your server-side joins. Some of them use observe which requires that each complete document in the dependency chain be kept in memory. Others are implemented only on observeChanges which is more efficient but makes packages a bit less flexible in what they can do.
look for observer reuse
One of your goals should be to reuse observers. In other words, given S concurrent subscriptions, you will only end up doing ~(S-I) units of work, where I is the number of identical observers shared across clients. Depending on the nature of your subscriptions, you may see greater observer reuse with more granular subscriptions, but this is very application-specific.
beware of latency
A big advantage of server-side joins is that they deliver all of the documents effectively at once. Compare that to a client join, which must wait for each set of parent documents to arrive before activating the child subscriptions. An N-level client join requires N round trips before the initial set of documents is delivered to the client.
conclusion
You'll need to take all of the above into consideration when deciding which technique to use for each of your publications. The reality is that benchmarking a live app on something like Kadira is the only way to arrive at a conclusive answer.
In most modern JS frameworks, the best practice for loosely coupling your UI components is some implementation of pub/sub.
The question I have is: doesn't this make debugging and maintaining your app more difficult, when dependency injection could achieve the same result (loose coupling)?
For example, my component should open a dialog when clicked. This needs to happen or else the UI will appear broken. To me it seems more readable to have the component make an explicit call to some dialog service. In the wild, I see this problem solved more often with pub/sub, so maybe I am missing something.
When using both methods together, where is a good place to draw the line between firing an event and fulfilling the action with an injected service?
Pub-sub is great for application-wide events where the number of potential subscribers can vary or is unknown at the moment of raising an event.
Injection always sets up a relation between exactly two parties. Sure, you can create decorators/composites and inject compound objects made of simple objects, but it gets messy the moment you start doing that.
Take a tree and a list for example. When you click a node, the list should be rebuilt. Sounds like injection.
But then you realize that some nodes trigger other actions, headings are updated, processes are triggered in the background etc. Raising an event and handling it in many independent subscribers is just easier.
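A toy sketch of why (the bus and the controllers are invented stand-ins):

```js
const bus = {
  handlers: {},
  on(event, fn) {
    (this.handlers[event] = this.handlers[event] || []).push(fn);
  },
  emit(event, payload) {
    (this.handlers[event] || []).forEach((fn) => fn(payload));
  },
};

// Invented stand-ins for independent subscribers:
const listController = { rebuild(node) { /* rebuild the list */ } };
const headingController = { update(title) { /* refresh the heading */ } };

// Subscribers are added without the tree knowing about any of them.
bus.on('node:selected', (node) => listController.rebuild(node));
bus.on('node:selected', (node) => headingController.update(node.title));

// The tree just announces what happened.
bus.emit('node:selected', { id: 42, title: 'Reports' });
```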
I would say that injection works great through layers, when you compose a view and a controller or a view and its backing storage.
Pub-sub works great between objects in the same layer, for example different independent controllers exchange messages and react accordingly.