I'm trying to understand why Redux is designed the way it is. For example suppose I have a store that contains a list of todos.
If the store is effectively an object like this:
{1: todo1,
2: todo2,
3: todo3, ...}
And we wrap it in a class that allows us to do things like:
todoStore.getAll()
todoStore.add(todo)
todoStore.get(id);
todoStore.get([1,2,3,...]);
todoStore.filter((todo)=> todo.id == 1);
todoStore.del(1);
todoStore.update(2, {title: 'new title'}); // update(id: string, partial: Partial<Todo>)
....
So in this case all we have is a Todo model and a TodoStore that has an API that allows us to query / filter, delete, update, and add items.
Why does Redux need the Actions and the Reducers?
One of the answers indicates that:
Instead Redux uses a pattern that, when given a state and action will always produce the same new state.
So it seems that because of this pattern we need actions and reducers, but to me these look like internal implementation details. Why can't the TodoStore just implement these internally?
For example if we have 1 todo instance in the cache and we add another one, we now have 2. This seems like a pretty simple thing to implement ... but I must be missing something ...
Background
I was thinking about implementing something like
@TodoStore
class Todo {
}
The annotation / decorator would generate the store and clients would then get the store via something like:
todoStore:TodoStore = StoreCache.get(Todo);
todos:Observable<Todo[]> = todoStore.getAll();
...
Etc. It seems like it could be this simple ... so I'm just wondering what Redux provides that this might be missing. In other words, why did Redux decide that it needed actions and reducers instead of a simple Store<Type>-like interface?
Yet another way of looking at it: do the reducers and actions add something that the Store<Type> interface cannot, given the way it is implemented and language constraints?
Assertions
The action is the method name combined with the entity name. So for example if we have Store<Todo> (A Store type that operates on todo types), and say an update method such as update(id:String, todo:Todo), then we effectively have the name of the Action which would be TODO UPDATE. If the second argument were plural, so update(id:String, todos:Todo[]), then the action is TODO UPDATES ...
If we are doing updates we have to find the instances we are updating and update them, and we typically do this with an ID. Once the updates are complete we can track them in an immutable state tree if we wish to do so, and for example the entire change could be wrapped in a command object instance so that we could undo / replay it. I believe the Eclipse EMF framework API has a good model for this, enabling Eclipse undo / redo functionality for generated models.
This question seems a bit broad or opinion based but I'll give it a shot.
The Redux store is immutable, so you cannot mutate it directly. Instead Redux uses a pattern that, given a state and an action, will always produce the same new state. The actions are exactly that: they tell the store what change to perform, and the reducers execute that change and return a new state.
This may feel odd if you come from a mutable object oriented background but it allows you to walk through the state changes, go back in history and replay actions, etc. It's a powerful pattern.
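That pattern is easy to sketch. Here is a minimal, hypothetical reducer for the todo store above (the action names TODO_ADD / TODO_DELETE are made up for illustration; this is not Redux's actual API):

```javascript
// A reducer: (state, action) => newState. Given the same inputs it always
// returns the same output, and it never mutates the state it receives.
function todosReducer(state = {}, action) {
  switch (action.type) {
    case 'TODO_ADD':
      return { ...state, [action.id]: action.todo };
    case 'TODO_DELETE': {
      const { [action.id]: _removed, ...rest } = state;
      return rest;
    }
    default:
      return state;
  }
}

const s0 = {};
const s1 = todosReducer(s0, { type: 'TODO_ADD', id: 1, todo: { title: 'buy milk' } });
// s1 now holds the todo; s0 is still an empty object.
```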
This is waaay too long for a comment, and may be an answer, but it's more about the concepts of why Redux exists than the nitty-gritty details of its implementation.
Consider the deceptively simple assignment statement:
var foo = {bar: 1};
foo.bar = 3;
Simple right? Except... what if I need the previous value of foo.bar? What if I need to store a reference to the state transition itself so I can e.g. re-play it? Statements are not first-class entities in JavaScript:
var transition = foo.bar = 3;
isn't really meaningful. You could wrap it in a lambda expression:
var transition = () => { foo.bar = 3 };
But this fails to capture the transition semantics: it just sets the state of foo.bar to 3. What if the previous state matters for the next? How do you share foo? Stuff it in a global and hope no one mutates it on you? But if foo has clear semantics around its state changes and is otherwise immutable, well...
There are some useful properties that fall out of this:
Code-reloading. If all of your state changes are first-class, you can reload your code and then get back to the state you were in simply by replaying them.
Reproducing on the server. What if production bugs dumped the state transitions that created them? Ever failed to repro a bug? Thing of the past.
Undo is trivial.
There are others, but you get the point. Now, you may not need all that, and it may not be worth the loss of flexibility (and Dan Abramov, the author of Redux, would agree). But there is a lot to be gained by giving state transitions first-class status in your system.
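To make the replay point concrete, here is a sketch (all names hypothetical) of treating a state transition as a plain object that can be logged and later folded back over the initial state:

```javascript
// Transitions as plain data: a log of actions plus a pure reducer is enough
// to reproduce any state by folding the log over the initial state.
const reducer = (state, action) =>
  action.type === 'SET_BAR' ? { ...state, bar: action.value } : state;

const log = [];
function dispatch(state, action) {
  log.push(action); // the transition itself is now a first-class value
  return reducer(state, action);
}

let state = { bar: 1 };
state = dispatch(state, { type: 'SET_BAR', value: 3 });

// Replay: same initial state + same log = same final state.
const replayed = log.reduce(reducer, { bar: 1 });
```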
Related
In React Query, the useQuery(..) hook takes a key that can contain complex dependencies (in an array). Or even just an int, like a todoId that can change (cf. the documentation).
Or a filters object like below:
function Component() {
const [filters, setFilters] = React.useState()
const { data } = useQuery(['todos', filters], () => fetchTodos(filters))
// ✅ set local state and let it "drive" the query
return <Filters onApply={setFilters} />
}
I'm unable to find an explanation of how it monitors changes under the hood.
While the hashing of the key is well explained in the source code and this blog post, the event handling / monitoring of the changing value is a mystery to me.
So the question is: how does it keep track of changes, even inside complex types passed in the query key array? Is there some introspection happening, connecting events to value and/or reference changes?
PS: The question also applies to dependencies in the useEffect(..) hook. This is a general source of perplexity for me, coming from non-interpreted languages.
Ok, since my comment was stolen as an answer even without a mention, I just repost it as an answer myself:
Query restarts when key hash changes.
how does the system know to recompute and compare the Hashkey? How does it "respond" to a change?
It recomputes hash on every render basically, no magic here.
The hash algorithm is an implementation detail, but by default it uses JSON.stringify (the only twist is that object keys are sorted).
By contrast, the useEffect hook compares deps just by reference (technically it uses strict equality, tc39.es/ecma262/#sec-isstrictlyequal, i.e. ===).
The query keys are hashed deterministically. Basically, we JSON.stringify the key, but sort the keys of objects inside it so that they are stable. After that, we have just strings (you can also see them in the devtools), and strings are easy to compare to see if something changed (just ===).
how does the system know to recompute and compare the Hashkey?
we just do this on every render.
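A rough sketch of that idea (not React Query's actual implementation, just the principle): serialize the key with JSON.stringify, sorting object keys along the way so that property order doesn't matter:

```javascript
// Deterministically serialize a query key: sort object keys so that
// { a: 1, b: 2 } and { b: 2, a: 1 } produce the same string.
function hashKey(key) {
  return JSON.stringify(key, (_, value) =>
    value && typeof value === 'object' && !Array.isArray(value)
      ? Object.keys(value).sort().reduce((acc, k) => {
          acc[k] = value[k];
          return acc;
        }, {})
      : value
  );
}
```

Two renders with equal keys then produce identical strings, and a plain `===` comparison tells the library whether the query needs to restart.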
This question already has answers here:
Why can't I directly modify a component's state, really?
(7 answers)
Closed 3 years ago.
Assume this is my state:
state = {
  user: {
    name: 'Joe',
    condition: {
      isPrivate: true,
      premium: false
    }
  }
}
And these are the methods I can use to update user:
updateUser = (property, value) => {
  // first way: probably not a good one
  let user = this.state.user;
  user[property] = value;
  this.setState({ user });

  // second way: probably the best way
  user = JSON.parse(JSON.stringify(this.state.user));
  user[property] = value;
  this.setState({ user });
}
Although I know modifying the state directly is not good practice, I'm getting the same result from both of them, with no side effects so far.
So why should I take the extra step of copying the state and then modifying the copy, when it slows the operation down (however little)?
So which one would be faster? What would be the side effects of the first method in the context of React? And finally, what are the pros and cons of each method?
With your first method of updating state, you are getting a reference to the object nested in your state.
let user = this.state.user;
user[property] = value;
In this chunk you have already updated the state, so you are actually performing a side effect. The call to setState() just reflects those changes in the UI (i.e. re-rendering the component).
The reason for not modifying the state directly is the risk of unintentional updates. For example, suppose you want to make an API call by modifying some of the data in this.state and sending it as the body of the request (note that you don't want these updates reflected in the UI). Modifying the state directly, as in method 1, could cause unwanted changes in the state, and subsequent calls to setState() might expose those unwanted changes to the user of the application.
However, in your example either method happens to work; it's just not good practice.
Hope this helps!
The basic idea is avoid mutating objects, create new objects instead.
This mantra means that you should avoid direct mutations to the JavaScript objects you have in memory; instead, you should create a new object each time.
You can use the ES6 spread operator in order to get a clone of your object. With the spread operator you can also update the properties of the clone, so that you perform the required update to the object properties.
This is the code you need:
updateUser = (property, value) => {
  const user = { ...this.state.user, [property]: value }; // clone the current user and override one property
  this.setState({ user });
}
The three dots in the syntax above are not a typo; they are the aforementioned ES6 spread operator.
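One caveat worth adding: the spread is shallow, so to update a nested property such as condition.premium in the question's state you have to spread each level you pass through. A sketch (updatePremium is a hypothetical helper name):

```javascript
// Shallow spread copies only one level; nested objects must be spread too,
// reusing the untouched branches and overriding only what changed.
function updatePremium(user, premium) {
  return { ...user, condition: { ...user.condition, premium } };
}

const user = { name: 'Joe', condition: { isPrivate: true, premium: false } };
const next = updatePremium(user, true);
// user is untouched; next carries the updated condition.
```

In the component this would be wired up as `this.setState({ user: updatePremium(this.state.user, true) })`.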
Based on my knowledge (I'm quite new to React), there are basically three reasons to avoid direct state mutation:
recalculating a new state each time is simpler than trying to update an existing one. When I say simpler, I mean simpler from a conceptual and coding perspective: creating a new object each time, avoiding any kind of side effect, will simplify your code and reduce your bugs.
you can't be sure how your component and its children are using a given piece of state. That piece of state is used by your component and could be passed to its children via props. If you only reason about your component in isolation, you can't know how the children components are using that piece of state. So what's going to happen when you mutate the object in memory by changing its properties? The answer is: who knows. You can have a series of side effects and, more importantly, you cannot be sure what kind of side effects you will get if you reason only about your component in isolation. It really depends on how the component hierarchy is composed. Reasoning about side effects is always a mess; it's too risky to cope with them, and a better approach is to avoid them.
React and ReactDOM have been designed to update the browser DOM efficiently when state is replaced following the best practices of the functional approach (no side effects and no direct state mutation). This means that, if you use React the way you are supposed to, React itself will have an easier time calculating the modifications to apply to the DOM in order to redraw your components, and your app will perform better.
I am building a game in which players attack each other in turns. First I set the name and job manually, and generate life, damage, and magic randomly in componentWillMount().
I expect that every time I submit the attack form, a certain amount of life will be subtracted from the attacked player. But right now, every time I submit, the whole state is regenerated (with all kinds of bugs).
Can I do something to solve this?
app.js: https://ghostbin.com/paste/ype2y
attack.js: https://ghostbin.com/paste/wzm3m
I noticed that you do a lot of:
let players = this.state.players
which you are not supposed to do. An array is an object in JS, so here you are passing by reference. This means that every modification to the players variable actually has side effects and modifies the state, which you should never do. I generally recommend never using in-place operations like splice, and always working on a copy of the state. In this case you can do:
let players = this.state.players.slice()
and from then on any modification to the players variable does NOT affect the state. Double-check that you are not doing this anywhere else in your code. On top of that, you should use the constructor only to set up and initialize your state; otherwise, every time componentWillMount is called your state is regenerated, which is probably not the behavior you are expecting.
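The aliasing is easy to demonstrate in plain JavaScript, outside React:

```javascript
const state = { players: [{ id: 1, life: 10 }] };

const alias = state.players;     // same array object, just another name for it
alias.push({ id: 2, life: 10 }); // this mutates state.players as well

const copy = state.players.slice(); // a new array holding the same elements
copy.push({ id: 3, life: 10 });     // state.players is unaffected
```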
EDIT
I figured I could give you more pointers for what you are trying to do with arrays. As a general rule of thumb I follow this approach: if the array field in my new state is a subset of the previous one, I use the .filter method; if the array in my new state needs some of its entries updated, I use the .map method. To give you an example with player deletion, I would have done it this way:
handleDeletePlayer(id) {
  this.setState(prevState => ({
    players: prevState.players.filter(player => player.id !== id)
  }));
}
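And for the attack itself, the .map counterpart of the deletion above could look like this (applyAttack is a hypothetical helper name):

```javascript
// Pure helper: returns a new players array where one player's life is reduced;
// every untouched player object is reused as-is.
function applyAttack(players, id, damage) {
  return players.map(player =>
    player.id === id ? { ...player, life: player.life - damage } : player
  );
}

// In the component it would be wired up like the delete handler:
// this.setState(prevState => ({ players: applyAttack(prevState.players, id, damage) }));
```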
Your initial state should be generated in the constructor. This is done only once and will not be repeated when the component's props are updated.
I have a different implementation of cross-component communication here http://jsfiddle.net/c641oog2/ than what's described here: http://lhorie.github.io/mithril/components.html#librarization. The goal is to create easily integrable and scalable components (to be re-used in other components), i.e. librarization.
Main parts of my code:
var autocompleter = function(options) {
    ...
    autocompleter.vm = {
        list: m.prop(options.list || []),
        observers: m.prop(options.observers || []),
        ...
        select: function(item) {
            for (var observer in autocompleter.vm.observers()) {
                autocompleter.vm.observers()[observer](item); // notify all observers of the selection
            }
        }
    };
};

// initialization later on...
this.userAC = new autocompleter({list: this.users(), observers: [this.selectedUser]});
The main difference is in how the components communicate with each other. In my implementation I decided to use observers, while in the documentation's implementation the author creates pure functions which are then used in the "view" function of the dashboard, where the correct arguments are passed to the "view" function of the autocompleter.
My questions:
If you had to pick between these two implementations, why would you pick one over the other?
In the functional programming model, is an OOP concept such as the observer pattern frowned upon?
Is there a more terse but scalable way to implement this, either in FP or using a different pattern?
Nice example. Looks terse to me. A little hint to start typing with 'j', 'b' or 'm' would avoid the need to read all the code or assume the example is broken ;)
For a hub-and-spoke getter/setter arrangement like a dashboard and its subviews, the observer pattern just adds overhead without any decoupling benefit, since the dashboard must initiate the subviews anyway.
It would make more sense if the 'project' subview would observe the 'user' subview. This would allow for complex and reusable logic between the subviews with a light dashboard limited to the initiation.
Personally, I prefer the 'pure' version, rather than the observer pattern. I think that conceptually it's simpler. There's no cross-component communication, it's all vertical up and down between parents and children.
Also, you then break (in my mind) the idea that UI state is data, and so should ideally never be duplicated.
It means that if you create new components that want to interact with the rest, they all need to keep copies of the selected state, rather than all observing a single UI state model.
My question is about the mutability of the 'state machine' object in FRP. I'm evaluating Bacon.js's Observable.withStateMachine.
My domain is trading robots. I've got a source event stream of 'orders', which are actually tuples (buy or sell, price, volume).
I want to use something like the following pseudocode
```
orders.withStateMachine(new Book(), function(book, order) { // is 'book' mutable ?!!
  var bookModifiedEvents = book.merge(order);
  return [book, bookModifiedEvents];
})

Book.prototype.merge = function(order) {
  // either append the order into the book, or generate a Trade when an order gets filled;
  // generate the corresponding bookModifiedEvents and tradeEvents
  return bookModifiedEvents.merge(tradeEvents);
}
```
This code should aggregate exchange orders into an order book (a pair of priority queues for bid and ask orders, sorted by price) and publish 'bookModified' and 'tradeOccured' event streams.
What I don't quite understand: can I directly modify the initial state object that is passed to the callback I give to the .withStateMachine method?
Since FRP is all about immutability, I think I shouldn't. But in that case I would have to create a lot of order book objects, which are very heavy (thousands of orders inside).
So I began to look at immutable collections, but, first, there is no immutable priority queue (if that even makes sense), and, second, I'm afraid the performance of such collections would be poor.
So, finalizing, my question has 2 parts:
1) In the case of a HEAVY STATE, is it LEGAL to modify the state in .withStateMachine?
Will it have some very, very bad side effects in Bacon.js internals?
2) And if it is NOT allowed, what is recommended? Immutable collections using tries? Or some huge refactoring so that I won't need order books as a phenomenon in my code at all?
Thanks.
The whole idea of reactive programming doesn't work if you mutate data or cause side effects in something that is expected to be referentially transparent.
So 1) modifying the state isn't illegal, but you can run into undefined-behaviour scenarios, so you are on your own.
2) And since mutation isn't recommended, what is the alternative? Try Immutable.js, as you mentioned: build a priority queue on top of List, or whatever is more suitable. Don't prejudge the performance. Immutable collections use structural sharing, so when you copy a collection you don't need to copy the elements, as they can be shared (they are assumed to be immutable too: why copy stuff we aren't changing?).
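The sharing point is easy to see even in plain JavaScript (a sketch; Immutable.js does this pervasively and far more efficiently for large collections):

```javascript
const book = {
  bids: [{ price: 99, volume: 10 }],
  asks: [{ price: 101, volume: 5 }]
};

// "Modify" the bids by building a new book; the untouched asks branch is
// reused by reference, not copied.
const next = { ...book, bids: [...book.bids, { price: 98, volume: 3 }] };
```

Only the path that changed is rebuilt; everything else is shared between the old and new book, which is why persistent collections are often far cheaper than a naive "copy everything" estimate suggests.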