Mithril.js - cross-component communication pattern - JavaScript

I have a different implementation of cross-component communication here http://jsfiddle.net/c641oog2/ than what's described here: http://lhorie.github.io/mithril/components.html#librarization. The goal is to create easily integrable and scalable components that can be re-used by other components, i.e. librarization.
Main parts of my code:
var autocompleter = function(options) {
    ...
    autocompleter.vm = {
        list: m.prop(options.list || []),
        observers: m.prop(options.observers || []),
        ...
        select: function(item) {
            autocompleter.vm.observers().forEach(function(observer) {
                observer(item); // notify all observers of the selection
            });
        }
    };
};
//initialization later on...
this.userAC = new autocompleter({list: this.users(), observers: [this.selectedUser]})
The main difference is in how the components communicate with each other. My implementation uses observers, while the documentation's implementation creates pure functions which are then used in the dashboard's "view" function, where the correct arguments are passed to the autocompleter's "view" function.
My questions:
If you had to pick between these two implementations, why would you pick one over the other?
In the functional programming model, is an OOP concept such as the observer pattern frowned upon?
Is there a more terse but scalable way to implement this in either FP / using a different pattern?

Nice example. Looks terse to me. A little hint to start typing with 'j', 'b' or 'm' would avoid the need to read all the code or assume the example is broken ;)
For a hub-and-spoke getter/setter arrangement like a dashboard and its subviews, the observer pattern just adds overhead without any decoupling benefit, since the dashboard must initialize the subviews anyway.
It would make more sense if the 'project' subview observed the 'user' subview. That would allow for complex, reusable logic between the subviews with a light dashboard limited to initialization.

Personally, I prefer the 'pure' version rather than the observer pattern. I think it's conceptually simpler: there's no cross-component communication, it's all vertical, up and down between parents and children.
The observer approach also breaks (in my mind) the idea that UI state is data, and so should ideally never be duplicated.
It means that if you create new components that want to interact with the rest, they all need to keep copies of the selected state, rather than all observing a single UI state model.
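To make the 'pure' alternative concrete, here is a minimal sketch (my own illustration, assuming the Mithril 0.x controller/view API and m.prop used in the question; the options shape passed to the autocompleter view is invented): the dashboard owns the selection as a single m.prop and passes it down, so no subview keeps its own copy of the selected state.
var dashboard = {
    controller: function() {
        // single source of truth for the selection, owned by the parent
        this.users = m.prop([]);
        this.selectedUser = m.prop(null);
    },
    view: function(ctrl) {
        return m('div', [
            // the autocompleter writes into the prop the dashboard owns...
            autocompleter.view({list: ctrl.users(), selected: ctrl.selectedUser}),
            // ...and any other subview reads that same prop; nothing is duplicated
            m('div', 'Selected: ' + (ctrl.selectedUser() || 'none'))
        ]);
    }
};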


Which places and for what elements do we have to provide a unique key?

I always thought the only place a unique key is needed is inside lists and arrays, like the elements returned from map, but today I was writing a loading screen for my component: a loading indicator is shown to the user, and after the load ends I call setState to let the render know it's time to render the real view.
But I saw that the new component was not being rendered and the loading indicator remained on the screen! After lots of testing I saw that everything worked as it should, and finally I thought, let's give it a key, maybe something happens, and it did! The problem was solved, and I was confused as to why on earth a key was needed there.
So here is pseudocode of what I did and what happened:
render() {
    // with no key, this doesn't render properly
    const finalView = this.state.isLoading ?
        <UIManager key={1} json={myLoadingComponentJSON} /> :
        <UIManager key={2} json={myFormJson} />;
    return finalView;
}
I can definitely see that the problem here is probably that I am using a component called UIManager and I use JSON to tell the component what kind of element it should render, and probably both UIManagers had the same key? Well, I am not sure, but I still didn't think a key was needed here.
Which places and for what elements do we have to provide a unique key?
Only provide a key for siblings where you expect the items to be re-ordered (think of list items).
If a child is unique (i.e. it doesn't have siblings), there is no need for a key.
A key should be unique among siblings under the same parent.
So for the case of UIManager, you can supply a name as the key.
i.e.
const finalView = this.state.isLoading ?
    <UIManager key={'LoadingComp'} json={myLoadingComponentJSON} /> :
    <UIManager key={'Form'} json={myFormJson} />;
Why?
The real cause is how React does reconciliation. Without the key, React only sees the json attribute and checks whether it changed (which might cause performance problems with deeply nested JSON data: slow or dropped frames). If it sees any change in json, it destroys the previous instance of UIManager.
See Tradeoffs due to heuristics.
Summary:
Supplying a key makes it easier for React to check the difference.
Reconciliation
React provides a declarative API so that you don’t have to worry about exactly what changes on every update. This makes writing applications a lot easier, but it might not be obvious how this is implemented within React. This article explains the choices we made in React’s “diffing” algorithm so that component updates are predictable while being fast enough for high-performance apps.
Because React relies on heuristics, if the assumptions behind them are not met, performance will suffer.
The algorithm will not try to match the subtrees of different component types. If you see yourself alternating between two component types with very similar output, you may want to make it the same type. In practice, we haven't found this to be an issue.
Keys should be stable, predictable, and unique. Unstable keys (like
those produced by Math.random()) will cause many component instances
and DOM nodes to be unnecessarily recreated, which can cause
performance degradation and lost state in child components.
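To illustrate that last point with a sketch of my own (items and Row are hypothetical names, not from the question): an unstable key forces React to unmount and remount the element on every render, while a stable key lets it match instances across renders.
// bad: a fresh key every render recreates each Row and loses its local state
items.map(item => <Row key={Math.random()} item={item} />);
// good: a stable key preserves each Row instance across renders
items.map(item => <Row key={item.id} item={item} />);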
In your case, you are rendering the same component, UIManager, with different props, so React can't tell which one to render; that is the main reason it works after setting a key.
When you set a key, React identifies them as different components. Alternatively, you could try the following approach:
render() {
    const finalView =
        <UIManager json={this.state.isLoading ? myLoadingComponentJSON : myFormJson} />;
    return finalView;
}

how to properly update an object or array in react state [duplicate]

Assume this is my state:
state = {
    user: {
        name: 'Joe',
        condition: {
            isPrivate: true,
            premium: false
        }
    }
}
And these are the methods I can use to update user:
updateUser = (property, value) => {
    // first way (probably not a good one): mutates the object held in state
    let user = this.state.user;
    user[property] = value;
    this.setState({user});

    // second way (probably the best way): deep-clone, then modify the clone
    user = JSON.parse(JSON.stringify(this.state.user));
    user[property] = value;
    this.setState({user});
}
I know modifying the state directly is not good practice, but I'm getting the same result from both of them, with no side effects so far.
So why should I take the extra step of copying the state and then modifying the copy, when it slows down the operation (however little)?
So which one would be faster? What would be the side effects of the first method in the context of React? And finally, what are the pros and cons of each method?
In your first method of updating state, you are getting a reference to the object nested in your state:
let user = this.state.user;
user[property] = value;
In this chunk you have already updated the state, so you are actually performing a side effect. The call to setState() just reflects those changes in the UI (i.e. it re-renders the component).
The reason for not modifying the state directly is to avoid unintentional updates to the state. For example, suppose you want to make an API call by modifying some of the data in this.state and sending it as the body of the request (note that you don't want these updates reflected in the UI). Modifying the state directly like you did in method 1 could then cause unwanted changes in the state, and subsequent calls to setState() might expose those unwanted changes to the user of the application.
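To make that scenario concrete, here is a hedged sketch (sendUser, requestSource, and the endpoint are hypothetical, not from the question): clone before tweaking the request body, so this.state is never touched.
sendUser = () => {
    // shallow-clone, then add a request-only field; this.state.user stays unchanged
    const body = { ...this.state.user, requestSource: 'web' };
    fetch('/api/users', {   // hypothetical endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(body)
    });
};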
However, in your example it's fine to use either of those methods, even though it might not be good practice.
Hope this helps!
The basic idea is: avoid mutating objects, create new objects instead.
This mantra means that you should avoid direct mutations to the JavaScript objects you have in memory; instead, you should create a new object each time.
You can use the object spread syntax (often informally called the ES6 spread operator, though object spread was standardized in ES2018) in order to get a shallow clone of your object. With the spread syntax you can also update the properties of the clone, so that you perform the required update to the object's properties.
This is the code you need:
updateUser = (property, value) => {
    // shallow-clones the current user and updates the given property on the clone
    const user = {...this.state.user, [property]: value};
    this.setState({ user });
}
The three dots in the syntax above are not a typo; they are the aforementioned spread syntax.
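Note that the spread is shallow: in the clone above, user.condition still points at the same object as before. A sketch (updateCondition is an illustrative name, not from the question) of updating a nested property spreads each level that changes:
updateCondition = (property, value) => {
    this.setState(prevState => ({
        user: {
            ...prevState.user,
            // only the levels that change get new objects; untouched
            // branches keep their old references (structural sharing)
            condition: { ...prevState.user.condition, [property]: value }
        }
    }));
}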
Based on my knowledge (I'm quite new to React), there are basically three reasons to avoid direct state mutation:
Recalculating a new state each time is simpler than trying to update an existing state. By simpler I mean simpler from a conceptual and coding perspective: creating a new object each time, avoiding any kind of side effect, will simplify your code and reduce your bugs.
You can't be sure how your component and its children are using a given piece of state. That piece of state is used by your component and could be passed to its children via props. If you only reason about your component in isolation, you can't know how the children are using that piece of state. What's going to happen when you mutate the object in memory by changing its properties? The answer is: who knows. You can get a series of side effects and, more importantly, you cannot be sure what kind of side effects you will get if you reason only about your component in isolation. It really depends on how the component hierarchy is composed. Reasoning about side effects is always a mess; it's too risky to cope with them, and a better approach is to avoid them.
React and ReactDOM have been designed to update the browser DOM efficiently when state is updated following the best practices of the functional approach (no side effects and no direct state mutation). This means that if you use React the way you are supposed to, React itself will have an easier time calculating the modifications to apply to the DOM in order to redraw your components, and your app will perform better.

Why does Redux need Actions and Reducers?

I'm trying to understand why Redux is designed the way it is. For example suppose I have a store that contains a list of todos.
If the store is effectively an object like this:
{
    1: todo1,
    2: todo2,
    3: todo3, ...
}
And we wrap it in a class that allows us to do things like:
todoStore.getAll()
todoStore.add(todo)
todoStore.get(id);
todoStore.get([1,2,3,...]);
todoStore.filter((todo)=> todo.id == 1);
todoStore.del(1);
todoStore.update(2, {title: 'new title'} as Partial<Todo>);
....
So in this case all we have is a Todo model and a TodoStore that has an API that allows us to query / filter, delete, update, and add items.
Why does Redux need the Actions and the Reducers?
One of the answers indicate that:
Instead Redux uses a pattern that, when given a state and action will always produce the same new state.
So it seems that because of this pattern we need actions and reducers, but to me these look like internal implementation details. Why can't the TodoStore just implement these internally?
For example if we have 1 todo instance in the cache and we add another one, we now have 2. This seems like a pretty simple thing to implement ... but I must be missing something ...
Background
I was thinking about implementing something like
@TodoStore
class Todo {
}
The annotation / decorator would generate the store and clients would then get the store via something like:
todoStore:TodoStore = StoreCache.get(Todo);
todos:Observable<Todo[]> = todoStore.getAll();
...
Etc. It seems like it could be this simple... so I'm just wondering what Redux provides that this might be missing. In other words, why did Redux decide that it needed actions and reducers instead of a simple Store<Type>-like interface?
Yet another way of looking at it: do the reducers and actions add something that the Store<Type> interface cannot, given the way it is implemented and language constraints?
Assertions
The action is the method name combined with the entity name. For example, if we have Store<Todo> (a Store type that operates on Todo instances) and an update method such as update(id: String, todo: Todo), then we effectively have the name of the action, which would be TODO UPDATE. If the second argument were plural, as in update(id: String, todos: Todo[]), the action would be TODO UPDATES...
If we are doing updates, we have to find the instances we are updating and update them, typically by ID. Once the updates are complete we can track them in an immutable state tree if we wish to do so; for example, the entire change could be wrapped in a command object instance so that we could undo / replay it. I believe the Eclipse EMF framework API has a good model for this, which enables Eclipse's undo / redo functionality for generated models.
This question seems a bit broad or opinion-based, but I'll give it a shot.
The Redux store is immutable; because of this you cannot mutate the store. Instead, Redux uses a pattern that, given a state and an action, will always produce the same new state. The actions are exactly that: they tell the store what action to perform, and the reducers execute that change and return a new state.
This may feel odd if you come from a mutable, object-oriented background, but it allows you to walk through the state changes, go back in history, replay actions, etc. It's a powerful pattern.
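For concreteness, here is a minimal sketch (my own illustration; the action names are invented, not from the question) of the todo store expressed in Redux terms, where a reducer is a pure function (state, action) => newState:
const initialState = {}; // e.g. {1: todo1, 2: todo2, 3: todo3, ...}

function todosReducer(state = initialState, action) {
    switch (action.type) {
        case 'TODO_ADD':
            return { ...state, [action.todo.id]: action.todo };
        case 'TODO_UPDATE':
            return { ...state, [action.id]: { ...state[action.id], ...action.changes } };
        case 'TODO_DEL': {
            // copy everything except the removed id; the old state is untouched
            const { [action.id]: removed, ...rest } = state;
            return rest;
        }
        default:
            return state;
    }
}

// usage, mirroring todoStore.update(2, {title: 'new title'}):
// store.dispatch({ type: 'TODO_UPDATE', id: 2, changes: { title: 'new title' } });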
This is waaay too long for a comment, and may be an answer, but it's more about the concepts of why Redux exists than the nitty-gritty details of its implementation.
Consider the deceptively simple assignment statement:
var foo = {bar: 1};
foo.bar = 3;
Simple, right? Except... what if I need the previous value of foo.bar? What if I need to store a reference to the state transition itself so I can, e.g., re-play it? Statements are not first-class entities in JavaScript:
var transition = foo.bar = 3;
isn't really meaningful. You could wrap it in a lambda expression:
var transition = () => { foo.bar = 3 };
But this fails to capture the transition semantics; it just sets the state of foo.bar to 3. What if the previous state matters for the next? How do you share foo? Stuff it in a global and hope no one mutates it out from under you? But if foo has clear semantics around its state changes and is otherwise immutable, well...
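A sketch of what first-class transitions buy you (the names here are illustrative, not Redux's API, though Redux actions look essentially like this): reify the transition as a plain object, and make applying it a pure function of (state, transition).
var transition = { type: 'SET_BAR', value: 3 }; // storable, loggable, replayable

function apply(state, t) {
    // pure: returns a new state, never touches the old one
    return t.type === 'SET_BAR' ? { ...state, bar: t.value } : state;
}

var state0 = { bar: 1 };
var state1 = apply(state0, transition); // { bar: 3 }, while state0 is still { bar: 1 }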
There are some useful properties that fall out of this:
Code-reloading. If all of your state changes are first-class, you can reload your code and then get back to the state you were in simply by replaying them.
Reproducing on the server. What if production bugs dumped the state transitions that created them? Ever failed to repro a bug? Thing of the past.
Undo is trivial.
There are others, but you get the point. Now, you may not need all of that, and it may not be worth the loss of flexibility (and Dan Abramov, the author of Redux, would agree). But there is a lot to be gained by giving state transitions first-class status in your system.

FRP complex state - immutability vs performance

My question is about the mutability of the 'state machine' object in FRP. I'm evaluating Bacon.js's Observable.withStateMachine.
My domain is trading robots. I've got a source event stream of 'Orders', which are actually tuples of (buy or sell, price, volume).
I want to use something like the following pseudocode:
```
orders.withStateMachine(new Book(), function(book, order) { // is 'book' mutable ?!!
    var bookModifiedEvents = book.merge(order);
    return [book, bookModifiedEvents];
});

Book.prototype.merge = function(order) {
    // either append the order to the Book, or generate a Trade when an order gets filled;
    // generate the corresponding bookModifiedEvents and tradeEvents
    return bookModifiedEvents.merge(tradeEvents);
};
```
This code should aggregate exchange orders into an order book (a pair of priority queues of bid and ask orders sorted by price) and publish 'bookModified' and 'tradeOccured' event streams.
What I don't quite understand: can I directly modify the initial state object that was passed to the callback I give to the .withStateMachine method?
Since FRP is all about immutability, I think I shouldn't. But in that case I would have to create a lot of order book objects, which are very heavy (thousands of orders inside).
So I began to look at immutable collections, but, first, there is no immutable priority queue (if that even makes sense), and, second, I'm afraid the performance of such collections would be poor.
So, finalizing, my question has 2 parts:
1) In the case of a HEAVY STATE, is it LEGAL to modify the state in .withStateMachine?
Will it have some very, very bad side effects in Bacon.js internals?
2) And if it is NOT allowed, what is recommended? Immutable collections using tries? Or some huge refactoring so that I will not need order books as a phenomenon in my code at all?
Thanks.
The whole idea of reactive programming doesn't work if you mutate data or cause side effects in something that is expected to be referentially transparent.
So 1) modifying the state isn't illegal, but you can run into undefined-behaviour scenarios, so you are on your own.
2) And as mutation isn't recommended, there is an alternative: try Immutable.js, as you mentioned, and build a priority queue on top of List or whatever is more suitable. Don't prejudge the performance. Immutable collections use structural sharing, so when you copy a collection you don't need to copy the elements, as they can be shared (they are assumed to be immutable too: why copy stuff we aren't changing?).
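A small sketch of that sharing (my own illustration using Immutable.js; a real order book would likely want a sorted structure rather than a plain List):
const { List } = require('immutable');

const bids = List([{ price: 101, volume: 5 }, { price: 100, volume: 3 }]);
// insert returns a new list; the old one is untouched
const bids2 = bids.insert(1, { price: 100.5, volume: 2 });

console.log(bids.size, bids2.size);        // 2 3
console.log(bids.get(0) === bids2.get(0)); // true: elements are shared, not copied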

Is there a performance impact to using virtual getters in Mongoose with Node.js?

I'm starting to make use of virtual getter methods in Mongoose in a real-world application and am wondering if there is a performance impact to using them that would be good to know about up-front.
For example:
var mongoose = require('mongoose');
var Schema = mongoose.Schema;

var User = new Schema({
    name: {
        first: String,
        last: String
    }
});

User.virtual('name.full').get(function () {
    return this.name.first + ' ' + this.name.last;
});
Basically, I don't understand yet how the getters are generated on the objects Mongoose uses, and whether the values are populated on object initialisation or on demand.
__defineGetter__ can be used to map a property to a method in JavaScript, but this does not appear to be used by Mongoose for virtual getters (based on a quick search of the code).
An alternative would be to populate each virtual path on initialisation, which would mean that for 100 users in the example above, the method that joins the first and last names is called 100 times.
(I'm using a simplified example, the getters can be much more complex)
Inspecting the raw objects themselves (e.g. using console.dir) is a bit misleading, because Mongoose uses internal methods to translate documents to 'plain' objects or to JSON, and by default these don't include the getters.
If anyone can shed light how this works, and whether lots of getters may become an issue at scale, I'd appreciate it.
They're probably done the standard way:
Object.defineProperty(someInstance, propertyName, {get: yourGetter});
... meaning "not on initialization". Reading the virtual properties on initialization would defeat the point of virtual properties, I'd think.
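A quick sketch of my own (illustrative only, not Mongoose's actual code) showing why "on demand" matters: the getter body runs on each property read, not when the object is created, so for 100 users the string join only happens for documents whose name.full is actually read.
var user = { name: { first: 'Ada', last: 'Lovelace' } };

Object.defineProperty(user.name, 'full', {
    get: function () {
        console.log('getter ran'); // proves evaluation happens at read time
        return this.first + ' ' + this.last;
    }
});

// nothing logged yet; only reading the property triggers the getter:
console.log(user.name.full); // logs "getter ran", then "Ada Lovelace"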
