I'm building an Angular app and trying to use the async pipe whenever possible to handle Observable subscriptions.
I'm still not exactly sure of when and why I should use it. Most of the time I've seen that if I don't need to make any changes to the incoming data, I can just use it and show the data as-is; but if I need to do something to any piece of the data beforehand, I should manually subscribe in my TypeScript code and handle everything there before displaying it.
So, for example, if I have an array of objects and I need to manipulate a string in one of the objects' properties, it would be better to manually subscribe, handle the response and then display that in my template.
Is this assumption correct?
I have used both approaches within components, and these are my reasons (there are probably others that I am not aware of):
Reasons for using a subscribed observable:
To control subscribing and unsubscribing manually.
Synchronizing the loading and manipulation of data within a component before internal use.
When the subscribed data is used internally (non-visually) within the component, for example in a service call or in computations.
Reasons for using an asynchronous observable pipe:
Subscribing and unsubscribing are handled automatically.
Synchronizing the loading and manipulation of data within a component before use within the HTML template.
When there are a number of HTML elements that depend on subscribed data and you would like the subscriptions released automatically after the component is destroyed.
In both cases you can load and manipulate subscribed data within your component before usage.
An example of each is below:
Subscription based
TS
someData: SomeClass[] = [
  { id: 1, desc: 'One', data: 100 },
  { id: 2, desc: 'Two', data: 200 },
  { id: 3, desc: 'Three', data: 300 }
];
someDataSub: Subscription;

// subscribe() returns a Subscription (not an Observable), so store it for cleanup
this.someDataSub = of(this.someData).subscribe((res) => {
  this.someData = res.map((r) => {
    r.data = Math.floor(r.data * 1.1);
    return r;
  });
});
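If you subscribe manually, you are also responsible for unsubscribing. A minimal sketch of that cleanup, assuming the component implements OnDestroy and someDataSub is the Subscription stored above:
TS
ngOnDestroy(): void {
  // Release the manual subscription to avoid leaking it
  this.someDataSub?.unsubscribe();
}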
Asynchronous observable pipe
TS
...
someData: SomeClass[] = [ /* same items as above */ ];
someData$: Observable<SomeClass[]>;

this.someData$ = of(this.someData).pipe(
  map((res) => {
    // mutate each item in place, then emit the same array
    res.forEach((r) => {
      r.data = Math.floor(r.data * 1.1);
    });
    return res;
  })
);
HTML (subscription based)
<li *ngFor="let data of someData">
  Item={{ data.desc }}. Value={{ data.data }}
</li>
HTML (asynchronous observable pipe)
<li *ngFor="let data of someData$ | async">
  Item={{ data.desc }}. Value={{ data.data }}
</li>
To summarize, the choice between the two options depends on the complexity of your component, its type (visual or non-visual) and how you would like to manage the lifetime of subscriptions.
The answer to the original question is no, it is not necessarily better to manually subscribe when calculations/pre-processing are involved. You can do the same with the async pipe, as the two equivalent examples above show.
I understand the benefits of using a store pattern: having a single source of truth for data shared across an application's components, and making API calls in a store action that components call, rather than making separate requests in every component that requires the data.
It's my understanding that if this data needs to change in some way depending on the component using it, it can be updated by calling a store action with the appropriate filters/args, which updates the global store var accordingly.
However, I am struggling to understand how to solve the issue whereby a parent component requires one version of this data, and a child of that component requires another.
Consider the following example:
In an API, there exists a GET method on an endpoint to return all people. A flag can be passed to return people who are off sick:
GET: api/people returns ['John Smith', 'Joe Bloggs', 'Jane Doe']
GET: api/people?isOffSick=true returns ['Jane Doe']
A parent component in the front end application requires the unfiltered data, but a child of that component requires the filtered data. For argument's sake, the API does not return the isOffSick boolean in the response, so two separate requests need to be made.
Consider the following example in Vue.js:
// store.js
export const store = createStore({
  state: {
    people: []
  },
  actions: {
    // Vuex actions receive the context object first; destructure commit from it
    async fetchPeople({ commit }, filters) {
      // ... build queryString from filters
      const res = await api.get('/people' + queryString);
      commit('setPeople', res.data);
    }
  },
  mutations: {
    setPeople(state, people) {
      state.people = people;
    }
  }
});
// parent.vue - requires ALL people (NO filters/args passed to API)
export default {
  mounted() {
    this.fetchPeople();
  },
  computed: {
    ...mapState([
      'people'
    ])
  },
  methods: {
    ...mapActions(['fetchPeople'])
  }
}
// child.vue - requires only people who are off sick (filters/args passed to API)
export default {
  mounted() {
    this.fetchPeople({ isOffSick: true });
  },
  computed: {
    ...mapState([
      'people'
    ])
  },
  methods: {
    ...mapActions(['fetchPeople'])
  }
}
The parent component sets the store var with the data it requires, and then the child overwrites that store var with the data it requires.
Obviously the shared store var is not compatible with both components.
What is the preferred solution to this problem for a store pattern? Storing separate state inside the child component seems to violate the single source of truth for the data, which is partly the reason for using a store pattern in the first place.
Edit:
My question pertains to the architecture of the store pattern rather than asking for a solution to this specific example. I appreciate that the API response in this example does not provide enough information to filter the global store of people (i.e. using a getter) for use in the child component.
What I am asking is: where is an appropriate place to store this second set of people if I wanted to stay true to a store focused design pattern?
It seems wrong somehow to create another store variable to hold the data just for the child component, yet it also seems counter-intuitive to store the second set of data in the child component's state, as that would not be in line with a store pattern approach and keeping components "dumb".
If there were numerous places that required variations on the people data that could only be created by a separate API call, there would either be a) lots of store variables for each "variation" of the data, or b) separate API calls and state in each of these components.
Thanks to tao I've found what I'm looking for:
The best approach would be to return the isOffSick property in the API response and then filter the single list of people (e.g. using a store getter), thus keeping a single source of truth for all people in the store and avoiding the need for another API request.
If that was not possible, it would make sense to add a secondary store variable for isOffSick people, to be consumed by the child component.
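For the first approach, the filtering can live in a store getter so that every component reads from the same people array. A minimal sketch, assuming the API is updated to return an isOffSick boolean on each person (the property and getter names here are hypothetical):
// store.js
getters: {
  offSickPeople: (state) => state.people.filter((person) => person.isOffSick)
}
The child component would then read this.$store.getters.offSickPeople (or use mapGetters) instead of firing a second request.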
I have the following observable: messages$: Observable<Message[] | undefined>. Message has 2 fields: id and content, both of which are string.
What I would like to do is to modify messages$ so that a function foo(string) is invoked on the content of each Message.
It doesn't seem difficult at face value but I'm new to observables and unfortunately I got stuck.
I guess the solution is simple:
messages$: Observable<Message[] | undefined> = yourSource.pipe(
  map((messages) => {
    // the emitted array may be undefined, so guard before mutating
    messages?.forEach((value) => {
      value.content = foo(value.content);
    });
    return messages;
  })
);
What you are asking is how you can change your Observable into an observable with a side effect. You probably don't ever want that (except for simple cases like logging).
Instead, what you want to do is subscribe to that Observable and do your logic in the subscription. That way you're also guaranteed that your logic runs only once (or however many times you want), instead of being reliant on something else subscribing to the observable.
messages$.subscribe((messages) => messages?.forEach(({ content }) => foo(content)));
Be careful of subscriptions that are never unsubscribed.
Check out this question for a solution to that generic problem:
RXJS - Angular - unsubscribe from Subjects
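For completeness, here is a minimal sketch of the takeUntil pattern discussed in that thread, assuming an Angular component (destroy$ is a hypothetical name):
import { Subject } from 'rxjs';
import { takeUntil } from 'rxjs/operators';

private destroy$ = new Subject<void>();

ngOnInit(): void {
  this.messages$
    .pipe(takeUntil(this.destroy$))
    .subscribe((messages) => messages?.forEach(({ content }) => foo(content)));
}

ngOnDestroy(): void {
  // Completing destroy$ tears the subscription down automatically
  this.destroy$.next();
  this.destroy$.complete();
}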
If I misunderstood your question, and what you really want is an observable that transforms the data, and your foo method is pure (does not modify its inputs or other external data), the solution is different:
const modifiedMessages$ = messages$.pipe(
  map((messages) => messages?.map((m) => ({ ...m, content: foo(m.content) })))
);
I have some data in a Firebase Realtime Database where I wish to split one single onUpdate() trigger into two triggers. My data is structured as below.
notes: {
  "note-1234": {
    access: {
      author: "-L1234567890",
      members: {
        "-L1234567890": 0,
        "-LAAA456BBBB": 1
      }
    },
    data: {
      title: "Hello",
      order: 1
    }
  }
}
Currently I have one onUpdate() database trigger for node 'notes/{noteId}'.
exports.onNotesUpdate = functions
  .region('europe-west1')
  .database
  .ref('notes/{noteId}')
  .onUpdate((change, context) => {
    // Perform actions
    return noteFunc.notesUpdate({ change, context, type: ACTIVE });
  });
However, since my code is getting quite extensive handling both data and access updates, I am considering splitting it into two parts: one handling updates in the access child node and one handling the data child node. This way the code would be easier to read and understand, being logically split into separate blocks.
exports.onNotesUpdateAccess = functions
  .region('europe-west1')
  .database
  .ref('notes/{noteId}/access')
  .onUpdate((change, context) => {
    // Perform actions
    return noteFunc.notesAccessUpdate({ change, context, type: ACTIVE });
  });

exports.onNotesUpdateData = functions
  .region('europe-west1')
  .database
  .ref('notes/{noteId}/data')
  .onUpdate((change, context) => {
    // Perform actions
    return noteFunc.notesDataUpdate({ change, context, type: ACTIVE });
  });
I am a bit unsure though, since both access and data are child nodes of the note-1234 (noteId) node.
My question is: would this be a recommended approach, or could separate triggers on child nodes create problems?
Worth mentioning is that the entire note-1234 node (both access and data) will sometimes be updated with one .update() action from my application. At other times only access or data will be updated.
Kind regards /K
It looks like you've nested two types of data under a single branch, which is something the Firebase documentation explicitly recommends against in its sections on avoiding nested data and flattening data structures.
So instead of merely splitting the code into two, I'd also recommend splitting the data structure into two top-level nodes: one for each type of data. For example:
"notes-data": {
note-1234: {
author: "-L1234567890",
members: {
"-L1234567890": 0,
"-LAAA456BBBB": 1
}
}
},
"notes-access": {
note-1234: {
title: "Hello",
order: 1
}
}
By using the same key in both top-level nodes, you can easily look up the other type of data for a note. And because Firebase pipelines these requests over a single connection, such client-side joining of data is not nearly as slow as you might initially think.
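To illustrate that join, here is a minimal sketch using the web SDK (the node names are from the example above; once('value') reads each location a single time):
const noteId = 'note-1234';
const db = firebase.database();

Promise.all([
  db.ref(`notes-access/${noteId}`).once('value'),
  db.ref(`notes-data/${noteId}`).once('value'),
]).then(([accessSnap, dataSnap]) => {
  // Join the two halves of the note client-side
  const note = { access: accessSnap.val(), data: dataSnap.val() };
});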
My situation has 4 components nested within each other in this order: Products (page), ProductList, ProductListItem, and CrossSellForm.
Products executes a graphql query (using urql) as such:
const productsQuery = `
  query {
    products {
      id
      title
      imageSrc
      crossSells {
        id
        type
        title
      }
    }
  }
`;
...
const [response] = useQuery({
  query: productsQuery,
});
const { data: { products = [] } = {}, fetching, error } = response;
...
<ProductList products={products} />
products returns an array of Product objects, each containing a crossSells field that holds an array of CrossSells. products is propagated downwards to CrossSellForm, which contains a mutation that returns an array of CrossSells.
The problem is that when I submit the CrossSellForm, the request goes through successfully, but crossSells up in Products does not update and the UI reflects stale data. This only happens when the initial fetch up in Products contains no crossSells, so the initial response looks something like this:
{
  data: {
    products: [
      {
        id: '123123',
        title: 'Nice',
        imageSrc: 'https://image.com',
        crossSells: [],
        __typename: "Product"
      },
      ...
    ]
  }
}
If there is an existing crossSell, there is no problem: the UI updates properly and the response looks like this:
{
  data: {
    products: [
      {
        id: '123123',
        title: 'Nice',
        imageSrc: 'https://image.com',
        crossSells: [
          {
            id: 40,
            title: 'Nice Stuff',
            type: 'byVendor',
            __typename: 'CrossSell'
          }
        ],
        __typename: "Product"
      },
      ...
    ]
  }
}
I read up a bit on urql's caching mechanism at https://formidable.com/open-source/urql/docs/basics/ and from what I understand it uses a document cache, keyed on __typename. If a query requests something with the same __typename, it will be pulled from the cache. If a mutation occurs with the same __typename, it will invalidate all objects in the cache with that __typename, so the next time an object with that __typename is fetched a network request is executed instead of hitting the cache.
What I think is going on: in the initial situation, where there are products but no crossSells, the form submission is successful, but the Products page does not update because there is no reference to an object with a __typename of CrossSell. In the second situation there is, so the cache is busted, the query executes again, products and cross-sells are refreshed, and the UI updates properly.
I've really enjoyed the experience of using urql hooks with React components and want to continue, but I'm not sure how I can fix this problem without reaching for another tool.
I've tried to force a re-render upon form submission using tips from How can I force component to re-render with hooks in React?, but it runs into the same problem: Products fetches from the cache again and crossSells comes back as an empty array. I thought about setting urql's requestPolicy to network-only along with the forced re-render, but re-fetching every single time seemed unnecessarily expensive. The solution I'm trying out now is to move all the state into redux, a single source of truth, so that any update to crossSells propagates properly; although I'm sure it will work, it also means trading much of the convenience of hooks for standard redux boilerplate.
How can I gracefully update Products with crossSells upon submitting the form within CrossSellForm, while still using urql and hooks?
core contributor here 👋
As you've already discovered, there's an open issue for this that details the inherent problem of our simple, default cache. It's a document cache, so it's somewhat unsuitable for more complex tasks where normalisation can help.
When we have an empty array of data, there's no indication that a specific result needs to be refetched.
Instead of using the network-only policy you could try cache-and-network, but that doesn't solve the underlying issue: the operation (your query) is not invalidated by the mutation, so no refetch will be triggered.
I'd very much recommend Graphcache, our normalised cache, which you've also already discovered. At its minimum, with no configuration (!), it's a drop-in replacement that's already quite a bit smarter. https://github.com/FormidableLabs/urql-exchange-graphcache
The configuration for it really just consists of add-ons that teach it how to handle more tasks automatically. I'd be happy to help you in issues, here, or via Spectrum if you need to customise it. But my advice would be: give it a shot, because in the best case all your edge cases will just work without any changes ✨
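As a rough sketch of that drop-in swap (assuming a v1-era urql client; the url is a placeholder):
import { createClient, dedupExchange, fetchExchange } from 'urql';
import { cacheExchange } from '@urql/exchange-graphcache';

const client = createClient({
  url: 'https://your-api/graphql',
  exchanges: [
    dedupExchange,
    // replaces the default document cache with a normalised cache
    cacheExchange({}),
    fetchExchange,
  ],
});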
I'm working on a page whose 'Data Model' is a collection, for example, an array of people. They are packed into React Components and tiled on the page. Essentially it's like:
class App extends React.Component {
  constructor() {
    super();
    this.state = { people: /* some data */ };
  }
  render() {
    return (
      <div>
        {this.state.people.map((person) =>
          // each item in a rendered list needs a stable key
          <People key={person.id} data={person}></People>)}
      </div>
    );
  }
}
Now I want to attach an edit section for each entry in <People> component, which allows the user to update the name, age ... all kinds of information for a specific entry.
Since React does not support mutating props inside components, I searched and found that adding callbacks as props can solve the problem of passing data up to the parent. But since there are many fields to update, there would be many callbacks such as onNameChanged, onEmailChanged, and so on, which could be very ugly (and would grow more verbose as the number of fields grows).
So what is the right way for it?
Honestly? The best way is Flux (back to that in a minute).
If you start to get into the process of passing data down the tree in the form of props, then passing it back up to be edited using callbacks, then you're breaking the unidirectional data flow that React is built around.
However, not all projects need to be written to ideal standards and it is possible to build this without Flux (and sometimes it might even be the right solution).
Without Flux
You can implement this without a mass of callbacks by passing down a single edit function as a prop. This function should take an id and a new person object, then update the state inside the parent component whenever it runs. Here's an example:
editPerson(id, editedPerson) {
  const people = this.state.people;
  const newFragment = { [id]: editedPerson };
  // create a new list of people, with the updated person in
  this.setState({
    people: Object.assign([], people, newFragment)
  });
},
render() {
  // ...
  {this.state.people.map((person, index) => {
    const edit = this.editPerson.bind(this, index);
    return (
      <People data={person} edit={edit}></People>
    );
  })}
  // ...
}
Then inside your person component, any time you make a change to the person, simply pass the person back up to the parent state with the callback.
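That child-side call might look something like the following sketch (the field and handler names are hypothetical); note that the parent pre-bound the index, so the child only passes the edited person:
handleNameChange(event) {
  // build a new person object and hand it back up via the edit prop
  this.props.edit({ ...this.props.data, name: event.target.value });
}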
However, if you visualize the flow of data through your application, you've now created a cycle that looks something like this.
App
^
|
v
Person
It's no longer trivial to work out where the data in App came from (it is still quite simple in such a small app, but obviously the bigger the app gets, the harder it is to tell).
With Flux
In the beginning, Facebook developers wrote React applications with unidirectional data flows and they saw that it was good. However, a need arose for data to go up the tree, which resulted in a crisis. How shall our data flow be unidirectional and still return to the top of the tree? And on the seventh day, they created Flux(1) and saw that it was very good.
Flux allows you to describe your changes as actions and pass them out of your components to stores (self-contained state boxes), which understand how to manipulate their state based on the action. Then the store tells all the components that care about it that something has changed, at which point the components can fetch new data to render.
You regain your unidirectional data flow, with an architecture that looks like this.
App <---- [Stores]
 |            ^
 v            |
Person --> Dispatcher
Stores
Rather than keeping your state in your <App /> component, you would probably want to create a People store to keep track of your list of people.
Maybe it would look something like this.
// stores/people-store.js
const people = [];

export function getPeople() {
  return people;
}

function editPerson(id, person) {
  // ...
}

function addPerson(person) {
  // ...
}

function removePerson(id) {
  // ...
}
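Purely as an illustration of what one of those stubs might contain (a hypothetical sketch, assuming each person has an id):
function editPerson(id, person) {
  const index = people.findIndex((p) => p.id === id);
  if (index !== -1) {
    // merge the edited fields over the existing person
    people[index] = { ...people[index], ...person };
  }
}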
Now, we could export these functions and let our components call them directly, but that's bad because it means that our components have to have knowledge of the design of the store and we want to keep them as dumb as possible.
Actions
Instead, our components create simple, serializable actions that our stores can understand. Here are some examples:
// remove person with id 53
{ type: 'PEOPLE_REMOVE', payload: 53 }

// create a new person called John Foo
{ type: 'PEOPLE_ADD', payload: { name: 'John Foo' } }

// edit person 13
{
  type: 'PEOPLE_EDIT',
  payload: {
    id: 13,
    person: { name: 'Unlucky Bill' }
  }
}
These actions don't have to have these specific keys, and they don't even have to be objects; this is just the convention from Flux Standard Actions.
Dispatcher
Now, we have to tell our store how to deal with these actions when they arrive.
// stores/people-store.js
// ...
dispatcher.register(function(action) {
  switch (action.type) {
    // each case needs a break, or the handlers below it would also run
    case 'PEOPLE_REMOVE':
      removePerson(action.payload);
      break;
    case 'PEOPLE_ADD':
      addPerson(action.payload);
      break;
    case 'PEOPLE_EDIT':
      editPerson(action.payload.id, action.payload.person);
      break;
  }
});
Phew. Lot of work so far, nearly there.
Now we can start to dispatch these actions from our components.
// components/people.js
// ...
onEdit(editedPerson) {
  dispatcher.dispatch({
    type: 'PEOPLE_EDIT',
    payload: {
      id: this.props.id,
      person: editedPerson
    }
  });
},
onRemove() {
  dispatcher.dispatch({
    type: 'PEOPLE_REMOVE',
    payload: this.props.id
  });
}
// ...
When you edit the person, call the onEdit method and it will dispatch the appropriate action to your stores. The same goes for removing a person. Normally you'd move this stuff into action creators, but that's a topic for another time.
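(As a quick taste of that, a minimal action creator might look like the following sketch; the file name is hypothetical.)
// actions/people-actions.js
export function editPerson(id, person) {
  // components call this instead of talking to the dispatcher directly
  dispatcher.dispatch({
    type: 'PEOPLE_EDIT',
    payload: { id, person }
  });
}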
Ok, finally getting somewhere! Now our components can create actions that update the data in our stores. How do we get that data back into our components?
Initially, it's very simple. We can require the store in our top level component and simply ask for the data.
// components/app.js
import { getPeople } from '../stores/people-store';
// ...
constructor() {
  super();
  this.state = { people: getPeople() };
}
We can pass this data down in exactly the same way, but what happens when the data changes?
The official stance from Flux is basically "not our problem". Their examples use Node's EventEmitter class to allow stores to accept callback functions that are called when the store updates.
This allows you to write code that looks something like this:
componentWillMount() {
  peopleStore.addListener(this.peopleUpdated);
},
componentWillUnmount() {
  peopleStore.removeListener(this.peopleUpdated);
},
peopleUpdated() {
  this.setState({ people: getPeople() });
}
Really, the ball is in your court on this one. There are many other strategies for getting the data back into your components: Reflux creates the listen method for you automatically; Redux allows you to declaratively specify which components receive which parts of the store as props, then handles the updating. Spend enough time with Flux and you'll find a preference.
Now, you're probably thinking, blimey — this seems like a lot of effort to go to just to add edit functionality to a component; and you're right, it is!
For small applications, you probably don't need Flux.
Sure there are lots of benefits, but the additional complexity just isn't always warranted. As your application grows, you'll find that if you've fluxed it up, it will be much easier to manage, maintain and debug.
The trick is to know when it's appropriate to use the Flux architecture and hopefully when the time comes, this overly long, rambling answer will have cleared things up for you.
(1) This isn't actually true.