Firebase Realtime Database - Database trigger structure best practice - javascript

I have some data in a Firebase Realtime Database where I wish to split one single onUpdate() trigger into two triggers. My data is structured as below.
notes: {
  note-1234: {
    access: {
      author: "-L1234567890",
      members: {
        "-L1234567890": 0,
        "-LAAA456BBBB": 1
      }
    },
    data: {
      title: "Hello",
      order: 1
    }
  }
}
Currently I have one onUpdate() database trigger for node 'notes/{noteId}'.
exports.onNotesUpdate = functions
  .region('europe-west1')
  .database
  .ref('notes/{noteId}')
  .onUpdate((change, context) => {
    // Perform actions
    return noteFunc.notesUpdate({change, context, type: ACTIVE})
  })
However, since my code is getting quite extensive handling both data and access updates, I am considering splitting it into two parts: one handling updates in the access child node and one handling the data child node. This way my code would be easier to read and understand by being logically split into separate code blocks.
exports.onNotesUpdateAccess = functions
  .region('europe-west1')
  .database
  .ref('notes/{noteId}/access')
  .onUpdate((change, context) => {
    // Perform actions
    return noteFunc.notesAccessUpdate({change, context, type: ACTIVE})
  })

exports.onNotesUpdateData = functions
  .region('europe-west1')
  .database
  .ref('notes/{noteId}/data')
  .onUpdate((change, context) => {
    // Perform actions
    return noteFunc.notesDataUpdate({change, context, type: ACTIVE})
  })
I am a bit unsure though, since both access and data are child nodes to the note-1234 (noteId) node.
My question is - Would this be a recommended approach or could separate triggers on child nodes create problems?
Worth mentioning is that the entire note-1234 node (both access and data) will sometimes be updated with one .update() action from my application. At other times only access or data will be updated.
Kind regards /K

It looks like you've nested two types of data under a single branch, which is something the Firebase documentation explicitly recommends against in the sections "Avoid nesting data" and "Flatten data structure".
So instead of merely splitting the code into two, I'd also recommend splitting the data structure into two top-level nodes: one for each type of data. For example:
"notes-data": {
note-1234: {
author: "-L1234567890",
members: {
"-L1234567890": 0,
"-LAAA456BBBB": 1
}
}
},
"notes-access": {
note-1234: {
title: "Hello",
order: 1
}
}
By using the same key in both top-level nodes, you can easily look up the other type of data for a note. And because Firebase pipelines these requests over a single connection, such client-side joining of data is not nearly as slow as you might initially think.
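For example, such a client-side join for a single note might look like the sketch below, assuming the v8-style Firebase JS SDK and the split structure above:

// Sketch only: fetch both halves of one note in parallel and join them client side.
const db = firebase.database();

function getNote(noteId) {
  return Promise.all([
    db.ref(`notes-access/${noteId}`).once('value'),
    db.ref(`notes-data/${noteId}`).once('value')
  ]).then(([accessSnap, dataSnap]) => ({
    access: accessSnap.val(),
    data: dataSnap.val()
  }));
}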

Related

Vuex/Redux store pattern - sharing single source of data in parent and child components that require variations of that data

I understand the benefits of using a store pattern and having a single source of truth for data shared across components in an application, and making API calls in a store action that gets called by components rather than making separate requests in every component that requires the data.
It's my understanding that if this data needs to change in some way, depending on the component using the data, this data can be updated by calling a store action with the appropriate filters/args, and updating the global store var accordingly.
However, I am struggling to understand how to solve the issue whereby a parent component requires one version of this data, and a child of that component requires another.
Consider the following example:
In an API, there exists a GET method on an endpoint to return all people. A flag can be passed to return people who are off sick:
GET: api/people returns ['John Smith', 'Joe Bloggs', 'Jane Doe']
GET: api/people?isOffSick=true returns ['Jane Doe']
A parent component in the front end application requires the unfiltered data, but a child component requires the filtered data. For arguments sake, the API does not return the isOffSick boolean in the response, so 2 separate requests need to be made.
Consider the following example in Vue.js:
// store.js
export const store = createStore({
  state: {
    people: []
  },
  actions: {
    // Vuex actions receive the context first; destructure commit from it
    async fetchPeople({ commit }, filters) {
      // ... build queryString from the filters argument
      const res = await api.get('/people' + queryString);
      commit('setPeople', res.data);
    }
  },
  mutations: {
    setPeople(state, people) {
      state.people = people;
    }
  }
});
// parent.vue - requires ALL people (NO filters/args passed to API)
export default {
  mounted() {
    this.fetchPeople();
  },
  computed: {
    ...mapState([
      'people'
    ])
  },
  methods: {
    ...mapActions(['fetchPeople'])
  }
}
// child.vue - requires only people who are off sick (filters/args passed to API)
export default {
  mounted() {
    this.fetchPeople({ isOffSick: true });
  },
  computed: {
    ...mapState([
      'people'
    ])
  },
  methods: {
    ...mapActions(['fetchPeople'])
  }
}
The parent component sets the store var with the data it requires, and then the child overwrites that store var with the data it requires.
Obviously the shared store var is not compatible with both components.
What is the preferred solution to this problem for a store pattern? Storing separate state inside the child component seems to violate the single source of truth for the data, which is partly the reason for using a store pattern in the first place.
Edit:
My question is pertaining to the architecture of the store pattern, rather than asking for a solution to this specific example. I appreciate that the API response in this example does not provide enough information to filter the global store of people, i.e. using a getter, for use in the child component.
What I am asking is: where is an appropriate place to store this second set of people if I wanted to stay true to a store focused design pattern?
It seems wrong somehow to create another store variable to hold the data just for the child component, yet it also seems counter-intuitive to store the second set of data in the child component's state, as that would not be in line with a store pattern approach and keeping components "dumb".
If there were numerous places that required variations on the people data that could only be created by a separate API call, there would either be a) lots of store variables for each "variation" of the data, or b) separate API calls and state in each of these components.
Thanks to tao I've found what I'm looking for:
The best approach would be to return the isOffSick property in the API response, then filter the single list of people (e.g. using a store getter), thus keeping a single source of truth for all people in the store and preventing the need for another API request.
If that was not possible, it would make sense to add a secondary store variable for isOffSick people, to be consumed by the child component.
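For reference, the preferred getter approach might look like the sketch below. It assumes the API is changed to include an isOffSick flag on each person (not the case in the original example), and api is the same HTTP client used above:

// store.js - sketch only
import { createStore } from 'vuex';

export const store = createStore({
  state: {
    people: [] // single source of truth, e.g. [{ name: 'Jane Doe', isOffSick: true }]
  },
  getters: {
    // both components derive what they need from the same list
    offSickPeople: (state) => state.people.filter((p) => p.isOffSick)
  },
  mutations: {
    setPeople(state, people) {
      state.people = people;
    }
  },
  actions: {
    async fetchPeople({ commit }) {
      const res = await api.get('/people'); // one unfiltered request
      commit('setPeople', res.data);
    }
  }
});

The child component then reads ...mapGetters(['offSickPeople']) instead of overwriting the shared people state.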

Proper approach for Async Pipe

I'm building an Angular app and trying to use the async pipe whenever possible to handle Observable subscriptions.
I'm still not exactly sure of when and why I should use it. Most of the time I've seen that if I don't need to make any changes to the incoming data I can just use it and show the data as-is; but if I need to do something to any piece of the data beforehand, I should manually subscribe in my TypeScript code and handle everything there before displaying it.
So for example, if I have an array of objects and I need to manipulate a string in one of the object's properties, it would be better to manually subscribe, handle the response and then display that in my template.
Is this assumption correct?
I have used both types of observables within components and these are my reasons
(there are probably others that I am not aware of):
Reasons for using a subscribed observable:
To control subscribing and unsubscribing subscriptions manually.
Synchronizing the loading and manipulation of data within a component before internal use.
When the subscribed data is used internally (non-visually) within the component, such as a service or computations.
Reasons for using an asynchronous observable pipe:
Subscribing and unsubscribing of subscriptions of observables is handled automatically.
Synchronizing the loading and manipulation of data within a component before use within the HTML template.
When there are a number of HTML elements that depend on subscribed data and you would like the subscriptions released automatically after the component is destroyed.
In both cases you can load and manipulate subscribed data within your component before usage.
An example of each is below:
Subscription based
TS
import { of, Subscription } from 'rxjs';

someData: SomeClass[] = [
  { id: 1, desc: 'One', data: 100 },
  { id: 2, desc: 'Two', data: 200 },
  { id: 3, desc: 'Three', data: 300 }
];
someData$: Subscription;

// subscribe() returns a Subscription, kept here so it can be unsubscribed later
this.someData$ = of(this.someData).subscribe((res) => {
  this.someData = res.map((r) => {
    r.data = Math.floor(r.data * 1.1);
    return r;
  });
});
Asynchronous observable pipe
TS
import { of, Observable } from 'rxjs';
import { map } from 'rxjs/operators';
...
someData: SomeClass[] = [];
someData$: Observable<SomeClass[]>;

// the async pipe in the template subscribes to this observable
this.someData$ = of(this.someData).pipe(
  map((res) => {
    res.forEach((r) => {
      r.data = Math.floor(r.data * 1.1);
    });
    return res;
  })
);
HTML
With the manual subscription, iterate the plain array:
<li *ngFor="let data of someData">
  Item={{ data.desc }}. Value={{ data.data }}
</li>
With the async pipe, let the pipe subscribe for you:
<li *ngFor="let data of someData$ | async">
  Item={{ data.desc }}. Value={{ data.data }}
</li>
To summarize, the usage of either option depends on the complexity of your component, its type (visual or non-visual) and how you would like to handle the memory management of subscriptions.
The answer to the original question is no, it is not necessarily better to manually subscribe when calculations/pre-processing are involved. You can also use the async pipe to do likewise, as shown in the two equivalent examples above.

Using Merge with a single Create call in FaunaDB is creating two documents?

Got a weird bug using FaunaDB with Node.js running in a Netlify Function.
I am building out a quick proof-of-concept and initially everything worked fine. I had a Create query that looked like this:
const faunadb = require('faunadb');
const q = faunadb.query;

const CreateFarm = (data) => (
  q.Create(
    q.Collection('farms'),
    { data },
  )
);
As I said, everything here works as expected. The trouble began when I tried to start normalizing the data FaunaDB sends back. Specifically, I want to merge the Fauna-generated ID into the data object, and send just that back with none of the other metadata.
I am already doing that with other resources, so I wrote a helper query and incorporated it:
const faunadb = require('faunadb');
const q = faunadb.query;

const Normalize = (resource) => (
  q.Merge(
    q.Select(['data'], resource),
    { id: q.Select(['ref', 'id'], resource) },
  )
);

const CreateFarm = (data) => (
  Normalize(
    q.Create(
      q.Collection('farms'),
      { data },
    ),
  )
);
This Normalize function works as expected everywhere else. It builds the correct merged object with an ID with no weird side effects. However, when used with CreateFarm as above, I end up with two identical farms in the DB!!
I've spent a long time looking at the rest of the app. There is definitely only one POST request coming in, and CreateFarm is definitely only being called once. My best theory was that since Merge copies the first resource passed to it, Create is somehow getting called twice on the DB. But reordering the Merge call does not change anything. I have even tried passing in an empty object first, but I always end up with two identical objects created in the end.
Your helper builds an FQL query containing two separate Create expressions, because the resource argument is embedded once for each Select. Each Create is evaluated and creates a new Document. This is not related to the Merge function. Expanded, the query looks like this:
Merge(
  Select(['data'], Create(
    Collection('farms'),
    { data },
  )),
  { id: Select(['ref', 'id'], Create(
    Collection('farms'),
    { data },
  )) },
)
Use Let to create the document, then Update it with the id. Note that this increases the number of Write Ops required for your application; it will basically double the cost of creating Documents. But for what you are trying to do, this is how to do it.
Let(
  {
    newDoc: Create(Collection("farms"), { data }),
    id: Select(["ref", "id"], Var("newDoc")),
    data: Select(["data"], Var("newDoc"))
  },
  Update(
    Select(["ref"], Var("newDoc")),
    {
      data: Merge(
        Var("data"),
        { id: Var("id") }
      )
    }
  )
)
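Translated to the JS driver's q namespace, a sketch of the same query (matching the CreateFarm helper from the question) could look like this:

const faunadb = require('faunadb');
const q = faunadb.query;

// Same Let/Update pattern as above, expressed with the JS driver.
const CreateFarm = (data) => (
  q.Let(
    {
      newDoc: q.Create(q.Collection('farms'), { data }),
      id: q.Select(['ref', 'id'], q.Var('newDoc')),
      data: q.Select(['data'], q.Var('newDoc'))
    },
    q.Update(
      q.Select(['ref'], q.Var('newDoc')),
      { data: q.Merge(q.Var('data'), { id: q.Var('id') }) }
    )
  )
);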
Aside: why store id in the document data?
It's not clear why you might need to do this. Indexes can be created on the ref values themselves. If your client receives a Ref, then that can be passed into subsequent queries directly. In my experience, if you need the plain id value directly in an application, transform the Document as close to that point in the application as possible (like using ids as keys for an array of web components).
There's even a slight Compute advantage to using Ref values rather than re-building Ref expressions from a Collection name and ID. The expression Ref(Collection("farms"), "1234") counts as 2 FQL functions toward Compute costs, but reusing the Ref value returned by queries is free.
Working with GraphQL, the _id field is abstracted out for you because working with Document types in GraphQL would be pretty awful. However, the best practice for FQL queries is to use the Refs directly as much as possible.
Don't let me talk in absolute terms, though! I believe generally that there's a reason for anything. If you believe you really need to duplicate the ID in the Documents data, then I would be interested in a comment why.

Redux/Java: Managing normalized data & multiple model representations per entity

We are building a new app using React/Redux which is rendered server side.
We wish to follow best practice for Redux and normalize our data on the server before it's passed into the initial state for the store.
For this example, let's say we have a generic 'Products' entity that can be quite complex. It is normalized at the root of our store, with page-level state in another object at the root. The structure and reducers follow the typical 'slice reducer' pattern and look like this:
{
  page_x_state: PageReducer,
  products: ProductsReducer
}
We are using combineReducers to merge the reducers before passing them into the store.
Theoretical use case: We have a 'products' page that shows a list of basic product info. A user can click on a product to show a modal which then loads and shows the complete product data.
For the above example, the state sent from the server will contain only basic product models (3 or 4 fields), this is enough to render the table and fetching all product information at this point is wasteful and not very performant.
When a user clicks a product we will do an AJAX call fetch all data for that product. Once we have all data for the single product, should we update the instance in the products store with a full model? If so, we would then end up with a set of objects all of which could be different states (some could have minimal fields vs some which are full-blown objects with 10s of fields). Is this the best way to handle it?
Also, I would be interested to hear any thoughts of managing different representations of the same underlying model on the server and how to map it to the Redux store (in Java ideally).
EDIT:
Explicitly answering your first question: if your reducers are built up correctly, your whole state tree should initialize with absolutely no data in it, but it should be the correct shape. Your reducers should always have a default return value; when rendering server side, Redux should only render the initial state.
After server-side rendering, when the store (which is now client side) needs updating because of a user action, your state shape for all of your product data is already there (it's just that some of it will probably be default values). Rather than overwriting an object, you're just filling in the blanks, so to speak.
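A minimal sketch of a reducer with such a default shape (names assumed for illustration):

// The default value gives the tree its correct shape before any data arrives.
const initialProductsState = { by_id: {}, all_ids: [] };

function productsReducer(state = initialProductsState, action) {
  switch (action.type) {
    default:
      return state; // no matching action: the shape still renders
  }
}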
Let's say in your second-level view you need name, photo_url, price and brand, and the initial view has 4 products on it. Your rendered store would look something like this:
{
  products: {
    by_id: {
      "1": {
        id: "1",
        name: "Cool Product",
        tags: [],
        brand: "Nike",
        price: 1.99,
        photo_url: "http://url.com",
        category: "",
        product_state: 0,
        is_fetching: 0,
        etc: ""
      },
      "2": {
        id: "2",
        name: "Another Cool Product",
        tags: [],
        brand: "Adidas",
        price: 3.99,
        photo_url: "http://url2.com",
        category: "",
        product_state: 0,
        is_fetching: 0,
        etc: ""
      },
      "3": {
        id: "3",
        name: "Crappy Product",
        tags: [],
        brand: "Badidas",
        price: 0.99,
        photo_url: "http://urlbad.com",
        category: "",
        product_state: 0,
        is_fetching: 0,
        etc: ""
      },
      "4": {
        id: "4",
        name: "Expensive product",
        tags: [],
        brand: "Rolex",
        price: 199.99,
        photo_url: "http://url4.com",
        category: "",
        product_state: 0,
        is_fetching: 0,
        etc: ""
      }
    },
    all_ids: ["1", "2", "3", "4"]
  }
}
You can see in the above data some keys are just empty strings or an empty array, but we have the data we need for the actual initial rendering of the page.
We could then make asynchronous calls on the client in the background immediately after the server has rendered and the document is ready; chances are the server will return those initial calls before the user tries to get the data anyway. We can then load subsequent products on user request. I don't think that's the best approach, but it's the one that makes most sense to me. Some other people might have other ideas; it entirely depends on your app and use-case.
I would only keep one products object in state though and keep ALL the data pertaining to products in there.
I recently deployed an app into production and I'll share some of my insights. The app, whilst not too large in size, had a complex data structure, and having gone through the whole process as a newbie to Redux in production (with guidance from my architect), these are some of our takeaways. There's no right way in terms of architecture, but there certainly are some things to avoid or do.
1. Before firing into writing your reducers, design a 'static' state
If you don't know where you are going, you can't get there. Writing the whole structure of your state out flat will help you reason about how your state will change over time. We found this saved us time because we didn't have to rewrite large sections.
2. Designing your state
Keep it simple. The whole point of Redux is to simplify state management. We used a lot of the tips from the egghead.io Redux tutorials created by Dan Abramov. They are clear and really helped solve a lot of issues we were encountering. I'm sure you've read the docs about normalizing state, but the simple examples given there actually carried through to most data patterns we implemented.
Rather than creating complex webs of data, each chunk of data held only its own data; if it needed to reference another piece of data, it referenced it by id. We found this simple pattern covered most of our needs.
{
  products: {
    by_id: {
      "1": {
        id: "1",
        name: "Cool Product",
        tags: ["tag1", "tag2"],
        product_state: 0,
        is_fetching: 0,
        etc: "etc"
      }
    },
    all_ids: ["1"]
  }
}
In the example above, tags might be another chunk of data with a similar structure using by_id and all_ids. All over the docs and tutorials, Abramov keeps referencing relational data and relational databases; this was actually key for us. At first we kept looking at the UI and designing our state around how we thought we were going to show it. When this clicked and we started grouping the data based on its relationship to other pieces of data, things started to fall into place.
Quickly flipping to your question: I would avoid duplicating any data. As mentioned in another comment, I'd simply create a key in the state object called product_modal and let the modal take care of its own state:
{
  products: {
    ...
  },
  product_modal: {
    current_product_id: "1",
    is_fetching: true,
    is_open: true
  }
}
We found following this pattern with page state worked really well too; we just treated it like any other piece of data with an id/name etc.
3. Reducer Logic
Make sure reducers keep track of their own state. A lot of our reducers looked quite similar; at first this felt like DRY hell, but then we quickly realised the power of more reducers. Say an action is dispatched and you want to update a whole chunk of state: no problem, just check in your reducer for the action and return the new state. If you only want to update one or two fields in the same state, you do the same thing but change only the fields you want. Most of our reducers were simply a switch statement with an occasional nested if statement.
Combining Reducers
We didn't use combineReducers; we wrote our own. It wasn't hard, it helped us understand what was going on in Redux, and it allowed us to get a little smarter with our state. This tutorial was invaluable.
Actions
Middleware is your friend. We used the Fetch API with redux-thunk to make RESTful requests. We split the required data requests into separate actions, which called store.dispatch() for each data chunk that needed updating. Each dispatch triggered another action to update state. This kept our state updates modular and allowed us to update large sections or individual fields as needed, as in the sketch below.
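A rough sketch of that pattern with redux-thunk (action names and endpoint are hypothetical):

// Hypothetical thunk: one RESTful request, then one dispatch per data chunk.
const fetchProject = (projectId) => async (dispatch) => {
  dispatch({ type: 'PROJECT_FETCH_START', projectId });
  const res = await fetch(`/api/projects/${projectId}`);
  const json = await res.json();
  // each chunk of normalized state gets its own update action
  dispatch({ type: 'PRODUCTS_RECEIVED', payload: json.products });
  dispatch({ type: 'TAGS_RECEIVED', payload: json.tags });
  dispatch({ type: 'PROJECT_FETCH_DONE', projectId });
};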
Dealing with an API
OK, so there's way too much to deal with here. I'm not saying our way is the best, but it has worked for us. In short: we have an internal API in Java with publicly exposed endpoints. The calls from this API didn't always map to the front end easily. We haven't implemented this, but ideally an initial init endpoint could have been written on their end to get a lump of initial data needed to get things rolling on the front end, for speed's sake.
We created a public API on the same server as the app, written in PHP. This API abstracted the internal API's endpoints (and in some cases the data too) away from the front end and the browser.
When the app would make a GET request to /api/projects/all the PHP API would then call our internal API, get the necessary data (sometimes across a couple of requests) and return that data in a usable format that redux could consume.
This might not be the ideal approach for a JavaScript app, but we didn't have the option to create a new internal API structure; we needed to use one that has existed for several years, and we have found the performance acceptable.
should we update the instance in the products store with a full model
It should be noted that Java and ReactJS+Redux don't have much conceptual overlap. Everything is a JavaScript object, not an object with a class.
Generally, storing all the data you receive in the Redux store state is the way to go. To work around the fact that some of the data will be minimal and some will be fully loaded, you can make a conditional AJAX call in the componentDidMount method of the individual product display container.
class MyGreatProduct extends React.Component {
  componentDidMount() {
    // only fetch if the full representation hasn't been loaded yet
    if (!('somethingOnlyPresentInFullData' in this.props.thisProduct)) {
      doAjaxCall(this.props.thisProduct.id).then((result) => {
        this.props.storeNewResult(result.data);
      }).catch((error) => { /* ... */ });
    }
  }
  // the rest of the component container code
}

const mapStateToProps = (state, ownProps) => {
  return {
    thisProduct: state.products.productInfo[ownProps.selectedId] || { id: ownProps.selectedId }
  };
};

const mapDispatchToProps = (dispatch, ownProps) => {
  return {
    storeNewResult: (data) => { dispatch(productDataActions.fullProductData(data)); }
  };
};

export default connect(mapStateToProps, mapDispatchToProps)(MyGreatProduct);
With this code, it should be somewhat clear how agnostic the components and containers can be regarding the exact data available in the Store at any given time.
Edit: In terms of managing different representations of the same underlying model on the server and how to map it to the Redux store, I'd try to use the same relative looseness you are dealing with once you have JSON. This should eliminate some coupling.
What I mean by this is: just add the data you have to a plain JavaScript object to be consumed by React + Redux, without worrying too much about what values could potentially be stored in the Redux state during the execution of the application.
There's probably no right answer, just which strategy you prefer:
The simplest strategy is to add another piece to your reducer called selectedProduct and always overwrite it with the full object of the currently selected product. Your modal would always display the details of the selectedProduct. The downsides of this strategy are that you aren't caching data for the case when a user selects the same product a second time, and your minimal fields aren't normalized. A sketch follows.
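A sketch of that first strategy (action name hypothetical):

// One slot, always overwritten with the full object of the selected product.
function selectedProductReducer(state = null, action) {
  switch (action.type) {
    case 'FULL_PRODUCT_RECEIVED':
      return action.payload;
    default:
      return state;
  }
}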
Or you could update the instance in your Products store like you said, you'll just need logic to handle it. When you select a product, if it's fully loaded, render it. If not, make the ajax call, and show a spinner until its fully loaded.
If you don't have a concern with storing that extra data in the Redux store, it's not actually going to hurt your performance very much if you use a normalized state. So on that front I would recommend caching as much as you can without risking security.
I think the best solution for you would be to use some Redux middleware, so your front end doesn't care how it gets the data. It dispatches an action to the store, and the middleware determines whether or not an AJAX call is needed to get the new data. If a fetch is needed, the middleware updates the state when the AJAX call resolves; if not, it can just discard the action because you already have the data. This way you isolate the issue of having two different representations of the data to the middleware, and implement a resolution there, so your front end just asks for data and doesn't care how it gets it.
I don't know all the implementation details, so as Jeff said it's probably more about what you prefer, but I would definitely recommend adding some middleware to handle your AJAX calls if you haven't already; it should make interfacing with the store much simpler.
If you want to read more on middleware, the Redux documentation is pretty good:
https://redux.js.org/docs/advanced/Middleware.html
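A rough sketch of such a middleware (action and field names are hypothetical):

// Hypothetical caching middleware: the UI dispatches PRODUCT_REQUESTED and
// does not care whether the data comes from the store or from the network.
const productCacheMiddleware = (store) => (next) => (action) => {
  if (action.type === 'PRODUCT_REQUESTED') {
    const cached = store.getState().products.by_id[action.id];
    if (cached && cached.is_fully_loaded) {
      return; // already have the data: discard the action
    }
    // not cached: fetch, then dispatch the result into the store
    return fetch(`/api/products/${action.id}`)
      .then((res) => res.json())
      .then((data) => store.dispatch({ type: 'PRODUCT_RECEIVED', payload: data }));
  }
  return next(action); // everything else passes through untouched
};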
You could store each entity as an object of its various representations. In the action creator that updates the entity, include the representation as an argument:
const receiveProducts = (payload = [], representation = 'summary') => ({
  type: 'PRODUCTS_RECEIVED',
  payload,
  representation
});

const productReducer = (state = {}, action) => {
  switch (action.type) {
    case 'PRODUCTS_RECEIVED': {
      const { payload, representation } = action;
      return {
        ...state,
        ...payload.reduce((next, entity) => {
          next[entity.id] = {
            ...next[entity.id],
            [representation]: entity
          };
          return next;
        }, {})
      };
    }
    default:
      return state;
  }
};
This means that whoever is calling receiveProducts() needs to know which representation is returned.
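A consuming component can then pick the representation it needs, for example (selector and 'detail' representation name are hypothetical):

// Prefer the detail representation, fall back to the summary.
const selectProduct = (state, id) => {
  const entity = state.products[id] || {};
  return entity.detail || entity.summary;
};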

Redux: How to track whether data is locally modified?

I know this is a bit of an opinion question, and it's long, but I'm having trouble coming up with a good solution in Redux.
I'm building a level editor and I want to show the user whether or not the data has been modified since it was persisted to the server. First, consider the data:
chapters: [{
  id: 1,
  levelIds: [ 2 ]
}],
levels: [{
  id: 2,
  entityIds: [ 4, 5 ]
}],
entities: [{
  id: 4, position: ...
}, {
  id: 5, position: ...
}]
A chapter has multiple levels and a level has multiple entities. In the level editor, you edit full chapters as one item, so if any entity or level changes, the whole chapter is considered unsaved.
I want to track if the user has made any changes to the data since it was last persisted to the server. I want to show a * for example next to the chapter name if something has changed. Criteria:
Track unsaved (not persisted to server) status
Status must work with an undo/redo system
If some "nested" data is changed (like an entity position), the top level chapter must know it is unsaved, not the entity itself
I've explored a few options and I'll try to illustrate why I'm not sure if any solution is better than the others.
Option 1: Store an "unsaved" flag on each chapter
This solution involves storing an "unsaved" flag, possibly in a separate reducer, that's set to true on any modifications, and false when the chapter is saved to the server.
Problems
There are many actions I need to track, so this is a bit verbose. I also need to manually track which actions actually modify the chapter. It may look something like:
function isUnsavedReducer( state = {}, action ): Object {
  switch( action.type ) {
    case CHANGE_ENTITY_POSITION:
    case CHANGE_ENTITY_NAME:
    // ...etc
    case CHANGE_LEVEL_TITLE: {
      return {
        ...state,
        [ action.chapterId ]: true
      };
    }
    default:
      return state;
  }
}
Most of the actions don't know the chapterId. For example if I move an entity, the action looks like { entityId: 2, position: newPosition }. If I went this route I think I'd have to add the chapterId to all actions, even though they don't modify the chapter?
Option 2: Track the last chapter object saved
On the surface this looks simpler. Whenever the data is persisted, simply store the current in-memory chapter object:
function lastSavedReducer( state = {}, action ): Object {
  switch( action.type ) {
    case SAVE_CHAPTER: {
      return {
        ...state,
        [ action.chapterId ]: action.chapter
      };
    }
    default:
      return state;
  }
}
Then in the view to check if the current data is unsaved it's a strict equality check:
{ lastSaved[ currentChapterId ] === this.props.chapter ? 'Saved' : 'Unsaved' }
Problems:
The same as problem #2 from above. When I modify an entity position with a redux action, I don't modify the top level chapter itself. I'd have to modify all of my reducers like chapterReducer to return a new object (even though nothing actually changed). I could also store the "last persisted entities" object, but since all entities are held in one store, I couldn't track which chapters were unsaved, just that something was unsaved.
Is there an obvious solution I'm missing? Should I modify how my data is stored or my reducer setup? The normalized data in my reducers, and the many possible actions that can set "unsaved", make me unsure of the best way forward. Have you implemented something similar and already know the pitfalls or best way forward?
The problem is that you have normalized your data, and this data acts as the source of truth of your application.
Why do you want the edition events, that are not saved yet, to modify directly this normalized data?
It makes it hard to revert to saved state without refetching saved state from the backend (it shouldn't be required)
It is confusing because we don't know if the normalized data is saved or not (your current problem)
Best solution: keep unsaved actions on a list, but don't run them before save
Instead of modifying directly the normalized data, you could accumulate the edition actions in a list of your store.
At render-time, you have the ability to project your Redux store state with this list, so your store state does not change, but you can still connect to components the current edition state. This technique is very similar to event-sourcing and snapshots.
Imagine your store state looks like that:
const storeState = {
  unsavedEditionActions: [a1, a2, a3],
  normalizedData: {
    chapters: [{
      id: 1,
      levelIds: [ 2 ]
    }],
    levels: [{
      id: 2,
      entityIds: [ 4, 5 ]
    }],
    entities: [{
      id: 4, position: ...
    }, {
      id: 5, position: ...
    }]
  }
}
If a component needs to get the edition state, and the original state, you can do the reducing directly into mapStateToProps:
let Component = ...
Component = connect(state => {
  return {
    normalizedData: state.normalizedData,
    normalizedDataEdited: state.unsavedEditionActions.reduce(myNormalizedDataReducer, state.normalizedData)
  };
})(Component)
Your component will now receive both the currently saved normalized data and the current draft edition data. From there you can do your magic and shallow-compare the two trees to know what has been edited.
On save, you empty the list and apply it to your normalized data.
On cancel, you just empty this list (no need to refetch because you kept state before edition intact).
I'd recommend using Reselect and ImmutableJS for better performance, but it should also work fine without them.
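For example, a memoized projection with Reselect might look like this sketch (myNormalizedDataReducer as above):

import { createSelector } from 'reselect';

// Memoized projection: the replay only re-runs when normalizedData or
// unsavedEditionActions actually change.
const selectEditedData = createSelector(
  (state) => state.normalizedData,
  (state) => state.unsavedEditionActions,
  (normalizedData, actions) => actions.reduce(myNormalizedDataReducer, normalizedData)
);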
Note that this technique of maintaining a list of events is also what many people use for optimistic updates (i.e. giving immediate feedback after a user action, without waiting for server approval, but keeping the ability to revert to the previous state if the server does not approve the change). Some useful links:
Lee Byron's Render 2016 talk (24")
Redux-optimistic-ui
Note that you are not forced to reduce your action list in connect; you can also hold two entries directly in your store: normalizedData / normalizedDataEdited.
Other solution: build a graph to edit from normalized data, and compare initialGraph != editedGraph
Instead of modifying the normalized data directly on edition, you could build a temporary denormalized graph. You could then build a copy of this graph, edit the copy, and shallow-compare the copy to the original graph to see if it was modified.
function levelEditionReducer( state = {}, action ) {
  switch( action.type ) {
    case LEVEL_DATA_LOADED: {
      const levelData = action.payload.levelData;
      return {
        initialGraph: levelData,
        editedGraph: levelData,
      };
    }
    case CHANGE_ENTITY_NAME: {
      const newEditedGraph = someFunctionToEditEntityName(state.editedGraph, action);
      return (newEditedGraph === state.editedGraph) ? state : { ...state, editedGraph: newEditedGraph };
    }
    // ......
    default:
      return state;
  }
}
Then after edition, you can see that state.initialGraph != state.editedGraph and display a save button and a draft status.
Note this shallow compare only works if you deal very carefully with immutable data structures and make sure you don't update anything that does not have to be updated. If someFunctionToEditEntityName returns the same state as the input, it is important that the reducer returns the exact same state, because the action did not actually change anything!
Writing this kind of code is not simple in plain ES6, but if you use libraries like Immutable or updeep, setting an attribute of an object to a value it already has may short-circuit the operation and return the original object.
Also note that you will detect your lack of care early, because the Save button would be displayed while it should not :) so you probably won't miss that.
Once the data is finally saved, you can merge the saved graph into your normalized data.
