How to deal with relational data in Redux?

The app I'm creating has a lot of entities and relationships (the database is relational). To give you an idea, there are 25+ entities, with all kinds of relations between them (one-to-many, many-to-many).
The app is React + Redux based. For getting data from the Store, we're using the Reselect library.
The problem I'm facing is when I try to get an entity with its relations from the Store.
In order to explain the problem better, I've created a simple demo app with a similar architecture. I'll highlight the most important parts of the code base. At the end I'll include a snippet (fiddle) so you can play with it.
Demo app
Business logic
We have Books and Authors. One Book has one Author. One Author has many Books. As simple as possible.
const authors = [{
  id: 1,
  name: 'Jordan Enev',
  books: [1]
}];

const books = [{
  id: 1,
  name: 'Book 1',
  category: 'Programming',
  authorId: 1
}];
Redux Store
The Store is organized in a flat structure, compliant with the Redux best practice of Normalizing State Shape.
Here is the initial state for both the Books and Authors Stores:
const initialState = {
  // Keep entities by id:
  // { 1: { name: '' } }
  byIds: {},
  // Keep entities ids
  allIds: []
};
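For example, once the demo Books from above are added, the books slice of the Store would look like this (a sketch inferred from the reducers shown later in the snippet):

const booksState = {
  byIds: {
    1: { id: 1, name: 'Book 1', category: 'Programming', authorId: 1 }
  },
  allIds: [1]
};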
Components
The components are organized as Containers and Presentational components.
The <App /> component acts as a Container (it gets all the needed data):
const mapStateToProps = state => ({
  books: getBooksSelector(state),
  authors: getAuthorsSelector(state),
  healthAuthors: getHealthAuthorsSelector(state),
  healthAuthorsWithBooks: getHealthAuthorsWithBooksSelector(state)
});

const mapDispatchToProps = {
  addBooks, addAuthors
}
const App = connect(mapStateToProps, mapDispatchToProps)(View);
The <View /> component is just for the demo. It pushes dummy data to the Store and renders all the Presentational components, such as <Author /> and <Book />.
Selectors
The simple selectors look straightforward:
/**
 * Get Books Store entity
 */
const getBooks = ({ books }) => books;

/**
 * Get all Books
 */
const getBooksSelector = createSelector(getBooks,
  books => books.allIds.map(id => books.byIds[id]));

/**
 * Get Authors Store entity
 */
const getAuthors = ({ authors }) => authors;

/**
 * Get all Authors
 */
const getAuthorsSelector = createSelector(getAuthors,
  authors => authors.allIds.map(id => authors.byIds[id]));
It gets messy when you have a selector that computes/queries relational data.
The demo app includes the following examples:
Getting all Authors who have at least one Book in a specific category.
Getting the same Authors, but together with their Books.
Here are the nasty selectors:
/**
 * Get array of Authors ids,
 * which have books in 'Health' category
 */
const getHealthAuthorsIdsSelector = createSelector([getAuthors, getBooks],
  (authors, books) => (
    authors.allIds.filter(id => {
      const author = authors.byIds[id];
      const filteredBooks = author.books.filter(bookId => (
        books.byIds[bookId].category === 'Health'
      ));
      return filteredBooks.length;
    })
  ));

/**
 * Get array of Authors,
 * which have books in 'Health' category
 */
const getHealthAuthorsSelector = createSelector([getHealthAuthorsIdsSelector, getAuthors],
  (filteredIds, authors) => (
    filteredIds.map(id => authors.byIds[id])
  ));

/**
 * Get array of Authors, together with their Books,
 * which have books in 'Health' category
 */
const getHealthAuthorsWithBooksSelector = createSelector([getHealthAuthorsIdsSelector, getAuthors, getBooks],
  (filteredIds, authors, books) => (
    filteredIds.map(id => ({
      ...authors.byIds[id],
      books: authors.byIds[id].books.map(bookId => books.byIds[bookId])
    }))
  ));
Summing up
As you can see, computing/querying relational data in selectors gets complicated quickly:
Loading child relations (Author->Books).
Filtering by child entities (getHealthAuthorsWithBooksSelector()).
There will be too many selector parameters if an entity has many child relations. Check out getHealthAuthorsWithBooksSelector() and imagine the Author having many more relations.
So how do you deal with relations in Redux?
It looks like a common use case, but surprisingly there aren't any good practices around.
I checked out the redux-orm library and it looks promising, but its API is still unstable and I'm not sure whether it's production-ready.
const { Component } = React
const { combineReducers, createStore } = Redux
const { connect, Provider } = ReactRedux
const { createSelector } = Reselect
/**
 * Initial state for Books and Authors stores
 */
const initialState = {
  byIds: {},
  allIds: []
}
/**
 * Book Action creator and Reducer
 */
const addBooks = payload => ({
  type: 'ADD_BOOKS',
  payload
})

const booksReducer = (state = initialState, action) => {
  switch (action.type) {
    case 'ADD_BOOKS': {
      const byIds = {}
      const allIds = []
      action.payload.forEach(entity => {
        byIds[entity.id] = entity
        allIds.push(entity.id)
      })
      return { byIds, allIds }
    }
    default:
      return state
  }
}
/**
 * Author Action creator and Reducer
 */
const addAuthors = payload => ({
  type: 'ADD_AUTHORS',
  payload
})

const authorsReducer = (state = initialState, action) => {
  switch (action.type) {
    case 'ADD_AUTHORS': {
      const byIds = {}
      const allIds = []
      action.payload.forEach(entity => {
        byIds[entity.id] = entity
        allIds.push(entity.id)
      })
      return { byIds, allIds }
    }
    default:
      return state
  }
}
/**
* Presentational components
*/
const Book = ({ book }) => <div>{`Name: ${book.name}`}</div>
const Author = ({ author }) => <div>{`Name: ${author.name}`}</div>
/**
* Container components
*/
class View extends Component {
  componentWillMount () {
    this.addBooks()
    this.addAuthors()
  }

  /**
   * Add dummy Books to the Store
   */
  addBooks () {
    const books = [{
      id: 1,
      name: 'Programming book',
      category: 'Programming',
      authorId: 1
    }, {
      id: 2,
      name: 'Healthy book',
      category: 'Health',
      authorId: 2
    }]
    this.props.addBooks(books)
  }

  /**
   * Add dummy Authors to the Store
   */
  addAuthors () {
    const authors = [{
      id: 1,
      name: 'Jordan Enev',
      books: [1]
    }, {
      id: 2,
      name: 'Nadezhda Serafimova',
      books: [2]
    }]
    this.props.addAuthors(authors)
  }

  renderBooks () {
    const { books } = this.props
    return books.map(book => <div key={book.id}>
      {`Name: ${book.name}`}
    </div>)
  }

  renderAuthors () {
    const { authors } = this.props
    return authors.map(author => <Author author={author} key={author.id} />)
  }

  renderHealthAuthors () {
    const { healthAuthors } = this.props
    return healthAuthors.map(author => <Author author={author} key={author.id} />)
  }

  renderHealthAuthorsWithBooks () {
    const { healthAuthorsWithBooks } = this.props
    return healthAuthorsWithBooks.map(author => <div key={author.id}>
      <Author author={author} />
      Books:
      {author.books.map(book => <Book book={book} key={book.id} />)}
    </div>)
  }

  render () {
    return <div>
      <h1>Books:</h1> {this.renderBooks()}
      <hr />
      <h1>Authors:</h1> {this.renderAuthors()}
      <hr />
      <h2>Health Authors:</h2> {this.renderHealthAuthors()}
      <hr />
      <h2>Health Authors with loaded Books:</h2> {this.renderHealthAuthorsWithBooks()}
    </div>
  }
}
const mapStateToProps = state => ({
  books: getBooksSelector(state),
  authors: getAuthorsSelector(state),
  healthAuthors: getHealthAuthorsSelector(state),
  healthAuthorsWithBooks: getHealthAuthorsWithBooksSelector(state)
})

const mapDispatchToProps = {
  addBooks, addAuthors
}

const App = connect(mapStateToProps, mapDispatchToProps)(View)
/**
 * Books selectors
 */

/**
 * Get Books Store entity
 */
const getBooks = ({ books }) => books

/**
 * Get all Books
 */
const getBooksSelector = createSelector(getBooks,
  books => books.allIds.map(id => books.byIds[id]))

/**
 * Authors selectors
 */

/**
 * Get Authors Store entity
 */
const getAuthors = ({ authors }) => authors

/**
 * Get all Authors
 */
const getAuthorsSelector = createSelector(getAuthors,
  authors => authors.allIds.map(id => authors.byIds[id]))

/**
 * Get array of Authors ids,
 * which have books in 'Health' category
 */
const getHealthAuthorsIdsSelector = createSelector([getAuthors, getBooks],
  (authors, books) => (
    authors.allIds.filter(id => {
      const author = authors.byIds[id]
      const filteredBooks = author.books.filter(bookId => (
        books.byIds[bookId].category === 'Health'
      ))
      return filteredBooks.length
    })
  ))

/**
 * Get array of Authors,
 * which have books in 'Health' category
 */
const getHealthAuthorsSelector = createSelector([getHealthAuthorsIdsSelector, getAuthors],
  (filteredIds, authors) => (
    filteredIds.map(id => authors.byIds[id])
  ))

/**
 * Get array of Authors, together with their Books,
 * which have books in 'Health' category
 */
const getHealthAuthorsWithBooksSelector = createSelector([getHealthAuthorsIdsSelector, getAuthors, getBooks],
  (filteredIds, authors, books) => (
    filteredIds.map(id => ({
      ...authors.byIds[id],
      books: authors.byIds[id].books.map(bookId => books.byIds[bookId])
    }))
  ))
// Combined Reducer
const reducers = combineReducers({
  books: booksReducer,
  authors: authorsReducer
})

// Store
const store = createStore(reducers)

const render = () => {
  ReactDOM.render(<Provider store={store}>
    <App />
  </Provider>, document.getElementById('root'))
}

render()
<div id="root"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.24/browser.js"></script>
<script src="https://npmcdn.com/reselect#3.0.1/dist/reselect.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/redux/3.3.1/redux.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-redux/4.4.6/react-redux.min.js"></script>
JSFiddle.

This reminds me of how I started one of my projects where the data was highly relational. You're still thinking too much about the backend way of doing things, but you've got to start thinking more in the JS way of doing things (a scary thought for some, to be sure).
1) Normalized Data in State
You've done a good job of normalizing your data, but really, it's only somewhat normalized. Why do I say that?
...
books: [1]
...
...
authorId: 1
...
You have the same conceptual data stored in two places. This can easily get out of sync. For example, let's say you receive new books from the server. If they all have an authorId of 1, you also have to modify the author itself and add those book ids to it! That's a lot of extra work that doesn't need to be done. And if it isn't done, the data will be out of sync.
One general rule of thumb with a Redux-style architecture is: never store (in the state) what you can compute. That includes this relation; it is easily computed from authorId.
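For example, here is a minimal sketch of deriving that relation on demand instead of storing it (getBooksByAuthorId is my own hypothetical helper, based on the state shape from the question):

// Derive an Author's Books from the books slice alone,
// using the authorId foreign key kept on each book.
const getBooksByAuthorId = (books, authorId) =>
  books.allIds
    .map(id => books.byIds[id])
    .filter(book => book.authorId === authorId);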
2) Denormalized Data in Selectors
We mentioned that keeping denormalized data in the state was not good. But denormalizing it in selectors is OK, right? Well, it is. But the question is: is it needed? I did the same thing you are doing now, getting the selector to basically act like a backend ORM. "I just want to be able to call author.books and get all the books!" you may be thinking. It would be so easy to just loop through author.books in your React component and render each book, right?
But do you really want to denormalize every piece of data in your state? React doesn't need that. In fact, it will also increase your memory usage. Why is that?
Because now you will have two copies of the same author, for instance:
const authors = [{
  id: 1,
  name: 'Jordan Enev',
  books: [1]
}];

and

const authors = [{
  id: 1,
  name: 'Jordan Enev',
  books: [{
    id: 1,
    name: 'Book 1',
    category: 'Programming',
    authorId: 1
  }]
}];
So getHealthAuthorsWithBooksSelector now creates a new object for each author, which will not be === to the one in the state.
This is not bad, but I would say it's not ideal. On top of the redundant (<- keyword) memory usage, it's better to have one single authoritative reference to each entity in your store. Right now, there are two entities for each author that are conceptually the same, but your program views them as totally different objects.
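To make that concrete, here is a hypothetical check, reusing the store and selectors from the question:

const state = store.getState();
const denormalizedAuthor = getHealthAuthorsWithBooksSelector(state)[0];
// Conceptually the same author, but a brand new object every time
// the selector recomputes, so reference equality fails:
console.log(denormalizedAuthor === state.authors.byIds[denormalizedAuthor.id]); // false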
So now when we look at your mapStateToProps:
const mapStateToProps = state => ({
  books: getBooksSelector(state),
  authors: getAuthorsSelector(state),
  healthAuthors: getHealthAuthorsSelector(state),
  healthAuthorsWithBooks: getHealthAuthorsWithBooksSelector(state)
});
You are basically providing the component with 3-4 different copies of all the same data.
Thinking About Solutions
First, before we get to making new selectors and making it all fast and fancy, let's just make up a naive solution.
const mapStateToProps = state => ({
  books: getBooksSelector(state),
  authors: getAuthors(state),
});
Ahh, the only data this component really needs: the books and the authors. Using the data therein, it can compute anything it needs.
Notice that I changed it from getAuthorsSelector to just getAuthors? This is because all the data we need for computing is in the books array, and we can just pull the authors by id once we have them!
Remember, we're not worrying about using selectors yet, let's just think about the problem in simple terms. So, inside the component, let's build an "index" of books by their author.
const { books, authors } = this.props;

const healthBooksByAuthor = books.reduce((indexedBooks, book) => {
  if (book.category === 'Health') {
    if (!(book.authorId in indexedBooks)) {
      indexedBooks[book.authorId] = [];
    }
    indexedBooks[book.authorId].push(book);
  }
  return indexedBooks;
}, {});
And how do we use it?
const healthyAuthorIds = Object.keys(healthBooksByAuthor);
...
healthyAuthorIds.map(authorId => {
  const author = authors.byIds[authorId];
  return (<li>{ author.name }
    <ul>
      { healthBooksByAuthor[authorId].map(book => <li>{ book.name }</li>) }
    </ul>
  </li>);
})
...
Etc etc.
But but but, you mentioned memory earlier; that's why we didn't denormalize stuff with getHealthAuthorsWithBooksSelector, right?
Correct! But in this case we aren't taking up memory with redundant information. In fact, every single entity, book and author alike, is just a reference to the original object in the store! This means that the only new memory being taken up is by the container arrays/objects themselves, not by the actual items in them.
I've found this kind of solution ideal for many use cases. Of course, I don't keep it in the component like above; I extract it into a reusable function which creates selectors based on certain criteria.
Although, I'll admit I haven't had a problem with the same complexity as yours, in that you have to filter a specific entity through another entity. Yikes! But it's still doable.
Let's extract our indexing logic into a reusable function:
const indexList = fieldsBy => list => {
  // so we don't have to create property keys inside the loop
  const indexedBase = fieldsBy.reduce((obj, field) => {
    obj[field] = {};
    return obj;
  }, {});

  return list.reduce(
    (indexedData, item) => {
      fieldsBy.forEach((field) => {
        const value = item[field];
        if (!(value in indexedData[field])) {
          indexedData[field][value] = [];
        }
        indexedData[field][value].push(item);
      });
      return indexedData;
    },
    indexedBase,
  );
};
Now this looks like kind of a monstrosity. But we must make certain parts of our code complex so that we can make many more parts clean. Clean how?
const getBooksIndexed = createSelector([getBooksSelector], indexList(['category', 'authorId']));

const getBooksIndexedInCategory = category => createSelector([getBooksIndexed],
  booksIndexedBy => indexList(['authorId'])(booksIndexedBy.category[category]));
// you can actually abstract this even more!
...
later that day
...
const mapStateToProps = state => ({
  booksIndexedBy: getBooksIndexedInCategory('Health')(state),
  authors: getAuthors(state),
});
...
const { booksIndexedBy, authors } = this.props;

const healthyAuthorIds = Object.keys(booksIndexedBy.authorId);
healthyAuthorIds.map(authorId => {
  const author = authors.byIds[authorId];
  return (<li>{ author.name }
    <ul>
      { booksIndexedBy.authorId[authorId].map(book => <li>{ book.name }</li>) }
    </ul>
  </li>);
})
...
This is not as easy to understand, of course, because it relies primarily on composing these functions and selectors to build representations of data, instead of renormalizing it.
The point is: we're not looking to recreate copies of the state with normalized data. We're trying to create indexed representations (read: references) of that state which are easily digested by components.
The indexing I've presented here is very reusable, but not without certain problems (I'll let everyone else figure those out). I don't expect you to use it, but I do expect you to learn this from it: rather than trying to coerce your selectors into giving you backend-like, ORM-like nested versions of your data, use the inherent ability to link your data with the tools you already have: ids and object references.
These principles can even be applied to your current selectors. Rather than creating a bunch of highly specialized selectors for every conceivable combination of data:
1) Create functions that create selectors for you based on certain parameters.
2) Create functions that can be used as the resultFunc of many different selectors.
(A sketch of both follows.)
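For example, a minimal sketch of both ideas combined (the helper names are mine, not from the question):

// 2) A result function shared by every selector the factory creates.
const filterAuthorsByBookCategory = category => (authors, books) =>
  authors.allIds
    .map(id => authors.byIds[id])
    .filter(author => author.books.some(bookId =>
      books.byIds[bookId].category === category));

// 1) A factory that creates a memoized selector per category.
const makeGetAuthorsByBookCategory = category =>
  createSelector([getAuthors, getBooks], filterAuthorsByBookCategory(category));

const getHealthyAuthors = makeGetAuthorsByBookCategory('Health');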
Indexing isn't for everyone, I'll let others suggest other methods.

Author of the question here!
One year later, I'm going to summarize my experience and thoughts.
I was looking into two possible approaches for handling the relational data:
1. Indexing
aaronofleonard already gave us a great and very detailed answer here, where his main concept is as follows:
We're not looking to recreate copies of the state with normalized data. We're trying to create indexed representations (read: references) of that state which are easily digested by components.
It fits the examples he mentions perfectly. But it's important to highlight that his examples create indexes only for one-to-many relations (one Author has many Books). So I started to think about how this approach would fit all my possible requirements:
Handling many-to-many cases. Example: one Book has many Authors, through BookStore.
Handling deep filtration. Example: get all the Books from the Health category, where at least one Author is from a specific Country. Now just imagine many more nested levels of entities.
Of course it's doable, but as you can see, things can get serious very soon.
If you feel comfortable managing such complexity with Indexing, then make sure you have enough design time for creating your selectors and composing indexing utilities. A sketch of the many-to-many case follows.
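Just to illustrate the point, here is a minimal sketch of indexing a many-to-many relation (the join entries and their shape are my assumption; they are not part of the demo app):

// Many-to-many: Book <-> Author through a BookStore-like join table,
// where each join entry holds the two foreign keys.
const joinEntries = [
  { id: 1, bookId: 1, authorId: 1 },
  { id: 2, bookId: 1, authorId: 2 }
];

// Index the join entries from both sides.
const authorIdsByBook = {};
const bookIdsByAuthor = {};
joinEntries.forEach(({ bookId, authorId }) => {
  (authorIdsByBook[bookId] = authorIdsByBook[bookId] || []).push(authorId);
  (bookIdsByAuthor[authorId] = bookIdsByAuthor[authorId] || []).push(bookId);
});
// Every extra relation means one more pair of indexes to build,
// memoize and compose in selectors.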
I continued searching for a solution, because creating such an Indexing utility looked totally out of scope for the project. It's more like creating a third-party library.
So I decided to give the Redux-ORM library a try.
2. Redux-ORM
A small, simple and immutable ORM to manage relational data in your Redux store.
Without being verbose, here's how I managed all the requirements, using just the library:
// Handling the many-to-many case.
// (Assuming `orm` is the app's redux-orm instance.)
const getBooks = createSelector(orm, ({ Book }) => {
  return Book.all().toModelArray()
    .map(book => ({
      book: book.ref,
      authors: book.authors.toRefArray()
    }))
})

// Handling deep filtration.
// Keep in mind that you can pass parameters here, instead of hardcoding the filtration criteria.
const getFilteredBooks = createSelector(orm, ({ Book }) => {
  return Book.all().toModelArray()
    .filter(book => {
      const authors = book.authors.toModelArray()
      const hasAuthorInCountry = authors.some(a => a.country.name === 'Bulgaria')
      return book.category.type === 'Health' && hasAuthorInCountry
    })
    .map(book => ({
      book: book.ref,
      authors: book.authors.toRefArray()
    }))
})
As you can see, the library handles all the relations for us, and we can easily access all the child entities and perform complex computations.
Also, using .ref we return a reference to the entity in the Store, instead of creating a new object copy (in case you're worried about the memory).
So, having this type of selectors, my flow is as follows:
Container components fetch the data via the API.
Selectors get only the needed slice of data.
Render the Presentational components.
However, nothing is as perfect as it sounds. Redux-ORM deals with relational operations such as querying and filtering in a very easy-to-use way. Cool!
But when we talk about selector reusability, composition, extending and so on, it becomes a tricky and awkward task. It's not so much a Redux-ORM problem as one of the reselect library itself and the way it works. Here we discussed the topic.
Conclusion (personal)
For simpler relational projects, I would give the Indexing approach a try.
Otherwise, I would stick with Redux-ORM, as I did in the App I asked this question about. There I have 70+ entities and still counting!

When you start "overloading" your selectors (like getHealthAuthorsSelector) with other named selectors (like getHealthAuthorsWithBooksSelector, ...), you might end up with something like getHealthAuthorsWithBooksWithRelatedBooksSelector, etc.
That is not sustainable. I suggest you stick to the high-level ones (i.e. getHealthAuthorsSelector) and use a mechanism so that their books, and the related books of those books, etc., are always available.
You can use TypeScript and turn author.books into a getter, or just work with convenience functions to get the books from the store whenever they are needed. With an action you can combine a get-from-store with a fetch-from-db to display (possibly) stale data directly, and have Redux/React take care of the visual update once the fresh data is retrieved from the database.
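For illustration, a rough sketch of the getter idea (my own code, assuming the question's state shape and a store in scope):

// Wrap an author so that `books` resolves lazily from the store
// on every access, instead of being denormalized up front.
const withBookGetter = (author, store) => ({
  ...author,
  get books() {
    const { books } = store.getState();
    return author.books.map(id => books.byIds[id]);
  }
});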
I hadn't heard of Reselect, but it seems like it might be a good way to have all sorts of filters in one place and avoid duplicating code in components.
Simple as they are, they are also easily testable. Testing business/domain logic is usually a (very?) good idea, especially when you are not a domain expert yourself.
Also keep in mind that joining multiple entities into something new is useful from time to time, for example flattening entities so they can be bound easily to a grid control, as sketched below.
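A minimal sketch of such a flattening selector, reusing getBooks and getAuthors from the question:

// Flatten Book + Author into rows a grid control can bind to directly.
const getBookRows = createSelector([getBooks, getAuthors],
  (books, authors) => books.allIds.map(id => {
    const book = books.byIds[id];
    const author = authors.byIds[book.authorId];
    return { ...book, authorName: author ? author.name : null };
  }));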

There is a library that solves relational selects: ngrx-entity-relationship.
A similar demo is available on CodeSandbox.
For the case with books and authors, it would look like this.
The following code needs to be defined only once:
// first we need proper state selectors, because of the custom field names
const bookState = stateKeys(getBooks, 'byIds', 'allIds');
const authorState = stateKeys(getAuthors, 'byIds', 'allIds');

// now let's define root and relationship selector factories
const book = rootEntitySelector(bookState);
const bookAuthor = relatedEntitySelector(
  authorState,
  'authorId',
  'author'
);

// the same for authors
const author = rootEntitySelector(authorState);
const authorBooks = relatedEntitySelector(
  bookState,
  'books', // I would rename it to `booksId`
  'booksEntities' // and would use `books` here
);
Now we can build selectors and reuse them as needed.
// now we can build a selector
const getBooksWithAuthors = rootEntities(
  book(
    bookAuthor(
      authorBooks(), // if we want to go crazy
    ),
  ),
);

// and connect it
const mapStateToProps = state => ({
  books: getBooksWithAuthors(state, [1, 2, 3]), // or a selector for ids
  // ...
});
The result would be:
this.props.books = [
  {
    id: 1,
    name: 'Book 1',
    category: 'Programming',
    authorId: 1,
    author: {
      id: 1,
      name: 'Jordan Enev',
      books: [1],
      booksEntities: [
        {
          id: 1,
          name: 'Book 1',
          category: 'Programming',
          authorId: 1,
        },
      ],
    },
  },
];

Related

Immutable JS - how to preserve Type and get immutability when converting a deeply nested JS object?

Working on a React, Redux + TypeScript project, I am trying to add Immutable JS to the stack.
I started by working on a large nested object that could really use the safety of an immutable data structure.
import { Record, fromJS } from "immutable";

const obj = {
  name: "werwr",
  overview: {
    seasons: {
      2017: [{ period: 1, rates: 2 }]
    }
  }
};

// -- Using fromJS
const objJS = fromJS(obj);
const nObj = objJS.getIn(["overview", "seasons", "2017"]);
console.log(nObj); // I get an immutable List, cool!

// -- Using Record, infer the type
const objRecord = Record(obj)();
const nRec = objRecord.getIn(["overview", "seasons", "2017"]);
console.log(nRec); // but I get a JS array

// -- Using both
const makeRec = Record(objJS);
const bothRecord = makeRec({ name: "name" });
console.log(bothRecord); // fails
Runnable code in codesandbox: https://codesandbox.io/s/naughty-panini-9bpgn?file=/src/index.ts
1. Using fromJS: the conversion works well and deep, but I lose all type information.
2. Using a Record: it keeps track of the type, but nested arrays are still mutable.
3. Passing the converted object into a Record and manually adding the type, but I ran into an error: Cannot read property 'get' of undefined.
What's the proper way to convert such an object to a fully immutable data structure without losing the type? Thanks!
You can use classes to construct deep structures.
interface IRole {
  name: string;
  related: IRole[];
}

const roleRecord = Record({
  name: '',
  related: List<Role>(),
});

class Role extends roleRecord {
  name: string;
  related: List<Role>;
  constructor(config: IRole) {
    super(Object.assign({}, config, {
      related: config.related && List(config.related.map(r => new Role(r))),
    }));
  }
}

const myRole = new Role({
  name: 'President',
  related: [
    { name: 'VP',
      related: [
        { name: 'AVP',
          related: [] }
      ] }
  ]
});
With this type of structure, myRole will be all nested Role classes.
NOTE: I will add a bit of caution: we have been using this structure in a production application for almost 4 years now (Angular, TypeScript, Redux), and I added ImmutableJS for safety from mutated actions and stores. If I had to do it over, the strict immutable store and actions that come with NGRX would be my choice. ImmutableJS is great at what it does, but the complexity it adds to the app is a trade-off (especially for onboarding new/greener coders).
Record is a factory for Record factories. As such, the argument should be an object template (aka default values), not actual data! (See the docs.)
const MyRecord = Record({
  name: "werwr",
  overview: null
});
const instance = MyRecord(somedata);
As you already noticed, the Record factory will not transform data to immutable. If you want that, you have to either do it manually with Maps and Lists, use fromJS, or use the constructor of records.
The last approach is a bit weird, because then your record factory suddenly becomes a class:
const SeasonRecord = Record({
  period: null, rates: null
})

class MyRecord extends Record({
  name: "default_name",
  seasons: Map()
}, 'MyRecord') {
  constructor(values = {}, name) {
    if (values.seasons) {
      // straightforward Map of seasons:
      // values = fromJS(values);
      // Map of sub-records
      values.seasons = Object.entries(values.seasons).reduce(
        (acc, [year, season]) => {
          acc[year] = SeasonRecord(season);
          return acc;
        }, {});
      values.seasons = Map(values.seasons);
    }
    super(values, name);
  }
}

const x = new MyRecord({
  seasons: {
    2017: { period: 1, rates: 2 }
  }
})

console.log('period of 2017', x.seasons.get('2017').period)
I strongly suggest not nesting objects unnecessarily (record -> overview -> season), as it makes everything more complicated (and if you use large amounts of records, it might impact performance).
My general recommendation for Records is to keep them as flat as possible. The nesting of records shown here allows property-access syntax instead of get, but it is too tedious most of the time. Simply doing fromJS() on the values of a record and then using getIn is easier, as sketched below.
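A minimal sketch of that flat approach (my own illustration, reusing the shapes from the question):

// Keep the record itself flat; store the nested part as plain
// immutable structures built with fromJS().
const FlatRecord = Record({ name: 'default_name', overview: Map() });
const flat = FlatRecord({
  overview: fromJS({ seasons: { 2017: [{ period: 1, rates: 2 }] } })
});
console.log(flat.overview.getIn(['seasons', '2017', 0, 'period'])); // 1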

Resolve Custom Types at the root in GraphQL

I feel like I'm missing something obvious. I have IDs stored as [String] that I want to be able to resolve to the full objects they represent.
Background
This is what I want to enable. The missing ingredient is the resolvers:
const bookstore = `
  type Author {
    id: ID!
    books: [Book]
  }
  type Book {
    id: ID!
    title: String
  }
  type Query {
    getAuthor(id: ID!): Author
  }
`;

const my_query = `
  query {
    getAuthor(id: 1) {
      books { # <-- should resolve bookIds to actual books I can query
        title
      }
    }
  }
`;
const REAL_AUTHOR_DATA = [
  {
    id: 1,
    books: ['a', 'b'],
  },
];

const REAL_BOOK_DATA = [
  {
    id: 'a',
    title: 'First Book',
  },
  {
    id: 'b',
    title: 'Second Book',
  },
];
Desired result
I want to be able to drop a [Book] in the SCHEMA anywhere a [String] exists in the DATA and have Books load themselves from those Strings. Something like this:
const resolve = {
  Book: id => fetchToJson(`/some/external/api/${id}`),
};
What I've Tried
This resolver does nothing; the console.log doesn't even get called:
const resolve = {
  Book(...args) {
    console.log(args);
  }
}
HOWEVER, this does get some results...
const resolve = {
  Book: {
    id(id) {
      console.log(id)
      return id;
    }
  }
}
Here the console.log does emit 'a' and 'b'. But I obviously can't scale that up to X number of fields; that'd be ridiculous.
What my team currently does is tackle it from the parent:
const resolve = {
  Author: {
    books: ({ books }) => books.map(id => fetchBookById(id)),
  }
}
This isn't ideal because maybe I have a type Publisher { books: [Book] }, or a type User { favoriteBooks: [Book] }, or a type Bookstore { newBooks: [Book] }. In each of these cases, the data under the hood is actually [String] and I do not want to have to repeat this code:
const resolve = {
  X: {
    books: ({ books }) => books.map(id => fetchBookById(id)),
  }
};
The fact that defining the Book.id resolver led to console.log actually firing makes me think this should be possible, but I'm not finding my answer anywhere online, and this seems like it'd be a pretty common use case, yet implementation details are nowhere to be found.
What I've Investigated
Schema Directives seem like overkill for what I want; I just want to be able to plug [Book] anywhere a [String] actually exists in the data without having to do [Book] #rest('/external/api') in every single place.
Schema Delegation. In my use case, making Books publicly queryable isn't really appropriate and just clutters my Public schema with unused Queries.
Thanks for reading this far. Hopefully there's a simple solution I'm overlooking. If not, then GQL why are you like this...
If it helps, you can think of it this way: types describe the kind of data returned in the response, while fields describe the actual value of the data. With this in mind, only a field can have a resolver (i.e. a function that tells it what value to resolve to). A resolver for a type doesn't make sense in GraphQL.
So, you can either:
1. Deal with the repetition. Even if you have ten different types that all have a books field that needs to be resolved the same way, it doesn't have to be a big deal. Obviously, in a production app you wouldn't be storing your data in a variable, and your code would potentially be more complex. However, the common logic can easily be extracted into a function that can be reused across multiple resolvers:
const mapIdsToBooks = ({ books }) => books.map(id => fetchBookById(id))

const resolvers = {
  Author: {
    books: mapIdsToBooks,
  },
  Library: {
    books: mapIdsToBooks,
  }
}
2. Fetch all the data at the root level instead. Rather than writing a separate resolver for the books field, you can return the author along with their books inside the getAuthor resolver:
function resolve(root, args) {
  const author = REAL_AUTHOR_DATA.find(row => row.id === args.id)
  if (!author) {
    return null
  }
  return {
    ...author,
    books: author.books.map(id => fetchBookById(id)),
  }
}
When dealing with databases, this is often the better approach anyway because it reduces the number of requests you make to the database. However, if you're wrapping an existing API (which is what it sounds like you're doing), you won't really gain anything by going this route.

How to merge and observe two collections in Firestore based on reference ID in documents?

I'm creating a StencilJS app (no framework) with a Google Firestore backend, and I want to use the RxFire and RxJS libraries as much as possible to simplify data access code. How can I combine into a single observable stream data coming from two different collections that use a reference ID?
There are several examples online that I've read through and tried, each one using a different combination of operators with a different level of nested complexity. https://www.learnrxjs.io/ seems like a good resource, but it does not provide line-of-business examples that make sense to me. This question is very similar, and maybe the only difference is some translation into using RxFire? Still looking at that. Just for comparison, in SQL this would be a SELECT statement with an INNER JOIN on the reference ID.
Specifically, I have a collection for Games:
{ id: "abc000001", name: "Billiards" },
{ id: "abc000002", name: "Croquet" },
...
and a collection for Game Sessions:
{ id: "xyz000001", userId: "usr000001", gameId: "abc000001", duration: 30 },
{ id: "xyz000002", userId: "usr000001", gameId: "abc000001", duration: 45 },
{ id: "xyz000003", userId: "usr000001", gameId: "abc000002", duration: 55 },
...
And I want to observe a merged collection of Game Sessions where gameId is essentially replaced with Game.name.
I currently have a game-sessions-service.ts with a function to get sessions for a particular user:
import { collectionData } from 'rxfire/firestore';
import { Observable } from 'rxjs';
import { GameSession } from '../interfaces';

observeUserGameSessions(userId: string): Observable<GameSession[]> {
  let collectionRef = this.db.collection('game-sessions');
  let query = collectionRef.where('userId', '==', userId);
  return collectionData(query, 'id');
}
And I've tried variations of things with pipe and mergeMap, but I don't understand how to make them all fit together properly. I would like to establish an interface GameSessionView to represent the merged data:
export interface GameSessionView {
  id: string,
  userId: string,
  gameName: string,
  duration: number
}

observeUserGameSessionViews(userId: string): Observable<GameSessionView> {
  this.observeUserGameSessions(userId)
    .pipe(
      mergeMap(sessions => {
        // What do I do here? Iterate over sessions
        // and embed other observables for each document?
      })
    )
}
Possibly, I'm just stuck in a normalized way of thinking, so I'm open to suggestions on better ways to manage the data. I just don't want too much duplication to keep synchronized.
You can use the following code (also available as a Stackblitz):
const games: Game[] = [...];
const gameSessions: GameSession[] = [...];

combineLatest(
  of(games),
  of(gameSessions)
).pipe(
  switchMap(results => {
    const [gamesRes, gameSessionsRes] = results;
    const gameSessionViews: GameSessionView[] = gameSessionsRes.map(gameSession => ({
      id: gameSession.id,
      userId: gameSession.userId,
      gameName: gamesRes.find(game => game.id === gameSession.gameId).name,
      duration: gameSession.duration
    }));
    return of(gameSessionViews);
  })
).subscribe(mergedData => console.log(mergedData));
Explanation:
With combineLatest you can combine the latest values from a number of Observables. It can be used if you have "multiple (..) observables that rely on each other for some calculation or determination".
So, assuming your lists of Games and GameSessions are Observables, you can combine the values of each list.
Within the switchMap you create new objects of type GameSessionView by iterating over your GameSessions, using the attributes id, userId and duration, and finding the value for gameName within the second list of Games by gameId. Mind that there is no error handling in this example.
As switchMap expects you to return another Observable, the merged list is returned with of(gameSessionViews).
Finally, you can subscribe to this process and see the expected result.
For sure this is not the only way you can do it, but I find it the simplest one.

Efficiently working with large data sets in Vue applications with Vuex

In my Vue application, I have Vuex store modules with large arrays of resource objects in their state. To easily access individual resources in those arrays, I make Vuex getter functions that map resources or lists of resources to various keys (e.g. 'id' or 'tags'). This leads to sluggish performance and a huge memory footprint. How do I get the same functionality and reactivity without so much duplicated data?
Store Module Example
export default {
  state: () => ({
    all: [
      { id: 1, tags: ['tag1', 'tag2'] },
      ...
    ],
    ...
  }),
  ...
  getters: {
    byId: (state) => {
      return state.all.reduce((map, item) => {
        map[item.id] = item
        return map
      }, {})
    },
    byTag: (state) => {
      return state.all.reduce((map, item) => {
        for (let i = 0; i < item.tags.length; i++) {
          map[item.tags[i]] = map[item.tags[i]] || []
          map[item.tags[i]].push(item)
        }
        return map
      }, {})
    },
  }
}
Component Example
export default {
  ...,
  data () {
    return {
      itemId: 1
    }
  },
  computed: {
    item () {
      return this.$store.getters['path/to/byId'][this.itemId]
    },
    relatedItems () {
      return this.item && this.item.tags.length
        ? this.$store.getters['path/to/byTag'][this.item.tags[0]]
        : []
    }
  }
}
To fix this problem, look to an old, standard practice in programming: indexing. Instead of storing a map with the full item values duplicated in the getter, you can store a map to the index of the item in state.all. Then, you can create a new getter that returns a function to access a single item. In my experience, the indexing getter functions always run faster than the old getter functions, and their output takes up a lot less space in memory (on average 80% less in my app).
New Store Module Example
export default {
  state: () => ({
    all: [
      { id: 1, tags: ['tag1', 'tag2'] },
      ...
    ],
    ...
  }),
  ...
  getters: {
    indexById: (state) => {
      return state.all.reduce((map, item, index) => {
        // Store the `index` instead of the `item`
        map[item.id] = index
        return map
      }, {})
    },
    byId: (state, getters) => (id) => {
      return state.all[getters.indexById[id]]
    },
    indexByTags: (state) => {
      return state.all.reduce((map, item, index) => {
        for (let i = 0; i < item.tags.length; i++) {
          map[item.tags[i]] = map[item.tags[i]] || []
          // Again, store the `index`, not the `item`
          map[item.tags[i]].push(index)
        }
        return map
      }, {})
    },
    byTag: (state, getters) => (tag) => {
      return (getters.indexByTags[tag] || []).map(index => state.all[index])
    }
  }
}
New Component Example
export default {
  ...,
  data () {
    return {
      itemId: 1
    }
  },
  computed: {
    item () {
      return this.$store.getters['path/to/byId'](this.itemId)
    },
    relatedItems () {
      return this.item && this.item.tags.length
        ? this.$store.getters['path/to/byTag'](this.item.tags[0])
        : []
    }
  }
}
The change seems small, but it makes a huge difference in terms of performance and memory efficiency. It is still fully reactive, just as before, but you're no longer duplicating all of the resource objects in memory. In my implementation, I abstracted out the various indexing methodologies and index expansion methodologies to make the code very maintainable.
You can check out a full proof of concept on github, here: https://github.com/aidangarza/vuex-indexed-getters
While I agree with @aidangarza, I think your biggest issue is the reactivity, specifically the computed properties. They add a lot of bloated logic and slow code that listens for everything, something you don't need.
Finding the related items will always mean looping through the whole list; there's no easy way around it. BUT it will be much faster if you call it yourself.
What I mean is that computed properties are about something that is going to be computed, whereas you are actually filtering your results. Put a watcher on your variables, and then call the getters yourself. Something along these lines (semi-code):
watch: {
  itemId() {
    this.item = this.$store.getters['path/to/byId'][this.itemId]
  }
}
You can test with item first, and if it works better (which I believe it will), add a watcher for the more complex tags.
Good luck!
While only storing select fields is a good intermediate option (per @aidangarza), it's still not viable when you end up with really huge sets of data. E.g. actively working with 2 million records of "just 2 fields" will still eat your memory and ruin browser performance.
In general, when working with large (or unpredictable) data sets in Vue (using Vuex), simply skip the get and commit mechanisms altogether. Keep using Vuex to centralize your CRUD operations (via Actions), but do not try to "cache" the results; rather, let each component cache what it needs for as long as it's using it (e.g. the current working page of the results, some projection thereof, etc.).
In my experience, Vuex caching is intended for reasonably bounded data, or bounded subsets of data in the current usage context (i.e. for the currently logged-in user). When you have data where you have no idea about its scale, keep its access on an "as needed" basis from your Vue components via Actions only; no getters or mutations for those potentially huge data sets. A sketch of this follows.
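A minimal sketch of the actions-only pattern (my own illustration; api.fetchItems is a hypothetical API helper):

// Store module: centralizes CRUD via an action but caches nothing in state.
export default {
  actions: {
    // The calling component awaits the result and keeps only
    // the page of data it actually needs, for as long as it needs it.
    async fetchItemsPage (context, { page, pageSize }) {
      const items = await api.fetchItems({ page, pageSize })
      return items
    }
  }
}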

Difference between two observables

Let's say I have two observables.
The first observable is an array of certain listings:
[
  { id: 'zzz', other props here... },
  { id: 'aaa', ... },
  { id: '007', ... }
  ... and more over time
]
The second observable is an array of ignored listings:
[
  { id: '007' }, // only id, no other props
  { id: 'zzz' }
  ... and more over time
]
The result should be a new observable of listings (first observable) but must not have any of the ignored listings:
[
  { id: 'aaa', other props here... }
  ... and more over time
]
This is what I have now before posting:
obs2.pipe(withLatestFrom(obs1, ? => ?, filter(?));
I didn't test it out, but I think it should be ok:
combineLatest(values$, excluded$).pipe(
  map(([values, excluded]) => {
    // put all the excluded IDs into a map for better perfs
    const excludedIds: Map<string, undefined> = excluded.reduce(
      (acc: Map<string, undefined>, item) => {
        acc.set(item.id, undefined)
        return acc;
      },
      new Map()
    );

    // filter the array, by looking up if the current
    // item.id is in the excluded list or not
    return values.filter(item => !excludedIds.has(item.id))
  })
)
Explanation:
Using combineLatest you'll always be notified, no matter which source the update comes from. If you used withLatestFrom as in your example, an update would be triggered only when the values$ observable is updated; if excluded$ changed, it wouldn't trigger an update in your case.
Then get all the excluded IDs into a Map instead of an array, as we'll need to know whether a given ID should be excluded or not. Looking into a map is wayyyyy faster than looking into an array.
Then just filter the values array.
If I'm understanding correctly, what you'll want to do is
Aggregate the incoming items over time
Aggregate the ids that are to be ignored over time
Finally, as both of the above streams emit over time, emit a resulting list of items that don't include the ignored ids.
Given the above, below is a rough example you could try. As noted towards the bottom, you'll get different results depending on the cadence of the first two streams because, well, thats's what happens with async. To show that, I'm simulating a random delay in the emission of things over time.
Hope this helps!
P.S.: The below is TypeScript, assuming rxjs@^6.
import { combineLatest, of, Observable } from "rxjs";
import { delay, map, scan, concatMap } from "rxjs/operators";

/**
 * Data sources
 */

// Just for showcase purposes... Simulates items emitted over time
const simulatedEmitOverTime = <T>() => (source: Observable<T>) =>
  source.pipe(
    concatMap(thing => of(thing).pipe(delay(Math.random() * 1000)))
  );

interface Thing {
  id: string;
}

// Stream of things over time
const thingsOverTime$ = of(
  { id: "zzz" },
  { id: "aaa" },
  { id: "007" }
).pipe(
  simulatedEmitOverTime()
);

// Stream of ignored things over time
const ignoredThingsOverTime$ = of(
  { id: "007" },
  { id: "zzz" }
).pipe(
  simulatedEmitOverTime()
);

/**
 * Somewhere in your app
 */

// Aggregate incoming things
// `scan` takes a reducer-type function
const aggregatedThings$ = thingsOverTime$.pipe(
  scan(
    (aggregatedThings: Thing[], incomingThing: Thing) =>
      aggregatedThings.concat(incomingThing),
    []
  )
);

// Create a Set from incoming ignored thing ids
// A Set will allow for easy filtering over time
const ignoredIds$ = ignoredThingsOverTime$.pipe(
  scan(
    (excludedIdSet, incomingThing: Thing) =>
      excludedIdSet.add(incomingThing.id),
    new Set<string>()
  )
);

// Combine streams and then filter out ignored ids
const sanitizedThings$ = combineLatest(aggregatedThings$, ignoredIds$)
  .pipe(
    map(([things, ignored]) => things.filter(({ id }) => !ignored.has(id)))
  );

// Subscribe where needed
// Note: End result will vary depending on the timing of items coming in
// over time (which is being simulated here-ish)
sanitizedThings$.subscribe(console.log);
