Queuing Actions in Redux - javascript

I've currently got a situation whereby I need Redux actions to be run consecutively. I've taken a look at various middleware, such as redux-promise, which seem fine if you know what the successive actions are at the point the root (for lack of a better term) action is triggered.
Essentially, I'd like to maintain a queue of actions that can be added to at any point. Each object has an instance of this queue in its state, and dependent actions can be enqueued, processed and dequeued accordingly. I have an implementation, but in doing so I'm accessing state in my action creators, which feels like an anti-pattern.
I'll try to give some context on the use case and implementation.
Use Case
Suppose you want to create some lists and persist them on a server. On list creation, the server responds with an id for that list, which is used in subsequent API end points pertaining to the list:
http://my.api.com/v1.0/lists/ // POST returns some id
http://my.api.com/v1.0/lists/<id>/items // API end points include id
Imagine that the client wants to perform optimistic updates against these API endpoints, to enhance UX - nobody likes looking at spinners. So when you create a list, your new list instantly appears, with an option to add items:
+-------------+----------+
| List Name | Actions |
+-------------+----------+
| My New List | Add Item |
+-------------+----------+
Suppose that someone attempts to add an item before the response from the initial create call has made it back. The items API is dependent on the id, so we know we can't call it until we have that data. However, we might want to optimistically show the new item and enqueue a call to the items API so that it triggers once the create call is done.
A Potential Solution
The method I'm using to get around this currently is by giving each list an action queue - that is, a list of Redux actions that will be triggered in succession.
The reducer functionality for a list creation might look something like this:
case ADD_LIST:
    return {
        id: undefined, // To be filled on server response
        name: action.payload.name,
        actionQueue: []
    }
Then, in an action creator, we'd enqueue an action instead of directly triggering it:
export const createListItem = (name) => {
    return (dispatch) => {
        dispatch(addList(name)); // Optimistic action
        dispatch(enqueueListAction(name, backendCreateListAction(name)));
    }
}
For brevity, assume the backendCreateListAction function calls a fetch API, which dispatches messages to dequeue from the list on success/failure.
The Problem
What worries me here is the implementation of the enqueueListAction method. This is where I'm accessing state to govern the advancement of the queue. It looks something like this (ignore the matching on name - this actually uses a clientId in reality, but I'm trying to keep the example simple):
const enqueueListAction = (name, asyncAction) => {
    return (dispatch, getState) => {
        const state = getState();

        dispatch(enqueue(name, asyncAction));

        const thisList = state.lists.find((l) => {
            return l.name === name;
        });

        // If there's nothing in the queue then process immediately
        if (thisList.actionQueue.length === 0) {
            asyncAction(dispatch);
        }
    }
}
Here, assume that the enqueue method returns a plain action that inserts an async action into the list's actionQueue.
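For completeness, the plain enqueue/dequeue action creators and the matching reducer cases might look roughly like the sketch below. The action type names and the name-based matching are assumptions that just follow the question's conventions; the reducer here is assumed to manage the array of lists.

const enqueue = (name, asyncAction) => ({
    type: 'ENQUEUE_LIST_ACTION',
    payload: { name, asyncAction }
});

const dequeue = (name) => ({
    type: 'DEQUEUE_LIST_ACTION',
    payload: { name }
});

// In the reducer that manages the array of lists:
case 'ENQUEUE_LIST_ACTION':
    return state.map((list) =>
        list.name === action.payload.name
            ? { ...list, actionQueue: [...list.actionQueue, { asyncAction: action.payload.asyncAction }] }
            : list
    );
case 'DEQUEUE_LIST_ACTION':
    return state.map((list) =>
        list.name === action.payload.name
            ? { ...list, actionQueue: list.actionQueue.slice(1) }
            : list
    );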
The whole thing feels a bit against the grain, but I'm not sure if there's another way to go with it. Additionally, since I need to dispatch in my asyncActions, I need to pass the dispatch method down to them.
There is similar code in the method to dequeue from the list, which triggers the next action should one exist:
const dequeueListAction = (name) => {
    return (dispatch, getState) => {
        dispatch(dequeue(name));

        const state = getState();

        const thisList = state.lists.find((l) => {
            return l.name === name;
        });

        // Process next action if one exists.
        if (thisList.actionQueue.length > 0) {
            thisList.actionQueue[0].asyncAction(dispatch);
        }
    }
}
Generally speaking, I can live with this, but I'm concerned that it's an anti-pattern and there might be a more concise, idiomatic way of doing this in Redux.
Any help is appreciated.

I have the perfect tool for what you are looking for. When you need a lot of control over Redux (especially around anything asynchronous) and you need Redux actions to happen sequentially, there is no better tool than redux-saga. It is built on top of ES6 generators, giving you a lot of control since you can, in a sense, pause your code at certain points.
The action queue you describe is what is called a saga. Since sagas are made to work with Redux, they can be triggered to run by dispatching from your components.
Since sagas use generators, you can also ensure that your dispatches occur in a specific order and only happen under certain conditions. Here is an example from the documentation; I'll walk through it to illustrate what I mean:
import { take, call } from 'redux-saga/effects'

function* loginFlow() {
    while (true) {
        const { user, password } = yield take('LOGIN_REQUEST')
        const token = yield call(authorize, user, password)
        if (token) {
            yield call(Api.storeItem, { token })
            yield take('LOGOUT')
            yield call(Api.clearItem, 'token')
        }
    }
}
Alright, it looks a little confusing at first, but this saga defines the exact order in which a login sequence needs to happen. The infinite loop is fine because of the nature of generators: when execution reaches a yield it stops at that line and waits, and it will not continue until the middleware tells it to. So look where it says yield take('LOGIN_REQUEST'): the saga will yield, or wait, at this point until you dispatch LOGIN_REQUEST, after which the saga calls the authorize method and runs until the next yield. The next line is a blocking yield call(Api.storeItem, {token}), so it will not move on until that call resolves.
Now, this is where the magic happens. The saga will stop again at yield take('LOGOUT') until you dispatch LOGOUT in your application. This is crucial, since if you were to dispatch LOGIN_REQUEST again before LOGOUT, the login process would not be invoked. Once you dispatch LOGOUT, it loops back to the first yield and waits for the application to dispatch LOGIN_REQUEST again.
redux-saga is, by far, one of my favorite tools to use with Redux. It gives you a great deal of control over your application, and anyone reading your code will thank you since everything now reads one line at a time.
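For reference, wiring a saga into the store takes only a little setup. Here's a minimal sketch; the ./reducers and ./sagas paths are assumptions about your project layout:

import { createStore, applyMiddleware } from 'redux'
import createSagaMiddleware from 'redux-saga'
import rootReducer from './reducers' // your root reducer
import { loginFlow } from './sagas'  // the generator above, assuming it is exported

const sagaMiddleware = createSagaMiddleware()
const store = createStore(rootReducer, applyMiddleware(sagaMiddleware))
sagaMiddleware.run(loginFlow)

// Components (or anything with access to the store) then dispatch plain actions:
store.dispatch({ type: 'LOGIN_REQUEST', user: 'alice', password: 'secret' })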

Have a look at this: https://github.com/gaearon/redux-thunk
The id alone shouldn't go through the reducer. In your action creator (thunk), fetch the list id first, and then (in a .then()) perform a second call to add the item to the list. After this, you can dispatch different actions based on whether or not the addition was successful.
You can dispatch multiple actions while doing this, to report when the server interaction has started and finished. This will allow you to show a message or a spinner, in case the operation is heavy and might take a while.
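A rough sketch of such a thunk, where the action creators and api helpers are illustrative names rather than anything redux-thunk provides:

const createListAndAddItem = (listName, itemData) => {
    return (dispatch) => {
        dispatch(createListStarted(listName)); // e.g. show a spinner or message

        return api.createList({ name: listName })                  // POST /lists, returns { id, ... }
            .then((list) => api.createListItem(list.id, itemData)) // POST /lists/<id>/items
            .then(
                (item) => dispatch(createListItemSucceeded(item)),
                (error) => dispatch(createListItemFailed(error))
            );
    };
};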
A more in-depth analysis can be found here: http://redux.js.org/docs/advanced/AsyncActions.html
All credit to Dan Abramov

I was facing a similar problem to yours. I needed a queue to guarantee that optimistic actions were committed, or eventually committed (in case of network problems), to the remote server in the same sequential order they were created, or rolled back if that wasn't possible. I found that Redux alone falls short here, basically because I believe it was not designed for this, and doing it with promises alone can be a really hard problem to reason about, besides the fact that you need to manage your queue state somehow... IMHO.
I think #Pcriulan's suggestion to use redux-saga was a good one. At first sight, redux-saga doesn't seem to provide anything to help you here, until you get to channels. Channels open a door to dealing with concurrency the way other languages do, CSP specifically (see Go or Clojure's core.async, for example), thanks to JS generators. There are even questions about why it is named after the Saga pattern and not CSP, haha... anyway.
So here is how a saga could help you with your queue:
import { actionChannel, take, call, put } from 'redux-saga/effects'
import { delay } from 'redux-saga' // delay helper (its location varies across redux-saga versions)

// ConflictError / ConnectionError are assumed custom error types thrown by the async actions

export default function* watchRequests() {
    while (true) {
        // 1- Create a channel for request actions
        const requestChan = yield actionChannel('ASYNC_ACTION');

        let resetChannel = false;
        while (!resetChannel) {
            // 2- take from the channel
            const action = yield take(requestChan);

            // 3- Note that we're using a blocking call
            resetChannel = yield call(handleRequest, action);
        }
    }
}

function* handleRequest({ asyncAction, payload }) {
    while (true) {
        try {
            // Perform action
            yield call(asyncAction, payload);
            return false;
        } catch (e) {
            if (e instanceof ConflictError) {
                // Could be a rollback or syncing again with server?
                yield put({ type: 'ROLLBACK', payload });

                // Store is out of consistency so
                // don't let waiting actions come through
                return true;
            } else if (e instanceof ConnectionError) {
                // try again
                yield call(delay, 2000);
            }
        }
    }
}
So the interesting part here is how the channel acts as a buffer (a queue) which keeps "listening" for incoming actions but won't proceed with future actions until it finishes with the current one. You might need to go over the documentation to grasp the code better, but I think it's worth it. The channel-resetting part may or may not fit your needs.
Hope it helps!

This is how I would tackle this problem:
Make sure each local list has a unique identifier. I'm not talking about the backend id here. The name is probably not enough to identify a list: an "optimistic" list not yet persisted should be uniquely identifiable, and a user may try to create two lists with the same name, even if that's an edge case.
On list creation, add a promise of the backend id to a cache:
CreatedListIdPromiseCache[localListId] = createBackendList({...}).then(list => list.id);
On item add, try to get the backend id from the Redux store. If it does not exist, try to get it from CreatedListIdPromiseCache. The returned id must be wrapped in a promise, because CreatedListIdPromiseCache holds promises.
const getListIdPromise = (localListId, state) => {
    // Get id from already created list
    if (state.lists[localListId]) {
        return Promise.resolve(state.lists[localListId].id)
    }
    // Get id from pending list creations
    else if (CreatedListIdPromiseCache[localListId]) {
        return CreatedListIdPromiseCache[localListId];
    }
    // Unexpected error
    else {
        return Promise.reject(new Error("Unable to find backend list id for list with local id = " + localListId));
    }
}
Use this method in your addItem, so that the item creation is delayed automatically until the backend id is available:
// Create item, but do not attempt creation until we are sure to get a backend id
const backendListItemPromise = getListIdPromise(localListId, reduxState).then(backendListId => {
    return createBackendListItem(backendListId, itemData);
})

// Provide user optimistic feedback even if the item is not yet added to the list
dispatch(addListItemOptimistic());

backendListItemPromise.then(
    backendListItem => dispatch(addListItemCommit()),
    error => dispatch(addListItemRollback())
);
You may want to clean the CreatedListIdPromiseCache, but it's probably not very important for most apps unless you have very strict memory usage requirements.
Another option would be to compute the backend id on the frontend, with something like a UUID. Your backend then just needs to verify the uniqueness of this id. That way you always have a valid backend id for all optimistically created lists, even if the backend hasn't replied yet.
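A minimal sketch of that variant, assuming the uuid package and helpers in the spirit of the question's addList/backendCreateListAction:

import { v4 as uuidv4 } from 'uuid';

const createList = (name) => {
    return (dispatch) => {
        const id = uuidv4();                    // client-generated id the backend will accept as-is
        dispatch(addList({ id, name }));        // optimistic insert; the id is already final
        return backendCreateList({ id, name }); // POST; items can reference `id` immediately
    };
};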

You don't have to deal with queuing actions. It will hide the data flow and it will make your app more tedious to debug.
I suggest using temporary ids when creating a list or an item, and then updating those ids when you actually receive the real ones from the server.
Something like this, maybe? (Not tested, but you get the idea.)
EDIT: I didn't understand at first that the items need to be saved automatically when the list is saved. I've edited the createList action creator.
/* REDUCERS & ACTIONS */

import { combineReducers } from 'redux'
import { uniqueId } from 'lodash' // or any other temporary-id generator
// this "thunk" action creator is responsible for :
// - creating the temporary list item in the store with some
// generated unique id
// - dispatching the action to tell the store that a temporary list
// has been created (optimistic update)
// - triggering a POST request to save the list in the database
// - dispatching an action to tell the store the list is correctly
// saved
// - triggering a POST request for saving items related to the old
// list id and triggering the corresponding receiveCreatedItem
// action
const createList = (name) => {
    const tempList = {
        id: uniqueId(),
        name
    }
    return (dispatch, getState) => {
        dispatch(tempListCreated(tempList))
        FakeListAPI
            .post(tempList)
            .then(list => {
                dispatch(receiveCreatedList(tempList.id, list))
                // when the list is saved we can now safely
                // save the related items since the API
                // certainly needs a real list ID to correctly
                // save an item
                const itemsState = getState().items
                const itemsToSave = itemsState.ids
                    .map(id => itemsState.byId[id])
                    .filter(item => item.listId === tempList.id)
                for (let tempItem of itemsToSave) {
                    FakeListItemAPI
                        .post(tempItem)
                        .then(item => dispatch(receiveCreatedItem(tempItem.id, item)))
                }
            })
    }
}
const tempListCreated = (list) => ({
    type: 'TEMP_LIST_CREATED',
    payload: {
        list
    }
})

const receiveCreatedList = (oldId, list) => ({
    type: 'RECEIVE_CREATED_LIST',
    payload: {
        list
    },
    meta: {
        oldId
    }
})

const createItem = (name, listId) => {
    const tempItem = {
        id: uniqueId(),
        name,
        listId
    }
    return (dispatch) => {
        dispatch(tempItemCreated(tempItem))
    }
}

const tempItemCreated = (item) => ({
    type: 'TEMP_ITEM_CREATED',
    payload: {
        item
    }
})

const receiveCreatedItem = (oldId, item) => ({
    type: 'RECEIVE_CREATED_ITEM',
    payload: {
        item
    },
    meta: {
        oldId
    }
})
/* given this state shape:
state = {
    lists: {
        ids: [ 'list1ID', 'list2ID' ],
        byId: {
            'list1ID': {
                id: 'list1ID',
                name: 'list1'
            },
            'list2ID': {
                id: 'list2ID',
                name: 'list2'
            },
        }
        ...
    },
    items: {
        ids: [ 'item1ID', 'item2ID' ],
        byId: {
            'item1ID': {
                id: 'item1ID',
                name: 'item1',
                listId: 'list1ID'
            },
            'item2ID': {
                id: 'item2ID',
                name: 'item2',
                listId: 'list2ID'
            }
        }
    }
}
*/
// Here I'm using an immediately invoked function just
// to isolate the ids and byId variables and avoid duplicate
// declaration issues, since we need them for both
// the lists and items reducers
const lists = (() => {
    const ids = (ids = [], action = {}) => {
        switch (action.type) {
            // when receiving the temporary list
            // we need to add the temporary id
            // in the ids list
            case 'TEMP_LIST_CREATED':
                return [...ids, action.payload.list.id]
            // when receiving the real list
            // we need to remove the old temporary id
            // and add the real id instead
            case 'RECEIVE_CREATED_LIST':
                return ids
                    .filter(id => id !== action.meta.oldId)
                    .concat([action.payload.list.id])
            default:
                return ids
        }
    }

    const byId = (byId = {}, action = {}) => {
        switch (action.type) {
            // same as above, when the temp list
            // gets created we store it indexed by
            // its temp id
            case 'TEMP_LIST_CREATED':
                return {
                    ...byId,
                    [action.payload.list.id]: action.payload.list
                }
            // when we receive the real list we first
            // need to remove the old one before
            // adding the real list
            case 'RECEIVE_CREATED_LIST': {
                const {
                    [action.meta.oldId]: oldList,
                    ...otherLists
                } = byId
                return {
                    ...otherLists,
                    [action.payload.list.id]: action.payload.list
                }
            }
            default:
                return byId
        }
    }

    return combineReducers({
        ids,
        byId
    })
})()
const items = (() => {
    const ids = (ids = [], action = {}) => {
        switch (action.type) {
            case 'TEMP_ITEM_CREATED':
                return [...ids, action.payload.item.id]
            case 'RECEIVE_CREATED_ITEM':
                return ids
                    .filter(id => id !== action.meta.oldId)
                    .concat([action.payload.item.id])
            default:
                return ids
        }
    }

    const byId = (byId = {}, action = {}) => {
        switch (action.type) {
            case 'TEMP_ITEM_CREATED':
                return {
                    ...byId,
                    [action.payload.item.id]: action.payload.item
                }
            case 'RECEIVE_CREATED_ITEM': {
                const {
                    [action.meta.oldId]: oldItem,
                    ...otherItems
                } = byId
                return {
                    ...otherItems,
                    [action.payload.item.id]: action.payload.item
                }
            }
            // when we receive a real list
            // we need to reassign all
            // the items that refer to
            // the old listId to the new one
            case 'RECEIVE_CREATED_LIST': {
                const oldListId = action.meta.oldId
                const newListId = action.payload.list.id
                const _byId = {}
                for (let id of Object.keys(byId)) {
                    let item = byId[id]
                    _byId[id] = {
                        ...item,
                        listId: item.listId === oldListId ? newListId : item.listId
                    }
                }
                return _byId
            }
            default:
                return byId
        }
    }

    return combineReducers({
        ids,
        byId
    })
})()
const reducer = combineReducers({
    lists,
    items
})
/* REDUCERS & ACTIONS */

Related

Can I use condition in my action reducer?

Basically, in our case, we need to either get an alerts list that shows the first few items (when mounting it in the DOM for the first time) or show the initial list plus the next page (when clicking a load-more button).
Hence we needed this condition in our GET_ALERTS case:
case "GET_ALERTS":
if (action.initialList) {
newState.list = [...newState.list, action.res.data.list];
} else {
newState.list = newState.list.concat(
action.res.data.list
);
}
And when we call the action reducer in our Alerts component, we need to indicate whether initialList is true or false.
E.g.
componentDidMount() {
    this.props.getAlerts(pageNum, true);
}

markAllAsRead() {
    // other code calling api to mark all as read
    this.props.getAlerts(pageNum, false);
}

readMore() {
    // other code that increases pageNum state counter
    this.props.getAlerts(pageNum, true);
}
Anyway in such a case, is it fine to use conditional statement in the reducer?
I am against this idea. The reducer has a single responsibility: update Redux state according to the action.
Here are three ways to solve this:
Easy way - initialize your list in Redux state to an empty list
If you initialize the list in state to an empty list ([]), things get much simpler.
You can basically just change your reducer to this:
case "GET_ALERTS":
return {...state, list: [...state.list, action.res.data.list]
This will make sure that whether you get the initial list or more items to add, they will be appended. No need to add any logic - which is awesome IMHO.
redux-thunk and separating type into two different types
create two actions: GET_INIT_ALERTS and GET_MORE_ALERTS.
switch (action.type) {
    case "GET_INIT_ALERTS":
        return { ...state, list: action.res.data.list }
    case "GET_MORE_ALERTS":
        return { ...state, list: [...state.list, ...action.res.data.list] }
    case "CHECK_READ_ALERTS":
        return { ...state, read: [...state.read, ...action.res.data.list] }
}
In the component I will have:
componentDidMount() {
    this.props.getInitAlerts();
}

markAllAsRead() {
    // other code calling api to mark all as read
    this.props.getAlerts(pageNum, false);
}

readMore() {
    // other code that increases pageNum state counter
    this.props.getAlerts(pageNum);
}
In alerts action with the help of redux-thunk:
export const getAlerts = (pageNum: number) => (dispatch) => {
    return apiAction(`/alerts/${pageNum}`, 'GET').then(res => dispatch({ type: "GET_MORE_ALERTS", res }));
}

export const getInitAlerts = () => (dispatch) => {
    return apiAction('/alerts/1', 'GET').then(res => dispatch({ type: "GET_INIT_ALERTS", res }));
}
I guess you update pageNum after readMore or componentDidMount. Of course you can save that state in Redux and map it back to props and just increment it when calling the getAlerts action.
write your own middleware
Another way to do this is to write an ad-hoc/feature middleware to concat new data to a list.
const concatLists = store => next => action => {
    let newAction = action
    if (action.type.includes("GET") && action.initialList) {
        newAction = { ...action, concatList: action.res.data.list }
    } else if (action.type.includes("GET")) {
        const state = store.getState()
        newAction = { ...action, concatList: [...state[action.key].list, ...action.res.data.list] }
    }
    return next(newAction)
}
And change your reducer to simply push concatList to the state:
case "GET_ALERTS":
return {...state, list: action.concatList}
In addition, you will have to change your action to include a key (in this case set to "alert", or whatever key you store the alert state under in Redux) and initialList, to determine whether to concat or not.
BTW, it's a good practice to put these two under the meta key.
{
    type: "GET_ALERT",
    meta: {
        initialList: true,
        key: "alert",
    },
    res: {...}
}
I hope this helps.
I would suggest having the following set of actions (a sketch of the thunks behind them follows the store structure below):
ALERTS/INIT - loads the initial list
ALERTS/LOAD_MORE - loads the next page and then increments currentPage, so the next call knows how many pages are loaded
ALERTS/MARK_ALL_AS_READ - does the server call and reinitializes the list
The store structure
{
    list: [],
    currentPage: 0
}
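A rough sketch of thunks behind those three actions, reusing an apiAction helper like the one above (the read-all endpoint and the exact payload shapes are assumptions):

export const initAlerts = () => (dispatch) => {
    return apiAction('/alerts/1', 'GET')
        .then(res => dispatch({ type: 'ALERTS/INIT', res }));
};

export const loadMore = () => (dispatch, getState) => {
    const nextPage = getState().currentPage + 1;
    return apiAction(`/alerts/${nextPage}`, 'GET')
        .then(res => dispatch({ type: 'ALERTS/LOAD_MORE', res, page: nextPage }));
};

export const markAllAsRead = () => (dispatch) => {
    return apiAction('/alerts/read-all', 'POST') // endpoint name is an assumption
        .then(() => dispatch(initAlerts()));     // reinitialize the list afterwards
};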
And the component code should not track pageNum:
componentDidMount() {
    this.props.initAlerts();
}

markAllAsRead() {
    this.props.markAllAsRead();
}

readMore() {
    this.props.loadMore();
}

Updating/ Pushing new data into NgRedux state

I am new to Angular and am writing a service which I will be using to add new addresses (posting to a REST API).
The saveAddress method call returns the newly created address object from the server.
I want to push it into the already existing array of addresses in the store. I am trying to do something like:
saveAddress(payload) {
    this.httpClient.post(this.endpoint, payload).subscribe(
        response => {
            this.ngRedux.select(s => s.addresses).subscribe(addresses => {
                let data = addresses.data.push(response)
                this.ngRedux.dispatch({ type: ADD_ADDRESS_SUCCESS, payload: data })
            })
        },
        err => {
            this.ngRedux.dispatch({ type: ADD_ADDRESS_ERROR })
        }
    )
}
How may I do it properly?
You must never alter the store (state) anywhere other than in a reducer. The reducer receives an action (ADD_ADDRESS_SUCCESS) with its payload ({ address: {...} }) and then updates the store with that information:
reducer(state = initialState, action) {
    switch (action.type) {
        case ADD_ADDRESS_SUCCESS:
            return Object.assign({}, state, {
                addresses: state.addresses.concat(action.payload.address)
            })
        default:
            return state
    }
}
Note that we always make a copy of the state and do not mutate it (which is why concat is used rather than push).
To really understand it, please read the documentation for #angular-redux/store.
For API calls you should use Epics: think of an Epic as a pipeline that transforms an action into one or more other actions while handling a side effect. In your case the epic reacts to ADD_ADDRESS_REQUEST, makes an API call with the payload and then transforms the action into either ADD_ADDRESS_SUCCESS or ADD_ADDRESS_ERROR, depending on the result of the API call. It never updates the state itself but delegates that to the reducer handling ADD_ADDRESS_SUCCESS and ADD_ADDRESS_ERROR respectively.
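A minimal sketch of such an epic, in the redux-observable style that #angular-redux/store builds on (addressService is an assumed injected service that returns an Observable, and the action shapes are illustrative):

import { ofType } from 'redux-observable';
import { of } from 'rxjs';
import { mergeMap, map, catchError } from 'rxjs/operators';

export const addAddressEpic = (action$) =>
    action$.pipe(
        ofType('ADD_ADDRESS_REQUEST'),
        mergeMap(action =>
            addressService.save(action.payload).pipe( // POST to the REST API
                map(address => ({ type: 'ADD_ADDRESS_SUCCESS', payload: { address } })),
                catchError(() => of({ type: 'ADD_ADDRESS_ERROR' }))
            )
        )
    );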
Read more about epics in the corresponding #angular-redux/store docs.

Reselect - selector that invokes another selector?

I have a selector:
const someSelector = createSelector(
    getUserIdsSelector,
    (ids) => ids.map((id) => yetAnotherSelector(store, id)),
    //                       ^^^^^^^^^^^^^^^^^ (yetAnotherSelector expects 2 args)
);
That yetAnotherSelector is another selector that takes a user id and returns some data.
However, since this is createSelector, I don't have access to the store inside it (and I don't want to make it a plain function, because the memoization wouldn't work then).
Is there a way to access store somehow inside createSelector? Or is there any other way to deal with it?
EDIT:
I have a function:
const someFunc = (store, id) => {
    const data = userSelector(store, id);
    //           ^^^^^^^^^^^^ global selector
    return data.map((user) => extendUserDataSelector(store, user));
    //                        ^^^^^^^^^^^^^^^^^^^^^^ selector
}
Such a function is killing my app, causing everything to re-render, and driving me nuts. Help appreciated.
!! However:
I have done some basic, custom memoization:
import { isEqual } from 'lodash';

const memoizer = {};

const someFunc = (store, id) => {
    const data = userSelector(store, id);
    if (id in memoizer && isEqual(data, memoizer[id])) {
        return memoizer[id];
    }
    memoizer[id] = data;
    return memoizer[id].map((user) => extendUserDataSelector(store, user));
}
And it does the trick, but isn't it just a workaround?
For Your someFunc Case
For your specific case, I would create a selector that itself returns an extender.
That is, for this:
const someFunc = (store, id) => {
    const data = userSelector(store, id);
    //           ^^^^^^^^^^^^ global selector
    return data.map((user) => extendUserDataSelector(store, user));
    //                        ^^^^^^^^^^^^^^^^^^^^^^ selector
}
I would write:
const extendUserDataSelectorSelector = createSelector(
    selectStuffThatExtendUserDataSelectorNeeds,
    (state) => state.something.else.it.needs,
    (stuff, somethingElse) =>
        // This function will be cached as long as
        // the results of the above two selectors
        // do not change, same as with any other cached value.
        (user) => {
            // your magic goes here.
            return {
                // ... user with stuff and somethingElse
            };
        }
);
Then someFunc would become:
const someFunc = createSelector(
    userSelector,
    extendUserDataSelectorSelector,
    // I prefix injected functions with a $.
    // It's not really necessary.
    (data, $extendUserDataSelector) =>
        data.map($extendUserDataSelector)
);
I call it the reifier pattern because it creates a function that is pre-bound to the current state and which accepts a single input and reifies it. I usually used it with getting things by id, hence the use of "reify". I also like saying "reify", which is honestly the main reason I call it that.
For your However Case
In this case:
import { isEqual } from 'lodash';

const memoizer = {};

const someFunc = (store, id) => {
    const data = userSelector(store, id);
    if (id in memoizer && isEqual(data, memoizer[id])) {
        return memoizer[id];
    }
    memoizer[id] = data;
    return memoizer[id].map((user) => extendUserDataSelector(store, user));
}
That's basically what re-reselect does. You may wish to consider that if you plan on implementing per-id memoization at the global level.
import createCachedSelector from 're-reselect';

const someFunc = createCachedSelector(
    userSelector,
    extendUserDataSelectorSelector,
    (data, $extendUserDataSelector) =>
        data.map($extendUserDataSelector)
// NOTE THIS PART DOWN HERE!
// This is how re-reselect gets the cache key.
)((state, id) => id);
Or you can just wrap up your memoized-multi-selector-creator with a bow and call it createCachedSelector, since it's basically the same thing.
Edit: Why Returning Functions
Another way you can do this is to just select all the appropriate data needed to run the extendUserDataSelector calculation, but this means exposing every other function that wants to use that calculation to its interface. By returning a function that accepts just a single user base-datum, you can keep the other selectors' interfaces clean.
Edit: Regarding Collections
One thing the above implementation is currently vulnerable to is if extendUserDataSelectorSelector's output changes because its own dependency-selectors change, but the user data gotten by userSelector did not change, and neither did actual computed entities created by extendUserDataSelectorSelector. In those cases, you'll need to do two things:
Multi-memoize the function that extendUserDataSelectorSelector returns. I recommend extracting it to a separate globally-memoized function.
Wrap someFunc so that when it returns an array, it compares that array element-wise to the previous result, and if they have the same elements, returns the previous result.
Edit: Avoiding So Much Caching
Caching at the global level is certainly doable, as shown above, but you can avoid that if you approach the problem with a couple other strategies in mind:
Don't eagerly extend data, defer that to each React (or other view) component that's actually rendering the data itself.
Don't eagerly convert lists of ids/base-objects into extended versions, rather have parents pass those ids/base-objects to children.
I didn't follow those at first in one of my major work projects, and wish I had. As it is, I had to instead go the global-memoization route later since that was easier to fix than refactoring all the views, something which should be done but which we currently lack time/budget for.
Edit 2 (or 4 I guess?): Re-Regarding Collections pt. 1: Multi-Memoizing the Extender
NOTE: Before you go through this part, it presumes that the Base Entity being passed to the Extender will have some sort of id property that can be used to identify it uniquely, or that some sort of similar property can be derived from it cheaply.
For this, you memoize the Extender itself, in a manner similar to any other Selector. However, since you want the Extender to memoize on its arguments, you don't want to pass State directly to it.
Basically, you need a Multi-Memoizer that basically acts in the same manner as re-reselect does for Selectors.
In fact, it's trivial to punch createCachedSelector into doing that for us:
function cachedMultiMemoizeN(n, cacheKeyFn, fn) {
    return createCachedSelector(
        // NOTE: same as [...new Array(n)].map((e, i) => Lodash.nthArg(i))
        [...new Array(n)].map((e, i) => (...args) => args[i]),
        fn
    )(cacheKeyFn);
}

function cachedMultiMemoize(cacheKeyFn, fn) {
    return cachedMultiMemoizeN(fn.length, cacheKeyFn, fn);
}
Then instead of the old extendUserDataSelectorSelector:
const extendUserDataSelectorSelector = createSelector(
    selectStuffThatExtendUserDataSelectorNeeds,
    (state) => state.something.else.it.needs,
    (stuff, somethingElse) =>
        // This function will be cached as long as
        // the results of the above two selectors
        // do not change, same as with any other cached value.
        (user) => {
            // your magic goes here.
            return {
                // ... user with stuff and somethingElse
            };
        }
);
We have these two functions:
// This is the main caching workhorse,
// creating a memoizer per `user.id`
const extendUserData = cachedMultiMemoize(
    // Or however else you get a globally unique user id.
    (user) => user.id,
    function $extendUserData(user, stuff, somethingElse) {
        // your magic goes here.
        return {
            // ...user with stuff and somethingElse
        };
    }
);

// This is still wrapped in createSelector mostly as a convenience.
// It doesn't actually help much with caching.
const extendUserDataSelectorSelector = createSelector(
    selectStuffThatExtendUserDataSelectorNeeds,
    (state) => state.something.else.it.needs,
    (stuff, somethingElse) =>
        // This function will be cached as long as
        // the results of the above two selectors
        // do not change, same as with any other cached value.
        (user) => extendUserData(
            user,
            stuff,
            somethingElse
        )
);
That extendUserData is where the real caching occurs, though fair warning: if you have a lot of baseUser entities, it could grow pretty large.
Edit 2 (or 4 I guess?): Re-Regarding Collections pt. 2: Arrays
Arrays are the bane of caching existence:
arrayOfSomeIds may itself not change, but the entities that the ids within point to could have.
arrayOfSomeIds might be a new object in memory, but in reality has the same ids.
arrayOfSomeIds did not change, but the collection holding the referred-to entities did change, yet the particular entities referred to by these specific ids did not change.
That all is why I advocate for delegating the extension/expansion/reification/whateverelseification of arrays (and other collections!) to as late in the data-getting-deriving-view-rendering process as possible: It's a pain in the amygdala to have to consider all of this.
That said, it's not impossible, it just incurs some extra checking.
Starting with the above cached version of someFunc:
const someFunc = createCachedSelector(
    userSelector,
    extendUserDataSelectorSelector,
    (data, $extendUserDataSelector) =>
        data.map($extendUserDataSelector)
// NOTE THIS PART DOWN HERE!
// This is how re-reselect gets the cache key.
)((state, id) => id);
We can then wrap it in another function that just caches the output:
function keepLastIfEqualBy(isEqual) {
    return function $keepLastIfEqualBy(fn) {
        let lastValue;
        return function $$keepLastIfEqualBy(...args) {
            const nextValue = fn(...args);
            if (!isEqual(lastValue, nextValue)) {
                lastValue = nextValue;
            }
            return lastValue;
        };
    };
}

function isShallowArrayEqual(a, b) {
    if (a === b) return true;
    if (Array.isArray(a) && Array.isArray(b)) {
        if (a.length !== b.length) return false;
        // NOTE: calling .every on an empty array always returns true.
        return a.every((e, i) => e === b[i]);
    }
    return false;
}
Now, we can't just apply this to the result of createCachedSelector, that'd only apply to just one set of outputs. Rather, we need to use it for each underlying selector that createCachedSelector creates. Fortunately, re-reselect lets you configure the selector creator it uses:
const someFunc = createCachedSelector(
    userSelector,
    extendUserDataSelectorSelector,
    (data, $extendUserDataSelector) =>
        data.map($extendUserDataSelector)
)(
    (state, id) => id,
    // NOTE: Second arg to re-reselect: options object.
    {
        // Wrap each selector that createCachedSelector itself creates.
        selectorCreator: (...args) =>
            keepLastIfEqualBy(isShallowArrayEqual)(createSelector(...args)),
    }
)
Bonus Part: Array Inputs
You may have noticed that we only check array outputs, covering cases 1 and 3, which may be good enough. Sometimes, however, you may need catch case 2, as well, checking the input array.
This is doable by using reselect's createSelectorCreator to make our own createSelector using a custom equality function
import { createSelectorCreator, defaultMemoize } from 'reselect';

const createShallowArrayKeepingSelector = createSelectorCreator(
    defaultMemoize,
    isShallowArrayEqual
);

// Also wrapping with keepLastIfEqualBy() for good measure.
const createShallowArrayAwareSelector = (...args) =>
    keepLastIfEqualBy(
        isShallowArrayEqual
    )(
        createShallowArrayKeepingSelector(...args)
    );

// Or, if you have lodash available,
import compose from 'lodash/fp/compose';

const createShallowArrayAwareSelector = compose(
    keepLastIfEqualBy(isShallowArrayEqual),
    createSelectorCreator(defaultMemoize, isShallowArrayEqual)
);
This further changes the someFunc definition, though just by changing the selectorCreator:
const someFunc = createCachedSelector(
    userSelector,
    extendUserDataSelectorSelector,
    (data, $extendUserDataSelector) =>
        data.map($extendUserDataSelector)
)((state, id) => id, {
    selectorCreator: createShallowArrayAwareSelector,
});
Other Thoughts
That all said, you should try taking a look at what shows up in npm when you search for reselect and re-reselect. Some new tools there that may or may not be useful to certain cases. You can do a lot with just reselect and re-reselect plus a few extra functions to fit your needs, though.
A problem we faced when using reselect is that there is no support for dynamic dependency tracking. A selector needs to declare upfront which parts of the state will cause a recomputation.
For example, I have a list of online user IDs, and a mapping of users:
{
    onlineUserIds: [ 'alice', 'dave' ],
    notifications: [ /* unrelated data */ ],
    users: {
        alice: { name: 'Alice' },
        bob: { name: 'Bob' },
        charlie: { name: 'Charlie' },
        dave: { name: 'Dave' },
        eve: { name: 'Eve' }
    }
}
I want to select a list of online users, e.g. [ { name: 'Alice' }, { name: 'Dave' } ].
Since I cannot know upfront which users will be online, I need to declare a dependency on the whole state.users branch of the store:
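With reselect, such a selector looks something like this:

const selectOnlineUsers = createSelector(
    (state) => state.onlineUserIds,
    (state) => state.users, // dependency on the whole users branch
    (onlineUserIds, users) => onlineUserIds.map((id) => users[id])
);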
This works, but this means that changes to unrelated users (bob, charlie, eve) will cause the selector to be recomputed.
I believe this is a problem in reselect’s fundamental design choice: dependencies between selectors are static. (In contrast, Knockout, Vue and MobX do support dynamic dependencies.)
We faced the same problem and came up with #taskworld.com/rereselect. Instead of declaring dependencies upfront and statically, dependencies are collected just-in-time and dynamically during each computation.
This allows our selectors to have a more fine-grained control of which part of state can cause a selector to be recomputed.
Preface
I faced the same case as yours, and unfortunately didn't find an efficient way to call a selector from another selector's body.
I said an efficient way, because you can always have an input selector that passes down the whole state (store), but this will recalculate your selector on every state change:
const someSelector = createSelector(
    getUserIdsSelector,
    state => state,
    (ids, state) => ids.map((id) => yetAnotherSelector(state, id))
)
Approaches
However, I found two possible approaches for the use case described below. I guess your case is similar, so you can take some insights from it.
So the case is as follows: you have a selector that gets a specific User from the Store by an id and returns the User in a specific structure - let's call it the getUserById selector. For now everything is as fine and simple as possible. But the problem occurs when you want to get several Users by their ids and also reuse the previous selector - let's name it the getUsersByIds selector.
1. Always use an Array for the input ids values
The first possible solution is to have a selector that always expects an array of ids (getUsersByIds) and a second one that reuses the previous but returns only 1 User (getUserById). So when you want to get only 1 User from the Store, you use getUserById, but you have to pass an array with only one user id.
Here's the implementation:
import { createSelectorCreator, defaultMemoize } from 'reselect'
import { isEqual } from 'lodash'

/**
 * Create a "selector creator" that uses `lodash.isEqual` instead of `===`
 *
 * Example use case: when we pass an array to the selectors,
 * they are always recalculated, because the default `reselect` memoize function
 * treats the arrays always as new instances.
 *
 * #credits https://github.com/reactjs/reselect#customize-equalitycheck-for-defaultmemoize
 */
const createDeepEqualSelector = createSelectorCreator(
    defaultMemoize,
    isEqual
)

export const getUsersIds = createDeepEqualSelector(
    (state, { ids }) => ids,
    ids => ids
)

export const getUsersByIds = createSelector(
    state => state.users,
    getUsersIds,
    (users, userIds) => {
        return userIds.map(id => ({ ...users[id] }))
    }
)

export const getUserById = createSelector(getUsersByIds, users => users[0])
Usage:
// Get 1 User by id
const user = getUserById(state, { ids: [1] })

// Get as many Users as you want by ids
const users = getUsersByIds(state, { ids: [1, 2, 3] })
2. Reuse selector's body, as a stand-alone function
The idea here is to separate the common, reusable part of the selector body into a stand-alone function, so that this function can be called from all other selectors.
Here's the implementation:
export const getUsersByIds = createSelector(
    state => state.users,
    getUsersIds,
    (users, userIds) => {
        return userIds.map(id => _getUserById(users, id))
    }
)

export const getUserById = createSelector(
    state => state.users,
    (state, props) => props.id,
    _getUserById
)

const _getUserById = (users, id) => ({ ...users[id] })
Usage:
// Get 1 User by id
const user = getUserById(state, { id: 1 })

// Get as many Users as you want by ids
const users = getUsersByIds(state, { ids: [1, 2, 3] })
Conclusion
Approach #1 has less boilerplate (we don't have a stand-alone function) and a cleaner implementation.
Approach #2 is more reusable. Imagine a case where we don't have the User's id when we call a selector, but we get it from within the selector's body as a relation. In that case, we can easily reuse the stand-alone function. Here's a pseudo-example:
export const getBook = createSelector(
    state => state.books,
    state => state.users,
    (state, props) => props.id,
    (books, users, id) => {
        const book = books[id]

        // Here we have the author id (User's id)
        // and our goal is to reuse the `getUserById()` selector body,
        // so our solution is to reuse the stand-alone `_getUserById` function.
        const authorId = book.authorId
        const author = _getUserById(users, authorId)

        return {
            ...book,
            author
        }
    }
)
I have made the following workaround:
const getSomeSelector = (state: RootState) => () => state.someSelector;
const getState = (state: RootState) => () => state;

const reportDerivedStepsSelector = createSelector(
    [getState, getSomeSelector],
    (getState, someSelector) => {
        const state = getState();
        const getAnother = anotherSelector(state);
        // ...
    }
);
The function getState will never change and you can get the complete state from your selector without breaking the selector memo.
Recompute is an alternative to reselect that implements dynamic dependency tracking and allows any number of arguments to be passed to the selector; you could check whether this would solve your problem.
You can add as many parameters as you want, and the parameters can be other selector functions.
The final callback receives the results of these selectors, respectively:
export const anySelector = createSelector(firstSelector, second, ..., (resultFromFirstSelector, resultFromSecond, ...) => { /* do your thing.. */ });
See the documentation.

Dispatch Redux action after React Apollo query returns

I'm using React Apollo to query all records in my datastore so I can create choices within a search filter.
The important database model I'm using is Report.
A Report has doorType, doorWidth, glass and manufacturer fields.
Currently, when the query responds, I'm passing allReports to multiple dumb components which go through the array and just get the unique items to make a selectable list, like so:
const uniqueItems = []
items.map(i => {
    const current = i[itemType]
    if (typeof current === 'object') {
        if (uniqueItems.filter(o => o.id !== current.id)) {
            return uniqueItems.push(current)
        }
    } else if (!uniqueItems.includes(current)) {
        return uniqueItems.push(current)
    }
    return
})
Obviously this code isn't pretty and it's a bit overkill.
I'd like to dispatch an action when the query returns within my SidebarFilter components. Here is the query...
const withData = graphql(REPORT_FILTER_QUERY, {
    options: ({ isPublished }) => ({
        variables: { isPublished }
    })
})

const mapStateToProps = ({
    reportFilter: { isPublished }
    // filterOptions: { doorWidths }
}) => ({
    isAssessment
    // doorWidths
})

const mapDispatchToProps = dispatch =>
    bindActionCreators(
        {
            resetFilter,
            saveFilter,
            setDoorWidths,
            handleDoorWidthSelect
        },
        dispatch
    )

export default compose(connect(mapStateToProps, mapDispatchToProps), withData)(
    Filter
)
The Redux action setDoorWidths basically does the code above in the SidebarFilter component but it's kept in the store so I don't need to re-run the query should the user come back to the page.
It's very rare the data will update and the sidebar needs to change.
Hopefully there is a solution using the props argument to the graphql function. I feel like the data could be taken from ownProps and then an action could be dispatched here but the data could error or be loading, and that would break rendering.
Edit:
Query:
query ($isPublished: Boolean!) {
    allReports(filter: {
        isPublished: $isPublished
    }) {
        id
        oldId
        dbrw
        core
        manufacturer {
            id
            name
        }
        doorWidth
        doorType
        glass
        testBy
        testDate
        testId
        isAssessment
        file {
            url
        }
    }
}
While this answer addresses the specific issue of the question, the more general question -- where to dispatch a Redux action based on the result of a query -- remains unclear. There does not, as yet, seem to be a best practice here.
It seems to me that, since Apollo already caches the query results in your store for you (or a separate store, if you didn't integrate them), it would be redundant to dispatch an action that would also just store the data in your store.
If I understood your question correctly, your intent is to filter the incoming data only once and then send the result down as a prop to the component's stateless children. You were on the right track with using the props property in the graphql HOC's config. Why not just do something like this:
const mapDataToProps = ({ data = {} }) => {
    const items = data
    const uniqueItems = []
    // insert your logic for filtering the data here
    return { uniqueItems } // or whatever you want the prop to be called
}

const withData = graphql(REPORT_FILTER_QUERY, {
    options: ({ isPublished }) => ({
        variables: { isPublished }
    }),
    props: mapDataToProps,
})
The above may need to be modified depending on what the structure of data actually looks like. data has some handy props on it that can let you check for whether the query is loading (data.loading) or has errors (data.error). The above example already guards against sending an undefined prop down to your children, but you could easily incorporate those properties into your logic if you so desired.
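For example, a guarded version of the mapping could look roughly like this (allReports matches the query above; the deduplication itself is left as in your current code):

const mapDataToProps = ({ data = {} }) => {
    const { loading, error, allReports = [] } = data

    if (loading || error) {
        // Pass the flags down so children can render a placeholder or an error message
        return { loading, error, uniqueItems: [] }
    }

    const uniqueItems = [] // insert your logic for filtering allReports here
    return { loading: false, error: undefined, uniqueItems }
}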

Managing state in angular2 application - side effects?

This is more of a general question, but it's based on Victor Savkin's post Managing state in angular2.
Let's consider approach described there that uses RxJs:
interface Todo { id: number; text: string; completed: boolean; }
interface AppState { todos: Todo[]; visibilityFilter: string; }

function todos(initState: Todo[], actions: Observable<Action>): Observable<Todo[]> {
    return actions.scan((state, action) => {
        if (action instanceof AddTodoAction) {
            const newTodo = {id: action.todoId, text: action.text, completed: false};
            return [...state, newTodo];
        } else {
            return state;
        }
    }, initState);
}
All is fine, but let's add a few more requirements:
Upon adding new Todo item, its text should be sent to the backend and analysed to extract possible due date and location.
If Todo item has due date, it should be added to my Google calendar
So if I add the Todo "Get my hair done at Sally's Saloon on Thursday", the first call would return Sally's Saloon and a date set to this week's (or next week's) Thursday, and the second call would add this todo to my Google calendar and mark the item as being in the calendar.
So my new Todo item structure might look something like this:
interface Todo {
    id: number;
    text: string;
    completed: boolean;
    location?: Coordinates;
    date?: Date;
    inCalendar?: boolean;
    parsed?: boolean;
}
And now I have two side effects:
After a todo has been added, I need to parse the text.
After a date has been added to a Todo, I need to add it to the calendar.
How do I deal with these side effects in this approach? Redux says that reducers should be kept pure, and it also has the notion of Sagas.
Option 1 - fire new event(s) for side effects when todo is added
function todos(initState: Todo[], actions: Observable<Action>): Observable<Todo[]> {
    return actions.scan((state, action) => {
        if (action instanceof AddTodoAction) {
            const newTodo = {id: action.todoId, text: action.text, completed: false};
            actions.onNext(new ParseTodoAction(action.todoId));
            return [...state, newTodo];
        } else if (action instanceof ParseTodoAction) {
            const todo = state.find(t => t.todoId === action.todoId)
            parserService
                .parse(todo.todoId, todo.text)
                .subscribe(r => actions.onNext(new TodoParsedAction(todo.todoId, r.coordinates, r.date)))
        } else {
            return state;
        }
    }, initState);
}
But this will fail, because the new todo is not yet available in the state.
I could of course use only TodoParsedAction and, instead of ParseTodoAction, just invoke the backend call inline, but this would also assume that the backend call takes longer to process and that by the time it finishes the state will already have that new Todo item - which is trouble waiting to happen.
Option 2 - subscribe to actions and check each todo for missing properties
actions
    .flatMap(todos => Observable.from(todos))
    .subscribe(todo => {
        if (!todo.coordinates && !todo.parsed) {
            parserService
                .parse(todo.todoId, todo.text)
                .subscribe(r => actions.onNext(new TodoParsedAction(todo.todoId, r.coordinates, r.date)))
        }
        if (todo.date && todo.inCalendar === undefined) {
            calendarService
                .add(todo.text, todo.date)
                .subscribe(_ => actions.onNext(new TodoInCalendarAction(todo.todoId, true)))
        }
    })
But this somehow does not feel right - shouldn't everything be managed by actions, and should I really loop through all of the todo items every time?
Your option 1 can't work as stated: actions is an Observable<Action>; observables are read-only, and onNext isn't part of that API. You would need an Observer<Action> to support option 1. This highlights the real flaw of option 1: your state function (the same thing as a Redux reducer) needs to be pure and side-effect free. That means it cannot and should not dispatch more actions.
Now in the blog article you reference, indeed the code is really passing in a Subject, which is both Observer and Observable. So you probably do have an onNext. But I can tell you that recursively publishing data to a Subject while you are handling data being published by that Subject will get you into no end of trouble and is rarely worth the headaches to make work correctly.
In Redux, the typical solution to invoking backend processing to enrich your state would be to dispatch multiple actions at the beginning when you have already decided to dispatch AddTodo. This can often be done by using redux-thunk and dispatching functions as "smart actions":
Instead of:
export function addToDo(args) {
    return new AddToDoAction(args);
}
you'd do:
export function addToDo(args) {
    return (dispatch) => {
        dispatch(new AddToDoAction(args)); // if you want to dispatch the Todo before parsing
        dispatch(parseToDo(args)); // handle parsing
    };
}

export function parseToDo(args) {
    return (dispatch) => {
        if (thisToDoNeedsParsing(args)) {
            callServerAndParse(args).then(result => {
                // dispatch an action to update the Todo
                dispatch(new EnrichToDoWithParsedData(result));
            });
        }
    };
}
// UI code would do:
dispatch(addToDo(args));
The UI dispatches a smart action (thunk) which will dispatch the AddToDoAction to get the unparsed todo in your state (your UI can choose to not show it until the parse completes if you want). It then dispatches another smart action (thunk) which will actually call the server to get more data then dispatch an EnrichToDoWithParsedData action with the results so that your Todo can be updated.
As for updating the calendar... you can probably use the pattern above (inserting calls to possiblyUpdateCalendar() in both addToDo and parseToDo) so that if the todo has all the data you need, it can update the calendar and, when that finishes, dispatch an action to mark the todo as added.
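A rough sketch of that calendar thunk; possiblyUpdateCalendar and addToGoogleCalendar are hypothetical names following the wording above, and AddedToCalendar matches the action handled in the state function below:

export function possiblyUpdateCalendar(todo) {
    return (dispatch) => {
        if (!todo.date || todo.inCalendar) {
            return; // nothing to add yet, or it is already in the calendar
        }
        addToGoogleCalendar(todo.text, todo.date).then(() => {
            dispatch(new AddedToCalendar(todo.id));
        });
    };
}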
Now this example I've shown is Redux-specific and I don't think the RxJs-based example you are working from has anything like a thunk. One way to add support for this in your scheme is to add a flatMap operator to the subject that goes something like this:
let actionStream = actionSubject.flatMap(action => {
    if (typeof action !== "function") {
        // not a thunk. just return it as a simple observable
        return Rx.Observable.of(action);
    }

    // call the function and give it a dispatch method to collect any actions it dispatches
    var actions = [];
    var dispatch = a => actions.push(a);
    action(dispatch);

    // now return the actions that the action thunk dispatched
    return Rx.Observable.from(actions);
});

// pass actionStream to your stateFns instead of passing the raw subject
var state$ = stateFn(initState, actionStream);

// Now your UI code *can* pass in "smart" actions:
actionSubject.onNext(addTodo(args));
// or "dumb" actions:
actionSubject.onNext(new SomeSimpleAction(args));
Notice all of that code above is in the code that dispatches an action. I didn't show any of your state function. Your state function would be pure and something like:
function todos(initState: Todo[], actions: Observable<Action>): Observable<Todo[]> {
    return actions.scan((state, action) => {
        if (action instanceof AddTodoAction) {
            const newTodo = {id: action.todoId, text: action.text, completed: false};
            return [...state, newTodo];
        } else if (action instanceof EnrichToDoWithParsedData) {
            // (replace the todo inside the state array with a new updated one)
        } else if (action instanceof AddedToCalendar) {
            // (replace the todo inside the state array with a new updated one)
        } else {
            return state;
        }
    }, initState);
}
