I am new to Angular and am writing a service which I will use to add new addresses (posting to a REST API).
The saveAddress method returns the newly created address object from the server, which I want to push into the existing array of addresses in the store. I am trying to do something like:
saveAddress( payload ) {
  this.httpClient.post( this.endpoint, payload ).subscribe(
    response => {
      this.ngRedux.select( s => s.addresses ).subscribe( addresses => {
        let data = addresses.data.push( response )
        this.ngRedux.dispatch({ type: ADD_ADDRESS_SUCCESS, payload: data })
      })
    },
    err => {
      this.ngRedux.dispatch({ type: ADD_ADDRESS_ERROR })
    }
  )
}
How may I do it properly?
You must never alter the store (state) anywhere other than in a reducer. The reducer receives an action (ADD_ADDRESS_SUCCESS) with its payload ({ address: {...} }) and then updates the store with that information:
reducer(state = initialState, action) {
  switch (action.type) {
    case ADD_ADDRESS_SUCCESS:
      return Object.assign({}, state, {
        // build a new array instead of mutating the existing one with push()
        addresses: [...state.addresses, action.payload.address]
      })
    default:
      return state
  }
}
Note that we always make a copy of the state and do not mutate it.
To really understand it, please read the documentation for @angular-redux/store.
For API calls you should use Epics: think of an Epic as a pipeline that transforms an action into one or multiple other actions while handling a side effect. In your case the epic reacts to ADD_ADDRESS_REQUEST, makes an API call with the payload and then transforms the action into either ADD_ADDRESS_SUCCESS or ADD_ADDRESS_ERROR, depending on the result of the API call. It will never update the state itself but delegates this to the reducer handling ADD_ADDRESS_SUCCESS and ADD_ADDRESS_ERROR respectively.
Read more about epics in the corresponding @angular-redux/store docs.
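A minimal sketch of such an epic could look like the following. This assumes the redux-observable style API and RxJS 6 pipeable operators; addressService.save() is a hypothetical wrapper around the httpClient.post call from the question that returns an Observable:
import { ofType } from 'redux-observable';
import { of } from 'rxjs';
import { map, mergeMap, catchError } from 'rxjs/operators';

const addAddressEpic = (action$) =>
  action$.pipe(
    ofType(ADD_ADDRESS_REQUEST),
    mergeMap((action) =>
      addressService.save(action.payload).pipe(
        // the created address returned by the API becomes the success payload
        map((address) => ({ type: ADD_ADDRESS_SUCCESS, payload: { address } })),
        // errors are turned into the error action; state is never touched here
        catchError(() => of({ type: ADD_ADDRESS_ERROR }))
      )
    )
  );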
Related
For example, could I iterate over Vuex data in a Vue file, pick out the data that needs updating, then pass the found data to an action, which commits it so that the mutation only makes the update?
The reason I'm unsure about it is that the typical format of a Vuex mutation includes the 'state' parameter, so I assume it needs to be used, and the only way to do that is either to do all the looping inside the mutation, or to pass indexes to it so it can find the exact fields needing changes more quickly.
For those who asked, a code example:
someVueFile.vue
computed: {
  ...mapState({
    arrayOfObjects: (state) => state.someVuexStore.arrayOfObjects
  }),
},
methods: {
  myUpdateMethod() {
    let toBePassedForUpdate = null;
    let newFieldState = "oneValue";
    this.arrayOfObjects.forEach((myObject) => {
      if (myObject.someDataField !== "oneValue") {
        toBePassedForUpdate = myObject.someDataField;
      }
    })
    if (toBePassedForUpdate) {
      let passObject = {
        updateThis: toBePassedForUpdate,
        newFieldState: newFieldState
      }
      this.$store.dispatch("updateMyObjectField", passObject)
    }
  }
}
someVuexStore.js
const state = {
  arrayOfObjects: [],
  /* contains some object such as:
  myCoolObject: {
    someDataField: "otherValue"
  }
  */
}

const mutations = {
  updateMyObjectField(state, data) {
    data.updateThis = data.newFieldState;
  }
}

const actions = {
  updateMyObjectField(state, data) {
    state.commit("updateMyObjectField", data);
  }
}
Yes, it's alright to mutate state that comes in through the payload argument rather than through the state argument. Vuex doesn't bother to distinguish between the two. In either case, it's the same state, and neither option detracts from the purposes of using mutations.
To feel more sure of that, you can ask what are the purposes of mutations and of enforcing their use. The answer is to keep a centralized, trackable location for concretely defined changes to state.
To illustrate this is a good thing, imagine an app with 1000 components, each one changing state locally, outside of a mutation, and in different ways. This could be a nightmare to debug or comprehend as a 3rd party, because you don't know how or where state changes.
So mutations enforce how and a centralized where. Neither of these are damaged by only using the payload argument in a mutation.
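For comparison, the more conventional form passes only plain data in the payload and does the lookup against the state argument inside the mutation. A rough sketch based on the store above, assuming someDataField is enough to identify the object:
const mutations = {
  // conventional alternative: locate the object via the state argument
  // and mutate it there, using only plain data from the payload
  updateMyObjectField(state, { updateThis, newFieldState }) {
    const target = state.arrayOfObjects.find(
      (myObject) => myObject.someDataField === updateThis
    );
    if (target) {
      target.someDataField = newFieldState;
    }
  }
}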
I would do all of the logic from one action. You can destructure the context object in the action signature like so:
actions: {
  myAction({ state, commit, getters, dispatch }, anyOtherParameter) {
    let myVar = getters.myGetter // use a getter to get your data
    // execute logic
    commit('myCommit', myVar) // commit the change
  }
}
If you need to do the logic in your component you can easily extract the getter and the logic from the action.
I am in an unusual situation:
On page load, let's say my Redux store gets hydrated with a basic value:
{ foo: true }
In a reducer, I am using react-router-redux (or any other library that dispatches actions itself, only giving access to an action type) to update my state on actions of type LOCATION_CHANGE:
...
case LOCATION_CHANGE: {
const deserialized = deserialize(action.payload.query, foo);
return { ...state, deserialized };
}
...
My deserialize function needs the value of foo to update my state accordingly. Normally, I would add getState().foo to my action payload, but since this is a third-party library, I do not control the action payload. Is there an easy workaround for this problem that doesn't require me to rip out the third-party library?
Yes, use a Redux middleware to transform the action.
You might want to look at some of the existing middleware for intercepting and modifying dispatched actions.
I accepted markerikson's answer, but here is the middleware I ended up writing:
const locationChangeMiddleware = store => next => action => {
  if (action.type === LOCATION_CHANGE) {
    const { foo } = store.getState();
    return next({
      ...action,
      payload: {
        ...action.payload,
        foo,
      },
    });
  }
  return next(action);
};
Now my reducer looks like:
...
case LOCATION_CHANGE: {
const deserialized = deserialize(action.payload.query, action.payload.foo);
return { ...state, deserialized };
}
...
I'm using React Apollo to query all records in my datastore so I can create choices within a search filter.
The important database model I'm using is Report.
A Report has doorType, doorWidth, glass and manufacturer fields.
Currently when the query responds, I'm passing allReports to multiple dumb components which go through the array and just pick out the unique items to make a selectable list, like so:
const uniqueItems = []
items.forEach(i => {
  const current = i[itemType]
  if (typeof current === 'object') {
    // only add the object if one with the same id isn't already present
    if (!uniqueItems.some(o => o.id === current.id)) {
      uniqueItems.push(current)
    }
  } else if (!uniqueItems.includes(current)) {
    uniqueItems.push(current)
  }
})
Obviously this code isn't pretty and it's a bit overkill.
I'd like to dispatch an action when the query returns within my SidebarFilter components. Here is the query...
const withData = graphql(REPORT_FILTER_QUERY, {
options: ({ isPublished }) => ({
variables: { isPublished }
})
})
const mapStateToProps = ({
reportFilter: { isPublished }
// filterOptions: { doorWidths }
}) => ({
isPublished
// doorWidths
})
const mapDispatchToProps = dispatch =>
bindActionCreators(
{
resetFilter,
saveFilter,
setDoorWidths,
handleDoorWidthSelect
},
dispatch
)
export default compose(connect(mapStateToProps, mapDispatchToProps), withData)(
Filter
)
The Redux action setDoorWidths basically does the code above in the SidebarFilter component but it's kept in the store so I don't need to re-run the query should the user come back to the page.
It's very rare the data will update and the sidebar needs to change.
Hopefully there is a solution using the props argument to the graphql function. I feel like the data could be taken from ownProps and then an action could be dispatched here but the data could error or be loading, and that would break rendering.
Edit:
Query:
query ($isPublished: Boolean!) {
  allReports(filter: {
    isPublished: $isPublished
  }) {
    id
    oldId
    dbrw
    core
    manufacturer {
      id
      name
    }
    doorWidth
    doorType
    glass
    testBy
    testDate
    testId
    isAssessment
    file {
      url
    }
  }
}
While this answer addresses the specific issue of the question, the more general question -- where to dispatch a Redux action based on the result of a query -- remains unclear. There does not, as yet, seem to be a best practice here.
It seems to me that, since Apollo already caches the query results in your store for you (or a separate store, if you didn't integrate them), it would be redundant to dispatch an action that would also just store the data in your store.
If I understood your question correctly, your intent is to filter the incoming data only once and then send the result down as a prop to the component's stateless children. You were on the right track with using the props property in the graphql HOC's config. Why not just do something like this:
const mapDataToProps = ({ data = {} }) => {
  const items = data
  const uniqueItems = []
  // insert your logic for filtering the data here
  return { uniqueItems } // or whatever you want the prop to be called
}

const withData = graphql(REPORT_FILTER_QUERY, {
  options: ({ isPublished }) => ({
    variables: { isPublished }
  }),
  props: mapDataToProps,
})
The above may need to be modified depending on what the structure of data actually looks like. data has some handy props on it that can let you check for whether the query is loading (data.loading) or has errors (data.error). The above example already guards against sending an undefined prop down to your children, but you could easily incorporate those properties into your logic if you so desired.
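For instance, a sketch of how the loading and error flags could be folded into the same mapDataToProps; dedupe here just stands in for the filtering logic you already have:
const mapDataToProps = ({ data = {} }) => {
  // pass the query status down so children can render a spinner or an error state
  if (data.loading || data.error) {
    return { uniqueItems: [], loading: data.loading, error: data.error }
  }
  const uniqueItems = dedupe(data.allReports) // your filtering logic from above
  return { uniqueItems, loading: false }
}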
I've currently got a situation whereby I need Redux actions to be run consecutively. I've taken a look at various middlewares, such as redux-promise, which seem to be fine if you know what the successive actions are at the point of the root (for lack of a better term) action being triggered.
Essentially, I'd like to maintain a queue of actions that can be added to at any point. Each object has an instance of this queue in its state and dependent actions can be enqueued, processed and dequeued accordingly. I have an implementation, but in doing so I'm accessing state in my action creators, which feels like an anti-pattern.
I'll try and give some context on use case and implementation.
Use Case
Suppose you want to create some lists and persist them on a server. On list creation, the server responds with an id for that list, which is used in subsequent API end points pertaining to the list:
http://my.api.com/v1.0/lists/ // POST returns some id
http://my.api.com/v1.0/lists/<id>/items // API end points include id
Imagine that the client wants to perform optimistic updates on these API endpoints, to enhance UX - nobody likes looking at spinners. So when you create a list, your new list instantly appears, with an option to add items:
+-------------+----------+
| List Name | Actions |
+-------------+----------+
| My New List | Add Item |
+-------------+----------+
Suppose that someone attempts to add an item before the response from the initial create call has made it back. The items API is dependent on the id, so we know we can't call it until we have that data. However, we might want to optimistically show the new item and enqueue a call to the items API so that it triggers once the create call is done.
A Potential Solution
The method I'm using to get around this currently is by giving each list an action queue - that is, a list of Redux actions that will be triggered in succession.
The reducer functionality for a list creation might look something like this:
case ADD_LIST:
  return {
    id: undefined, // To be filled on server response
    name: action.payload.name,
    actionQueue: []
  }
Then, in an action creator, we'd enqueue an action instead of directly triggering it:
export const createListItem = (name) => {
  return (dispatch) => {
    dispatch(addList(name)); // Optimistic action
    dispatch(enqueueListAction(name, backendCreateListAction(name)));
  }
}
For brevity, assume the backendCreateListAction function calls a fetch API, which dispatches messages to dequeue from the list on success/failure.
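For illustration, a rough sketch of what that backendCreateListAction might look like under those assumptions; listCreated and listCreateFailed are hypothetical action creators and the endpoint is the one from the example above:
const backendCreateListAction = (name) => (dispatch) => {
  fetch('http://my.api.com/v1.0/lists/', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name })
  })
    .then((res) => res.json())
    .then((list) => {
      dispatch(listCreated(list));       // hypothetical success action
      dispatch(dequeueListAction(name)); // triggers the next queued action
    })
    .catch(() => {
      dispatch(listCreateFailed(name));  // hypothetical failure action
      dispatch(dequeueListAction(name));
    });
};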
The Problem
What worries me here is the implementation of the enqueueListAction method. This is where I'm accessing state to govern the advancement of the queue. It looks something like this (ignore this matching on name - this actually uses a clientId in reality, but I'm trying to keep the example simple):
const enqueueListAction = (name, asyncAction) => {
  return (dispatch, getState) => {
    const state = getState();

    dispatch(enqueue(name, asyncAction));

    const thisList = state.lists.find((l) => {
      return l.name === name;
    });

    // If there's nothing in the queue then process immediately
    if (thisList.actionQueue.length === 0) {
      asyncAction(dispatch);
    }
  }
}
Here, assume that the enqueue method returns a plain action that inserts an async action into the list's actionQueue.
The whole thing feels a bit against the grain, but I'm not sure if there's another way to go with it. Additionally, since I need to dispatch in my asyncActions, I need to pass the dispatch method down to them.
There is similar code in the method to dequeue from the list, which triggers the next action should one exist:
const dequeueListAction = (name) => {
return (dispatch, getState) => {
dispatch(dequeue(name));
const state = getState();
const thisList = state.lists.find((l) => {
return l.name === name;
});
// Process next action if exists.
if (thisList.actionQueue.length > 0) {
thisList.actionQueue[0].asyncAction(dispatch);
}
}
Generally speaking, I can live with this, but I'm concerned that it's an anti-pattern and there might be a more concise, idiomatic way of doing this in Redux.
Any help is appreciated.
I have the perfect tool for what you are looking for. When you need a lot of control over Redux (especially anything asynchronous) and you need Redux actions to happen sequentially, there is no better tool than Redux Sagas. It is built on top of ES6 generators, giving you a lot of control, since you can, in a sense, pause your code at certain points.
The action queue you describe is what is called a saga. Now since it is created to work with redux these sagas can be triggered to run by dispatching in your components.
Since Sagas use generators you can also ensure with certainty that your dispatches occur in a specific order and only happen under certain conditions. Here is an example from their documentation and I will walk you through it to illustrate what I mean:
function* loginFlow() {
  while (true) {
    const { user, password } = yield take('LOGIN_REQUEST')
    const token = yield call(authorize, user, password)
    if (token) {
      yield call(Api.storeItem, { token })
      yield take('LOGOUT')
      yield call(Api.clearItem, 'token')
    }
  }
}
Alright, it looks a little confusing at first but this saga defines the exact order a login sequence needs to happen. The infinite loop is allowed because of the nature of generators. When your code gets to a yield it will stop at that line and wait. It will not continue to the next line until you tell it to. So look where it says yield take('LOGIN_REQUEST'). The saga will yield or wait at this point until you dispatch 'LOGIN_REQUEST' after which the saga will call the authorize method, and go until the next yield. The next method is an asynchronous yield call(Api.storeItem, {token}) so it will not go to the next line until that code resolves.
Now, this is where the magic happens. The saga will stop again at yield take('LOGOUT') until you dispatch LOGOUT in your application. This is crucial since if you were to dispatch LOGIN_REQUEST again before LOGOUT, the login process would not be invoked. Now, if you dispatch LOGOUT it will loop back to the first yield and wait for the application to dispatch LOGIN_REQUEST again.
Redux Sagas are, by far, one of my favorite tools to use with Redux. It gives you so much control over your application and anyone reading your code will thank you since everything now reads one line at a time.
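Wiring a saga into the store is a matter of registering the middleware and running it. A minimal sketch, where rootReducer and the component-side credentials are placeholders and loginFlow is the saga shown above:
import { createStore, applyMiddleware } from 'redux'
import createSagaMiddleware from 'redux-saga'

const sagaMiddleware = createSagaMiddleware()
const store = createStore(rootReducer, applyMiddleware(sagaMiddleware))
sagaMiddleware.run(loginFlow) // start the saga from above

// from here on the saga is driven purely by dispatches, e.g. in a component:
store.dispatch({ type: 'LOGIN_REQUEST', user: 'alice', password: 'secret' })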
Have a look at this: https://github.com/gaearon/redux-thunk
The id alone shouldn't go through the reducer. In your action creator (thunk), fetch the list id first, and then (in the then() callback) perform a second call to add the item to the list. After this, you can dispatch different actions based on whether or not the addition was successful.
You can dispatch multiple actions while doing this, to report when the server interaction has started and finished. This will allow you to show a message or a spinner, in case the operation is heavy and might take a while.
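A rough sketch of such a thunk, assuming hypothetical createList/addItemToList API helpers (returning promises) and placeholder action creators:
const addItemToNewList = (listName, itemData) => (dispatch) => {
  dispatch(savingStarted()); // show a spinner / message

  // fetch the list id first...
  return createList({ name: listName })
    .then((list) =>
      // ...then perform a second call to add the item to the list
      addItemToList(list.id, itemData)
    )
    .then((item) => dispatch(addItemSuccess(item)))
    .catch((error) => dispatch(addItemFailure(error)));
};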
A more in-depth analysis can be found here: http://redux.js.org/docs/advanced/AsyncActions.html
All credit to Dan Abramov
I was facing a similar problem to yours. I needed a queue to guarantee that optimistic actions were committed, or eventually committed (in case of network problems), to the remote server in the same sequential order they were created, or rolled back if that wasn't possible. I found that Redux alone falls short for this, basically because I believe it was not designed for it, and doing it with promises alone can be a really hard problem to reason about, besides the fact that you need to manage your queue state somehow... IMHO.
I think @Pcriulan's suggestion on using redux-saga was a good one. At first sight, redux-saga doesn't provide anything to help you with this until you get to channels. This opens a door to dealing with concurrency the way other languages do, CSP specifically (see Go or Clojure's core.async for example), thanks to JS generators. There are even questions on why it's named after the Saga pattern and not CSP haha... anyway.
So here is how a saga could help you with your queue:
// assuming the usual redux-saga effect imports, e.g.:
// import { actionChannel, take, call, put } from 'redux-saga/effects'
// plus a delay helper from redux-saga
export default function* watchRequests() {
while (true) {
// 1- Create a channel for request actions
const requestChan = yield actionChannel('ASYNC_ACTION');
let resetChannel = false;
while (!resetChannel) {
// 2- take from the channel
const action = yield take(requestChan);
// 3- Note that we're using a blocking call
resetChannel = yield call(handleRequest, action);
}
}
}
function* handleRequest({ asyncAction, payload }) {
while (true) {
try {
// Perform action
yield call(asyncAction, payload);
return false;
} catch(e) {
if(e instanceof ConflictError) {
// Could be a rollback or syncing again with server?
yield put({ type: 'ROLLBACK', payload });
// Store is out of consistency so
// don't let waiting actions come through
return true;
} else if(e instanceof ConnectionError) {
// try again
yield call(delay, 2000);
}
}
}
}
So the interesting part here is how the channel acts as a buffer (a queue) which keeps "listening" for incoming actions but won't proceed with future actions until it finishes with the current one. You might need to go over their documentation in order to grasp the code better, but I think it's worth it. The resetting channel part may or may not work for your needs :thinking:
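For clarity, the actions feeding that channel would carry the effect and its payload with them. The exact shape below is an assumption chosen to match handleRequest's destructuring, and api.createListItem is a hypothetical blocking call:
// dispatched from a component (or another saga)
store.dispatch({
  type: 'ASYNC_ACTION',
  asyncAction: api.createListItem, // the call handleRequest will perform
  payload: { listId: 'local-1', itemData: { name: 'milk' } }
})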
Hope it helps!
This is how I would tackle this problem:
Make sure each local list has a unique identifier. I'm not talking about the backend id here. Name is probably not enough to identify a list? An "optimistic" list not yet persisted should be uniquely identifiable, and a user may try to create 2 lists with the same name, even if it's an edge case.
On list creation, add a promise of backend id to a cache
CreatedListIdPromiseCache[localListId] = createBackendList({...}).then(list => list.id);
On item add, try to get the backend id from the Redux store. If it does not exist, then try to get it from CreatedListIdPromiseCache. The returned id has to be wrapped in a promise, because CreatedListIdPromiseCache stores promises.
const getListIdPromise = (localListId, state) => {
  // Get id from already created list
  if (state.lists[localListId]) {
    return Promise.resolve(state.lists[localListId].id)
  }
  // Get id from pending list creations
  else if (CreatedListIdPromiseCache[localListId]) {
    return CreatedListIdPromiseCache[localListId];
  }
  // Unexpected error
  else {
    return Promise.reject(new Error("Unable to find backend list id for list with local id = " + localListId));
  }
}
Use this method in your addItem, so that your addItem will be delayed automatically until the backend id is available
// Create item, but do not attempt creation until we are sure to get a backend id
const backendListItemPromise = getListIdPromise(localListId, reduxState).then(backendListId => {
  return createBackendListItem(backendListId, itemData);
})

// Provide user optimistic feedback even if the item is not yet added to the list
dispatch(addListItemOptimistic());
backendListItemPromise.then(
  backendListItem => dispatch(addListItemCommit()),
  error => dispatch(addListItemRollback())
);
You may want to clean the CreatedListIdPromiseCache, but it's probably not very important for most apps unless you have very strict memory usage requirements.
Another option would be for the backend id to be computed on the frontend, with something like a UUID. Your backend just needs to verify the uniqueness of this id. That way you would always have a valid backend id for all optimistically created lists, even if the backend hasn't replied yet.
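A quick sketch of that variant; crypto.randomUUID() is one way to generate the id, and the action creators are placeholders:
// generate the definitive id on the client; the backend only verifies uniqueness
const newList = {
  id: crypto.randomUUID(),
  name: 'My New List'
};
dispatch(addListOptimistic(newList)); // hypothetical optimistic action
createBackendList(newList) // the same id is sent to the server
  .then(() => dispatch(addListCommit(newList.id)))
  .catch(() => dispatch(addListRollback(newList.id)));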
You don't have to deal with queuing actions. It will hide the data flow and it will make your app more tedious to debug.
I suggest you use some temporary ids when creating a list or an item and then update those ids when you actually receive the real ones from the server.
Something like this maybe? (not tested, but you get the idea):
EDIT : I didn't understand at first that the items need to be automatically saved when the list is saved. I edited the createList action creator.
/* REDUCERS & ACTIONS */
// assuming: import { combineReducers } from 'redux'
// and some uniqueId() helper for generating temporary ids
// this "thunk" action creator is responsible for :
// - creating the temporary list item in the store with some
// generated unique id
// - dispatching the action to tell the store that a temporary list
// has been created (optimistic update)
// - triggering a POST request to save the list in the database
// - dispatching an action to tell the store the list is correctly
// saved
// - triggering a POST request for saving items related to the old
// list id and triggering the corresponding receiveCreatedItem
// action
const createList = (name) => {
const tempList = {
id: uniqueId(),
name
}
return (dispatch, getState) => {
dispatch(tempListCreated(tempList))
FakeListAPI
.post(tempList)
.then(list => {
dispatch(receiveCreatedList(tempList.id, list))
// when the list is saved we can now safely
// save the related items since the API
// certainly need a real list ID to correctly
// save an item
const itemsToSave = getState().items.filter(item => item.listId === tempList.id)
for (let tempItem of itemsToSave) {
FakeListItemAPI
.post(tempItem)
.then(item => dispatch(receiveCreatedItem(tempItem.id, item)))
}
})
}
}
const tempListCreated = (list) => ({
type: 'TEMP_LIST_CREATED',
payload: {
list
}
})
const receiveCreatedList = (oldId, list) => ({
type: 'RECEIVE_CREATED_LIST',
payload: {
list
},
meta: {
oldId
}
})
const createItem = (name, listId) => {
const tempItem = {
id: uniqueId(),
name,
listId
}
return (dispatch) => {
dispatch(tempItemCreated(tempItem))
}
}
const tempItemCreated = (item) => ({
type: 'TEMP_ITEM_CREATED',
payload: {
item
}
})
const receiveCreatedItem = (oldId, item) => ({
type: 'RECEIVE_CREATED_ITEM',
payload: {
item
},
meta: {
oldId
}
})
/* given this state shape :
state = {
lists: {
ids: [ 'list1ID', 'list2ID' ],
byId: {
'list1ID': {
id: 'list1ID',
name: 'list1'
},
'list2ID': {
id: 'list2ID',
name: 'list2'
},
}
...
},
items: {
ids: [ 'item1ID','item2ID' ],
byId: {
'item1ID': {
id: 'item1ID',
name: 'item1',
listId: 'list1ID'
},
'item2ID': {
id: 'item2ID',
name: 'item2',
listId: 'list2ID'
}
}
}
}
*/
// Here I'm using an immediately invoked function just
// to isolate the ids and byId variables and avoid duplicate
// declaration issues since we need them for both
// the lists and items reducers
const lists = (() => {
const ids = (ids = [], action = {}) => {
switch (action.type) {
// when receiving the temporary list
// we need to add the temporary id
// in the ids list
case 'TEMP_LIST_CREATED':
return [...ids, action.payload.list.id]
// when receiving the real list
// we need to remove the old temporary id
// and add the real id instead
case 'RECEIVE_CREATED_LIST':
return ids
.filter(id => id !== action.meta.oldId)
.concat([action.payload.list.id])
default:
return ids
}
}
const byId = (byId = {}, action = {}) => {
switch (action.type) {
// same as above, when the the temp list
// gets created we store it indexed by
// its temp id
case 'TEMP_LIST_CREATED':
return {
...byId,
[action.payload.list.id]: action.payload.list
}
// when we receive the real list we first
// need to remove the old one before
// adding the real list
case 'RECEIVE_CREATED_LIST': {
const {
[action.meta.oldId]: oldList,
...otherLists
} = byId
return {
...otherLists,
[action.payload.list.id]: action.payload.list
}
}
default:
return byId
}
}
return combineReducers({
ids,
byId
})
})()
const items = (() => {
const ids = (ids = [], action = {}) => {
switch (action.type) {
case 'TEMP_ITEM_CREATED':
return [...ids, action.payload.item.id]
case 'RECEIVE_CREATED_ITEM':
return ids
.filter(id => id !== action.meta.oldId)
.concat([action.payload.item.id])
default:
return ids
}
}
const byId = (byId = {}, action = {}) => {
switch (action.type) {
case 'TEMP_ITEM_CREATED':
return {
...byId,
[action.payload.item.id]: action.payload.item
}
case 'RECEIVE_CREATED_ITEM': {
const {
[action.meta.oldId]: oldList,
...otherItems
} = byId
return {
...otherItems,
[action.payload.item.id]: action.payload.item
}
}
// when we receive a real list
// we need to reappropriate all
// the items that are referring to
// the old listId to the new one
case 'RECEIVE_CREATED_LIST': {
const oldListId = action.meta.oldId
const newListId = action.payload.list.id
const _byId = {}
for (let id of Object.keys(byId)) {
let item = byId[id]
_byId[id] = {
...item,
listId: item.listId === oldListId ? newListId : item.listId
}
}
return _byId
}
default:
return byId
}
}
return combineReducers({
ids,
byId
})
})()
const reducer = combineReducers({
lists,
items
})
/* REDUCERS & ACTIONS */
This is more of a general question, but it's based on Victor Savkin's post Managing state in angular2.
Let's consider the approach described there that uses RxJS:
interface Todo { id: number; text: string; completed: boolean; }
interface AppState { todos: Todo[]; visibilityFilter: string; }
function todos(initState: Todo[], actions: Observable<Action>): Observable<Todo[]> {
return actions.scan((state, action) => {
if (action instanceof AddTodoAction) {
const newTodo = {id: action.todoId, text: action.text, completed: false};
return [...state, newTodo];
} else {
return state;
}
}, initState);
}
All is fine, but let's add a few more requirements:
Upon adding a new Todo item, its text should be sent to the backend and analysed to extract a possible due date and location.
If the Todo item has a due date, it should be added to my Google calendar.
So if I add the Todo "Get my hair done at Sally's Saloon on Thursday", the first call would get back from the backend Sally's Saloon and a date set to this week's (or next week's) Thursday, and the second call would add this todo to my Google calendar and mark the item as being in the calendar.
So my new Todo item structure might look something like this:
interface Todo {
id: number;
text: string;
completed: boolean;
location?: Coordinates;
date?: Date;
inCalendar?: boolean;
parsed?: boolean;
}
And now I have two side effects:
After a todo has been added, I need to parse the text.
After a date has been added to the Todo, I need to add it to the calendar.
How do I deal with these side effects in this approach? Redux says that reducers should be kept pure, and it also has the notion of Sagas.
Option 1 - fire new event(s) for side effects when todo is added
function todos(initState: Todo[], actions: Observable<Action>): Observable<Todo[]> {
return actions.scan((state, action) => {
if (action instanceof AddTodoAction) {
const newTodo = {id: action.todoId, text: action.text, completed: false};
actions.onNext(new ParseTodoAction(action.todoId));
return [...state, newTodo];
} else if (action instanceof ParseTodoAction) {
const todo = state.find(t => t.todoId === action.todoId)
parserService
.parse(todo.todoId, todo.text)
.subscribe(r => actions.onNext(new TodoParsedAction(todo.todoId, r.coordinates, r.date)))
} else {
return state;
}
}, initState);
}
But this will fail, because the new todo is not yet available in the state.
I could of course use only TodoParsedAction and, instead of dispatching ParseTodoAction, just invoke the backend call inline, but this would also assume that the backend call takes long enough that, by the time it finishes, the state already has that new Todo item, which is trouble waiting to happen.
Option 2 - subscribe to actions and check each todo for missing properties
actions
.flatMap(todos => Observable.from(todos))
.subscribe(todo => {
if (!todo.coordinates && !todo.parsed) {
parserService
.parse(todo.todoId, todo.text)
.subscribe(r => actions.onNext(new TodoParsedAction(todo.todoId, r.coordinates, r.date)))
}
if (todo.date && todo.inCalendar === undefined) {
calendarService
.add(todo.text, todo.date)
.subscribe(_ => actions.onNext(new TodoInCalendarAction(todo.todoId, true)))
}
})
But this somehow does not feel right - shouldn't everything be managed by actions, and should I always loop through all of the todo items?
Your option 1 can't work as stated: actions is an Observable<Action>; observables are read-only and onNext isn't part of that API. You need an Observer<Action> to support option 1. This highlights the real flaw of option 1: your state function (the same thing as a Redux reducer) needs to be pure and side-effect free. That means it cannot and should not dispatch more actions.
Now in the blog article you reference, indeed the code is really passing in a Subject, which is both Observer and Observable. So you probably do have an onNext. But I can tell you that recursively publishing data to a Subject while you are handling data being published by that Subject will get you into no end of trouble and is rarely worth the headaches to make work correctly.
In Redux, the typical solution to invoking backend processing to enrich your state would be to dispatch multiple actions at the beginning when you have already decided to dispatch AddTodo. This can often be done by using redux-thunk and dispatching functions as "smart actions":
Instead of:
export function addToDo(args) {
return new AddToDoAction(args);
}
you'd do:
export function addToDo(args) {
return (dispatch) => {
dispatch(new AddToDoAction(args)); // if you want to dispatch the Todo before parsing
dispatch(parseToDo(args)); // handle parsing
};
}
export function parseToDo(args) {
return (dispatch) => {
if (thisToDoNeedsParsing(args)) {
callServerAndParse(args).then(result => {
// dispatch an action to update the Todo
dispatch(new EnrichToDoWithParsedData(result));
});
}
};
}
// UI code would do:
dispatch(addToDo(args));
The UI dispatches a smart action (thunk) which will dispatch the AddToDoAction to get the unparsed todo in your state (your UI can choose to not show it until the parse completes if you want). It then dispatches another smart action (thunk) which will actually call the server to get more data then dispatch an EnrichToDoWithParsedData action with the results so that your Todo can be updated.
As for updating the calendar... you can probably use the pattern above (inserting calls to possiblyUpdateCalendar() in both addToDo and parseToDo so that if the todo has all the stuff you need, it can update the calendar and, when that finishes, dispatch an action to mark the todo as added).
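A rough sketch of that helper, assuming a promise-based calendarService and the action-class style used above:
export function possiblyUpdateCalendar(todo) {
  return (dispatch) => {
    // only todos with a due date that are not yet in the calendar qualify
    if (!todo.date || todo.inCalendar) {
      return;
    }
    calendarService.add(todo.text, todo.date).then(() => {
      dispatch(new AddedToCalendar(todo.id));
    });
  };
}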
Now this example I've shown is Redux-specific and I don't think the RxJs-based example you are working from has anything like a thunk. One way to add support for this in your scheme is to add a flatMap operator to the subject that goes something like this:
let actionStream = actionsSubject.flatMap(action => {
if (typeof action !== "function") {
// not a thunk. just return it as a simple observable
return Rx.Observable.of(action);
}
// call the function and give it a dispatch method to collect any actions it dispatches
var actions = [];
var dispatch = a => actions.push(a);
action(dispatch);
// now return the actions that the action thunk dispatched
return Rx.Observable.from(actions);
});
// pass actionStream to your stateFns instead of passing the raw subject
var state$ = stateFn(initState, actionStream);
// Now your UI code *can* pass in "smart" actions:
actionSubject.onNext(addTodo(args));
// or "dumb" actions:
actionSubject.onNext(new SomeSimpleAction(args));
Notice all of that code above is in the code that dispatches an action. I didn't show any of your state function. Your state function would be pure and something like:
function todos(initState: Todo[], actions: Observable<Action>): Observable<Todo[]> {
return actions.scan((state, action) => {
if (action instanceof AddTodoAction) {
const newTodo = {id: action.todoId, text: action.text, completed: false};
return [...state, newTodo];
} else if (action instanceof EnrichTodoWithParsedData) {
// (replace the todo inside the state array with a new updated one)
} else if (action instanceof AddedToCalendar) {
// (replace the todo inside the state array with a new updated one)
} else {
return state;
}
}, initState);
}