Is using setState in every iteration of a loop bad practice? - javascript

Here is a little code snippet:
async componentDidMount() {
  ...
  this.state.postList.forEach(element => {
    this.fetchItem(element);
  });
}
async fetchItem(query) {
  ...
  this.setState(previousState => {
    const list = [...previousState.data, data];
    return { data: list };
  });
}
I'm curious to know if using setState in every iteration of a forEach loop is a bad idea or not. I'm suspecting that it impacts performance, but I'd like to know for sure because this seemed like the simplest solution to this problem.

Here's an alternative approach: Update your fetchItem to just return the item. In your componentDidMount use Promise.all to get all the items and then commit them to the state in a single operation.
async componentDidMount() {
  const items = await Promise.all(this.state.postList.map(element => this.fetchItem(element)));
  this.setState({ data: items });
}
async fetchItem(query) {
  const item = await getItem(query); // however you accomplish this
  return item;
}

I'm curious to know if using setState in every iteration of a forEach loop is a bad idea or not.
If it were directly inside the iteration then definitely yes, as React then has to merge all the updates you make during the loop, which will probably take much more time than just setting the state once after the loop.
In your case, however, you start an asynchronous action in each iteration, and since all these asynchronous tasks finish at different times, the updates aren't all run at once. The main benefit of your approach is that if these asynchronous tasks take some time (e.g. if you fetch a lot of data for each), some information can already be shown while the rest is still loading. If all those asynchronous calls only load a small amount of data, then you should instead change your API to deliver all the data at once. So whether it's good or bad really depends on your use case.
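For completeness, a rough sketch (mine, not from the answers) that combines both ideas: commit each item to state as it arrives so the UI fills in progressively, and flip a loading flag once everything has settled. It assumes fetchItem returns the item, as in the alternative above, and that Promise.allSettled (ES2020) is available; the loading flag is hypothetical:
async componentDidMount() {
  const tasks = this.state.postList.map(async element => {
    const item = await this.fetchItem(element);
    // the functional form merges concurrent updates safely
    this.setState(previousState => ({ data: [...previousState.data, item] }));
  });
  await Promise.allSettled(tasks); // resolves once every request has succeeded or failed
  this.setState({ loading: false }); // hypothetical flag, not part of the question
}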

Related

React setState within Promise.all

I'm working on a project where we would like to run multiple fetches in parallel against a very slow API. Ideally, we would like to populate our interface for the user as this data is received, accumulating results as they arrive. These requests may or may not resolve in the order in which the API calls were made.
Most use cases of Promise.all with a setState involve setting state after all promises have resolved. However, what I'm looking to do demands setting state as a side effect within the child promises themselves, I believe.
So this is (simplified) what I am doing to achieve this:
const request = async (setState, endpoint) => {
  const response = await fetch(endpoint);
  const data = await response.json();
  setState(state => ({ ...state, ...data }));
};
// Called within React component as a side effect
const fetchAllData = (setState) => {
  Promise.all([
    request(setState, url_1),
    request(setState, url_2),
    request(setState, url_3)
  ]);
};
Now, I'm running some testing and this does appear to work. I believe I should not be running into race conditions with the state because setState is being passed a function. However, I do wonder if I'm doing something dangerous with respect to React, updating state, and rendering.
Is there anything wrong with this picture?
There's nothing wrong with immediately updating state from each individual promise; this will work just fine. You might have a race condition if every request attempts to update the same bit of data, but as long as they write different parts of your state you should be fine (the state updater callback pattern is necessary though).
The only thing wrong with your code is the missing error handling, and that the Promise.all is currently a bit superfluous.
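To make that concrete, here is a minimal sketch of what the missing error handling could look like (my own sketch, keeping the question's shape; the error key in state is a hypothetical choice):
const request = async (setState, endpoint) => {
  try {
    const response = await fetch(endpoint);
    if (!response.ok) throw new Error(`HTTP ${response.status}`); // fetch does not reject on HTTP error statuses
    const data = await response.json();
    setState(state => ({ ...state, ...data }));
  } catch (err) {
    setState(state => ({ ...state, error: String(err) })); // surface the failure in state
  }
};
If you do want to act once everything is done, return the Promise.all (or use Promise.allSettled) and await it in the caller; otherwise it can be dropped.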

.subscribe is not being executed in the correct order

Using Angular and the RxJS library.
I have a simple method:
method() {
  const data = this.data;
  const moreData = this.moreData;
  this.anotherMethodOne(data);
  this.anotherMethodTwo(moreData);
}
As you can see I am calling two other methods. anotherMethodOne() should be executed before anotherMethodTwo() - and it is.
Within anotherMethodOne() I have a .subscribe:
anotherMethodOne(data) {
  // at this point it jumps out of this method and continues in the parent method, but returns later
  this.service.getService(id).subscribe(res => {
    res;
  });
}
As soon as my debugger hits the .subscribe call it jumps back into the parent method() and proceeds to execute this.anotherMethodTwo(moreData). It then jumps back into this.anotherMethodOne(data) to finally complete the .subscribe; however, by that time it's too late.
I need to make sure this.anotherMethodOne(data) is completed fully before this.anotherMethodTwo(moreData) runs or ensure that the .subscribe runs before it proceeds any further.
I started looking at switchMap and concat in the RxJS library, but unsure how that would be implemented in my case.
You cannot guarantee that an observable will have emitted before you proceed to use its emitted value. So you need to return the observable from the methods and subscribe where its value is required.
For observables that depend on the emissions of other observables, you can use a higher-order mapping operator like switchMap. More info about them here.
method() {
  const data = this.data;
  const moreData = this.moreData;
  this.anotherMethodOne(data).pipe(
    switchMap((someValue) =>
      this.anotherMethodTwo(moreData)
    )
  ).subscribe({
    // callbacks
  });
}
anotherMethodOne(data): Observable<any> {
  return this.http.get('url');
}
anotherMethodTwo(moreData): Observable<any> {
  return this.http.get('url');
}
When it comes to asynchronous functions, you should stick with the reactive approach: don't try to determine when a certain piece of code has finished, but instead wait for it to finish and then invoke the callback or the responsible handler. Think about this: what if the subscription never emits anything? Is that okay? What if it takes a really long time to emit? Is that okay too?
But to fix the problem at hand, simply move anotherMethodTwo into the subscription. Then when something is returned, anotherMethodTwo is run.
method() {
  const data = this.data;
  const moreData = this.moreData;
  this.anotherMethodOne(data, moreData);
}
anotherMethodOne(data, moreData) {
  this.service.getService(id).subscribe(res => {
    console.log(res);
    this.anotherMethodTwo(moreData);
  });
}
Going down this path leads to the same nesting as async callback hell, a style that is not recommended, but it will do for now.
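If you prefer promises over nested subscriptions, a hedged alternative sketch (assuming anotherMethodOne returns its observable, as in the first answer; note that toPromise was superseded by firstValueFrom / lastValueFrom in RxJS 7+):
async method() {
  const data = this.data;
  const moreData = this.moreData;
  const res = await this.anotherMethodOne(data).toPromise(); // resolves when the observable completes
  this.anotherMethodTwo(moreData);
}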

AngularFire firestore get / snapshotchanges / valuechanges action on observable is not async?

Hi, I am seeing some very strange behaviour.
I am iterating over some documents and creating promises so that, as each document is fetched, the UI is updated.
However, although each promise is independent, Firestore / AngularFire appears to wait for all of the promises.
Example:
for (const event of events) {
  this.eventService.getEventActivitiesAndSomeStreams(this.user,
      event.getID(),
      [DataLatitudeDegrees.type, DataLongitudeDegrees.type])
    .pipe(take(1)).toPromise().then((fullEvent) => {
      this.logger.info(`Promise completed`);
    });
}
One would expect each promise to log "Promise completed" gradually, as its data arrives.
However, they are all printed at once. The promises don't seem to resolve one by one but all together: there is a long wait until the first console log is printed, and then all the promises print it.
So a progress bar that I would expect to fill little by little instead jumps to complete all at once.
The inner call, this.eventService.getEventActivitiesAndSomeStreams, looks like this:
return this.afs
  .collection('users')
  .doc(userID)
  .collection('events')
  .doc(eventID)
  .collection('activities')
  .doc(activityID)
  .collection('streams', ((ref) => {
    return ref.where('type', 'in', typesBatch);
  }))
  .get()
  .pipe(map((documentSnapshots) => {
    return documentSnapshots.docs.reduce((streamArray: StreamInterface[], documentSnapshot) => {
      // does nothing more than create a class instance from the JSON object Firestore passes back
      streamArray.push(this.processStreamDocumentSnapshot(documentSnapshot));
      return streamArray;
    }, []);
  }));
Now, if I put an await inside the for loop, this of course works as it should, completing the promises one by one, but then it takes a lot of time.
I also tried to not use AngularFire and use the native JS SDK with the same effect.
I suspect that IndexedDB or some other Firebase internals could be causing this.
What am I missing here, and how can I have the desired behaviour if possible?
You could repro this with a users -> events -> something Firestore collection structure, where each user has, say, 500 events and each of those events has 2 more docs.
Then get all the events for the user and, for each one, create a promise that returns the 2 "something" documents, inside a for loop.
This behavior is pretty expected and has nothing at all to do with Firebase. You're iterating over an array and sending out requests. There is no waiting or delay between items, so the for loop (without await statements) finishes in an imperceptibly small amount of time, which means all of the requests are sent out within milliseconds of each other, essentially at the same time. So their responses should be expected to arrive at basically the same time as well.
You've stated that you don't want to use await statements and iterate one by one, so it's tough to know exactly what you do want or expect to happen. Maybe you want the requests spaced 0.5 seconds apart? If so, you need to write that logic:
timer(0, 500).pipe( // put whatever ms time between requests you want here
  take(events.length),
  // mergeMap rather than switchMap, so a new timer emission
  // does not cancel a request that is still in flight
  mergeMap(i => {
    return this.eventService.getEventActivitiesAndSomeStreams(this.user,
      events[i].getID(),
      [DataLatitudeDegrees.type, DataLongitudeDegrees.type]).pipe(take(1));
  })
).subscribe(fullEvent => {
  this.logger.info(`Promise completed`);
});
(I removed the promises because I don't see why they're being used in the first place, and this kind of control is easier with RxJS in my opinion.)
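If you would rather stay with promises, here is a rough equivalent sketch of the same idea (the delay helper is hypothetical, not from any library used above): start each request without awaiting its response, but wait 500 ms between starts:
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

async loadSpaced(events) {
  for (const event of events) {
    // start the request, but do not await its completion
    this.eventService.getEventActivitiesAndSomeStreams(this.user,
        event.getID(),
        [DataLatitudeDegrees.type, DataLongitudeDegrees.type])
      .pipe(take(1)).toPromise()
      .then((fullEvent) => this.logger.info(`Promise completed`));
    await delay(500); // spaces out the request starts; responses still arrive independently
  }
}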

Better way to "loop" promises

This is a post that might come across as quite conceptual, since I start with a lot of pseudo code. At the end you'll see the use case for this problem, though a solution would be a "tool I can add to my tool-belt of useful programming techniques".
The problem
Sometimes one might need to create multiple promises and do something after all of them have ended, or create multiple promises based on the results of the previous promises. An analogy can be made to creating an array of values instead of a single value.
There are two basic cases to consider: the one where the number of promises is independent of the results of said promises, and the one where it is dependent. Simple pseudo code of what "could" be done:
for (let i = 0; i < 10; i++) {
  promise(...)
    .then(...)
    .catch(...);
}.then(function(result) {
  // all promises finished, execute this code now
})
This basically creates n (10) promises, and the final code would be executed after all the promises are done. Of course the syntax doesn't work in JavaScript, but it shows the idea. This problem is relatively easy, and could be called completely asynchronous.
Now the second problem is like:
while (continueFn()) {
  promise(...)
    .then(/* potentially changing the outcome of continueFn */)
    .catch(/* potentially changing the outcome of continueFn */)
}.then(function(result) {
  // all promises finished, execute this code now
})
This is much more complex, as one can't just start all the promises and then wait for them to finish: in the end you'll have to go "promise-by-promise". This second case is what I wish to figure out (if one can do the second case, one can also do the first).
The (bad) solution
I do have a working "solution". It is not a good solution, as can probably quickly be seen; after the code I'll talk about why I dislike this method. Basically, instead of looping it uses recursion, so the "promise" (or a wrapper around a promise, which is itself a promise) calls itself when it's fulfilled. In code:
function promiseFunction(state_obj) {
  return new Promise((resolve, reject) => {
    // initialize fields here
    let InnerFn = (stateObj) => {
      if (!stateObj.checkContinue()) {
        return resolve(stateObj);
      }
      ActualPromise(...)
        .then(result => {
          const newState = stateObj.cloneMe(); // we'll have to clone to prevent asynchronous write problems
          newState.changeStateBasedOnResult(result);
          return InnerFn(newState);
        })
        .catch(err => {
          return reject(err); // forward error handling (must be done manually?)
        });
    };
    InnerFn(initialState); // kickstart
  });
}
Important to note is that the stateObj should not change during its lifetime, but it can be really simple. In my real problem (which I'll explain at the end) the stateObj was simply a counter (number), and the if (!stateObj.checkContinue()) was simply if (counter < maxNumber).
Now this solution is really bad; it is ugly, complicated, error prone and, finally, impossible to scale.
Ugly because the actual business logic is buried in a mess of code. It doesn't show "on the tin" that it is actually simply doing what the while loop above does.
Complicated because the flow of execution is impossible to follow. First of all, recursive code is never "easy" to follow, but more importantly you also have to keep in mind thread safety with the state object (which might also hold a reference to another object to, say, store a list of results for later processing).
It's error prone since there is more redundancy than strictly necessary: you'll have to explicitly forward the rejection. Debugging tools such as a stack trace also quickly become really hard to look through.
The scalability is also a problem at some point: this is a recursive function, so at some depth it will create a stack overflow / hit the maximum recursion depth. Normally one could either optimize by tail recursion or, more commonly, create a virtual stack (on the heap) and transform the function into a loop using that manual stack. In this case, however, one can't change the recursive calls to a loop with a manual stack, simply because of how promise syntax works.
The alternative (bad) solution
A colleague suggested an alternative approach to this problem, something that initially looked much less problematic, but that I ultimately discarded since it goes against everything promises are meant to do.
What he suggested was basically looping over the promises as per above. But instead of letting the loop continue there would be a variable "finished" and an inner loop that constantly checks for this variable; so in code it would be like:
function promiseFunction(stateObj) {
  return new Promise((resolve, reject) => {
    while (stateObj.checkContinue()) {
      let finished = false;
      let failed = false;
      let res = null;
      actualPromise(...)
        .then(result => {
          res = result;
          finished = true;
        })
        .catch(error => {
          res = error;
          failed = true;
          finished = true;
        });
      while (!finished) {
        sleep(100); // to not burn our cpu
      }
      if (failed) {
        return reject(res);
      }
      stateObj.changeStateBasedOnResult(res);
    }
  });
}
While this is less complicated, since it's now easy to follow the flow of execution, it has problems of its own: not least that it's unclear when this function will end, and it's really bad for performance.
Conclusion
Well, there isn't much to conclude yet; I'd really like something as simple as the first pseudo code above. Maybe another way of looking at things, so that one doesn't have the trouble of deeply recursive functions.
So how would you rewrite a promise that is part of a loop?
The real problem used as motivation
Now this problem has roots in a real thing I had to create. While this problem is now solved (by applying the recursive method above), it might be interesting to know what spawned it. The real question, however, isn't about this specific case, but rather about how to do this in general with any promise.
In a sails app I had to check a database, which had orders with order-ids. I had to find the first N "non-existing order-ids". My solution was to get the "first" M orders from the database and find the missing numbers within them. Then, if the number of missing numbers was less than N, get the next batch of M orders.
Now, to get an item from a database one uses a promise (or callback), so the code won't wait for the database data to return. So I'm basically at the "second problem":
function GenerateEmptySpots(maxNum) {
  return new Promise((resolve, reject) => {
    // initialize fields
    let InnerFn = (counter, r) => {
      if (r <= 0) {
        return resolve(true);
      }
      let query = {
        orderNr: {'>=': counter, '<': (counter + maxNum)}
      };
      Order.find({
        where: query,
        sort: 'orderNr ASC'})
        .then(result => {
          const n = findNumberOfMissingSpotsAndStoreThemInThis();
          return InnerFn(counter + maxNum, r - n);
        })
        .catch(err => {
          return reject(err);
        });
    };
    InnerFn(0, maxNum); // kickstart
  });
}
EDIT:
A small post scriptum: the sleep function in the alternative is just from another library which provided a non-blocking sleep (not that it matters).
Also, I should have indicated that I'm using ES2015.
The alternative (bad) solution
…doesn't actually work, as there is no sleep function in JavaScript. (If you have a runtime library which provides a non-blocking sleep, you could just as well have used a plain while loop and waited, non-blockingly, for the promise inside it in the same style.)
The bad solution is ugly, complicated, error prone and finally impossible to scale.
Nope. The recursive approach is indeed the proper way to do this.
Ugly because the actual business logic is buried in a mess of code. And error-prone as you'll have to explicitly forward the rejection.
This is just caused by the Promise constructor antipattern! Avoid it.
Complicated because the flow of execution is impossible to follow. Recursive code is never "easy" to follow
I'll challenge that statement. You just have to get accustomed to it.
You also have to keep in mind thread safety with the state object.
No. There is no multi-threading with shared memory access in JavaScript. If you worry about concurrency where other stuff affects your state object while the loop runs, that will be a problem with any approach.
The scalability is also a problem at some point: this is a recursive function, so at some depth it will create a stack overflow
No. It's asynchronous! The callback will run on a new stack, it's not actually called recursively during the function call and does not carry those stack frames around. The asynchronous event loop already provides the trampoline to make this tail-recursive.
The good solution
function promiseFunction(state) {
  const initialState = state.cloneMe(); // clone once for this run
  // initialize fields here
  return (function recurse(localState) {
    if (!localState.checkContinue())
      return Promise.resolve(localState);
    else
      return actualPromise(…).then(result =>
        recurse(localState.changeStateBasedOnResult(result))
      );
  }(initialState)); // kickstart
}
The modern solution
You know, async/await is available in every environment that implemented ES6, as all of them also implemented ES8 now!
async function promiseFunction(state) {
  let localState = state.cloneMe(); // clone once for this run
  // initialize fields here
  while (localState.checkContinue()) {
    const result = await actualPromise(…);
    localState = localState.changeStateBasedOnResult(result);
  }
  return localState;
}
Let’s begin with the simple case: You have N promises that all do some work, and you want to do something when all the promises have finished. There’s actually a built-in way to do exactly that: Promise.all. With that, the code will look like this:
let promises = [];
for (let i = 0; i < 10; i++) {
  promises.push(doSomethingAsynchronously());
}
Promise.all(promises).then(arrayOfResults => {
  // all promises finished
});
Now, the second case is a situation you encounter all the time when you want to continue doing something asynchronously depending on the previous asynchronous result. A common example (that's a bit less abstract) would be to simply fetch pages until you hit the end.
With modern JavaScript, there is luckily a really readable way to write this: asynchronous functions and await:
async function readFromAllPages() {
  let shouldContinue = true;
  let pageId = 0;
  let items = [];
  while (shouldContinue) {
    // fetch the next page
    let result = await fetchSinglePage(pageId);
    // store items
    items.push.apply(items, result.items);
    // evaluate whether we want to continue
    if (!result.items.length) {
      shouldContinue = false;
    }
    pageId++;
  }
  return items;
}
readFromAllPages().then(allItems => {
  // items have been read from all pages
});
Without async/await, this will look a bit more complicated, since you need to manage all this yourself. But unless you try to make it super generic, it shouldn’t look that bad. For example, the paging one could look like this:
function readFromAllPages() {
  let items = [];
  function readNextPage(pageId) {
    return fetchSinglePage(pageId).then(result => {
      items.push.apply(items, result.items);
      if (!result.items.length) {
        return Promise.resolve(null);
      }
      return readNextPage(pageId + 1);
    });
  }
  return readNextPage(0).then(() => items);
}
First of all, recursive code is never "easy" to follow
I think the code is fine to read. As I’ve said: Unless you try to make it super generic, you can really keep it simple. And naming also helps a lot.
but more importantly you also have to keep in mind thread safety with the state object
No, JavaScript is single-threaded. You are doing things asynchronously, but that does not necessarily mean that things are happening at the same time. JavaScript uses an event loop to work off asynchronous processes, where only one code block runs at a time.
The scalability is also a problem at some point: this is a recursive function, so at some depth it will create a stack overflow / hit the maximum recursion depth.
Also no. This is recursive in the sense that the function references itself. But it will not call itself directly. Instead it will register itself as a callback when an asynchronous process finishes. So the current execution of the function will finish first, then at some point the asynchronous process finishes, and then the callback will eventually run. These are (at least) three separate steps from the event loop, which all run independently from one another, so you do not have a problem with recursion depth here.
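A quick way to convince yourself of this (my own demo, not from the answer): a million levels of asynchronous "recursion" complete fine, because each callback starts on an empty stack:
function loop(n) {
  if (n === 0) return Promise.resolve('done');
  return Promise.resolve().then(() => loop(n - 1)); // runs later, on a fresh microtask stack
}
loop(1000000).then(console.log); // logs "done"; synchronous recursion this deep would overflow the stack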
The crux of the matter seems to be that "the actual business logic is buried in a mess of code".
Yes it is ... in both solutions.
Things can be separated out by:
having an asyncRecursor function that simply knows how to (asynchronously) recurse.
allowing the recursor's caller(s) to specify the business logic (the terminal test to apply, and the work to be performed).
It is also better to let the caller(s) be responsible for cloning the original object, rather than having the recursor assume cloning is always necessary. The caller really needs to be in charge in this regard.
function asyncRecursor(subject, testFn, workFn) {
  // asyncRecursor orchestrates the recursion
  if (testFn(subject)) {
    // the `Promise.resolve()` wrapper safeguards against workFn() not being thenable
    return Promise.resolve(workFn(subject)).then(result => asyncRecursor(result, testFn, workFn));
  } else {
    // the `Promise.resolve()` wrapper safeguards against `testFn(subject)` failing at the first call of asyncRecursor()
    return Promise.resolve(subject);
  }
}
Now you can write your caller as follows:
// example caller
function someBusinessOrientedCallerFn(state_obj) {
  // ... preamble ...
  return asyncRecursor(
    state_obj, // or state_obj.cloneMe() if necessary
    (obj) => obj.checkContinue(), // testFn
    (obj) => somethingAsync(...).then((result) => { // workFn
      obj.changeStateBasedOnResult(result);
      return obj; // return `obj` or anything you like, providing it is a valid parameter to pass to `testFn()` and `workFn()` at the next recursion
    })
  );
}
You could theoretically incorporate your terminal test inside the workFn but keeping them separate will help enforce the discipline, in writers of the business-logic, to remember to include a test. Otherwise they will consider it optional and sure as you like, they will leave it out!
Sorry, this doesn't use Promises, but sometimes abstractions just get in the way.
This example, which builds from #poke's answer, is short and easy to comprehend.
function readFromAllPages(done = function() {}, pageId = 0, items = []) {
  fetchSinglePage(pageId, res => {
    if (res.items.length) {
      readFromAllPages(done, pageId + 1, items.concat(res.items));
    } else {
      done(items);
    }
  });
}
readFromAllPages(allItems => {
  // items have been read from all pages
});
This has only a single depth of nested functions. In general, you can solve the nested callback problem without resorting to a subsystem that manages things for you.
If we drop the parameter defaults and change the arrow functions, we get code that runs in legacy ES3 browsers.
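For illustration, that legacy-friendly rewrite might look like this (same logic, with the defaults emulated and the arrow function replaced):
function readFromAllPages(done, pageId, items) {
  done = done || function() {};
  pageId = pageId || 0;
  items = items || [];
  fetchSinglePage(pageId, function(res) {
    if (res.items.length) {
      readFromAllPages(done, pageId + 1, items.concat(res.items));
    } else {
      done(items);
    }
  });
}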

Observable async vs sync

How do I know when an Observable producer is async or sync?
A sync example:
Observable.of([1, 2, 3])
Another async example (ngrx Store, see here):
this.store.take(1);
And now an obvious async example:
this.http.get(restUrl)
I fully understand how this works, and that some Observables can be sync and others async. What I don't understand is how I can tell the difference in advance. Is there, for example, an obvious interface on the producer that tells me it will be async?
tl;dr
The main reason this question has come up is because of the answer from the link above (here).
#Sasxa has (correctly) answered that we can use a synchronous call to get the latest value from the ngrx store:
function getState(store: Store<State>): State {
  let state: State;
  store.take(1).subscribe(s => state = s);
  return state;
}
However, knowing that observables are usually asynchronous, I saw this and immediately thought RACE CONDITION! The method could return an undefined value before subscribe has invoked the callback.
I've debugged through the internals of Store and Observable, and this example is indeed synchronous, but how should I have known that? For those who know ngrx, the select method of store (used to get the latest values) is asynchronous without a doubt, as this is what gives us the reactive GUI, which is why I came to the assumption I did.
It means that I cannot refactor the code above as follows:
function getLatest(observable: Observable<any>): any {
  let obj: any;
  observable.take(1).subscribe(latest => obj = latest);
  return obj;
}
I could call this with any Observable and it would work for synchronous producers - it may even work SOME OF THE TIME for async producers, but this is without a doubt a race condition if an async observable is passed in.
It is possible to determine that an observable is asynchronous for sure if it was directly scheduled with an asynchronous scheduler (Scheduler.async or Scheduler.asap); the scheduler is exposed as foo$.scheduler:
let schedulers = [null, Scheduler.asap];
let randomScheduler = schedulers[~~(Math.random() * 2)];
let foo$ = Observable.of('foo', randomScheduler);
This information becomes even less available when foo$ is processed further, e.g. chained with other operators.
And it's impossible to determine whether values will be produced synchronously (on the same tick) or asynchronously, because this depends on observable internals:
let foo$ = new Observable(observer => {
  if (~~(Math.random() * 2))
    setTimeout(() => observer.next('foo'));
  else
    observer.next('foo');
});
TL;DR: it's impossible to know this for sure.
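Given that, one defensive option (my sketch, not from the answers) is to detect the asynchronous case with a flag and fail loudly instead of silently returning undefined:
function getLatest(observable: Observable<any>): any {
  let value: any;
  let emitted = false;
  observable.take(1).subscribe(latest => {
    value = latest;
    emitted = true;
  });
  if (!emitted) {
    // the producer is asynchronous; a synchronous read is simply not possible here
    throw new Error('Observable did not emit synchronously');
  }
  return value;
}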
It is hard to figure out whether an observable you get is sync or async (imagine you get observables returned from a third-party library). That's why you should write your code in a fashion where the execution order is controlled and predictable.
That's also why there are operators like concat, combineLatest, forkJoin, switchMap, race and merge: they help you get the order and performance right for different scenarios.
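For example, a minimal sketch in the classic Observable style used above (first$ and second$ are hypothetical sources): concat subscribes to second$ only after first$ completes, so the relative order is guaranteed whether each source is sync or async:
Observable.concat(first$, second$).subscribe(value => {
  // every value from first$ arrives before any value from second$
});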
