This article describes the refCount operator and explains that, in order to prevent the unsubscription of observable A, we have to add delay(0) to the source observable, so that the source's notification is deferred until all subscribers have had a chance to subscribe:
import { Observable } from "rxjs/Observable";
import "rxjs/add/observable/defer";
import "rxjs/add/observable/of";
import "rxjs/add/operator/delay";
const source = Observable.defer(() => Observable.of(
Math.floor(Math.random() * 100)
)).delay(0);
Is 0 always enough? In other words, does passing zero guarantee that the notification will be delayed until all m.subscribe() statements have run, assuming they are all run immediately after the multicast statement, like this:
import { Subject } from "rxjs/Subject";
const m = source.multicast(() => new Subject<number>()).refCount();
m.subscribe(observer("a"));
m.subscribe(observer("b"));
In the above case we are only subscribing observers a and b. If we subscribed a million observers after the multicast statement, would delay(0) still guarantee that they all will be subscribed before the first source notification happens?
To understand the issue you must know that:
JavaScript is single threaded;
asynchronous events run via the event loop (as micro tasks and macro tasks);
when an async event happens, its callback is added to the event loop;
after the async event is added to the event loop, JavaScript continues with the remaining synchronous code;
once no synchronous code is left, the queued event callbacks are run.
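To make this ordering concrete, here is a minimal sketch in plain JavaScript (no RxJS) showing that synchronous code always finishes before any queued callback runs, and that micro tasks (promises) run before macro tasks (timers):
console.log('sync 1');

setTimeout(() => console.log('macro task (setTimeout)'), 0); // queued as a macro task

Promise.resolve().then(() => console.log('micro task (promise)')); // queued as a micro task

console.log('sync 2');

// Output:
// sync 1
// sync 2
// micro task (promise)
// macro task (setTimeout)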
This Observable would be synchronous if you didn't add delay(0):
const source = Observable.defer(() => Observable.of(
Math.floor(Math.random() * 100)
)).delay(0);
When the first subscription happens (subscribing is synchronous code), the Observable emits immediately, because it is also synchronous. But if you add delay(0) (similar to setTimeout), JavaScript will wait until all synchronous code (all the subscribe() calls in this case) has executed. After that it will run the asynchronous delay(0).
And here:
const m = source.multicast(() => new Subject<number>()).refCount();
m.subscribe(observer("a"));
m.subscribe(observer("b"));
You have a source Observable which becomes asynchronous once its emission is passed to delay(0). At that point, the synchronous code continues (all your other subscribe() calls) and after it is done, the asynchronous delay(0) will emit.
So it is safe even for millions of subscribe() calls to be executed in this case.
p.s.
multicast(() => new Subject<number>()).refCount() is exactly the same as share() - it combines multicast with a Subject factory and counts active subscriptions with refCount.
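A minimal sketch of that equivalence, using the same RxJS 5 patch-import style as the snippets above (the interval source is an arbitrary example):
import { Observable } from "rxjs/Observable";
import { Subject } from "rxjs/Subject";
import "rxjs/add/observable/interval";
import "rxjs/add/operator/multicast";
import "rxjs/add/operator/share";

const src = Observable.interval(1000);

// These two shared observables behave identically:
const viaMulticast = src.multicast(() => new Subject<number>()).refCount();
const viaShare = src.share();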
Related
I'm creating an Action handler in NgXS (a state management library similar to NgRx).
Assuming that the action will be dispatched multiple times in a short time period - is there a chance that more than one execution will reach the doAsyncStuff() method before the first one finishes? To be more specific - can the execution of the code be interrupted between the 3rd and 4th line by another execution, so that both "threads" pass the 'if' check? The getState and setState methods are synchronous.
@Action(ExampleAction)
public async handleAction(ctx: StateContext<StateModel>) {
  if (ctx.getState().state !== 'loading') {
    ctx.setState(produce(ctx.getState(), (model: StateModel) => { model.state = 'loading'; }));
    try {
      await doAsyncStuff();
      ctx.setState(produce(ctx.getState(), (model: StateModel) => { model.state = 'success'; }));
    } catch (e) {
      ctx.setState(produce(ctx.getState(), (model: StateModel) => { model.state = 'error'; }));
    }
  }
}
I've read some answers regarding multiple threads in JS, and some of them suggest that the synchronous part will never be interrupted in JS, but I also found this:
How to synchronize access to private members of a javascript object
"First, ECMAScript doesn't require JS to be executed in a single thread. Second, setTimeout() etc. are specified by HTML5 where it is clearly defined that the handler can indeed run in a parallel thread: html.spec.whatwg.org/multipage/infrastructure.html#in-parallel"
I'd like to know whether the execution of the code can be interrupted between the 3rd and 4th line, and whether that is a real possibility in the most common browsers.
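For what it's worth, here is a minimal sketch (all names hypothetical) of why the answer is "no" for the synchronous stretch: another invocation can only interleave at an await point, never between two adjacent synchronous statements:
let state = 'idle';

// hypothetical async operation
function doAsyncStuff(): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, 100));
}

async function handle(id: number) {
  // Everything from here to the first await runs without interruption.
  if (state !== 'loading') {
    state = 'loading'; // no other invocation can run between the check and this write
    try {
      await doAsyncStuff(); // the only point where another invocation can interleave
      state = 'success';
    } catch {
      state = 'error';
    }
  } else {
    console.log(`invocation ${id} skipped: already loading`);
  }
}

handle(1);
handle(2); // logs "invocation 2 skipped: already loading"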
When using Javascript promises, does the event loop get blocked?
My understanding is that using await & async makes the stack stop until the operation has completed. Does it do this by blocking the stack, or does it act similar to a callback and pass off the process to an API of sorts?
When using Javascript promises, does the event loop get blocked?
No. Promises are only an event notification system. They aren't an operation themselves. They simply respond to being resolved or rejected by calling the appropriate .then() or .catch() handlers, and if chained to other promises, they can delay calling those handlers until the promises they are chained to also resolve/reject. As such, a single promise doesn't block anything and certainly does not block the event loop.
My understanding is that using await & async makes the stack stop until the operation has completed. Does it do this by blocking the stack, or does it act similar to a callback and pass off the process to an API of sorts?
await is simply syntactic sugar that replaces a .then() handler with slightly simpler syntax. But under the covers the operation is the same. The code that comes after the await is essentially put inside an invisible .then() handler, and there is no blocking of the event loop, just as there is no blocking with a .then() handler.
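A rough sketch of that equivalence (the endpoint is hypothetical):
// an async function using await...
async function getUser() {
  const res = await fetch('/user');
  return res.json();
}

// ...behaves like this .then() version:
function getUserViaThen() {
  return fetch('/user').then(res => res.json());
}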
Note to address one of the comments below:
Now, if you were to construct code that overwhelms the event loop with continually resolving promises (in some sort of infinite loop as proposed in some comments here), then the event loop will process those continually resolved promises from the microtask queue over and over and will never get a chance to process the macrotasks waiting in the event loop (other types of events). The event loop is still running and still processing microtasks, but if you keep stuffing new microtasks (resolved promises) into it, it may never get to the macrotasks. There seems to be some debate about whether to call this "blocking the event loop" or not. That's just a terminology question; what matters is what is actually happening. In this example of an infinite loop continually resolving a new promise, the event loop will keep processing those resolved promises, and the other events in the event queue will never get processed because they never reach the front of the line. This is more often referred to as "starvation" than "blocking", but the point is the same: macrotasks may not get serviced if you continually and infinitely put new microtasks in the queue.
This pattern of an infinite loop that continually resolves a new promise should be avoided in JavaScript, as it can starve other events of a chance to be serviced.
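A minimal sketch of that anti-pattern:
// Each resolved promise queues another microtask, so the microtask
// queue never drains and macrotasks (timers, I/O, rendering) never run.
function starve(): void {
  Promise.resolve().then(starve);
}
// starve(); // uncommenting this would starve the event loop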
Do Javascript promises block the stack
No, not the stack. The current job will run until completion before the Promise's callback starts executing.
When using Javascript promises, does the event loop get blocked?
Yes it does.
Different environments have different event-loop processing models, so I'll be talking about the one in browsers; but even though Node.js's model is a bit simpler, they actually expose the same behavior.
In a browser, Promise callbacks (PromiseReactionJobs in ES terms) are actually executed in what is called a microtask.
A microtask is a special task that gets queued in the special microtask-queue.
This microtask-queue is visited several times during a single event-loop iteration, at what are called microtask-checkpoints: every time the JS call stack is empty, for instance after the main task is done, after rendering events like resize have fired, after every animation-frame callback, etc.
These microtask-checkpoints are part of the event-loop, and they block it for as long as they run, just like any other task.
What is special about them, however, is that a microtask scheduled from within a microtask-checkpoint will be executed by that same checkpoint.
This means that simply using a Promise doesn't let the event-loop breathe the way a setTimeout()-scheduled task would. Even though the JS stack has been emptied and the previous task has run to completion before the callback is called, you can still completely lock the event-loop, never allowing it to process any other task or even update the rendering:
const log = document.getElementById( "log" );
let now = performance.now();
let i = 0;
const promLoop = () => {
// only the final result will get painted
// because the event-loop can never reach the "update the rendering steps"
log.textContent = i++;
if( performance.now() - now < 5000 ) {
// this doesn't let the event-loop loop
return Promise.resolve().then( promLoop );
}
else { i = 0; }
};
const taskLoop = () => {
log.textContent = i++;
if( performance.now() - now < 5000 ) {
// this does let the event-loop loop
postTask( taskLoop );
}
else { i = 0; }
};
document.getElementById( "prom-btn" ).onclick = start( promLoop );
document.getElementById( "task-btn" ).onclick = start( taskLoop );
function start( fn ) {
return (evt) => {
i = 0;
now = performance.now();
fn();
};
}
// Posts a "macro-task".
// We could use setTimeout, but this method gets throttled
// to 4ms after 5 recursive calls.
// So instead we use either the incoming postTask API
// or the MessageChannel API which are not affected
// by this limitation
function postTask( task ) {
// Available in Chrome 86+ under the 'Experimental Web Platforms' flag
if( window.scheduler ) {
return scheduler.postTask( task, { priority: "user-blocking" } );
}
else {
const channel = postTask.channel ||= new MessageChannel();
channel.port1
.addEventListener( "message", () => task(), { once: true } );
channel.port2.postMessage( "" );
channel.port1.start();
}
}
<button id="prom-btn">use promises</button>
<button id="task-btn">use postTask</button>
<pre id="log"></pre>
So beware, using a Promise doesn't help at all with letting the event-loop actually loop.
Too often we see code that uses a batching pattern to avoid blocking the UI but fails completely at its goal, because it assumes Promises will let the event-loop loop. For this, keep using setTimeout() as a means of scheduling a task, or use the postTask API once it is widely available.
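For illustration, a batching sketch (hypothetical helper) that actually yields to the event loop by scheduling each slice of work as a macrotask:
function processInBatches<T>(items: T[], batchSize: number, handle: (item: T) => void): void {
  let i = 0;
  function nextBatch(): void {
    const end = Math.min(i + batchSize, items.length);
    for (; i < end; i++) handle(items[i]);
    if (i < items.length) {
      // macrotask: rendering and input events can run between batches
      setTimeout(nextBatch, 0);
    }
  }
  nextBatch();
}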
My understanding is that using await & async makes the stack stop until the operation has completed.
Kind of... when awaiting a value, the remainder of the function execution is added to the callbacks attached to the awaited Promise (which can be a new Promise resolving the non-Promise value).
So the stack is indeed cleared at this point, but the event loop is not blocked at all; on the contrary, it has been freed to execute anything else until the Promise resolves.
This means that you can very well await a never-resolving promise and still let your browser live correctly.
async function fn() {
console.log( "will wait a bit" );
const prom = await new Promise( (res, rej) => {} );
console.log( "done waiting" );
}
fn();
onmousemove = () => console.log( "still alive" );
move your mouse to check if the page is locked
An await blocks only the current async function, the event loop continues to run normally. When the promise settles, the execution of the function body is resumed where it stopped.
Every async/await can be transformed in an equivalent .then(…)-callback program, and works just like that from the concurrency perspective. So while a promise is being awaited, other events may fire and arbitrary other code may run.
As others mentioned above, Promises are just an event notification system and async/await is the same as then(). However, be very careful: you can "block" the event loop by executing a blocking operation. Take a look at the following code:
function blocking_operation_inside_promise() {
  return new Promise((res, rej) => {
    while (true) console.log(' loop inside promise ');
    res(); // never reached: the loop above never exits
  });
}
async function init(){
let await_forever = await blocking_operation_inside_promise()
}
init()
console.log('END')
The END log will never be printed. JS is single threaded and that thread is busy right now. You could say the whole thing is "blocked" by the blocking operation. In this particular case the event loop is not blocked per se, but it won't deliver events to your application because the main thread is busy.
JS/Node can be a very useful programming language, very efficient when using non-blocking operations (like network operations). But do not use it to execute very CPU-intensive algorithms. If you are in the browser, consider using Web Workers; if you are on the server side, use Worker Threads, child processes, or a microservice architecture.
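For instance, a minimal Worker Threads sketch (Node.js; file names and the workload size are hypothetical) that moves a CPU-heavy loop off the main thread:
// worker.js: runs the CPU-intensive loop in its own thread
const { parentPort, workerData } = require('worker_threads');
let sum = 0;
for (let i = 0; i < workerData; i++) sum += i;
parentPort.postMessage(sum);

// main.js: stays responsive while the worker crunches
const { Worker } = require('worker_threads');
const worker = new Worker('./worker.js', { workerData: 1e8 });
worker.on('message', (result) => console.log('result:', result));
console.log('main thread is free to handle other events');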
I have problems understanding the execution model/order of RxJS Observables and Subjects.
I read a lot of literature and blog posts about RxJS observables being the better promise since their subscription can be canceled and they can emit multiple results/values via next().
This question might be easy to answer, but how does RxJS create or simulate asynchrony?
Do RxJS Observables wrap around promises and create a sequence of promises to make the code execution asynchronous? Or is it because of the implemented observable pattern that change is propagated asynchronously to subscribers while code execution is still synchronous?
In my point of view, JavaScript code is asynchronous when it is handled via callbacks in any of the JavaScript callback queues processed by the event loop.
In RxJS, everything is about the producer. The producer can be anything, and it can be synchronous or asynchronous; thus Observables can emit either synchronously or asynchronously.
Let's try to understand what (a)synchronous behavior is. I will leave a couple of links for deeper understanding of the subject: a talk by Philip Roberts, another talk by Jake Archibald and Jake's blog if you don't like watching long videos.
Tl;dw(atch): all JavaScript code is synchronous and executes within a single thread. On the other hand, WebAPIs, which can be accessed from JS code, may execute some other stuff in other threads and bring the result back to the JavaScript runtime. The results are passed back to the runtime via the event loop and callbacks. So, when you say:
In my point of view, JavaScript code is asynchronous when it is handled via callbacks in any of the JavaScript callback queues processed by the event loop.
You're right. A callback handled by the event loop is an asynchronous callback. Examples of WebAPIs which have asynchronous callbacks are: setTimeout and setInterval, DOM events, XHR events, Fetch events, Web Workers, Web Sockets, Promises, MutationObserver callbacks and so on. The last two (Promises and MutationObservers) schedule tasks on a different queue (the microtask queue), but it's still asynchronous.
Back to RxJS. As I already said, in RxJS everything is about the producer. Observables wrap producers using observers. To quote Ben Lesh from the article:
[A producer] is anything you’re using to get values and pass them to observer.next(value).
This means that code that is synchronous (and all JS code is) will synchronously emit values when wrapped in an Observable. For example:
import { Observable } from 'rxjs';
const o = new Observable(observer => {
[1, 2, 3].forEach(i => observer.next(i));
observer.complete();
});
o.subscribe(x => console.log(x));
console.log('Anything logged after this?');
Logs:
1
2
3
Anything logged after this?
On the other hand, the next example uses setTimeout (which is not part of the ECMAScript specification and uses an asynchronous callback):
import { Observable } from 'rxjs';
const o = new Observable(observer => {
setTimeout(() => {
observer.next(1);
observer.complete();
}, 0);
});
o.subscribe(x => console.log(x));
console.log('Anything logged after this?');
Logs this:
Anything logged after this?
1
This means that, even though I subscribed to the source observable before the last console.log, we got the message before the observer sent the next value. This is because of the asynchronous nature of setTimeout.
In fact, RxJS has many ways of creating Observables, so you don't have to write your own implementations wrapping all of this.
So, improved first example:
import { from } from 'rxjs';
from([1, 2, 3]).subscribe(i => console.log(i));
console.log('Anything logged after this?');
Or improved second example:
import { of, scheduled, asyncScheduler } from 'rxjs';
scheduled(of(1), asyncScheduler).subscribe(i => console.log(i));
console.log('Anything logged after this?');
The scheduled creation function uses schedulers for dispatching events on different task queues. asyncScheduler internally uses setTimeout to dispatch the event to the macrotask queue, while asapScheduler internally uses Promises, as it uses the microtask queue.
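A small sketch of the resulting ordering (RxJS 6+ imports):
import { of, scheduled, asapScheduler, asyncScheduler } from 'rxjs';

scheduled(of('macrotask (asyncScheduler)'), asyncScheduler).subscribe(console.log);
scheduled(of('microtask (asapScheduler)'), asapScheduler).subscribe(console.log);
console.log('synchronous');

// Output:
// synchronous
// microtask (asapScheduler)
// macrotask (asyncScheduler)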
However, setTimeout is the most obvious and the most repeated example of asynchronous behavior. XHR is the one that is much more interesting to us. Angular's HTTP client does the same wrapping as I did in my first two examples, so that, when the response arrives, it is passed to the responseObserver using next.
When the response comes back from the server, the XMLHttpRequest object puts its callback on the macrotask queue; the event loop pushes it onto the call stack once the stack is clear, and the message can be passed to the responseObserver.
This way, the asynchronous event happens, and the subscribers to the Observable that wraps that XMLHttpRequest object get their value asynchronously.
I read a lot of literature and blog posts about RxJS observables being the better promise since their subscription can be canceled and they can emit multiple results/values via next().
The difference between Observables and Promises is indeed in the fact that Observables are cancelable. This matters most when you're working a lot with WebAPIs, as many of them need a way to be canceled (so that resources are not leaked when we stop using them).
In fact, since RxJS has many creation functions that wrap many of the WebAPIs, they're already dealing with the cancellation for you. All you have to do is keep track of the subscriptions and unsubscribe at the right moment. An article that might be helpful for that can be found here.
Do RxJS Observables wrap around promises and create a sequence of promises to make the code execution asynchronous?
No, they wrap a producer: anything that can call the observer.next method. If a producer uses asynchronous callbacks which call observer.next, then the Observable emits asynchronously. Otherwise, it emits synchronously.
But, even though the original emissions are synchronous, they can be dispatched to be emitted asynchronously by using schedulers.
A good rule of thumb is that in RxJS everything is synchronous unless you work with time. (This default behavior changed between RxJS 4 and RxJS 5+.) So, for example, range(), from() and of() are all synchronous. All inner subscriptions inside switchMap, mergeMap, forkJoin, etc. are synchronous as well. This means that you can easily create infinite loops if you emit from subscribe():
import { Subject } from 'rxjs';
import { takeUntil, tap } from 'rxjs/operators';

const subject$ = new Subject<void>();
const stop$ = new Subject<void>();

subject$.pipe(
  tap(() => { /* whatever */ }),
  takeUntil(stop$),
).subscribe(() => {
  subject$.next(); // synchronously re-enters the chain, so the next line never runs
  stop$.next();
});

subject$.next(); // kick off: this call never returns
This example will never reach stop$.next().
A common source of confusion is using combineLatest() with synchronous sources. For example, both combineLatest() and range() emit synchronously. Try to guess what series of values this chain emits; we want to get all combinations from the two range Observables:
import { combineLatest, range } from 'rxjs';
combineLatest([
range(1, 5),
range(1, 5),
]).subscribe(console.log);
Live demo: https://stackblitz.com/edit/rxjs-p863rv
This emits only five values, and the first number is always 5, which is weird at first sight. It happens because the first range() emits all its values and completes synchronously before the second range() even starts, so combineLatest only ever sees the first source's final value. If we want to emit all combinations we have to chain each range() with delay(0), use asyncScheduler, or use the subscribeOn(asyncScheduler) operator to force async behavior.
import { combineLatest, range, asyncScheduler } from 'rxjs';

combineLatest([
  range(1, 5, asyncScheduler),
  range(1, 5, asyncScheduler),
]).subscribe(console.log);
Live demo: https://stackblitz.com/edit/rxjs-tnxonz
I believe RxJS does not run on Promises internally. It's just how the whole publish-subscribe pattern works. Simplified, you have an Observer, an Observable and Subscribers. If you have ever created your own observable, you know you can wrap it around basically anything: promises, events, HTTP calls, even synchronous code like reading an array. The way it's achieved is that the Observer has methods next and complete (but not limited to them; e.g. there is also error). Whenever you call .next() on your Observer, all subscribers of the Observable will have their onNext called, because the Observer is connected to the Subscribers through the Observable. onNext, along with onError and onComplete, are just the callbacks you supply when calling .subscribe(). This means that if you call .next() after a promise resolves, it will be asynchronous.
Here is an example:
import { Observable, Observer } from 'rxjs';

new Observable<number>((observer: Observer<number>) => {
  // next/complete are called from a .then() callback, i.e. asynchronously
  Promise.resolve().then(() => {
    observer.next(1);
    observer.complete();
  });
});
If you subscribe to this observable it will call your onNext asynchronously.
But you can also do something like:
const array = [1, 2, 3, 4, 5];

new Observable<number>((observer: Observer<number>) => {
  array.forEach((num) => observer.next(num)); // emits synchronously on subscribe
  observer.complete();
});
Subscribing to this should, in theory, be synchronous. But you can play around with it. The thing is that RxJS also has the concept of a Scheduler, which allows you to control the timing behavior of your Observable, though I believe there are also limitations.
There is also a video of a simple pattern implementation that helps in understanding how it works.
I have an async function that runs via setInterval somewhere in my code. This function updates a cache at regular intervals.
I also have a different, synchronous function which needs to retrieve values - preferably from the cache, but if there's a cache miss, from the data origin
(I realize performing IO operations synchronously is ill-advised, but let's assume it is required in this case).
My problem is I'd like the synchronous function to be able to wait for a value from the async one, but it's not possible to use the await keyword inside a non-async function:
function syncFunc(key) {
  if (!(key in cache)) {
    await updateCacheForKey([key]); // SyntaxError: await is only valid in async functions
  }
}
async function updateCacheForKey(keys) {
// updates cache for given keys
...
}
Now, this can be easily circumvented by extracting the logic inside updateCacheForKey into a new synchronous function, and calling this new function from both existing functions.
My question is why absolutely prevent this use case in the first place? My only guess is that it has to do with "idiot-proofing", since in most cases, waiting on an async function from a synchronous one is wrong. But am I wrong to think it has its valid use cases at times?
(I think this is possible in C# as well by using Task.Wait, though I might be confusing things here).
My problem is I'd like the synchronous function to be able to wait for a value from the async one...
They can't, because:
JavaScript works on the basis of a "job queue" processed by a thread, where jobs have run-to-completion semantics, and
JavaScript doesn't really have asynchronous functions — even async functions are, under the covers, synchronous functions that return promises (details below)
The job queue (event loop) is conceptually quite simple: When something needs to be done (the initial execution of a script, an event handler callback, etc.), that work is put in the job queue. The thread servicing that job queue picks up the next pending job, runs it to completion, and then goes back for the next one. (It's more complicated than that, of course, but that's sufficient for our purposes.) So when a function gets called, it's called as part of the processing of a job, and jobs are always processed to completion before the next job can run.
Running to completion means that if the job called a function, that function has to return before the job is done. Jobs don't get suspended in the middle while the thread runs off to do something else. This makes code dramatically simpler to write correctly and reason about than if jobs could get suspended in the middle while something else happens. (Again it's more complicated than that, but again that's sufficient for our purposes here.)
So far so good. What's this about not really having asynchronous functions?!
Although we talk about "synchronous" vs. "asynchronous" functions, and even have an async keyword we can apply to functions, a function call is always synchronous in JavaScript. An async function is a function that synchronously returns a promise that the function's logic fulfills or rejects later, queuing callbacks the environment will call later.
Let's assume updateCacheForKey looks something like this:
async function updateCacheForKey(key) {
const value = await fetch(/*...*/);
cache[key] = value;
return value;
}
What that's really doing, under the covers, is (very roughly, not literally) this:
function updateCacheForKey(key) {
return fetch(/*...*/).then(result => {
const value = result;
cache[key] = value;
return value;
});
}
(I go into more detail on this in Chapter 9 of my recent book, JavaScript: The New Toys.)
It asks the browser to start the process of fetching the data, and registers a callback with it (via then) for the browser to call when the data comes back, and then it exits, returning the promise from then. The data isn't fetched yet, but updateCacheForKey is done. It has returned. It did its work synchronously.
Later, when the fetch completes, the browser queues a job to call that promise callback; when that job is picked up from the queue, the callback gets called, and its return value is used to resolve the promise then returned.
My question is why absolutely prevent this use case in the first place?
Let's see what that would look like:
The thread picks up a job and that job involves calling syncFunc, which calls updateCacheForKey. updateCacheForKey asks the browser to fetch the resource and returns its promise. Through the magic of this non-async await, we synchronously wait for that promise to be resolved, holding up the job.
At some point, the browser's network code finishes retrieving the resource and queues a job to call the promise callback we registered in updateCacheForKey.
Nothing happens, ever again. :-)
...because jobs have run-to-completion semantics, and the thread isn't allowed to pick up the next job until it completes the previous one. The thread isn't allowed to suspend the job that called syncFunc in the middle so it can go process the job that would resolve the promise.
That seems arbitrary, but again, the reason for it is that it makes it dramatically easier to write correct code and reason about what the code is doing.
But it does mean that a "synchronous" function can't wait for an "asynchronous" function to complete.
There's a lot of hand-waving of details and such above. If you want to get into the nitty-gritty of it, you can delve into the spec. Pack lots of provisions and warm clothes, you'll be some time. :-)
Jobs and Job Queues
Execution Contexts
Realms and Agents
You can call an async function from within a non-async function via an Immediately Invoked Function Expression (IIFE):
(async () => await updateCacheForKey([key]))();
And as applied to your example:
function syncFunc(key) {
if (!(key in cache)) {
(async () => await updateCacheForKey([key]))();
}
}
async function updateCacheForKey(keys) {
// updates cache for given keys
...
}
This shows how a function can be both sync and async, and how the Immediately Invoked Function Expression idiom only completes immediately if the path taken through the called function does purely synchronous things.
function test() {
console.log('Test before');
(async () => await print(0.3))();
console.log('Test between');
(async () => await print(0.7))();
console.log('Test after');
}
async function print(v) {
if(v<0.5)await sleep(5000);
else console.log('No sleep')
console.log(`Printing ${v}`);
}
function sleep(ms: number) {
return new Promise(resolve => setTimeout(resolve, ms));
}
test();
(Based off of Ayyappa's code in a comment to another answer.)
The console.log output looks like this:
16:53:00.804 Test before
16:53:00.804 Test between
16:53:00.804 No sleep
16:53:00.805 Printing 0.7
16:53:00.805 Test after
16:53:05.805 Printing 0.3
If you change the 0.7 to 0.4 everything runs async:
17:05:14.185 Test before
17:05:14.186 Test between
17:05:14.186 Test after
17:05:19.186 Printing 0.3
17:05:19.187 Printing 0.4
And if you change both numbers to be over 0.5, everything runs sync, and no sleep promises get created at all:
17:06:56.504 Test before
17:06:56.504 No sleep
17:06:56.505 Printing 0.6
17:06:56.505 Test between
17:06:56.505 No sleep
17:06:56.505 Printing 0.7
17:06:56.505 Test after
This does suggest an answer to the original question, though. You could have a function like this (disclaimer: untested Node.js code):
const fs = require('fs');
const fsPromises = fs.promises;

const cache = {};

async function getData(key, forceSync) {
  if (cache.hasOwnProperty(key)) return cache[key]; // runs sync
  if (forceSync) { // runs sync
    const value = fs.readFileSync(`${key}.txt`);
    cache[key] = value;
    return value;
  }
  // if we reach here, the code will run async
  const value = await fsPromises.readFile(`${key}.txt`);
  cache[key] = value;
  return value;
}
(Note that since getData is async, callers always receive a promise; the "sync" paths just resolve it with an already-computed value.)
Now, this can be easily circumvented by extracting the logic inside updateCacheForKey into a new synchronous function, and calling this new function from both existing functions.
T.J. Crowder explains the semantics of async functions in JavaScript perfectly. But in my opinion the paragraph above deserves more discussion. Depending on what updateCacheForKey does, it may not be possible to extract its logic into a synchronous function because, in JavaScript, some things can only be done asynchronously. For example there is no way to perform a network request and wait for its response synchronously. If updateCacheForKey relies on a server response, it can't be turned into a synchronous function.
It was true even before the advent of asynchronous functions and promises: XMLHttpRequest, for instance, gets a callback and calls it when the response is ready. There's no way of obtaining a response synchronously. Promises are just an abstraction layer on callbacks and asynchronous functions are just an abstraction layer on promises.
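For illustration, the classic callback shape (the endpoint URL is hypothetical):
const xhr = new XMLHttpRequest();
xhr.onload = () => console.log(xhr.responseText); // runs later, when the response arrives
xhr.open('GET', '/some-endpoint');
xhr.send();
console.log('send() returned; the response is not available yet');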
Now this could have been done differently. And it is in some environments:
In PHP, pretty much everything is synchronous. You send a request with curl and your script blocks until it gets a response.
Node.js has synchronous versions of its file system calls (readFileSync, writeFileSync etc.) which block until the operation completes.
Even plain old browser JavaScript has alert and friends (confirm, prompt) which block until the user dismisses the modal dialog.
This demonstrates that the designers of the web platform could have opted for synchronous versions of XMLHttpRequest, fetch etc. (a synchronous mode of XMLHttpRequest does in fact exist, but it is deprecated for exactly the reasons below). Why didn't they make blocking the norm?
[W]hy absolutely prevent this use case in the first place?
This is a design decision.
alert, for instance, prevents the user from interacting with the rest of the page because JavaScript is single threaded and the one and only thread of execution is blocked until the alert call completes. Therefore there's no way to execute event handlers, which means no way to become interactive. If there was a syncFetch function, it would block the user from doing anything until the network request completes, which can potentially take minutes, even hours or days.
This is clearly against the nature of the interactive environment we call the "web". alert was a mistake in retrospect and it should not be used except under very few circumstances.
The only alternative would be to allow multithreading in JavaScript which is notoriously difficult to write correct programs with. Are you having trouble wrapping your head around asynchronous functions? Try semaphores!
It is possible to add a good old .then() to the async function and it will work.
You should consider, though, instead of doing that, changing your current regular function into an async one, and so on all the way up the call stack, until you reach a point where the returned promise is not needed, i.e. there's no work to be done with the value returned from the async function. At that point it actually CAN be called from a synchronous one.
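Applied to the question's code, a minimal sketch of the .then() approach (cache and updateCacheForKey as defined in the question):
function syncFunc(key) {
  if (!(key in cache)) {
    // syncFunc itself still returns immediately; the cache is filled in
    // later, once the promise settles
    updateCacheForKey([key])
      .then(() => console.log(`cache updated for ${key}`))
      .catch(err => console.error(err)); // don't let rejections go unhandled
  }
}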
I'm still figuring out reactive programming, so I'm pretty sure this is very basic, but the number of stream transformations is pretty overwhelming for a beginner.
I'm creating an Observable from a DOM event. This event should in turn trigger a REST call, and all other DOM events should be ignored until this call has resolved.
const stream = Observable.fromEvent(document, 'some-event')
stream
.flatMap(() => httpRestService())
.subscribe(() => {
})
How do I ignore the events from the stream until the last HTTP promise has resolved?
DOM event
A - - - - B - - - - C
HTTP event
D ...........done - C
You could try flatMapFirst, which seems to do what you want. The following code could work (jsfiddle here - click anywhere):
const stream = Observable.fromEvent(document, 'some-event')
stream
.flatMapFirst(() => httpRestService())
.subscribe(() => {
})
Quoting the documentation :
The flatMapFirst operator is similar to the flatMap and concatMap methods described above; however, rather than emitting all of the items emitted by all of the Observables that the operator generates by transforming items from the source Observable, flatMapFirst propagates the first Observable exclusively until it completes, before it begins subscribing to the next Observable. Observables that arrive before the current Observable completes will be dropped and will not propagate.
UPDATE
Looking at the source code (https://github.com/Reactive-Extensions/RxJS/blob/master/src/core/linq/observable/switchfirst.js) it seems that while the current observable has not completed, all the observables arriving in the meantime will be discarded, i.e. not subscribed to.
So if subscribing to these observables is what triggers the HTTP call (it would be interesting to see the code for httpRestService), then there is no unnecessary HTTP call. If the calls are triggered immediately by calling the function and the result is merely passed through an observable, then those calls may indeed be triggered unnecessarily. In that case, the issue is easily solved by using the defer operator to make the HTTP call only at subscription time. In short, you need lazy execution of the REST request if you don't already have it.
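A sketch of that defer wrapping, in the RxJS 4 style of the question (assuming httpRestService returns a promise; in RxJS 5+ the equivalent of flatMapFirst is exhaustMap):
const stream = Observable.fromEvent(document, 'some-event');

stream
  // defer makes the request start only when flatMapFirst actually subscribes,
  // so events dropped while a request is in flight never trigger an HTTP call
  .flatMapFirst(() => Observable.defer(() => Observable.fromPromise(httpRestService())))
  .subscribe(() => {
    // handle the response
  });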