Suppose we have a loop like this:
documents.forEach(url => {
    await fetch(url)
        .then(document => console.log(document));
});
Will this load all the documents in parallel, or will they be loaded serially?
In other words, we could instead push the fetch promises into an array and then call Promise.all on that array, which waits for all the promises in parallel.
IIUC there is no real difference, except that Promise.all will reject on the first fetch request that fails.
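A minimal sketch of that Promise.all alternative (assuming documents is an array of URLs):

const promises = documents.map(url => fetch(url).then(console.log));
Promise.all(promises)
    .then(() => console.log("all done"))
    .catch(err => console.error("first failure:", err));   // rejects as soon as any fetch fails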
Provided the callback function is declared async, await can be used:
documents.forEach(async url => {
    await fetch(url).then(console.log);
});
The sequence of execution is then as follows:
1. The forEach method is executed.
2. This includes calling the callback for each url, which happens one after the other.
3. One execution of the callback calls fetch, which launches the HTTP request and returns immediately with a promise. In the background, the non-JavaScript, lower-level API polls for the HTTP response to come in. Only that background part happens in parallel with the steps described here.
4. The then method is executed and the callback passed to it is registered. The then method immediately returns a promise.
5. The await kicks in: the callback function's execution context is saved and it exits, returning a promise which this code ignores.
6. The next iteration takes place, repeating from step 3.
7. The forEach method ends. At this stage there are several non-JavaScript threads polling for HTTP responses, and there are several JS execution contexts pending for later execution.
8. In some unpredictable order (depending on the response time of the server), the fetch API resolves the promise that it had returned, and puts a message in the Promise Job Queue to indicate that.
9. The JavaScript event loop detects the message and proceeds with the execution of the callback registered in step 4 (the then callback), outputting to the console. The promise that was returned by the then method is resolved, which puts a new message in the Promise Job Queue.
10. The message is pulled from the queue, which restores the corresponding execution context of the forEach callback. Execution continues after the await, and as there is nothing more to execute, the promise returned in step 5 is resolved (but no one listens to that).
11. JavaScript monitors the event queue for more such work, and will at some point repeat from step 8.
No JavaScript code executes in parallel with JavaScript here, but the fetch API relies on non-JavaScript code, reaching "down" into Operating System functions, which do run in parallel with JavaScript code (and other OS functions).
Also note that the code is very similar to this non-async/await variant:
documents.forEach(url =>
fetch(url).then(console.log)
);
...because this callback also returns a promise. Except for the "saving execution context" part, which is not taking place here, the execution plan is quite similar.
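If truly sequential loading were wanted instead, a for...of loop inside an async function would do it; a sketch of that (not part of the original answer):

async function loadSequentially(documents) {
    for (const url of documents) {
        const response = await fetch(url);   // the next fetch starts only after this one resolves
        console.log(response);
    }
}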
I am using middleware in Redux for some async logic.
My question is: what happens when an action is being processed by the middleware and another action is dispatched before the first one completes? Will the second action be put on hold until the middleware finishes processing, or will it also pass into the middleware, which will start executing for it as well (along with the first one)?
I am a beginner to React and Redux and don't know whether the answer should be obvious; it is not to me.
TLDR: The two cannot run at the same time, because JavaScript executes code synchronously on a single thread. It operates using an event loop and a task queue. Each async task (e.g. a fetch response arriving, or an event listener triggered by the user) is placed in the queue and executed one at a time, so while one task is executing, the others wait in the queue.
Also keep in mind that an async function is more than one task: each await marks the start of a new task.
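A small sketch of what that means (api.save is a made-up async call here): everything after an await runs as a later task, so another dispatched action can be processed in between.

async function handle(action) {
    console.log("start", action.type);   // first task: runs to completion
    await api.save(action);              // hypothetical async API; execution suspends here
    console.log("done", action.type);    // second task: resumes later, possibly after another dispatch
}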
There are a lot of explanations of this model on the internet under the keywords "JS event loop".
I've noticed that setTimeout(function, milliseconds), when used in the middle of a function, is only executed once the function has ended, regardless of the timing given to it.
For example:
function doStuff() {
    var begin = (new Date()).getTime();
    console.log(begin);
    setTimeout(function(){ console.log("Timeout"); }, 100);
    doWork(4000);                                  // long-running synchronous work
    var end = (new Date()).getTime();
    console.log(end);
    console.log("diff: " + (end - begin));
}
function doWork(num) {
    // Busy-loop to keep the thread occupied for a while.
    for (; num > 0; --num) console.log("num");
}
doStuff();
The code above sets the timeout to 100 milliseconds, but the callback is invoked only after the whole function completes, which takes much more than 100 milliseconds.
My questions are:
Why does this happen?
How can I ensure correct timing?
JavaScript is not pre-emptive: it first finishes what it is doing before looking at the next task that was posted in the queue (functions submitted for asynchronous execution). So even if a time-out expires, this does not influence the currently running code -- it does not get interrupted. Only when the currently running functions have all returned and nothing remains on the stack will JavaScript check whether there is a time-out request to be fulfilled, or whatever else is first in the queue.
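To get the timer to fire close to 100 ms, the long-running work has to be kept off the current run of the stack; one sketch of that idea (not part of the original answer) is to defer the heavy work with its own setTimeout:

function doStuff() {
    setTimeout(function(){ console.log("Timeout"); }, 100);
    // Defer the heavy work so the current run to completion ends quickly,
    // letting the 100 ms timer fire roughly on time.
    setTimeout(function(){ doWork(4000); }, 150);
}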
From "Concurrency model and Event Loop" on MDN:
Queue
A JavaScript runtime contains a message queue, which is a list of messages to be processed. A function is associated with each message. When the stack is empty, a message is taken out of the queue and processed. The processing consists of calling the associated function (and thus creating an initial stack frame). The message processing ends when the stack becomes empty again.
Event loop
The event loop got its name because of how it's usually implemented, which usually resembles:
while(queue.waitForMessage()){
queue.processNextMessage();
}
queue.waitForMessage waits synchronously for a message to arrive if there is none currently.
"Run-to-completion"
Each message is processed completely before any other message is processed. This offers some nice properties when reasoning about your program, including the fact that whenever a function runs, it cannot be pre-empted and will run entirely before any other code runs (and can modify data the function manipulates). This differs from C, for instance, where if a function runs in a thread, it can be stopped at any point to run some other code in another thread.
A downside of this model is that if a message takes too long to complete, the web application is unable to process user interactions like click or scroll. The browser mitigates this with the "a script is taking too long to run" dialog. A good practice to follow is to make message processing short and if possible cut down one message into several messages.
To get code to run concurrently, you can make use of Web Workers. Web Workers run scripts in different threads.
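For example, a minimal sketch (the file name work.js is an assumption) that moves the heavy loop from the earlier question into a worker, so timers on the main thread fire on time:

// main.js
var worker = new Worker("work.js");
worker.onmessage = function (e) { console.log("worker finished:", e.data); };
worker.postMessage(4000);
setTimeout(function () { console.log("Timeout"); }, 100);   // fires on time

// work.js
onmessage = function (e) {
    var num = e.data;
    for (; num > 0; --num) { /* heavy synchronous work */ }
    postMessage("done");
};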
I'm trying to send a file in chunks over WebRTC, and I'm wondering if I can create a callback function to be called after RTCDataChannel.send() finishes sending each chunk of the file.
Is RTCDataChannel.send() a synchronous/blocking call? If so, my callback can be executed on the line after .send().
If .send() is asynchronous/non-blocking, then this will get tricky since it doesn't seem like .send() accepts a callback function, and I want to avoid using a buffer and a timeout.
The send method is blocking. However, it doesn't wait until the data has gone over the wire; it only puts the data in an internal buffer, from where it may later (or in parallel with the script execution) be sent.
The amount of data that has not been transmitted is available as the bufferedAmount property, which will be synchronously increased by every send() call (and not be updated otherwise until the next event loop turn).
You might make your wrapper asynchronous therefore, and put a timeout before actually calling send() when the currently buffered data is "too much" (by whatever criterion you see fit).
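A sketch of such a wrapper (the 1 MiB limit and the 100 ms retry delay are arbitrary choices, not from the answer):

var MAX_BUFFERED = 1024 * 1024;   // 1 MiB: "too much" by our own criterion

function sendWhenReady(channel, chunk) {
    if (channel.bufferedAmount > MAX_BUFFERED) {
        // Too much unsent data: try again on a later event loop turn.
        setTimeout(function () { sendWhenReady(channel, chunk); }, 100);
    } else {
        channel.send(chunk);
    }
}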
As noted above, send() is effectively asynchronous - you don't get a delivery receipt.
However, there is a callback, onbufferedamountlow, which is invoked when the channel drains its send buffer below a value set with bufferedAmountLowThreshold (see MDN onbufferedamountlow).
You can use that callback to decide when to send the next chunk.
Note however that this is relatively new to the draft standard and may not be supported everywhere.
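A sketch of that approach (chunk size and threshold are arbitrary values chosen for illustration):

var CHUNK_SIZE = 16 * 1024;       // 16 KiB per send()
var THRESHOLD  = 64 * 1024;       // resume once the buffer drains below this

function sendFile(channel, buffer) {           // buffer: an ArrayBuffer with the file data
    var offset = 0;
    channel.bufferedAmountLowThreshold = THRESHOLD;

    function sendChunks() {
        // Keep sending while data remains and the send buffer is not too full.
        while (offset < buffer.byteLength && channel.bufferedAmount <= THRESHOLD) {
            channel.send(buffer.slice(offset, offset + CHUNK_SIZE));
            offset += CHUNK_SIZE;
        }
        if (offset < buffer.byteLength) {
            channel.onbufferedamountlow = sendChunks;   // wait for the buffer to drain
        } else {
            channel.onbufferedamountlow = null;         // done
        }
    }
    sendChunks();
}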
Why is AJAX called asynchronous? How does it accomplish communication asynchronously with the server?
It's asynchronous in that it doesn't lock up the browser. If you fire an Ajax request, the user can still work while the request is waiting for a response. When the server returns the response, a callback runs to handle it.
You can make the XMLHttpRequest synchronous if you want, and if you do, the browser locks up while the request is outstanding (so most of the time this is inappropriate).
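For illustration, a minimal asynchronous XMLHttpRequest with a callback (the URL is a placeholder); passing false as the third argument to open() would make it synchronous and lock up the page instead:

var xhr = new XMLHttpRequest();
xhr.open("GET", "/inbox", true);       // true = asynchronous (the default)
xhr.onload = function () {
    console.log(xhr.responseText);     // runs later, when the response has arrived
};
xhr.send();
console.log("request sent; the user can keep working");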
It's asynchronous because the client and the server run independently of each other for the duration of the function call.
During a normal function call, you make the call, and the calling function doesn't get to execute again until the function call finishes and returns. The caller and the callee are always synchronized.
During an asynchronous function call, you make the call, and then control returns immediately to the caller. The callee then returns a value some indeterminate amount of time later. That "indeterminate amount of time" means the caller and callee are no longer synchronized, so it's asynchronous.
Simply put, it does not need to reload the whole page to get new information.
Think of an email client. You would not need to refresh the page to see new emails. Ajax just polls the server every couple of minutes to see if there are new emails and, if so, displays them.
I.e. not "blocking" within the context of JavaScript execution, as the response will be handled by the event loop.
Synchronous calls always maintain their sequence; asynchronous calls do not.
I have read a handful of setTimeout questions and none appear to relate to the question I have in mind. I know I could use setInterval() but it is not my preferred option.
I have a web app that could in theory be running a day or more without a page refresh, doing more than one check a minute. Am I likely to get tripped up if my function calls itself several hundred (or more) times using setTimeout? Will I reach "stack overflow", for example?
If I use setInterval there is a remote possibility of two requests being run at the same time especially on slow connections (where a second one is raised before the first is finished). I know I could create flags to test if a request is already active, but I'm afraid of false positives.
My solution is to have a function call my jQuery ajax code, do its thing, and then, as part of ajaxComplete, do a setTimeout to call itself again in X seconds. This method also allows me to alter the duration between calls, so if my server is busy (slow), one reply can set a flag to increase the time between ajax calls.
Sample code of my idea...
function ServiceOrderSync()
{
    // 1. Sync stage changes from client with the server
    // 2. Sync new orders from server to this client
    $.ajax({
        "data": dataurl,
        "success": function(data)
        {
            // process my data
        },
        "complete": function(data)
        {
            // Queue the next sync
            setTimeout( ServiceOrderSync, 15000 );
        }
    });
}
You won't get a stack overflow, since the call isn't truly recursive (I call it "pseudo-recursive")
JavaScript is an event-driven language: when you call setTimeout it just queues an event in the list of pending events, and code execution then continues from where you are. The call stack gets completely unwound before the next event is pulled from that list.
p.s. I'd strongly recommend using Promises if you're using jQuery to handle async code.
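For illustration, a sketch of the same polling loop using the promise-style interface that $.ajax returns (dataurl and the 15-second delay are taken from the question):

function ServiceOrderSync() {
    $.ajax({ "data": dataurl })
        .done(function (data) { /* process my data */ })
        .always(function () {
            // Queue the next sync only after this one has completely finished.
            setTimeout(ServiceOrderSync, 15000);
        });
}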