Rewriting a JS app from synchronous to asynchronous calls - javascript

I am trying to migrate an old JavaScript application to use asynchronous AJAX calls instead of synchronous ones (SJAX). I have considered a few approaches, each of which would take quite some time:
1) Using async/await
In order to get this to work, I would have to mark every function that eventually results in calling the SJAX somewhere in the stack as "async", and prefix every call to the SJAX with "await" (a sketch of this follows below).
2) Analyze all possible stack traces (I do not know of any tool that can do this for JS) and do the rewrite by hand using callbacks/Promises (I don't really care about the cleanliness of the code right now). There are more than 200 occurrences of the SJAX calls in the app, so I expect there might be many more possible stack traces to handle.
3) Throwing an error before the actual data fetch (the SJAX call), recording the caller somewhere, and repeating the call when the data is readily available (there might be other issues; imagine running the same code twice using a "stateful" variable).
4) Something completely different
My question is: which approach do you think would be best? I know this is opinion-based, but maybe there is another way to handle this elegantly (point 4). If you know an answer to that ... shoot it.
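To illustrate approach 1, here is a rough sketch of the kind of change involved (the helper, renderUser, draw, and the endpoint are all made-up names):

// Before: a synchronous XHR helper (hypothetical)
function getDataSync(url) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, false); // false = synchronous, blocks the page
  xhr.send(null);
  return JSON.parse(xhr.responseText);
}

function renderUser(id) {
  var user = getDataSync('/api/users/' + id);
  draw(user);
}

// After: the helper becomes async, and so must every caller up the stack
async function getData(url) {
  const response = await fetch(url);
  return response.json();
}

async function renderUser(id) {
  const user = await getData('/api/users/' + id);
  draw(user);
}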

Related

Is it ever better to use Node's filesystem sync methods over the same async methods?

This is a question about performance more than anything else.
Node exposes three different types of methods to accomplish various filesystem tasks:
Promises API (async)
Callback API (async)
Synchronous API (sync)
I've read more articles and Stack Overflow answers than I can count, all of which claim that you never need the sync methods.
I recently wrote a script which required a couple of directories to be made if they didn't already exist. During this, I noticed that if I used the async/await methods (primarily fs.promises.mkdir and fs.promises.access), the event loop would simply continue to the next async bit of code, regardless of the fact that the next bits require those directories to exist. This is expected behavior; after all, it's async.
I understand this could be solved with a nice little callback-hell sesh, but that isn't the question; the question is whether the Promises API can be used in place of all the other methods.
The question then becomes:
Is it ever better to use Node's filesystem sync methods over the same async methods?
Is it ever truly required in situations like this to block the process?
Or said differently:
Is it possible to completely avoid sync methods and ONLY use the promises api (NOT promises + callbacks)?
It seems like using the sync methods (given my situation above, where the directories are required to exist before any other call is made) can be EXTREMELY useful for writing readable, clear code, even though it may negatively impact performance.
With that being said, there's an overwhelming amount of information out there saying that the sync API is completely useless and never required.
Again, this purely concerns the Promises API. Yes, callbacks and promises are both async, but the difference between the job and message queues makes the two APIs behave completely differently in this context.
PS: For additional context, I've provided a code sample so you don't have to imagine my example ;)
Thanks! :)
// Checks if dir exists; if not, creates it. (Not the actual code, just an example.)

// Sync version
if (!fs.existsSync(dirPath)) {
  fs.mkdirSync(dirPath);
}

// Async version
try {
  await fs.promises.access(dirPath);
} catch {
  await fs.promises.mkdir(dirPath);
}
It depends on the situation. The main benefit of the sync methods is that they allow for easier consumption of their results, and the main disadvantage is that they prevent all other code from executing while working.
If you find yourself in a situation where other code not being able to respond to events is not an issue, you might consider it to be reasonable to use the sync methods - if the code in question has no chance of or reason for running in parallel with anything else.
For example, you would definitely not want to use the sync methods inside, say, a server handling a request.
If your code requires reading some configuration files (or creating some folders) when the script first runs, and there aren't enough of them such that parallelism would be a benefit, you can consider using the sync methods.
That said, even if your current implementation doesn't require parallelism, keep in mind that the situation may change: if you later find that you do actually need to allow for parallel processing, you won't have to make any changes to your existing code if you started out with the promise-based methods in the first place. And if you understand the language, using Promises properly should be pretty easy, so if there's a chance of that, you might consider using Promises anyway.
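As an aside, for the specific create-if-missing case in the question, a single promise-based call can replace the access/mkdir pair entirely; a minimal sketch, assuming Node 10.12 or later:

const fs = require('fs');

// With { recursive: true }, mkdir resolves silently when the
// directory already exists, so no existence check is needed.
await fs.promises.mkdir(dirPath, { recursive: true });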

XMLHttpRequest Promise, nested in some sync code

Why is a sync xmlhttprequest deprecated ?
I would appreciate it if the browser waited for the data instead of staying open for further clicking before the data query is finalized.
I would like to write a program with just one step after the other.
Moreover, the code should be properly structured, and I wonder how I can realize this without the promise-hell.
Is there any sync alternative to xmlhttprequest which is not "deprecated" ?
Thanks
Thomas
Why is a sync xmlhttprequest deprecated ?
Because it blocks the Event Loop and provides bad user experience to your end users.
Basically it "freezes" the page while it's happening.
I would appreciate if the browser waits for the data
That is not how browsers work, though. Fortunately, with practice, the way browsers do work starts to make an awful lot of sense - enough that languages that previously used mostly synchronous I/O, like Python, have added "promises" too.
I would like to write a program with just one step after the other.
You can return responses from asynchronous calls in JavaScript. Using async/await, the code looks pretty similar and readable. See the async function MDN page.
Moreover the code should be properly structured and I wonder how I can realize this without the promise-hell.
"promise-hell" refers to the fact once a function does I/O all other functions it calls that. I'm not convinced that's a bad thing or that there is anything wrong with
async function sequence() {
  let one = await fetch('/your-endpoint?someParam').then(x => x.json());
  // do something with one
  let param = one.param;
  let two = await fetch('/your-other?param=' + param).then(x => x.json());
  // do something with second call
}
sequence();
That is, async/await lets you write asynchronous code in a synchronous style.
Don't expect to understand this all at once - it's ok it takes time.

Continuation-Passing Style And Concurrency

I find lots of blogs mention concurrent/non-blocking/asynchronous programming as a benefit of Continuation-Passing Style (CPS). I cannot figure out why CPS provides concurrency, e.g., people mention Node.js is implemented using CPS though JavaScript is a synchronous language. Would someone comment on my thoughts?
First, my naive understanding of CPS is that it wraps all subsequent code at a point into a function and passes that function explicitly as a parameter. Some blogs name the continuation function return(), and Gabriel Gonzalez calls it a hole; both are brilliant explanations.
My confusion mostly comes from a popular blog article Asynchronous programming and continuation-passing style in JavaScript. At the beginning of the article, Dr. Axel Rauschmayer gives two code snippets, a synchronous program and an asynchronous one in CPS (pasted here for easy reading).
The synchronous code:
function loadAvatarImage(id) {
  var profile = loadProfile(id);
  return loadImage(profile.avatarUrl);
}
The asynchronous code:
function loadAvatarImage(id, callback) {
  loadProfile(id, function (profile) {
    loadImage(profile.avatarUrl, callback);
  });
}
I don't get it why the CPS one is asynchronous. After I read another article By example: Continuation-passing style in JavaScript, I think maybe there is an assumption to the code: the function loadProfile() and loadImage() are asynchronous functions by themselves. Then it is not CPS that makes it asynchronous. In the second article, the author actually shows an implementation of fetch(), which is similar to loadProfile() in the blog earlier. The fetch() function makes an explicit assumption of the underlying concurrent execution model by calling req.onreadystatechange. This leads me to think maybe it is not CPS that provides concurrency.
Assume the underlying functions are asynchronous, then I go into my second question: can we write asynchronous code without CPS? Think of the implementation of the function loadProfile(). If it is asynchronous not because of CPS, why can't we just take the same mechanism to implement loadAvatarImage() asynchronously? Assume loadProfile() uses fork() to create a new thread to send the request and wait for the response while the main thread is executing in a non-blocking manner, we can possibly do the same for loadAvatarImage().
function loadAvatarImage(id, updateDOM) {
  function act() {
    var profile = loadProfile(id);
    var img = loadImage(profile.avatarUrl);
    updateDOM(img);
  }
  fork(act);
}
I give it a callback function updateDOM(). Without updateDOM(), it is not fair to compare it with the CPS version -- the CPS version has extra information about what to do after the image is fetched, i.e., the callback function, but the original synchronous loadAvatarImage() does not.
Interestingly, @DarthFennec pointed out my new loadAvatarImage() is actually CPS: fork() is CPS, act() is CPS (if we explicitly give it updateDOM), and loadAvatarImage() is CPS. The chain makes loadAvatarImage() asynchronous. loadProfile() and loadImage() do not need to be asynchronous or CPS.
If the reasoning up to here is correct, can I get these two conclusions?
Given a set of synchronous APIs, someone coding following CPS will not magically create asynchronous functions.
If the underlying asynchronous/concurrent APIs are provided in CPS style, like CPS versions of loadProfile(), loadImage(), fetch(), or fork(), then one can only code in CPS style to ensure the asynchronous APIs are used asynchronously; e.g., return loadImage(profile.avatarUrl) would nullify the concurrency of loadImage().
A Brief Overview of Javascript
Javascript's concurrency model is non-parallel and cooperative:
Javascript is non-parallel because it runs in a single thread; it achieves concurrency by interleaving multiple execution threads, rather than by actually running them at the same time.
Javascript is cooperative because the scheduler only switches to a different thread when the current thread asks it to. The alternative would be preemptive scheduling, where the scheduler decides to arbitrarily switch threads whenever it feels like it.
By doing these two things, Javascript avoids many problems that other languages don't. Parallel code, and non-parallel preemptively-scheduled code, cannot make the basic assumption that variables will not suddenly change their values in the middle of execution, since another thread might be working on the same variable at the same time, or the scheduler might decide to interleave another thread absolutely anywhere. This leads to mutual exclusion problems and confusing race condition bugs. Javascript avoids all of this because in a cooperatively-scheduled system, the programmer decides where all the interleaves happen. The main drawback of this is if the programmer decides not to create interleaves for long periods of time, other threads never have a chance to run. In a browser, even actions like polling for user input and drawing updates to the page run in the same single-threaded environment as the Javascript, so a long-running Javascript thread will cause the entire page to become unresponsive.
In the beginning, CPS was most often used in Javascript for the purpose of event-driven UI programming: if you wanted some code to run every time someone pressed a button, you would register your callback function to the button's 'click' event; when the button was clicked, the callback would run. As it turns out, this same approach could be used for other purposes as well. Say you want to wait one minute and then do a thing. The naive approach would be to stall the Javascript thread for sixty seconds, which (as stated above) would cause the page to freeze for that duration. However, if the timer was exposed as a UI event, the thread could be suspended by the scheduler instead, allowing other threads to run in the meantime. Then, the timer would cause the callback to execute, in the same way a button press would. The same approach can be used to request a resource from the server, or to wait for the page to load fully, or a number of other things. The idea is that, to keep Javascript as responsive as possible, any built-in function that might take a long time to complete should be part of the event system; in other words, it should use CPS to enable concurrency.
Most languages that support cooperative scheduling (often in the form of coroutines) have special keywords and syntax that must be used to tell the language to interleave. For example, Python has the yield keyword, C# has async and await, etc. When Javascript was first designed, it had no such syntax. It did, however, have support for closures, which is a really easy way to allow CPS. I expect the intention behind this was to support the event-driven UI system, and that it was never intended to become a general-purpose concurrency model (especially once Node.js came along and removed the UI aspect entirely). I don't know for sure, though.
Why does CPS provide concurrency?
To be clear, continuation-passing style is a method that can be used to enable concurrency. Not all CPS code is concurrent. CPS isn't the only way to create concurrent code. CPS is useful for things other than enabling concurrency. Simply put, CPS does not necessarily imply concurrency, and vice versa.
In order to interleave threads, execution must be interrupted in such a way that it can be resumed later. This means the context of the thread must be preserved, and later re-instated. This context isn't generally accessible from inside of a program. Because of this, the only way to support concurrency (short of the language having special syntax for it) is to write the code in such a way that the thread context is encoded as a value. This is what CPS does: the context to be resumed is encoded as a function that can be called. This function being called is equivalent to a thread being resumed. This can happen any time: after an image is loaded, after a timer triggers, after other threads have had a chance to run for a while, or even immediately. Since the context is all encoded into the continuation closure, it doesn't matter, as long as it runs eventually.
To better understand this, we can write a simple scheduler:
var _threadqueue = []

function fork(cb) {
  _threadqueue.push(cb)
}

function run(t) {
  _threadqueue.push(t)
  while (_threadqueue.length > 0) {
    var next = _threadqueue.shift()
    next()
  }
}
An example of this in use:
run(function() {
  fork(function() {
    console.log("thread 1, first line")
    fork(function() {
      console.log("thread 1, second line")
    })
  })
  fork(function() {
    console.log("thread 2, first line")
    fork(function() {
      console.log("thread 2, second line")
    })
  })
})
This should print the following to the console:
thread 1, first line
thread 2, first line
thread 1, second line
thread 2, second line
The results are interleaved. While not particularly useful on its own, this logic is more or less the foundation of something like Javascript's concurrency system.
Can we write asynchronous code without CPS?
Only if you have access to the context through some other means. As previously stated, many languages do this through special keywords or other syntax. Some languages have special builtins: Scheme has the call/cc builtin, which will wrap the current context into a callable function-like object, and pass that object to its argument. Operating systems get concurrency by literally copying around the thread's callstack (the callstack contains all of the needed context to resume the thread).
If you mean in Javascript specifically, then I'm fairly certain it's impossible to reasonably write asynchronous code without CPS. Or rather, it would be, but newer versions of Javascript also come with the async and await keywords, as well as a yield keyword, so using those is becoming an option.
Conclusion: Given a set of synchronous APIs, someone coding following CPS will not magically create asynchronous functions.
Correct. If an API is synchronous, CPS alone will not make that API asynchronous. It may introduce a level of concurrency (as in the example code earlier), but that concurrency can only exist within the thread. Asynchronous loading in Javascript works because the loading itself runs in parallel to the scheduler, so the only way to make a synchronous API asynchronous is to run it in a separate system thread (which can't be done in Javascript). But even if you did do that, it still wouldn't be asynchronous unless you also used CPS.
CPS doesn't cause asynchronicity. However, asynchronicity does require CPS, or some alternative to CPS.
Conclusion: If the underlying asynchronous/concurrent APIs are provided in the CPS style, then one can only code in CPS style
Correct. If the API is loadImage(url, callback) and you run return loadImage(profile.avatarUrl), it will return null immediately and it will not give you the image. Most likely it will throw an error because callback is undefined, since you didn't pass it. Essentially, if the API is CPS and you decide not to use CPS, you're not using the API correctly.
In general though, it is accurate to say that if you write a function that calls a CPS function, your function also needs to be CPS. This is actually a good thing. Remember what I said about the basic assumption that variables will not suddenly change their values in the middle of execution? CPS solves this issue by making it very clear to the programmer where exactly the interleave boundaries are; or rather, where values might arbitrarily change. But if you could hide CPS function calls inside of non-CPS functions, you would no longer be able to tell. This is also the reason the newer Javascript async and await keywords work the way they do: any function that uses await must be marked as async, and any call to an async function must be prefixed with the await keyword (there's more to it than that, but I don't want to get into how promises work just now). Because of this, you can always tell where your interleave boundaries are, because there will always be await keywords there.
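To make those boundaries concrete, here is a small sketch (assuming promise-returning versions of the loadProfile and loadImage functions from earlier):

async function loadAvatarImage(id) {
  // Each await is an explicit interleave boundary: other code may run
  // here, and shared state may have changed by the time we resume.
  const profile = await loadProfile(id);
  const img = await loadImage(profile.avatarUrl);
  return img;
}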

What do we call JS async "strands of execution"?

In Java we have "threads", in CPython we have threads (non-concurrent) and "processes".
In JS, when I kick off an async function or method, how do I officially refer to these "strands of executing code"?
I have heard that each such code block executes from start to finish*, meaning that there is never any concurrent** processing in JS. I'm not quite sure whether this is the same situation as with CPython threads. Personally I hesitate to use "thread" for what we have in JS as these "strands" are so different from Java concurrent threads.
* Just to clarify in light of Stephen Cleary's helpful response: I mean "each such synchronous code block". Obviously if an await is encountered control is released ...
** And obviously never any "true parallel" processing. I'm following the widely accepted distinction between "concurrent" (only one thread at any one time, but one "strand of execution" may give way to another) and "parallel" (multiple processes, implementing true parallel processing, often using multiple CPUs or cores or processes). My understanding is that these "strands" in JS are not even concurrent: once one AJAX method or Promise or async method/function starts executing nothing can happen until it's finished (or an await happens)...
In JS, when I kick off an async function or method, how do I officially refer to these "strands of executing code"?
For lack of a better term, I've referred to these as "asynchronous operations".
I have heard that each such code block executes from start to finish
This was true... until await. Now there's a couple of ways to think about it, both of which are correct:
Code blocks execute exclusively unless there's an await.
await splits its function into multiple code blocks that do execute exclusively (each await is a "split point").
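A tiny sketch of that second view (fetchData and doWork are hypothetical):

async function example() {
  doWork();                       // block 1 runs exclusively...
  const data = await fetchData(); // ...up to this split point
  doWork(data);                   // block 2 runs exclusively after resuming
}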
meaning that there is never any concurrent processing in JS.
I'd disagree with this statement. JavaScript forces asynchrony (before async/await, it used callbacks or promises), so it does have asynchronous concurrency, though not parallel concurrency.
A good mental model is that JavaScript is inherently single-threaded, and that one thread has a queue of work to do. await does not block that thread; it returns, and when its thenable/Promise completes, it schedules the remainder of that method (the "continuation") to the queue.
So you can have multiple asynchronous methods executing concurrently. This is exactly how Node.js handles multiple simultaneous requests.
My understanding is that these "strands" in JS are not even concurrent: once one AJAX method or Promise or async method/function starts executing nothing can happen until it's finished...
No, this is incorrect. await will return control to the main thread, freeing it up to do other work.
the way things operate in JS means you "never have to worry about concurrency issues, etc."
Well... yes and no. Since JavaScript is inherently single-threaded, there's never any contention over data shared between multiple threads (obviously). I'd say that's what the original writer was thinking about. So I'd say easily north of 90% of thread-synchronization problems do just go away.
However, as you noted, you still need to be careful modifying shared state from multiple asynchronous operations. If they're running concurrently, then they can complete in any order, and your logic has to properly handle that.
Ideally, the best solution is to move to a more functional mindset - that is, get rid of the shared state as much as possible. Asynchronous operations should return their results instead of updating shared state. Then these concurrent asynchronous operations can be composed into a higher-level asynchronous operation using async, await, and Promise.all. If possible, this more functional approach of returning-instead-of-state and function composition will make the code easier to deal with.
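A sketch of that more functional shape (fetchUsers, fetchOrders, and buildReport are hypothetical):

async function loadReport() {
  // Each operation returns its result instead of writing shared state;
  // Promise.all composes them, so completion order no longer matters.
  const [users, orders] = await Promise.all([fetchUsers(), fetchOrders()]);
  return buildReport(users, orders);
}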
But there's still some situations where this isn't easily achievable. It's possible to develop asynchronous equivalents of classical synchronous coordination primitives. I worked up a proof-of-concept AsyncLock, but can't seem to find the code anywhere. Well, it's possible to do this, anyway.
To date 24 people, no more no fewer, have chosen to look at this.
FWIW, and should anyone eccentric take an interest in this question one day, I propose "hogging strands" or just "strands" ... they hog the browser's JS engine, it would appear, and just don't let go, unless they encounter an await.
I've been developing this app where 3 "strands" operate more or less simultaneously, all involving AJAX calls and a database query... but in fact it is highly preferable if the 2nd of the 3 executes and returns before the 3rd.
But because the 3rd query is quite a bit simpler for MySQL than the 2nd, the former tends to return from its AJAX "excursion" quite a bit earlier if everything is allowed to operate with unconstrained asynchronicity. This then causes a big delay before the 2nd strand is allowed to do its stuff. Forcing the code to perform and complete the 2nd "strand" first therefore requires quite a bit of coercion, using async and await.
I read one comment on JS asynchronicity somewhere where someone opined that the way things operate in JS means you "never have to worry about concurrency issues, etc.". I don't agree: if you don't in fact know when awaits will unblock, this does necessarily mean that concurrency is involved. Perhaps "hogging strands" are easier to deal with than true "concurrency", let alone true "parallelism", however... ?

Is there any reason to use a synchronous XMLHttpRequest?

It seems most everyone does asynchronous requests with XMLHttpRequest but obviously the fact that there is the ability to do synchronous requests indicates there might be a valid reason to do so. So what might that valid reason be?
Synchronous XHRs are useful for saving user data. If you handle the beforeunload event you can upload data to the server as the user closes the page.
If this were done using the async option, then the page could close before the request completes. Doing this synchronously ensures the request completes or fails in an expected way.
I think they might become more popular as HTML5 standards progress. If a web application is given access to web workers, I could foresee developers using a dedicated web worker to make synchronous requests and, as Jonathan said, ensure one request happens before another. With the current one-thread situation it is a less-than-ideal design, as it blocks until the request is complete.
Update:
The answers below hinted at - but were unsuccessful in delivering - the point that, with the advent of better asynchronous request handling, there really is no reason to use synchronous requests, unless intending to purposely block the user from doing anything until a request is complete - which sounds malicious :)
Although, this may sound bad, there may be times where it's important that a request (or series of requests) occur before a user leaves a page, or before an action is performed - blocking other code execution (e.g., preventing back button) could possibly reduce errors/maintenance for a poorly designed system; that said, I've never seen it in the wild and stress that it should be avoided.
Libraries, like promise, feign synchronicity by chaining processes via callbacks. This suits the majority of development needs where the desire is to have ordered, non-blocking events that enable the browsers to retain responsiveness for the user (good UX).
As stated in the Mozilla docs there are cases where you have to use synchronous requests; however, also listed is a workaround that uses beacon (not available in IE/Safari) for such cases. While this is experimental, if it ever reaches standards-acceptance, it could possibly put a nail in the synchronous-request coffin.
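For reference, the beacon workaround looks roughly like this (the endpoint and payload are hypothetical):

window.addEventListener('unload', function () {
  // sendBeacon queues the data for background delivery and returns
  // immediately, so the page can close without blocking.
  navigator.sendBeacon('/log', JSON.stringify({ event: 'page-closed' }));
});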
You'd want to perform synchronous calls in any sort of transaction-like processing, or wherever any order of operation is necessary.
For instance, let's say you want to customize an event to log you out after playing a song. If the logout operation occurs first, then the song will never be played. This requires synchronizing the requests.
Another reason would be when working with a WebService, especially when performing math on the server.
Example: Server has a variable with value of 1.
Step (1) Perform Update: add 1 to variable
Step (2) Perform Update: set variable to the power of 3
End Value: variable equals 8
If Step (2) occurs first, then the end value is 2, not 8; thus order of operation matters and synchronization is needed.
There are very few times that a synchronous call may be justified in a common real world example. Perhaps when clicking login and then clicking a portion of the site that requires a user to be logged in.
As others have said, it will tie up your browser, so stay away from it where you can.
Instead of synchronous calls, though, often users want to stop an event that is currently loading and then perform some other operation. In a way this is synchronization, since the first event is quit before the second begins. To do this, use the abort() method on the xml connection object.
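A minimal sketch of that pattern (the endpoint is hypothetical):

var xhr = new XMLHttpRequest();
xhr.open('GET', '/slow-resource', true); // true = asynchronous
xhr.send();

// Later, if the user starts a different operation first:
xhr.abort(); // quits the first request so the second can begin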
I'd say that if you consider blocking the user's browser while the request completes acceptable, then sure use a synchronous request.
If serialization of requests is your aim, then this can be accomplished using async requests, by having the onComplete callback of your previous request fire the next in line.
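In sketch form, assuming a hypothetical httpGet(url, onComplete) helper:

httpGet('/first', function (a) {
  // The first request has completed; only now fire the next in line.
  httpGet('/second?from=' + a.id, function (b) {
    console.log('both done, in order:', a, b);
  });
});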
There are many real world cases where blocking the UI is exactly the desired behaviour.
Take an app with multiple fields and some fields must be validated by a xmlhttp call to a remote server providing as input this field's value and other fields values.
In synchronous mode, the logic is simple, the blocking experienced by the user is very short and there is no problem.
In async mode, the user may change the values of any other fields while the initial one is being validated. These changes will trigger other xmlhttp calls with values from the initial field not yet validated. What happens if the initial validation fails? Pure mess. If sync mode becomes deprecated and prohibited, the application logic becomes a nightmare to handle. Basically the application has to be rewritten to manage locks (e.g. disabling other items during validation processes). Code complexity increases tremendously. Failing to do so may lead to logic failures and ultimately data corruption.
Basically the question is: what is more important, non-blocked UI experience or risk of data corruption ? The answer should remain with the application developer, not the W3C.
I can see a use for synchronous XHR requests when a resource in a variable location must be loaded before other static resources in the page that depend on the first resource to fully function. In point of fact, I'm implementing such an XHR request in a little sub-project of my own, where JavaScript resources reside in variable locations on the server depending on a set of specific parameters. Subsequent JavaScript resources rely on those variable resources, and such files MUST be guaranteed to load before the other reliant files are loaded, thus making the application whole.
That idea really kind of expands on vol7ron's answer. Transaction-based procedures are really the only time where synchronous requests should be made. In most other cases, asynchronous calls are the better alternative, in which, after the call, the DOM is updated as necessary. In many cases, such as user-based systems, you could have certain features locked to "unauthorized users" until they have, per se, logged in. Those features, after the asynchronous call, are unlocked via a DOM update procedure.
I'd have to finally say that I agree with most individuals' points on the matter: wherever possible, synchronous XHR requests should be avoided as, with the way it works, the browser locks up with synchronous calls. When implementing synchronous requests, they should be done in a manner where the browser would normally be locked, anyway, say in the HEAD section before page loading actually occurs.
jQuery uses synchronous AJAX internally under some circumstances. When inserting HTML that contains scripts, the browser will not execute them. The scripts need to be executed manually. These scripts may attach click handlers. Assume a user clicks on an element before the handler is attached and the page would not function as intended. Therefore to prevent race conditions, synchronous AJAX would be used to fetch those scripts. Because synchronous AJAX effectively blocks everything else, it can be sure that scripts and events execute in the right order.
As of 2015 desktop javascript apps are becoming more popular. Usually in those apps when loading local files (and loading them using XHR is a perfectly valid option), the load speed is so fast that there is little point overcomplicating the code with async. Of course there might be cases where async is the way to go (requesting content from the internet, loading really big files or a huge number of files in a single batch), but otherwise sync works just fine (and is much easier to use).
Reason:
Let's say you have an ajax application which needs to do half a dozen http gets to load various data from the server before the user can do any interaction.
Obviously you want this triggered from onload.
Synchronous calls work very well for this without any added complexity to the code. It is simple and straightforward.
Drawback:
The only drawback is that your browser locks up until all data is loaded or a timeout happens. As for the ajax application in question, this isn't much of a problem because the application is of no use until all the initial data is loaded anyway.
Alternative?
However, many browsers lock up all windows/tabs while the javascript is busy in any one of them, which is a stupid browser design problem - but as a result, blocking on possibly slow network gets is not polite if it keeps users from using other tabs while waiting for an ajax page to load.
However, it looks like synchronous gets have been removed or restricted from recent browsers anyway. I'm not sure if that's because somebody decided they were just always bad, or if browser writers were confused by the W3C Working Draft on the topic.
http://www.w3.org/TR/2012/WD-XMLHttpRequest-20120117/#the-open-method does make it look like (see section 4.7.3) you are not allowed to set a timeout when using blocking mode. Seems counterintuitive to me: whenever one does blocking IO it's polite to set a reasonable timeout, so why allow blocking IO but not with a user-specified timeout?
My opinion is that blocking IO has a vital role in some situations but must be implemented correctly. While it is not acceptable for one browser tab or window to lock up all other tabs or windows, that's a browser design flaw. Shame where shame is due. But it is perfectly acceptable in some cases for an individual tab or window to be non-responsive for a couple of seconds (i.e. using blocking IO/HTTP GET) in some situations -- for example, on page load, perhaps a lot of data needs to be loaded before anything can be done anyway. Sometimes properly implemented blocking code is the cleanest way to do it.
Of course equivalent function in this case can be obtained using asynchronous http gets, but what sort of goofy routine is required?
I guess I would try something along these lines:
On document load, do the following:
1: Set up 6 global "Done" flag variables, initialized to 0.
2: Execute all 6 background gets (Assuming the order didn't matter)
Then, the completion callbacks for each of the 6 http get's would set their respective "Done" flags.
Also, each callback would check all the other done flags to see if all 6 HTTP gets had completed. The last callback to complete, upon seeing that all others had completed, would then call the REAL init function which would then set everything up, now that the data was all fetched.
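A sketch of that routine, using a counter in place of six separate flags (httpGetAsync and realInit are hypothetical):

var urls = ['/a', '/b', '/c', '/d', '/e', '/f'];
var results = [];
var done = 0;

urls.forEach(function (url, i) {
  httpGetAsync(url, function (response) {
    results[i] = response;
    done++;
    // The last callback to complete sees all requests finished
    // and calls the REAL init function.
    if (done === urls.length) {
      realInit(results);
    }
  });
});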
If the order of the fetching mattered -- or if the webserver was unable to accept multiple requests at same time -- then you would need something like this:
In onload(), the first http get would be launched.
In its callback, the second one would be launched.
In its callback, the third -- and so on and so forth, with each callback launching the next HTTP GET. When the last one returned, it would call the real init() routine.
What happens if you make a synchronous call in production code?
The sky falls down.
No seriously, the user does not like a locked up browser.
I use it to validate a username, during the check that the username does not exist already.
I know it would be better to do that asynchronously, but then I would need different code for this particular validation rule. Let me explain. My validation setup uses some validation functions, which return true or false, depending on whether the data is valid.
Since the function has to return, I cannot use asynchronous techniques, so I just make that synchronous and hope that the server will answer promptly enough not to be too noticeable. If I used an AJAX callback, then I would have to handle the rest of the execution differently from the other validation methods.
Sometimes you have an action that depends on others. For example, action B can only be started if A is finished. The synchronous approach is usually used to avoid race conditions. Sometimes using a synchronous call is a simpler implementation than creating complex logic to check every state of your asynchronous calls that depend on each other.
The problem with this approach is that you "block" the user's browser until the action is finished (until the request returns, finishes, loads, etc). So be careful when using it.
I use synchronous calls when developing code- whatever you did while the request was commuting to and from the server can obscure the cause of an error.
When it's working, I make it asynchronous, but I try to include an abort timer and failure callbacks, cause you never know...
SYNC vs ASYNC: What is the difference?
Basically it boils down to this:
console.info('Hello, World!');
doSomething(function handleResult(result) {
  console.info('Got result!');
});
console.info('Goodbye cruel world!');
When doSomething is synchronous this would print:
Hello, World!
Got result!
Goodbye cruel world!
In contrast, if doSomething is asynchronous, this would print:
Hello, World!
Goodbye cruel world!
Got result!
Because the function doSomething is doing its work asynchronously, it returns before its work is done. So we only get the result after printing Goodbye cruel world!
If we are depending on the result of an async call, we need to place the depending code in the callback:
console.info('Hello, World!');
doSomething(function handleResult(result) {
  console.info('Got result!');
  if (result === 'good') {
    console.info('I feel great!');
  }
  else {
    console.info('Goodbye cruel world!');
  }
});
As such, just the fact that two or three things need to happen in order is no reason to do them synchronously (though sync code is easier for most people to work with).
WHY USE SYNCHRONOUS XMLHTTPREQUEST?
There are some situations where you need the result before the called function completes. Consider this scenario:
function lives(name) {
  return (name !== 'Elvis');
}

console.info('Elvis ' + (lives('Elvis') ? 'lives!' : 'has left the building...'));
Suppose we have no control over the calling code (the console.info line) and need to change function lives to ask the server... There is no way we can do an async request to the server from within lives and still have our response before lives completes. So we wouldn't know whether to return true or false. The only way to get the result before the function completes is by doing a synchronous request.
As Sami Samhuri mentions in his answer, a very real scenario where you may need an answer to your server request before your function terminates is the onbeforeunload event, as it's the last function from your app that will ever run before the window is closed.
I DON'T NEED SYNCH CALLS, BUT I USE THEM ANYWAY AS THEY ARE EASIER
Please don't. Synchronous calls lock up your browser and make the app feel unresponsive. But you are right: async code is harder. There is, however, a way to make dealing with it much easier. Not as easy as sync code, but it's getting close: Promises.
Here is an example: two async calls should both complete successfully before a third segment of code may run:
var carRented = rentCar().then(function(car) {
  gasStation.refuel(car);
});
var hotelBooked = bookHotel().then(function(reservation) {
  reservation.confirm();
});
Promise.all([carRented, hotelBooked]).then(function() {
  // At this point our car is rented and our hotel booked.
  goOnHoliday();
});
Here is how you would implement bookHotel:
function bookHotel() {
  return new Promise(function(resolve, reject) {
    if (roomsAvailable()) {
      var reservation = reserveRoom();
      resolve(reservation);
    }
    else {
      reject(new Error('Could not book a reservation. No rooms available.'));
    }
  });
}
See also: Write Better JavaScript with Promises.
XMLHttpRequest is traditionally used for asynchronous requests. Sometimes (for debugging, or for specific business logic) you would like to change all/several of the async calls in one page to sync.
You would like to do it without changing everything in your JS code. The async/sync flag gives you that ability, and if designed correctly, you need only change one line in your code, or the value of one variable at execution time.
Firefox (and probably all non-IE browsers) does not support a timeout on async XHR.
HTML5 WebWorkers do support timeouts. So, you may want to wrap a sync XHR request in a WebWorker with a timeout to implement async-like XHR-with-timeout behaviour.
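A rough sketch of that wrapper (the file name and handler names are hypothetical; synchronous XHR is permitted inside workers):

// sync-xhr-worker.js: runs the blocking request off the main thread
self.onmessage = function (e) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', e.data.url, false); // sync is allowed in a worker
  xhr.send(null);
  self.postMessage(xhr.responseText);
};

// Main page: wrap the worker with a timeout.
var worker = new Worker('sync-xhr-worker.js');
var timer = setTimeout(function () {
  worker.terminate(); // give up if the request takes too long
  onTimeout();        // hypothetical timeout handler
}, 5000);
worker.onmessage = function (e) {
  clearTimeout(timer);
  onSuccess(e.data);  // hypothetical success handler
};
worker.postMessage({ url: '/data' });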
I just had a situation where asynchronous requests for a list of urls called in succession using forEach (and a for loop) would cause the remaining requests to be cancelled. I switched to synchronous and they work as intended.
Synchronous XHR can be very useful for (non-production) internal tool and/or framework development. Imagine, for example, you wanted to load a code library synchronously on first access, like this:
get draw()
{
  if (!_draw)
  {
    let file;
    switch (config.option)
    {
      case 'svg':
        file = 'svgdraw.js';
        break;
      case 'canvas':
        file = 'canvasdraw.js';
        break;
      default:
        file = 'webgldraw.js';
    }
    var request = new XMLHttpRequest();
    request.open('GET', file, false); // false = synchronous request
    request.send(null);
    _draw = eval(request.responseText);
  }
  return _draw;
}
Before you get yourself in a tizzy and blindly regurgitate the evils of eval, keep in mind that this is only for local testing. For production builds, _draw would already be set.
So, your code might look like this:
foo.drawLib.draw.something(); //loaded on demand
This is just one example of something that would be impossible to do without sync XHR. You could load this library up front, yes, or do a promise/callback, but you could not load the lib synchronously without sync XHR. Think about how much this type of thing could clean up your code...
The limits to what you can do with this for tooling and frameworks (running locally) is only limited by your imagination. Though, it appears imagination is a bit limited in the JavaScript world.
Using synchronous HTTP requests is a common practice in the mobile advertisement business.
Companies (aka "Publishers") that build applications often run ads to generate revenue. For this they install advertising SDKs into their app. Many exist (MoPub, Ogury, TapJob, AppNext, Google Ads AdMob).
These SDKs will serve ads in a webview.
When serving an ad to a user, it has to be a smooth experience, especially when playing a video. There should be no buffering or loading at any moment.
To solve this, precaching is used, where the media (pictures/videos/etc.) are loaded synchronously in the background of the webview.
Why not do it asynchronously?
This is part of a globally accepted standard
The SDK listens for the onload event to know when the ad is "ready" to be served to the user
With the deprecation of synchronous XMLHttpRequest, the ad business will most likely be forced to change this standard in the future unless another way can be determined.
Well, here's one good reason. I wanted to do an HTTP request and then, depending on the result, call click() on an input type=file. This is not possible with asynchronous XHR or fetch. The callback loses the "user action" context, so the call to click() is ignored. Synchronous XHR saved my bacon.
onclick(event) {
  // here I can, but I don't want to.
  // document.getElementById("myFileInput").click();
  fetch("Validate.aspx", { method: "POST", body: formData, credentials: "include" })
    .then((response) => response.json())
    .then(function (validResult) {
      if (validResult.success) {
        // here, I can't.
        document.getElementById("myFileInput").click();
      }
    });
}
Because chrome.webRequest.*.addListener does not support asynchronous handlers.
