Is there any reason to use a synchronous XMLHttpRequest? - javascript

It seems almost everyone does asynchronous requests with XMLHttpRequest, but the fact that the API also offers synchronous requests suggests there might be a valid reason to use them. So what might that valid reason be?

Synchronous XHRs are useful for saving user data. If you handle the beforeunload event you can upload data to the server as the user closes the page.
If this were done using the async option, then the page could close before the request completes. Doing this synchronously ensures the request completes or fails in an expected way.
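A minimal sketch of that idea, assuming a placeholder /save endpoint and a hypothetical getUnsavedData() helper (note that newer browsers have been restricting sync XHR during unload, as the updates below mention):
window.addEventListener('beforeunload', function () {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/save', false); // third argument false = synchronous
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify({ draft: getUnsavedData() })); // hypothetical helper
});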

I think they might become more popular as HTML 5 standards progress. If a web application is given access to web workers, I could foresee developers using a dedicated web worker to make synchronous requests for, as Jonathan said, to ensure one request happens before another. With the current situation of one thread, it is a less than ideal design as it blocks until the request is complete.
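A sketch of what that worker-based approach could look like; the worker.js file name and /api/data URL are placeholders. The synchronous XHR blocks only the worker thread, so the UI stays responsive:
// worker.js
self.onmessage = function (e) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', e.data.url, false); // synchronous, but off the main thread
  xhr.send(null);
  self.postMessage({ status: xhr.status, body: xhr.responseText });
};

// main page
var worker = new Worker('worker.js');
worker.onmessage = function (e) { console.log(e.data.body); };
worker.postMessage({ url: '/api/data' });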

Update:
The answer below hinted at - but never quite delivered - the point that with the advent of better asynchronous request handling, there really is no reason to use synchronous requests, unless you intend to purposely block users from doing anything until a request completes - which sounds malicious :)
Although, this may sound bad, there may be times where it's important that a request (or series of requests) occur before a user leaves a page, or before an action is performed - blocking other code execution (e.g., preventing back button) could possibly reduce errors/maintenance for a poorly designed system; that said, I've never seen it in the wild and stress that it should be avoided.
Libraries, like Promise, feign synchronicity by chaining processes via callbacks. This suits the majority of development needs, where the desire is to have ordered, non-blocking events that let the browser retain responsiveness for the user (good UX).
As stated in the Mozilla docs, there are cases where you have to use synchronous requests; however, a workaround is also listed that uses the Beacon API (navigator.sendBeacon, not available in IE/Safari) for such cases. While this is experimental, if it ever reaches standards acceptance, it could put a nail in the synchronous-request coffin.
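A sketch of that Beacon workaround, assuming a placeholder /analytics endpoint: the browser queues the POST and delivers it even as the page unloads, with no blocking.
window.addEventListener('unload', function () {
  // Fire-and-forget: returns immediately, the browser sends it in the background.
  navigator.sendBeacon('/analytics', JSON.stringify({ event: 'page-exit' }));
});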
You'd want to perform synchronous calls in any sort of transaction-like processing, or wherever any order of operation is necessary.
For instance, let's say you want to customize an event to log you out after playing a song. If the logout operation occurs first, then the song will never be played. This requires synchronizing the requests.
Another reason would be when working with a WebService, especially when performing math on the server.
Example: Server has a variable with value of 1.
Step (1) Perform Update: add 1 to variable
Step (2) Perform Update: set variable to the power of 3
End Value: variable equals 8
If Step (2) occurs first, then the end value is 2, not 8; thus order of operation matters and synchronization is needed.
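Note that ordering alone does not require blocking: here is a sketch of the same two updates serialized with async/await (the endpoints are placeholders), so each update only fires after the previous one resolves.
async function runUpdates() {
  await fetch('/api/variable/add?amount=1', { method: 'POST' });   // 1 -> 2
  await fetch('/api/variable/pow?exponent=3', { method: 'POST' }); // 2 -> 8
}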
There are very few times that a synchronous call may be justified in a common real world example. Perhaps when clicking login and then clicking a portion of the site that requires a user to be logged in.
As others have said, it will tie up your browser, so stay away from it where you can.
Instead of synchronous calls, though, often users want to stop an event that is currently loading and then perform some other operation. In a way this is synchronization, since the first event is quit before the second begins. To do this, use the abort() method on the xml connection object.
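A small sketch of that abort() pattern (the URL handling is schematic): any in-flight request is cancelled before the next one starts.
var currentXhr = null;
function load(url, onDone) {
  if (currentXhr) currentXhr.abort(); // quit the first request before the second begins
  var xhr = currentXhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.onload = function () { onDone(xhr.responseText); };
  xhr.send(null);
}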

I'd say that if you consider blocking the user's browser while the request completes acceptable, then sure use a synchronous request.
If serialization of requests is your aim, then this can be accomplished using async requests, by having the onComplete callback of your previous request fire the next in line.
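A sketch of that serialization pattern with plain callbacks (the URLs are placeholders): each request fires only from the completion handler of the previous one.
function getAsync(url, onComplete) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true); // true = asynchronous
  xhr.onload = function () { onComplete(xhr.responseText); };
  xhr.send(null);
}

getAsync('/step1', function () {
  getAsync('/step2', function () {
    getAsync('/step3', function (result) {
      console.log('all three done, in order:', result);
    });
  });
});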

There are many real world cases where blocking the UI is exactly the desired behaviour.
Take an app with multiple fields, where some fields must be validated by an xmlhttp call to a remote server, providing as input that field's value along with other fields' values.
In synchronous mode, the logic is simple, the blocking experienced by the user is very short and there is no problem.
In async mode, the user may change the values of any other fields while the initial one is being validated. These changes will trigger other xmlhttp calls with values from the initial field not yet validated. What happens if the initial validation failed? Pure mess. If sync mode becomes deprecated and prohibited, the application logic becomes a nightmare to handle. Basically, the application has to be rewritten to manage locks (e.g., disabling other items during validation). Code complexity increases tremendously. Failing to do so may lead to logic failures and ultimately data corruption.
Basically the question is: what is more important, non-blocked UI experience or risk of data corruption ? The answer should remain with the application developer, not the W3C.

I can see a use for synchronous XHR requests when a resource in a variable location must be loaded before other static resources in the page that depend on it can fully function. In point of fact, I'm implementing such an XHR request in a little sub-project of my own, where JavaScript resources reside in variable locations on the server depending on a set of specific parameters. Subsequent JavaScript resources rely on those variable resources, and such files MUST be guaranteed to load before the other, dependent files are loaded, thus making the application whole.
That idea really kind of expands on vol7ron's answer. Transaction-based procedures are really the only time synchronous requests should be made. In most other cases, asynchronous calls are the better alternative, in which, after the call, the DOM is updated as necessary. In many cases, such as user-based systems, you could have certain features locked for "unauthorized users" until they have, so to speak, logged in. Those features, after the asynchronous call, are unlocked via a DOM update.
Finally, I'd have to say that I agree with most individuals' points on the matter: wherever possible, synchronous XHR requests should be avoided, since the browser locks up during synchronous calls. When synchronous requests are used, they should be made at a point where the browser would normally be locked anyway, say in the HEAD section before page loading actually occurs.

jQuery uses synchronous AJAX internally under some circumstances. When inserting HTML that contains scripts, the browser will not execute them; the scripts need to be executed manually. These scripts may attach click handlers. If a user clicked on an element before the handler was attached, the page would not function as intended. Therefore, to prevent race conditions, synchronous AJAX is used to fetch those scripts. Because synchronous AJAX effectively blocks everything else, it can be sure that scripts and events execute in the right order.

As of 2015 desktop javascript apps are becoming more popular. Usually in those apps when loading local files (and loading them using XHR is a perfectly valid option), the load speed is so fast that there is little point overcomplicating the code with async. Of course there might be cases where async is the way to go (requesting content from the internet, loading really big files or a huge number of files in a single batch), but otherwise sync works just fine (and is much easier to use).

Reason:
Let's say you have an ajax application which needs to do half a dozen http gets to load various data from the server before the user can do any interaction.
Obviously you want this triggered from onload.
Synchronous calls work very well for this without any added complexity to the code. It is simple and straightforward.
Drawback:
The only drawback is that your browser locks up until all data is loaded or a timeout happens. As for the ajax application in question, this isn't much of a problem because the application is of no use until all the initial data is loaded anyway.
Alternative?
However, many browsers lock up all windows/tabs while the javascript is busy in any one of them, which is a stupid browser design problem - but as a result, blocking on possibly slow network gets is not polite if it keeps users from using other tabs while waiting for the ajax page to load.
However, it looks like synchronous gets have been removed or restricted from recent browsers anyway. I'm not sure if that's because somebody decided they were just always bad, or if browser writers were confused by the W3C Working Draft on the topic.
http://www.w3.org/TR/2012/WD-XMLHttpRequest-20120117/#the-open-method does make it look like (see section 4.7.3) you are not allowed to set a timeout when using blocking mode. That seems counterintuitive to me: whenever one does blocking IO it's polite to set a reasonable timeout, so why allow blocking IO but not with a user-specified timeout?
My opinion is that blocking IO has a vital role in some situations, but must be implemented correctly. While it is not acceptable for one browser tab or window to lock up all other tabs or windows, that's a browser design flaw. Shame where shame is due. But it is perfectly acceptable in some cases for an individual tab or window to be non-responsive for a couple of seconds (i.e. using blocking IO / HTTP GET) - for example, on page load, perhaps a lot of data needs to be loaded before anything can be done anyway. Sometimes properly implemented blocking code is the cleanest way to do it.
Of course equivalent function in this case can be obtained using asynchronous http gets, but what sort of goofy routine is required?
I guess I would try something along these lines:
On document load, do the following:
1: Set up 6 global "Done" flag variables, initialized to 0.
2: Execute all 6 background gets (Assuming the order didn't matter)
Then, the completion callbacks for each of the 6 HTTP gets would set their respective "Done" flags.
Also, each callback would check all the other done flags to see if all 6 HTTP gets had completed. The last callback to complete, upon seeing that all others had completed, would then call the REAL init function which would then set everything up, now that the data was all fetched.
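A sketch of that pattern, using a single counter instead of six separate flags (urls and realInit() stand in for the app's own endpoints and init routine):
var remaining = urls.length; // the six endpoints; order doesn't matter
urls.forEach(function (url) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.onload = function () {
    if (--remaining === 0) {
      realInit(); // the last callback to complete kicks off the REAL init
    }
  };
  xhr.send(null);
});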
If the order of the fetching mattered -- or if the webserver was unable to accept multiple requests at the same time -- then you would need something like this:
In onload(), the first http get would be launched.
In its callback, the second one would be launched.
In its callback, the third - and so on and so forth, with each callback launching the next HTTP GET. When the last one returned, it would call the real init() routine.

What happens if you make a synchronous call in production code?
The sky falls down.
No seriously, the user does not like a locked up browser.

I use it to validate a username, checking that the username does not already exist.
I know it would be better to do that asynchronously, but then I would need different code for this particular validation rule. Let me explain: my validation setup uses some validation functions, which return true or false depending on whether the data is valid.
Since the function has to return, I cannot use asynchronous techniques, so I just make that synchronous and hope that the server will answer promptly enough not to be too noticeable. If I used an AJAX callback, then I would have to handle the rest of the execution differently from the other validation methods.
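A sketch of what such a validator might look like (the /check-username endpoint and response format are assumptions): because the function must return a boolean, it blocks on a synchronous request.
function usernameIsFree(username) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/check-username?u=' + encodeURIComponent(username), false); // blocking
  xhr.send(null);
  return xhr.status === 200 && xhr.responseText === 'free'; // assumed response format
}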

Sometimes you have an action that depends on others. For example, action B can only be started if A is finished. The synchronous approach is usually used to avoid race conditions. Sometimes using a synchronous call is a simpler implementation than creating complex logic to check every state of asynchronous calls that depend on each other.
The problem with this approach is that you "block" the user's browser until the action is finished (until the request returns, finishes, loads, etc). So be careful when using it.

I use synchronous calls when developing code - whatever happened while the request was commuting to and from the server can obscure the cause of an error.
When it's working, I make it asynchronous, but I try to include an abort timer and failure callbacks, cause you never know...

SYNC vs ASYNC: What is the difference?
Basically it boils down to this:
console.info('Hello, World!');
doSomething(function handleResult(result) {
  console.info('Got result!');
});
console.info('Goodbye cruel world!');
When doSomething is synchronous this would print:
Hello, World!
Got result!
Goodbye cruel world!
In contrast, if doSomething is asynchronous, this would print:
Hello, World!
Goodbye cruel world!
Got result!
Because the function doSomething is doing its work asynchronously, it returns before its work is done. So we only get the result after printing Goodbye cruel world!
If we depend on the result of an async call, we need to place the dependent code in the callback:
console.info('Hello, World!');
doSomething(function handleResult(result) {
  console.info('Got result!');
  if (result === 'good') {
    console.info('I feel great!');
  } else {
    console.info('Goodbye cruel world!');
  }
});
As such, the mere fact that two or three things need to happen in order is no reason to do them synchronously (though sync code is easier for most people to work with).
WHY USE SYNCHRONOUS XMLHTTPREQUEST?
There are some situations where you need the result before the called function completes. Consider this scenario:
function lives(name) {
  return (name !== 'Elvis');
}
console.info('Elvis ' + (lives('Elvis') ? 'lives!' : 'has left the building...'));
Suppose we have no control over the calling code (the console.info line) and need to change function lives to ask the server... There is no way we can do an async request to the server from within lives and still have our response before lives completes. So we wouldn't know whether to return true or false. The only way to get the result before the function completes is by doing a synchronous request.
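A sketch of what that forced-synchronous version of lives could look like (the /lives endpoint and its true/false response body are assumptions):
function lives(name) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/lives?name=' + encodeURIComponent(name), false); // blocking on purpose
  xhr.send(null);
  return xhr.responseText === 'true'; // must have the answer before lives() returns
}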
As Sami Samhuri mentions in his answer, a very real scenario where you may need an answer to your server request before your function terminates is the onbeforeunload event, as it's the last function from your app that will ever run before the window is closed.
I DON'T NEED SYNCH CALLS, BUT I USE THEM ANYWAY AS THEY ARE EASIER
Please don't. Synchronous calls lock up your browser and make the app feel unresponsive. But you are right: async code is harder. There is, however, a way to make dealing with it much easier. Not as easy as sync code, but it's getting close: Promises.
Here is an example: two async calls should both complete successfully before a third segment of code may run:
var carRented = rentCar().then(function(car) {
  gasStation.refuel(car);
});
var hotelBooked = bookHotel().then(function(reservation) {
  reservation.confirm();
});
Promise.all([carRented, hotelBooked]).then(function() {
  // At this point our car is rented and our hotel booked.
  goOnHoliday();
});
Here is how you would implement bookHotel:
function bookHotel() {
  return new Promise(function(resolve, reject) {
    if (roomsAvailable()) {
      var reservation = reserveRoom();
      resolve(reservation);
    } else {
      reject(new Error('Could not book a reservation. No rooms available.'));
    }
  });
}
See also: Write Better JavaScript with Promises.

XMLHttpRequest is traditionally used for asynchronous requests. Sometimes (for debugging, or specific business logic) you would like to change all/several of the async calls in one page to sync.
You would like to do it without changing everything in your JS code. The async/sync flag gives you that ability, and if designed correctly, you need only change one line in your code/change the value of one var during execution time.

Firefox (and probably all non-IE browsers) does not support timeouts on async XHR.
Stackoverflow discussion
Mozilla Firefox XMLHttpRequest
HTML5 WebWorkers do support timeouts, so you may want to wrap a sync XHR request in a WebWorker with a timeout to implement async-like XHR with timeout behaviour.
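A sketch of that wrapper, assuming a hypothetical xhr-worker.js that performs the blocking request (like the sync-XHR worker sketch shown earlier on this page): the main thread stays free, and terminate() enforces the timeout.
function getWithTimeout(url, ms, onDone, onTimeout) {
  var worker = new Worker('xhr-worker.js'); // does the sync XHR off the main thread
  var timer = setTimeout(function () {
    worker.terminate(); // kill the blocked worker if it takes too long
    onTimeout();
  }, ms);
  worker.onmessage = function (e) {
    clearTimeout(timer);
    onDone(e.data);
  };
  worker.postMessage({ url: url });
}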

I just had a situation where asynchronous requests for a list of urls called in succession using forEach (and a for loop) would cause the remaining requests to be cancelled. I switched to synchronous and they work as intended.

Synchronous XHR can be very useful for (non-production) internal tool and/or framework development. Imagine, for example, you wanted to load a code library synchronously on first access, like this:
get draw() {
  if (!_draw) {
    let file;
    switch (config.option) {
      case 'svg':
        file = 'svgdraw.js';
        break;
      case 'canvas':
        file = 'canvasdraw.js';
        break;
      default:
        file = 'webgldraw.js';
    }
    var request = new XMLHttpRequest();
    request.open('GET', file, false);
    request.send(null);
    _draw = eval(request.responseText);
  }
  return _draw;
}
Before you get yourself in a tizzy and blindly regurgitate the evils of eval, keep in mind that this is only for local testing. For production builds, _draw would already be set.
So, your code might look like this:
foo.drawLib.draw.something(); //loaded on demand
This is just one example of something that would be impossible to do without sync XHR. You could load this library up front, yes, or do a promise/callback, but you could not load the lib synchronously without sync XHR. Think about how much this type of thing could clean up your code...
What you can do with this for tooling and frameworks (running locally) is limited only by your imagination. Though it appears imagination is a bit limited in the JavaScript world.

Using synchronous HTTP requests is a common practice in the mobile advertisement business.
Companies (aka "Publishers") that build applications often run ads to generate revenue. For this they install advertising SDKs into their app. Many exist (MoPub, Ogury, TapJob, AppNext, Google Ads AdMob).
These SDKs will serve ads in a webview.
When serving an ad to a user, it has to be a smooth experience, especially when playing a video. There should be no buffering or loading at any moment.
To solve this, precaching is used, where the media (pictures / videos / etc.) are loaded synchronously in the background of the webview.
Why not do it asynchronously?
This is part of a globally accepted standard
The SDK listens for the onload event to know when the ad is "ready" to be served to the user
With the deprecation of synchronous XMLHttpRequests, ad business will most likely be forced to change the standard in the future unless another way can be determined.

Well, here's one good reason. I wanted to do an http request and then, depending on the result, call click() on an input type=file. This is not possible with asynchronous xhr or fetch: the callback loses the "user action" context, so the call to click() is ignored. Synchronous xhr saved my bacon.
onclick(event) {
  //here I can, but I don't want to.
  //document.getElementById("myFileInput").click();
  fetch("Validate.aspx", { method: "POST", body: formData, credentials: "include" })
    .then((response) => response.json())
    .then(function (validResult) {
      if (validResult.success) {
        //here, I can't.
        document.getElementById("myFileInput").click();
      }
    });
}
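For contrast, a sketch of the synchronous variant this answer relies on: because send() blocks, the click() below still runs inside the original user gesture (per the answer's claim).
onclick(event) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "Validate.aspx", false); // synchronous: blocks inside the click handler
  xhr.send(formData);
  var validResult = JSON.parse(xhr.responseText);
  if (validResult.success) {
    document.getElementById("myFileInput").click(); // still counts as a user action
  }
}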

Because chrome.webRequest.*.addListener does not support asynchronous handlers.

Related

Ajax calls increase the website load time

Hi, I am working on a web app. In that app there is code like below:
$(document).ready(function() {
  getNewLetterContent();
  getVideoComplition();
  getRssFeed();
  intializeEvents();
});
Each of these functions makes an ajax call to a third-party api. Most of these calls take a long time to get a response, which ultimately makes the app slow to load. The responses we get back are not very important for the initial view of the app (above-the-fold content). I have searched the internet for a solution and replaced document.ready with window.load, but that doesn't make much difference. Can you guys please help me improve the performance of these calls?
You should check the code in all these functions. By default, JavaScript engines have an event loop - an event-driven system which notifies about events and invokes the needed callbacks. It means that ajax calls per se shouldn't slow your webpage, because they are just small pieces of javascript (just a few functions to invoke and to send XHR/fetch requests).
There is a chance that these functions have some heavy code, which is blocking, and therefore page is really slow (it might be the case if all these functions are old 3rd-party-libraries).
Also, there are a few possibilities even with fully asynchronous code. First of all, there is a maximum number of concurrent requests, and if you exceed it heavily, the page will be slow (I had this problem and added explicit waiting through promises).
Also, another possibility is that some function actually loaded data, and started some heavy manipulation on the page (changing DOM, forcing recalculation of styles, adding animations, etc). Each case should be investigated, but I recommend to start looking at the network tab in the chrome console.
The simplest solution is to defer these calls with a timeout inside the ready handler; this will complete your page load without depending on these functions.
$(document).ready(function() {
  setTimeout(function() {
    getNewLetterContent();
    getVideoComplition();
    getRssFeed();
    intializeEvents();
  }, 8000);
});

Best Server API and Client Side Javascript Interaction Methods?

Currently, I'm using setTimeout() to pause a for loop on a huge list so that I can add some styling to the page. For instance,
Eg: http://imdbnator.com/process?id=wtf&redirect=false
What I use setTimeOut for:
I use setTimeout() to add images, text, and a css progress bar (Why doesn't Progress Bar dynamically change unlike Text?).
Clearly, as you can see it is quite painful for a user to just browse through the page and hover over a few images. It gets extremely laggy. Is there any any workaround to this?
My FOR Loop:
Each iteration of the for loop makes an ajax request in the background to a PHP API. It definitely costs me some efficiency there, but how do all other websites pull it off with such elegance? I mean, I've seen websites show a nice loading image with no user interference while making an API request. While I try to do something like that, I have to set a time-out every time.
Is it that they use better server-client interaction technologies, like the node.js I've heard about?
Also, I've thought of a few alternatives but run into other complications. I would greatly appreciate it if you could help me with each of these possible alternatives.
Method 1:
Instead of making an AJAX call to my PHP API through jQuery, I could do a complete server-side script altogether. But then the problem I run into is that I cannot make a good client-side page (as in my current page) which updates the progress bar and adds dynamic images after each item of the list is processed. Or is this possible?
Method 2: (Edited)
Like one of the useful answers below says, I think the biggest problem is the server API and client interaction. Websockets, as suggested there, look promising to me. Will they necessarily be a better fix than a setTimeout? Is there any significant time difference if, let's say, I replace my current 1000 AJAX requests with a websocket?
Also, I would appreciate it if there is anything other than websockets that is better than an AJAX call.
How do professional websites get around with a fluidic server and client side interactions?
Edit 1: Please explain how professional websites (such as http://www.cleartrip.com when you are requesting for flight details) provide a smooth client side while processing the server side.
Edit 2: As @Syd suggested, that is something I'm looking for. I think there is a lot of delay in my current client and server interaction. Websockets seem to be a fix for that. What are the other/best ways for improving server-client interaction apart from standard AJAX?
Your first link doesn't work for me but I'll try to explain a couple of things that might help you if I understand your overall problem.
First of all, it is bad to have synchronous calls with large amounts of data that require processing in your main UI thread, because the user experience might suffer a lot. For reference you might want to take a look into "Is it feasible to do an AJAX request from a Web Worker?"
If I understand correctly you want to load some data on demand based on an event.
Here you might want to sit back and think about what the best event for your need is; it's quite different to make an ajax request every once in a while, especially when you have a lot of traffic. Also, you might want to check that your previous request has completed before you initialize the next one (this might not be needed in some cases though). Have a look at async.js if you want to create chained asynchronous code execution without facing the javascript "pyramid of doom" effect and messy code.
Moreover, you might want to "validate - halt" the event before making the actual request. For example, let's assume a user triggers a "mouseenter": you should not just fire an ajax call. Hold your breath, use setTimeout, and check that the user didn't fire any other "mouseenter" event for the next 250 ms - this will allow your server to breathe. The same goes for implementations that load content based on scroll: you should not fire an event if the user scrolls like a maniac. So validate the events.
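A sketch of that debounce idea (element and fireAjaxRequest() are hypothetical stand-ins for the actual target and request):
var debounceTimer = null;
element.addEventListener('mouseenter', function () {
  clearTimeout(debounceTimer); // a newer event cancels the pending request
  debounceTimer = setTimeout(function () {
    fireAjaxRequest(); // only fires if the user settled for 250 ms
  }, 250);
});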
Also, loops and iterations: we all know that if the damn loop is too long and does heavy lifting, you might experience unwanted results. To overcome this you might want to look into timed loops (take a look at the snippet below) - basically loops that break after x amount of time and continue after a while. Here are some references that helped me with a three.js project: "optimizing-three-dot-js-performance-simulating-tens-of-thousands-of-independent-moving-objects" and "Timed array processing in JavaScript".
//Copyright 2009 Nicholas C. Zakas. All rights reserved.
//MIT Licensed
function timedChunk(items, process, context, callback) {
  var todo = items.concat(); //create a clone of the original
  setTimeout(function chunk() {
    var start = +new Date();
    do {
      process.call(context, todo.shift());
    } while (todo.length > 0 && (+new Date() - start < 50));
    if (todo.length > 0) {
      setTimeout(chunk, 25); //named function instead of deprecated arguments.callee
    } else {
      callback(items);
    }
  }, 25);
}
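A possible usage for the list styling described above (movieItems and addPosterAndProgress() are hypothetical): each chunk styles items for at most ~50 ms, then yields so hover and scroll stay responsive.
timedChunk(movieItems, function (item) {
  addPosterAndProgress(item); // per-item styling: image, text, progress bar
}, null, function () {
  console.log('all items processed');
});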
cleartrip.com probably uses some of these techniques. From what I've seen, it gets a chunk of data when you visit the page and then, upon scroll, fetches other chunks as well. The trick here is to fire the request a little before the user reaches the bottom of the page in order to provide a smooth experience. Regarding the left-side filters, they only filter out data already in the browser; no more requests are made. So you fetch and keep something like a cache (in other scenarios caching might be unwanted, for live data feeds etc.).
Finally If you are interested for further reading and smaller overhead in data transactions you might want to take a look into "WebSockets".
You must use async AJAX calls. Right now, the user interaction is blocked while the HTTP ajax request is being done.
Q: "how professional websites (such as cleartrip.com) provide a smooth client side while processing the server side."
A: By using async AJAX calls

Best practice for handling async Javascript code in a application framework

I am trying to build a web application, where I have abstracted logic in a Javascript controller class. I want to minimize function calls outside of that class.
Once in a page in my application, whenever a user performs a task, I want to mark a previous action as completed on my server. So I call a method in my Javascript class, e.g. EndTask(id), which in turn sends an ajax command to the server. Now I want to wait for a response to my ajax command before allowing the user to proceed further.
I also want to ensure that my Javascript API is kept to a minimum and the page does not have to register any callbacks.
So in essence, I would like to call just EndTask(), and when the function returns, the page would understand that the whole process of marking the task as completed has been performed.
Is there any way to do this in Javascript, without marking the ajax call as synchronous?
It is possible to make an asynchronous function synchronous in JavaScript but to do so you would need to make use of generators which are currently only supported by Firefox.
I would suggest that you use synchronous AJAX calls and fall back on generators in Firefox, since it doesn't support synchronous XMLHttpRequest objects. Beware, however, that this might lead to redundancy in your code.

Synchronous vs. asynchronous database access

I want to develop a complex game with possibly thousands of functions and database calls.
I am wondering if it's really necessary to do my database queries asynchronously. It's a pain to code, and all of my functions will need to use callbacks instead of the clean return method. Is this a normal approach?
Is coding these calls async really that much faster, considering a MySQL database processes a single query at a time?
Unless something changed drastically in Node.JS recently, you're pretty much forced to use async database access to scale well since all your user requests will execute on one single thread and synchronously waiting for the database will really drop your performance. If one user does a slow operation, all other users will have to wait until it's done.
Node.JS is really built for an async event driven flow, you'll get much better performance working with it than working around it.
Asynchronous requests are not faster than synchronous ones; no matter how you do them, they still do exactly the same thing. The only thing that changes is whether or not you block on the request.
When you go synchronous, the method that made the request stops its execution, waiting for the request to return; only when that comes through will it continue with its execution. When using an asynchronous request, there's no need to wait for the request to complete; you can just keep going, and when it's done the callback will be called.
Another thing: usually the database is the bottleneck when the application makes a lot of calls to the DBMS. Because of that, you might want to consider using some kind of caching to reduce the load on the DBMS.
The speed of the database engine + transmission times will be pretty much the same either way. The issue is that async calls do not block the caller. Thus async arch is the way to go for any "realtime-ish" systems that need to be highly responsive to other inputs. (Such as games which should always be very responsive to the human.)
The queries will be queued up at the database level in MySQL. There are lot more options if you can think of using Mongo DB for some of your data.
Like Nitzan said, you must know that "asynchronous requests are not faster than synchronous ones; no matter how you do them, they still do exactly the same thing."
No one talked about it, but if there are a lot of users and a lot of requests, you have other solutions to limit database access:
Cache files
Create them and update them on user actions or via CRON tasks, for things like:
User information
User alerts
User actions
User inventory
...
Stored database procedures
Some recurrent requests can be stored in MySQL as procedures.
Those will execute faster than a plain user request.

How to call an asynchronous JavaScript function and block the original caller

I have an interesting situation that my usually clever mind hasn't been able to come up with a solution for :) Here's the situation...
I have a class that has a get() method... this method is called to get stored user preferences... what it does is calls on some underlying provider to actually get the data... as written now, it's calling on a provider that talks cookies... so, get() calls providerGet() let's say, providerGet() returns a value and get() passes it along to the caller. The caller expects a response before it continues its work obviously.
Here's the tricky part... I am now trying to implement a provider that is asynchronous in nature (using local storage in this case)... so providerGet() would return right away, having dispatched a call to local storage that will, some time later, call a callback function that was passed to it... but since providerGet() already returned, and so did get() by extension to the original caller, it obviously hasn't returned the actual retrieved data.
So the question is simply: is there a way to essentially "block" the return from providerGet() until the asynchronous call returns? Note that for my purposes I'm not concerned with the performance implications this might have; I'm just trying to figure out how to make it work.
I don't think there's a way - certainly I know I haven't been able to come up with one - so I wanted to toss it out and see what other people can come up with :)
edit: I'm just learning now that the core of the problem, the fact that the web sql API is asynchronous, may have a solution... turns out there's a synchronous version of the API as well, something I didn't realize... I'm reading through docs now to see how to use it, but that would solve the problem nicely, since the only reason providerGet() was written asynchronously at all was to allow for that provider... the code that get() is part of is my own abstraction layer above various storage providers (cookies, web sql, localStorage, etc.), so the lowest common denominator has to win, which means if one is asynchronous they ALL have to be asynchronous... the only one that was is web sql... so if there's a way to do that synchronously my point becomes moot (still an interesting question generically, I think).
edit2: Ah well, no help there it seems... the synchronous version of the API isn't implemented in any browser, and even if it were, it's specified that it can only be used from worker threads, so it doesn't seem like it would help anyway. Although, reading some other things, it sounds like there's a way to pull off this trick using recursion... I'm throwing together some test code now; I'll post it if/when I get it working. It seems like a very interesting way to get around any such situation generically.
edit3: As per my comments below, there's really no way to do exactly what I wanted. The solution I'm going with to solve my immediate problem is to simply not allow usage of web SQL for data storage. It's not the ideal solution, but as that spec is in flux and not widely implemented anyway, it's not the end of the world... hopefully when it's properly supported the synchronous version will be available and I can plug in a new provider for it and be good to go. Generically, though, there doesn't appear to be any way to pull off this miracle... confirms what I expected was the case, but I wish I was wrong this one time :)
spawn a webworker thread to do the async operation for you.
pass it info it needs to do the task plus a unique id.
the trick is to have it send the result to a webserver when it finishes.
meanwhile...the function which spawned the webworker sends an ajax request to the same webserver
use the synchronous flag of the xmlhttprequest object (yes, it has a synchronous option). since it will block until the http request is complete, you can just have your webserver script poll the database for updates or whatever until the result has been sent to it.
ugly, i know. but it would block without hogging cpu :D
basically
function get(...) {
  spawnWebworker(...);
  var xhr = sendSynchronousXHR(...);
  return xhr.responseText;
}
No, you can't block until the async call finishes. It's that simple.
It sounds like you may already know this, but if you want to use asynchronous ajax calls, then you have to restructure the way your code is used. You cannot just have a .get() method that makes an asynchronous ajax call, blocks until it's complete and returns the result. The design pattern most commonly used in these cases (look at all of Google's javascript APIs that do networking, for example) is to have the caller pass you a completion function. The call to .get() will start the asynchronous operation and then return immediately. When the operation completes, the completion function will be called. The caller must structure their code accordingly.
You simply cannot write straight, sequential procedural javascript code when using asynchronous networking like:
var result = abc.get()
document.write(result);
The most common design pattern is like this:
abc.get(function(result) {
  document.write(result);
});
If your problem is several calling layers deep, then callbacks can be passed along to different levels and invoked when needed.
FYI, newer browsers support the concept of promises which can then be used with async and await to write code that might look like this:
async function someFunc() {
  let result = await abc.get();
  document.write(result);
}
This is still asynchronous. It is still non-blocking. abc.get() must return a promise that resolves to the value result. This code must be inside a function that is declared async and other code outside this function will continue to run (that's what makes this non-blocking). But, you get to write code that "looks" more like blocking code when local to the specific function it's contained within.
Why not just have the original caller pass in a callback of its own to get()? This callback would contain the code that relies on the response.
The get() method will forward the callback to providerGet(), which would then invoke it when it invokes its own callback.
The result of the fetch would be passed to the original caller's callback.
function get(arg1, arg2, fn) {
  // whatever code
  // call providerGet, passing along the callback
  providerGet(fn);
}

function providerGet(fn) {
  // do async activity
  // in the callback to the async, invoke the callback and pass it the data
  // ...
  fn(received_data);
  // ...
}

get('some_arg', 'another_arg', function(data) {
  alert(data);
});
When your async method starts, I would open some sort of modal dialog (that the user cannot close) telling them that the request is in process. When the request finishes, close the modal in your callback.
One possible way to do this is with jqModal, but that would require you to load jQuery into your project. I'm not sure if that's an option for you or not.
This is ugly, but anyway, I think the question is kind of implying that an ugly solution is desired...
In your get function, serialize your query into a string.
Open an iframe, passing (A) this serialized query and (B) a random number in querystring to this iframe
Your iframe has some javascript code that reads the SQL query and number from its own querystring
Your iframe asynchronously begins running the query.
When your iframe query is asynchronously finished, it sends it, along with the random number to a server of yours, say to /write.php?rand=###&reslt="blahblahblah"
Write.php saves this info somewhere
Back in your main script, after creating and loading the iframe, you create a synchronous AJAX request to your server, say to /read.php?rand=####
/read.php blocks until the written info is available, then returns it to your main page
Alternatively, to avoid sending the data over the network, you could instead have your iframe encode the result into a canvas-generated image that the browser caches (similar to the approach that the Zombie cookie reportedly used). Then your blocking script would try to continually load this image over and over again (with some small network-generated delay on each request) until the cached version is available, which you could recognize via some flag that you've set to indicate it's done.
