The "right" way to do synchronous HTTP request - javascript

You probably came here to chide me but this is a real use case.
In the world of online education, there are SCORM courses. I have to make old SCORM courses work on a site. SCORM courses are "web based" and run in a browser, but they expect to run in an iframe and they expect the parent to supply GetValue and SetValue methods.
So these SCORM courses are doing things like parent.SetValue("score", "90") and moving on. That function is supposed to return "false" if there was any issue.
SCORM comes from the '90s, and in the modern web we know we have to do callbacks/promises and HTTP fails "often". You might think the solution is a SetValue that writes to local data and then tries and retries until it gets through, but the SCORM course is typically set up to only move to the next screen if the SetValue worked, so you shouldn't let the user advance unless the SetValue actually was saved on the server.
TL;DR
Assuming a synchronous request is a requirement, what is the right way to do it?
So far I know of $.ajax({async:false ... but now browsers warn about that and sound like they're going to just ignore your request to be synchronous. I am thinking maybe using websockets or web workers or something is the right way to do a synchronous request in modern programming. But I don't know how to make a request like that. And I am not allowed to change the code of the SCORM courses (they are generated with various course-making tools).
To clarify, I have full control over the implementation of the SetValue function.
Will $.ajax({async:false ... work long term? (5-10 years)
NOTE: it is entirely acceptable in this use case to completely freeze the UI until the request either succeeds or fails. That's what the courses assume.

So far I know of $.ajax({async:false… but now browsers warn about that
This is the right way (if you're using jQuery), it sends a synchronous XMLHttpRequest. Just ignore the warning. It's a warning that you are using outdated technology, which you already know.
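For reference, this is roughly what such a SetValue boils down to under the hood, sketched with a raw synchronous XMLHttpRequest (the endpoint URL and payload format here are placeholders, not part of the question):

function SetValue(key, value) {
  var xhr = new XMLHttpRequest();
  // Passing false as the third argument makes the request synchronous;
  // the browser blocks (and logs a deprecation warning) until the response arrives.
  xhr.open("POST", "/scorm/setvalue", false);
  xhr.setRequestHeader("Content-Type", "application/json");
  try {
    xhr.send(JSON.stringify({ key: key, value: value }));
  } catch (e) {
    return "false"; // network error or the browser refused the sync request
  }
  return xhr.status === 200 ? "true" : "false"; // SCORM expects the strings "true"/"false"
}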
and sound like they're going to just ignore your request to be synchronous.
That's unlikely.
I am thinking maybe using websockets or web workers or something is the right way to do a syncronous request in modern programming.
No, websockets and web workers are always asynchronous; you can't use them to make an asynchronous request look synchronous (in fact, there's nothing that lets you do this).
Will $.ajax({async:false… work long term? (5-10 years)
We cannot know (and SO is not a crystal ball). It might, especially in older browsers, or it might not. Browser vendors are reluctant to break compatibility of features that run the web, and synchronous requests are still needed from time to time. At some point, too few (important) web pages will use it (<1%, <1‰, whatever threshold they decide on) and browsers will finally be confident enough to remove it. By that point, your business will hopefully have realised it needs to retire these outdated course-making tools.

Based on my experience with learning management systems, the answer is: fake it.
You wrote:
it is entirely acceptable in this use case to completely freeze the UI until the request either succeeds or fails. That's what the courses assume.
Perhaps your courses assume this, but this is not the case in any learning management system I've used over the past decade.
From what I've seen, learning management systems don't use synchronous requests because they block other scripts, which gives the impression the page/course is locked up or broken. The workaround is to use async calls via an abstraction layer (which includes the SCORM API), and return 'true' to the course even if you have no way of verifying that the AJAX call was in fact successful.
High-level view of how LMSs typically handle SCORM data:
When a course is launched, the LMS gets ALL of the course's existing SCORM data from the database, then puts it into a JavaScript object on the client side (accessible via the SCORM API). When you fetch data via SCORM, you are typically fetching data that is in this pre-loaded JS object -- you are NOT getting a real-time response directly from the database. Therefore AJAX is not needed when using SCORM's API.GetValue.
When you attempt to API.SetValue, you're initially storing the key/value pair in the JS object, not the SCORM database. Therefore the client-side JS object needs to synchronously indicate whether it successfully stored the data ('true') or not ('false'). The database -- and AJAX -- doesn't come into play until you try to persist the data to the database using API.Commit().
When you try to get a success value from API.Commit(), which is invoking AJAX, most LMSs will fake it. They will do an asynchronous request for the sake of ensuring the course doesn't feel broken, so the value returned from Commit() will almost always be 'true'. It's not reliable.
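To make that flow concrete, here is a minimal sketch of such an API object; the /scorm/commit endpoint and the data shape are invented for the example:

var API = {
  data: {}, // pre-loaded SCORM data for this attempt, fetched once at launch
  GetValue: function (key) {
    return this.data.hasOwnProperty(key) ? this.data[key] : ""; // read from the client-side cache, no AJAX
  },
  SetValue: function (key, value) {
    this.data[key] = value; // store locally and report success synchronously
    return "true";
  },
  Commit: function () {
    // Fire-and-forget persistence: the return value does not reflect the real server outcome.
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/scorm/commit", true); // asynchronous, so the course never feels frozen
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send(JSON.stringify(this.data));
    return "true"; // "faked", as described above
  }
};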

Related

What exactly are web workers and when to use them

I was reading up on XMLHttpRequest (Is there any reason to use a synchronous XMLHttpRequest?) here on SO, where I read in a thread from 2010 that, with the introduction of 'threads' in HTML5, developers might start to use synchronous APIs. Searching a bit on Google, I found the MDN page on web workers.
I have been writing JavaScript and Node for about a year now (assume a beginner), and I have yet to encounter something that makes use of these web workers. Maybe I need to read more code.
Now my question is, even though they seem to be very useful, why aren't they seen much in the wild? Also, what are the general use cases and guidelines for using them? Is it possible to reap the benefits of multithreaded processing in a Node.js environment? If so, why are all Node.js APIs still asynchronous?
Thank you.
A web-worker is strictly a clientside thing, so it has nothing to do with Node.js (EDIT: actually, see this module).
You might have heard that JavaScript is strictly single-threaded: if a function is doing some heavy calculation, nothing else is getting done, including animating icons, repainting the window, nothing. Thus, clientside JS should always avoid heavy computation, large loops and anything else that might usurp the thread for more than a fraction of a second.
Web-workers are the solution for that. Each web-worker is running in its own thread, and it can block as much as it wants - it won't affect the normal operation of the web page. The tradeoff is that it cannot have any access to the DOM: the fact that it doesn't affect the rendering means you cannot affect rendering with it. :) If a web-worker wants to render something, it would have to send a message to the main thread to do it.
Implementation-wise, each web-worker needs to be in a separate JS file. The reason why you don't see more of them is probably twofold: the average Joe probably doesn't know how to use them, and they are only needed when you need serious computation and don't want it to block your main thread - which is not that common in the first place, and when it is, the computation is commonly offloaded to the server (on clientside) or to separate processes (in Node.js).
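A minimal sketch of the pattern, assuming two files named worker.js and main.js:

// worker.js -- runs in its own thread and has no DOM access
self.onmessage = function (e) {
  var total = 0;
  for (var i = 0; i < e.data; i++) { total += i; } // heavy loop that would otherwise freeze the page
  self.postMessage(total);
};

// main.js -- the page's main thread
var worker = new Worker("worker.js");
worker.onmessage = function (e) {
  document.title = "Result: " + e.data; // only the main thread may touch the DOM
};
worker.postMessage(1e8); // kick off the computation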
Read more on HTML5 Rocks.

Chrome Extension Options from a content script

I'm porting my Firefox Extension to Chrome and the lack of a synchronous preferences service is making life fun.
I have an options page for my users and I'm using the approach from here: https://developer.chrome.com/extensions/options. I'll be running a content script to get data from the page and using sendMessage to send/receive callbacks to a background script.
The content script needs access to my Extension Options. It needs these before it does its processing. Of course the Storage API is asynchronous. I've tried cheating with Stratify.js to force the Storage API to behave synchronously, but that's ugly as heck.
That leaves me writing code like this:
chrome.storage.sync.get(defaultPrefs, function(myPrefs) {
  // Do all my webpage processing here,
  // basically writing my entire Extension inside
  // this call to chrome.storage.sync.get()
});
I've seen this question asked before with a few solutions, but they mostly use localStorage, which won't work since I want to use my preferences from here.
This just feels wrong, but if I'm going to use the Storage Sync API for my Extension preferences then I'm kinda stuck. The localStorage solutions I've used mostly involve calls to sendMessage and leave me stuck in the same sort of callback pattern. Am I missing something?
You are not missing anything. The Google API is callback-driven, as it follows the JavaScript philosophy, and you should accept that to become a good JavaScript developer.
One reason the Sync Storage API is asynchronous is the latency of the network and the potentially long time it can take to send/receive data to and from the synced storage. The JavaScript VM is single-threaded, so if the call to storage were synchronous and took some time, the user interface would freeze while waiting for the response. That is not acceptable for the user experience. The only way to avoid this behaviour is to use callbacks: you supply the function that you want to execute when the request has finished.
It's not the best pattern ever made, but it does the job. It has a limitation, though: callback hell. You can try to manage it with Promises and by defining simple, short functions that each do one atomic action. Taking a functional programming approach can help with this.
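For example, a small Promise wrapper around the storage call might look like this (a sketch, reusing the defaultPrefs object from the question and assuming a Promise implementation is available):

function getPrefs(defaults) {
  return new Promise(function (resolve, reject) {
    chrome.storage.sync.get(defaults, function (prefs) {
      // chrome.runtime.lastError is set if the storage call failed
      if (chrome.runtime.lastError) {
        reject(chrome.runtime.lastError);
      } else {
        resolve(prefs);
      }
    });
  });
}

getPrefs(defaultPrefs)
  .then(function (myPrefs) { /* do the webpage processing here */ })
  .catch(function (err) { console.error(err); });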
Another way to avoid it is to create an object that automatically synchronizes itself with the storage. That allows you to use it fully synchronously, but it makes handling possible errors more difficult. I have made one here. It lacks error handling and can be improved considerably, but you can get the idea.
I will try to improve this later but I lack time...

Why is org/arangodb/request synchronous?

Why is the new JavaScript module request synchronous? Is it supposed to be only used in a job queue?
Is there any way to make asynchronous http(s) requests in ArangoDB?
Full disclosure: I'm part of ArangoDB's development team and primarily work on Foxx and everything JavaScript. I'm also the guy who wrote the org/arangodb/request module.
ArangoDB is a different environment than Node.js, despite sharing many similarities (such as using the V8 JavaScript engine). Unlike Node.js (or the browser), ArangoDB uses a thread-based concurrency model and doesn't feature an Event Loop. However the threads are not exposed in JavaScript (and in fact in V8 every thread is fully isolated) so you normally don't even have to think of them.
In the browser and in Node.js functions like setTimeout work by delaying code execution via the Event Loop (until a certain amount of time has passed or until an external event has occurred).
In ArangoDB the code is always executed linearly. For example, incoming HTTP requests are passed to Foxx controllers in JavaScript and the response is sent as soon as the controller returns. Even if you could use setTimeout, the external resources you were working with (or even "internal" ones like the document collections and transactions) would likely be already gone by the time the delayed code could execute.
Because of this, the request function provided by the org/arangodb/request module is also entirely synchronous. Instead of returning a promise or taking a callback it directly returns the incoming response data. It is also decidedly not the same module as request on npm but rather a synchronous implementation based on that module's API to the extent that implementing its API is possible outside Node.js (e.g. not including streams and returning the remote response instead of taking callbacks).
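In practice a call looks roughly like this (a sketch based on the request-style API described above; the URL is a placeholder):

// Inside a Foxx controller, no callback and no promise:
var request = require('org/arangodb/request');
var res = request.get('https://api.example.com/data'); // blocks until the remote server responds
if (res.statusCode === 200) {
  var payload = JSON.parse(res.body);
  // use payload directly in the same request handler
}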
If you come from a Node.js/io.js background, this may feel wrong because non-blocking IO can achieve higher throughput, but keep in mind that the design goals of ArangoDB and Node.js are very different. Node.js is built around streams and network connections. ArangoDB is built as a persistent data storage and has to deal with transactions and locks instead.
It is probably not the best idea to access external APIs directly from your Foxx controllers if you have a high likelihood of serious network latency or if the external API's response is not essential to the client response. This is what the Foxx queues are for. Transactional e-mails are a prime example for this.
While Foxx is very versatile, its primary focus is to allow you to move most of your application (especially logic that benefits from running closer to the data) directly into the database. For small to medium scale projects, you can probably get away with doing external API calls in-bounds. But if your application is primarily concerned with talking to other services over the network, running that code in a database is probably not the optimal solution.
Luckily ArangoDB plays well with others, so it's easy to move your network-intensive code out of Foxx if you find that it becomes a performance bottleneck at higher loads. Foxx doesn't eliminate the need for application servers, but it can considerably reduce their complexity.
As a correction to Brian's answer: sadly promises won't let you write async code in a synchronous environment either. The Promises/A+ spec defines promises as having to be executed asynchronously. Where they aren't natively supported they still have to be built on top of existing functions like setTimeout or process.nextTick, neither of which ArangoDB implements.

mootools: I want to implement architecture similar to Big pipe in Facebook

I am developing an application in MooTools. I have used the Request class to implement pipelining.
I want to develop a superior method to handle client-server requests. I referred to the following article to understand how BigPipe works in Facebook.
http://www.facebook.com/notes/facebook-engineering/bigpipe-pipelining-web-pages-for-high-performance/389414033919
In Facebook, a JavaScript function is called on arrival of any server response to update the data on the user's screen (see the screenshot).
http://img815.imageshack.us/img815/5154/facebookna.jpg
If I get a basic model of such an architecture, I can start building the application using that code.
Can someone please provide me with such a basic model?
Till now I have designed an architecture in which response_data is stored in a global variable and then a function is called to update the data on the user's screen (I used a synchronous Request here), which is very slow.
So which method is superior, synchronous or asynchronous?
Firstly, thanks for the read, it was a very interesting blog post.
You may want to look into this library, which was inspired by Facebook's BigPipe. Note: I'm not endorsing it as I've never used it, but building it yourself is not trivial.
With regards to whether synchronous or asynchronous is better, that depends. Synchronous is simpler - the dependencies are obvious, and there's no overhead. Asynchronous is only an advantage if your resources are not fully utilised and your processing can be easily broken down into independent blocks. I can't tell what you're trying to do, so you need to decide for yourself where the performance bottleneck actually is, and whether architecting your application such that multiple sections can be downloaded, processed and rendered in parallel will actually provide an advantage.
As an example, if you're downloading a single, massive block of data to be rendered as a table in the browser, then breaking that data into multiple parallel downloads will improve performance - at the cost of creating some queuing system to deal with out-of-order responses. On the other hand, though technically slower, batching the download into synchronous blocks so that one block is downloaded and rendered before the next one is requested, will still do wonders to perceived performance, and is a much simpler alternative.
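To make the BigPipe idea a bit more concrete, here is a very rough sketch of the client side: the initial response contains empty placeholders, and the server then flushes one small <script> block per pagelet as it finishes rendering it (the function and element names are invented for the example).

// Placeholders sent in the initial response:
// <div id="pagelet-news"></div> <div id="pagelet-chat"></div>

// Called by each <script> chunk the server flushes as a pagelet becomes ready:
function renderPagelet(id, html) {
  var el = document.getElementById('pagelet-' + id);
  if (el) { el.innerHTML = html; }
}
// The server later streams e.g.: <script>renderPagelet('news', '<ul>...</ul>');</script>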

serverside processing vs client side processing + ajax?

Looking for some general advice and/or thoughts...
I'm creating what I think to be more of a web application than a web page, because I intend it to be like a Gmail app where you would leave the page open all day long while getting updates "pushed" to the page (for the interested, I'm using the comet programming technique). I've never created a web page before that was so rich in ajax and javascript (I am now a huge fan of jQuery). Because of this, time and time again when I'm implementing a new feature that requires a dynamic change in the UI that the server needs to know about, I am faced with the same question:
1) should I do all the processing on the client in javascript and post back as little as possible via ajax
or
2) should I post a request to the server via ajax, have the server do all the processing and then send back the new html, then on the ajax response do a simple assignment with the new HTML
I have been inclined to always follow #1. This web app, I imagine, may get pretty chatty with all the ajax requests. My thought is to minimize as much as possible the size of the requests and responses, and rely on the continuously improving javascript engines to do as much of the processing and UI updates as possible. I've discovered with jQuery I can do so much on the client side that I wouldn't have been able to do very easily before. My javascript code is actually much bigger and more complex than my serverside code. There are also simple calculations I need to perform, and I've pushed those to the client side, too.
I guess the main question I have is, should we ALWAYS strive for client side processing over server side processing whenever possible? I've always felt the less the server has to handle the better for scalability/performance. Let the power of the client's processor do all the hard work (if possible).
Thoughts?
There are several considerations when deciding if new HTML fragments created by an ajax request should be constructed on the server or client side. Some things to consider:
Performance. The work your server has to do is what you should be concerned with. By doing more of the processing on the client side, you reduce the amount of work the server does, and speed things up. If the server can send a small bit of JSON instead of giant HTML fragment, for example, it'd be much more efficient to let the client do it. In situations where it's a small amount of data being sent either way, the difference is probably negligible.
Readability. The disadvantage to generating markup in your JavaScript is that it's much harder to read and maintain the code. Embedding HTML in quoted strings is nasty to look at in a text editor with syntax coloring set to JavaScript and makes for more difficult editing.
Separation of data, presentation, and behavior. Along the lines of readability, having HTML fragments in your JavaScript doesn't make much sense for code organization. HTML templates should handle the markup and JavaScript should be left alone to handle the behavior of your application. The contents of an HTML fragment being inserted into a page is not relevant to your JavaScript code, just the fact that it's being inserted, where, and when.
I tend to lean more toward returning HTML fragments from the server when dealing with ajax responses, for the readability and code organization reasons I mention above. Of course, it all depends on how your application works, how processing intensive the ajax responses are, and how much traffic the app is getting. If the server is having to do significant work in generating these responses and is causing a bottleneck, then it may be more important to push the work to the client and forego other considerations.
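To illustrate the two options with jQuery (the URLs and markup are placeholders):

// Option 1: the server returns JSON and the client builds the markup
$.getJSON('/items.json', function (items) {
  var html = $.map(items, function (item) {
    return '<li>' + item.name + '</li>';
  }).join('');
  $('#list').html(html);
});

// Option 2: the server returns a ready-made HTML fragment and the client just inserts it
$.get('/items.html', function (fragment) {
  $('#list').html(fragment);
});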
I'm currently working on a pretty computationally-heavy application and I'm rendering almost all of it on the client side. I don't know exactly what your application is going to be doing (more details would be great), but I'd say your application could probably do the same. Just make sure all of your security- and database-related code lies on the server side, because not doing so will open security holes in your application. Here are some general guidelines that I follow:
Don't ever rely on the user having a super-fast browser or computer. Some people are using Internet Explorer 7 on old machines, and if it's too slow for them, you're going to lose a lot of potential customers. Test on as many different browsers and machines as possible.
Any time you have some code that could potentially slow down or freeze the browser momentarily, show a feedback mechanism (in most cases a simple "Loading" message will do) to tell the user that something is indeed going on, and the browser didn't just randomly freeze.
Try to load as much as you can during initialization and cache everything. In my application, I'm doing something similar to Gmail: show a loading bar, load up everything that the application will ever need, and then give the user a smooth experience from there on out. Yes, they're going to have to potentially wait a couple seconds for it to load, but after that there should be no problems.
Minimize DOM manipulation. Raw number-crunching JavaScript performance might be "fast enough", but access to the DOM is still slow. Avoid creating and destroying elements; instead simply hide them if you don't need them at the moment.
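A tiny sketch of the last two points (the element IDs and doHeavyWork are hypothetical): show feedback before slow work starts, and hide/show elements instead of destroying and recreating them.

// Show a "Loading" message, yield so it actually paints, then do the slow work
$('#loading').show();
setTimeout(function () {
  doHeavyWork();          // hypothetical long-running function
  $('#loading').hide();
}, 0);

// Reuse elements instead of destroying and rebuilding them
$('#detailsPanel').hide(); // later: $('#detailsPanel').show();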
I recently ran into the same problem and decided to go with browser-side processing. Everything worked great in FF, IE8, and IE8 in IE7 mode, but then our client, using Internet Explorer 7, ran into problems: the application would freeze up and a script timeout box would appear. I had put too much work into the solution to throw it away, so I ended up spending an hour or so optimizing the script and adding setTimeout wherever possible (a sketch of that trick follows the suggestions below).
My suggestions?
If possible, keep non-critical calculations client side.
To keep data transfers low, use JSON and let the client side sort out the HTML.
Test your script using the lowest common denominator.
If needed, use the profiling feature in Firebug. Corollary: use the uncompressed (development) version of jQuery.
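For what it's worth, the setTimeout trick mentioned above boils down to processing a long loop in small chunks so the browser gets a chance to repaint (and IE's long-running-script counter restarts) between them. A sketch, with an arbitrary chunk size:

function processInChunks(items, handleItem, chunkSize) {
  var i = 0;
  function next() {
    var end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) {
      handleItem(items[i]); // do one slice of the work
    }
    if (i < items.length) {
      setTimeout(next, 0);  // yield to the browser, then continue with the next slice
    }
  }
  next();
}

// e.g. processInChunks(bigArray, renderRow, 200);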
I agree with you. Push as much as possible to users, but not too much. If your app slows down or, even worse, crashes their browser, you lose.
My advice is to actually test how your application behaves when left turned on all day. Check that there are no memory leaks. Check that there isn't an ajax request created every half second after working with the application for a while (timers in JS can be a pain sometimes).
Apart from that, never rely on user input validation done only in javascript. Always duplicate it on the server.
Edit
Use jQuery live binding. It will save you a lot of time when rebinding generated content and will make your architecture clearer. Sadly, when I was developing with jQuery it wasn't available yet; we used other tools with the same effect.
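For example (a sketch): with live/delegated binding, the handler also applies to rows that are added later via ajax, so there is nothing to rebind.

// The old API this answer refers to:
$('.delete-button').live('click', function () { $(this).closest('tr').remove(); });

// The modern equivalent, delegated through a stable ancestor element:
$('#results').on('click', '.delete-button', function () {
  $(this).closest('tr').remove();
});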
In the past I also had a problem where generating one part of a page via ajax depended on another part having been generated. Generating the first part first and the second part second will make your page slower, as expected. Plan for this up front. Develop pages so that they already have all their content when opened.
Also (this applies to simple pages too), keep the number of files referenced from one server low. Join javascript and css libraries into one file on the server side. Keep images on a separate host, or better, on separate hosts (just creating a third-level domain will do too). This is worth it only in production, though; it will make the development process more difficult.
Of course it depends on the data, but the majority of the time, if you can push it client side, do. Make the client do more of the processing and use less bandwidth. (Again, this depends on the data; you can get into cases where you have to send more data across to do it client side.)
Some stuff like security checks should always be done on the server. If you have a computation that takes a lot of data and produces less data, also put it on the server.
Incidentally, did you know you could run Javascript on the server side, rendering templates and hitting databases? Check out the CommonJS ecosystem.
There could also be cross-browser support issues. If you're using a cross-browser, client-side library (eg JQuery) and it can handle all the processing you need then you can let the library take care of it. Generating cross-browser HTML server-side can be harder (tends to be more manual), depending on the complexity of the markup.
This is possible, but it requires a heavy initial page load and heavy use of caching. Take Gmail as an example:
On the initial page load, it downloads most of the js files it needs to run, and most of them are cached.
Don't overuse images and graphics.
Load all the data needed for the initial view, along with the subsequent, predictable user data. In Gmail and the latest Yahoo Mail, the inbox is not populated with just a single mail conversation body; it loads the first few full email messages in advance at page load time. The secret of high responsiveness comes at a cost (Gmail asks you to load the light version if the bandwidth is low; I bet most of us have experienced this).
Follow the KISS principle: keep your design simple.
And never try to render the whole page using javascript in any case; you cannot assume all your end users have high-spec systems or high-bandwidth connections.
It's smart to split the workload between your server and client.
If you think in the future you might want to create an API for your application (communicating with iPhone or Android apps, letting other sites integrate with yours), you would have to duplicate a bunch of code for all those devices if you go with a bare-bones server implementation of your application.
