Chrome Extension Options from a content script - javascript

I'm porting my Firefox Extension to Chrome and the lack of a synchronous preferences service is making life fun.
I have an options page for my users and I'm using the approach from here: https://developer.chrome.com/extensions/options. I'll be running a content script to get data from the page and using sendMessage to send/receive callbacks to a background script.
The content script needs access to my extension options, and it needs them before it does its processing. Of course, the Storage API is asynchronous. I've tried cheating with Stratify.js to force the Storage API to behave synchronously, but that's ugly as heck.
That leaves me writing code like this:
chrome.storage.sync.get(defaultPrefs, function(myPrefs) {
    // Do all my webpage processing here,
    // basically writing my entire extension inside
    // this call to chrome.storage.sync.get()
});
I've seen this question asked before with a few solutions, but they mostly use localStorage, which won't work since I want to use my preferences from here.
This just feels wrong, but if I'm going to use the Storage Sync API for my extension preferences then I'm kinda stuck. The localStorage solutions I've used mostly involve calls to sendMessage and leave me stuck in the same sort of callback pattern. Am I missing something?

You are not missing anything. The Google API is callback-driven, following the JavaScript philosophy, and you should embrace that to become a good JavaScript developer.
One reason Sync Storage is asynchronous is network latency: sending and receiving data from the synced storage can take a long time. The JavaScript VM is single-threaded, so if the call to the storage were synchronous and took some time, the user interface would freeze while waiting for the response. That is not acceptable for the user experience. The only way to avoid this is to use callbacks: you supply the function you want executed when the request finishes.
It's not the best pattern ever made, but it does the job. Its main limitation is the well-known callback hell. You can manage that with Promises, and by defining simple, short functions that each do one atomic action; a functional-programming approach helps here.
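For example, a minimal sketch of wrapping chrome.storage.sync.get in a Promise (defaultPrefs stands for your own defaults object):

function getPrefs(defaults) {
    return new Promise(function(resolve, reject) {
        chrome.storage.sync.get(defaults, function(items) {
            if (chrome.runtime.lastError) {
                reject(chrome.runtime.lastError); // storage call failed
            } else {
                resolve(items);
            }
        });
    });
}

getPrefs(defaultPrefs).then(function(myPrefs) {
    // Processing moves here, but reads as a flat chain
    // instead of ever-deeper nested callbacks.
});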
Another way to avoid it is to create an object that automatically synchronizes itself with the storage. That lets you use the object fully synchronously, but it makes possible errors more difficult to handle. I have made one here. It lacks error handling and can be improved in many ways, but you can get the idea.
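The idea, roughly (a sketch of the pattern, not the linked code): keep an in-memory copy of the prefs, read it synchronously, and push writes back to the storage in the background:

var defaultPrefs = { /* your defaults */ };
var prefsCache = {};

// Load once at startup; afterwards prefsCache can be read synchronously.
chrome.storage.sync.get(defaultPrefs, function(items) {
    prefsCache = items;
});

// Writes update the cache immediately and sync in the background.
function setPref(key, value) {
    prefsCache[key] = value;
    var change = {};
    change[key] = value;
    chrome.storage.sync.set(change); // errors are not handled here
}

// Keep the cache current if another page changes the storage.
chrome.storage.onChanged.addListener(function(changes, area) {
    if (area !== "sync") return;
    for (var key in changes) {
        prefsCache[key] = changes[key].newValue;
    }
});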
I will try to improve this later but I lack time...

Related

The "right" way to do synchronous HTTP request

You probably came here to chide me but this is a real use case.
In the world of online education, there are SCORM courses. I have to make old SCORM courses work on a site. SCORM courses are "web based" and run in a browser, but they expect to run in an iframe and they expect the parent to supply a GetValue method and a SetValue method.
So these SCORM courses are doing things like parent.SetValue("score", "90") and moving on. That function is supposed to return "false" if there was any issue.
SCORM comes from the '90s; on the modern web we know we have to use callbacks/promises, and HTTP fails "often". You might think the solution is a SetValue that writes to local data and then tries and retries until it gets through, but a SCORM course is typically set up to only move to the next screen if the SetValue worked, so you shouldn't let the user advance unless the SetValue was actually saved on the server.
TL;DR
Assuming a synchronous request is a requirement, what is the right way to do it?
So far I know of $.ajax({async:false ... but now browsers warn about that and sound like they're going to just ignore your request to be synchronous. I am thinking maybe websockets or web workers or something is the right way to do a synchronous request in modern programming. But I don't know how to make a request like that. And I am not allowed to change the code of the SCORM courses (they are generated with various course-making tools).
To clarify, I have full control over the implementation of the SetValue function.
Will $.ajax({async:false ... work long term? (5-10 years)
NOTE: it is entirely acceptable in this use case to completely freeze the UI until the request either succeeds or fails. That's what the courses assume.
So far I know of $.ajax({async:false… but now browsers warn about that
This is the right way (if you're using jQuery); it sends a synchronous XMLHttpRequest. Just ignore the warning: it's a warning that you are using outdated technology, which you already know.
and sound like they're going to just ignore your request to be synchronous.
That's unlikely.
I am thinking maybe using websockets or web workers or something is the right way to do a syncronous request in modern programming.
No, websockets and web workers are always asynchronous, you can't use them to make your asynchronous request look synchronous (in fact there's nothing that lets you do this).
Will $.ajax({async:false… work long term? (5-10 years)
We cannot know (and Stack Overflow is not a crystal ball). It might, especially in older browsers, or it might not. Browser vendors are reluctant to break compatibility with features that run the web, and synchronous requests are still needed from time to time. At some point, too few (important) web pages will use it (<1%, <1‰, whatever threshold they decide on) and browsers will finally be confident enough to remove it. By then, your business will hopefully have realised it needs to deprecate these outdated course-making tools.
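For illustration, a minimal sketch of a synchronous SetValue built on a raw XMLHttpRequest (the /scorm/setvalue endpoint is hypothetical):

function SetValue(key, value) {
    var xhr = new XMLHttpRequest();
    // The third argument (false) makes the request synchronous:
    // the whole page blocks until the server responds.
    xhr.open("POST", "/scorm/setvalue", false);
    xhr.setRequestHeader("Content-Type", "application/json");
    try {
        xhr.send(JSON.stringify({ key: key, value: value }));
    } catch (e) {
        return "false";          // network error
    }
    return xhr.status === 200 ? "true" : "false";
}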
Based on my experience with learning management systems, the answer is: fake it.
You wrote:
it is entirely acceptable in this use case to completely freeze the UI until the request either succeeds or fails. That's what the courses assume.
Perhaps your courses assume this, but this is not the case in any learning management system I've used over the past decade.
From what I've seen, learning management systems don't use synchronous requests because they block other scripts, which gives the impression the page/course is locked up or broken. The workaround is to use async calls via an abstraction layer (which includes the SCORM API), and return 'true' to the course even if you have no way of verifying that the AJAX call was in fact successful.
High-level view of how LMSs typically handle SCORM data:
When a course is launched, the LMS gets ALL of the course's existing SCORM data from the database, then puts it into a JavaScript object on the client side (accessible via the SCORM API). When you fetch data via SCORM, you are typically fetching data that is in this pre-loaded JS object -- you are NOT getting a real-time response directly from the database. Therefore AJAX is not needed when using SCORM's API.GetValue.
When you attempt to API.SetValue, you're initially storing the key/value pair in the JS object, not the SCORM database. Therefore the client-side JS object needs to synchronously indicate whether it successfully stored the data ('true') or not ('false'). The database -- and AJAX -- doesn't come into play until you try to persist the data to the database using API.Commit().
When you try to get a success value from API.Commit(), which is invoking AJAX, most LMSs will fake it. They will do an asynchronous request for the sake of ensuring the course doesn't feel broken, so the value returned from Commit() will almost always be 'true'. It's not reliable.
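A minimal sketch of that pattern (the cmi object and the /commit endpoint are placeholders, not a real LMS API):

var cmi = {}; // pre-loaded from the database when the course is launched

window.API = {
    GetValue: function(key) {
        return cmi[key] || "";   // synchronous read from the JS object
    },
    SetValue: function(key, value) {
        cmi[key] = value;        // synchronous write to the JS object
        return "true";
    },
    Commit: function() {
        // Fire-and-forget persistence: the course is told 'true'
        // whether or not the save actually succeeds.
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "/commit", true); // asynchronous
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.send(JSON.stringify(cmi));
        return "true";
    }
};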

What exactly are web workers and when to use them

I was reading up on XMLHttpRequest (Is there any reason to use a synchronous XMLHttpRequest?) here on SO, where I read in a thread from 2010 that, with the introduction of 'threads' in HTML5, developers might start to use synchronous APIs. Searching a bit on Google, I found the MDN page on web workers.
I have been writing JavaScript and Node for about a year now (consider me a beginner), and I have yet to encounter anything that makes use of these web workers. Maybe I need to read more code.
Now my question is: even though they seem to be very useful, why aren't they seen much in the wild? Also, what are the general use cases and guidelines for using them? Is it possible to reap multithreaded processing benefits in a Node.js environment? If so, why are all Node.js APIs still asynchronous?
Thank you.
A web-worker is strictly a clientside thing, so it has nothing to do with Node.js (EDIT: actually, see this module).
You might have heard that JavaScript is strictly single-threaded: if a function is doing some heavy calculation, nothing else is getting done, including animating icons, repainting the window, nothing. Thus, clientside JS should always avoid heavy computation, large loops and anything else that might usurp the thread for more than a fraction of a second.
Web-workers are the solution for that. Each web-worker is running in its own thread, and it can block as much as it wants - it won't affect the normal operation of the web page. The tradeoff is that it cannot have any access to the DOM: the fact that it doesn't affect the rendering means you cannot affect rendering with it. :) If a web-worker wants to render something, it would have to send a message to the main thread to do it.
Implementation-wise, each web-worker needs to be in a separate JS file. The reason why you don't see more of them is probably twofold: the average Joe probably doesn't know how to use them, and they are only needed when you need serious computation and don't want it to block your main thread - which is not that common in the first place, and when it is, the computation is commonly offloaded to the server (on clientside) or to separate processes (in Node.js).
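A minimal sketch, assuming a worker script saved as worker.js:

// main.js -- runs on the page
var worker = new Worker("worker.js");
worker.onmessage = function(e) {
    console.log("result:", e.data);  // the UI stayed responsive meanwhile
};
worker.postMessage(1e9);             // hand the heavy work to the worker

// worker.js -- runs in its own thread, no DOM access
onmessage = function(e) {
    var sum = 0;
    for (var i = 0; i < e.data; i++) sum += i; // blocking here is harmless
    postMessage(sum);
};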
Read more on HTML5 Rocks.

Do you have any idea how Google Docs' JavaScript does the interval data autorefresh?

Alright, here it goes:
I'm currently implementing software which auto-refreshes/auto-pulls/auto-reloads data to keep the screen live, using AJAX.
This is actually working, but I know I've used the simplest approach, which is:
setInterval (JavaScript)
Call the refresh method over and over every n seconds.
Read the JSON data, rebuild the HTML and update it.
This can also be done by just calling setTimeout (JavaScript) at the end of the AJAX request; see the sketch below.
In the refresh method I internally check that it's not being called simultaneously, etc.
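A sketch of that setTimeout-chained pattern (the /data endpoint and rebuildHtml function are placeholders):

function refresh() {
    $.getJSON("/data", function(json) {
        rebuildHtml(json);           // rebuild the HTML from the JSON
        setTimeout(refresh, 5000);   // schedule the next poll only after
    });                              // the current request has finished
}
refresh();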
However... this is the simplest approach. It works, but on slow computers in Firefox and IE I can see this activity sometimes freezes the browser. I know this might not be because of the AJAX call itself but because of how intensive the JavaScript work is overall; yet after running a profiler, the JavaScript (using jQuery, by the way) seems to be fine. Also, if I disable the autorefresh, the browser won't freeze for short moments on slow computers.
I decided to investigate how several of the major AJAX applications out there work.
Facebook, for instance, makes a request all the time, every N seconds, interprets the JSON and updates the screen. But with Google Docs I can't seem to find any request. Is that maybe because they are telling the JavaScript debugger engine not to log their requests, or are they using another approach to the refresh dilemma?
I read in another answer here on Stack Overflow that Google Docs keeps an open connection.
Can this be the answer? http://ajaxpatterns.org/HTTP_Streaming
What do you guys know about this?
Just as a side note, the application I'm developing is meant to be accessed by thousands of users at a time, and I know the JavaScript refresh routine only tells a small part of the story, but the server-side application and the database are currently supporting such a load, according to the stress tests I did using several thousand virtualized stations. I just want to know what you think about the client browser problem specifically.
Regards, and if you are still reading this, thank you for your time.
I suspect they're using WebSockets. Browser support is flaky, so your mileage may vary with this approach.
You may also want to look at APE (ajax push engine), which is a decent implementation of long polling with a client/server architecture.
You can read up on Long Polling. But then you'll have to handle dropped connections etc.
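A sketch of the long-polling idea (the /poll endpoint and render function are placeholders; a real implementation also needs backoff on repeated errors):

function poll() {
    $.ajax({
        url: "/poll",                // the server holds the request open
        timeout: 30000,              // until it has something new to report
        success: function(data) {
            render(data);
            poll();                  // reconnect immediately
        },
        error: function() {
            setTimeout(poll, 5000);  // dropped connection: retry after a delay
        }
    });
}
poll();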

mootools: I want to implement an architecture similar to BigPipe in Facebook

I am developing an application in MooTools. I have used the Request class to implement pipelining.
I want to develop a better method to handle client-server requests. I referred to the following article to understand how BigPipe works at Facebook.
http://www.facebook.com/notes/facebook-engineering/bigpipe-pipelining-web-pages-for-high-performance/389414033919
At Facebook, a JavaScript function is called on the arrival of any server response to update the data on the user's screen (see the screenshot).
http://img815.imageshack.us/img815/5154/facebookna.jpg
If I get a basic model of such an architecture, I can start building the application using that code.
Can someone please provide such a basic model?
Up to now I have designed an architecture in which response_data is stored in a global variable and a function is then called to update the data on the user's screen (I used a synchronous Request here), which is very slow.
So which method is superior: synchronous or asynchronous?
Firstly, thanks for the read, it was a very interesting blog post.
You may want to look into this library, which was inspired by Facebook's BigPipe. Note: I'm not endorsing it, as I've never used it, but building it yourself is not trivial.
With regards to whether synchronous or asynchronous is better, that depends. Synchronous is simpler: the dependencies are obvious, and there's no overhead. Asynchronous is only an advantage if your resources are not fully utilised and your processing can be easily broken down into independent blocks. I can't tell what you're trying to do, so you need to decide yourself where the performance bottleneck actually is, and whether architecting your application so that multiple sections can be downloaded, processed and rendered in parallel will actually provide an advantage.
As an example, if you're downloading a single, massive block of data to be rendered as a table in the browser, then breaking that data into multiple parallel downloads will improve performance - at the cost of creating some queuing system to deal with out-of-order responses. On the other hand, though technically slower, batching the download into synchronous blocks so that one block is downloaded and rendered before the next one is requested, will still do wonders to perceived performance, and is a much simpler alternative.
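A sketch of that queuing idea (the chunk URLs and render function are placeholders): fire all the downloads in parallel, but flush each chunk to the page only once every earlier chunk has rendered:

var chunkUrls = ["/chunk/0", "/chunk/1", "/chunk/2"]; // hypothetical endpoints
var received = [];       // responses, indexed by chunk number
var nextToRender = 0;    // first chunk not yet on the page

chunkUrls.forEach(function(url, i) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);       // all requests fire in parallel
    xhr.onload = function() {
        received[i] = xhr.responseText;
        // Flush every chunk that is now ready, in order.
        while (received[nextToRender] !== undefined) {
            render(received[nextToRender]);
            nextToRender++;
        }
    };
    xhr.send();
});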

Can Firebug detect AJAX operations that are in progress?

The Firebug console has various panels that keep track of a lot of information. The Net panel tracks almost all network traffic and reports various pieces of information about it, e.g. headers, latency, request parameters, etc. What I would like to do is access all this information programmatically from the JavaScript panel, because I have a script that needs to know if there is a request in progress. I haven't found any documentation on how the various panels interoperate, or whether they are even aware of each other. I need to make the script as generic as possible, so tying it to the code on the page is not desirable: the script would then fail on other pages because of minor quirks like function names not being the same.
What you're asking for is to gain access to Firebug's internal functionality, which can only be done if they expose an API. As far as I know, they don't expose an API to javascript, other than the familiar console object.
What they do have, however, is an API for Firefox plugin development. So you can create a Firefox plugin that either extends the Net panel of Firebug to do what you want, or exposes another JavaScript object called console.net or something like that.
Here is a good tutorial (well, part of a tutorial series) that explains specifically how to listen to events in the net panel: http://www.softwareishard.com/blog/firebug-tutorial/extending-firebug-net-panel-listener-part-viii/
Check out the Firebug plugin NetExport.
Edit: Here is the source code:
This source code may also interest you: tracingconsole
Well, as far as I know, the only way to really "know" about in-flight XMLHttpRequests is to explicitly "remember" them in the code. It's like timeouts and interval timers: if your code doesn't hang on to a handle, it's lost.
If the pages in question do everything via jQuery or some other framework, then it might be possible to worm into that code and leverage whatever's done to track ajax work, but exactly how you'd do that would depend on the framework.
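For example, if the page uses jQuery, its global AJAX events can be hooked without touching the page's own code (a sketch; it only sees requests made through jQuery):

var inFlight = 0;

// jQuery fires these global events for every request it makes.
$(document).ajaxSend(function() { inFlight++; });
$(document).ajaxComplete(function() { inFlight--; }); // success or error

function requestInProgress() {
    return inFlight > 0; // jQuery.active exposes the same count internally
}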
It might help your progress if you were to explain more about what it is you want to achieve with this technique. In other words, while what you asked about directly may not be possible, it might be possible to find another way to do whatever it is you want.
