Cache the execution of a javascript file - javascript

As far as I know it's impossible to achieve the following, but only an expert can confirm this:
I've got page number 1, which requests some user and application data as soon as the page loads; page number 2 uses the same script, and it would be wasteful to request the same info again.
I know that the browser caches the script; my question is whether it caches the execution result (the data) as well.
The pages don't share the same layout, so it is not possible to load page number 2 via Ajax.

The browser doesn't automatically cache the result of running the script (that would be seriously weird), but you can: by setting (and checking for) cookies, by using local storage on modern browsers, etc. Note that cookies are sent to the server on every request, so they increase the size of each request; if you can use local storage, do.
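For illustration, a minimal sketch of the local-storage route (the storage key, the /user-data endpoint and the callback shape are all assumptions, not part of the question):

    // Hypothetical helper: return cached user data if we already have it,
    // otherwise fetch it once and remember it for the next page.
    function getUserData(callback) {
        var cached = null;
        try {
            cached = window.localStorage.getItem('userData');
        } catch (e) { /* storage disabled or unavailable */ }

        if (cached) {
            callback(JSON.parse(cached));
            return;
        }

        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/user-data', true);   // placeholder endpoint
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                try {
                    window.localStorage.setItem('userData', xhr.responseText);
                } catch (e) { /* quota exceeded or storage unavailable */ }
                callback(JSON.parse(xhr.responseText));
            }
        };
        xhr.send();
    }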

You can "cache" your data, if you use some kind of client side storage like localStorage (see MDN docu for more details).
The Browser itself may also cache your request internally as the ajax request is no different from any other request made by the browser (html docs, images, etc.). So depending on your exact request (including all parameters) the Browser may actually use a cached version of your request to avoid unnecessary calls. Here, however, the usual restrictions and properties of caching apply, so you can not rely on that behaviour!

The browser will not cache your data automatically if your "page" is a new URL.
But it is certainly possible for you to implement it yourself, in several ways.
One is to use local storage in newer browsers that support HTML5.
Another is to write your app as a single page with multiple views and transitions,
using AJAX to replace portions of your page (the views); a sketch follows this answer.
This technique is becoming increasingly popular.
I highly recommend reading "JavaScript Web Applications" by Alex MacCaw to understand JavaScript MVC and how to use JavaScript to create a client-side (browser-based) controller and views and manage caching, state, etc. in the browser. Also look at frameworks like Backbone.js:
http://www.amazon.com/JavaScript-Web-Applications-Alex-MacCaw/dp/144930351X/ref=sr_1_1?s=books&ie=UTF8&qid=1332771002&sr=1-1
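For what it's worth, a rough sketch of the single-page, Ajax-view idea from this answer (the element ID and the /views/ URL scheme are placeholders):

    // Keep fetched view fragments in memory and swap them into one container.
    var viewCache = {};

    function showView(name) {
        var container = document.getElementById('content');   // placeholder ID

        if (viewCache[name]) {
            container.innerHTML = viewCache[name];
            return;
        }

        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/views/' + name + '.html', true);     // placeholder URL scheme
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                viewCache[name] = xhr.responseText;
                container.innerHTML = xhr.responseText;
            }
        };
        xhr.send();
    }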

I would avoid caching the data unless there are serious performance problems (and then I'd rather eliminate the performance problems than cache the data). It's premature optimization.
Once data is cached, all kinds of scenarios (stale data, deleted data) must be considered (unless the data is static, but then it isn't relevant anyway).

Related

Is browser cache aware of javascript xmlhttp requests?

When I fetch a page using a GET request in JavaScript, does the browser cache it the same way as when I click that link or type it in the address bar?
If not, since I have already fetched the page, is there a way I can add it (programmatically) to the browser cache?
When the browser fetches web pages, it is also using a GET request. Chances are that all GET requests go through the same caching mechanism in the browser, though there is no specification that formalizes how that works.
There is no programmatic way to add something to the browser's own cache other than just requesting the resource and letting the browser's cache do its normal thing with it. If you want to know if all common browsers will cache it in this way, then you need to make sure the server-side header settings are set appropriately (to allow it to be cached) and then test each browser to make sure it's cached like you want.
If you are staying within the same page and want to make sure something is not requested more than once from that page, you can implement your own cache within the page's JavaScript. Store the result in a JavaScript variable the first time it is requested, and have the function that fetches the resource check that local cache object first; if the resource isn't there, request it via a GET and save the result. You could make a simple version hardcoded to one particular resource, or a more general one that saves the URL, the result, and a timestamp and implements more typical caching behaviour.
If you want it cached across pages and your testing finds that the built-in browser caches are not adequate, you can use Local Storage to store the data (probably with a timestamp) and check it before issuing the GET request (both ideas are sketched below).
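A minimal sketch combining both ideas, assuming a JSON-returning endpoint (every name below is invented for illustration):

    var pageCache = {};              // in-page cache: lives only as long as this page
    var MAX_AGE_MS = 60 * 1000;      // arbitrary freshness window for cross-page reuse

    function fetchCached(url, callback) {
        // 1. Same-page cache: never request the same URL twice from this page.
        if (pageCache[url]) {
            callback(pageCache[url]);
            return;
        }

        // 2. Cross-page cache: check localStorage, honouring the timestamp.
        try {
            var stored = JSON.parse(localStorage.getItem('cache:' + url));
            if (stored && (new Date().getTime() - stored.time) < MAX_AGE_MS) {
                pageCache[url] = stored.data;
                callback(stored.data);
                return;
            }
        } catch (e) { /* no usable cached entry */ }

        // 3. Fall back to a normal GET and remember the result in both caches.
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var data = JSON.parse(xhr.responseText);
                pageCache[url] = data;
                try {
                    localStorage.setItem('cache:' + url,
                        JSON.stringify({ time: new Date().getTime(), data: data }));
                } catch (e) { /* storage full or unavailable */ }
                callback(data);
            }
        };
        xhr.send();
    }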

When is filename-based browser cache busting actually needed?

Currently, we're using a method to bust browser-cached resources like CSS and JS in a way similar to SE: https://meta.stackexchange.com/questions/112182/how-does-se-determine-the-css-and-js-version-parameter
Anyway, after doing some testing with the HTTP headers, I'm wondering when this is actually necessary. Is this just a relic left over from the '90s, or are there modern browsers that can't read the Last-Modified or ETag HTTP headers?
Caching Issues
When you are attempting to serve JS or CSS that is volatile and you don't want to, or can't (e.g. when using a CDN), rely on HTTP cache directive headers to make the browser request the new files. Some older browsers don't respond to HTTP cache directives, so if you are targeting them you have limited options. Barring older browsers, some proxy servers strip, invalidate, or ignore caching headers because they are buggy or are acting as aggressive caches; in those cases HTTP cache control headers will not work. Here you are just ensuring your end users don't get odd behaviour until they hit F5.
Volatile JS/CSS resources can come from files/resources that are editable through an administration/configuration panel. Some reasons for this are theming, layout editing, or language definition files for internationalization.
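As an illustration, the query-string flavour of this busting can be as simple as the following (APP_VERSION is an assumed value your server or build step injects; it is not something the browser provides):

    // Append a version token so a changed file gets a new URL and bypasses stale copies.
    var APP_VERSION = '2012-03-26-1';   // assumed value injected by the server or build step

    function loadScript(path) {
        var s = document.createElement('script');
        s.src = path + '?v=' + encodeURIComponent(APP_VERSION);
        document.getElementsByTagName('head')[0].appendChild(s);
    }

    loadScript('/js/app.js');           // requested as /js/app.js?v=2012-03-26-1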
HTTP 1.0
There are legacy systems out there that still use it. Consider that Oracle's built-in HTTP server (the EPG, the embedded PL/SQL gateway) in its RDBMS still uses it. Some proxies translate 1.1 requests to 1.0. Ancient browsers only support 1.0, but that should be a relative non-issue these days.
Whatever the case, HTTP 1.0 uses a set of cache control mechanisms that are primitive compared to HTTP 1.1's offering, and they relied on a lot of heuristics not specified in the RFC to get caching to work reasonably well. In either case, caching would often cause odd behaviour due to stale content being delivered, or the same content being requested even though nothing had changed.
A note on pragma:no-cache
It works only on requests, not responses, which is a common misconception. It was meant to keep intermediate systems from caching sensitive information. It still has backwards-compatibility support in HTTP 1.1, but it shouldn't be used because it is deprecated.
...except where Microsoft says IE doesn't do that: http://support.microsoft.com/kb/234067
Input For Generated Content
Yet another reason is JS or CSS that is generated based on input parameters. Just because the URL includes somefile.js does not mean it has to be a real file on a file system; it could be JS output by a process. Should that process need to output different content based on a parameter, GET parameters are a good way to make that happen.
Consider page versioning. In large applications where pages may be kept for historical or business reasons, this allows the same named resource to exist, while a specific version can be served when needed. You could save each version in a different file, or you could create a process that outputs the right content with the correct version changes.
Old Browser Issues
In IE6, AJAX requests were subject to the browser cache. If you were requesting a service you did not control, with a URL that didn't change, adding a trivial random string to the URL would circumvent the issue.
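A sketch of that workaround (the parameter name is arbitrary; any otherwise unused name will do):

    // Make every Ajax GET unique so IE6 cannot serve it from its cache.
    function bustCache(url) {
        var sep = url.indexOf('?') === -1 ? '?' : '&';
        return url + sep + '_=' + new Date().getTime();
    }

    // e.g. bustCache('/some/service') -> '/some/service?_=1332771002345'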
Browser Cache Options
If we consider the RFC on HTTP 1.1 for user agent cache settings we also see this:
Many user agents make it possible for users to override the basic caching mechanisms. For example, the user agent might allow the user to specify that cached entities (even explicitly stale ones) are never validated. Or the user agent might habitually add "Cache-Control: max-stale=3600" to every request. The user agent SHOULD NOT default to either non-transparent behavior, or behavior that results in abnormally ineffective caching, but MAY be explicitly configured to do so by an explicit action of the user.
Altering the URL for versioning of resources could be considered a countermeasure to such an issue. Whether you believe it is worthwhile I will leave up to the reader.
Conclusion
There are reasons to add GET parameters to a file request, but realistically the only reasons to do so now (writing as of 2012) are to supply input parameters for dynamically generated scripts and to work around cases where you can't control the cache headers.
Personally, I only use them for providing input parameters to scripts that dynamically output initialization code, but like everything in development there is always some edge case that adds a reason.

Catching flash POST from javascript

I have a series of .swf files that I inherited from an old version of a site I'm trying to rebuild.
When flash_element.submitForm() is called, they POST some data directly to a static URL ("/submit") and then, depending on the response, reload the browser page.
I would very much like to capture the data they POST using JavaScript, preferably without it being sent at all, so that I can handle the request/response with more intelligent logic than is built into the .swf files I've inherited.
Basically: when a Flash object makes an HTTP request, can I catch and cancel this event in JavaScript?
Basically, no. You can try to use the various SWF disassembler/reassembler tools, like the swfdump.exe that comes with Flex, to get rid of the POST or change it to a JavaScript call. There's precious little control over, or knowledge of, a SWF that you can gain directly from JavaScript beyond what the SWF explicitly makes available via the appropriate APIs. This is as it should be: if what you suggest were possible, it would be a fairly serious security hole.

Best practice to use the same AJAX in multiple browser windows?

I am developing a website that has some sort of real-time update.
Now the website is generated with a JavaScript variable holding the current ID of the dataset.
Then, at an interval of a few seconds, an AJAX call is made passing the current ID, and if there's something new the server returns it along with the latest ID, which is then updated in the JavaScript.
Very simple, but here comes the problem.
If the user opens the same page multiple times, every page makes these AJAX requests, which produces heavy server load.
Now I thought about the following approach:
The website is loaded with a JavaScript variable holding the current timestamp and the ID of the current dataset.
My desired refresh interval is, for example, 3 seconds.
In the website an interval counter counts up every second, and every time the timestamp reaches a state where (timestamp % 3 === 0) is true, the content is updated.
The link looks like http://www.example.com/refresh.php?my-revision=123&timestamp=123456
This should ensure that every browser window calls the same URL.
Then I can turn on browser-level caching.
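For clarity, here is a rough sketch of what I mean (the endpoint and parameter names match the example URL above; the JSON shape of the response is just an assumption):

    var INTERVAL = 3;            // seconds between updates
    var currentRevision = 123;   // injected into the page when it was generated

    setInterval(function () {
        var now = Math.floor(new Date().getTime() / 1000);
        if (now % INTERVAL !== 0) { return; }   // only act on the 3-second boundary

        // Every open window builds this exact URL at the same moment, so with
        // suitable cache headers the browser could answer most of them from cache.
        var url = '/refresh.php?my-revision=' + currentRevision + '&timestamp=' + now;

        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var update = JSON.parse(xhr.responseText);   // assumed JSON response
                if (update.revision > currentRevision) {
                    currentRevision = update.revision;
                    // ... apply update.data to the page here
                }
            }
        };
        xhr.send();
    }, 1000);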
But I don't really like this solution.
I would prefer adding another layer of data sharing via a cookie.
This shouldn't be much of a problem: I can just store every request in a cookie named by timestamp and data revision, with a TTL of 10 seconds or so, and check for its existence first.
BUT
The pages will do the request at the same time, so the whole logic of browser caching and cookies might not work, because the requests occur simultaneously and not one after another.
So I thought about limiting the concurrent connections to 1 on the server side. But then I would need at least an extra vhost, because I really don't want to do that for the whole page.
And that runs me into problems concerning cross-site policies!
Of course there are some super-complicated load-balancing / server-side solutions bound to request URI and IP address or something, but that's all extreme overkill!
It must be a common problem! Just think of facebook chat. I really don't think they do all the requests in every window you have open...
Any ideas? I'm really stuck with this one!
Maybe I can do some inter-window JavaScript communication? That shouldn't be a problem if it's all on the same domain?
A thing I can do, of course, is server-side caching, which at least avoids DB connections and intensive calculations... but it is still a request, which I would like to avoid.
You might want to check out Comet and Orbited.
This is best solved with server push technology.
The first thing is: do server-side caching anyway, using Memcache or Redis or whatever, so you're defended against multiple windows hammering the backend with the same request. But you knew that.
I think you're onto the right thing with cookies, frankly (but see below for a more modern option): they are shared by all window instances, easily queried, etc. Your polling logic could look something like this:
On polling interval:
Look at content cookie: Is it fresher than what you have? If so, use it and you're done.
Look at status cookie; is someone else actively polling (e.g., cookie is set and not stale)? If yes, come back in a second.
Set status cookie: I'm actively polling at (now).
Do request
On response:
If the new data is newer than the (possibly updated) contents of the content cookie, set the content cookie to the new data
Clear status cookie if you're the one who set it
Basically, the status cookie acts as a semaphore indicating to all window instances that someone, somewhere is on the job of updating the content.
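A rough sketch of that semaphore logic, using plain document.cookie (the cookie names, the /poll endpoint, the render function and the freshness windows are all placeholders invented here):

    function readCookie(name) {
        var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
        return match ? decodeURIComponent(match[1]) : null;
    }

    function writeCookie(name, value) {
        document.cookie = name + '=' + encodeURIComponent(value) + '; path=/';
    }

    var lastSeenTime = 0;
    var POLL_STALE_MS = 5000;   // a status cookie older than this counts as abandoned

    function render(data) { /* update the page with the new data */ }

    function poll() {
        // 1. Is there newer content in the shared content cookie?
        var content = readCookie('sharedContent');
        if (content) {
            var parsed = JSON.parse(content);
            if (parsed.time > lastSeenTime) {
                lastSeenTime = parsed.time;
                render(parsed.data);
                return;                      // fresh enough, no request needed
            }
        }

        // 2. Is another window already polling?
        var status = readCookie('pollStatus');
        if (status && (new Date().getTime() - Number(status)) < POLL_STALE_MS) {
            return;                          // someone else is on it; try again next tick
        }

        // 3. Claim the semaphore and do the request ourselves.
        var myStamp = String(new Date().getTime());
        writeCookie('pollStatus', myStamp);

        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/poll', true);      // placeholder endpoint
        xhr.onreadystatechange = function () {
            if (xhr.readyState !== 4) { return; }
            if (xhr.status === 200) {
                var data = JSON.parse(xhr.responseText);
                lastSeenTime = new Date().getTime();
                writeCookie('sharedContent',
                    JSON.stringify({ time: lastSeenTime, data: data }));
                render(data);
            }
            // Clear the semaphore only if we are still the one who set it.
            if (readCookie('pollStatus') === myStamp) {
                writeCookie('pollStatus', '');
            }
        };
        xhr.send();
    }

    setInterval(poll, 3000);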
Your content cookie might contain the content directly, or if your content is large-ish and you're worried about running into limits, you could have each page have a hidden iframe, each with a unique name, and have your Ajax update write the output to the iframe. The content cookie would publish the name of the most up-to-date iframe, and other windows seeing that there's fresh content could use window.open to get at that iframe (since window.open doesn't open a window if you use the name of an existing one).
Be alert to race conditions. Although JavaScript within any given page is single-threaded (barring the explicit use of web workers), you can't expect that JavaScript in the other windows is necessarily running on the same thread (it is on some browsers, not on others — heck, on Chrome it's not even the same process). I also don't know that there's any guarantee of atomicity in writing cookies, so you'll want to be vigilant.
Now, HTML5 defines some useful inter-document communication mechanisms, and so you might consider looking to see if those exist and using them before falling back on this cookie approach, since they'll work in modern browsers today but not in older browsers you're probably having to deal with right now. Still, on the browsers that support it, great!
Web storage might also be an option worth investigating as an aspect of the above, but your clients will almost certainly have to give your app permissions and it's also a fairly new thing.
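If Web Storage is available, a minimal sketch of using its cross-window 'storage' event as that inter-document channel (the key name is just an example):

    // Writing a key in one window fires a 'storage' event in every *other*
    // window on the same origin, which gives a simple publish/subscribe channel.
    window.addEventListener('storage', function (e) {
        if (e.key === 'latestUpdate' && e.newValue) {
            var update = JSON.parse(e.newValue);
            // ... refresh this window's view from update without a new request
        }
    }, false);

    // The window that actually performed the Ajax poll publishes the result:
    function publishUpdate(data) {
        localStorage.setItem('latestUpdate',
            JSON.stringify({ time: new Date().getTime(), data: data }));
    }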

How to create temporary files on the client machine, from Web Application?

I am creating a web application using JSP, Struts, EJB and Servlets. The application is a combined CRM and accounting package, so the database is very large. In order to make execution faster, I want to prevent round trips to the database.
For that purpose, what I want to do is create some temporary XML files on the client machine and use them whenever required. How can I do this, given that JavaScript does not permit it? Is there any way of doing this? Or is there any other solution I can adopt to make my application faster?
You do not have unfettered access to the client file system to create a temporary file on the client. The browser sandbox prevents this for very good reasons.
What you can do, perhaps, is make some creative use of caching in the browser. jQuery's data method is an example of this. TIBCO General Interface makes extensive use of a browser cache for XML data. Their code is open source and you could take a look to see how they've implemented their browser cache.
If the database is large and you are attempting to store large files, the browser is likely not going to be a great place for that data. If, however, the information you want to store is fairly small, using an in-browser cache may accomplish what you'd like.
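For instance, jQuery's data method keeps values in memory for the lifetime of the page, tied to a DOM element; a tiny sketch (the key name and the sample data are invented):

    // Assume customers was just parsed from an Ajax response.
    var customers = [{ id: 1, name: 'ACME' }];

    // Stash it on the document so later code in the same page can reuse it.
    $(document).data('customerList', customers);

    // Elsewhere on the same page:
    var cached = $(document).data('customerList');
    if (!cached) {
        // not cached yet: fall back to an Ajax request
    }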
You should be caching on the web server.
As you've no doubt realised by now, there is a very limited set of things you can do on the client machine from a web app (e.g., write a cookie).
You can make your application use the browser plugin Google Gears, which gives you real client-side storage.
Apart from that, remember there is a sizeable overhead for every single request; if needed you can easily pack a few hundred kB into one response, but distant users might only be able to execute a few requests per second. Try to keep the number of requests down, even if it means adding overhead in the form of more data.
@justkt Actually, there is no good reason not to allow a web application to store data. Indeed, the HTML5 specifications include a database similar to the one offered by Google Gears; browser support is just a bit too sporadic to rely on that feature.
If you absolutely want to cache it on the client, you can create the file on your server and have your web app retrieve it. This way the browser will fetch it and keep it in the client cache.
But keep in mind that this could be a pain for the client if the file is large enough.
