Disable browser cache - javascript

I implemented a REST service and I'm using a web page as a client.
My page has some JavaScript functions that perform the same HTTP GET request to the REST server several times and process the replies.
My problem is that the browser caches the first reply and doesn't actually send the following requests.
Is there some way to force the browser to execute all the requests without caching?
I'm using Internet Explorer 8.0.
Thanks

Not sure if it can help you, but sometimes I add a random parameter to the URL of my request so that the response isn't served from the cache.
So instead of having:
http://my-server:8080/myApp/foo?bar=baz
I will use:
http://my-server:8080/myApp/foo?bar=baz&random=123456789
Of course, the value of the random parameter is different for every request. You can use the current time in milliseconds for that.
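A rough sketch of that idea with a plain XMLHttpRequest, using the URL from the example above (the reply handling is left as a placeholder):

var url = 'http://my-server:8080/myApp/foo?bar=baz&random=' + new Date().getTime();
var xhr = new XMLHttpRequest();
xhr.open('GET', url, true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // process the reply here
    }
};
xhr.send();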

Not really. This is a known issue with IE; the classic solution is to append a random parameter to the end of the query string for every request. Most JS libraries do this natively if you ask them to (jQuery's cache: false AJAX option, for instance).
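With jQuery it looks roughly like this; cache: false makes jQuery append a throwaway _=<timestamp> parameter so IE can't reuse the cached reply (the URL and data here are placeholders):

$.ajax({
    url: '/myApp/foo',
    data: { bar: 'baz' },
    cache: false,
    success: function (reply) {
        // process the reply here
    }
});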

Well, of course you don't actually want to disable the browser cache entirely; correct caching is a key part of REST, and the fact that it (if properly followed by both client and server) allows a high degree of caching while also giving fine control over cache expiry and revalidation is one of its key advantages.
There is, though, an issue, as you have spotted, with subsequent GETs to the same URI from the same document (as in DOM document lifetime; reload the page and you'll get another go at that XMLHttpRequest request). IE pretty much seems to treat it as it would a request for more than one copy of the same image or other related resource in a web page: it uses the cached version even if the entity isn't cacheable.
Firefox has the opposite problem, and will send a subsequent request even when the caching information says that it shouldn't!
We could add a random or time-stamped bogus parameter at the end of a query string for each request. However, this is a bit like screaming "THIS IS SPARTA!" and kicking our hard-won download into a deep pit that no Health & Safety inspector considered putting a safety rail around. We obviously don't want to repeat a full unconditional request when we don't need to.
However, this behaviour has a time component. If we delay the subsequent request by a second, then IE will re-request when appropriate while Firefox will honour the max-age and expires headers and not re-request when needless.
Hence, if two requests could be within a second of each other (either we know they are called from the same function, or there's the chance of two events triggering it in close succession), using setTimeout to delay the second request until a second after the first has completed will make it use the cache correctly, rather than falling into either of the two sorts of incorrect behaviour.
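As a rough sketch of that delay (the wrapper and variable names here are made up, and new Date().getTime() is used instead of Date.now() for old-IE friendliness):

var lastCompleted = 0;
function getStatus(url, onReply) {
    // wait until at least a second has passed since the previous request completed
    var wait = Math.max(0, 1000 - (new Date().getTime() - lastCompleted));
    setTimeout(function () {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                lastCompleted = new Date().getTime();
                if (xhr.status === 200) onReply(xhr.responseText);
            }
        };
        xhr.send();
    }, wait);
}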
Of course, a second's delay is a second's delay. This could be a big deal or not, depending primarily on the size of the downloaded entity.
Another possibility is that something that changes so rapidly shouldn't be modelled as GETting the state of a resource at all, but as POSTing a request for a current status to a resource. This does smell heavily of abusing REST and POSTing what should really be a GET though.
Which can mean that on balance the THIS IS SPARTA approach of appending random stuff to query strings is the way to go. It depends, really.

Related

What happens when an XMLHttpRequest is aborted?

Will an aborted XMLHttpRequest still download the response from the server?
At what point in the request lifecycle does it differ from a regular request?
Do different browsers behave differently?
Is it bad practice to abort requests?
No, the download will (should) cancel (does in my browser at least)
When a request is aborted, its readyState is changed to XMLHttpRequest.UNSENT (0) and the request's status code is set to 0. -- MDN
No, at least hopefully not. They should be following the spec.
In my opinion, definitely not. It's a waste of bandwidth and other resources to have requests you no longer need running in the background. Much better to abort them.
Two recent use-cases from personal experience:
A table with various parameters for filtering. Depending on the parameters selected, the resulting request sometimes took a while to complete. If you selected a slow set of parameters A, and then a fast set of parameters B before A completed, you'd first see the results of B in the table, but then A would eventually complete and "replace" the contents of the table so you'd suddenly see A instead.
Solution: Abort the previous incomplete request before starting the next one (see the sketch after the second use-case).
An SPA with pages that sometimes have long-running requests, for example the previously mentioned table. When navigating away to a different page, there were sometimes several requests still running in the background for data that was no longer needed.
Solution: Register those requests to be aborted when the page/component was unmounted.
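A minimal sketch of the fix for the first use-case with jQuery, where the endpoint and the renderTable helper are placeholders:

var pendingRequest = null;
function loadTable(params) {
    if (pendingRequest) {
        pendingRequest.abort();          // drop the slower, now-stale request
    }
    var request = $.ajax({
        url: '/table-data',
        data: params,
        success: function (rows) {
            renderTable(rows);           // hypothetical rendering helper
        },
        complete: function () {
            if (pendingRequest === request) {
                pendingRequest = null;
            }
        }
    });
    pendingRequest = request;
}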

AJAX calls made during JavaScript initialization follow the browser's cache rules, but later calls do not

I've recently started using jQuery AJAX calls to fetch some content within a document ready function. I am setting cache control headers in the AJAX call, and these get overridden when a forced reload of the page is done (Chrome), which is exactly what I want.
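Roughly what such an initialization call looks like; the URL comes from the question and the max-age value is just a placeholder:

$(document).ready(function () {
    $.ajax({
        url: '/dostuff/',
        headers: { 'Cache-Control': 'max-age=86400' },
        success: function (content) {
            // materialize the fetched content into the page
        }
    });
});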
Unfortunately, later AJAX calls made through user interaction, after the page and content have completely materialized, do not follow these cache rules.
For instance, if I control-reload a page that initially accesses /dostuff/ during initialization with a Cache-Control header set to an obscenely high max-age, the browser overrides that header and sets the max-age to 0, which is nice: it gives the user a lot of control to refresh content.
Is this proper? Should I always expect AJAX calls that are part of initialization to override request headers the way I'm beginning to expect them to? It seems like there is a lot of room for inconsistency.
If I call the same URL later on, it does what I want: the browser automagically adds an If-Modified-Since header that helps me return properly from the server.
However, if I call a URL that wasn't part of the initialization, like /dootherstuff/, the browser won't set the max-age to 0 even when the page was loaded through a force reload.
I don't expect to be able to fix this problem since it appears to be working as it should; I would however like to know how to reliably detect whether the page was force reloaded so that I can handle the cache control headers properly.
Resolving this issue using version keys on the URL that are fudged to deal with reloads, rather than reflecting actual content versions, would cause me a lot of grief and extra network traffic and processing time.

What is the purpose of the "&rnd=" parameter in HTTP requests?

Why do some web-applications use the http-get parameter rnd? What is the purpose of it? What problems are solved by using this parameter?
This could be to make sure the page/image/whatever isn't taken from the user's cache. If the link is different every time then the browser will get it from the server rather than from the cache, ensuring it's the latest version.
It could also be to track people's progress through the site. Best explained with a little story:
A user visits example.com. All the links are given the same random number (let's say 4).
The user opens a link in a new window/tab, and the link is page2.php?rnd=4. All the links in this page are given the random number 7.
The user can click the link to page3.php from the original tab or the new one, and the analytics software on the server can tell which one by whether it has rnd=4 or rnd=7.
All we can do is suggest possibilities though. There's no one standard reason to put rnd= in a URL, and we can't know the website designer's motives without seeing the server software.
Internet Explorer and other browsers will read an image URL, download the image, and store it in a cache.
If your application is going to be updating the image regularly and you don't want your users to see a cached copy, the URL needs to be unique each time.
Adding a random string to the URL ensures it is unique, so the image is downloaded fresh each time.
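A quick sketch of that, assuming a hypothetical image path:

var img = document.createElement('img');
img.src = '/images/current-status.png?rnd=' + new Date().getTime();
document.body.appendChild(img);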
It's almost always for cache-busting.
As has been suggested by others, this kind of behaviour is usually used to avoid caching issues when you are calling a page that returns dynamic content.
For example, say you have a page that gets some current user information, such as "mysite.com/CurrentUserData". On the first call to this page the user data will be returned as expected, but depending on the timing and caching settings, the second call may return the same data, even though the expected data may have been updated.
The main reason for caching is of course to optimise the speed of frequent requests. But when that is not wanted, adding a random value as a query string parameter is a widely used solution.
There are, however, other ways to get around this issue. For example, if you were doing an Ajax request with JavaScript/jQuery, you could set cache to false in your call...
$.ajax({url: 'page.html', cache: false});
You could also change it for all page calls on document load with...
$.ajaxSetup({cache: false});
If you were writing an MVC application, you can even disable caching on controller action methods with an attribute like so...
[OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
public ActionResult NonCacheableData()
{
    return View();
}
(thanks to a quick copy and paste from here)
I dare say there are also settings in IIS you could apply to get the same effect - though I have not gone that far with this yet.

Time to first byte with javascript?

Is there any modern browser that, via JavaScript, exposes the time to first byte (TTFB) and/or time to last byte (TTLB) of an HTTP request without resorting to any plugin?
What I would like is a JavaScript snippet that can access these values and post them back to the server for performance monitoring purposes.
Clarification:
I am not looking for any JS timers or developer tools. What I wonder, and am hoping, is whether there are any browsers that measure load times and expose those values via JavaScript.
What you want is the W3C's PerformanceTiming interface. Browser support is good (see this survey from Sep 2011). Like you speculated in response to Shadow Wizard's answer, these times are captured by the browser and exposed to javascript in the window object. You can find them in window.performance.timing. The endpoint of your TTFB interval will be window.performance.timing.responseStart (defined as "the time immediately after the user agent receives the first byte of the response from the server, or from relevant application caches or from local resources"). There are some options for the starting point, depending on whether you're interested in time to unload the previous document, or time to resolve DNS, so you probably want to read the documentation and decide which one is right for your application.
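A small sketch of reading those values and posting them back, where the /perf-log endpoint is made up:

window.addEventListener('load', function () {
    var t = window.performance && window.performance.timing;
    if (!t) return;  // browser doesn't expose the Navigation Timing interface
    var ttfb = t.responseStart - t.requestStart;  // or measure from t.navigationStart to include DNS/connect time
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/perf-log', true);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('ttfb=' + ttfb);
});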
I fear it's just not possible.
JavaScript becomes "active" only after part of the request has been sent from server, accepted by the browser and parsed.
What you ask is kind like asking "Can I measure the weight of a cake after eating it?" - you need to first weight and only then eat the cake.
You can see the response time in the Chrome Developer Tools.
It's impossible to get the true TTFB in JS, as the page gets a JS context only after the first byte has been received. The closest you can get is with something like the following:
<script type="text/javascript">var startTime = (new Date()).getTime()</script>
very early in your <head> tag. Then, depending on whether you want to know when the HTML finishes or when everything finishes downloading, you can either put a similar tag near the bottom of your HTML page (and subtract the values) and then do an XHR back to the server (or set a cookie, which you can retrieve server-side on the next page request), or listen to the onload event and do the same.
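For example, a bottom-of-page counterpart could look roughly like this, with the reporting endpoint made up:

<script type="text/javascript">
    var htmlTime = (new Date()).getTime() - startTime;
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/perf-log', true);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('htmlTime=' + htmlTime);
</script>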

new image makes http request even though cached?

I have a JavaScript slide show that creates the next slide dynamically and then moves it into view. Since the images are actually sprites, the src is transparent.png and the actual image is mapped via background: url(...) in CSS.
Every time (well, most of the time) the script creates a new Element, Firefox makes an http request for transparent.png. I have a far-future expires header, and Firefox is respecting all other files' expiries.
Is there a way to avoid these unnecessary requests? Even though the server is returning 304 Not Modified responses, it would be nice if Firefox would respect the expiries on dynamically created images.
I suspect that if I injected a simple string instead of using new Element, this might solve the problem, but I use some methods on Prototype's extended Element object, so I would like to avoid a bunch of HTML strings in my JS file.
This is a nit-picky question, but I'm working on front-end optimization now, so I thought I would address it.
Thanks.
@TJ Crowder Here are two images: http://tinypic.com/r/29kon45/5. The first shows that the requests for trans.png are proliferating. The second shows an example of the headers. Thanks
@all Just to reiterate: what's really strange is that it only makes these unnecessary requests about half the time, even though all images are created via identical logic.
I know this doesn't address why Firefox ignores your caching times, but you could always just bypass the issue and not use image tags for the slides. If you make the slides empty div tags and just apply the sprite as a background, Firefox won't have to make any more requests.
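For instance, with Prototype (which the question already uses), the slide could be an empty div whose sprite is positioned via CSS; the class name, offset, and container id here are made up:

var slide = new Element('div', { 'class': 'slide' });
slide.setStyle({ backgroundPosition: '0 -300px' });  // pick the sprite frame
$('slideshow').insert(slide);                        // no <img>, so no request for transparent.png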
EDIT:
According to the explanation at this site, Firefox isn't ignoring your cache times. If the image has expired, the browser is supposed to just request the image again. If the time has not expired, which is what is happening in this case, the browser is supposed to issue a conditional GET request. I don't think you can get away from it.
I think Firefox only issues requests half of the time because it just received the "304 Not Modified" status for the image on a previous request and wants to trust that for subsequent requests if they happen quickly enough.
It's a caching issue. There are a number of ways to control browser caching by altering the response headers that your web server adds. I usually use a combination of ETag and Expires.
If there are conflicting or incomplete caching instructions in the Response headers, some browsers may just ignore them and get the latest version of the resource.
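For example, a response that allows long-lived caching with revalidation might carry headers roughly like these (the values are purely illustrative):

Cache-Control: public, max-age=31536000
Expires: Thu, 31 Dec 2037 23:55:55 GMT
ETag: "5f2d8a1b"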
