new image makes http request even though cached? - javascript

I have a JavaScript slideshow that creates the next slide dynamically and then moves it into view. Since the images are actually sprites, the src is transparent.png and the actual image is mapped via background: url(...) in CSS.
Every time (well, most of the time) the script creates a new Element, Firefox makes an HTTP request for transparent.png. I have a far-future Expires header, and Firefox respects the expiry on every other file.
Is there a way to avoid these unnecessary requests? Even though the server is returning 304 Not Modified responses, it would be nice if Firefox respected the expiry on dynamically created images.
I suspect that injecting a plain HTML string instead of using new Element might solve the problem, but I use some methods on Prototype's extended Element object, so I would like to avoid a bunch of HTML strings in my JS file.
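A rough sketch of the kind of code involved (the class names, container id, and offset below are made up for illustration, and assume Prototype):

```javascript
// Each slide is an <img> whose src is always transparent.png; the
// visible artwork comes from a CSS background sprite on the class.
var slide = new Element('img', {
    src: '/img/transparent.png',      // hypothetical path
    'class': 'slide sprite-3',        // hypothetical sprite class
    alt: ''
});
$('slideshow').insert(slide);         // Prototype's Element#insert
slide.setStyle({ left: '640px' });    // then animated into view
```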
This is a nit-picky question, but I'm working on front-end optimization now, so I thought I would address it.
Thanks.
#TJ Crowder Here are two images: http://tinypic.com/r/29kon45/5. The first shows that the requests for trans.png are proliferating. The second shows an example of the headers. Thanks
#all Just to reiterate: what's really strange is that it only makes these unnecessary requests about half the time, even though all images are created via identical logic.

I know this doesn't address why Firefox ignores your caching times, but you could always just bypass the issue and not use image tags for the slides. If you make the slides empty div tags and just apply the sprite as a background, Firefox won't have to make any more requests.
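A minimal sketch of that div-based approach, assuming the same Prototype setup and hypothetical class names as above:

```javascript
// No <img> element means no request for transparent.png at all;
// .slide sets the dimensions and .sprite-3 maps the background sprite.
var slide = new Element('div', { 'class': 'slide sprite-3' });
$('slideshow').insert(slide);
```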
EDIT:
According to the explanation at this site, Firefox isn't ignoring your cache times. If the image has expired, then the browser is supposed to just request the image again. If the time has not expired, which is what is happening in this case, then the browser is supposed to issue a conditional GET request. I don't think you can get away from it.
I think Firefox only issues requests half of the time because it just received the "304 Not Modified" status for the image on a previous request and wants to trust that for subsequent requests if they happen quickly enough.

It's a caching issue. There are a number of ways to control browser caching by altering the response headers that your web server adds. I usually use a combination of ETag and Expires.
If there are conflicting or incomplete caching instructions in the response headers, some browsers may just ignore them and fetch the latest version of the resource.
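As a minimal sketch of that combination (assuming a plain Node.js handler, since the answer doesn't name a server stack; the ETag value and one-year lifetime are placeholders):

```javascript
var http = require('http');

http.createServer(function (req, res) {
    var oneYear = 365 * 24 * 60 * 60;                       // seconds
    // Far-future freshness plus a validator for conditional GETs.
    res.setHeader('Cache-Control', 'public, max-age=' + oneYear);
    res.setHeader('Expires',
                  new Date(Date.now() + oneYear * 1000).toUTCString());
    res.setHeader('ETag', '"trans-png-v1"');                // placeholder validator
    res.writeHead(200, { 'Content-Type': 'image/png' });
    // ... write the image bytes ...
    res.end();
}).listen(8080);
```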

Related

how to make request on back button - chrome

I have an HTML page named 'home.html' that is dynamically updated via jQuery (AJAX). After this dynamic update, I go to the next page, 'takeexam.html'. But when I go back to the home page from takeexam, the dynamically updated HTML is not there and I see only the HTML that was generated during page load. When I check this in Mozilla or IE, though, the dynamic update is still there after the back button. What is the problem in Chrome and how do I solve it?
Dynamic update: I add some HTML into the DOM from the AJAX response, using jQuery.
As far as I have noticed, Internet Explorer and Mozilla make a request to the server and serve the page on the back button, but Chrome does not make a request to the server on the back button, and that is the problem.
What is the best way to solve this? How do I make all browsers make a request to the server on a back-button click? Or is there a better way to do this?
FYI: Preserve dynamically changed HTML on back button - I have already read this thread and am not able to understand it.
Without knowing what kind of dynamically updated HTML you are talking about (e.g. new components, style changes, etc.), we cannot guide you to a particular answer.
I can say that if state is important, use hidden text fields, cookies and/or localStorage, and based on localStorage dynamically regenerate the page to its previous configuration.
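A minimal sketch of the localStorage idea (the '#exam-list' selector and storage key are hypothetical names):

```javascript
// On page load, regenerate the previously saved state, if any.
$(function () {
    var saved = localStorage.getItem('homeDynamicContent');
    if (saved) {
        $('#exam-list').html(saved);
    }
});

// Call this after every AJAX update that changes the DOM.
function saveState() {
    localStorage.setItem('homeDynamicContent', $('#exam-list').html());
}
```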
After a few hours of digging: the problem is caused by caching in Chrome. I have set the HTTP header Cache-Control: max-age=0, no-cache, no-store, must-revalidate.
This prevents the page from being cached, so when the page is requested again the browser has no cached copy and makes a request to the server again.
But I still cannot understand how IE and Mozilla make it work without these HTTP headers being set explicitly.
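For reference, a hypothetical sketch of where that header would be set (the asker's server stack isn't stated; this assumes a bare Node.js handler serving home.html):

```javascript
var http = require('http');

http.createServer(function (req, res) {
    // The header that stops Chrome serving the stale copy on back.
    res.setHeader('Cache-Control',
                  'max-age=0, no-cache, no-store, must-revalidate');
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<html><!-- home.html markup --></html>');
}).listen(8080);
```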

Performing a server-query via an img tag

Disclaimer: This is more of a general interest question rather than something I am actually considering doing, since there are less hacky ways to achieve the same thing.
But, out of curiosity:
I can create custom <img ...> tags with JavaScript, which the browser will go and fetch. By choosing the URL for the src attribute of the image appropriately, this technically allows me to execute arbitrary server queries, provided my web server doesn't just return a static image file for the URL request but actually processes it via some form of GET servlet.
The beauty of this is that it can be used for cross-domain requests in pretty much any browser that supports images, without opening yourself up to any of the vulnerabilities that JSONP would, for example.
The problem is that I cannot return any data from this sort of server query, since cross-domain policies usually block any sort of pixel access to the image...
Or can I?
I should be able to at least return two small unsigned integers in the range of 1 to maybe 1000 or more by generating a blank image with a certain width and height, which the browser then allows me to retrieve no matter where the image was loaded from, correct?
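A sketch of that dimension-based trick (the endpoint URL is a placeholder, and the server is assumed to answer with a blank image whose width and height encode the two values):

```javascript
function queryViaImage(url, callback) {
    var img = new Image();
    img.onload = function () {
        // The rendered dimensions are readable even for a
        // cross-origin image once it has loaded.
        callback(img.width, img.height);
    };
    img.onerror = function () {
        callback(null, null);
    };
    img.src = url + '?q=some-query&_=' + new Date().getTime(); // cache-buster
}

queryViaImage('http://other-domain.example/beacon', function (a, b) {
    console.log('server returned', a, b);
});
```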
Are there other such tricks I can use to send even more information along with the image? And what limitations would I face in trying to get at the information?
As an example, could the server set a cookie when returning the image? If so, I could probably only access that cookie from the same domain, right? So it wouldn't be useful for cross-site-scripting (except if the client doesn't need to care about the info in the cookie, but the server does).
Or could I, as another example, somehow change the image's src by having the server reply with a redirect in such a way that I can retrieve the redirect-target in JavaScript?
Crazy ideas welcome! ;)
NOTE:
This question is not meant to be overly broad (it received a close request?!): there are only a very limited number of properties of an HTML img tag that could be manipulated by a server and then read by a client. Image width and height might in fact be the only ones (at least I couldn't find any others). I'm just asking whether there are more; there should only be very few other properties out there, if any.

AJAX calls made during JavaScript initialization follow the browser's cache rules, but later calls do not

I've recently started using jQuery AJAX calls to fetch some content within a document-ready function. I set cache-control headers in the AJAX call, and they get overridden when a forced reload of the page is done (Chrome), which is exactly what I want.
Unfortunately, later AJAX calls triggered by user interaction, after the page and its content are completely materialized, do not follow these cache rules.
For instance, if I Ctrl-reload a page that accesses /dostuff/ during initialization with a cache-control header set to an obscenely high max-age, the browser overrides that header and sets the max-age to 0, which is nice: it gives the user a lot of control to refresh content.
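A minimal sketch of the kind of initialization call described (the URL is the one from the question; the max-age value is a placeholder for the "obscenely high" time):

```javascript
$(document).ready(function () {
    $.ajax({
        url: '/dostuff/',
        // Request header that a forced reload (Ctrl-reload) overrides
        // with max-age=0 in Chrome.
        headers: { 'Cache-Control': 'max-age=31536000' },
        success: function (data) {
            // ... materialize the content ...
        }
    });
});
```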
Is this proper? Should I always expect AJAX calls that are part of initialization to override request headers the way I'm beginning to expect them to? It seems like there is a lot of room for inconsistency.
If I call the same URL later on, it does what I want: the browser automagically adds an If-Modified-Since header that helps me respond properly from the server.
If I call a URL that wasn't part of the initialization, however, like /dootherstuff/, the browser won't set the max-age to 0 even if the page was initialized through a forced reload.
I don't expect to be able to fix this problem, since it appears to be working as it should. I would, however, like to know how to reliably detect whether the page was force-reloaded so that I can handle the cache-control headers properly.
Resolving this issue using version keys on the URL that are fudged to deal with reloads, rather than actual content versions, will cause me a lot of grief and extra network traffic and processing time.

Can I tell from javascript whether my page was hard refreshed?

I've given up on this, but I thought I'd post here out of curiosity.
What I call a "hard refresh" is the Ctrl+R or Shift+F5 that you do during development to see your changes.
This causes the browser to add a Cache-Control: max-age=0 header to the request and to "child" requests like images, scripts, etc.
If you're doing your job, you'll get a 304 on everything but the resource that's changed. (Okay, well, see comments. This is assuming that other validators are sent based on browser caches.)
So far, so good.
The problem is that I'm not loading scripts directly from the page, but through a load.js, and the browsers are inconsistent about whether they include that Cache-Control header on those requests. Chrome doesn't do it at all, and Firefox seems to stop in the middle of a series.
Since I can't access the headers of the current request, there's no way to know whether that header should be included or not.
The result is that when I change a script (other than load.js), a hard refresh does not reliably work, and I have to, e.g., clear the browser cache (which is a bit heavy-handed).
Any thoughts on this?
Unfortunately you cannot detect a hard refresh from JavaScript (there is no access to the headers for the currently loaded page).
However, the server can tell from the request headers whether this is a hard refresh, so there's the option of cooperating. For example, the server can include a custom <meta> tag in the response or add a special class to <body>, and your script will then have access to this information.
Once load.js detects a hard refresh, it can propagate it to the dependent scripts by, e.g., attaching a URL parameter to the requests (think "?t=" + timestamp).
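A sketch of that cooperation (assuming the server adds class="hard-refresh" to <body> when the page request carried Cache-Control: max-age=0; the marker name is made up):

```javascript
var hardRefresh = /\bhard-refresh\b/.test(document.body.className);

function loadScript(src) {
    if (hardRefresh) {
        // Propagate the hard refresh to dependent scripts with a cache-buster.
        src += (src.indexOf('?') === -1 ? '?' : '&') + 't=' + new Date().getTime();
    }
    var script = document.createElement('script');
    script.src = src;
    document.getElementsByTagName('head')[0].appendChild(script);
}
```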
You could try checking localStorage. Set a localStorage variable and check it. If it's there, it's not a hard refresh, otherwise, it is a hard refresh.

Disable browser cache

I implemented a REST service and I'm using a web page as a client.
My page has some JavaScript functions that perform the same HTTP GET request to the REST server several times and process the replies.
My problem is that the browser caches the first reply and doesn't actually send the following requests.
Is there some way to force the browser to execute all the requests without caching?
I'm using Internet Explorer 8.0.
Thanks
Not sure if it can help you, but sometimes I add a random parameter to the URL of my request in order to avoid it being cached.
So instead of having:
http://my-server:8080/myApp/foo?bar=baz
I will use:
http://my-server:8080/myApp/foo?bar=baz&random=123456789
Of course, the value of the random parameter is different for every request; you can use the current time in milliseconds for that.
Not really. This is a known issue with IE; the classic solution is to append a random parameter to the end of the query string for every request. Most JS libraries do this natively if you ask them to (jQuery's cache: false AJAX option, for instance).
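For example (using the URL from the question; the jQuery option is as the answer describes, and the manual variant mirrors the other answer):

```javascript
// With jQuery: cache: false appends a "_=<timestamp>" parameter for you.
$.ajax({
    url: 'http://my-server:8080/myApp/foo',
    data: { bar: 'baz' },
    cache: false,
    success: function (reply) { /* ... */ }
});

// Or by hand, without a library:
var url = 'http://my-server:8080/myApp/foo?bar=baz&random=' + new Date().getTime();
```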
Well, of course you don't actually want to disable the browser cache entirely; correct caching is a key part of REST and the fact that it can (if properly followed by both client and server) allow for a high degree of caching while also giving fine control over the cache expiry and revalidation is one of the key advantages.
There is though an issue, as you have spotted, with subsequent GETs to the same URI from the same document (as in DOM document lifetime, reload the page and you'll get another go at that XMLHttpRequest request). Pretty much IE seems to treat it as it would a request for more than one copy of the same image or other related resource in a web page; it uses the cached version even if the entity isn't cacheable.
Firefox has the opposite problem, and will send a subsequent request even when caching information says that it shouldn't!
We could add a random or time-stamped bogus parameter at the end of a query string for each request. However, this is a bit like screaming "THIS IS SPARTA!" and kicking our hard-won download into a deep pit that no Health & Safety inspector considered putting a safety rail around. We obviously don't want to repeat a full unconditional request when we don't need to.
However, this behaviour has a time component. If we delay the subsequent request by a second, then IE will re-request when appropriate, while Firefox will honour the max-age and Expires headers and not re-request needlessly.
Hence, if two requests could fall within a second of each other (either we know they are called from the same function, or there's a chance of two events triggering them in close succession), using setTimeout to delay the second request until a second after the first has completed will make the browser use the cache correctly, rather than exhibiting the two different sorts of incorrect behaviour.
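A sketch of that delay (makeRequest() is a stand-in for whatever XMLHttpRequest wrapper actually fetches the resource):

```javascript
function requestTwice(url, makeRequest) {
    makeRequest(url, function () {
        // Wait a second before the follow-up so IE revalidates when
        // appropriate and Firefox honours max-age/Expires instead of
        // re-requesting unconditionally.
        setTimeout(function () {
            makeRequest(url, function () { /* second response handled here */ });
        }, 1000);
    });
}
```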
Of course, a second's delay is a second's delay. This could be a big deal or not, depending primarily on the size of the downloaded entity.
Another possibility is that something that changes so rapidly shouldn't be modelled as GETting the state of a resource at all, but as POSTing a request for a current status to a resource. This does smell heavily of abusing REST and POSTing what should really be a GET though.
Which can mean that on balance the THIS IS SPARTA approach of appending random stuff to query strings is the way to go. It depends, really.
