I've given up on this, but I thought I'd post here out of curiosity.
What I call a "hard refresh" is the Ctrl+R or Shift+F5 that you do during development to see your changes.
This causes the browser to add a Cache-Control: max-age=0 header to the request and "child" requests like images and scripts, etc.
If you're doing your job, you'll get a 304 on everything but the resource that's changed. (Okay, see the comments; this assumes the browser sends the cache validators it has stored for the other resources.)
So far, so good.
The problem is that I'm not loading scripts directly from the page, but through a load.js, and the browsers are inconsistent about whether they include that Cache-Control header on those requests. Chrome doesn't include it at all, and Firefox seems to stop partway through a series of requests.
Since I can't access the headers of the current request, there's no way to know whether that header should be included or not.
The result is that when I change a script (other than load.js), a hard refresh does not reliably work, and I have to, e.g., clear the browser cache (which is a bit heavy-handed).
Any thoughts on this?
Unfortunately you cannot detect a hard refresh from JavaScript (there is no access to the headers for the currently loaded page).
However, the server can tell from the request headers if this is a hard refresh, so there's the option of cooperating. For example the server can include a custom <meta> tag in the response or add a special class to <body> and your script will then have access to this information.
Once load.js detects a hard refresh it can then propagate it to the dependent scripts by e.g. attaching a URL parameter to the requests (think "?t=" + timestamp).
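For illustration, a minimal sketch of that cooperation, assuming the server emits a <meta name="hard-refresh" content="1"> tag only on hard refreshes; the meta name and the "t" parameter name are made-up examples, not anything from the question:
// in load.js: detect the server's hard-refresh marker and bust the cache on dependent scripts
var hardRefresh = !!document.querySelector('meta[name="hard-refresh"]');
function loadScript(src) {
  if (hardRefresh) {
    // append a throwaway parameter so any cached copy is skipped
    src += (src.indexOf('?') === -1 ? '?' : '&') + 't=' + new Date().getTime();
  }
  var s = document.createElement('script');
  s.src = src;
  document.getElementsByTagName('head')[0].appendChild(s);
}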
You could try checking localStorage. Set a localStorage variable and check it. If it's there, it's not a hard refresh, otherwise, it is a hard refresh.
I have an HTML page named 'home.html' that is dynamically updated through jQuery (AJAX). After this dynamic update, I go to a next page called 'takeexam.html'. But when I go back to the home page from takeexam, the dynamically updated HTML is not there and I see only the HTML that was generated during page load. When I check this with Mozilla or IE, the dynamic update is available on the back button. What is the problem in Chrome and how do I solve this?
Dynamic update - I add some HTML into the DOM from the AJAX response, using jQuery.
As far as I have noticed, Internet Explorer and Mozilla make a request to the server and serve the page on the back button, but Chrome does not make a request to the server on the back button, and that is the problem.
What is the best way to solve this? How do I make all browsers make a request to the server on a back-button click? Or is there a better way to do this?
FYI - "Preserve dynamically changed HTML on back button" - I have already read this thread and I am not able to understand it.
Without knowing what kind of dynamically updated HTML you are talking about (e.g. new components, style changes, etc.) we cannot guide you to a particular answer.
I can say that if state is important, use hidden text fields, cookies and/or localStorage. Based on localStorage, you can dynamically regenerate the page to its previous configuration, as sketched below.
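A rough sketch of that idea, assuming jQuery and a #content container that receives the dynamic HTML; the storage key and selector are made up for the example:
// restore any previously saved markup when the page is (re)loaded
$(document).ready(function () {
  var saved = localStorage.getItem('home-dynamic-html');
  if (saved) {
    $('#content').html(saved);
  }
});
// call this after every successful AJAX update so the state survives navigation
function saveDynamicState() {
  localStorage.setItem('home-dynamic-html', $('#content').html());
}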
After a few hours of digging, it turns out the problem is caused by caching in Chrome. I have set the HTTP headers Cache-Control: max-age=0, no-cache, no-store, must-revalidate.
This prevents the page from being cached, so when we request this page again the browser has no cached copy and makes a request to the server again.
But I still cannot understand how IE and Mozilla make it work without explicitly setting the HTTP headers.
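For reference, a minimal sketch of sending those headers; the Express-style Node server is purely an assumption, since the backend isn't specified here:
var express = require('express');
var app = express();
app.get('/home.html', function (req, res) {
  // forbid caching so the back button triggers a fresh request in Chrome
  res.set('Cache-Control', 'max-age=0, no-cache, no-store, must-revalidate');
  res.sendFile(__dirname + '/home.html');
});
app.listen(3000);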
I've recently started using jQuery AJAX calls to fetch some content within a document-ready function. I am setting cache-control headers in the AJAX call, and they get overridden when a forced reload of the page is done (Chrome), which is exactly what I want.
Unfortunately, later AJAX calls made through user interaction, after the page and content have completely materialized, do not follow these cache rules.
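For illustration, a sketch of the kind of initialization call being described, assuming jQuery's $.ajax with a headers option; the URL and max-age value are placeholders, not the real code:
$.ajax({
  url: '/dostuff/',
  headers: { 'Cache-Control': 'max-age=86400' }, // obscenely high on purpose
  success: function (data) {
    // render the fetched content
  }
});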
For instance, if I Ctrl-reload a page that initially accesses /dostuff/ during initialization with a cache-control header set to an obscenely high max-age, the browser overrides the cache-control header and sets the max-age to 0, which is nice: it gives the user a lot of control to refresh content.
Is this proper? Should I always expect AJAX calls that are part of initialization to override request headers the way I'm beginning to expect them to? It seems like there is a lot of room for inconsistency.
If I call the same URL later on, it does what I want: the browser automagically adds an If-Modified-Since header that helps me return properly from the server.
If I call a URL that hasn't been part of the initialization, however, like /dootherstuff/, it won't set the max-age to 0 if the page was initialized through a forced reload.
I don't expect to be able to fix this problem, since it appears to be working as it should. I would, however, like to know how to reliably detect whether the page was force-reloaded so that I can handle the cache-control headers properly.
Resolving this issue using version keys on the URL that are fudged to deal with reloads, rather than actual content versions, will cause me a lot of grief and extra network traffic and processing time.
Due to decisions that are completely outside of my control, I am in the following situation:
I have a product listing on catalog.org
Clicking the "Add to Cart" button on a product makes an AJAX JSONP request to secure.com/product/add/[productKey], which saves the cart record to the database, sets a cookie with the cart ID, and returns a true response (or false if it failed)
Back on catalog.org, if the response is true, another AJAX JSONP request is made to secure.com/cart/info, which reads the cart ID cookie, fetches the record, and returns the number of items in the cart
Back on catalog.org once again, the response is read and an element on the page is updated showing the number of items in the cart (if any)
At this point, clicking the "Go to Cart" button on catalog.org displays the cart summary on secure.com
This works beautifully in Firefox 17, Chrome 32 and IE 11. It also works in IE8 - IE10 on our development and test environments, where catalog.org is catalog.development.com and catalog.test.com and secure.com is secure.development.com and secure.test.com respectively.
However, after we deployed to production, this stopped working in IE8 - IE10. After adding a product to the cart, the number of items in the cart is updated successfully on catalog.org. Then, after clicking the "Go to Cart" button on catalog.org, the cart summary on secure.com shows nothing because it can't read the cookie. Going to Cache > "View cookie information" in IE developer tools shows no cart ID cookie. It should be there, just like it is in other browsers and in our development and test environments.
I believe what's happening is IE is blocking third party cookies. We have added a P3P compact policy header to all requests on secure.com, but the cookie is still not being set. The header we are setting is:
P3P: CP="CAO PSA OUR"
Why doesn't adding the compact policy header fix this in IE8 - IE10? How can I fix this to work in all versions of IE?
Solution
There are several good ideas posted below. I accepted #sdecima's because it sounded the most promising. We ended up combining some of these ideas but managed to avoid XDomainRequest:
Clicking the "Add to Cart" button on a product makes an AJAX JSONP
request to secure.com/product/add/[productKey], which saves the cart
record to the database, sets a cookie with the cart ID, and returns a
true response (or false if it failed)
We changed the action at secure.com/product/add to return a JSON object with a boolean indicating success or failure and the cart ID.
Back on catalog.org, if the response is true, another AJAX JSONP request is made to secure.com/cart/info, which reads the cart ID cookie, fetches the record, and returns the number of items in the cart
We changed the callback function to check for both properties in the response object. If success is true and the cart ID is present, we create a hidden iframe on the page. The src attribute of the iframe is set to a new endpoint we added to secure.com. This action accepts a cart ID parameter and saves the cart ID cookie. We no longer need to save the cookie in the secure.com/product/add action.
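A sketch of that callback; the response property names (success, cartId) follow the description above, while the setcookie endpoint path is a made-up stand-in for the new secure.com action:
function onAddToCartResponse(response) {
  if (response.success && response.cartId) {
    // hidden iframe whose src points at the new secure.com endpoint,
    // which saves the cart ID cookie on the secure.com domain
    var iframe = document.createElement('iframe');
    iframe.style.display = 'none';
    iframe.src = 'https://secure.com/cart/setcookie?cartId=' + encodeURIComponent(response.cartId);
    document.body.appendChild(iframe);
  }
}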
Next, we changed the action at secure.com/cart/info to accept a cart ID parameter. This action will use the cart ID parameter if present to fetch the cart information, otherwise it will still attempt to read the cookie. This extra check would be unnecessary if we could guarantee that the iframe had finished loading and the cookie had been saved on secure.com, but we have no way of knowing when the iframe has finished loading on catalog.org due to browser security restrictions.
Finally, the P3P header CP="CAO PSA OUR" is still required for this to work in IE7 - IE10. (Yes, this works in IE7 now too :)
We now have a solution (albeit an incredibly complex one) for saving and accessing cross-domain cookies that works in all major browsers, at least as far back as we can reliably test.
We will probably refactor this some more. For one thing, the second AJAX JSONP request to secure.com/cart/info is redundant at this point since we can return all the information we need in the original request to secure.com/product/add action (a side benefit of changing that action to return a JSON object - plus we can return an error message indicating exactly why it failed if there was an error).
In short
Cookies will NOT go through a cross-origin request on IE 8 and 9. It should work on IE 10 and 11 though.
IE 8 and 9
On IE8/9, XMLHttpRequest only partially supports CORS, and cross-origin requests are made with the help of the XDomainRequest object, which does NOT send cookies with each request.
You can read more about this on the following official MSDN Blog post:
http://blogs.msdn.com/b/ieinternals/archive/2010/05/13/xdomainrequest-restrictions-limitations-and-workarounds.aspx
Particularly this part:
5. No authentication or cookies will be sent with the request
In order to prevent misuse of the user’s ambient authority (e.g. cookies, HTTP credentials, client certificates, etc), the request will be stripped of cookies and credentials and will ignore any authentication challenges or Set-Cookie directives in the HTTP response. XDomainRequests will not be sent on previously-authenticated connections, because some Windows authentication protocols (e.g. NTLM/Kerberos) are per-connection-based rather than per-request-based.
IE 10+
Starting with IE10, full CORS support was added to XMLHttpRequest, and it should work fine with a correct Access-Control-Allow-Origin header on the response from the server (that wishes to set the cookie on the browser).
More about this here:
http://blogs.msdn.com/b/ie/archive/2012/02/09/cors-for-xhr-in-ie10.aspx
And here:
http://www.html5rocks.com/en/tutorials/cors/
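As a rough sketch of a credentialed CORS request on IE10+ (the withCredentials flag, plus an exact-origin Access-Control-Allow-Origin and Access-Control-Allow-Credentials: true on the response, are standard CORS requirements for cookies, not details taken from the quoted posts):
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://secure.com/cart/info');
xhr.withCredentials = true; // send and accept cookies on the cross-origin request
xhr.onload = function () {
  console.log(xhr.responseText);
};
xhr.send();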
Workarounds on IE 8 and 9
The only way around this on IE8/9 is, quoting the same MSDN post as above:
Sites that wish to perform authentication of the user for cross-origin requests can use explicit methods (e.g. tokens in the POST body or URL) to pass this authentication information without risking the user’s ambient authority.
Bottom line: third-party cookies are commonly blocked by privacy/advertisement-blocking extensions and should be considered unreliable. You'll be shooting yourself in the foot by leaving them in production.
The syntax suggests that the endpoint has ambitions to one day become RESTful. The only problem with that is using cookies, which throws the whole "stateless" concept out of the window! Ideally, changes should be made to the API. If you are not integrating with a third party (i.e. "secure.com" is operated by your company) this is absolutely the correct way to deal with the issue.
Move the cartId out of the secure.com cookie into its querystring:
secure.com/product/add/9876?cartId=1234 //should be a POST
Where do you get a valid cartId value? We can persist it in a secure-com-cart-id cookie set for the catalog domain, which avoids any cross-domain issues. Check that value and, if present, append it to every secure.com request as above:
$.post('https://secure.com/product/add/9876', { // needs the jQuery.cookie plugin
    cartId: $.cookie('secure-com-cart-id')
});
If you don't have a valid cartId, treat it as a new user and make the request without the parameter. Your API should then assign a new id and return it in the response. The "local" secure-com-cart-id cookie can then be updated. Rinse and repeat.
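A minimal sketch of that flow, assuming jQuery plus the jQuery.cookie plugin; the response shape ({ success: ..., cartId: ... }) is illustrative, not the actual API:
function addToCart(productKey) {
  var data = {};
  var cartId = $.cookie('secure-com-cart-id');
  if (cartId) {
    data.cartId = cartId; // existing cart: pass it along with the request
  }
  $.post('https://secure.com/product/add/' + productKey, data, function (response) {
    if (response.success) {
      // new users get a fresh id back; existing users get the same one
      $.cookie('secure-com-cart-id', response.cartId);
    }
  }, 'json');
}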
Voila, you've just persisted an active user cart without polluting API calls with cookies. Go yell at your architect. If you can't do that (changing API syntax or yelling), you'll have to set up a tunnel to secure.com endpoint so that there will be no cross-domain request - basically something sitting at catalog.org/secure-com-endpoint which will channel the requests to secure.com verbatim. It's a workaround specifically to avoid making changes to the API, just don't do it with code and have proper Apache/IIS/F5 rules set up to handle it instead. A quick search comes up with several explanations, this one looks pretty good to me.
P.S.: this is a classic XY problem in my opinion. The solution isn't necessarily about persisting 3rd party cookies but about passing necessary parameters to a 3rd party while persisting the data somewhere.
Although a correct solution would be a change of architecture, if you're looking for a quick, temporary solution:
JSONP files are actually just JavaScript. You could add a line of code to set cookies at the front of your JSONP.
e.g. instead of:
callback({"exampleKey": "exampleValue"});
Your JSONP could look like:
document.cookie="cartID=1234";
callback({"exampleKey": "exampleValue"});
If you control the DNS records, create a new entry so both servers are in the same domain.
Is there one database serving both catalog.org and secure.com, or can they communicate?
If so, then you've got it.
When catalog.org serves a cookie, save it in the db.
When secure.com serves a cookie, save it in the db.
Then you can determine whose cart belongs to which user.
This is a fun problem to consider...
Update 2:
When a user goes to catalog.org:
check if he has a cat_cookie_id cookie; if not, then:
on catalog.org:
create a key-value pair and save it in the db: {cat_cookie_id, unique_number}
set cat_cookie_id in the browser
instruct the browser to make an AJAX request to secure.com/register/unique_number
on secure.com:
read unique_number from the URL
create a secure_cookie_id
save in the db: {cat_cookie_id, unique_number, secure_cookie_id}
delete unique_number, as this is a one-time-use key
Now the db can map cat_cookie_id to secure_cookie_id and vice versa.
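A sketch of the catalog.org half of that handshake, assuming jQuery for the cross-domain (JSONP) call; the cookie name and register endpoint come from the steps above, while the id generation and the readCookie helper are made up for the example:
function readCookie(name) {
  var match = document.cookie.match(new RegExp('(^|; )' + name + '=([^;]*)'));
  return match ? match[2] : null;
}
if (!readCookie('cat_cookie_id')) {
  // in practice both values would come from catalog.org's server, which stores the pair
  var catCookieId = 'cat-' + new Date().getTime();
  var uniqueNumber = Math.floor(Math.random() * 1e9);
  document.cookie = 'cat_cookie_id=' + catCookieId + '; path=/';
  $.ajax({ url: 'https://secure.com/register/' + uniqueNumber, dataType: 'jsonp' });
}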
I implemented a REST service and I'm using a web page as the client.
My page has some JavaScript functions that perform the same HTTP GET request to the REST server several times and process the replies.
My problem is that the browser caches the first reply and does not actually send the following requests.
Is there some way to force the browser to execute all the requests without caching?
I'm using Internet Explorer 8.0.
Thanks
Not sure if it can help you, but sometimes I add a random parameter to the URL of my request in order to avoid it being cached.
So instead of having:
http://my-server:8080/myApp/foo?bar=baz
I will use:
http://my-server:8080/myApp/foo?bar=baz&random=123456789
Of course, the value of the random parameter is different for every request. You can use the current time in milliseconds for that.
Not really. This is a known issue with IE; the classic solution is to append a random parameter to the end of the query string for every request. Most JS libraries do this natively if you ask them to (jQuery's cache: false AJAX option, for instance).
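Two equivalent sketches of that, using the URL from the question; jQuery's cache: false simply appends a "_=<timestamp>" parameter for you:
$.ajax({
  url: 'http://my-server:8080/myApp/foo?bar=baz',
  cache: false // jQuery adds the cache-busting parameter automatically
});
// or by hand, as described above
$.get('http://my-server:8080/myApp/foo?bar=baz&random=' + new Date().getTime());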
Well, of course you don't actually want to disable the browser cache entirely; correct caching is a key part of REST and the fact that it can (if properly followed by both client and server) allow for a high degree of caching while also giving fine control over the cache expiry and revalidation is one of the key advantages.
There is, though, an issue, as you have spotted, with subsequent GETs to the same URI from the same document (as in DOM document lifetime; reload the page and you'll get another go at that XMLHttpRequest). IE pretty much seems to treat it as it would a request for more than one copy of the same image or other related resource in a web page; it uses the cached version even if the entity isn't cacheable.
Firefox has the opposite problem, and will send a subsequent request even when caching information says that it shouldn't!
We could add a random or time-stamped bogus parameter at the end of a query string for each request. However, this is a bit like screaming "THIS IS SPARTA!" and kicking our hard-won download into a deep pit that no Health & Safety inspector considered putting a safety rail around. We obviously don't want to repeat a full unconditional request when we don't need to.
However, this behaviour has a time component. If we delay the subsequent request by a second, then IE will re-request when appropriate, while Firefox will honour the max-age and Expires headers and not re-request needlessly.
Hence, if two requests could come within a second of each other (either we know they are called from the same function, or there's a chance of two events triggering them in close succession), using setTimeout to delay the second request until a second after the first has completed will make both browsers use the cache correctly, rather than fall into their two different sorts of incorrect behaviour.
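A sketch of that delay, assuming jQuery; the /status URL and the render() function are hypothetical:
function refreshTwice() {
  $.get('/status', function (first) {
    render(first); // render() is a hypothetical display function
    // a second's gap lets IE revalidate properly and lets Firefox honour
    // max-age/Expires instead of firing a needless request
    setTimeout(function () {
      $.get('/status', render);
    }, 1000);
  });
}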
Of course, a second's delay is a second's delay. This could be a big deal or not, depending primarily on the size of the downloaded entity.
Another possibility is that something that changes so rapidly shouldn't be modelled as GETting the state of a resource at all, but as POSTing a request for a current status to a resource. This does smell heavily of abusing REST and POSTing what should really be a GET though.
Which can mean that on balance the THIS IS SPARTA approach of appending random stuff to query strings is the way to go. It depends, really.
I have a JavaScript slide show that creates the next slide dynamically and then moves it into view. Since the images are actually sprites, the src is transparent.png and the actual image is mapped via background: url(...) in CSS.
Every time (well, most of the time) the script creates a new Element, Firefox makes an HTTP request for transparent.png. I have a far-future Expires header, and Firefox is respecting all other files' expiries.
Is there a way to avoid these unnecessary requests? Even though the server is returning 304 Not Modified responses, it would be nice if Firefox would respect the expiries on dynamically created images.
I suspect that if I injected a simple HTML string instead of using new Element, this might solve the problem, but I use some methods on Prototype's extended Element object, so I would like to avoid a bunch of HTML strings in my JS file.
This is a nit-picky question, but I'm working on front-end optimization now, so I thought I would address it.
Thanks.
@TJ Crowder Here are two images: http://tinypic.com/r/29kon45/5. The first shows that the requests for trans.png are proliferating. The second shows an example of the headers. Thanks.
@all Just to reiterate: what's really strange is that it only makes these unnecessary requests about half the time, even though all images are created via identical logic.
I know this doesn't address why Firefox ignores your caching times, but you could always just bypass the issue and not use image tags for the slides. If you make the slides empty div tags and just apply the sprite as a background, Firefox won't have to make any more requests.
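A sketch of that, assuming Prototype (as in the question) and a .slide CSS rule carrying the background: url(...) sprite; the class name, offset and container id are made up:
var slide = new Element('div').addClassName('slide');
slide.setStyle({ backgroundPosition: '-200px 0' }); // pick the frame within the sprite
$('slideshow').insert(slide); // no <img>, so no request for transparent.png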
EDIT:
According to the explanation at this site, Firefox isn't ignoring your cache times. If the image has expired, then the browser is supposed to just request the image again. If the time has not expired, which is happening in this case, then the browser is supposed to issue a conditional GET request. I don't think you can get away from it.
I think Firefox only issues requests half of the time because it just received the "304 Not Modified" status for the image on a previous request and wants to trust that for subsequent requests if they happen quickly enough.
It's a caching issue. There are a number of ways to control browser caching by altering the response headers that your web server adds. I usually use a combination of ETag and Expires.
If there are conflicting or incomplete caching instructions in the Response headers, some browsers may just ignore them and get the latest version of the resource.
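For illustration, one way those headers might be set, assuming an Express-style server; the max-age, ETag value and route are placeholders:
var express = require('express');
var app = express();
app.get('/api/resource', function (req, res) {
  res.set({
    'Cache-Control': 'max-age=60', // let the browser reuse the copy briefly
    'Expires': new Date(Date.now() + 60000).toUTCString(),
    'ETag': '"v42"' // lets the browser revalidate with If-None-Match
  });
  res.json({ value: 'example' });
});
app.listen(3000);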