I have an HTML page, 'home.html', that is updated dynamically through jQuery (AJAX). After this dynamic update, I navigate to the next page, 'takeexam.html'. But when I go back to the home page from takeexam, the dynamically updated HTML is not there and I see only the HTML that was generated during page load. When I check this in Mozilla or IE, the dynamic update is still there after pressing the back button. What is the problem in Chrome and how do I solve it?
Dynamic update: I add some HTML into the DOM, based on a check of the AJAX response, using jQuery.
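Roughly like this (a simplified illustration, not my exact code; the endpoint, selector, and response fields are made up):

    $.ajax({ url: '/exams/available', dataType: 'json' }).done(function (data) {
        if (data.hasNewExams) {                    // checking a flag from the AJAX response
            $('#home-content').append(data.html);  // adding the returned HTML into the DOM
        }
    });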
As far as I have noticed, Internet Explorer and Mozilla make a request to the server and serve the fresh page on the back button, but Chrome does not make a request to the server on the back button, and that is the problem.
What is the best way to solve this? How do I make all browsers request the page from the server when the back button is clicked? Or is there a better way to do this?
FYI - "Preserve dynamically changed HTML on back button" - I have already read this thread and I am not able to understand it.
Without knowing what kind of dynamically updated HTML you are talking about (e.g. new components, style changes, etc.), we cannot guide you to a particular answer.
I can say that if state is important, use hidden text fields, cookies and/or localStorage, and then, based on localStorage, dynamically regenerate the page to its previous configuration.
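For example, something along these lines (a rough sketch; the key name and container selector are placeholders):

    // Save the dynamically added markup whenever it changes
    function saveState() {
        localStorage.setItem('homePageState', $('#dynamic-area').html());
    }

    // On page load (including when the user navigates back), restore whatever was saved
    $(function () {
        var saved = localStorage.getItem('homePageState');
        if (saved) {
            $('#dynamic-area').html(saved);
        }
    });

Call saveState() right after each AJAX update so the latest markup is always stored.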
After a few hours of learning, I found that the problem is caused by caching in Chrome. I have set the HTTP headers Cache-Control: max-age=0, no-cache, no-store, must-revalidate.
This stops the page from being cached, so when the page is requested again the browser has no cached copy and has to make a request to the server.
But I still cannot understand how IE and Mozilla make this work without explicitly setting the HTTP headers.
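How the headers get set depends on the server stack; as an illustration only, an Express-style handler might look like this:

    var express = require('express');
    var path = require('path');
    var app = express();

    // Hypothetical route for home.html -- adapt to whatever actually serves the page
    app.get('/home.html', function (req, res) {
        res.set('Cache-Control', 'max-age=0, no-cache, no-store, must-revalidate');
        res.sendFile(path.join(__dirname, 'home.html'));
    });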
Related
While testing a way in Firefox to reload an HTML page without caching, I've included the following snippet in my code:
<script>
    window.setTimeout(function () {
        location.reload(false);
    }, 5000);
</script>
This reloads the page after 5 seconds; however, I am shown the prompt: "To display this page, Firefox must send information that will repeat any action (such as a search or order confirmation) that was performed earlier."
Is there a way to do a silent refresh via JavaScript, one that doesn't show any prompt? For instance, if I use the refresh meta tag (HTML), my browser refreshes silently. I want to approximate that same experience, but via JS (and with no cache). BTW, mine is a Django web app, and I inject the JS code in my Django template.
This is standard behaviour to protect people from submitting form information more than once (e.g. to prevent double payments in an e-commerce system). Try telling the JavaScript to navigate to a 'new' page instead, setting the URL to your own:
window.location = "my.url/index.html?nocache=" + (new Date()).getTime();
The answer is borrowed from here, where there is also an explanation of why this works -> How can I force window.location to make an HTTP request instead of using the cache?
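Putting that together with the original five-second delay, a silent, cache-skipping refresh could look roughly like this (sketch only):

    window.setTimeout(function () {
        // Strip any existing query string so nocache values don't pile up,
        // then navigate to a unique URL: a fresh GET, no re-POST prompt, no cache hit
        var base = window.location.href.split('?')[0];
        window.location = base + '?nocache=' + new Date().getTime();
    }, 5000);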
Have you tried location.reload(true)? If set to true, it will always reload from the server. Set to false, it will look at the cache first.
You are getting this prompt because you are asking the browser to reload a POST request. You should always get this prompt when reloading a POST request, as POST is not a safe method.
However, if you wish, you can explicitly resend the POST request (though you might have difficulty recovering the POST data previously sent), or explicitly send a GET request to the same URL.
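For example, something as simple as this re-requests the current URL with a plain GET instead of repeating the POST (adjust if your page depends on the query string):

    // Navigate to the same path via a normal GET request
    window.location.href = window.location.pathname + window.location.search;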
I've recently started using jQuery AJAX calls to fetch some content within a document-ready function. I am setting cache-control headers in the AJAX call that get overridden when a forced reload of the page is done (Chrome), which is exactly what I want.
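The initialization-time call looks something like this (a simplified stand-in, not the actual code; the max-age value is just an example):

    // Initialization fetch with an explicit cache-control request header
    $.ajax({
        url: '/dostuff/',
        headers: { 'Cache-Control': 'max-age=31536000' },
        success: function (data) {
            // ...render the fetched content...
        }
    });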
Unfortunately, later AJAX calls made through user interaction, after the page and content have completely materialized, do not follow these cache rules.
For instance, if I Ctrl-reload a page that initially accesses /dostuff/ during initialization with a cache-control header set to an obscenely high max-age, the browser overrides that header and sets the max-age to 0, which is nice: it gives the user a lot of control to refresh content.
Is this proper? Should I always expect AJAX calls that are part of initialization to override request headers the way I'm beginning to expect them to? It seems like there is a lot of room for inconsistency.
If I call the same URL later on, it does what I want: the browser automagically adds an If-Modified-Since header, which helps me respond properly from the server.
If I call a URL that hasn't been part of the initialization, however, like /dootherstuff/, it won't set the max-age to 0 even if the page was initialized through a force reload.
I don't expect to be able to fix this problem, since it appears to be working as it should. I would, however, like to know how to reliably detect whether the page was force-reloaded so that I can handle the cache-control headers properly.
Resolving this issue with version keys on the URL that are fudged to deal with reloads, rather than tied to actual content versions, would cause me a lot of grief as well as extra network traffic and processing time.
I've given up on this, but I thought I'd post here out of curiosity.
What I call a "hard refresh" is the Ctrl+R or Shift+F5 that you do during development to see your changes.
This causes the browser to add a Cache-Control: max-age=0 header to the request and to "child" requests for images, scripts, etc.
If you're doing your job, you'll get a 304 on everything but the resource that's changed. (Okay, well, see comments. This is assuming that other validators are sent based on browser caches.)
So far, so good.
The problem is that I'm not loading scripts directly from the page, but through a load.js, and the browsers are inconsistent about whether they include that Cache-Control header on those requests. Chrome doesn't do it at all, and Firefox seems to stop in the middle of a series.
Since I can't access the headers of the current request, there's no way to know whether that header should be included or not.
The result is that when I change a script (other than load.js), a hard refresh does not reliably work, and I have to, e.g., clear the browser cache (which is a bit heavy-handed).
Any thoughts on this?
Unfortunately you cannot detect a hard refresh from JavaScript (there is no access to the headers for the currently loaded page).
However, the server can tell from the request headers if this is a hard refresh, so there's the option of cooperating. For example the server can include a custom <meta> tag in the response or add a special class to <body> and your script will then have access to this information.
Once load.js detects a hard refresh it can then propagate it to the dependent scripts by e.g. attaching a URL parameter to the requests (think "?t=" + timestamp).
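A minimal sketch of that flow, assuming the server adds a hard-refresh class to <body> (or a custom meta tag) when it sees Cache-Control: max-age=0 on the incoming request:

    // In load.js: check the flag the server set, then bust the cache on dependent scripts
    var hardRefresh = document.body.className.indexOf('hard-refresh') !== -1;

    function loadScript(src) {
        if (hardRefresh) {
            src += (src.indexOf('?') === -1 ? '?' : '&') + 't=' + new Date().getTime();
        }
        var s = document.createElement('script');
        s.src = src;
        document.head.appendChild(s);
    }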
You could try checking localStorage. Set a localStorage variable and check it. If it's there, it's not a hard refresh, otherwise, it is a hard refresh.
I want to implement AJAX like Facebook, so my sites can be really fast too. After weeks of research I also learned about BigPipe (which is not AJAX).
So the only thing left was figuring out how they pull in other requests, like going to a page/profile. I opened up Firebug and was checking what I get when I click on different profiles. But the problem is, Firebug doesn't record any such request, yet the page still gets loaded via AJAX and the HTML changes; Firebug does show the change in the HTML.
So I'm wondering: are they using an iframe to keep Firebug from seeing the request, or what? I want to know how much data they pull on each request. Is it the complete page or only part of the page? The page layout changes as well, depending on the kind of page (for example: groups, page, profile, ...).
I would be really grateful if a pro could give some feedback on this, because I haven't been able to find an answer anywhere for weeks.
The reason they use an iframe is usually security. iframes are like new tabs: there is no communication between your page and the iframed Facebook page. The iframe has its own cookies and session, so you really need to think about it as another window rather than as part of your own page (except for the obvious fact that its output is shown within your page).
That said, the developer tools in Chrome do show you the communication to and from the iframe.
When I click on a user's profile on Facebook, in Firebug I clearly see how the request for data happens and how the div's content changes.
So, what is the question about?
After a click on some user profile, Facebook makes the following GET request:
http://www.facebook.com/ajax/hovercard/user.php?id=100000655044XXX&__a=1
The response to this request is complex JS data containing all the information necessary to build the new page: an array of the profile's friends (with names, avatar thumbnail links, etc.), an array of the profile's latest entries (again with thumbnail URLs, annotations, etc.), and so on.
There is no magic, nothing like code hiding or obfuscation. =)
Looking at Facebook through Google Chrome's inspector, they use AJAX to request files that give back JavaScript, which is then used to make any changes to the page.
I don't know why (or whether) Facebook uses IFRAMEs to asynchronously load data, but I guess there is no special reason behind it. We used IFRAMEs too but have now switched to XMLHttpRequest for our projects because it's more flexible. Perhaps the IFRAME method works better on (much) older browsers, but even IE6 supports XMLHttpRequest fine.
Anyway, I'm certain there is no performance advantage to using IFRAMEs. If you need fast asynchronous data loading to dynamically update your page, go with XMLHttpRequest, since any modern browser supports it and it's as fast as HTTP can be.
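A minimal XMLHttpRequest version of that kind of partial update (the endpoint and target element are placeholders):

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/partials/profile', true);   // hypothetical partial-page endpoint
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            document.getElementById('content').innerHTML = xhr.responseText;
        }
    };
    xhr.send();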
If you know about BigPipe then you will be able to understand this.
As you have read, a BigPipe response looks something like this:
<script type="text/javascript"> bigpipe.onPageArrive({ 'css' : '', '__html' : ' ' }); </script>
So if they used plain AJAX, they would not be able to use BigPipe: if the server flushed its buffer during an AJAX request, the client would see no effect, because the AJAX oncomplete callback only fires once the complete response has been received and the connection closed. In other words, they would lose one of their best page-speed techniques there.
But if they use an iframe instead of AJAX, it makes sense: they can use BigPipe inside the iframe, and the server will send data like this:
<script type="text/javascript"> parent.bigpipe.onPageArrive({ 'some' : 'some' }); </script>
so the server can flush its buffer, and as soon as the buffer is flushed the browser receives that chunk; that is not possible in the plain AJAX case.
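To make that concrete, a stripped-down sketch (not Facebook's actual code, just the shape of the idea; the 'id' field is invented for this sketch, '__html' mirrors the sample above):

    // Parent page: a tiny stand-in for the real bigpipe object
    window.bigpipe = {
        onPageArrive: function (pagelet) {
            // Each flushed chunk is handled as soon as the iframe receives it
            document.getElementById(pagelet.id).innerHTML = pagelet.__html;
        }
    };

    // Navigate a hidden iframe to the new page; the server keeps the connection
    // open and flushes <script>parent.bigpipe.onPageArrive({...})</script> chunks
    var frame = document.createElement('iframe');
    frame.style.display = 'none';
    frame.src = '/profile?pagelets=1';   // made-up URL
    document.body.appendChild(frame);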
Important:
They use the iframe only when the page URL changes, i.e. when a new page that contains the pagelets needs to be downloaded; for other requests, like a popup box or notifications, they simply send an AJAX request.
All of this information is unofficial; it's just what I found while researching the topic.
(I'm not a native English speaker, sorry for spelling and grammar mistakes!)
When you click on a different profile, Facebook doesn't use AJAX to load the profile.
You simply open a new link, plain old HTML... but maybe I misunderstood you.
I have a JavaScript slide show that creates the next slide dynamically and then moves it into view. Since the images are actually sprites, the src is transparent.png and the actual image is mapped via background: url(...) in CSS.
Every time (well, most of the time) the script creates a new Element, Firefox makes an HTTP request for transparent.png. I have a far-future Expires header, and Firefox is respecting all other files' expiries.
Is there a way to avoid these unnecessary requests? Even though the server is returning 304 Not Modified responses, it would be nice if Firefox would respect the expiries on dynamically created images.
I suspect that if I injected a simple HTML string instead of using new Element, this might solve the problem, but I use some methods on Prototype's extended Element object, so I would like to avoid a bunch of HTML strings in my JS file.
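In code terms, the two options are roughly these (a simplified illustration; the class names and container id are made up):

    // Prototype's Element constructor (what the slideshow does now)
    var slide = new Element('img', { src: 'transparent.png', className: 'slide sprite-3' });
    $('slides').insert(slide);

    // Versus injecting a plain HTML string
    $('slides').insert('<img src="transparent.png" class="slide sprite-3">');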
This is a nit-picky question, but I'm working on front-end optimization now, so I thought I would address it.
Thanks.
@TJ Crowder Here are two images: http://tinypic.com/r/29kon45/5. The first shows that the requests for trans.png are proliferating. The second shows an example of the headers. Thanks
@all Just to reiterate: what's really strange is that it only makes these unnecessary requests about half the time, even though all images are created via identical logic.
I know this doesn't address why Firefox ignores your caching times, but you could always just bypass the issue and not use image tags for the slides. If you make the slides empty div tags and just apply the sprite as a background, Firefox won't have to make any more requests.
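A rough sketch of that approach (class names made up):

    // CSS (for reference):
    //   .slide   { width: 640px; height: 480px; background: url(sprites.png) no-repeat; }
    //   .slide-2 { background-position: -640px 0; }

    // Create the next slide as a plain div -- no src attribute, so no extra request
    var next = document.createElement('div');
    next.className = 'slide slide-2';   // background-position selects the sprite frame
    document.getElementById('slides').appendChild(next);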
EDIT:
According to the explanation at this site, Firefox isn't ignoring your cache times. If the image has expired, then the browser is supposed to just request the image again. If the time has not expired, which is what's happening in this case, then the browser is supposed to issue a conditional GET request. I don't think you can get away from it.
I think Firefox only issues requests half of the time because it just received the "304 Not Modified" status for the image on a previous request and wants to trust that for subsequent requests if they happen quickly enough.
It's a caching issue. There are a number of ways to control browser caching by altering the response headers that your web server adds. I usually use a combination of ETag and Expires.
If there are conflicting or incomplete caching instructions in the response headers, some browsers may just ignore them and fetch the latest version of the resource.
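As an illustration only (an Express-style sketch; this answer doesn't assume any particular server), far-future caching with ETags could be configured like this:

    var express = require('express');
    var app = express();

    // Serve images with ETags, a one-year Cache-Control max-age, and an explicit Expires header
    app.use('/img', express.static(__dirname + '/img', {
        etag: true,
        maxAge: '365d',
        setHeaders: function (res) {
            res.set('Expires', new Date(Date.now() + 365 * 24 * 60 * 60 * 1000).toUTCString());
        }
    }));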