Browser uses cache without revalidating - javascript

I am trying to learn the browser's (Chrome/Firefox) cache mechanism.
I set up a simple HTML:
<HTML><BODY>
Hello World
<script>
function loadJS(){
    var s = document.createElement('script');
    s.setAttribute('src','/myscript');
    document.body.appendChild(s);
}
loadJS()
</script>
</BODY></HTML>
I send "Cache-Control: max-age=30" for "/myscript".
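For reference, a minimal Node sketch of a handler that sends such a header (just an illustration; any server that sets the header behaves the same way):
// illustration only: serve /myscript with a 30-second max-age
const http = require('http');

http.createServer(function (req, res) {
    if (req.url === '/myscript') {
        res.writeHead(200, {
            'Content-Type': 'application/javascript',
            'Cache-Control': 'max-age=30'
        });
        res.end('console.log("myscript loaded");');
    } else {
        // the HTML page itself would be served elsewhere
        res.writeHead(404);
        res.end();
    }
}).listen(8080);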
Every time I press F5, the browser re-validates /myscript with my server and gets a 304 back.
But if I use
setTimeout(loadJS, 1);
Every time I press F5, it looks like the browser checks the expiry time, and if the resource is not expired, it uses the cache directly instead of going to the server for revalidation.
My question is:
Why? Is there a detailed explanation for this?
Does it mean that if I want the browser to use the cache and reduce network requests as much as possible, I need to wait until the page has loaded and then request resources via JS later?

I've done a fair amount of experimentation with browser cache control, and I am surprised that no one has posted an answer.
Many people do not pay attention to this. As a result, websites (for no reason at all) make browsers perform useless round trips for a 304 Not Modified on images, JS or CSS files that are unlikely to change in 5 years; who is going to change jquery.v-whatever?
So anyway, I have found that when you hard refresh the browser using F5 or Ctrl-R, Chrome will revalidate just about everything on the page, as it should. This is very helpful and is why you want to keep the ETags in the response header.
When testing your max-age and expires headers, browse the site as a user naturally would by clicking the links on the page. Watch the web server's logfile (I use http://www.apacheviewer.com) and you'll get a good idea of how the browsers are caching.
Setting the headers works. I posted this a while back: Apache: set max-age or expires in .htaccess for directory
The easiest way for me to manage the web server is to create a /cache directory and instruct Apache to set a 1-year max-age and Expires header for everything in every subdirectory.
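If your stack happened to be Node/Express rather than Apache, a rough equivalent of that /cache-directory idea might look like this (paths are illustrative):
const express = require('express');
const app = express();

// everything under /cache gets a one-year max-age; version the file names
// whenever the contents change so clients pick up new copies
app.use('/cache', express.static('public/cache', {
    maxAge: '365d',
    immutable: true
}));

app.listen(3000);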
It works wonders. My pages make 1 round trip to the server, whereas they used to make 3-5 trips with each request, just to get a 304.
Write your html as you normally would. The browsers will obey the cache settings in the headers.
Just know that hard refreshing the browser causes the browser to ignore max-age and rely on ETags.


How can I cancel consecutive requests to my server? [duplicate]

What would cause a page to be canceled? I have a screenshot of the Chrome Developer Tools.
This happens often but not every time. It seems like once some other resources are cached, a page refresh will load the LeftPane.aspx. And what's really odd is this only happens in Google Chrome, not Internet Explorer 8. Any ideas why Chrome would cancel a request?
We fought a similar problem where Chrome was canceling requests to load things within frames or iframes, but only intermittently and it seemed dependent on the computer and/or the speed of the internet connection.
This information is a few months out of date, but I built Chromium from scratch, dug through the source to find all the places where requests could get cancelled, and slapped breakpoints on all of them to debug. From memory, the only places where Chrome will cancel a request are:
The DOM element that caused the request to be made got deleted (e.g. an IMG is being loaded, but before the load finished, you deleted the IMG node)
You did something that made loading the data unnecessary (e.g. you started loading an iframe, then changed the src or overwrote the contents)
There are lots of requests going to the same server, and a network problem on earlier requests showed that subsequent requests weren't going to work (DNS lookup error, an earlier identical request resulted in, e.g., an HTTP 400 error code, etc.)
In our case we finally traced it down to one frame trying to append HTML to another frame, which sometimes happened before the destination frame had even loaded. Once you touch the contents of an iframe, it can no longer load a resource into it (how would it know where to put it?), so it cancels the request.
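A rough way to reproduce the first case in the list above from the console (the URL is a placeholder; pick something slow enough to still be in flight):
var img = document.createElement('img');
img.src = '/some/large-image.jpg';   // request starts here
document.body.appendChild(img);

// removing the node while the request is still in flight makes Chrome
// report the image request as (canceled) in the Network tab
setTimeout(function () { img.remove(); }, 10);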
status=canceled may happen also on ajax requests on JavaScript events:
<script>
$("#call_ajax").on("click", function(event){
    $.ajax({
        ...
    });
});
</script>
<button id="call_ajax">call</button>
The event successfully sends the request, but it is then canceled (though it is still processed by the server). The reason is that buttons submit their enclosing form on click, regardless of whether you also make an ajax request in the same click handler.
To prevent the request from being cancelled, event.preventDefault(); has to be called:
<script>
$("#call_ajax").on("click", function(event){
    event.preventDefault();
    $.ajax({
        ...
    });
});
</script>
NB: Make sure you don't have any wrapping form elements.
I had a similar issue where my button with onclick={} was wrapped in a form element. When clicking the button, the form was also submitted, and that messed it all up...
Another thing to look out for could be the AdBlock extension, or extensions in general.
But "a lot" of people have AdBlock....
To rule out extensions, open a new tab in incognito, making sure that "Allow in incognito" is off for the extension(s) you want to test.
In my case, I found that it was the jQuery global timeout setting: a jQuery plugin had set the global timeout to 500 ms, so whenever a request exceeded 500 ms, Chrome cancelled it.
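For illustration, the kind of global setting that causes this, and a per-request override (the URL is a placeholder):
// a plugin (or your own code) somewhere does this, so any request slower
// than 500 ms is aborted and shows up as canceled
$.ajaxSetup({ timeout: 500 });

// raising the global value, or overriding it per request, avoids the cancellation
$.ajax({ url: '/slow/endpoint', timeout: 0 });   // 0 = no timeout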
You might want to check the "X-Frame-Options" header. If it's set to SAMEORIGIN or DENY, then the iframe insertion will be canceled by Chrome (and other browsers) per the spec.
Also, note that some browsers support the ALLOW-FROM setting but Chrome does not.
To resolve this, you will need to remove the "X-Frame-Options" header. This could leave you open to clickjacking attacks, so you will need to decide what the risks are and how to mitigate them.
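As a sketch of what that looks like server-side (Express here, with a hypothetical /embeddable/ path), the idea is simply to stop sending the header on the routes you intend to frame:
const express = require('express');
const app = express();

app.use(function (req, res, next) {
    // only send X-Frame-Options on pages that must never be framed;
    // omit it on the routes that are deliberately embedded in iframes
    if (!req.path.startsWith('/embeddable/')) {
        res.set('X-Frame-Options', 'SAMEORIGIN');
    }
    next();
});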
Here's what happened to me: the server was returning a malformed "Location" header for a 302 redirect.
Chrome failed to tell me this, of course. I opened the page in firefox, and immediately discovered the problem.
Nice to have multiple tools :)
Another place we've encountered the (canceled) status is in a particular TLS certificate misconfiguration. If a site such as https://www.example.com is misconfigured such that the certificate does not include the www. but is valid for https://example.com, Chrome will cancel this request and automatically redirect to the latter site. This is not the case for Firefox.
Currently valid example: https://www.pthree.org/
A cancelled request happened to me when redirecting between secure and non-secure pages on separate domains within an iframe. The redirected request showed in dev tools as a "cancelled" request.
I have a page with an iframe containing a form hosted by my payment gateway. When the form in the iframe was submitted, the payment gateway would redirect back to a URL on my server. The redirect recently stopped working and ended up as a "cancelled" request instead.
It seems that Chrome (I was using Windows 7 Chrome 30.0.1599.101) no longer allowed a redirect within the iframe to go to a non-secure page on a separate domain. To fix it, I just made sure any redirected requests in the iframe were always sent to secure URLs.
When I created a simpler test page with only an iframe, there was a warning in the console (which I had previously missed, or which maybe didn't show up):
[Blocked] The page at https://mydomain.com/Payment/EnterDetails ran insecure content from http://mydomain.com/Payment/Success
The redirect turned into a cancelled request in Chrome on PC, Mac and Android. I don't know if it is specific to my website setup (SagePay Low Profile) or if something has changed in Chrome.
Chrome Version 33.0.1750.154 m consistently cancels image loads if I am using the Mobile Emulation pointed at my localhost; specifically with User Agent spoofing on (vs. just Screen settings).
When I turn User Agent spoofing off, image requests aren't canceled and I see the images.
I still don't understand why; in the former case, where the request is cancelled, the Request Headers (CAUTION: Provisional headers are shown) have only:
Accept
Cache-Control
Pragma
Referer
User-Agent
In the latter case, all of those plus others like:
Cookie
Connection
Host
Accept-Encoding
Accept-Language
Shrug
I got this error in Chrome when I redirected via JavaScript:
<script>
window.location.href = "devhost:88/somepage";
</script>
As you can see, I forgot the 'http://'. After I added it, it worked.
Here is another case of a request being canceled by Chrome, which I just encountered and which is not covered by any of the answers above.
In a nutshell
Self-signed certificate not being trusted on my android phone.
Details
We are in the development/debug phase. The URL points to a host with a self-signed certificate. The code is like:
location.href = 'https://some.host.com/some/path'
Chrome just canceled the request silently, leaving no clue for a newbie to web development like myself. Once I downloaded and installed the certificate on the Android phone, the issue was gone.
If you use axios, a too-short timeout can also cancel requests; increasing the default timeout can help:
// assuming "instance" is an axios instance created with axios.create()
// change the timeout delay (in milliseconds):
instance.defaults.timeout = 2500;
https://github.com/axios/axios#config-order-of-precedence
In my case, I had an anchor with a click event like:
<a href="" onclick="somemethod($index, hour, $event)">
Inside the click event I had a network call, and Chrome was cancelling the request. An anchor with href="" reloads the page, and at the same time the click event fires a network call that gets cancelled. When I replaced the href with void, like
<a href="javascript:void(0)" onclick="somemethod($index, hour, $event)">
The problem went away!
If you use Observable-based HTTP requests like the ones built into Angular (2+), the HTTP request can be canceled when the observable is unsubscribed (a common thing when you're using the RxJS 6 switchMap operator to combine streams). In most cases it's enough to use the mergeMap operator instead, if you want the request to complete.
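A minimal RxJS 6 sketch of the difference (framework-agnostic; the button id and URL are placeholders):
import { fromEvent } from 'rxjs';
import { ajax } from 'rxjs/ajax';
import { switchMap, mergeMap } from 'rxjs/operators';

const clicks$ = fromEvent(document.getElementById('refresh'), 'click');

// switchMap: a second click aborts the first in-flight HTTP call,
// which Chrome reports as "canceled" in the Network tab
clicks$.pipe(
    switchMap(() => ajax.getJSON('/api/data'))
).subscribe(data => console.log('latest response only', data));

// mergeMap: earlier requests are allowed to complete as well
clicks$.pipe(
    mergeMap(() => ajax.getJSON('/api/data'))
).subscribe(data => console.log('every response', data));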
I faced the same issue; somewhere deep in our code we had this pseudocode:
create an iframe
onload of iframe submit a form
After 2 seconds, remove the iframe
Thus, when the server took more than 2 seconds to respond, the iframe the server was writing the response to had already been removed; the response still had to be written, but there was no iframe to write it to, so Chrome cancelled the request. To avoid this, I made sure the iframe is removed only after the response is complete, or you can change the target to "_blank".
So one of the reasons is:
if the resource (an iframe in my case) that you are writing to is removed or deleted before you stop writing to it, the request will be cancelled.
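A rough reconstruction of that race (element names are illustrative):
var iframe = document.createElement('iframe');
iframe.name = 'responseTarget';
iframe.addEventListener('load', function () {
    // the first load event may be for the initial about:blank document;
    // only remove the iframe once the real response has arrived
    var href = null;
    try { href = iframe.contentWindow.location.href; } catch (e) { /* cross-origin response */ }
    if (href !== 'about:blank') {
        iframe.remove();
    }
});
document.body.appendChild(iframe);

var form = document.getElementById('uploadForm');   // assumed form
form.target = 'responseTarget';
form.submit();

// the original code removed the iframe on a fixed timer, which cancels the
// request whenever the server takes longer than the timer:
// setTimeout(function () { iframe.remove(); }, 2000);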
I had embedded all font types (woff, woff2, ttf) when embedding a web font in a style sheet. Recently I noticed that Chrome cancels the requests to ttf and woff when woff2 is present. I use Chrome version 66.0.3359.181 right now, but I am not sure when Chrome started canceling the extra font types.
We had this problem with a <button> tag in a form that was supposed to send an ajax request from JS. But this request was canceled because the browser submits the form automatically on any click on a button inside the form.
So if you really want to use a button instead of a regular div or span on the page, and you want to send the form through JS, you should set up a listener with the preventDefault function.
e.g.
$('button').on('click', function(e){
    e.preventDefault();
    // do ajax
    $.ajax({
        ...
    });
});
I had the exact same thing with two CSS files that were stored in another folder outside my main css folder. I'm using Expression Engine and found that the issue was in the rules in my htaccess file. I just added the folder to one of my conditions and it fixed it. Here's an example:
RewriteCond %{REQUEST_URI} !(images|css|js|new_folder|favicon.ico)
So it might be worth checking your .htaccess file for any potential conflicts.
The same thing happened to me when loading a .js file with $.ajax and then making an ajax request from it; what I did was include the script normally instead.
In my case, the code to show the e-mail client window caused Chrome to stop loading images:
document.location.href = mailToLink;
Moving it into $(window).load(function () {...}) instead of $(function () {...}) helped.
In case this helps anybody: I came across the cancelled status when I left out the return false; in the form submit handler. This caused the ajax send to be immediately followed by the default submit action, which overwrote the current page. The code is shown below, with the important return false; at the end.
$('form').submit(function () {
    $.validator.unobtrusive.parse($('form'));
    var data = $('form').serialize();
    // append the anti-forgery token to the serialized string
    data += '&__RequestVerificationToken=' + encodeURIComponent($('input[name=__RequestVerificationToken]').val());
    if ($('form').valid()) {
        $.ajax({
            url: this.action,
            type: 'POST',
            data: data,
            success: submitSuccess,
            error: submitFailed // $.ajax uses "error", not "fail"
        });
    }
    return false; // needed to stop the default form submit action
});
Hope that helps someone.
For anyone coming from LoopbackJS and attempting to use the custom stream method like the one provided in their chart example: I was getting this error using a PersistedModel; switching to a basic Model fixed my issue of the EventSource status cancelling out.
Again, this is specifically for the LoopBack API. Since this is a top answer and high on Google, I figured I'd throw this into the mix of answers.
For me the 'canceled' status was because the file did not exist. It's strange that Chrome does not show a 404.
It was as simple as an incorrect path for me. I would suggest the first step in debugging is to see if you can load the file independently of ajax etc.
The requests might have been blocked by a tracking protection plugin.
It happened to me when loading 300 images as background images. I'm guessing that once the first one timed out, it cancelled all the rest, or it reached the maximum number of concurrent requests; I need to implement a 5-at-a-time limit.
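A sketch of the 5-at-a-time idea (the URL list is assumed to exist):
function loadImagesLimited(urls, limit) {
    var index = 0;
    function next() {
        if (index >= urls.length) return;
        var img = new Image();
        img.onload = img.onerror = next;   // free the slot on success or failure
        img.src = urls[index++];
    }
    for (var i = 0; i < limit && i < urls.length; i++) next();
}

loadImagesLimited(backgroundImageUrls, 5);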
One of the reasons could be that XMLHttpRequest.abort() was called somewhere in the code; in this case, the request will have the cancelled status in the Chrome Developer Tools Network tab.
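Minimal sketch (placeholder URL): an aborted XHR is exactly what shows up as canceled:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some/slow/endpoint');
xhr.send();

// somewhere else in the code, e.g. when the user navigates away or a newer
// request supersedes this one:
xhr.abort();   // the Network tab now shows the request as (canceled)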
In my case, it started appearing after the Chrome 76 update.
Due to an issue in my JS code, window.location was getting updated multiple times, which resulted in the previous request being canceled.
Although the issue was present before, Chrome only started cancelling the request after the update to version 76.
I had the same issue when updating a record. Inside save() I was prepping the raw data taken from the form to match the database format (doing a lot of mapping of enum values, etc.), and this intermittently cancelled the PUT request. I resolved it by taking the data prepping out of save() and creating a dedicated dataPrep() method for it. I made dataPrep() async and awaited all the memory-intensive data conversion. I then return the prepped data to save(), where it is used in the HTTP PUT client. I made sure to await dataPrep() before calling the PUT method:
const dataToUpdate = await dataPrep();
http.put(apiUrl, dataToUpdate);
This solved the intermittent cancelling of request.

Updating Offline Cache (Chrome On Mobile)

Afternoon All,
A bit of background - I'm building a custom calendar for a company where jobs can be scheduled and engineers can access it from their mobile to know when and where they're going. They previously used Google calendar but now want something bespoke.
All is fine until somebody loses phone signal, gets a horrible offline page in Chrome, and can't access any information. What I want to do is have it save an offline version of the calendar but also update it when they revisit with a better connection, as job times often change.
I've tried saving the page and enabling offline mode in Chrome, but the page doesn't update until you manually clear the cache, so that's no good.
I've tried adding some javascript to hard refresh the page in the hope it clears the browser cache but again it doesn't update the page.
<script>location.reload(true);</script>
I read about cache manifests and have tried that too, but although it feels like it wants to work, it also doesn't update the page until I go to chrome://appcache-internals and remove the file.
CACHE MANIFEST
/calendar.php
/css/style.css
Neither PHP headers nor meta tags work either, as they either don't update the file on a revisit or simply don't save any files in the first place:
header("Cache-Control: no-cache, must-revalidate");
<meta http-equiv="Cache-Control" content="no-store" />
As far as I can tell, there's no way to manually delete a user's website cache and re-download it, and once the cache has been saved there's no way to force it to update. If you set it to expire, then it's not there to access offline, and you don't know when they will next have a connection to update it, so I'm going round in circles.
I've been trying for several hours now to find something that works and can't believe it's not a simple thing to do and therefore I'm now throwing myself on the mercy of you fine coders to point me in the right direction before my boss hangs me from the first floor window.
Many Thanks
UPDATE
Using what Clarence said as a starting point I came up with the following code in my appcache file:
CACHE MANIFEST
CACHE:
/css/bootstrap.css
/css/style.css
/calendar.php
NETWORK:
/calendar.php
# UPDATED: 03-04-2018 15:55:57
What this does is cache the calendar.php file, BUT if there is a connection it has another look at the files under the NETWORK heading, so I also put calendar.php in there. If the appcache file hasn't changed then the browser doesn't bother looking, so I've used the following code to write to the file when a job has been altered:
// keep everything up to and including '# UPDATED: ', then append a fresh timestamp
$manifest = file_get_contents(__DIR__ . '/cache.appcache');
$newFile = substr($manifest, 0, (strpos($manifest, '# UPDATED: ') + 11));
$newFile = $newFile . date('d-m-Y H:i:s');
file_put_contents(__DIR__ . '/cache.appcache', $newFile);
Basically it just searches the file for "UPDATED" and inserts a new time, thus updating the file and requiring a re-check from returning users.
Somebody might point out this isn't the right way to do it but it seems to work from my tests so would like to thank those that contributed.
Have you tried changing the contents of your cache manifest whenever you change one of the files? The APPCACHE is a bit finicky when it comes to changes in the files and can be troublesome to handle. I usually include a comment with a timestamp and version number just to force it to update in the browser, like so:
CACHE MANIFEST
# 01-01-2001 v1.0 (Change whenever you need to force an update of the cache)
CACHE:
/css/file.css
/js/file.js
NETWORK:
*
FALLBACK:
Well, the best option would be creating a PWA. This includes a manifest file and service workers as well. It enables you to cache the content of the website and update it once the connection has been re-established. However, it is fairly new and would require a decent amount of research into service workers. If you need any help regarding PWA development, I would be happy to help to an extent.
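As a very rough sketch of the service-worker part (file names match the ones in the question; treat it as a starting point, not production code), a "serve from cache, refresh from the network when possible" worker could look like this:
// sw.js
var CACHE = 'calendar-v1';

self.addEventListener('install', function (event) {
    event.waitUntil(
        caches.open(CACHE).then(function (cache) {
            return cache.addAll(['/calendar.php', '/css/style.css']);
        })
    );
});

self.addEventListener('fetch', function (event) {
    if (event.request.method !== 'GET') return;   // only cache GETs
    event.respondWith(
        caches.open(CACHE).then(function (cache) {
            return cache.match(event.request).then(function (cached) {
                var refresh = fetch(event.request).then(function (response) {
                    cache.put(event.request, response.clone());
                    return response;
                }).catch(function () {
                    return cached;   // offline: fall back to whatever we have
                });
                // serve the cached copy immediately if there is one, and let
                // the network response refresh the cache in the background
                return cached || refresh;
            });
        })
    );
});
The page would register it once with navigator.serviceWorker.register('/sw.js'), and Chrome on Android would then serve the cached calendar offline and pick up changes on the next visit with a connection.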
The only way without HTTP headers is to continuously rename the files (and the file names in the HTML tags that reference them), so the files are loaded afresh.
For the HTTP header approach, look here:
How to control web page caching, across all browsers?
You can do this whenever a change is made to the calendar.

Eliminate: ISP Injects Pages with Iframe Script for Ads

So my ISP (Smartfren; Indonesia) has decided to start injecting all non-SSL pages with an iframing script that allows them to insert ads into pages. Here's what's happening:
My browser sends a request to the server. The ISP intercepts it and instead returns JavaScript that loads the requested page inside an iframe.
Aside from being annoying in principle, this injection also breaks any amount of standard page functionality and presents possible security hazards.
What I've tried to do so far:
Using a GreaseMonkey script to nix away the injected code and redirect to the original URL. Result: Breaks some legitimate iframes. Also, the ISP's code gets executed, because GreaseMonkey only kicks in after the page is loaded.
Using Privoxy for a local proxy and setting up a filter to clean up the injection and replace it with a plain javascript redirect to the original URL. Result: Breaks some legitimate iframes. ISP's code never gets to the browser.
You can view the GreaseMonkey and Privoxy fixes I've been working on at the following paste: http://pastebin.com/sKQTvgY2 ... along with a sample of the ISP's injection.
Ideally I could configure Privoxy to immediately resend the request when the alteration is detected, instead of filtering out the injected JS and replacing it with a JS redirection to the original URL. (The ISP-injection gets switched off when the same request is resent without delay.) I'm yet to figure out how to accomplish that. I believe it'd fix the iframe-breaking problem.
I know I could switch to a VPN or use the Tor browser. (Or change the ISP.) I'm hoping there's another way around. Any suggestions on how to eliminate this nuisance?
Actually now I have a solution:
The ISP proxy reacts to the Accept: header that the browser sends.
This is the default for Firefox:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Now we are going to change this default and set it to:
Accept: */*
Here is how to set up Header Hacker for Google Chrome:
Set the title to anything you like: NO IFRAME
Under Append/replace, select "Replace with"
String: */*
Set Match string to .* and then click Add.
In the Permanent header switches,
set the domain to .* and select the rule you just created.
PS: changing it in the Firefox settings does not work 100%, because some requests (like ajax) seem to bypass it, so a plugin is the only way, as it literally intercepts every outgoing browser request.
That's it, no more iframes!!!
Hope this helps!
UPDATE: Using DNSCrypt is the best solution 😁
OLD ANSWER
I'm using this method:
Find the resource that contains the iframe code (use Chrome dev tools)
Block the URL with a proxy or hosts file
I'm using Linux, so I edited my hosts file at
/etc/hosts
Example :
127.0.0.1 ibnads.xl.co.id

Javascript debugging difficult as browser doesn't refresh the scripts!

I'm trying to debug JavaScript written in the MooTools framework. Right now I am developing a web application on top of Rails, and my web server is rails s, which boots WEBrick.
When I modify a particular tree.js file that's called within a MooTools init script,
require: {
    css: [MUI.path.plugins + 'tree/css/style.css'],
    js: [MUI.path.plugins + 'tree/scripts/tree.js'],
    onload: function(){
        if (buildTree) buildTree('tree1');
    }
},
the changes are not loaded, as the headers being sent to the client say Last-Modified: 10 July 2010, which is obviously not true since I just modified the file.
How do I get rid of this annoying caching? If I go directly to the script in my browser (Chrome), it doesn't show the changes until I hit refresh, but this doesn't fix my problem: when I go back to my application and hit refresh, it still loads the pre-modified script.
This has happened to me also in FF; I think it is a cache header sent by the server or by the browser itself.
Anyway, a simple way to avoid this problem during development is to add a random param to the file name of the script.
Instead of calling 'tree/scripts/tree.js', use 'tree/scripts/tree.js?' + random; that should invalidate all caches.
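In the require block from the question, that could look something like this (sketch):
require: {
    css: [MUI.path.plugins + 'tree/css/style.css'],
    js: [MUI.path.plugins + 'tree/scripts/tree.js?nocache=' + new Date().getTime()],
    onload: function(){
        if (buildTree) buildTree('tree1');
    }
},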
As frisco says, adding a random number in development does the trick, but you will likely find that the problem still affects you in production. You want to push new JavaScript changes to your users but can't until their browsers stop caching the file. To handle this, just get the file's mtime and add that as the query string. It will only change when the file is modified, so the JavaScript will be loaded from cache if it has not changed, or from the server if it has.
PHP has the function filemtime but as I'm not familiar with Ruby, I'm afraid I can't help you further in that direction (sorry!). However, this answer seems to accomplish what you want.
Try the Ctrl+F5 trick to avoid hitting the browser cache.
More info here:
What requests do browsers' "F5" and "Ctrl + F5" refreshes generate?

How can I tell when a web page resource is cached?

Is there a way in JavaScript for me to tell whether a resource is already in the browser cache?
We're instrumenting a fraction of our client-side page views so that we can get better data on how quickly pages are loading for our users. The first time users arrive on our site, a number of resources (JS, CSS, images) are cached by the browser, so their initial pageview is going to be slower than subsequent ones.
Right now, that data is mixed together, so it's hard to tell an initial page load from a subsequent pageview that's slow for some other reason. I'd love a cross-browser way to check to see whether the cache is already primed, so that I can segregate the two sorts of pageview and analyze them separately.
You should use transferSize:
window.performance.getEntriesByName("https://[resource-name].js")[0].transferSize
To verify it, you can run the above line in Chrome:
If the browser has caching enabled and your resource was previously loaded with a proper cache-control header, transferSize should be 0.
If you disable caching (Network tab -> Disable cache) and reload, transferSize should be > 0.
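To do this for every resource on the page at once, something like the following sketch works (note that cross-origin resources without a Timing-Allow-Origin header also report transferSize as 0, so treat it as a heuristic):
var entries = performance.getEntriesByType('resource');
var probablyCached = entries.filter(function (e) { return e.transferSize === 0; });
var fromNetwork = entries.filter(function (e) { return e.transferSize > 0; });
console.log('probably cached:', probablyCached.map(function (e) { return e.name; }));
console.log('fetched over network:', fromNetwork.map(function (e) { return e.name; }));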
There isn't a JavaScript API for checking if a resource is cached. I think the best you can do is check how long it took to load the resources, and bucket the ones with shorter load times together.
At the top of the page:
<script>var startPageLoad = new Date().getTime(), fooLoadTime, barLoadTime;</script>
On each resource:
<img src="foo.gif" onload="fooLoadTime = new Date().getTime() - startPageLoad">
<script src="bar.js" onload="barLoadTime = new Date().getTime() - startPageLoad"></script>
When reporting load times:
var fooProbablyCached = fooLoadTime < 200; // took < 200 ms to load foo.gif
var barProbablyCached = barLoadTime < 200; // took < 200 ms to load bar.js
You may need to use onreadystatechange events instead of onload in IE.
You need a plug-in to do this. Firebug can tell you on the "NET" tab, once you install it (for Firefox). JavaScript itself cannot see the browser's HTTP traffic.
