Is there any way to enable browser caching for only certain CSS and JavaScript files through the .htaccess file?
I have three CSS files:
http://www.example.com/css/main.css
http://www.example.com/css/star_rating.css
http://www.example.com/js/jquery.autocomplete.css
"main.css" may change from day to day. I want caching for star_rating.css and jquery.autocomplete.css only, not for main.css. How can I achieve this?
Also, is there any way to cache the Google AdSense JavaScript files?
https://www.gstatic.com/swiffy/v7.1/runtime.js
http://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js
https://pagead2.googlesyndication.com/pagead/osd.js
Set a Cache-Control header in your HTTP response via .htaccess; this has already been answered here: How can i add cache control code to htaccess?
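For instance, a minimal sketch of the two rules combined, assuming mod_headers is enabled (the 24-hour lifetime is illustrative):

    # Cache all CSS and JavaScript for 24 hours by default
    # (requires mod_headers).
    <FilesMatch "\.(css|js)$">
        Header set Cache-Control "public, max-age=86400"
    </FilesMatch>

    # But always revalidate main.css, since it changes frequently.
    # Later sections override earlier ones for matching files.
    <Files "main.css">
        Header set Cache-Control "no-cache"
    </Files>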
The second rule above is the subsequent override you need to reduce the cache lifetime of main.css to whatever suits you. However, before you go ahead with that...
Personally, I wouldn't bother with such fine-grained control; just set your cache time so the resources are only requested once in a typical browsing session (24 hours?). Although some browser caches can be rather large, there is no guarantee a busy user will still have your resources cached the next time they visit your site: if the cache fills up, the less frequently used/stale items will be evicted.
For long-term caching strategies, I would just check that ETag support is working on your servers. If a browser already has one of your items cached, it will revalidate with a conditional request, sending the ETag it holds for your resource in an If-None-Match header.
If the resource has not been modified (the ETag values match), your server responds with a 304 (Not Modified) instead of a 200, which is a good saving for large resources.
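The exchange looks roughly like this (the ETag value is illustrative):

    GET /css/star_rating.css HTTP/1.1
    Host: www.example.com
    If-None-Match: "5e1a2b3c"

    HTTP/1.1 304 Not Modified
    ETag: "5e1a2b3c"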
You cannot influence the response headers of the Google AdSense JavaScript files if you are hot-linking to them rather than hosting them yourself, but I would expect them to have sensible Cache-Control headers (set by Google) anyway.
Related
My website displays traffic cams, amongst other resources, and they change every minute. I have to use a &nonce= query parameter to override caching and get an update every minute. However, ALL of those responses get cached, and the storage profile (specifically image caching) gets into gigabytes quickly.
As the traffic cam resources are out of my control (they don't specify no-cache, but they DO block CORS), I see these options to prevent caching of images (while keeping it for other resources):
Specify (what?) in the request so that it's not cached.
Using XHR to specify no-cache and createObjectURL would fail because of CORS. And I can't bypass CORS because it's a PWA, not meant to have a local proxy server.
Override the response (headers!) with some middleware? (which?)
Clear only images in the cache every minute (how? see the sketch below).
A better option I'm missing?
(Using plain JavaScript, no jQuery.)
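For option 4, a minimal sketch assuming the images live in a Cache API store named 'runtime' (a hypothetical name; match it to your service worker's setup). Note this only touches Cache API storage, not the browser's HTTP cache:

    // Evict traffic-cam images from the Cache API store once a minute,
    // leaving all other cached resources in place.
    async function evictCachedImages() {
      const cache = await caches.open('runtime');
      for (const request of await cache.keys()) {
        // Match image URLs by extension; adjust the pattern to your cams.
        if (/\.(jpe?g|png|gif|webp)(\?|$)/i.test(request.url)) {
          await cache.delete(request);
        }
      }
    }
    setInterval(evictCachedImages, 60 * 1000);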
I am trying to improve the performance of a few web pages and wanted to understand whether IE caches the JavaScript files for my internal application. So I used Fiddler to watch the requests going to the server.
Every single time the create-customer page is loaded, I can see the same requests in Fiddler, for the same files, with result 200 (not 304 Not Modified) fetching the JavaScript files. These include jQuery, Knockout, and a few custom ones.
I studied the request and response headers (below); Cache-Control looks OK, and nothing suggests the files are not cacheable. But I don't understand why these same HTTP requests show up in Fiddler (which means a request is actually made to the server) if the files are cached.
Seeing the same requests go to the server every time makes me wonder:
Is the browser caching these or not?
If not, are these at least cached in IIS?
How can I avoid these unnecessary HTTP requests, since these JavaScript files don't change at all?
Many Thanks.
Your request for the file has a Pragma: no-cache header (up at the top of your image, two lines under "Request Headers"), which tells the browser and the server that you don't want to use the cached copy.
You'll want to look at how you're making that request to find out why that header is there, and get rid of it.
Possibilities:
You're loading it via some kind of AMD or other dynamic loading mechanism that is configured not to use the cache (see the sketch after this list)
You're running with developer tools open and their "disable cache" option turned on
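For the first possibility, a loader that fetches scripts over XHR could be adding the header explicitly, along these lines (the URL is illustrative):

    // An XHR like this opts out of caching by sending Pragma /
    // Cache-Control request headers with every request.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/scripts/customer.js');
    xhr.setRequestHeader('Pragma', 'no-cache');
    xhr.setRequestHeader('Cache-Control', 'no-cache');
    xhr.send();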
My website has 200k active users daily.
I read an article not too long ago about forcing JavaScript and PHP to cache files. I have never needed to have my files cached before, but now that I am dealing with a massive amount of data being transferred to and from the server, I would like to store some of this data locally on the client side.
I don't know if there are better ways of doing this, but essentially I am considering writing a library using:
HTML5 local storage / the cache manifest, if available,
with a fallback to Java, if available,
with a fallback to Silverlight, if available.
I am very interested in pursuing this, preferably in JavaScript.
I would like to know how to cache files using JavaScript.
Before anyone thinks I am reinventing the wheel:
(example)
I have several JavaScript files which, if updated, the browser will not reload because they are cached. With version control, I can manage when a user needs to reload cached data.
See caching in HTTP. Basically, for every response you should specify the Cache-Control header field, indicating when fresh content will be available. The formal definition of the Cache-Control header field is as follows:
The Cache-Control general-header field is used to specify directives
that MUST be obeyed by all caching mechanisms along the
request/response chain. The directives specify behavior intended to
prevent caches from adversely interfering with the request or
response. These directives typically override the default caching
algorithms. Cache directives are unidirectional in that the presence
of a directive in a request does not imply that the same directive is
to be given in the response.
The field is usually specified along the lines of
Cache-Control: private|public, max-age=<seconds>[, no-cache].
public
Indicates that the response MAY be cached by any cache, even if
it would normally be non-cacheable or cacheable only within a non-
shared cache. (See also Authorization, section 14.8, for additional
details.)
private
Indicates that all or part of the response message
is intended for a single user and MUST NOT be cached by a shared
cache. This allows an origin server to state that the specified parts
of the response are intended for only one user and are not a valid
response for requests by other users. A private (non-shared) cache MAY
cache the response. Note: This usage of the word private only controls
where the response may be cached, and cannot ensure the privacy of the
message content.
no-cache
If the no-cache directive does not specify a field-name, then
a cache MUST NOT use the response to satisfy a subsequent request
without successful revalidation with the origin server. This allows an
origin server to prevent caching even by caches that have been
configured to return stale responses to client requests. If the
no-cache directive does specify one or more field-names, then a cache
MAY use the response to satisfy a subsequent request, subject to any
other restrictions on caching. However, the specified field-name(s)
MUST NOT be sent in the response to a subsequent request without
successful revalidation with the origin server. This allows an origin
server to prevent the re-use of certain header fields in a response,
while still allowing caching of the rest of the response.
For example, Cache-Control: private, max-age=86400, no-cache directs the client to cache a response and reuse it until 86400 seconds (24 hours) have elapsed. However, things may change before that time elapses, so the no-cache directive causes a revalidation each time; it is as if the browser asks on every use, "may I really present your user with the cached content?". Together with the ETag header, this lets you push important changes to your users before previously cached content expires.
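A response carrying that policy would look something like this (values are illustrative):

    HTTP/1.1 200 OK
    Cache-Control: private, max-age=86400, no-cache
    ETag: "33a64df551425fcc"
    Content-Type: text/css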
During revalidation, the ETag the client sends in its request (taken from the previously cached response) is compared with the current one on the server for the same resource. If they are the same, the resource has not changed, so the cached copy is still valid. If they differ, the resource content has changed, and the new content is returned to the user.
Read more about HTTP caching:
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching?hl=en#validating-cached-responses-with-etags
http://www.mobify.com/blog/beginners-guide-to-http-cache-headers/
Meanwhile, note that the use of the Application Cache is mainly applicable if you wish to provide your users with offline content.
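For completeness, a minimal cache manifest sketch (file names are illustrative; the manifest is referenced from the html element's manifest attribute):

    CACHE MANIFEST
    # v1 - change this comment to force clients to re-download
    CACHE:
    /css/main.css
    /js/app.js
    NETWORK:
    *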
In my opinion you would be reinventing the wheel. Instead of trying to build a second cache on top of the browser's built-in cache, you should take advantage of a proxy like CloudFlare to handle caching of static assets for you.
As for the issue of cached files not updating, a common technique to force resources to be re-requested is to add a query string parameter containing the file's last modification time (e.g. /js/script.js?1441538979), which normally forces the browser to re-download the file.
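A minimal sketch of that technique in Node.js (the paths and helper name are illustrative):

    // Append a file's last-modified time (as a Unix timestamp) to its
    // URL, so the query string changes whenever the file does.
    const fs = require('fs');

    function versionedUrl(urlPath, filePath) {
      const mtime = Math.floor(fs.statSync(filePath).mtimeMs / 1000);
      return urlPath + '?' + mtime;
    }

    console.log(versionedUrl('/js/script.js', './public/js/script.js'));
    // e.g. /js/script.js?1441538979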
When you read about cache busting on deployment (when a file has changed), the suggested solutions are usually to add an incrementing query string (file.js?v=123) or to rename the file to an MD5 hash of its contents using a build script (for example, https://www.npmjs.org/package/grunt-cachebuster).
Why not use Last-Modified or ETag to solve it instead? What are the disadvantages?
Sure, you can use Last-Modified and ETag instead. However, these cache mechanisms require a round-trip HTTP request for each resource every single time, which can be pretty wasteful. Instead, you really want the browser to keep a local copy and not bother checking in with the server at all for as long as possible.
Last-Modified and ETag mostly just avoid the bandwidth cost of repeatedly transferring the file over the network.
Expires or Cache-Control headers reduce the effective traffic to zero, saving both time and bandwidth, at the cost of risking the use of outdated data. Uniquely identifying each version of a file helps solve that.
I am developing a web app using the AngularJS framework. I'm currently trying to figure out how to prevent stale caching, and I am doing so by prepending a hash to my filenames.
What I have seen thus far is that most people only do this for image, JavaScript, and CSS files, for instance here:
http://davidtucker.net/articles/automating-with-grunt/#workflowCache
My question is: are there other kinds of files that I should take into consideration?
Don't web browsers cache HTML files as well?
Follow Google's guidelines for Optimizing Caching.
Some key points:
Set caching headers aggressively for all static resources.
For all cacheable resources, we recommend the following settings:
Set Expires to a minimum of one month, and preferably up to one year, in the future. (We prefer Expires over Cache-Control: max-age because it is more widely supported.)
Do not set it to more than one year in the future, as that violates the RFC guidelines.
If you know exactly when a resource is going to change, setting a shorter expiration is okay. But if you think it "might change soon" but don't know when, you should set a long expiration and use URL fingerprinting (described below). Setting caching aggressively does not "pollute" browser caches: as far as we know, all browsers clear their caches according to a Least Recently Used algorithm; we are not aware of any browsers that wait until resources expire before purging them.
Set the Last-Modified date to the last time the resource was changed: if the Last-Modified date is far enough in the past, chances are the browser won't refetch it.
Use fingerprinting to dynamically enable caching: For resources that change occasionally, you can have the browser cache the resource until it changes on the server, at which point the server tells the browser that a new version is available. You accomplish this by embedding a fingerprint of the resource in its URL (i.e. the file path). When the resource changes, so does its fingerprint, and in turn, so does its URL. As soon as the URL changes, the browser is forced to re-fetch the resource. Fingerprinting allows you to set expiry dates long into the future even for resources that change more frequently than that. Of course, this technique requires that all of the pages that reference the resource know about the fingerprinted URL, which may or may not be feasible, depending on how your pages are coded.
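A build-script sketch of that fingerprinting step in Node.js (the MD5-in-filename scheme is one common choice, not the only one):

    // Rename a file to embed an MD5 fingerprint of its contents in its
    // name, so every content change yields a brand-new URL.
    const crypto = require('crypto');
    const fs = require('fs');
    const path = require('path');

    function fingerprint(filePath) {
      const hash = crypto.createHash('md5')
        .update(fs.readFileSync(filePath))
        .digest('hex')
        .slice(0, 8);
      const { dir, name, ext } = path.parse(filePath);
      const newPath = path.join(dir, name + '.' + hash + ext);
      fs.renameSync(filePath, newPath);
      return newPath; // e.g. dist/js/app.a1b2c3d4.js
    }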
Read Google's full article for other points, especially regarding inter-operability.