How to force refresh client js from server side? - javascript

Here is the case:
I have a JS file that monitors web ads. Because of the browser cache, when I update the JS on the server side, the JS on the client side is not refreshed immediately. How can I force the client to refresh the JS as soon as I update it on the server side?
P.S. The add-a-version-number strategy is not useful in my case.

Simple strategy: add a version number as a query string to your JS files, and change the number. This will cause browsers to fetch your JS files again:
<script src="mysource.js?version=123"></script>
Whenever you change your script on the server, change this version number in the HTML too. Or better yet, apply a random number as the version value every time you request this script.
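For instance, a minimal sketch of the random-value variant, injecting the script tag from JavaScript (the file name mysource.js is just the example from above):

// Load the script with a fresh cache-busting value on every page load
var script = document.createElement('script');
script.src = 'mysource.js?version=' + Date.now(); // timestamp defeats the cache
document.head.appendChild(script);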

You can use HTTP's cache-control mechanisms to control the browser's caching.
When serving a copy of your JS file, include an ETag and/or Last-Modified header in the response. Also include a "Cache-Control: must-revalidate" header. This tells the browser that it must check back with the server every time, and it can send an If-None-Match and/or If-Modified-Since header in future requests to ask the server to send the file only if it's changed.
If you'd like to avoid the load of browsers checking with the server every time, and it's OK for the changes to not take effect immediately, you can also include a Date header with the current time and an Expires header set to some point in the future — maybe 12 or 24 hours. That allows the browser to use its cached copy for the specified amount of time before it has to check back with your server again.
HTTP's cache-control features are pretty robust, but there are plenty of nuances, such as controls for intermediate caches (e.g. other systems between your server and the user's browser). You'll want to read about caching in HTTP overall, not just the specific header fields that I've mentioned.
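As a rough illustration of those headers in practice, here is a Node.js sketch (not part of the answer above; the file name and the ETag scheme are assumptions for the example):

// Sketch: serve a JS file with must-revalidate plus an ETag, and answer
// conditional requests with 304 Not Modified while the file is unchanged.
const http = require('http');
const fs = require('fs');
const crypto = require('crypto');

http.createServer((req, res) => {
  const body = fs.readFileSync('./mysource.js'); // assumed file name
  const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';
  res.setHeader('ETag', etag);
  res.setHeader('Cache-Control', 'must-revalidate');
  if (req.headers['if-none-match'] === etag) {
    res.statusCode = 304; // client's copy is still good: send no body
    res.end();
  } else {
    res.setHeader('Content-Type', 'application/javascript');
    res.end(body);
  }
}).listen(8080);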

You can do this by changing the name of the file. Add a version number as a parameter (e.g. filename.js?v=<?= time() ?> in PHP) or just append some random number to the end of the file name.
Actually, I'm not sure whether you can force the client to refresh this type of file directly. But by changing the file name you will force the browser to get the newest version.

Related

Access a cookie that was set on a javascript file, but not on the HTML

I have a script that needs some external information to work with. It fetches this using Ajax requests. So far so good.
However, the script needs some of its data right from the start, so I have been pondering a few options for supplying it with that initial data at page load time:
Simplest: Just have it perform an Ajax request for the data right away. Downside of this is extra latency and more requests than strictly needed.
Ugly: Add a small script fragment at HTML render time that provides the initial data
Bad caching properties: Create the whole JS file dynamically and add the data right then.
Impossible: Something with headers... but unfortunately it seems we can't access them (see e.g. this question). Doing the extra Ajax request is not useful here as in that case we might just as well use option #1.
Something with cookies...
Not tried yet: Create a dynamic 'initial-data.js' script whose sole purpose is to load the initial data. This would at least only send the data when needed, but it would require all users of my script to include two script files instead of one... and it still causes an extra request.
I am trying out the cookie option for transporting the initial data, but so far without any success. What I am trying to do:
When the browser requests the .js file, have the server add a Set-Cookie header with the initial data in it in the response.
In the JS file, read out the cookie.
It doesn't work. It seems I need to set the cookie on the response for the .html instead of the .js for the browser to make it available to the script... That's too bad as it would involve adding the Set-Cookie header to each page, even though it's only needed by that particular piece of JS.
I was actually very happy with the solution I thought I found because it let me send the initial data along with the request for the script only to those pages that actually use the script... Too bad!
Is there any way to do what I'm trying to do using cookies, headers or some similar mechanism?
Do you guys have any tips for this situation?
Background:
I am trying to write a semi-offline application. Semi-offline in that it should continue to work (apart from some functions that just need connectivity) when offline, but is expected to have periods with connectivity regularly. So I'm using local storage and synching with the server when possible.
To be able to have the client generate new items while offline, I am including an ID generator that gets handed ID blocks by the server and consumes them as it generates IDs. The data I was trying to send to the script in a cookie is the initial list of ID blocks plus some settings, and it looks like this:
/suid/suid.json:3:3:dxb,dyb,dzb
(format: url : min : max : blocks)
Where:
url = path to JSON for subsequent Ajax requests
min = minimum amount of ID blocks to keep in local storage
max = maximum amount of ID blocks to keep in local storage
blocks = comma separated list of ID blocks
The ID blocks are encoded as sort-of Base32 strings. I'm using a custom formatting scheme because I want 53-bit IDs to be as short as possible in text form while still being easily human-readable, human-writable and URL-safe.
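For what it's worth, had the cookie been readable from the script, parsing that format would be simple. A hypothetical sketch (the cookie name suidData is invented for the example):

// Read and split the "url:min:max:blocks" value out of document.cookie
var match = document.cookie.match(/(?:^|;\s*)suidData=([^;]*)/);
if (match) {
  var parts = decodeURIComponent(match[1]).split(':');
  var config = {
    url: parts[0],                 // path for subsequent Ajax requests
    min: parseInt(parts[1], 10),   // minimum ID blocks to keep locally
    max: parseInt(parts[2], 10),   // maximum ID blocks to keep locally
    blocks: parts[3].split(',')    // initial ID blocks
  };
}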

Can I tell from javascript whether my page was hard refreshed?

I've given up on this, but I thought I'd post here out of curiosity.
What I call a "hard refresh" is the Ctrl+R or Shift+F5 that you do during development to see your changes.
This causes the browser to add a Cache-Control: max-age=0 header to the request and "child" requests like images and scripts, etc.
If you're doing your job, you'll get a 304 on everything but the resource that's changed. (This assumes the browser sends the cached validators with those requests.)
So far, so good.
The problem is that I'm not loading scripts directly from the page, but through a load.js, and the browsers are inconsistent about whether they include that Cache-Control header on those requests. Chrome doesn't do it at all, and Firefox seems to stop in the middle of a series.
Since I can't access the headers of the current request, there's no way to know whether that header should be included or not.
The result is that when I change a script (other than load.js), a hard refresh does not reliably work, and I have to, e.g., clear the browser cache (which is a bit heavy-handed).
Any thoughts on this?
Unfortunately you cannot detect a hard refresh from JavaScript (there is no access to the headers for the currently loaded page).
However, the server can tell from the request headers if this is a hard refresh, so there's the option of cooperating. For example the server can include a custom <meta> tag in the response or add a special class to <body> and your script will then have access to this information.
Once load.js detects a hard refresh it can then propagate it to the dependent scripts by e.g. attaching a URL parameter to the requests (think "?t=" + timestamp).
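A sketch of the client half of that cooperation (the meta tag name hard-refresh and the loader function are assumptions, not from the answer):

// If the server flagged a hard refresh via a <meta> tag, bust the cache
// on every dependent script that load.js pulls in afterwards.
var hardRefresh = !!document.querySelector('meta[name="hard-refresh"]');
function loadScript(url) {
  var s = document.createElement('script');
  s.src = hardRefresh ? url + '?t=' + Date.now() : url;
  document.head.appendChild(s);
}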
You could try checking localStorage: set a localStorage variable and check for it. If it's there, it's not a hard refresh; otherwise, it is.

Client Side Resource refreshing

Whenever we make changes to JS files, browsers keep serving the previously cached version of those resources, so the changes are not reflected after a new build.
Can any one suggest what is the best way to get this solved?
I wish the new resource to be requested by the browser only after a new build; every other time, the cached resource is just fine, as it helps performance.
Supply a version parameter in the URL. Basically,
<link rel="stylesheet" href="css/style.css?v=${app.version}" />
<script src="js/script.js?v=${app.version}"></script>
wherein ${app.version} is an application-wide variable that returns an integer or decimal value, or maybe just the timestamp of the server's startup time. If the request parameter value changes, the client is forced to send a new request for the resource.
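How ${app.version} is populated depends on your stack; as one hedged illustration, a Node.js server could stamp its startup time into the page like this:

// Sketch: the version changes once per deploy/restart, not on every request,
// so the cache stays effective between builds.
const http = require('http');
const appVersion = Date.now(); // fixed for the lifetime of this server process

http.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html');
  res.end('<script src="js/script.js?v=' + appVersion + '"></script>');
}).listen(8080);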

Prevent Browsers from Caching certain JavaScript files

I have two types of JavaScript files. One contains static code and the other contains dynamic code which changes from session to session.
The static JavaScript file should be cached, whereas the dynamic one should be cached only for that session and then reloaded in the next session. The dynamic JavaScript file is generated once per session, and I would like the client browser to cache it for the remainder of the session.
How do I force the client browser to request a JavaScript file once every session? I know that a common practice is to append a request parameter containing a version number, but manually updating the JavaScript references only works for so many updates; you can't really do that with sessions, since there can be multiple sessions per day.
I don't see what's wrong with placing a random number at the end of the JavaScript url. For example:
http://www.example.com/myjavascript.js?r=1234
It won't necessarily stop it from caching, but if the number is different, the browser will load that JS file again.
Could you append the session id to the JavaScript URL? Assuming you're using JSP, it would look kind of like this:
<script src="/script.js?session=<%= // code to get the session ID %>"></script>
I don't know much about JSP, so I can't help with the specifics, but that should give you a single, unique URL for the session.
Just appending a session ID or a random number to the file name would solve your user-experience problem, but it also clogs up all the HTTP caches with useless entries. It should be a lot easier just to set the HTTP 1.1 Cache-Control header in your response to "no-cache". If you're using Java Servlets, it's done this way:
response.setHeader("Cache-Control", "no-cache");
(If some of your traffic will come from legacy browsers, http://onjava.com/pub/a/onjava/excerpt/jebp_3/index2.html gives some other header settings to really make sure nothing gets cached.)

Disable browser cache

I implemented a REST service, and I'm using a web page as the client.
My page has some JavaScript functions that perform the same HTTP GET request to the REST server several times and process the replies.
My problem is that the browser caches the first reply and doesn't actually send the following requests.
Is there some way to force the browser to execute all the requests without caching?
I'm using Internet Explorer 8.0.
Thanks
Not sure if it can help you, but sometimes I add a random parameter to the URL of my request in order to avoid the response being cached.
So instead of having:
http://my-server:8080/myApp/foo?bar=baz
I will use:
http://my-server:8080/myApp/foo?bar=baz&random=123456789
Of course, the value of the random parameter is different for every request. You can use the current time in milliseconds for that.
Not really. This is a known issue with IE; the classic solution is to append a random parameter to the end of the query string for every request. Most JS libraries do this natively if you ask them to (jQuery's cache: false AJAX option, for instance).
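For instance, with jQuery (the URL is just the example one from above):

// jQuery appends a throwaway "_=<timestamp>" parameter when cache is false,
// so each GET gets a unique URL and bypasses the cache.
$.ajax({
  url: 'http://my-server:8080/myApp/foo?bar=baz',
  cache: false,
  success: function (reply) {
    // process the reply here
  }
});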
Well, of course you don't actually want to disable the browser cache entirely; correct caching is a key part of REST, and the fact that HTTP (if properly followed by both client and server) allows a high degree of caching while also giving fine control over cache expiry and revalidation is one of its key advantages.
There is though an issue, as you have spotted, with subsequent GETs to the same URI from the same document (as in DOM document lifetime, reload the page and you'll get another go at that XMLHttpRequest request). Pretty much IE seems to treat it as it would a request for more than one copy of the same image or other related resource in a web page; it uses the cached version even if the entity isn't cacheable.
Firefox has the opposite problem, and will send a subsequent request even when caching information says that it shouldn't!
We could add a random or time-stamped bogus parameter at the end of a query string for each request. However, this is a bit like screaming "THIS IS SPARTA!" and kicking our hard-won download into a deep pit that no Health & Safety inspector considered putting a safety rail around. We obviously don't want to repeat a full unconditional request when we don't need to.
However, this behaviour has a time component. If we delay the subsequent request by a second, then IE will re-request when appropriate while Firefox will honour the max-age and expires headers and not re-request when needless.
Hence, if two requests could be within a second of each other (either we know they are called from the same function, or there's the chance of two events triggering it in close succession) using setTimeout to delay the second request by a second after the first has completed will make it use the cache correctly, rather than in the two different sorts of incorrect behaviour.
Of course, a second's delay is a second's delay. This could be a big deal or not, depending primarily on the size of the downloaded entity.
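A sketch of that timing workaround, using XMLHttpRequest directly (the URL and callback shape are placeholders):

// Fire the second GET a second after the first completes, so IE revalidates
// when appropriate and Firefox honours max-age/expires instead of re-requesting.
function getStatus(url, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) onDone(xhr.responseText);
  };
  xhr.open('GET', url);
  xhr.send();
}

getStatus('/myApp/foo?bar=baz', function (first) {
  setTimeout(function () {
    getStatus('/myApp/foo?bar=baz', function (second) {
      // both replies now use the cache correctly
    });
  }, 1000); // the one-second gap described above
});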
Another possibility is that something that changes so rapidly shouldn't be modelled as GETting the state of a resource at all, but as POSTing a request for a current status to a resource. This does smell heavily of abusing REST and POSTing what should really be a GET though.
Which can mean that on balance the THIS IS SPARTA approach of appending random stuff to query strings is the way to go. It depends, really.
