Client-side resource refreshing - JavaScript

Whenever we make changes to JS files, browsers keep serving the previously cached version of the client resources. Because of this, the changes are not reflected after a new build.
Can anyone suggest the best way to solve this?
I want the browser to request the new resource only after a new build; at any other time the cached resource is fine, since it helps performance.

Supply a version parameter in the URL. Basically,
<link rel="stylesheet" href="css/style.css?v=${app.version}" />
<script src="js/script.js?v=${app.version}"></script>
wherein ${app.version} is an application-wide variable that returns an integer, a decimal value, or perhaps just the timestamp of the server's startup time. If the request parameter value changes, the client is forced to send a new request for the resource.
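For illustration, here is a minimal sketch of the idea in Node/Express (the setup and names are illustrative, not part of the original answer), deriving the version from the server's startup time:
const express = require('express');
const app = express();

// Captured once at startup, so it changes on every restart/deploy.
const APP_VERSION = Date.now();

app.get('/', (req, res) => {
  // Interpolate the version into every page render as the cache-buster.
  res.send(`
    <link rel="stylesheet" href="css/style.css?v=${APP_VERSION}" />
    <script src="js/script.js?v=${APP_VERSION}"></script>
  `);
});

app.listen(8080);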

Related

How to force refresh client js from server side?

Here is the case:
I have a JS file that monitors web ads. Because of the browser cache, when I update the JS on the server side, the JS on the client side is not refreshed immediately. How can I force the client JS to refresh as soon as I update it on the server?
P.S. The version-number strategy is not useful in my case.
Simple strategy - add a version number as a query string to your js files, and change the number. This will cause the browsers to fetch your js files again -
<script src="mysource.js?version=123"></script>
Whenever you change your script on the server, change this version number in the html too. Or better yet, apply a random number as the version value every time you request this script.
You can use HTTP's cache-control mechanisms to control the browser's caching.
When serving a copy of your JS file, include an ETag and/or Last-Modified header in the response. Also include a "Cache-Control: no-cache" header; despite its name, this lets the browser store the file but tells it to check back with the server before every use. (The "must-revalidate" directive is weaker: it only forces revalidation once the cached copy has gone stale.) The browser can then send an If-None-Match and/or If-Modified-Since header in future requests to ask the server to send the file only if it has changed.
If you'd like to avoid the load of browsers checking with the server every time, and it's OK for the changes to not take effect immediately, you can also include a Date header with the current time and an Expires header set to some point in the future — maybe 12 or 24 hours. That allows the browser to use its cached copy for the specified amount of time before it has to check back with your server again.
HTTP's cache-control features are pretty robust, but there are plenty of nuances, such as controls for intermediate caches (e.g. other systems between your server and the user's browser). You'll want to read about caching in HTTP overall, not just the specific header fields that I've mentioned.
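As a rough sketch of that revalidation flow using Node's built-in modules (the file name and port are placeholders):
const http = require('http');
const fs = require('fs');
const crypto = require('crypto');

http.createServer((req, res) => {
  const body = fs.readFileSync('./script.js'); // placeholder file
  // Derive a validator from the file's current contents.
  const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

  if (req.headers['if-none-match'] === etag) {
    res.writeHead(304); // unchanged; the browser reuses its cached copy
    res.end();
    return;
  }

  res.writeHead(200, {
    'Content-Type': 'application/javascript',
    'ETag': etag,
    'Cache-Control': 'no-cache' // cache it, but revalidate before each use
  });
  res.end(body);
}).listen(8080);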
You can do this by changing the name of the file. Add a version number as a parameter (e.g. filename.js?v=<?php echo time(); ?> in PHP) or just append some random digits to the end of the file name.
Actually I'm not sure whether you can force the client to refresh this type of file, but changing the file name will force the browser to fetch the newest version.

What is the purpose of the "&rnd=" parameter in HTTP requests?

Why do some web applications use the HTTP GET parameter rnd? What is its purpose? What problems does it solve?
This could be to make sure the page/image/whatever isn't taken from the user's cache. If the link is different every time then the browser will get it from the server rather than from the cache, ensuring it's the latest version.
It could also be to track people's progress through the site. Best explained with a little story:
A user visits example.com. All the links are given the same random number (let's say 4).
The user opens a link in a new window/tab, and the link is page2.php?rnd=4. All the links in this page are given the random number 7.
The user can click the link to page3.php from the original tab or the new one, and the analytics software on the server can tell which one by whether it has rnd=4 or rnd=7.
All we can do is suggest possibilities though. There's no one standard reason to put rnd= in a URL, and we can't know the website designer's motives without seeing the server software.
Internet Explorer and other browsers will read an image URL, download the image, and store it in a cache.
If your application updates the image regularly and you don't want your users to see a stale cached copy, the URL needs to be unique each time.
Adding a random string ensures the URL is unique, so the image is downloaded afresh on every request.
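For instance (a tiny illustrative sketch; the element id and file name are made up):
var img = document.getElementById('ad-banner'); // hypothetical element
// A fresh random suffix makes the URL unique, so the browser re-downloads.
img.src = 'banner.png?rnd=' + Math.random();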
It's almost always for cache-busting.
As others have suggested, this kind of behaviour is usually used to avoid caching issues when you are calling a page that returns dynamic data.
For example, say you have a page that returns some current user information, such as "mysite.com/CurrentUserData". The first call to this page returns the user data as expected, but depending on timing and caching settings, a second call may return the same data even though the underlying data has been updated.
The main reason for caching is, of course, to optimise the speed of frequent requests. But where that is not wanted, adding a random value as a query-string parameter is a widely used solution.
There are, however, other ways around the issue. For example, if you are making an Ajax request with JavaScript/jQuery, you can set cache to false in your call:
$.ajax({url: 'page.html', cache: false});
You can also change it for all of the page's calls with:
$.ajaxSetup({cache: false});
In an ASP.NET MVC application, you can even disable caching on controller action methods with an attribute, like so:
[OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
public ActionResult NonCacheableData()
{
    return View();
}
(thanks to a quick copy and paste from here)
I dare say there are also settings in IIS you could apply to get the same effect, though I have not gone that far with this yet.

Time to first byte with javascript?

Is there any modern browser that, via JavaScript, exposes the time to first byte (TTFB) and/or time to last byte (TTLB) of an HTTP request, without resorting to any plugin?
What I would like is a JavaScript snippet that can access these values and post them back to the server for performance-monitoring purposes.
Clarification:
I am not looking for any JS timers or developer tools. What I am wondering and hoping is whether any browsers measure load times and expose those values via JavaScript.
What you want is the W3C's PerformanceTiming interface. Browser support is good (see this survey from Sep 2011). Like you speculated in response to Shadow Wizard's answer, these times are captured by the browser and exposed to javascript in the window object. You can find them in window.performance.timing. The endpoint of your TTFB interval will be window.performance.timing.responseStart (defined as "the time immediately after the user agent receives the first byte of the response from the server, or from relevant application caches or from local resources"). There are some options for the starting point, depending on whether you're interested in time to unload the previous document, or time to resolve DNS, so you probably want to read the documentation and decide which one is right for your application.
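As a minimal sketch, here is how you could compute TTFB from those timestamps and report it back (the /beacon endpoint is hypothetical; substitute your own collector):
window.addEventListener('load', function () {
  var t = window.performance && window.performance.timing;
  if (!t) return; // older browsers don't expose the interface

  // Time from starting the navigation to receiving the first response byte.
  var ttfb = t.responseStart - t.navigationStart;

  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/beacon', true);
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify({ ttfb: ttfb }));
});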
I fear it's just not possible.
JavaScript becomes "active" only after part of the request has been sent from server, accepted by the browser and parsed.
What you ask is kind of like asking "Can I measure the weight of a cake after eating it?" - you need to weigh the cake first and only then eat it.
You can see the response time in the Chrome Developer Tools.
It's impossible to get the true TTFB in JS, as the page gets a JS context only after the first byte has been received. The closest you can get is with something like the following:
<script type="text/javascript">var startTime = (new Date()).getTime()</script>
very early in your <head> tag. Then, depending on whether you want to measure when the HTML finishes or when everything finishes downloading, you can either put a similar tag near the bottom of your HTML page (and subtract the two values) and then do an XHR back to the server (or set a cookie, which you can retrieve server-side on the next page request), or listen for the onload event and do the same.
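Put together, the pattern might look like this (the /timing endpoint is invented for the example):
<script type="text/javascript">var startTime = (new Date()).getTime()</script>
<!-- ... the rest of the page ... -->
<script type="text/javascript">
window.onload = function () {
  // Approximate time from first script execution until everything loaded.
  var loadTime = (new Date()).getTime() - startTime;
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/timing?ms=' + loadTime, true); // hypothetical endpoint
  xhr.send();
};
</script>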

Prevent Browsers from Caching certain JavaScript files

I have two types of JavaScript files. One contains static code and the other contains dynamic code which changes from session to session.
The static JavaScript file should be cached, whereas the dynamic one should be cached only for a session and then reloaded in the next session. The dynamic JavaScript file is generated once per session, and I would like the client browser to cache it for the remainder of that session.
How do I force the client browser to request the JavaScript file once per session? I know a common practice is to append a request parameter containing a version number, but manually updating JavaScript references only works for so many updates; you can't really do that with sessions, since there can be multiple sessions per day.
I don't see what's wrong with placing a random number at the end of the JavaScript url. For example:
http://www.example.com/myjavascript.js?r=1234
This won't necessarily stop it from caching, but if the number is different, the browser will load that JS file again.
Could you append the session id to the JavaScript URL? Assuming you're using JSP, it would look kind of like this:
<script src="/script.js?session=<%= // code to get the session ID %>"></script>
I don't know much about JSP, so I can't help with the specifics, but that should give you a single, unique URL for the session.
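For what it's worth, JSP exposes the current session as the implicit session object, so one concrete form (an untested sketch filling in that placeholder) would be:
<script src="/script.js?session=<%= session.getId() %>"></script>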
Just appending a session id or a random number to the file name would solve your user experience problem, but it also clogs up all the HTTP caches with useless entries. It should be a lot easier just to set the HTTP 1.1 Cache-Control header in your response to "no-cache". If you're using Java Servlets, it's done this way:
response.setHeader("Cache-Control", "no-cache");
(If some of your traffic will come from legacy browsers, http://onjava.com/pub/a/onjava/excerpt/jebp_3/index2.html gives some other header settings to really make sure nothing gets cached.)

Disable browser cache

I implemented a REST service and I'm using a web page as the client.
My page has some JavaScript functions that perform the same HTTP GET request to the REST server several times and process the replies.
My problem is that the browser caches the first reply and doesn't actually send the following requests.
Is there some way to force the browser to execute all the requests without caching?
I'm using Internet Explorer 8.0
Thanks
Not sure if it can help you, but sometimes I add a random parameter to the URL of my request in order to avoid the response being cached.
So instead of having:
http://my-server:8080/myApp/foo?bar=baz
I will use:
http://my-server:8080/myApp/foo?bar=baz&random=123456789
Of course, the value of the random parameter is different for every request; you can use the current time in milliseconds for that.
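In client-side code, that can be as simple as the following sketch:
// Append a unique value so no two request URLs look alike to the cache.
function noCacheUrl(url) {
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + 'random=' + new Date().getTime();
}
// noCacheUrl('http://my-server:8080/myApp/foo?bar=baz')
// -> 'http://my-server:8080/myApp/foo?bar=baz&random=1700000000000'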
Not really. This is a known issue with IE, the classic solution is to append a random parameter at the end of the query string for every request. Most JS libraries do this natively if you ask them to (jQuery's cache:false AJAX option, for instance)
Well, of course you don't actually want to disable the browser cache entirely; correct caching is a key part of REST. The fact that HTTP (if properly followed by both client and server) allows a high degree of caching while also giving fine control over cache expiry and revalidation is one of its key advantages.
There is though an issue, as you have spotted, with subsequent GETs to the same URI from the same document (as in DOM document lifetime, reload the page and you'll get another go at that XMLHttpRequest request). Pretty much IE seems to treat it as it would a request for more than one copy of the same image or other related resource in a web page; it uses the cached version even if the entity isn't cacheable.
Firefox has the opposite problem, and will send a subsequent request even when caching information says that it shouldn't!
We could add a random or time-stamped bogus parameter at the end of a query string for each request. However, this is a bit like screaming "THIS IS SPARTA!" and kicking our hard-won download into a deep pit that no Health & Safety inspector considered putting a safety rail around. We obviously don't want to repeat a full unconditional request when we don't need to.
However, this behaviour has a time component. If we delay the subsequent request by a second, IE will re-request when appropriate, while Firefox will honour the max-age and Expires headers and not re-request needlessly.
Hence, if two requests could come within a second of each other (either we know they are called from the same function, or there's a chance of two events triggering them in close succession), using setTimeout to delay the second request until a second after the first has completed will make both browsers use the cache correctly, rather than exhibiting their two different sorts of incorrect behaviour.
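A sketch of that delay (the URL and helper name are illustrative only):
function fetchStatus(callback) {
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) callback(xhr.responseText);
  };
  xhr.open('GET', '/CurrentUserData', true); // illustrative endpoint
  xhr.send();
}

fetchStatus(function (first) {
  // Wait a second before the follow-up request, so IE revalidates
  // properly and Firefox honours max-age/Expires instead of re-fetching.
  setTimeout(function () {
    fetchStatus(function (second) { /* ... */ });
  }, 1000);
});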
Of course, a second's delay is a second's delay. This could be a big deal or not, depending primarily on the size of the downloaded entity.
Another possibility is that something that changes so rapidly shouldn't be modelled as GETting the state of a resource at all, but as POSTing a request for a current status to a resource. This does smell heavily of abusing REST and POSTing what should really be a GET though.
On balance, that can mean the THIS IS SPARTA approach of appending random stuff to query strings is the way to go. It depends, really.
