JavaScript (Dojo) file caching - client side

I want to implement caching of the JavaScript files (Dojo Toolkit) that are not going to change. Currently my home page takes about 15-17 seconds to load, and on refresh it takes 5-6 seconds. Is there a way to reuse the cached files when the page is loaded in a new browser session? I do not want the browser to make requests to the server when the application home page loads in a new browser session. Also, is there an option to set the expiry to a certain number of days? I tried with a META tag and it isn't helping much; either I'm doing something wrong or I'm not implementing it correctly.
I have implemented the Dojo compression toolkit and see a slight improvement in performance, but nothing significant.

Usually your browser should do that already anyway. Please check that caching is really turned on, and not only for the session.
However, with a custom Dojo build driven by your app profile, defining layers lets the build put all your code together and bundle it with dojo.js (the individual files remain available independently). The result is just one HTTP request for all of the code (a larger file, but fetched only once). The speed gained from reducing HTTP requests is far more than a cache could ever provide.
For details refer to the tutorial: http://dojotoolkit.org/documentation/tutorials/1.8/build/
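For orientation, a custom build profile with a single layer might look roughly like the sketch below. This is only an illustration; the package and module names ("app", "app/main") are placeholders, not taken from your application.
// app.profile.js - minimal sketch of a Dojo 1.8 build profile with one layer
var profile = {
    basePath: "..",
    releaseDir: "release",
    action: "release",
    layerOptimize: "closure",          // minify each generated layer
    packages: [
        { name: "dojo",  location: "dojo" },
        { name: "dijit", location: "dijit" },
        { name: "app",   location: "app" }    // placeholder for your own package
    ],
    layers: {
        // Bundle dojo.js together with everything "app/main" pulls in,
        // so the page makes one script request instead of hundreds.
        "dojo/dojo": {
            include: [ "dojo/dojo", "app/main" ],
            customBase: true,
            boot: true
        }
    }
};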

Caching is done by the browser, whose behaviour is influenced by the Cache-Control HTTP header (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). Normally the browser will still ask whether a newer version of a resource is available, so you get one short request per resource anyway.
From my experience, even with very aggressive caching, where the browser is instructed not to ask the server for a new version of a resource for a given period of time, checking the browser cache for such an immense number of resources is a costly process.
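For reference, "aggressive caching" here means a response header along these lines (a hypothetical value, not taken from the question's setup), telling the browser not to revalidate for a week:
Cache-Control: public, max-age=604800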
The real solution is a custom build. You wrote something about "dojo compression", so I assume you're acquainted with Dojo build profiles. They are rather poorly documented, but once you get the build working you should end up with one or more big layer files in the following format:
require({cache:{
    "name/of/dojo/resource": function() { ... here comes the content of the JS file ... },
    ...
}});
It's a multi-definition file that inlines the definitions of all modules within a single layer, so loading such a file loads many modules in a single request. But you must load the layer.
In order to get layers to run, I had to add an extra require to each of my entry JS files (those referenced in the headers of the HTML files):
require(["dojo/domReady!"], function(){
// load the layers
require(['dojo/dojo-qm',/*'qmrzsv/qm'*/], function(){
// here is your normal dojo code, all modules will be loaded from the memory cache
require(["dojo", "dojo/on", "dojo/dom-attr", "dojo/dom-class"...],
function(dojo,on,domAttr,domClass...){
....
})
})
})
It has significantly improved the performance. The bottleneck was loading a large number of small JavaScript modules, not parsing them. Loading and parsing all modules at once is much cheaper than loading hundreds of them on demand.

Related

How do I display my project correctly on a server?

I uploaded my project to my server, but nobody can see the changes I made locally (I have index.html plus other JS and PHP files). I had the same problem with another project that uses index.php, but I solved it by adding <?php time();?> at the end of the script reference. Is there a similar solution for JavaScript?
This is what I did:
<script src="assets/js/funciones.js?<?php time();?>"></script>
The problem is that you're changing static files, but not their filenames.
By default apache/nginx/etc serve static content with headers that say "cache this for a very long time" because it's static content, why would you not?
Tacking random trash onto the URL, as you're doing with your JS, is a kludge that permanently breaks all caching and ensures that users will repeatedly download the exact same static file every time they request a page. You can make the trash less random to break the cache less frequently, but it's still an inefficient kludge. (Albeit a popular one, to my immense annoyance.)
Ideally, for resource bundles like JS and CSS, you create a new resource bundle file every time you change it, e.g. somefile-v1234.js or somefile-20211007.js, and update the reference in your HTML source. This has the side benefit of ensuring that the versions of your resource bundles always match.
The same goes for any other static file: images, CSV, etc.
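As a rough illustration of that versioned-bundle idea (not part of the original answer; the filenames assets/js/bundle.js and index.html and the 8-character hash are assumptions), a tiny Node build step could look like this:
// build.js - sketch only; file names are assumed, adapt to your own build
const fs = require('fs');
const crypto = require('crypto');

// Hash the bundle's content so the filename changes only when the content does.
const source = fs.readFileSync('assets/js/bundle.js');
const hash = crypto.createHash('md5').update(source).digest('hex').slice(0, 8);

// Write the versioned copy, e.g. assets/js/bundle-3f2a9c1b.js
fs.writeFileSync(`assets/js/bundle-${hash}.js`, source);

// Point the HTML at the new filename; the old URLs can stay cached forever.
const html = fs.readFileSync('index.html', 'utf8')
    .replace(/bundle(-[0-9a-f]{8})?\.js/, `bundle-${hash}.js`);
fs.writeFileSync('index.html', html);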
The trouble you're having now is that you've updated some HTML pages and the only way to break the cache is to have the user perform an action, like hitting CTRL+F5 to force a refresh.
There are a couple of ways around this:
Change the Apache/Nginx/etc settings to set shorter expiries for static file cache headers. You may be able to target specific files like index.html, but YMMV.
Serve the content with PHP. Anything served via a PHP script will not have any cache headers set by default, as the assumption is that the output of a script is dynamic. You can also issue the caching headers yourself in PHP to control what gets cached for how long.
Lastly, you cannot solve this problem retroactively. If a user has a cached version of the HTML page that has not yet reached its expiration, the user MUST take action to break that cache. There's nothing that can be done server side because the valid cache tells the client that it doesn't have to ask the server.
Once your application becomes popular enough to warrant putting a CDN in front of it, this problem gets much worse: now there's a cache in the middle that the user doesn't control, and it's potentially an expensive problem, because some CDN providers charge a fee for forcing CDN cache invalidations.

Does sw-precache activation of new service worker guarantees cache busting?

I am using sw-precache along with sw-toolbox to allow offline browsing of cached pages of an Angular app.
The app is served through a node express server.
One problem we ran into is that index.html sometimes doesn't seem to be updated in the cache, although other assets have been updated on activation of the new service worker.
This leaves users with an outdated index.html that tries to load a no-longer-existing versioned asset, in this case /scripts/a387fbeb.modules.js.
I am not entirely sure what's happening, because the outdated and the correctly updated index.html appear to be cached under the same hash on different browsers.
On one browser, the outdated (problematic) index.html
(cached with hash 2cdd5371d1201f857054a716570c1564) includes:
<script src="scripts/a387fbeb.modules.js"></script>
in its content. (This file no longer exists in the cache or on the remote.)
On another browser, the updated (good) index.html
(cached with the same hash 2cdd5371d1201f857054a716570c1564) includes:
<script src="scripts/cec2b711.modules.js"></script>
These two entries have the same hash, although the content returned to the two browsers is different!
What should I make of this? Does this mean that sw-precache doesn't guarantee atomic cache busting when a new SW activates? How can one protect against this?
If these help, this is the generated service-worker.js file from sw-precache.
Note: I realize I can use a networkFirst strategy (at least for index.html) to avoid this. But I'd still like to understand what's happening and figure out a way to use a cacheFirst strategy to get the most out of performance.
Note 2: I saw in other related questions that one can change the name of the cache to force-bust all of the old cache. But this seems to defeat the idea of sw-precache busting only the updated content. Is this the way to go?
Note 3: Even if I hard-reload the browser where the website is broken, the site works, because the hard reload skips the service worker cache - but the cache itself is still wrong. The service worker doesn't seem to re-activate; my guess is that this specific SW has already been activated but failed to bust the cache correctly. Subsequent non-hard-refresh visits still see the broken index.html.
(The answers here are specific to the sw-precache library. The details don't apply to service workers in general, but the concepts about cache maintenance may still apply to a wider audience.)
If the content of index.html is dynamically generated by a server and depends on other resources that are either inlined or referenced via <script> or <link> tags, then you need to specify those dependencies via the dynamicUrlToDependencies option. Here's an example from the app-shell-demo that ships as part of the library:
dynamicUrlToDependencies: {
    '/shell': [
        ...glob.sync(`${BUILD_DIR}/rev/js/**/*.js`),
        ...glob.sync(`${BUILD_DIR}/rev/styles/all*.css`),
        `${SRC_DIR}/views/index.handlebars`
    ]
}
(/shell is used there instead of /index.html, since that's the URL used for accessing the cached App Shell.)
This configuration tells sw-precache that any time any of the local files that match those patterns change, the cache entry for the dynamic page should be updated.
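For context, that option is passed to sw-precache when the service worker file is generated. Here is a rough sketch using the library's Node API; the globs and output path are placeholders, not taken from the question's project:
// generate-sw.js - sketch only; output path and globs are placeholder values
const swPrecache = require('sw-precache');
const glob = require('glob');

swPrecache.write('service-worker.js', {
    staticFileGlobs: ['build/**/*.{js,css,png}'],
    dynamicUrlToDependencies: {
        '/shell': [
            ...glob.sync('build/rev/js/**/*.js'),
            ...glob.sync('build/rev/styles/all*.css'),
            'src/views/index.handlebars'
        ]
    }
}, function(err) {
    if (err) throw err;
    console.log('service-worker.js written');
});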
If your index.html isn't being generated dynamically by the server, but instead is updated during build time using something like this approach, then it's important to make sure that the step in your build process that runs sw-precache happens after all the other modifications and replacements have taken place. This means using something like run-sequence to ensure that the service worker generation isn't run in parallel with other tasks.
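For example, with gulp 3 and run-sequence the ordering could be enforced roughly as follows; the task names 'replace-asset-refs' and 'generate-service-worker' are hypothetical stand-ins for whatever your build actually calls those steps:
// gulpfile.js excerpt - sketch only; task names are hypothetical
var gulp = require('gulp');
var runSequence = require('run-sequence');

gulp.task('build', function(callback) {
    // Rewrite the asset references first, and only then generate the service
    // worker, so sw-precache hashes the final contents of index.html.
    runSequence('replace-asset-refs', 'generate-service-worker', callback);
});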
If the above information doesn't help you, feel free to file a bug with more details, including your site's URL.

importing javascript library once

I'm building a website which imports a javascript library (located inside <head>):
<script type="text/javascript" src="#routes.Assets.at("nicedit/nicEdit.js")"></script>
which means that this line is present on every page of my website, so it is requested on every page load.
I wanted to know whether modern browsers download the library once and then cache it, or whether the library is re-downloaded on every page load.
Thanks
Under normal circumstances most browsers will cache a javascript file if it is repeatedly requested from the same url; however, my definition of normal circumstances is wrapped up in how servers are usually set up. The real result depends on what cache headers the server is sending, as well as whether or not the URL changes (which is not clear from your question).
There are lots of questions on StackOverflow about caching JavaScript. This one includes something of a compendium.
Please try configuring http.cacheControl
http.cacheControl
HTTP response headers control for static files: sets the default max-age in seconds, telling the user's browser how long it should cache the page. This is only read in prod mode; in dev mode the cache is disabled. For example, to send no-cache:
http.cacheControl=0
Default: 3600 - sets cache expiry to one hour.
Source of the information.

Send head before body to load CSS and JS asap

I wonder if anyone has found a way to send the head tag mid-render, so that CSS and JavaScript are loaded before the page render has finished. Our page takes about 523 ms to render, and resources aren't loaded until the page is received. I've done a lot of PHP, where it is possible to flush the buffer before the end of the script. I tried adding a Response.Flush() at the end of the master page's Page_Load, but the page layout is horribly broken afterwards. I've seen a lot of people use an UpdatePanel to send the content via AJAX afterwards, but I don't quite know what impact that would have on SEO.
If I don't find a solution, I guess I'll have to go the reverse proxy route and find a way to invalidate the proxy cache when the page content changes.
Do not place the Flush in code-behind, but on your HTML page, like this:
</head>
<%Response.Flush();%>
<body>
This can cause something like a flickering effect on the page, so you can try moving the flush even a little lower in the page.
See also the "Flush the Buffer Early" rule on Yahoo's performance tips page:
http://developer.yahoo.com/performance/rules.html
Cache static content
Additionally, you can enable client-side caching for static content such as CSS and JavaScript. This page describes the options for all IIS versions:
http://www.iis.net/ConfigReference/system.webServer/staticContent/clientCache
Follow-up
One more thing I suggest, after having seen your pages, is to combine all CSS into one file and all JavaScript into another, and to minify them.
I use this minifier, http://www.asp.net/ajaxlibrary/Download.ashx, with very good results and real-time minification.
Consider using a content delivery network (CDN) to host your images, CSS and JS files. Browsers have a limit of four to eight connections per domain, so once those are used up the browser has to wait for connections to be freed.
By hosting some files on the CDN you get another set of connections to use concurrently, allowing everything to load faster.
Also consider enabling GZIP on your server if you haven't already. This compresses files on the fly, resulting in smaller transfers.
You could use jQuery to execute your js as soon as it is loaded.
$.fn.ready(function(){
    // Your code here
})
Or you could just take a standalone ready function - a $(document).ready equivalent without jQuery.
You could also do a fade-in or show once the document has been loaded; just set the body to display:none initially.
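A small sketch of that idea, assuming jQuery is already on the page as in the snippet above:
// Start with <body style="display:none"> in the markup, then reveal it once
// the DOM is ready so users never see a half-styled page.
$(document).ready(function(){
    $('body').fadeIn(200);   // or .show() for an instant reveal
});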

are there any negative implications of sourcing a javascript file that does not actually exist?

If you put script src="/path/to/nonexistent/file.js" in an HTML file and open it in a browser, and no dependencies or resources anywhere else in the HTML file expect that file or the code therein to actually exist, is there anything inherently bad-practice about doing this?
Yes, it is an odd question. The rationale is that the developer is dealing with a CMS that allows custom (self-contained) JavaScript files to be provided in certain circumstances. The problem is that the CMS is not very flexible when it comes to creating conditional includes for JavaScript, so it is easier to just reference the self-contained JS files regardless of whether they actually exist at the specified path.
Since no errors are displayed to the user, should this practice be considered a viable option?
Well, the major drawback is performance, since the browser will try (hard) to download the file and your server will look for it. The browser may end up downloading the 404 page instead, thus slowing down the page load.
If the script is referenced in the <head> tag (not recommended for starters), it will slow down the initial page-render time somewhat too.
If instead of quickly returning a 404, your site just accepts the connection and then never responds, this can cause the page to take an indefinite amount of time to load, and in some cases, lock up the entire user interface.
(At least that was the case with one revision of Firefox; I hope they've fixed it since I saw that happen ~2 years ago.*)
You should at least put the script tags as low in the page order as you can afford to, to remedy this problem.
Your best bet by far is to have one consistent no-op URL that is used as a fill-in for all the "doesn't exist" JavaScript files, returning a 0-byte response with HTTP headers telling the UA to cache it till the cows come home. That should negate most of your server<->client load penalties beyond the first hit (and should hardly hurt people even on ye olde dial-up).
*Lesson learned: don't put script src references in <head>, especially for third-party scripts hosted outside your machine, because then you get the joy of clients being able to reach your website but finding the page inoperable, all because a bit of advertising JS was inaccessible due to some internet weirdness. Even if they're a reputable-ish third party.
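The no-op URL suggestion above isn't tied to any particular server stack; as one hedged illustration, a Node/Express version of that cache-forever endpoint might look like this (the /js/noop.js URL is a made-up example):
// noop-server.js - sketch only; assumes Node/Express and a made-up /js/noop.js URL
const express = require('express');
const app = express();

// Every "might not exist" script reference points at this one URL instead.
app.get('/js/noop.js', function(req, res) {
    res.set('Content-Type', 'application/javascript');
    res.set('Cache-Control', 'public, max-age=31536000'); // cache for ~1 year
    res.send('');                                          // 0-byte body
});

app.listen(3000);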
If your web server is configured to do work on a 404 error ("you might be looking for this", etc) then you're also causing unnecessary load on the server.
You should ask yourself why you were too lazy to test this yourself. :)
I tested 1000 randomized JavaScript filenames and it took several nanoseconds to load, so no, it doesn't make a difference. Example:
<script src="/7701992spolsky.js"></script>
This was on my local machine, however, so for remote servers it should take roughly N * roundTripTime for the browser to figure it out, where N is the number of bad scripts.
If, however, you reference random domain names that don't exist, like
<script src="http://www.randomsite7701992.com/spolsky.js"></script>
then it will take a long f-in time.
If you choose to implement it this way, you could configure the web server so that if the referenced JS file is not found, instead of a 404 it returns a redirect (301) to an empty/default JS file.
If you are using ASP.NET you can look into using custom handlers (ASHX files).
Here's an example:
public class JavascriptHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/javascript";

        // Some code to check whether the javascript file exists
        string js = "";
        if (JavascriptExists())
        {
            js = GetJavascript();
        }
        context.Response.Write(js);
    }

    // Required by IHttpHandler
    public bool IsReusable
    {
        get { return false; }
    }
}
Then in your HTML header you could declare a script tag pointing to the custom handler:
<script src="/js/javascripthandler.ashx"></script>
