Pre-load external JavaScript library for later pages - javascript

I use an external JavaScript library that is hosted on a CDN.
Some users report that when they first visit the pages that require this library, it takes a little while both to load it and to render the content (the library renders LaTeX).
I wonder if I can optimize their experience by (pre-)loading this library on some other pages before they arrive at the pages that actually use it. This way, I hope the browser will cache the library early, so the rendering may happen faster later.
P.S. the library in question is MathJax.

Since you're interested in MathJax specifically, simply loading it won't buy you that much performance. (Though MathJax will be cached and if you use the MathJax CDN your users might already have cached it while visiting some other site.)
There are two reasons why you probably won't see much of a performance increase.
First, MathJax loads most of its components dynamically. For example, the MathJax webfonts are only loaded if no compatible font is installed on the user's system, and they are broken up into several pieces that are fetched only when MathJax actually encounters a character that needs them. The same goes for other components. This also means that on a page without any mathematics, loading MathJax will not help you cache much.
That being said, the actual download of MathJax components can be optimized a bit by using one of the combined configuration files, whose names end in -full; see the MathJax documentation.
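For example, a combined configuration can be requested in a single script tag (a sketch; check the MathJax documentation for the current CDN URL and configuration names):
<script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML-full"></script>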
However, it's the typesetting that's usually the real performance hit. MathJax output depends on your content as well as the user's combination of screen, browser, and OS. Both calculating the best fit and inserting and reflowing the content run into some fundamental limits (IE8 in standards mode does a particularly bad job).
PS: If you have hidden or dynamic content, you might want to look at the configuration option skipStartupTypeset: true; see the MathJax documentation.
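A minimal sketch of that option in the MathJax v2 configuration style, typesetting manually once the dynamic content is ready:
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({ skipStartupTypeset: true }); // don't typeset automatically at startup
</script>
<script type="text/javascript">
  // later, once the hidden/dynamic content is in place:
  MathJax.Hub.Queue(["Typeset", MathJax.Hub]);
</script>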
[Disclaimer: I'm part of MathJax]

If you add a script tag loading the library at the end of your page (just before the closing </body> tag), everything else will load before it, and the user can start using the page while the library is still loading.
EDIT: This also lets you "pre-load" the library on another page without slowing that page down.
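For example, on a page that does not itself use the library (the CDN URL here is illustrative):
  <!-- everything above renders first; this just warms the browser cache for later pages -->
  <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML"></script>
</body>
</html>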

Related

Multiple files on CDN vs. one file locally

My website uses about 10 third party javascript libraries like jQuery, jQuery UI, prefixfree, a few jQuery plugins and also my own javascript code. Currently I pull the external libraries from CDNs like Google CDN and cloudflare. I was wondering what is a better approach:
Pulling the external libraries from CDNs (like I do today).
Combining all the files to a single js and a single css file and storing them locally.
Any opinions are welcome as long as they are explained.
Thanks :)
The value of a CDN lies in the likelihood of the user having already visited another site calling that same file from that CDN, and becomes increasingly valuable depending on the size of the file. The likelihood of this being the case increases with the ubiquity of the file being requested and the popularity of the CDN.
With this in mind, pulling a relatively large and popular file from a popular CDN makes absolute sense. jQuery, and, to a lesser degree, jQuery UI, fit this bill.
Meanwhile, concatenating files makes sense for smaller files which are not likely to change much — your commonly used plugins will fit this bill, but your core application-specific code probably doesn't: it might change from week to week, and if you're concatenating it with all your other files, you'd have to force the user to download everything all over again.
The HTML5 boilerplate does a pretty good job of providing a generic solution for this:
Modernizr is loaded locally in the head: it's very small and differs quite a lot from instance to instance, so it doesn't make sense to source it from a CDN, and it won't hurt the user too much to load it from your server. It's put in the head because CSS may be making use of it, so you want its effects to be known before the body renders. Everything else goes at the bottom, to stop your heavier scripts blocking rendering while they load and execute.
jQuery comes from the CDN, since almost everyone uses it and it's quite heavy. The user will probably already have it cached before they visit your site, in which case they'll load it from cache instantly.
All your smaller 3rd-party dependencies and code snippets that aren't likely to change much get concatenated into a plugins.js file loaded from your own server. This will get cached with a distant expiry header the first time the user visits and loaded from cache on subsequent visits.
Your core code goes in main.js, with a closer expiry header to account for the fact that your application logic may change from week to week or month to month. This way, when you've fixed a bug or introduced new functionality and the user visits a fortnight from now, main.js can be loaded fresh while everything above is brought in from cache. (A sketch of this layout follows below.)
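Putting that layout together, a minimal sketch (the file names and jQuery version are illustrative, not prescriptive):
<head>
  <link rel="stylesheet" href="css/main.css">
  <script src="js/vendor/modernizr.min.js"></script> <!-- small and page-specific, loaded early so CSS can use its feature classes -->
</head>
<body>
  <!-- page content -->
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
  <script>window.jQuery || document.write('<script src="js/vendor/jquery.min.js"><\/script>')</script> <!-- local fallback if the CDN fails -->
  <script src="js/plugins.js"></script> <!-- concatenated, rarely-changing third-party code -->
  <script src="js/main.js"></script> <!-- application code, given a shorter cache lifetime -->
</body>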
For your other major libraries, you should look at them individually and ask yourself whether they should follow jQuery's lead, be loaded individually from your own server, or get concatenated. An example of how you might come to those decisions:
Angular is incredibly popular, and very large. Get it from the CDN.
Twitter Bootstrap is on a similar level of popularity, but you've got a relatively slim selection of its components, and if the user doesn't already have it, it might not be worth getting them to download the full thing. Having said that, the way it fits into the rest of your code is pretty intrinsic, and you're not likely to change it without rebuilding the whole site, so you may want to keep it hosted locally but keep its files separate from your main plugins.js. This way you can always update your plugins.js with Bootstrap extensions without forcing the user to download all of Bootstrap core.
But there's no imperative — your mileage may vary.

Javascript and CSS, inside HTML vs. in external file

Is there any difference between including external javascript and CSS files and including all javascript and CSS files (even jQuery core!) inside html file, between <style>...</style> and <script>...</script> tags except caching? (I want to do something with that html file locally, so cache doesn't matter)
The difference is that your browser doesn't make those extra requests, and as you have pointed out, cannot cache them separately from the page.
From a functional standpoint, no, there is no difference once the resources have been loaded.
Most of the time we see external paths for CSS and JavaScript because the files are kept on a CDN or some sort of cache server in the cloud these days.
A good example is including jQuery from Google:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
Here Google is hosting it for us, and we don't need to worry about maintenance etc.
So if you want to keep the files locally, you will have to maintain them yourself.
There isn't any difference once the code is loaded. Of course it won't be cached, as you pointed out, but since you just need to do something locally, that really isn't important.
One thing to remember is that you'd have to make sure dependency chains aren't broken, since the browser would be loading the scripts simultaneously.
Edit: Of course your main page would appear to take longer to load, since it has to download all that extra content before the <body> starts to load. You can avert that by moving your JS to the bottom (near the footer) instead.
When your CSS isn't loaded, your page looks rough at first and then settles down once the styles are applied. With everything inline, you have to declare your CSS at the top of the page and wait for the browser to process all of it before it starts rendering, on every request; with an external file, you let the first page load slowly, and further requests load quicker since the style is now cached.
And similarly with your script code: you need to wait for the code to come down with the page and then wait for the execution you have bound in $(document).ready(). I hope you realize that $(document).ready will now be delayed, since there is no caching.
There is a huge performance issue with this: your load and DOMContentLoaded events will fire much later.
load fires when the browser has parsed the last line of your code, so the browser will show its loading indicator until all your resources are loaded and parsed. Browsers download multiple external resources in parallel; you lose this performance boost by including JS and CSS code directly in the HTML.
No difference on the client side, except that you'll make fewer requests, thus loading faster. On the other hand, you won't be caching, but you also won't be able to share the style and the JavaScript among several pages.
If you're positive that CSS and JavaScript are only going to be used in this page, inline is fine IMO.
If you use the script and CSS only on one page, including them in the HTML is the fastest way, as the browser needs to make only one request. If you use them on more pages, you should make them external so the browser can cache them and only has to download them once. Using jQuery from Google, for example, as @hatSoft mentioned, is even better, as the browser is very likely to have it in cache already from other sites that reference it when your user visits for the first time. In real life you rarely use scripts and CSS on one page only, so making them external is most often best for performance, and definitely for maintenance. Personally I always keep HTML, JS and CSS strictly separate!

What are the pros and cons of adding <script> and <link> elements using JavaScript?

Recently I saw some HTML with only a single <script> element in its <head>...
<head>
<title>Example</title>
<script src="script.js" type="text/javascript"></script>
<link href="plain.css" type="text/css" rel="stylesheet" />
</head>
This script.js then adds any other necessary <script> and <link> elements to the document using document.write(...) (or it could use document.createElement(...), etc.):
document.write("<link href=\"javascript-enabled.css\" type=\"text/css\" rel=\"styleshet\" />");
document.write("<script src=\"https://ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js\" type=\"text/javascript\"></script>");
document.write("<script src=\"https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.10/jquery-ui.min.js\" type=\"text/javascript\"></script>");
document.write("<link href=\"http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.0/themes/trontastic/jquery-ui.css\" type=\"text/css\" rel=\"stylesheet\" />")
document.write("<script src=\"validation.js\" type=\"text/css\"></script>")
Note that there is a plain.css CSS file in the document <head> and script.js just adds any and all CSS and JavaScript which would be used by a JS-enabled user agent.
What are some of the pros and cons of this technique?
The blocking nature of document.write
document.write will pause everything that the browser is working on for the page (including parsing). It is highly recommended to avoid it because of this blocking behavior: the browser has no way of knowing what you're going to stuff into the HTML text stream at that point, or whether the write will totally trash everything in the DOM tree, so it has to stop until you're finished.
Essentially, loading scripts this way forces the browser to stop parsing HTML. If your script is inline, the browser will also execute those scripts before it goes on. Therefore, as a side note, it is always recommended that you defer loading scripts until after your page is parsed and you've shown a reasonable UI to the user.
If your scripts are loaded from separate files in the "src" attribute, then the scripts may not be consistently executed across all browsers.
Losing browser speed optimizations and predictability
This way, you lose a lot of the performance optimizations made by modern browsers. Also, when your scripts execute may be unpredictable.
For example, some browsers will execute the scripts right after you "write" them. In such cases, you lose parallel downloads of scripts (because the browser doesn't see the second script tag until it has downloaded and executed the first), as well as parallel downloads of stylesheets and other resources (many browsers can download stylesheets, scripts, and other resources all at the same time).
Other browsers defer executing the scripts until after parsing has finished.
Because of document.write's blocking behavior, the browser cannot continue to parse the HTML while document.write is going on, nor, in certain cases, while the scripts being written are executing, so your page shows up much slower.
In other words, your site has just become as slow as it was loading on a decades-old browser with no optimizations.
Why would somebody do it like this?
The reason you may want to use something like this is usually for maintainability. For instance, you may have a huge site with thousands of pages, each loading the same set of scripts and stylesheets. However, when you add a script file, you don't want to edit thousands of HTML files to add the script tags. This is particularly troublesome when loading JavaScript libraries (e.g. Dojo or jQuery) -- you have to change each HTML page when you upgrade to the next version.
The problem is that JavaScript doesn't have an #include or #import statement for you to include other files.
Some solutions
The solution to this is probably not by injecting scripts via document.write, but by:
Using @import directives in stylesheets
Using a server scripting language (e.g. PHP) to manage your "master page" and to generate all other pages (however, if you can't use this and must maintain many HTML pages individually, this is not a solution)
Avoid document.write, but load the JavaScript files via XHR, then eval() them -- this may have security concerns though
Use a JavaScript Library (e.g. Dojo) that has module-loading features so that you can keep a master JS file which loads other files. You won't be able to avoid having to update the version numbers of the library file though...
One major disadvantage is browser incompatibility. Not all browsers correctly fetch and incorporate the resources into the DOM, so it's risky to use this approach. This is more true of stylesheets than scripts.
Another issue is maintainability. Concatenating and writing strings to add DOM elements on the client side can become a maintenance nightmare. It's better to use DOM methods such as createElement to create elements semantically.
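For instance, a minimal sketch of DOM-based script injection (simplified; it ignores the onreadystatechange handling old IE needed):
function loadScript(src, onLoad) {
  var s = document.createElement('script');
  s.type = 'text/javascript';
  s.src = src;
  s.onload = onLoad; // fires once the file has downloaded and executed
  document.getElementsByTagName('head')[0].appendChild(s);
}
loadScript('https://ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js', function () {
  // jQuery is available from this point on
});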
One obvious advantage is that it makes conditional use of resources much easier. You can have logic that determines which resources to load, thereby reducing the bandwidth consumption and overall processing time for the page. I would use a library call such as jQuery $.getScript() to load scripts versus document.write. The advantage being that such an approach is cleaner and also allows you to have code executed when the request is completed or fails.
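For example, with jQuery 1.5+, where $.getScript returns a promise:
$.getScript('validation.js')
  .done(function () { /* the script has loaded and executed */ })
  .fail(function (jqxhr, settings, exception) { /* react to the failure */ });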
Well, I may as well throw my hat in the ring on this one...
If you examine Google's Closure Library, base.js, you will see that document.write is used in their writeScriptTag_() function. This is an essential part of the dependency management system which Closure provides, and is an enormous advantage when creating a complicated, multi-file, library-based JavaScript application: it lets the file/code prerequisites determine loading order. We currently use this technique and have had little trouble with it. TBH, we have not had a single issue with browser compatibility, and we test regularly on IE 6/7/8, FF3/2, Safari 4/5 and the latest Chrome.
The only disadvantage that we have had so far is that it can be challenging to track down issues caused by loading a resource twice, or failing to load one at all. Since the act of loading resources is programmatic, it is subject to programmatic errors, and unlike adding the tags directly to the HTML, it can be hard to see what the exact loading order is. However, this issue can be largely overcome by using a library with some form of dependency management system such as closure, or dojo.
EDIT: I have made a few comments to this effect already, but I thought it best to summarize them in my answer:
There are some problems with dojo.require() and jQuery.getScript() (both of which ultimately perform an Ajax request and eval):
Loading via Ajax means no cross-domain scripts, i.e. no loading JavaScript that is not from your own site. This will be a problem if you want to include https://ajax.googleapis.com as listed in the description.
Eval'd scripts will not show up in a JavaScript debugger's list of page scripts, making debugging quite challenging. Recent releases of Firebug will show you eval'd code, but filenames are lost, which makes setting breakpoints tedious. AFAIK, the WebKit JavaScript console and IE8 developer tools do not show eval'd scripts.
It has the advantage that you don't need to repeat the script references in each HTML file. The disadvantage is that the browser must fetch and execute the main javascript file before it may load the others.
I guess one advantage I could think of would be if you use these scripts on multiple pages, you only have to remember to include one script, and it saves some space.
Google's PageSpeed documentation strongly discourages this technique, because it makes things slower. Apart from the sequential loading of your script.js before all the others, there's another catch:
Modern browsers use speculative parsers to more efficiently discover external resources [...] Thus, using JavaScript's document.write() to fetch external resources makes it impossible for the speculative parser to discover those resources, which can delay the download, parsing, and rendering of those resources.
It's also possible that this was written on the recommendation of an SEO firm, to make the head element shorter so that unique content sits closer to the top of the document, creating a higher text-to-HTML ratio. All in all, though, it doesn't sound like a very good way of achieving that; if reducing head size were deemed absolutely necessary, a better approach would probably be to condense the JavaScript into a single .js file and the CSS into a single .css file, even though that makes maintenance more time-consuming.
A great disadvantage is that adding scripts into the head pauses processing of the document until those scripts are completely downloaded, parsed, and executed (because the browser thinks they may use document.write). This hurts responsiveness.
Nowadays it is recommended that you put your script tags right before </body>. Of course this is not possible 100% of the time, but if you are using unobtrusive JavaScript (as you should), all scripts can be repositioned at the end of the document.
HTML5 introduced the async attribute, which tells the browser it may download and execute the script without blocking the rest of the page. This is the behaviour of script-inserted scripts in many browsers, but not in all of them.
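For example (the file name is illustrative):
<!-- downloads without blocking the parser; executes as soon as it is available -->
<script src="tracking.js" async></script>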
I advise avoiding document.write at all costs. Even without it, this approach results in one extra request to the server. (We like to minimize the number of requests, for example with CSS sprites.)
And yes, as others mentioned earlier, if scripting is disabled your page will be shown without CSS (which makes it potentially unusable).
If JavaScript is disabled - <script> and <link> elements won't be added at all.
If you place JavaScript init functions at the bottom of your page (which is good practice) and link CSS with JavaScript, it may cause some delay before the CSS loads (a broken layout will be visible for a short time).

Javascript and website loading time optimization

I know that best practice for including javascript is having all code in a separate .js file and allowing browsers to cache that file.
But when we begin to use many jquery plugins which have their own .js, and our functions depend on them, wouldn't it be better to load dynamically only the js function and the required .js for the current page?
Wouldn't it be faster, on a page where I only need one function, to embed that function in the HTML with a script tag instead of loading the whole JS file with all the plugins?
In other words, aren't there any cases in which there are better practices than keeping our whole javascript code in a separate .js?
It would seem at first glance that this would be a good idea, but in fact it would actually make matters worse. For example, if one page needs plugins 1, 2 and 3, a file would be built server-side with those plugins in it. Now the browser goes to another page that needs plugins 2 and 4. This causes another file to be built; this new file is different from the first one, but it also contains the code for plugin 2, so the same code ends up being downloaded twice, bypassing the version the browser already has.
You are best off leaving the caching to the browser, rather than trying to second-guess it. However, there are options to improve things.
Top of the list is using a CDN. If the plugins you are using are fairly popular ones, then the chances are that they are being hosted with a CDN. If you link to the CDN-hosted plugins, then any visitors who are hitting your site for the first time and who have also happened to have hit another site that's also using the same plugins from the same CDN, the plugins will already be cached.
There are, of course, other things you can do to speed your JavaScript up. Best practice includes placing all your script include tags as close to the bottom of the document as possible, so as not to hold up page rendering. You should also look into lazy initialization: for anything that needs significant setup to work, attach a minimalist event handler that, when triggered, removes itself and sets up the real event handler.
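A minimal sketch of lazy initialization (the element id and setup function are illustrative, not from the original answer):
var field = document.getElementById('comment-box');
field.onfocus = function () {
  field.onfocus = null;        // the placeholder handler removes itself on first use
  initRichTextEditor(field);   // hypothetical expensive setup, deferred until needed
};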
One problem with having separate js files is that it will cause more HTTP requests.
Yahoo have a good best practices guide on speeding up your site: http://developer.yahoo.com/performance/rules.html
I believe Google's Closure Library has something for combining JavaScript files and dependencies, but I haven't looked too much into it yet, so don't quote me on it: http://code.google.com/closure/library/docs/calcdeps.html
Also there is a tool called jingo http://code.google.com/p/jingo/ but again, I haven't used it yet.
I keep separate files for each plug-in and page during development, but during production I merge-and-minify all my JavaScript files into a single JS file loaded uniformly throughout the site. My main layout file in my web framework (Sinatra) uses the deployment mode to automatically either generate script tags for all JS files (in order, based on a manifest file) or perform the minification and include a single querystring-timestamped script inclusion.
Every page is given a body tag with a unique id, e.g. <body id="contact">.
For those scripts that need to be specific to a particular page, I either modify the selectors to be prefixed by the body:
$('body#contact form#contact').submit(...);
or (more typically) I have the onload handlers for that page bail early:
jQuery(function($){
  if (!$('body#contact').length) return;
  // Do things specific to the contact page here.
});
Yes, including code (or even a plug-in) that may only be needed by one page of the site is inefficient if the user never visits that page. On the other hand, after the initial load the entire site's JS is ready to roll from the cache.
Network latency is the main problem. You can get a very responsive page if you reduce the HTTP calls to one.
That means all the JS and CSS are bundled into the HTML page. And if you can forget IE6/7, you can inline images as data:image/png;base64 URIs.
When we release a new version of our web app, a shell script minifies and bundles everything into a single HTML page.
Then there is a second call for the data, and we render all the HTML client-side using a JS template library: PURE.
Ensure the page is cached and gzipped. There is probably a size limit to consider; we try to stay under 400 kB unzipped and load secondary resources later, when needed.
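The data-URI trick mentioned above looks like this; the payload shown is a standard 1x1 transparent PNG, inlined so it costs no extra request:
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==" alt="">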
You can also try a service like http://www.blaze.io. It automatically performs most front-end optimization tactics and also includes a CDN.
They're currently in private beta, but it's worth submitting your website to.
I would recommend grouping common bits of functionality into individual JavaScript module files and loading them only on the pages where they are used, with RequireJS / head.js or a similar dependency-management tool.
For example, if you use lightbox popups, contact forms, tracking, and image sliders in different parts of the website, separate these into 4 modules and load each only where needed. That way you optimize caching and make sure your site carries no unnecessary flab. (A sketch follows below.)
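A sketch of that approach with RequireJS (the module names and their APIs are illustrative):
// on the contact page, pull in only the modules that page actually uses
require(['lightbox', 'contact-form'], function (lightbox, contactForm) {
  lightbox.init();
  contactForm.attach('#contact-form');
});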
As a general rule it's always best to have fewer files rather than more. It's also important to work on the timing of each JS file, as some are needed BEFORE the page completes loading and some AFTER (i.e., when the user clicks something).
See a lot more tips in the article: 25 Techniques for Javascript Performance Optimization.
Including a section on managing Javascript file dependencies.
Cheers, hope this is useful.

Integrate Javascript resources in HTML: what's the best way?

I'm working on a project which uses many scripts (Google Maps, jQuery, jQuery plugins, jQuery UI...). Some pages have almost 350 kB of Javascript.
We are concerned about performance and I'm asking myself what is the best way to integrate those heavy scripts.
We have 2 solutions:
Include all scripts in the head, even if they are not utilized on the page.
Include some common scripts in the head, and include page specific ones when they are needed.
I would like to have your advice.
Thanks.
For the best performance I would create a single static minified JavaScript file (using a tool like YUI Compressor) and reference it from the head section. For good tips on website performance, check out Google's website optimization page.
Note that the performance penalty of retrieving all your JavaScript files is only paid on the first page view, as the browser will use the cached version of the file on subsequent pages.
For even better responsiveness you would split your JavaScript into two files: load the first, containing all the JavaScript needed when the page loads, up front; then, after the page has loaded, load the second file in the background.
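A sketch of that two-file approach (the bundle name is illustrative):
window.onload = function () {
  var s = document.createElement('script');
  s.src = '/js/secondary.js'; // non-critical code, fetched after the page has loaded
  document.getElementsByTagName('head')[0].appendChild(s);
};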
If you're interested, I have an open-source AJAX JavaScript framework that simplifies, compresses, and concatenates all your HTML, CSS and JavaScript (including 3rd-party libraries) into a single JavaScript file.
If you think it's likely that some users will never need the Google Maps JavaScript for example, just include that in the relevant pages. Otherwise, put them all in the head - they'll be cached soon enough (and those from Google's CDN may be cached already).
Scripts in the <head> tag do (I think) stop the page from rendering further until they’ve finished downloading, so you might want to move them down to the end of the <body> tag.
It won’t actually make anything load faster, but it should make your page content appear more quickly in some situations, so it should feel faster.
I’d also query whether you’ve really got 350 KB of JavaScript coming down the pipe. Surely the libraries are gzipped? jQuery 1.4 is 19 KB when minified and gzipped.
1) I would recommend gathering all the common and most important scripts, like jQuery etc., into one file to reduce the number of requests for these files, and compressing it; I'd recommend Google Closure Compiler for that.
2) Do the loading on a page the user has to open at the beginning, like the login page, and put the scripts at the end of the page to let all the content render first; this is recommended by most performance tools, like YSlow and Page Speed.
3) Don't write scripts inline in your page; try to keep everything in files, to make compression and obfuscation easier later on.
4) Put the scripts and all static files, like images and CSS, on another domain to take that load off your server.
