My website uses about ten third-party JavaScript libraries (jQuery, jQuery UI, Prefixfree, a few jQuery plugins) as well as my own JavaScript code. Currently I pull the external libraries from CDNs like Google's CDN and Cloudflare. I was wondering which is the better approach:
Pulling the external libraries from CDNs (like I do today).
Combining all the files to a single js and a single css file and storing them locally.
Any opinions are welcome as long as they are explained.
Thanks :)
The value of a CDN lies in the likelihood that the user has already visited another site that pulls the same file from that same CDN, and that value grows with the size of the file. The likelihood of such a cache hit, in turn, increases with the ubiquity of the file being requested and the popularity of the CDN.
With this in mind, pulling a relatively large and popular file from a popular CDN makes absolute sense. jQuery, and, to a lesser degree, jQuery UI, fit this bill.
Meanwhile, concatenating files makes sense for smaller files which are not likely to change much — your commonly used plugins will fit this bill, but your core application-specific code probably doesn't: it might change from week to week, and if you're concatenating it with all your other files, you'd have to force the user to download everything all over again.
The HTML5 Boilerplate does a pretty good job of providing a generic solution for this (a sketch of the resulting layout follows the list below):
Modernizr is loaded locally in the head: it's very small and differs quite a lot from instance to instance, so it doesn't make sense to source it from a CDN, and it won't hurt the user too much to load it from your server. It's put in the head because CSS may be making use of it, so you want its effects to be known before the body renders.
Everything else goes at the bottom, to stop your heavier scripts blocking rendering while they load and execute.
jQuery comes from the CDN, since almost everyone uses it and it's quite heavy. The user will probably already have it cached before they visit your site, in which case they'll load it from cache instantly.
All your smaller third-party dependencies and code snippets that aren't likely to change much get concatenated into a plugins.js file loaded from your own server. This will get cached with a distant expiry header the first time the user visits and loaded from cache on subsequent visits.
Your core code goes in main.js, with a closer expiry header to account for the fact that your application logic may change from week to week or month to month. This way, when you've fixed a bug or introduced new functionality and the user visits a fortnight from now, it can be loaded fresh while everything above is brought in from cache.
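A minimal sketch of that layout (file names and the jQuery version are assumptions, not something the Boilerplate mandates):

<!doctype html>
<html>
<head>
  <link rel="stylesheet" href="/css/main.css">
  <!-- Small, instance-specific, and needed before the body renders -->
  <script src="/js/vendor/modernizr.custom.js"></script>
</head>
<body>
  <!-- ...page content... -->

  <!-- Big, ubiquitous library: likely already in the user's cache via the CDN -->
  <script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
  <!-- Small, rarely-changing third-party bits, concatenated, far-future expiry -->
  <script src="/js/plugins.js"></script>
  <!-- Your own application code, shorter expiry so fixes reach users quickly -->
  <script src="/js/main.js"></script>
</body>
</html>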
For your other major libraries, you should look at them individually and ask yourself whether they should follow jQuery's lead, be loaded individually from your own server, or get concatenated. An example of how you might come to those decisions:
Angular is incredibly popular, and very large. Get it from the CDN.
Twitter Bootstrap is on a similar level of popularity, but you've got a relatively slim selection of its components, and if the user doesn't already have it, it might not be worth getting them to download the full thing. Having said that, the way it fits into the rest of your code is pretty intrinsic, and you're not likely to be changing it without rebuilding the whole site, so you may want to keep it hosted locally but keep its files separate from your main plugins.js. That way you can always update your plugins.js with Bootstrap extensions without forcing the user to download all of Bootstrap core.
But there's no imperative — your mileage may vary.
Related
I'm used to working with Java in which (as we know) each object is defined in its own file (generally speaking). I like this. I think it makes code easier to work with and manage.
I'm beginning to work with javascript and I'm finding myself wanting to use separate files for different scripts I'm using on a single page. I'm currently limiting myself to only a couple .js files because I'm afraid that if I use more than this I will be inconvenienced in the future by something I'm currently failing to foresee. Perhaps circular references?
In short, is it bad practice to break my scripts up into multiple files?
There are lots of correct answers here, depending on the size of your application and whom you're delivering it to (by "whom" I mean intended devices, et cetera), and how much work you can do server-side to ensure that you're targeting the correct devices (this is still a long way from 100% viable for most non-enterprise mortals).
When building your application, "classes" can reside in their own files, happily.
When splitting an application across files, or when dealing with classes with constructors that assume too much (like instantiating other classes), circular-references or dead-end references ARE a large concern.
There are multiple patterns to deal with this, but the best one, of course is to make your app with DI/IoC in mind, so that circular-references don't happen.
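As a rough illustration (class names are hypothetical), designing with dependency injection in mind means an object receives its collaborators instead of instantiating them itself, which is where circular construction chains tend to come from:

// Hypothetical sketch: Cart does not instantiate PriceService itself...
function Cart(priceService) {
  this.priceService = priceService;   // ...it receives it (dependency injection)
}
Cart.prototype.total = function (items) {
  var sum = 0;
  for (var i = 0; i < items.length; i++) {
    sum += this.priceService.priceOf(items[i]);
  }
  return sum;
};

function PriceService(prices) {
  this.prices = prices;
}
PriceService.prototype.priceOf = function (item) {
  return this.prices[item] || 0;
};

// Wiring happens in one place, so the dependency graph stays acyclic and visible.
var cart = new Cart(new PriceService({ apple: 2, pear: 3 }));
console.log(cart.total(['apple', 'pear'])); // 5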
You can also look into require.js or other dependency-loaders. How intricate you need to get is a function of how large your application is, and how private you would like everything to be.
When serving your application, the baseline for serving JS is to concatenate all of the scripts you need (in the correct order, if you're going to instantiate stuff which assumes other stuff exists), and serve them as one file at the bottom of the page.
But that's baseline.
Other methods might include "lazy/deferred" loading.
Load all of the stuff that you need to get the page working up-front.
Meanwhile, if you have applets or widgets which don't need 100% of their functionality on page-load, and in fact, they require user-interaction, or require a time-delay before doing anything, then make loading the scripts for those widgets a deferred event. Load a script for a tabbed widget at the point where the user hits mousedown on the tab. Now you've only loaded the scripts that you need, and only when needed, and nobody will really notice the tiny lag in downloading.
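A minimal sketch of that deferred pattern (the element id and script path are made up):

// The tab widget's code is fetched only on the user's first interaction.
var tab = document.getElementById('profile-tab'); // hypothetical element

function loadTabScript() {
  tab.removeEventListener('mousedown', loadTabScript); // only ever load it once
  var s = document.createElement('script');
  s.async = true;
  s.src = '/js/widgets/tabs.js'; // hypothetical path to the tab widget's code
  document.body.appendChild(s);
}

tab.addEventListener('mousedown', loadTabScript);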
Compare this to people trying to stuff 40,000 line applications in one file.
Only one HTTP request, and only one download, but the parsing/compiling time now becomes a noticeable fraction of a second.
Of course, lazy-loading is not an excuse for leaving every class in its own file.
At that point, you should be packing them together into modules, and serving the file which will run that whole widget/applet/whatever (unless there are other logical places, where functionality isn't needed until later, and it's hidden behind further interactions).
You could also put the loading of these modules on a timer.
Load the baseline application stuff up-front (again at the bottom of the page, in one file), and then set a timeout for a half-second or so, and load other JS files.
You're now not getting in the way of the page's operation, or of the user's ability to move around. This, of course is the most important part.
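A hedged sketch of the timer approach (the file name is made up):

// Once the baseline bundle is running, pull non-critical modules in off the critical path.
setTimeout(function () {
  var s = document.createElement('script');
  s.async = true;
  s.src = '/js/secondary-modules.js'; // hypothetical bundle of deferred widgets
  document.body.appendChild(s);
}, 500); // roughly the "half-second or so" suggested above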
Update from 2020: this answer is very old by internet standards and is far from the full picture today, but still sees occasional votes so I feel the need to provide some hints on what has changed since it was posted. Good support for async script loading, HTTP/2's server push capabilities, and general browser optimisations to the loading process over the years, have all had an impact on how breaking up Javascript into multiple files affects loading performance.
For those just starting out with Javascript, my advice remains the same (use a bundler / minifier and trust it to do the right thing by default), but for anybody finding this question who has more experience, I'd invite them to investigate the new capabilities brought with async loading and server push.
Original answer from 2013-ish:
Because of download times, you should always try to make your scripts a single, big file. HOWEVER, if you use a minifier (which you should), it can combine multiple source files into one for you. So you can keep working on multiple files, then minify them into a single file for distribution.
The main exception to this is public libraries such as jQuery, which you should always load from public CDNs (more likely the user has already loaded them, so doesn't need to load them again). If you do use a public CDN, always have a fallback for loading from your own server if that fails.
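A common way to do that fallback (this sketch assumes a specific jQuery version and local path; adjust to your own) is to test for the global after the CDN script tag and document.write a local copy if it's missing:

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script>
  // If the CDN failed, window.jQuery won't exist; fall back to the local copy.
  window.jQuery || document.write('<script src="/js/vendor/jquery-1.10.2.min.js"><\/script>');
</script>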
As noted in the comments, the true story is a little more complex:
Scripts can be loaded synchronously (<script src="blah"></script>) or asynchronously (s=document.createElement('script');s.async=true;...). Synchronous scripts block loading other resources until they have loaded. So for example:
<script src="a.js"></script>
<script src="b.js"></script>
will request a.js, wait for it to load, then load b.js. In this case, it's clearly better to combine a.js with b.js and have them load in one fell swoop.
Similarly, if a.js has code to load b.js, you will have the same situation no matter whether they're asynchronous or not.
But if you load them both at once and asynchronously then, depending on the state of the client's connection to the server and a whole bunch of considerations which can only be truly determined by profiling, it can be faster:
(function(d){
  // Grab an existing script element to use as an insertion point.
  var s=d.getElementsByTagName('script')[0],f=d.createElement('script');
  f.type='text/javascript';
  f.async=true;            // don't block parsing while this downloads
  f.src='a.js';
  s.parentNode.insertBefore(f,s);
  // Repeat for the second script; both now download in parallel.
  f=d.createElement('script');
  f.type='text/javascript';
  f.async=true;
  f.src='b.js';
  s.parentNode.insertBefore(f,s);
})(document);
It's much more complicated, but will load both a.js and b.js without blocking each other or anything else. Eventually the async attribute will be supported properly, and you'll be able to do this as easily as loading synchronously. Eventually.
There are two concerns here: a) ease of development b) client-side performance while downloading JS assets
As far as development is concerned, modularity is never a bad thing; there are also JavaScript module-loading frameworks (like RequireJS, which implements the AMD format) you can use to help you manage your modules and their dependencies.
However, to address the second point, it is better to combine all your Javascript into a single file and minify it so that the client doesn't spend too much time downloading all your resources. There are tools (requireJS) that let you do this as well (i.e., combine all your dependencies into a single file).
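As one concrete option, the RequireJS optimizer (r.js) can trace a module's dependency tree and emit a single combined, minified file from a small build config (file names here are assumptions; the keys follow the documented r.js format):

// build.js -- run with: node r.js -o build.js
({
  baseUrl: "js",        // where your AMD modules live
  name: "main",         // entry module whose dependencies get traced
  out: "main-built.js"  // single combined, minified output file
})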
It depends on the protocol you are serving over. If you are using HTTP/2, I suggest splitting the JS into multiple files; if you are still on HTTP/1.x, I advise serving a single minified JS file.
Here is a sample of a website using HTTP/1.x and one using HTTP/2.
Thanks, hope it helps.
It does not really matter much. If you use the same JavaScript across multiple pages, it is certainly good to have one shared file to fetch it from, so you only need to update the script in one place.
In my project each page has a bunch of dependent Javascript and Css. Whilst developing I just dumped this code right into the page but now I'm looking to clean it up...
It appears that the general approach out there is to package all the JavaScript/CSS for an application into two big files that get minified.
This approach has the benefit that it reduces bandwidth since all the front-end code gets pulled in just once from the server... however, I'm concerned I will be increasing the memory footprint of the application by defining a whole ton of functions for each page that I don't actually need - which is why I had them on a per-page basis to begin with.
Is that something anyone else cares about, or is there some way to manage this issue?
Yes, I have thought of doing conditional function creation since I need to run code conditionally for each page anyway, though that starts to get a bit hackish in my view.
Also, is there much cost to defining a whole ton of CSS that is never used?
Serving the javascript/CSS in one big hit for the application, allows the browser to cache all it needs for all your pages. If the standard use case for your site is that users will stay and navigate around for a while then this is a good option to use.
If, however, you wish your landing page to load quickly, since there is a chance that the user will navigate away, consider only serving the CSS/javascript required for this page.
In terms of a performance overhead of a large CSS file - there will be none that is noticeable. All modern browsers are highly optimised for applying styles.
As for your JavaScript: try not to use conditional function creation. Conditional namespace creation is acceptable (and required), but your functions should be declared in only one place.
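A rough sketch of that distinction (MyApp and the element id are hypothetical):

// Conditional namespace creation: fine, and often necessary.
var MyApp = window.MyApp || {};
MyApp.contact = MyApp.contact || {};

// Declare the function once, unconditionally...
MyApp.contact.initForm = function () {
  // ...form wiring goes here...
};

// ...and decide at runtime whether to call it, rather than
// conditionally defining different versions of the same function.
if (document.getElementById('contact-form')) {
  MyApp.contact.initForm();
}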
The biggest thing you can do for bandwidth is make sure your server is compressing output. Any static document type should be compressed (html, js, css, etc.).
For instance the jQuery Core goes from approx. 90KB to 30KB only because of the compressed output the server is sending to browsers.
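If your server isn't already compressing, enabling it is usually a one-liner; as a hedged sketch only (the answer doesn't name a stack, so Node with Express and the compression middleware is just an assumed example):

// Hypothetical Node/Express setup: gzip every compressible response, including static JS/CSS.
var express = require('express');
var compression = require('compression');
var app = express();
app.use(compression());            // negotiates gzip/deflate with the browser
app.use(express.static('public')); // serve your js/css/html from ./public
app.listen(3000);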
Once you take compression into account, your custom JS includes would have to be mammoth before you really need to split up your JS files.
I really like minifying and obfuscating my code because I can put my documentation right into the un-minified version and then the minification process removes all the comments for the production environment.
One approach would be to have all the shared javascript minified and compressed into one file and served out on each page. Then the page-specific javascript can be compressed/minified to its own files (although I would consider putting any very common page's javascript into the main javascript file).
I've always been in the habit of compressing/minifying all of the CSS into one file, rather than separate files for each page. This is because some of the page-specific files can be very small, and ideally we share as much css across the site as possible.
Like Jasper mentioned, the most important thing would be to make sure that your server is GZIPing the static resources (such as JavaScript and CSS).
If you have a lot of JavaScript code, you can take a look at asynchronous loading of JS files.
Some large projects like ExtJS or Qooxdoo have built-in loaders to load only the required code, but there are a lot of libraries which simplify this and which you can use in your project (e.g. head.js, LAB.js).
Thanks to them you can build an application which loads only the necessary files, not the whole JavaScript codebase, which in the case of big apps can be heavy stuff for the browser.
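For instance, with head.js the per-page loading might look roughly like this (file names and the initGallery function are assumptions):

// Load only the scripts this page needs, in parallel, then run the callback
// once all of them have executed.
head.load("/js/jquery.min.js", "/js/jquery.lightbox.js", "/js/page-gallery.js", function () {
  // initGallery() is a hypothetical function defined in page-gallery.js.
  initGallery();
});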
Although it is always recommended to put JavaScript and CSS code into appropriate files (as .js and .css), most major websites (like Amazon, Facebook, etc.) put a significant part of their JavaScript and CSS code directly within the main HTML page.
Which is the better choice?
Place your .js in multiple files, then package, minify and gzip that into one file.
Keep your HTML in multiple separate files.
Place your .css in multiple files, then package, minify and gzip that into one file.
Then you simply send one css file and one js file to the client to be cached.
You do not inline them with your HTML.
If you inline them, then any change to the CSS or HTML or JS forces the user to download all three again.
The main reason major websites have JS & CSS inline in their pages is code rot. Major companies don't uphold standards and best practices; they just hack it until it works, then say "why waste money on our website, it works, doesn't it?".
Don't look at examples of live websites, because 99% of all examples on the internet show bad practices.
Also, for the love of god, separation of concerns please. You should never, ever use inline JavaScript or inline CSS in HTML pages.
http://developer.yahoo.com/performance/rules.html#external
Yahoo (even though they have many inline styles and scripts), recommends making them external. I believe google page speed used to (or still does?) do the same as well.
It's really a logical thing to have them separate. There are so many benefits to keeping CSS and JS separate from the HTML: logical code management, caching of those files, lower page size (would you rather a ~200ms request for a 400kb cached resource, or a 4000ms delay from having to download that data on every page?), SEO benefits (less crap for Google to look through when scripts/styles are external), easier minification of external scripts (online tools etc.), and the ability to load them in parallel from different servers...
That should be your primary objective in any website. All styles that make up your whole website should be in the one file (or files for each page, then merged and minified when updated), the same for javascript.
In the real world (not doing a project for yourself, doing one for a client or stakeholder that wants results), the only time where it doesn't make sense to load in another javascript resource or another stylesheet (and thus use inline styles/javascript) is if there's some kind of dynamic information that is on a kind of per-user, per-session or per-time-period that can't be accomplished as simply any other way. Example: when my website has a promotion, we dump a script tag with a small JSON object of information. Because we don't minify and merge multiple files, it makes more sense to just include it in the page. Sure there are other ways to do this, but it costs $20 to do that, whereas it could cost > $100 to do it another way.
Perhaps Amazon/Facebook/Google etc. use so much inline code so that their servers aren't taxed as much. I'm not too sure on the benchmarking between requesting a 1MB file in one hit or requesting 10 100KB files (presuming 1MB/10 = 100KB for example's sake), but which would be faster? Potentially the 1MB file, BUT smaller requests can be loaded in parallel, meaning each of those 10 requests could come from a separate server/domain, thus reducing overall load time.
Further, Google homepages, for example, seem to dump a JSON array of information for the widgets, presumably because they compile all that information from various sources, minify it, cache it, then put it on the page, and then the JavaScript functions build the layout (client-side processing power rather than server-side).
An interesting investigation might be whether they include various .css files regardless of the style blocks you're also seeing. Perhaps it's overhead or perhaps it's convenience.
I've found, while working with different styles of interface developer (and content deployers), that convenience/authority often wins in the face of deadlines and "getting the job done". In a project of a large scale there could be factors involved like "No, you ain't touching our stylesheets", or perhaps, if there isn't a stylesheet using an HTTP request already, convenience has won a battle against good practice.
If your CSS and JavaScript code is for global usage, then it is best to put it into the appropriate files.
Otherwise, if the code is used by just a certain page, like the home page, putting it directly into the HTML is acceptable, and is good for maintenance.
Our team keeps it all separate. All resources like this go into a folder called _Content.
CSS goes into _Content/css/xxx.css
JS goes into _Content/js/lib/xxx.js (For all the library packages)
Custom page events and functions get called from the page, but are put into a main JS file in _Content/js/Main.js
Images will go into the same place under _Content/images/xxx.x
This is just how we lay it out as it keeps the HTML markup as it should be, for markup.
I think putting CSS and JS into the main HTML makes the page load faster.
I know that best practice for including javascript is having all code in a separate .js file and allowing browsers to cache that file.
But when we begin to use many jquery plugins which have their own .js, and our functions depend on them, wouldn't it be better to load dynamically only the js function and the required .js for the current page?
Wouldn't it be faster, on a page where I only need one function, to load just that function dynamically (embedding it in the HTML with a script tag) instead of loading the whole .js file with all of the plugins?
In other words, aren't there any cases in which there are better practices than keeping our whole javascript code in a separate .js?
It would seem at first glance that this would be a good idea, but in fact it would actually make matters worse. For example, if one page needs plugins 1, 2 and 3, then a file would be built server-side with those plugins in it. Now, the browser goes to another page that needs plugins 2 and 4. This would cause another file to be built; this new file would be different from the first one, but it would also contain the code for plugin 2, so the same code ends up getting downloaded twice, bypassing the version that the browser already has.
You are best off leaving the caching to the browser, rather than trying to second-guess it. However, there are options to improve things.
Top of the list is using a CDN. If the plugins you are using are fairly popular ones, then the chances are that they are being hosted on a CDN. If you link to the CDN-hosted plugins, then for any visitor who is hitting your site for the first time but has already hit another site using the same plugins from the same CDN, the plugins will already be cached.
There are, of course, other things you can do to speed your JavaScript up. Best practice includes placing all your script include tags as close to the bottom of the document as possible, so as to not hold up page rendering. You should also look into lazy initialization. This involves, for any stuff that needs significant setup to work, attaching a minimalist event handler that, when triggered, removes itself and sets up the real event handler.
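A rough sketch of that lazy-initialization pattern (the element id and buildMegaMenu are made up for illustration):

// Minimal placeholder handler: attached immediately, costs almost nothing.
var menu = document.getElementById('mega-menu'); // hypothetical element

function lazyInit() {
  // Remove the placeholder so the expensive setup runs only once...
  menu.removeEventListener('mouseover', lazyInit);
  // ...then do the real work and attach the real handlers.
  buildMegaMenu(menu); // hypothetical function containing the heavy setup
}

menu.addEventListener('mouseover', lazyInit);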
One problem with having separate JS files is that it will cause more HTTP requests.
Yahoo have a good best practices guide on speeding up your site: http://developer.yahoo.com/performance/rules.html
I believe Google's Closure library has something for combining JavaScript files and dependencies, but I haven't looked too much into it yet, so don't quote me on it: http://code.google.com/closure/library/docs/calcdeps.html
Also there is a tool called jingo http://code.google.com/p/jingo/ but again, I haven't used it yet.
I keep separate files for each plug-in and page during development, but during production I merge-and-minify all my JavaScript files into a single JS file loaded uniformly throughout the site. My main layout file in my web framework (Sinatra) uses the deployment mode to automatically either generate script tags for all JS files (in order, based on a manifest file) or perform the minification and include a single querystring-timestamped script inclusion.
Every page is given a body tag with a unique id, e.g. <body id="contact">.
For those scripts that need to be specific to a particular page, I either modify the selectors to be prefixed by the body:
$('body#contact form#contact').submit(...);
or (more typically) I have the onload handlers for that page bail early:
jQuery(function($){
if (!$('body#contact').length) return;
// Do things specific to the contact page here.
});
Yes, including code (or even a plug-in) that may only be needed by one page of the site is inefficient if the user never visits that page. On the other hand, after the initial load the entire site's JS is ready to roll from the cache.
The network latency is the main problem. You can get a very responsive page if you reduce the HTTP calls to one.
It means all the JS and CSS are bundled into the HTML page. And if you can forget IE6/7, you can embed the images as data:image/png;base64 URIs.
When we release a new version of our web app, a shell script minify and bundle everything into a single html page.
Then there is a second call for the data, and we render all the HTML client-side using a JS template library: PURE
Ensure the page is cached and gzipped. There is probably a limit in size to consider. We try to stay under 400kb unzipped, and load secondary resources later when needed.
You can also try a service like http://www.blaze.io. It automatically performs most front-end optimization tactics and also couples in a CDN.
They're currently in private beta, but it's worth submitting your website to.
I would recommend you join common bits of functionality into individual javascript module files and load them only in the pages they are being used using RequireJS / head.js or a similar dependency management tool.
An example where you are using lightbox popups, contact forms, tracking, and image sliders in different parts of the website would be to separate these into 4 modules and load them only where needed. That way you optimize caching and make sure your site has no unnecessary flab.
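As an illustration with RequireJS (module names and their methods are hypothetical), the contact page would pull in only the modules it actually uses:

// contact-page.js -- hypothetical per-page entry point.
// Only the contact form and tracking modules are fetched for this page;
// lightbox and slider code never gets downloaded here.
require(['contact-form', 'tracking'], function (contactForm, tracking) {
  contactForm.init('#contact');
  tracking.pageView('/contact');
});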
As a general rule it's always best to have fewer files rather than more. It's also important to work on the timing of each JS file, as some are needed BEFORE the page completes loading and some AFTER (i.e., when the user clicks something).
See a lot more tips in the article: 25 Techniques for Javascript Performance Optimization.
Including a section on managing Javascript file dependencies.
Cheers, hope this is useful.