What are some standard practices for managing a medium-large JavaScript application? My concerns are both speed for browser download and ease and maintainability of development.
Our JavaScript code is roughly "namespaced" as:
var Client = {
    var1: '',
    var2: '',
    accounts: {
        /* 100's of functions and variables */
    },
    orders: {
        /* 100's of functions and variables and subsections */
    }
    /* etc, etc for a couple hundred kb */
}
At the moment, we have one (unpacked, unstripped, highly readable) JavaScript file to handle all the business logic on the web application. In addition, there is jQuery and several jQuery extensions. The problem we face is that it takes forever to find anything in the JavaScript code and the browser still has a dozen files to download.
Is it common to have a handful of "source" JavaScript files that get "compiled" into one final, compressed JavaScript file? Any other handy hints or best practices?
The approach that I've found works for me is having separate JS files for each class (just as you would in Java, C# and others). Alternatively, you can group your JS by application functional area if that's easier for you to navigate.
If you put all your JS files into one directory, you can have your server-side environment (PHP, for instance) loop through each file in that directory and output a <script src='/path/to/js/$file.js' type='text/javascript'></script> tag in some header file that is included by all your UI pages. You'll find this auto-loading especially handy if you're regularly creating and removing JS files.
When deploying to production, you should have a script that combines them all into one JS file and "minifies" it to keep the size down.
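A minimal sketch of what such a build step might look like in Node (the directory layout, file names and choice of minifier are assumptions, not part of the original setup):

var fs = require('fs');
var path = require('path');

var srcDir = 'js/src';                                   // assumed source directory
var files = fs.readdirSync(srcDir)
    .filter(function (f) { return path.extname(f) === '.js'; })
    .sort();                                             // or order them via a manifest

var combined = files.map(function (f) {
    return fs.readFileSync(path.join(srcDir, f), 'utf8');
}).join(';\n');                                          // ';' guards against missing semicolons

fs.writeFileSync('js/app.combined.js', combined);
// Run js/app.combined.js through a minifier of your choice (YUI Compressor,
// Closure Compiler, UglifyJS, ...) as a separate step to produce js/app.min.js.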
Also, I suggest using Google's AJAX Libraries API to load external libraries.
It's a Google developer tool that hosts the major JavaScript libraries, making them easier to deploy and upgrade, and lighter because compressed versions are always served.
It also makes your project simpler and lighter, because you don't need to download, copy and maintain these library files in your project.
Use it like this:
google.load("jquery", "1.2.3");
google.load("jqueryui", "1.5.2");
google.load("prototype", "1.6");
google.load("scriptaculous", "1.8.1");
google.load("mootools", "1.11");
google.load("dojo", "1.1.1");
Just a sidenote - as Steve already pointed out, you should really "minify" your JS files. In JS, whitespace adds up: if you have a thousand lines of JS and strip only the unneeded newlines, you've already saved about 1 KB. I think you get the point.
There are tools for this job, and you should never modify the "minified"/stripped/obfuscated JS by hand. Never!
In our big javascript applications, we write all our code in small separate files - one file per 'class' or functional group, using a kind-of-like-Java namespacing/directory structure. We then have:
A compile-time step that takes all our code and minifies it (using a variant of JSMin) to reduce download size
A compile-time step that takes the classes that are always or almost always needed and concatenates them into a large bundle to reduce round trips to the server
A 'classloader' that loads the remaining classes at runtime on demand.
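The 'classloader' in the last step can be as simple as injecting a script tag and running a callback once the file arrives. A rough sketch of the idea (not our actual implementation; paths and names are made up):

var loadedClasses = {};

function loadClass(name, callback) {
    if (loadedClasses[name]) { callback(); return; }
    var script = document.createElement('script');
    script.src = '/js/classes/' + name + '.js';          // hypothetical layout
    script.onload = function () {
        loadedClasses[name] = true;
        callback();
    };
    document.getElementsByTagName('head')[0].appendChild(script);
}

// Usage: pull in the orders code only when the user opens that screen.
loadClass('orders', function () {
    Client.orders.init();                                // hypothetical entry point
});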
For server efficiency's sake, it is best to combine all of your javascript into one minified file.
Determine the order in which the code is required, and then place the minified code in that order in a single file.
The key is to reduce the number of requests required to load your page, which is why you should have all javascript in a single file for production.
I'd recommend keeping files split up for development and then create a build script to combine/compile everything.
Also, as a good rule of thumb, make sure you include your JavaScript toward the end of your page. If JavaScript is included in the header (or anywhere early in the page), it will stop all other requests from being made until it is loaded, even if pipelining is turned on. If it is at the end of the page, you won't have this problem.
Read the code of other (good) JavaScript apps and see how they handle things. I start out with a file per class, but once it's ready for production, I combine the files into one large file and minify.
The only reason I would not combine the files is if I didn't need all the files on all the pages.
My strategy consists of 2 major techniques: AMD modules (to avoid dozens of script tags) and the module pattern (to avoid tight coupling between the parts of your application).
AMD modules: very straightforward, see here: http://requirejs.org/docs/api.html. The optimizer can also package all the parts of your app into one minified JS file: http://requirejs.org/docs/optimization.html
Module pattern: I used this library: https://github.com/flosse/scaleApp. Wondering what that is? More info here: http://www.youtube.com/watch?v=7BGvy-S-Iag
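For concreteness, a minimal AMD module and its consumer with RequireJS might look like this (the module names and selectors are illustrative only):

// js/contact.js - one module per functional area
define(['jquery'], function ($) {
    function init() {
        $('#contact-form').submit(function (e) { /* ... */ });
    }
    return { init: init };
});

// page bootstrap
require(['contact'], function (contact) {
    contact.init();
});

The RequireJS optimizer (r.js) then traces these define/require calls and concatenates the whole dependency graph into one minified file.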
In order to improve performance of our web pages, we are recommended to use CDNs to serve .js files on our web pages. That makes sense.
Also, we are recommended to bundle our .js files in order to reduce the number of requests which are being made to server on load.
So we need to sit down and decide between using a CDN and bundling our .js files.
What are the pros and cons? Which ones make more sense?
Why can't you bundle them and place them on the CDN? It should hardly be a decision of one or the other.
If you do have to choose one or the other, it depends on how many .js files you are including. For a small number of files, I'd suggest that a CDN would be quicker, whereas for a greater number of files, a bundle of .js files would definitely be quicker. Where the switch-over point lies is something for you to experiment with.
My answer: both. Bundle them and place them on a CDN.
The downside of doing this? It depends. What does your build process look like? Can you easily automate the bundling and minification? Are you using Yahoo YUI or Google Closure or something else?
Also, if there is a lot of GUI-dependent jQuery, there might be some time-consuming friction due to constantly changing elements/effects/CSS.
Testing is important too, because of possible minification quirks.
Bottom line: 5 javascript files safely bundled into 1 file === 4 fewer requests.
A page with just plain old Html and one external javascript reference === 2 requests to your server. However, a page with just plain old Html and one external javascript reference on a CDN === 1 request to your server.
Currently we are using the Google Closure tools. The Google Closure Inspector helps with the following:
Closure Compiler modifies your original JavaScript code and produces code that's smaller and more efficient than the original, but harder to read and debug. Closure Inspector helps by providing a source mapping feature, which identifies the line of original source code that corresponds to the compiled code.
As others have already stated, the answer is both if possible. Bundling (and minifying) benefits your users because it decreases the page weight. The CDN benefits your servers because you are offloading work. Generally speaking, you need not optimize either unless you have observed performance issues or you just have nothing better to do.
There are a few things you need to think about...
How much of the JS do you need to load early in the page load, and how much can you delay until later?
If you can delay loading JS (e.g. put it at the bottom of the page) or load it asynchronously as Google Analytics does, then you will minimise the amount of time downloading the JS spends blocking the UI thread.
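A sketch of that async approach (the same pattern the asynchronous Google Analytics snippet uses; the file name is just an example):

(function () {
    var s = document.createElement('script');
    s.src = '/js/tracking.js';      // non-critical script
    s.async = true;                 // hint: don't block parsing
    var first = document.getElementsByTagName('script')[0];
    first.parentNode.insertBefore(s, first);
})();

Because the script element is created from JavaScript, the browser fetches it without blocking the rest of the page.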
After working out how the load of the JS can be split, I'd deal with the merge / minify of the various JS files - cutting down HTTP requests is key to improving performance.
Then look at moving to the CDN, and ensure the CDN can serve the JS content compressed and allows you to set headers so it's "cached forever" (you'll need to version the files if you cache forever). A CDN helps reduce latency, and it also reduces request size by being cookieless.
Another thing you might want to consider is setting up a separate domain for static content: point it to your server(s) while you sort things out, and then switch to a CDN if it looks worthwhile.
Andy
I am developing in ASP.NET MVC and using multiple JavaScript files. E.g. jQuery, jQuery UI, Google Maps, my own JavaScript files, etc.
For performance, should I combine these into one? If so, how?
The reason you want to combine many files into one is to minimize the latency of setting up and tearing down HTTP requests; the fewer you make, the better. However, many newer browsers download JavaScript files in parallel (though they still execute them sequentially). The consequence is that downloading a single 1 MB file may be slower than three 350 KB files. What I've come to do is separate my files into three bundles:
External lib files (jquery, flot, plugins)
Internal lib files (shared by multiple pages)
Page specific files (used only by that page, maybe by two pages)
This way, I get the best of both worlds: not an excessive number of HTTP requests at startup, but also not a single file that can't benefit from parallel downloads.
The same applies to CSS: each page loads three CSS bundles. So in total, our pages download six bundled files plus the main HTML file. I find this to be a good compromise. You may find that a different grouping of files works better for you, but my advice is: don't create a single bundle unless it's a one-page app. If you find yourself putting the same file into different bundles a lot, it's time to rethink the bundling strategy, since you're downloading the same content multiple times.
What to use? Martijn's suggestions are on the money. YUI is the most widely used in my experience; that's what we used at my previous and current jobs.
For the question of whether you should, check out the link in Shoban’s comment.
For the question of how:
Google’s Closure Compiler
Yahoo!’s YUI Compressor
If they are all going to be included on all of your pages, then it doesn't really make a difference. But if some are only relevant to certain pages, it would technically be better to keep them separated and only include the relevant ones on relevant pages.
As far as I know, you should indeed: fewer files means fewer HTTP GETs, hence better performance for the user when they first load the page.
So they will save a split second when they come to your page for the first time. But after that, these files are cached, and it makes no difference at all...
I haven't dug into the JavaScript engines themselves, but a function in one file will be handled the same way whether it lives in a big file or a small one, so it makes no difference to execution.
So save your time and don't change anything, as it'll cost you too much time for too little reward - especially when you discover that you want the latest version of jQuery (a new version came out today, btw) and that you have to re-concatenate everything...
I know that best practice for including javascript is having all code in a separate .js file and allowing browsers to cache that file.
But when we begin to use many jQuery plugins which have their own .js files, and our functions depend on them, wouldn't it be better to dynamically load only the functions and the .js files required by the current page?
Wouldn't it be faster if, on a page that only needs one function, I embedded it in the HTML with a script tag instead of loading the whole .js file along with all the plugins?
In other words, aren't there any cases in which there are better practices than keeping our whole javascript code in a separate .js?
It would seem at first glance that this would be a good idea, but in fact it would actually make matters worse. For example, if one page needs plugins 1, 2 and 3, then a file would be built server-side with those plugins in it. Now the browser goes to another page that needs plugins 2 and 4. This causes another file to be built; this new file is different from the first one, but it also contains the code for plugin 2, so the same code ends up getting downloaded twice, bypassing the version the browser already has.
You are best off leaving the caching to the browser, rather than trying to second-guess it. However, there are options to improve things.
Top of the list is using a CDN. If the plugins you are using are fairly popular ones, the chances are that they are hosted on a CDN. If you link to the CDN-hosted plugins, then any first-time visitor to your site who has also happened to hit another site using the same plugins from the same CDN will already have them cached.
There are, of course, other things you can do to speed your JavaScript up. Best practice includes placing all your script include tags as close to the bottom of the document as possible, so as not to hold up page rendering. You should also look into lazy initialization. This involves, for anything that needs significant setup to work, attaching a minimalist event handler that, when triggered, removes itself and sets up the real event handler.
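A sketch of that lazy-initialization idea with jQuery (the selector and setup function are hypothetical):

// Cheap placeholder handler: runs once, removes itself, then does the heavy setup.
$('#comments').one('click', function () {
    initCommentWidget(this);        // expensive setup, hypothetical function
    // .one() detaches this placeholder automatically; initCommentWidget
    // attaches the real event handlers in its place.
});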
One problem with having separate js files is that it will cause more HTTP requests.
Yahoo have a good best practices guide on speeding up your site: http://developer.yahoo.com/performance/rules.html
I believe Google's Closure Library has something for combining JavaScript files and resolving dependencies, but I haven't looked too much into it yet, so don't quote me on it: http://code.google.com/closure/library/docs/calcdeps.html
Also there is a tool called jingo (http://code.google.com/p/jingo/), but again, I haven't used it yet.
I keep separate files for each plug-in and page during development, but during production I merge-and-minify all my JavaScript files into a single JS file loaded uniformly throughout the site. My main layout file in my web framework (Sinatra) uses the deployment mode to automatically either generate script tags for all JS files (in order, based on a manifest file) or perform the minification and include a single querystring-timestamped script inclusion.
Every page is given a body tag with a unique id, e.g. <body id="contact">.
For those scripts that need to be specific to a particular page, I either modify the selectors to be prefixed by the body:
$('body#contact form#contact').submit(...);
or (more typically) I have the onload handlers for that page bail early:
jQuery(function($){
    if (!$('body#contact').length) return;
    // Do things specific to the contact page here.
});
Yes, including code (or even a plug-in) that may only be needed by one page of the site is inefficient if the user never visits that page. On the other hand, after the initial load the entire site's JS is ready to roll from the cache.
Network latency is the main problem. You can get a very responsive page if you reduce the HTTP calls to one.
That means all the JS and CSS are bundled into the HTML page. And if you can forget IE6/7, you can inline the images as data:image/png;base64.
When we release a new version of our web app, a shell script minifies and bundles everything into a single HTML page.
Then there is a second call for the data, and we render all the HTML client-side using a JS template library: PURE.
Ensure the page is cached and gzipped. There is probably a size limit to consider; we try to stay under 400 KB unzipped, and load secondary resources later when needed.
You can also try a service like http://www.blaze.io. It automatically performs most front-end optimization tactics and also couples in a CDN.
It's currently in private beta, but it's worth submitting your website to.
I would recommend grouping common bits of functionality into individual JavaScript module files and loading them only on the pages where they are used, via RequireJS / head.js or a similar dependency management tool.
For example, if you use lightbox popups, contact forms, tracking, and image sliders in different parts of the website, separate these into 4 modules and load each of them only where needed. That way you optimize caching and make sure your site has no unnecessary flab.
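A rough sketch of that kind of conditional loading with RequireJS (the module name and selector are made up for illustration):

// Only fetch and initialize the slider module on pages that actually contain one.
if (document.querySelector('.image-slider')) {
    require(['modules/slider'], function (slider) {
        slider.init();
    });
}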
As a general rule, it's always best to have fewer files rather than more. It's also important to work on the timing of each JS file, as some are needed BEFORE the page completes loading and some AFTER (i.e., when the user clicks something).
See a lot more tips in the article: 25 Techniques for Javascript Performance Optimization.
Including a section on managing Javascript file dependencies.
Cheers, hope this is useful.