I currently use Angular with ng-routes, ng-table and ng-animate. I also use other libraries such as jQuery and Modernizr. Finally I have my own custom javascript, which can be multiple files.
So my projects load a long list of resources like this:
<script type="text/javascript" src="/lib/jquery.min.js"></script>
<script type="text/javascript" src="/lib/modernizr.min.js></script>
<script type="text/javascript" src="/lib/angular.min.js"></script>
<script type="text/javascript" src="/lib/angular-route.min.js"></script>
<script type="text/javascript" src="/lib/angular-animate.min.js"></script>
<script type="text/javascript" src="/js/custom.js"></script>
I'm aware that I can speed up my projects by reducing HTTP calls and compressing all files.
I use CodeKit which gives options to import JS files into each other and compile them into a single compressed file. If I do that for the list above I would then simply load a single file like this:
<script type="text/javascript" src="/js/main.min.js"></script>
A single HTTP call rather than 6+, and all code is minified. The problem here is that the single file is 500kb+ and would be much larger on a complete project, possibly over 1mb.
Will this mean the file won't get cached by browsers because it's too large?
Will Angular or other libraries have any issues running when compiled in this format? (console throws errors if .map file is not in same directory)
Surely a 500kb+ file will still take a long time to load, so will this really improve load times?
The way I see it, the browser has to wait for that entire large file to load before it can do anything, whereas if the files are separate, it can begin executing each script as soon as that script loads.
Will this mean the file won't get cached by browsers because it's too large?
That depends on the browser. Desktop browsers shouldn't have any problems with this; mobile browsers, however, may, since mobile caches are typically much smaller.
Will Angular or other libraries have any issues running when compiled in this format? (console throws errors if .map file is not in same directory)
Nope. The .map errors won't affect your end users unless they open the console. This can also be fixed by having your build process produce a .map file for the combined version.
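If it helps, here is a minimal sketch of that idea using the uglify-js Node module (assuming a Node-based build step; the file names are just placeholders, and your build tool may offer an equivalent option):

// A minimal sketch, assuming uglify-js (npm install uglify-js) and Node.
// It concatenates and minifies the inputs and writes a .map file next to
// the bundle so the console stops complaining about missing source maps.
var fs = require("fs");
var UglifyJS = require("uglify-js");

var files = ["lib/angular.min.js", "lib/angular-route.min.js", "js/custom.js"];
var code = {};
files.forEach(function (f) { code[f] = fs.readFileSync(f, "utf8"); });

var result = UglifyJS.minify(code, {
  sourceMap: { filename: "main.min.js", url: "main.min.js.map" }
});

fs.writeFileSync("js/main.min.js", result.code);
fs.writeFileSync("js/main.min.js.map", result.map);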
Surely a 500kb+ file will still take a long time to load, so will this really improve load times?
Yes, because you're not passing cookies and HTTP headers back and forth for each of those six-plus requests.
However, there are things that you can do to improve load time. For example, I split my files into two categories:
Files that are needed immediately, and files that can be loaded later.
Unfortunately, you can't really make this distinction with Angular without consequences, because all routes need to be defined up front, along with all controllers and services required by those routes. The only thing you can really defer is inline templating, because templates can be requested later via ajax.
I was able to make a split like this for one of my applications: I split the administration code from the client code and only included the administration code after a successful admin login. This is inconvenient for admins, because an admin can't go directly to an admin page or reload one without problems, but it did reduce load time for non-admin users.
This leaves you with only one real option: reduce the overall size of the content you are including. This means making controllers and services as reusable as possible, and not including code that isn't necessary. Looking at your example, you're including jQuery. jQuery's purpose is to make it easier to write ajax requests and select/manipulate DOM nodes, both of which are already covered by Angular. Therefore, you can reduce the size of your codebase by removing jQuery and instead using angular-based plugins.
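As a rough illustration (the module name, controller name and endpoint here are made up), the kind of work jQuery usually does can often be expressed with Angular's own $http service and data binding instead:

var app = angular.module('myApp', ['ngRoute', 'ngAnimate']);
// Instead of $.get('/api/items', ...) and poking the DOM with $('#list'),
// fetch with $http and let the template render from scope data.
app.controller('ItemsCtrl', ['$scope', '$http', function ($scope, $http) {
  $http.get('/api/items').then(function (response) {
    $scope.items = response.data;   // the view binds to this; no DOM manipulation needed
  });
}]);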
I'm making a site using Express + EJS. I need the site to be server side rendered so I'm using web components without shadow dom. I use a separate EJS view per page type (e.g. home, post, page).
If I'm loading the post.ejs view when I visit example.com/post/my-post, I need to import the web component classes to register my web components on the page, for example if my HTML has:
<navbar-element>, <big-button>, and <clock-element>
how bad is it to put
<script src="/javascript/components/navbar-element.js" defer></script>
<script src="/javascript/components/big-button.js" defer></script>
<script src="/javascript/components/clock-element.js" defer></script>
in my head? I'm concerned because in the event that I have, say, 50 components on a single page, that's 50 additional HTTP requests to download all those scripts. Right now the performance is better than using javascript modules, but I'm not sure if a page with a lot of components making many HTTP requests to download all those scripts is bad. If there are faster yet still simple alternatives, I'd love to hear them. Thanks!
It all depends on the number of components and their size. Even 50 components is not a big problem for a modern browser, which is very efficient at loading static assets in parallel. The difference will almost always be negligible unless you have a heavy single-page application.
However, if you MEASURED your performance with 50 separate files vs. putting it all in one file, and decided that it is indeed a problem, you can use tools like Webpack to package all of your JS files into one automatically, while keeping the source files separate.
However, it is yet another dependency for your project, and it introduces another step into your development process, a so-called BUILD step, so you need to think hard about whether it is worth it in your case.
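If you do decide it is worth it, a minimal sketch of such a Webpack setup might look like this (the file names and paths are illustrative, not taken from your project):

// javascript/components/index.js -- one entry file that imports every component so they all register:
import './navbar-element.js';
import './big-button.js';
import './clock-element.js';

// webpack.config.js -- a minimal sketch; adjust entry/output to your layout.
const path = require('path');
module.exports = {
  mode: 'production',
  entry: './javascript/components/index.js',
  output: {
    path: path.resolve(__dirname, 'public/javascript'),
    filename: 'components.bundle.js'
  }
};

Your EJS head would then need only a single <script src="/javascript/components.bundle.js" defer></script> tag instead of one per component.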
I'm used to working with Java in which (as we know) each object is defined in its own file (generally speaking). I like this. I think it makes code easier to work with and manage.
I'm beginning to work with javascript and I'm finding myself wanting to use separate files for different scripts I'm using on a single page. I'm currently limiting myself to only a couple .js files because I'm afraid that if I use more than this I will be inconvenienced in the future by something I'm currently failing to foresee. Perhaps circular references?
In short, is it bad practice to break my scripts up into multiple files?
There are lots of correct answers here, depending on the size of your application and whom you're delivering it to (by whom, I mean intended devices, et cetera), and how much work you can do server-side to ensure that you're targeting the correct devices (this is still a long way from 100% viable for most non-enterprise mortals).
When building your application, "classes" can reside in their own files, happily.
When splitting an application across files, or when dealing with classes with constructors that assume too much (like instantiating other classes), circular-references or dead-end references ARE a large concern.
There are multiple patterns to deal with this, but the best one, of course is to make your app with DI/IoC in mind, so that circular-references don't happen.
You can also look into require.js or other dependency-loaders. How intricate you need to get is a function of how large your application is, and how private you would like everything to be.
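For illustration, the AMD style used by require.js looks roughly like this (the module names and selector are invented):

// widgets/tabs.js -- an AMD module that declares its own dependencies.
define(['jquery'], function ($) {
  return {
    init: function (selector) {
      $(selector).addClass('tabs-ready');  // placeholder setup work
    }
  };
});

// main.js -- the loader fetches widgets/tabs.js (and jQuery) only when asked for.
require(['widgets/tabs'], function (tabs) {
  tabs.init('#sidebar-tabs');
});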
When serving your application, the baseline for serving JS is to concatenate all of the scripts you need (in the correct order, if you're going to instantiate stuff which assumes other stuff exists), and serve them as one file at the bottom of the page.
But that's baseline.
Other methods might include "lazy/deferred" loading.
Load all of the stuff that you need to get the page working up-front.
Meanwhile, if you have applets or widgets which don't need 100% of their functionality on page-load, and in fact, they require user-interaction, or require a time-delay before doing anything, then make loading the scripts for those widgets a deferred event. Load a script for a tabbed widget at the point where the user hits mousedown on the tab. Now you've only loaded the scripts that you need, and only when needed, and nobody will really notice the tiny lag in downloading.
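A bare-bones sketch of that kind of interaction-triggered loading (the id, path and initTabs function are hypothetical):

var tab = document.getElementById('tab-widget');
tab.addEventListener('mousedown', function handler() {
  tab.removeEventListener('mousedown', handler);   // only ever fetch the script once
  var s = document.createElement('script');
  s.src = '/js/widgets/tabs.js';                   // hypothetical widget script
  s.onload = function () { initTabs(tab); };       // assumes tabs.js defines initTabs
  document.body.appendChild(s);
});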
Compare this to people trying to stuff 40,000 line applications in one file.
Only one HTTP request, and only one download, but the parsing/compiling time now becomes a noticeable fraction of a second.
Of course, lazy-loading is not an excuse for leaving every class in its own file.
At that point, you should be packing them together into modules, and serving the file which will run that whole widget/applet/whatever (unless there are other logical places, where functionality isn't needed until later, and it's hidden behind further interactions).
You could also put the loading of these modules on a timer.
Load the baseline application stuff up-front (again at the bottom of the page, in one file), and then set a timeout for a half-second or so, and load other JS files.
You're now not getting in the way of the page's operation or of the user's ability to move around. This, of course, is the most important part.
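In its simplest form, that timer-based deferral is just this (the file name is a placeholder):

// The baseline bundle has already run; pull in the less urgent code shortly after.
window.setTimeout(function () {
  var s = document.createElement('script');
  s.src = '/js/secondary-widgets.min.js';
  document.body.appendChild(s);
}, 500);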
Update from 2020: this answer is very old by internet standards and is far from the full picture today, but still sees occasional votes so I feel the need to provide some hints on what has changed since it was posted. Good support for async script loading, HTTP/2's server push capabilities, and general browser optimisations to the loading process over the years, have all had an impact on how breaking up Javascript into multiple files affects loading performance.
For those just starting out with Javascript, my advice remains the same (use a bundler / minifier and trust it to do the right thing by default), but for anybody finding this question who has more experience, I'd invite them to investigate the new capabilities brought with async loading and server push.
Original answer from 2013-ish:
Because of download times, you should always try to make your scripts a single, big file. HOWEVER, if you use a minifier (which you should), it can combine multiple source files into one for you. So you can keep working on multiple files, then minify them into a single file for distribution.
The main exception to this is public libraries such as jQuery, which you should always load from public CDNs (more likely the user has already loaded them, so doesn't need to load them again). If you do use a public CDN, always have a fallback for loading from your own server if that fails.
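The usual fallback pattern looks something like this (the CDN URL is only an example):

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script>
  // If the CDN copy failed to load, fall back to the copy on our own server.
  window.jQuery || document.write('<script src="/lib/jquery.min.js"><\/script>');
</script>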
As noted in the comments, the true story is a little more complex;
Scripts can be loaded synchronously (<script src="blah"></script>) or asynchronously (s=document.createElement('script');s.async=true;...). Synchronous scripts block loading other resources until they have loaded. So for example:
<script src="a.js"></script>
<script src="b.js"></script>
will request a.js, wait for it to load, then load b.js. In this case, it's clearly better to combine a.js with b.js and have them load in one fell swoop.
Similarly, if a.js has code to load b.js, you will have the same situation no matter whether they're asynchronous or not.
But if you load them both at once and asynchronously, then depending on the state of the client's connection to the server and a whole bunch of considerations which can only truly be determined by profiling, it can be faster.
(function(d){
    // Use an existing script tag as the insertion point.
    var s=d.getElementsByTagName('script')[0],f=d.createElement('script');
    // Request a.js asynchronously.
    f.type='text/javascript';
    f.async=true;
    f.src='a.js';
    s.parentNode.insertBefore(f,s);
    // Request b.js asynchronously as well; neither blocks the other or the page.
    f=d.createElement('script');
    f.type='text/javascript';
    f.async=true;
    f.src='b.js';
    s.parentNode.insertBefore(f,s);
})(document);
It's much more complicated, but will load both a.js and b.js without blocking each other or anything else. Eventually the async attribute will be supported properly, and you'll be able to do this as easily as loading synchronously. Eventually.
There are two concerns here: a) ease of development, and b) client-side performance while downloading JS assets.
As far as development is concerned, modularity is never a bad thing; there are also JavaScript module loaders (like RequireJS, which implements the AMD format) you can use to help you manage your modules and their dependencies.
However, to address the second point, it is better to combine all your JavaScript into a single file and minify it so that the client doesn't spend too much time downloading all your resources. There are tools (RequireJS's r.js optimizer, for example) that let you do this as well (i.e., combine all your dependencies into a single file).
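As a rough sketch, a RequireJS optimizer build file can be as small as this (the paths and module name are placeholders), run with node r.js -o build.js:

// build.js -- everything the "main" module depends on gets traced and bundled.
({
  baseUrl: "js",
  name: "main",
  out: "dist/main.min.js",
  optimize: "uglify2"
})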
It depends on the protocol you are using. If you are using HTTP/2, I suggest you split the JS files. If you are using HTTP/1.x, I advise you to use a single minified JS file.
Thanks, hope it helps.
It does not really matter much. If you use the same JavaScript in multiple places, it is surely good to keep it in one file to fetch from, so you only need to update the script in one place.
We have an MVC 4 web application with a number of areas.
There is a main layout view that is used by all the pages on the site and it contains all of the CSS includes, the render body tag, then all the JavaScript libraries.
<head>
    <link rel="stylesheet" media="screen" href="~/Content/jquery-ui-1.10.3.custom.min.css" />
    ..
</head>
<body>
    <div id="main-content">@RenderBody()</div>
    <script type="text/javascript" src="~/Scripts/jquery-1.10.2.min.js"></script>
    ..
</body>
The JavaScript consists of common libraries such as jquery, jqueryui and plug-ins.
There is also a single JavaScript file that contains the custom code for the whole site
Since there is only one large JavaScript file with thousands of lines, each code routine is initialized by checking for the existence of a particular DOM element to decide whether it should proceed.
runExample = function() {
    if ($(".Example").length > 0){
        // execute code
    }
}
..
runExample();
This is of course problematic, since a great deal of script is included on every page; while some of the code applies to all pages, most of it only applies to certain areas or pages.
Is there a better way to split the JavaScript up for the site? Keep in mind it is the custom code that is conditional, not necessarily the plug-ins.
Even if there were a way to create a JavaScript file for each area, how would that be referenced within the main layout?
Is it best to load the JavaScript include files at the end of the include file?
What is the effect of minification on performance and would it benefit the custom code file?
Any advice would be appreciated.
First, use bundling. Give BundleConfig.cs under the App_Start folder in your project a gander. By simply minifying and bundling all your JS together, it's sometimes inconsequential that certain code is not actually being used on the current page (the savings you gain from having one cached JS file that every page uses is sometimes better than loading a new different bit of JS on each page.)
If you need more fine grained control, you can use something like Require.js. You essentially write your JS in modules that depend on other modules to run (all of your plugins, jQuery, etc. become "modules" in this scenario). You'll need to manually minify and combine your JS as much as logically possible, but this will allow you to integrate various scripts together without having to worry about load order and missing dependencies.
As a side note, I would respectfully disagree with Kevin B. If maintainability dictates that your JS has to be in the head, I would say that's a symptom of a larger problem with your code design. The only good reason to add JS in the head is when it's essential that the JS run before the page is rendered. A good example is Modernizr, which adds classes to the html element to allow you to specify different styles depending on whether certain features are available in the user's browser (or, in the case of IE, what version the user is running). Without loading it in the head, your styles would change after page load, leading to flashes of unstyled content.

Other than situations like these, all JS should go before the closing body tag, as JS is blocking: the browser completely stops what it's doing, halts all rendering of the page, and runs the script to completion before continuing. Too much of this in the head, and your users stare at a blank page for far too long.
Also, all scripts (and CSS for that matter) should be minified. There's no good reason not to, and the difference in bytes the user has to download is often quite dramatic. Especially in this day and age of mobile-everything and far-too-limited data plans, every byte truly does count.
I know that best practice for including javascript is having all code in a separate .js file and allowing browsers to cache that file.
But when we begin to use many jQuery plugins which have their own .js files, and our functions depend on them, wouldn't it be better to dynamically load only the JS functions and the required .js files for the current page?
Wouldn't it be faster, on a page where I only need one function, to load that function dynamically by embedding it in the HTML with a script tag, instead of loading the whole JS file along with the plugins?
In other words, aren't there cases in which there are better practices than keeping all our JavaScript code in a separate .js file?
It would seem at first glance that this would be a good idea, but in fact it would actually make matters worse. For example, if one page needs plugins 1, 2 and 3, then a file would be built server-side with those plugins in it. Now the browser goes to another page that needs plugins 2 and 4. This would cause another file to be built; this new file would be different from the first one, but it would also contain the code for plugin 2, so the same code ends up getting downloaded twice, bypassing the version that the browser already has.
You are best off leaving the caching to the browser, rather than trying to second-guess it. However, there are options to improve things.
Top of the list is using a CDN. If the plugins you are using are fairly popular ones, then the chances are that they are being hosted on a CDN. If you link to the CDN-hosted plugins, then for any visitor hitting your site for the first time who has also happened to hit another site using the same plugins from the same CDN, the plugins will already be cached.
There are, of course, other things you can do to speed your javascript up. Best practice includes placing all your script include tags as close to the bottom of the document as possible, so as not to hold up page rendering. You should also look into lazy initialization. This involves, for any stuff that needs significant setup to work, attaching a minimalist event handler that, when triggered, removes itself and sets up the real event handler.
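A small sketch of that lazy-initialization pattern with jQuery (the selector and setUpGallery function are invented for the example):

// Cheap placeholder handler: on the first click it does the expensive setup,
// then re-fires the event so the real handlers (attached during setup) run.
$('#photo-gallery').one('click', function (e) {
  setUpGallery(this);        // heavy plugin initialization happens only now
  $(this).trigger(e.type);   // replay the click for the newly attached handlers
});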
One problem with having separate js files is that it will cause more HTTP requests.
Yahoo have a good best practices guide on speeding up your site: http://developer.yahoo.com/performance/rules.html
I believe Google's Closure library has something for combining JavaScript files and dependencies, but I haven't looked too much into it yet, so don't quote me on it: http://code.google.com/closure/library/docs/calcdeps.html
Also there is a tool called jingo http://code.google.com/p/jingo/ but again, I haven't used it yet.
I keep separate files for each plug-in and page during development, but during production I merge-and-minify all my JavaScript files into a single JS file loaded uniformly throughout the site. My main layout file in my web framework (Sinatra) uses the deployment mode to automatically either generate script tags for all JS files (in order, based on a manifest file) or perform the minification and include a single querystring-timestamped script inclusion.
Every page is given a body tag with a unique id, e.g. <body id="contact">.
For those scripts that need to be specific to a particular page, I either modify the selectors to be prefixed by the body:
$('body#contact form#contact').submit(...);
or (more typically) I have the onload handlers for that page bail early:
jQuery(function($){
    if (!$('body#contact').length) return;
    // Do things specific to the contact page here.
});
Yes, including code (or even a plug-in) that may only be needed by one page of the site is inefficient if the user never visits that page. On the other hand, after the initial load the entire site's JS is ready to roll from the cache.
Network latency is the main problem. You can get a very responsive page if you reduce the HTTP calls to one.
It means all the JS and CSS are bundled into the HTML page. And if you can forget about IE6/7, you can inline the images as data:image/png;base64 URIs.
When we release a new version of our web app, a shell script minifies and bundles everything into a single HTML page.
Then there is a second call for the data, and we render all the HTML client-side using a JS template library: PURE
Ensure the page is cached and gzipped. There is probably a size limit to consider; we try to stay under 400 KB unzipped and load secondary resources later, when needed.
You can also try a service like http://www.blaze.io. It automatically performs most front-end optimization tactics and also couples in a CDN.
They're currently in private beta, but it's worth submitting your website to.
I would recommend you join common bits of functionality into individual javascript module files and load them only in the pages they are being used using RequireJS / head.js or a similar dependency management tool.
An example where you are using lightbox popups, contact forms, tracking, and image sliders in different parts of the website would be to separate these into 4 modules and load them only where needed. That way you optimize caching and make sure your site has no unnecessary flab.
As a general rule it's always best to have fewer files rather than more. It's also important to work on the timing of each JS file, as some are needed BEFORE the page completes loading and some AFTER (i.e., when the user clicks something).
See a lot more tips in the article: 25 Techniques for Javascript Performance Optimization.
Including a section on managing Javascript file dependencies.
Cheers, hope this is useful.
I'm working on a project which uses many scripts (Google Maps, jQuery, jQuery plugins, jQuery UI...). Some pages have almost 350 kB of Javascript.
We are concerned about performance and I'm asking myself what is the best way to integrate those heavy scripts.
We have 2 solutions:
Include all scripts in the head, even if they are not utilized on the page.
Include some common scripts in the head, and include page specific ones when they are needed.
I would like to have your advice.
Thanks.
For the best performance I would create a single static minified JavaScript file (using a tool like YUI Compressor) and reference it from the head section. For good tips on website performance, check out Google's website optimization page.
Note that the performance penalty of retrieving all your JavaScript files only happens on the first page load, as the browser will use the cached version of the file on subsequent pages.
For even better responsiveness, you could split your JavaScript into two files: load the first with all the JavaScript you need when the page loads, then after the page has loaded, load the second file in the background.
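Something along these lines (the file name is illustrative):

// Once the page itself has finished loading, fetch the non-critical half of the code.
window.addEventListener('load', function () {
  var s = document.createElement('script');
  s.src = '/js/site-secondary.min.js';
  document.body.appendChild(s);
});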
If you're interested, I have an open-source AJAX JavaScript framework that simplifies, compresses and concatenates all your HTML, CSS and JavaScript (including 3rd-party libraries) into a single JavaScript file.
If you think it's likely that some users will never need the Google Maps JavaScript for example, just include that in the relevant pages. Otherwise, put them all in the head - they'll be cached soon enough (and those from Google's CDN may be cached already).
Scripts in the <head> tag do (I think) stop the page from rendering further until they’ve finished downloading, so you might want to move them down to the end of the <body> tag.
It won’t actually make anything load faster, but it should make your page content appear more quickly in some situations, so it should feel faster.
I’d also query whether you’ve really got 350 KB of JavaScript coming down the pipe. Surely the libraries are gzipped? jQuery 1.4 is 19 KB when minified and gzipped.
1) I would recommend gathering all the common and most important scripts, like jQuery, into one file to reduce the number of requests for these files, and compressing it; for the compression I would recommend Google Closure Compiler.
2) Do the heavy loading on a page the user has to open at the beginning anyway, like the login page, and put the scripts at the end of the page to let all the content render first; this is recommended by most of the performance tools, like YSlow and PageSpeed.
3) Don't write scripts inline in your pages; try to keep everything in files to make compression and minification easier later on.
4) Put the scripts and all static files like images and CSS on another domain to take that load off your main server.