Combining JavaScript files as recommended by YSlow - optimal size? - javascript

We have about 30 external JavaScripts on our page. Each is already minified.
To reduce HTTP requests and page load time, we are considering combining them to a single file.
This was recommended by the YSlow tool.
Is this wise, or is it better to combine them into, say, two files with 15 scripts each?
Is there an optimal size for the combined JavaScript files?

The fewer the HTTP requests, the better. If you want your page to work on Mobile devices as well, then keep the total size of each script node under 1MB (See http://www.yuiblog.com/blog/2010/07/12/mobile-browser-cache-limits-revisited/)
You might also want to check whether any of your scripts can be deferred to after onload fires. You could then make two combined files, one that's loaded in the page, and the other that's loaded after page load.
The main reason we ask people to reduce HTTP requests is because you pay the price of latency on each request. This is a problem if those requests are run sequentially. If you can make multiple requests in parallel, then this is a much better use of your bandwidth[*], and you pay the price of latency only once. Loading scripts asynchronously is a good way to do this.
To load a script after page load, do something like this:
// This function should be attached to your onload handler.
// It assumes a variable named script_url exists. You could easily
// extend it to use an array of scripts or figure it out some other
// way (see the note below).
function lazy_load() {
    setTimeout(function() {
        var s = document.createElement("script");
        s.src = script_url;
        document.body.appendChild(s);
    }, 50);
}
This is called from onload and sets a timeout for 50ms later, at which point it adds a new script node to the document's body. The script starts downloading after that. Since JavaScript is single-threaded, the timeout will only fire after onload has completed, even if onload takes more than 50ms to complete.
Now instead of having a global variable named script_url, you could have script nodes at the top of your document but with unrecognised content-types like this:
<script type="text/x-javascript-deferred" src="...">
Then in your function, you just need to get all script nodes with this content type and load their srcs.
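A minimal sketch of that collection step might look like this (the 50ms delay mirrors the example above; the function name and details are illustrative, not from the original answer):
// Called from onload: find every script node with the unrecognised type and
// append a real, executable script node for each of its srcs.
function lazy_load_deferred() {
    setTimeout(function() {
        var nodes = document.getElementsByTagName("script");
        for (var i = 0; i < nodes.length; i++) {
            if (nodes[i].type === "text/x-javascript-deferred" && nodes[i].src) {
                var s = document.createElement("script");
                s.src = nodes[i].src;
                document.body.appendChild(s);
            }
        }
    }, 50);
}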
Note that some browsers also support a defer attribute for script nodes that will do all this automatically.
[*] Due to TCP window size limits, you won't actually use all the bandwidth that you have available on a single download. Multiple parallel downloads can make better use of your bandwidth.

The browser has to interpret just as much JavaScript when it is all combined into one monolithic file as it does when the files are broken up, so I would say it doesn't matter at all for interpretation and execution performance.
For network performance, the fewer HTTP requests, the better.

Related

Asynchronous loading JavaScript functions.

I am building a framework in which I have merged all JavaScript files into one file (minified).
Example:
function A() {}
function B() {}
From the minified file I want to load a function asynchronously and remove it from the HTML when its work is done.
Example: load function A when it is required but not function B.
I have seen the framework Require.js, but it loads JavaScript files asynchronously based on requirements.
Is there any framework which loads JavaScript functions on demand when they are defined in the same JS file?
The downside to concatenation is you get less fine-grained control over what you are including on a given page load. The solution to this is, rather than creating one concatenated file, to create layers of functionality that can be included a little more modularly. Thus you don't need all your JS on a page that may only use a few specific functions. This can actually be a win in speed as well, since having just one JS file might not take advantage of the browser's six concurrent connections. Moreover, once SPDY is fully adopted, one large file will actually be less performant than several smaller ones (since connections can be reused). Minification will still be important, however.
All that said, it seems you are asking for something a little difficult to pull off. When a browser loads a script, it gets parsed and executed immediately. You can't load the file and then execute only part of it. By concatenating, you are restricting yourself to that large payload.
It is possible to delay execution by wrapping a script in a block comment, then accessing it from the script node and eval()ing it... but that doesn't seem like what you are asking. It can be a useful strategy, though, if you want to preload modules without locking the UI.
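A hedged sketch of that comment-wrapping trick (the element id, the type value, and the helper name are purely illustrative, not from this answer):
// Markup ships the module inside a block comment in a node the browser
// won't execute, e.g.
// <script id="deferred-module" type="text/x-deferred"> /* ...code... */ </script>
function executeDeferred(id) {
    var node = document.getElementById(id);
    // Strip the surrounding block-comment delimiters, then evaluate.
    var source = node.text.replace(/^\s*\/\*/, "").replace(/\*\/\s*$/, "");
    (0, eval)(source); // indirect eval so the code runs in the global scope
}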
That's not how JavaScript works. When the function's source file is loaded, the function is available in memory. Since the language is interpreted, the functions that are defined become available as soon as the source file has been read by the browser.
Your best bet is to use Require.js or something similar if you want to have explicit dependency chains.
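For illustration, an explicit dependency chain in RequireJS might look something like this (the module names are hypothetical; named define() calls are used only so both modules fit in one snippet):
// moduleB declares a dependency on moduleA, so RequireJS guarantees A's
// factory has run before B's factory receives it.
define("moduleA", [], function() {
    return { hello: function() { return "from A"; } };
});
define("moduleB", ["moduleA"], function(a) {
    return { run: function() { console.log(a.hello()); } };
});
require(["moduleB"], function(b) {
    b.run(); // logs "from A"
});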

Storing $(document).ready() code for multiple pages in external files

Each of my 10-15 pages has 100-200 lines of js inside $(document).ready()
Would it be wise to combine them all into one external file? I don't understand how that would work. Wouldn't the browser have to check for everything all at once, even the functions that are not on the current page? The second issue would probably be function conflicts.
Please give me some tips on how to handle this.
You can split those lines into several files if your JS file becomes too big.
Just note that 100-200 lines is very small. You should minify your code if size really matters to you.
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
Regarding function conflicts, use namespaces and closures, and keep your global object as clean as possible.
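A minimal sketch of that namespacing idea (the namespace and page names are illustrative):
// One global object holds everything; page-specific code lives under it,
// so names can't collide and nothing leaks into the global scope.
var MyApp = MyApp || {};

MyApp.home = (function() {
    var visits = 0; // private state stays inside the closure
    return {
        init: function() {
            visits++;
            console.log("home page ready", visits);
        }
    };
})();

$(document).ready(function() {
    MyApp.home.init();
});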
Would it be wise? Yes.
You give the browser one big cacheable chunk that it doesn't have to worry about anymore, which improves page-loading speed.
Put all your onload events in differently named functions and call them from the relevant pages, or reuse functions on different pages if they have to do the same thing.
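A short sketch of that approach, assuming hypothetical page names and one shared helper:
// site.js - one combined, minified, cacheable file served to every page
function initHomePage() {
    $('#carousel').show();
}
function initContactPage() {
    $('#contact-form').on('submit', validateContactForm);
}
function validateContactForm() {
    return $('#email').val().indexOf('@') !== -1; // reused wherever the form appears
}
Each page then calls only the init function it needs from its own $(document).ready() block.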
There are various things you should look into.
First, move libraries to a public CDN where possible, for example jQuery.
Secondly, always minify and combine all your JS into one file (the same applies to CSS too).
The advantage here is that if you have 10 JS files, the browser has to make ten requests and receive each file separately, not to mention the per-domain request limits on mobile devices.
On the other hand, if everything is sent as one file, there is only one request. You are right that it will take some time to execute (or at least to check all the onready handlers), but this processing time is much lower than the request time you would pay for separate files.

Calling functions when needed

So on my page I have some little scripts which I don't really need to load as soon as you visit the site, and in fact the user might not need them at all during their entire session.
Also, according to this, it's not a good practice either: http://code.google.com/speed/page-speed/docs/payload.html#DeferLoadingJS
So, for example, currently I have everything inside the DOM-ready handler:
$(function() {
    // most of the code of which is not needed
});
If I don't place the code inside the DOM-ready handler, it isn't executable most of the time. So I thought of creating separate functions for each snippet.
For example:
function snippet1() {
    // Code here
}
and then when I need that snippet, I load it on demand with a mouse click (not always a mouse click; it depends on what I need to load).
For example:
$('#button1').click(function() {
    snippet1();
});
So my question is: is this the right way to load functions asynchronously so you reduce page load time, or is there a better way? I haven't read about this anywhere; I just thought of it.
Note that I am aware of async script loading, but that is not my option here, since I could combine all the functions into just one JS file which will be cached, so page load time will be less than loading separate async JS files every time.
You're mixing several things:
Page load time
JavaScript parsing time - After the script is loaded, it has to be parsed (error checking, compiling to byte code, etc)
Function execution time
You can't do much about the page load time since you don't want to split the script. You may consider splitting it into two parts: one which you always need, and an "optional" part which is rarely needed. Load the rare functions in the background.
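A minimal sketch of that split, assuming hypothetical file names core.js and optional.js:
// core.js is referenced normally in the page; optional.js (the rarely used
// functions) is fetched in the background once the page has finished loading.
window.addEventListener("load", function() {
    var s = document.createElement("script");
    s.src = "optional.js"; // hypothetical file holding the rare functions
    s.async = true;
    document.body.appendChild(s);
});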
Note that page load times become pretty moot after the site has been visited once and you've made sure the browser can cache the files.
If you want to reduce parse times, you have two options:
Don't load parts that you don't need.
Compress the scripts. Google has a great tool for that: the Closure Compiler. Besides making your scripts faster, it will also check for many common mistakes.
The last part is the execution times. These are only relevant if the functions are called at all and do a lot of work when they run. In your case, I guess you can ignore this point.
Yes, as much as possible you should define objects, functions, etc. outside of the document.ready wrapper. Some devs will define absolutely everything outside the wrapper and then just call an init() function inside the wrapper to load everything else. I am one such dev.
As for async, this doesn't do true async loading, but it speeds up your page since there is much less work to do on page load.
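A small sketch of that pattern (the init() name mirrors the answer; the rest is illustrative):
// Defined outside the ready wrapper: parsed once, but nothing runs yet.
function bindMenu() {
    $('#menu').on('click', 'a', function() { /* ... */ });
}
function init() {
    bindMenu();
    // ...wire up whatever else the page needs
}

// The only statement inside the wrapper is the single entry point.
$(document).ready(init);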
In general, if you're not using a script loader like requireJS or yepnope, it's a good idea to put all your script references – or at least those that don't need to be run instantly – at the end of your body so the page renders before the resources that aren't going to be run until after page load anyway.
I would load all additional resources using RequireJS ( http://requirejs.org/ ) or a similar library.
Put everything that you don't need immediately into a separate script and load it after the main content has loaded.
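A hedged RequireJS sketch of that idea (the module name and its track() method are hypothetical):
// The main content has already rendered; only now do we ask RequireJS to
// fetch the non-critical module in the background.
window.addEventListener("load", function() {
    require(["deferred/analytics"], function(analytics) {
        analytics.track("page_view");
    });
});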

Is there a provision in LABJS for a callback function if loading times out?

I am asynchronously loading scripts through LABjs and have a chain of dependent scripts. Now if one of the scripts in the chain breaks (in the sense that it cannot be downloaded, or the connection times out), I believe that the remaining scripts in the dependency chain will not be executed. In such an event, is it possible to provide a custom callback function to take appropriate measures if a particular script fails to load?
If this is not possible with LABjs, is it possible with any other asynchronous script loader?
Here's an example showing how to wrap setTimeout() timeouts around LABjs code... in this case it provides a fallback mechanism where it tries to load jQuery from a CDN, then if the timeout passes, it aborts that and tries to load jQuery from a local file instead.
https://gist.github.com/670840
According to getify, who happens to be sitting about 20 feet away from me, there's not a way to handle timeouts like that in general, mostly because a timeout is not an explicit, "positive" event. (In the specific case of how the library handles the dependency chain in such cases, I'll let the author himself clarify.)
What you can do is use your own watchdog to wait as long as you feel is appropriate. Just run an interval timer, checking for some tell-tale sign that your script has made it onto the page, and if after some number of iterations you fail to see it you can fall back on an alternative (different script host, whatever).
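A hedged sketch of such a watchdog (the tell-tale global, the polling interval, and the fallback URL are all illustrative):
// Poll for a global the script is expected to define; after ~5 seconds,
// give up and load the same script from a fallback location instead.
var attempts = 0;
var watchdog = setInterval(function() {
    if (window.jQuery) {            // tell-tale sign the script made it
        clearInterval(watchdog);
    } else if (++attempts > 50) {   // 50 * 100ms = 5 seconds
        clearInterval(watchdog);
        var s = document.createElement("script");
        s.src = "/js/jquery.min.js"; // hypothetical fallback host/path
        document.body.appendChild(s);
    }
}, 100);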
What about this? I have not tested this:
$LAB.script('jquery-from-cdn.js').wait(function() {
    if (!window.jQuery) {
        $LAB.script('local-jquery.js').wait(load_scripts);
    } else {
        load_scripts();
    }
});

function load_scripts() {
    $LAB.script('other-js.js');
}

Any difference between lazy loading Javascript files vs. placing just before </body>

Looked around, couldn't find this specific question discussed. Pretty sure the difference is negligible, just curious as to your thoughts.
Scenario: All Javascript that doesn't need to be loaded before page render has been placed just before the closing </body> tag. Are there any benefits or detriments to lazy loading these instead through some Javascript code in the head that executes when the DOM load/ready event is fired? Let's say that this only concerns downloading one entire .js file full of functions and not lazy loading several individual files as needed upon usage.
Hope that's clear, thanks.
There is a big difference, in my opinion.
When you inline the JS at the bottom of the <body> tag, you're forcing the page to load those <script>s synchronously (must happen now) and sequentially (in a row), so you're slowing down the page a bit, as you must wait for those HTTP calls to finish and the JS engine to interpret your scripts. If you're putting lots of JS stacked up together at the bottom of the page, you could be wasting the user's time with network queueing (in older browsers only 2 connections per host at a time), as the scripts may depend on each other, so they must be downloaded in order.
If you want your DOM to be ready faster (usually what most wait on to do any event handling and animation), you must reduce the size of the scripts you need to as little as possible as well as parallelize them.
For instance, YUI3 has a small dependency resolution and downloading script that you must load sequentially in the page (see YUI3's seed.js). After that, you go through the page and gather the dependencies and make 1 asynchronous and pipelined call to their CDN (or your own servers) to get a big ball of JS. After the JS ball is returned, your scripts execute the callbacks you've supplied. Here's the general pattern:
<script src="seed.js"></script>
<script>
    YUI().use('module', function(Y) {
        // done when the ball returns and is interpreted
    });
</script>
I'm not a particularly big fan of putting your scripts into 1 big ball (because if 1 dependency changes, you must download and interpret the whole thing over again!), but I am a fan of pipe-lining (combining scripts) and the event-based model.
When you do allow for asynchronous, event-based loading, you get better performance, but perhaps not perceived performance (though this can be counteracted).
For instance, parts of the page may not load for a second or two, and hence look different (if you're using JS to affect the page style, which I don't advise) or not be ready for user interaction until you (or those hosting your site) return your scripts.
Additionally, you must do some work to ensure your <script>s have the right dependencies to be able to execute properly. For instance, if you don't have jQuery or Prototype, you can't successfully call:
<script>
    $(function () {
        /* do something */
    });
</script>
or
<script>
    document.observe('dom:loaded', function () {
        /* do something */
    });
</script>
as the interpreter will say something like "Variable $ undefined". This can happen even if you've added both <script>s to the DOM at the same time, as I'd bet jQuery or Prototype are bigger than your application's JS (so the request for that data takes longer). Either way, without some type of limiting, you're leaving this up to chance.
So, the choice is really up to you. If you can properly segment your dependencies - i.e. put the stuff you need up front and lazily load the other stuff later - it'll result in a faster overall time until the DOM is ready.
However, if you use a monolithic library like jQuery or the user expects to be able to see something involving JS animation or style right away, inlining might be better for you.
In terms of usability, you definitely shouldn't do this with anything that the user expects a quick response from, like having a button do double duty as the load trigger in addition to its other function.
OTOH, replacing pagination with continuously loading the page as the user scrolls is a very good idea. I do find it distracting when the load trigger is towards the end of the page; it's better to put it 1/2 to 3/4 of the way down.
