Single .js File vs. Lazy Loading - JavaScript

Scenario: You are building a large JavaScript-driven web application where you want as few page refreshes as possible. Imagine 80-100 MB of unminified JavaScript, just to have a number that defines "large".
My assumption is that if you lazy-load your JavaScript files, you can strike a better balance on load times (you don't have to wait a few seconds each time the page refreshes), hopefully to the point where the user doesn't really notice any lag. I'm guessing that in a scenario like this, lazy loading would be more desirable than the typical single minified .js file.
Now, theoretically, there is a roughly fixed per-request cost for fetching any file from a given server, regardless of the file's size, so too many requests would not be desirable. For example, if a small JavaScript file loads at the same time as 10 other small or medium-sized files, it might be better to group them together to save on the cost of multiple requests.
My question is, assuming reasonable defaults (say the client has a 3-5 Mbps connection and decent hardware), what is an ideal size of file to request? Too large, and you are back to loading too much at one time; too small, and the cost of the request outweighs the amount of data you get back, reducing your data-per-second economy.
Edit: All the answers were fantastic. I only chose Ben's because he gave a specific number.
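For concreteness, "lazy loading" here means fetching a feature's script only the first time the user needs it. Below is a minimal sketch using dynamic import(); the module paths, the #open-reports element, and the show() entry point are hypothetical, and the script-injection loaders discussed in the answers achieve the same effect in older browsers.

    // Load a feature's code the first time the user opens it, and reuse the
    // in-flight or completed promise on subsequent opens.
    const loadedFeatures = {};

    function loadFeature(name) {
      if (!loadedFeatures[name]) {
        loadedFeatures[name] = import('./features/' + name + '.js'); // hypothetical paths
      }
      return loadedFeatures[name];
    }

    document.querySelector('#open-reports').addEventListener('click', function () {
      loadFeature('reports').then(function (mod) {
        mod.show(); // assumes each feature module exports a show() entry point
      });
    });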

Google's Page Speed initiative covers this in some detail:
http://code.google.com/speed/page-speed/docs/rules_intro.html
Specifically http://code.google.com/speed/page-speed/docs/payload.html

I would try to keep the amount that needs to be loaded to show the page (even if it's just a loading indicator) under 300 KB. After that, I would pull down additional data in chunks of up to 5 MB at a time, with a loading indicator (maybe a progress bar) shown. I've had 15 MB downloads fail on coffee-shop broadband Wi-Fi that otherwise seemed fine. If the connection were bad enough that sub-5 MB downloads failed, I probably wouldn't blame the website for not working.
Beyond that initial sub-300 KB file, I would also consider downloading two files at a time, using a loader like LABjs or HeadJS to programmatically add script tags.
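A minimal sketch of that kind of programmatic loading without a library, assuming a hypothetical list of chunk URLs produced by your build step; LABjs and HeadJS add finer control over execution order on top of the same idea:

    // Hypothetical chunk list; in practice this would come from your build step.
    var chunks = ['core.js', 'editor.js', 'reports.js', 'admin.js'];
    var MAX_PARALLEL = 2; // download at most two files at a time
    var inFlight = 0;

    function loadNext() {
      while (inFlight < MAX_PARALLEL && chunks.length > 0) {
        var script = document.createElement('script');
        script.src = chunks.shift();
        script.async = true;
        script.onload = script.onerror = function () {
          inFlight--;
          loadNext(); // start the next download as soon as a slot frees up
        };
        inFlight++;
        document.head.appendChild(script);
      }
    }

    loadNext();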

I think it's clear that making the client download more than a megabyte of JS before they can do anything is bad, and that making the client download more of anything than necessary is also bad. But there's a clear benefit to having it all cached.
Factors that will influence the numbers:
Round-trip time
Server response time
Header Size (including cookies)
Caching Technique
Browser (see http://browserscope.com)
Balancing parallel downloads against differing cache requirements is another factor to worry about.
This was partially covered recently by Kyle Simpson here: http://www.reddit.com/r/javascript/comments/j7ng4/do_you_use_script_loaders/c29wza8

Related

Why do I get a lower PageSpeed score after I "eliminate render-blocking resources"?

I'm brand new to PageSpeed and trying to get a grasp on the various aspects it wants me to optimize. Is there a guideline I can follow, such as what code can be removed, deferred, etc.? I am by no means a developer.
My score fluctuates with each .js file I defer. Sometimes it gets better, sometimes it gets worse. This doesn't make sense to me and is rather frustrating.
I expect the PageSpeed score to improve, but instead it fluctuates and sometimes gets worse with each .js file I defer.
While PageSpeed shows some interesting statistics such as first contentful paint, time to interactive, etc., it's hard to figure out what is going on under the hood without looking at a timeline of the page-loading process. A network waterfall shows how your page loads in Google Chrome; you can capture one yourself by following this guide: https://developers.google.com/web/tools/chrome-devtools/network/
One of the major holdups is your server response time: the server takes at least 2 seconds to handle the request for the page, which is quite high and leads me to believe that you may be on a budget plan at HostMonster. In contrast, StackOverflow.com responds in about 300 ms. An easy way to benchmark your web host is to run this test on a fresh WordPress installation; it will tell you whether an inefficient WordPress setup on your part is causing the delay or whether the host is inherently slow at serving PHP. I have a feeling that significant bloat in your PHP code is causing this.
Next, you have a CSS file that takes 880 ms and is 200 KB compressed but 1.9 MB uncompressed (shown as 1,860,013 bytes in the Coverage tab). This is extremely big and should not exceed 1 MB even on the largest of websites. 98.6% of your CSS file is unused. It is not uncommon to have over 90% unused CSS when using a generic framework, but generic frameworks never ship a CSS file close to 2 MB; a CSS bundle is below 500 KB on most websites. Not only does this add to the download time, it also adds to the time required for the browser to parse the file and render the page.
There's also the header image, which is 423 KB in size and takes 1.3 s to download. Usually, an image of that resolution can be compressed to less than 100 KB.
The red line on the waterfall indicates when the page is considered fully loaded. The oversized header adds a lot to that, and the entire loading process is pushed back by 2.2 s due to the server response time. You'll see that the script loading times are marginal compared to the effect of these two, which may be why you observed no difference in loading times when you changed the tags to defer.
To summarize: look into why it takes 2+ seconds for the server to respond, clean up your CSS (aim for half the size), and compress your header image (aim for around 100 KB). Once you fix these, you can look into deferring scripts to achieve the optimal loading time.

Aggressive Caching vs. Combining of JavaScript Files for an Intranet Site

I'm working on improving the page performance of my company's intranet page. We're looking to (dynamically) combine our javascript files as well as cache them for 30+ days. The page launches on login for everyone.
One of my coworkers asked whether it's worth the time to combine the JS files if we're already caching them for a month. He's hesitant to do both because the combining tool is server-side and doesn't run on our desktops, requiring a somewhat hacky workaround.
I've been doing some research and most of the recommendations for performance I've seen are for external sites. Since we're in a closed system it would seem like we wouldn't get much benefit from combining the files once everyone's cache is primed. What would combining the files buy us that aggressive caching wouldn't?
We're on IE8 if that makes any difference.
The most notable impact of having multiple JavaScript files is the time required to render the page. Each script tag is processed separately and adds time to the overall render.
A pretty good answer can be found here: multiple versus single script tags.
If we are talking about a large number of scripts, then you may see an improvement in render time; if it is just two or three files, it likely won't bring about a noticeable difference once the files have been cached.
I would suggest testing the page render time in both cases, seeing how much improvement you get, and deciding based on that information.
As a useful example, here are some stats from Xpedite (a runtime minification tool I created a while back); note the difference in time from load to ready for combined vs. uncombined scripts.
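If you want to capture load-to-ready numbers on your own page rather than rely on mine, here is a rough sketch using the Navigation Timing API; note that IE8 doesn't support it, so there you would have to record timestamps manually in the head and in an onload handler instead:

    // Log coarse load-to-ready numbers once the page has fully loaded.
    // Assumes the Navigation Timing API (IE9+ and all modern browsers).
    window.addEventListener('load', function () {
      var t = window.performance && window.performance.timing;
      if (!t) { return; }
      // loadEventEnd is only filled in after the load handler returns,
      // so read the values on the next tick.
      setTimeout(function () {
        console.log('DOM ready: ' + (t.domContentLoadedEventEnd - t.navigationStart) + ' ms');
        console.log('Fully loaded: ' + (t.loadEventEnd - t.navigationStart) + ' ms');
      }, 0);
    });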
Combine all your JavaScript files into a single big file (minimizing the number of requests made to the server), and name it something like "application_234234.js"; the number represents the last time you changed the file and lets the browser know it's a new file (so it won't use a stale cached copy after a change).
Add an Expires or Cache-Control header set really far into the future. Since the file name will change each time you modify the file, you don't have to worry about clients holding on to old versions.
Last but not least, minify and gzip the JavaScript file.
This is important advice, but you can learn more about best practices at: http://developer.yahoo.com/performance/rules.html
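As an illustration only (the answer doesn't say what server stack is in use), this is roughly how the versioned file name, far-future caching, and gzip could combine with Node and Express plus the compression middleware; the "public" directory and the port are placeholders:

    // Sketch assuming Node with Express and the "compression" package;
    // adjust for your own stack.
    var express = require('express');
    var compression = require('compression');

    var app = express();
    app.use(compression()); // gzip responses, including the JS bundle

    // Far-future caching is safe because the bundle carries a version in its
    // name (e.g. application_234234.js): any change produces a new URL.
    app.use(express.static('public', { maxAge: '365d' }));

    app.listen(3000);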

Considering only page speed, at what point is CSS or JS big enough to externalize

Common advice is to keep your CSS and JS files external. The reason: What you lose in the additional HTTP request*, you often gain back by not having to download cacheable static content (which CSS and JS usually is).
However, the smaller your external file, the more penalizing the additional HTTP request -- even if it is a 304 Not Modified response. So the smaller the external file, the more reason to favor inlining your CSS and JS content, at least when speed is your primary concern.
I ran some tests. Without going through the details, my results look like this:
External File Size    Average Gain
----------------------------------
   1 KB                  -3.7 ms
   2 KB                  -3.7 ms
   4 KB                  -4.0 ms
   8 KB                  -3.0 ms
  16 KB                  -2.7 ms
  32 KB                   1.0 ms
  64 KB                   2.7 ms
 128 KB                  10.0 ms
 256 KB                 493.7 ms
 512 KB                1047.0 ms
1024 KB                2569.7 ms
My general conclusion is that using external files doesn't really matter until they get BIG. And by BIG, I mean the 50-100 KB range... and that's minified and gzipped, if you take advantage of those.
Can anyone confirm or refute these results with additional data?
(* Assuming you don't use the HTTP "Expires" header)
I don't have additional data, but I can confirm that your results logically make sense. Most people today are on fast broadband connections, and most web servers will automatically gzip any text-based content that they send, so in many cases the overhead of sending a second request to load an external resource (or verify that it has not been modified) is going to be greater than the cost incurred by downloading a bit more data as part of the original request.
You can even work this out mathematically if you want: assuming an average connection speed of 5 Mbps and a typical round-trip time of 100 ms, you can add up to approximately 62,500 bytes to the payload of the first response before it costs more than the round trip of a second request. That correlates very nicely with your numbers.
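The arithmetic behind that figure, so you can plug in your own connection numbers:

    // Break-even payload size: the extra bytes you could have sent in the
    // time one additional round trip would cost.
    var bandwidthBytesPerSec = 5 * 1000 * 1000 / 8; // 5 Mbps = 625,000 bytes/s
    var roundTripSec = 0.100;                       // 100 ms RTT
    var breakEvenBytes = bandwidthBytesPerSec * roundTripSec;
    console.log(breakEvenBytes); // 62500 bytes, matching the figure above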
However, that doesn't mean that "using external files doesn't really matter", as there are other reasons to use them apart from the caching/page-load aspect. For instance, they help keep your code and overall site structure sane, particularly if you have common CSS styles and JavaScript utilities that are reused across multiple pages. I'd argue that this is at least as important as any small gain or loss in page-load time that you might get from using/not using external files. So in that context using external files makes sense even for smaller resources.

JavaScript errors / bad practice

Is it bad practice to have a single JavaScript file that gets loaded across all pages, even if certain functions are only needed on certain pages? Or should the files be split up according to the functionality on a given page and loaded only by the pages that need them?
According to YSlow, fewer files are better, but try to keep each file under 25 KB. Also make sure you minify or otherwise reduce the size of the JS (and CSS) files. If possible, turn on gzip for JS on the web server and set a far-future Expires header.
See here for the Yahoo Developer performance best practices.
If this file is really large, it could impact some users' perceived performance (i.e., download and render times). IMHO you should split it up into reasonable groups of functions, with each group containing related functions (such that pages only reference the files they need).
It depends on the size and complexity of the unused functions. The JavaScript parser initially only records the location and signature of each function; as far as I know, a function body is only fully parsed when it is executed. If traffic is a problem for you, include only the scripts you need.
Since JS files are cached once they're downloaded, and the JS parser shows no noticeable performance difference between a big JS file (not a HUGE one ;)) and a small one, you should go with the single-file approach.
It is also known that loading multiple JS files reduces performance.
You're best off with a single JS file, as browsers will cache it after the first request for it (unless they've turned that off, in which case they deserve what they get). Another thing that will vastly, vastly increase your perceived performance on page load is turning on gzip compression in the web server - most JS files compress very well.
I would recommend one big file: the browser launches a separate request for each file, and most browsers (I'm not quite sure of the exact limits in the newest versions) only allow a few concurrent requests, so the browser waits for earlier files to finish downloading before launching the next requests.
The one big file will be cached and other pages will load faster.
As @Frozenskys mentions, YSlow states that fewer files are better; one of the major performance enhancements proposed by the Yahoo team is to minimize the number of HTTP requests.
Of course, if you have a HUGE JavaScript file that literally takes seconds to download, it's better to split it up so the user doesn't have to wait several seconds before the page loads.
A single file means a single download; as this article explains, most browsers will only allow a limited number of parallel requests to a single domain. Although your single file will be bigger than multiple small ones, as the other answers have pointed out:
The file will be cached
Techniques like minification and server-side gzip compression will help to reduce the download time.
You can also include the script at the end of the page to improve the perceived load time.

JavaScript and CSS parsing performance

I am trying to improve the performance of a web application. I have metrics that I can use to optimize the time taken to return the main HTML page, but I'm concerned about the external CSS and JavaScript files that are included from these HTML pages. These are served statically, with HTTP Expires headers, but are shared between all the pages of the application.
I'm concerned that the browser has to parse these CSS and JavaScript files for each page that is displayed and so having all the CSS and JavaScript for the site shared into common files will negatively affect performance. Should I be trying to split out these files so I link from each page to only the CSS and JavaScript needed for that page, or would I get little return for my efforts?
Are there any tools that could help me generate metrics for this?
Context: While it's true that HTTP overhead is more significant than parsing JS and CSS, ignoring the impact of parsing on browser performance (even if you have less than a meg of JS) is a good way to get yourself in trouble.
YSlow, Fiddler, and Firebug are not the best tools to monitor parsing speed. Unless they've been updated very recently, they don't separate the amount of time required to fetch JS over HTTP or load from cache versus the amount of time spent parsing the actual JS payload.
Parse speed is slightly difficult to measure, but we've chased this metric a number of times on projects I've worked on, and the impact on page loads was significant even with ~500 KB of JS. Obviously the older browsers suffer the most... hopefully Chrome, TraceMonkey and the like help resolve this situation.
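One crude way to probe parse/compile cost in isolation is to fetch the script text (ideally already in cache) and time the Function constructor on it; treat the numbers as relative rather than absolute, since engines defer full parsing of inner function bodies, and the /js/bundle.js path below is hypothetical:

    // Rough parse-cost probe: fetch the script text, then time how long the
    // engine takes to compile it via the Function constructor.
    // This measures parse/compile only; the code is never executed.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/js/bundle.js', true); // hypothetical bundle path
    xhr.onload = function () {
      var start = Date.now();
      new Function(xhr.responseText); // forces a parse without running the code
      console.log('Approximate parse time: ' + (Date.now() - start) + ' ms');
    };
    xhr.send();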
Suggestion: Depending on the type of traffic you have at your site, it may be well worth your while to split up your JS payload so that large chunks of JS that will never be used on the most popular pages are never sent down to the client. Of course, this means that when a new client hits a page where that JS is needed, you'll have to send it over the wire.
However, it may well be the case that, say, 50% of your JS is never needed by 80% of your users due to your traffic patterns. If so, you should definitely use smaller, packaged JS payloads only on the pages where that JS is necessary. Otherwise 80% of your users will suffer unnecessary JS parsing penalties on every single page load.
Bottom line: It's difficult to find the proper balance between JS caching and smaller, packaged payloads, but depending on your traffic pattern it's definitely well worth considering a technique other than smashing all of your JS into every single page load.
I believe YSlow does, but be aware that unless all requests are over a loopback connection you shouldn't worry. The HTTP overhead of split-up files will impact performance far more than parsing, unless your CSS/JS files exceed several megabytes.
To add to kamen's great answer, I would say that on some browsers the parse time for larger JS resources grows non-linearly; that is, a 1 MB JS file will take longer to parse than two 500 KB files. So if a lot of your traffic is people who are likely to have your JS cached (return visitors), and all your JS files are cache-stable, it may make sense to break them up even if you end up loading all of them on every page view.
