Limit jquery file to required functions - javascript

I use jQuery. Although I'd like to think of myself as a fairly good programmer in general, and specifically for JS, I don't think I understand the DOM API and its variable behavior across browsers. Hence the use of jQuery.
I use only a small subset of jQuery, though:
1) Ajax
2) event handlers
3) Selectors/find/child/parent
I don't use anything else: no filter, no UI events, nothing! (Okay, fine, slideUp/slideDown, but I can do that myself using CSS.)
Are there any existing command-line or browser-based tools that would do a static analysis of the jQuery script so that I don't have to force the user to download the full 100KB? If you're suggesting I do it myself: thank you, doing it manually would be the next step, and if there seems to be enough interest I might consider writing such a tool.
Re: CDN: thanks for your suggestions, please see my comment to #Jonathan

You can take each function from the GitHub repository, but since there are various dependencies, you will not save as much as you think. Instead of using the 100kb uncompressed development version, you'll do better using the 32kb minified version from http://jquery.com/download/.
There are also three good reasons to use jQuery from Google's CDN:
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
1. Decreased Latency
In the case of Google’s AJAX Libraries CDN, what this means is that any users not physically near your server will be able to download jQuery faster than if you force them to download it from your arbitrarily located server.
2. Increased parallelism
To avoid needlessly overloading servers, browsers limit the number of connections that can be made simultaneously. Depending on which browser, this limit may be as low as two connections per hostname.
3. Better caching
[W]hen a browser sees references to CDN-hosted copies of jQuery, it understands that all of those references do refer to the exact same file. With all of these CDN references pointing to exactly the same URLs, the browser can trust that those files truly are identical and won't waste time re-requesting the file if it's already cached. Thus, the browser is able to use a single copy that's cached on-disk, regardless of which site the CDN references appear on.
Source of excerpt and further reading: http://encosia.com/3-reasons-why-you-should-let-google-host-jquery-for-you/

You can just use the Google Hosted Libraries and stop worrying about jQuery's size.
I am sure every user's browser has it in cache already, so it's beneficial both for you and the user.

How robust is online sourcing of js scripts, and best practices

I come from an R background and I am starting to learn some JavaScript for data visualization purposes (think Leaflet, D3, Chart, ...).
I am trying to wrap my head around the fact that many tutorials and templates suggest loading packages, CSS, or even data directly from online sources. For example, https://leafletjs.com/examples/quick-start/ recommends:
Before writing any code for the map, you need to do the following
preparation steps on your page:
Include Leaflet CSS file in the head section of your document:
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.7.1/dist/leaflet.css"
integrity="sha512-xodZBNTC5n17Xt2atTPuE1HxjVMSvLVW9ocqUKLsCC5CXdbqCmblAshOMAS6/keqq/sMZMZ19scR4PsZChSR7A=="
crossorigin=""/>
Include Leaflet JavaScript file after Leaflet’s CSS:
<!-- Make sure you put this AFTER Leaflet's CSS -->
<script src="https://unpkg.com/leaflet@1.7.1/dist/leaflet.js"
integrity="sha512-XQoYMqMTK8LvdxXYG3nZ448hOEQiglfqkJs1NOQV44cWnUrBc8PkAOcXy20w0vlaXaVUearIOBhiXZ5V3ynxwA=="
crossorigin=""></script>
It's not that you can't do things like that in R as well. But still, coming from "an R culture", I am used to the feeling that I have a local "hard copy" of every package and piece of data my code relies on. Then, when I ship my code (e.g., when I publish a Shiny app), a snapshot of all required dependencies ships with it so it works as a standalone.
I understand the downside in terms of storage space on the server, but my sense is this might be faster and more reliable.
What I'd like to know is whether my understanding of online sourcing and its tradeoffs in javascript is correct, and if so, what the best practices are to address potential shortcomings. In particular:
Do I understand correctly that dependencies like https://unpkg.com/leaflet@1.7.1/dist/leaflet.js or https://unpkg.com/leaflet@1.7.1/dist/leaflet.css are reloaded every time I refresh the page?
The page is therefore dependent on those links not breaking, right? Or are there some inner mechanics I am not aware of that avoid this kind of wasteful reloading and risky dependency?
If there are not, do people just live with the risk of links breaking down? Or is it best practice to keep a local copy of scripts like https://unpkg.com/leaflet@1.7.1/dist/leaflet.js and source them locally instead? Or even, is there yet another best practice, like using a "safer provider" as a source for dependencies (do I understand correctly that this is the role of services like https://www.jsdelivr.com/?)?
Do I understand correctly that dependencies like https://unpkg.com/leaflet@1.7.1/dist/leaflet.js or https://unpkg.com/leaflet@1.7.1/dist/leaflet.css are reloaded every time I refresh the page?
No. HTTP clients perform caching.
The page is therefore dependent on those links not breaking, right?
Yes. (Where "breaking" includes "being blocked by a firewall" (a particular problem for users in China who often find that they can access a website but the JS doesn't work because it is hosted somewhere blocked by the Great Firewall) and "the CDN server being taken over by someone malicious")
do people just live with the risk of links breaking down?
Yes. Risk is relative though. CDNs are generally selected because the provider is trusted.
The potential benefits include faster access to the JS through the CDN making use of edge servers and the possibility that (for popular libraries, at least) a client will have already cached the data because another site used the same library.
You're also using the CDN host's bandwidth to serve the JS instead of your own, which can be a cost saving.
Do I understand correctly that dependencies like https://unpkg.com/leaflet@1.7.1/dist/leaflet.js or https://unpkg.com/leaflet@1.7.1/dist/leaflet.css are reloaded every time I refresh the page?
Yes and no. The important thing here is caching. Browsers will cache resources that have been loaded. Therefore, if a user goes to the page and hits refresh over and over, they would only download these resources once and each reload will use the cached version. So no, they are not reloaded every time.
However, any time the user clears the cache, or a new user comes in without the resource in their cache, the file will be downloaded. Cache expiration for browsers is not entirely predictable, as it is controlled by the users to a large extent. Still, chances are that if a user visited today and then again next week using the same browser, they would still have the item in their cache. But if their cache is flushed, or they use a different browser, or a different machine, or it is an entirely different user who visits, then yes - they would load the resource again.
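As a rough way to observe this, the Resource Timing API can hint at whether a resource came out of the cache. A minimal sketch (the 'leaflet' filter string is just an example, and cross-origin entries may report a size of 0 unless the CDN sends a Timing-Allow-Origin header):
// List resources whose URL mentions "leaflet" and guess whether each came from the HTTP cache.
// A transferSize of 0 with a non-zero decodedBodySize usually indicates a cache hit.
performance.getEntriesByType('resource')
  .filter(function (entry) { return entry.name.indexOf('leaflet') !== -1; })
  .forEach(function (entry) {
    var fromCache = entry.transferSize === 0 && entry.decodedBodySize > 0;
    console.log(entry.name, fromCache ? 'from cache' : 'over the network');
  });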
The page is therefore dependent on those links not breaking, right? Or are there some inner mechanics I am not aware of that avoid this kind of wasteful reloading and risky dependency?
The inner mechanics are caching from above. However, if a resource link is taken down for whatever reason, then the page cannot use it. This could happen because:
the source of the link is no longer working.
the source is blocked by a firewall or other mechanism thus the user does not have access to it.
the user employs some blocking mechanism like an adblocker or a script blocker extension which means they have opted into preventing requests for certain resources.
In all these cases the result is similar: the user might have access to the full functionality of the page if they have a cached copy of the resources it needs. Otherwise, they cannot use them. Script files will not be executed, stylesheets will not be applied, images will not show, etc.
The way to fix each of these would be different:
For non-working links you need to find a new source or maybe even host it yourself.
For blocked resources, you might need to find a hosting site that is acceptable to the blocking mechanism. Self-hosting might be an option; a minimal fallback sketch follows this list.
If the user is blocking the scripts, they would most likely need to unblock them, although a combination of the above two approaches might also work: hosting on a domain not known for ads might avoid being blocked, and self-hosting might also work. At least in the case of uMatrix, the addon by default blocks all scripts external to the page (with very few exceptions). If the scripts come from the same domain, then uMatrix allows them by default.
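As a sketch of that self-hosting fallback (following the same pattern as the HTML5 Boilerplate jQuery snippet quoted further down; the local path js/leaflet-1.7.1.min.js is hypothetical and assumes you keep your own copy on your server):
<script src="https://unpkg.com/leaflet@1.7.1/dist/leaflet.js" crossorigin=""></script>
<script>
  // Leaflet exposes a global "L"; if the CDN copy failed to load (dead link, firewall, blocker),
  // fall back to a locally hosted copy.
  window.L || document.write('<script src="js/leaflet-1.7.1.min.js">\x3C/script>');
</script>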
If there are not, do people just live with the risk of links breaking down? Or is it best practice to keep a local copy of scripts like https://unpkg.com/leaflet@1.7.1/dist/leaflet.js and source them locally instead?
There are essentially two approaches here. Each with their strengths and downsides. A quick breakdown is:
You can accept externally hosted resources.
Advantage: There are several big content delivery networks (CDNs) such as unpkg or jsDelivr. They are widely used for their reliability. In addition, some libraries might offer their own CDN - jQuery does that, for example. Getting the resource from a CDN saves you bandwidth and space and can also improve load speed, since CDNs might have better speed than your hosting does. Furthermore, for widely used libraries, the CDN copy is likely to already be cached on the user's side from visiting another site that used the same CDN copy as you.
Disadvantage: it does leave you dependent on resources you do not control. Big CDNs are reliable, but you still have no direct control if anything happens. And if you use a smaller CDN (e.g., library-specific or otherwise), then you do not have data about its reliability. If an external link dies, your website will not work until it is fixed, and that might take days - you have to find out about it, figure out what was changed, hopefully find a replacement link, and update the site. If you cannot find a drop-in replacement, you might need to make more changes.
You can host the resources yourself.
Advantage: You are in complete control of when and how things are stored. You can even process the resources in some way to help your application. Scripts can be minified, and images can be scaled and resized into several different sizes to optimise their display in different places (e.g., an icon, a small image, and a full-size image).
Disadvantage: more space taken, more bandwidth taken. Also, now you have to manage all of these resources and make sure they exist, they are available etc.
You can of course also use a mixed approach. Host some resources, use others from an external place. It depends on what you want to do with your application and what level of control you want to retain versus how much extra effort and cost you are willing to take on.
Saying all that, for a lot of small projects it does not matter that much which path is chosen. If you only use a handful of libraries, it matters little whether you use them from a CDN or host them yourself. As long as a reliable CDN provider is chosen, the chance of an outage is acceptably minimal. If you host the resources yourself, chances are they would take up a few hundred kilobytes (if that).
If your project grows and your list of dependencies starts to get bigger and bigger, it might be time to take stock and decide where you host them and how you consume them. There is no single answer to this question; it will likely depend on what you already have. Perhaps your hosting has very little space, or you pay per megabyte downloaded - in that case, external hosting would make more sense. Or perhaps you have a robust storage option and you are confident you can ensure the availability of your application, in which case self-hosting might be preferable.

How to reduce jquery.min.js CPU time?

I've noticed that almost all my browser's Javascript CPU resources get spent on jquery.min.js, specifically loaded from :
http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js
Are there any tools to minimize the resources consumed by JavaScript generally and/or jQuery specifically, without outright blacklisting specific scripts?
I suppose the most obvious approach would be dynamically reducing the number of timer and other events a script receives. In fact, you could probably halt all events to scripts not in the foreground page, except for a specific whitelist of sites you actually want to permit to receive events in the background.
I'm perfectly happy with Javascript performance going way down so long as overall browser performance improves.
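For illustration, the closest page-author-side equivalent of that idea is pausing timers while the tab is in the background, via the Page Visibility API. A minimal sketch (doExpensiveWork and the 500ms interval are hypothetical placeholders):
var pollTimer = null;
function doExpensiveWork() {
  // placeholder for whatever periodic work the page performs
}
function startPolling() {
  if (pollTimer === null) {
    pollTimer = setInterval(doExpensiveWork, 500);
  }
}
function stopPolling() {
  clearInterval(pollTimer);
  pollTimer = null;
}
// Only run the periodic work while the tab is actually visible.
document.addEventListener('visibilitychange', function () {
  if (document.hidden) { stopPolling(); } else { startPolling(); }
});
startPolling();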
It sounds more like there is another script using jQuery to do specific tasks. The jQuery script itself, after loading in the browser, does not to my knowledge use any additional resources after that point in time.
Based on my assumption of what is happening, there is nothing you can do at the moment (specifically because you haven't provided enough information to help).
Change all the getElementsByClassName calls to getElementsByTagName. This will improve the performance drastically, as getElementsByTagName is more efficient.

Lightweight JS Library vs Google-hosted CDN

When page-load speed is the priority, is it better to use a minimal, lightweight javascript library (hosted on a CDN), or is it better to use something like jQuery, hosted on Google's CDN that the browser more than likely already has loaded?
Edit: What my question really boils down to is whether the cross-site caching effect of using jQuery hosted on Google's CDN outweighs the benefits of using an ultra-light library, also on a CDN.
jQuery is not heavy compared to other JavaScript libraries at present, considering the number of features and browsers it supports.
You can consider this factor while selecting the plugins to be used on the page, because they are written by various authors: some may write them intelligently with this factor in mind, and some may just write them for the sake of it.
Yes, if you use a CDN like Google's for jQuery, it is most likely that the library is already cached by the browser, and Google also has a number of servers in different locations, so you don't have to worry about it.
Decreased Latency
A CDN distributes your static content across servers in various, diverse physical locations. When a user’s browser resolves the URL for these files, their download will automatically target the closest available server in the network.
In the case of Google’s AJAX Libraries CDN, what this means is that any users not physically near your server will be able to download jQuery faster than if you force them to download it from your arbitrarily located server.
There are a handful of CDN services comparable to Google’s, but it’s hard to beat the price of free! This benefit alone could decide the issue, but there’s even more.
Increased parallelism
To avoid needlessly overloading servers, browsers limit the number of connections that can be made simultaneously. Depending on which browser, this limit may be as low as two connections per hostname.
Using the Google AJAX Libraries CDN eliminates one request to your site, allowing more of your local content to download in parallel. It doesn't make a gigantic difference for users with a six-concurrent-connection browser, but for those still running a browser that only allows two, the difference is noticeable.
Better caching
Potentially the greatest benefit of using the Google AJAX Libraries CDN is that your users may not need to download jQuery at all.
No matter how well optimized your site is, if you’re hosting jQuery locally then your users must download it at least once. Each of your users probably already has dozens of identical copies of jQuery in their browser’s cache, but those copies of jQuery are ignored when they visit your site.
However, when a browser sees references to CDN-hosted copies of jQuery, it understands that all of those references do refer to the exact same file. With all of these CDN references pointing to exactly the same URLs, the browser can trust that those files truly are identical and won't waste time re-requesting the file if it's already cached. Thus, the browser is able to use a single copy that's cached on-disk, regardless of which site the CDN references appear on.
This creates a potent "cross-site caching" effect which all sites using the CDN benefit from. Since Google's CDN serves the file with headers that attempt to cache the file for up to one year, this effect truly has amazing potential. With many thousands of the most trafficked sites on the Internet already using the Google CDN to serve jQuery, it's quite possible that many of your users will never make a single HTTP request for jQuery when they visit sites using the CDN.
Even if someone visits hundreds of sites using the same Google-hosted version of jQuery, they will only need to download it once!
It's better to use the library that best suits the needs of your application and your development team. A super-lightweight library might save you a few hundred milliseconds of load time, but may end up costing you in development hours if your team has significantly more experience with jQuery/MooTools/Dojo etc.
If new feature implementation and bug fixing is hindered by using a second-rate tool solely to improve load times, your users are ultimately going to suffer.

Serve jQuery UI from Google's CDN or as a local copy?

While it's better to serve jQuery from Google's CDN, jQuery UI is a different beast. My local, modified copy weighs 60kb and the one on Google's CDN ~200kb.
Are there any numbers on how many sites use the CDN? (Read: how many users have it in their cache.) How do I know/calculate whether it's better to serve it locally?
Coming late to the party here, but allowing for gzip compression, you're basically comparing a download of ~51k from Google's CDN (the 197.14k content becomes 51.30k on-the-wire) vs. ~15.5k from your own servers (assuming your 60k file gzips at the same ratio as the full jQuery UI file does, and that you have gzip compression enabled). This takes us into a complex realm of:
pre-existing cache copy
latency
transfer time
number of requests
proper cache headers
And the answer to your question is a big: It depends, try each of them and measure the result in a real world scenario.
Pre-Existing Cache Copy
If a first-time visitor to your site has previously been to a site using jQuery UI from Google's CDN and it's still in their cache, that wins hands down. Full stop. No need to think about it any further. Google uses appropriate caching headers and the browser doesn't even have to send the request to the server, provided you link to a fully-specified version of jQuery UI (not one of the "any version of 1.8.x is fine" URLs — if you ask for jQuery UI 1.8.16, Google will return a resource that can be cached for up to a year, but if you ask for jQuery UI 1.8.x [e.g., any dot rev of 1.8], that resource is only good for an hour).
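For illustration, the difference is just in which URL you reference (the fully specified 1.8.16 URL is the same one used later in this thread; the shorter version-alias form is shown as an assumption of how those alias URLs looked):
<!-- Fully specified version: served with a max-age of about a year -->
<script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"></script>
<!-- Version-alias URL ("any 1.8.x is fine"): only cacheable for about an hour -->
<script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js"></script>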
But let's suppose they haven't...
Latency and Transfer Time
Latency is how long it takes to set up the connection to the server, and transfer time is the time actually spent transferring the resource. Using my DSL connection (I'm not very close to my exchange, so I typically get about 4Mbit throughput on downloads; e.g., it's an okay connection, but nothing like what Londoners get, or those lucky FiOS people in the States), in repeated experiments downloading Google's copy of jQuery UI I typically spend ~50ms waiting for the connection (latency) and then 73ms doing data transfer (SSL would change that profile, but I'm assuming a non-SSL site here). Compare that with downloading Google's copy of jQuery itself (89.52k gzipped to 31.74k), which has the same ~50ms latency followed by ~45ms of downloading. Note how the download time is proportional to the size of the resource (31.74k / 51.30k = 0.61871345, and sure enough, 73ms x 0.61871345 = 45ms), but the latency is constant. So assuming your copy comes in at 15.5k, you could expect (for me) a 50ms latency plus about 22ms of actual downloading. All other things being equal, by hosting your own 60k copy vs. Google's 200k copy, you would save me a whopping 52ms. Let's just say that I wouldn't notice the difference.
All is not equal, however. Google's CDN is highly optimized, location-aware, and very fast indeed. For instance, let's compare downloading jQuery from Heroku.com. I chose them because they're smart people running a significant hosting business (currently using the AWS stack), and so you can expect they've at least spent some time optimizing their delivery of static content — and it happens they use a local copy of jQuery for their website; and they're in the U.S. (you'll see why in a moment). If I download jQuery from them (shockingly, they don't appear to have gzip enabled!), the latency is consistently in the 135ms range (with occasional outliers). That's consistently 2.7 times as much latency as to Google's CDN (and my throughput from them is slower, too, roughly half the speed; perhaps they only use AWS instances in the U.S., and since I'm in the UK I'm further away from them).
The point here being that latency may well wash out any benefit you get from the smaller file size.
Number of Requests
If you have any JavaScript files you're going to host locally, your users are still going to have to get those. Say you have 100k of your own script for your site. If you use Google's CDN, your users have to get 200k of jQuery UI from Google and 100k of your script from you. The browser may put those requests in parallel (barring your using async or defer on your script tags, the browser has to execute the scripts in strict document order, but that doesn't mean it can't download them in parallel). Or it may well not.
As we've established that for non-mobile users, at these sizes the actual data transfer time doesn't really matter that much, you may find that taking your local jQuery UI file and combining it with your own script, thus requiring only one download rather than two, may be more efficient even despite the Google CDN goodness.
This is the old "At most one HTML file, one CSS file, and one JavaScript file" rule. Minimizing HTTP requests is a Good Thing™. Similarly, if you can use sprites rather than individual images for various things, that helps keep image requests down.
Proper Cache Headers
If you're hosting your own script, you'll want to be absolutely sure it's cacheable, which means paying attention to the cache headers. Google's CDN basically doesn't trust HTTP/1.0 caches (it sets the Expires header to the current date/time), but does trust HTTP/1.1 caches — the overwhelming majority — because it sends a max-age header (of a year for fully-specified resources). I'm guessing they have a reason for that, you might consider following suit.
Since you want to change your own scripts sometimes, you'll want to put a version number on them, e.g. "my-nifty-script-1.js" and then "my-nifty-script-2.js", etc. That's so you can set long max-age headers, but know that when you update your script, your users will get the new one. (This goes for CSS files, too.) Do not use the query string for the versioning, put the number actually in the resource name.
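A small illustration of that versioning approach (the file names are the hypothetical ones from the paragraph above):
<!-- A long max-age is safe because the file name changes whenever the content does -->
<script src="js/my-nifty-script-1.js"></script>
<!-- After an update, reference the new name; clients fetch it even if the old one is still cached -->
<script src="js/my-nifty-script-2.js"></script>
<!-- Avoid query-string versioning such as my-nifty-script.js?v=2; some caches handle it poorly -->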
Since your HTML presumably changes regularly, you probably want short expirations on that, but of course it totally depends on your content.
Conclusion
It depends. If you don't want to combine your script with your local copy of jQuery UI, you're probably better off using Google for jQuery UI. If you're happy to combine them, you'll want to do real-world experiments either way to make your own decision. It's entirely possible other factors will wash this out and it won't really matter. If you haven't already, it's worth reviewing Yahoo's and Google's website speed advice pages:
Yahoo! Best Practices for Speeding Up Your Website
Google's Let's make the web faster site
Google's CDN of jquery UI weighs in at 51 Kb:
https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js
The HTML5 Boilerplate uses a fallback for jquery loading:
<!-- Grab Google CDN's jQuery, with a protocol relative URL; fall back to local if necessary -->
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.js"></script>
<script>window.jQuery || document.write('<script src="js/libs/jquery-1.5.1.min.js">\x3C/script>')</script>
You can apply it to jquery ui:
<script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"></script>
<script>window.jQuery.ui || document.write('<script src="js/jquery-ui-1.8.16.min.js">\x3C/script>')</script>
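(The \x3C in these snippets is an escaped <, so the inline string doesn't prematurely close the surrounding <script> element.)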
You load the CDN version, then check for the existence of jQuery UI (you can't guarantee 100% uptime for any CDN). If jQuery UI doesn't exist, fall back to your local copy. In this way, if they already have it in their cache, you are good to go. If they don't, and the CDN can't be reached for any reason, you're good to go with your local copy. Fail-safe.
I think size comparisons miss the point of the CDN. By serving a copy of jQuery (or other library) from a public, commonly-used CDN, many users will have a cached copy of the library before they arrive at your site. When they do, the effective size of the download is 0KB compared to 60KB from your server.
Google's CDN is the most widely used, so you will have the best chance of a cache hit if you reference it.
For numbers comparing the various CDNs please see this article.
For what it's worth, the minified version of Google's jQuery copy is much smaller than the size you mentioned.
I would say what matters is the load you have on your server. For the user it doesn't really matter whether they are downloading it from your server or from Google's server. These days there is enough bandwidth for 140kb to be easy to ignore on the user's side.
Now the real question is whether you made changes to jQuery UI. If yes, then you should serve your own copy. If not, then it's fine to serve Google's, because, after all, what you are aiming for is to lower the load on your side.
And besides, the caching doesn't happen just in the user's browser, but also on the content distribution nodes they are accessing. So it's safe to say that Google's copy is almost certainly cached.
With sizes this small, what matters is number of http requests for a first-time visitor to your site.
If, for example, your site has script combining and minification configured so the entire script for a first-time visitor is either one request or included in the HTML itself, using your local copy is better, because even a cached copy of jQuery UI isn't faster than all the script for the site showing up at once (the cached call still has to go out and check If-Modified-Since).
If you don't have a good script combining and minification setup (so you were going to send jqueryui separately, either from your site or elsewhere), use outside caches wherever possible.

When should I use Inline vs. External Javascript?

I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance.
What is the general practice for this?
Real-world scenario: I have several HTML pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I:
write the bits of code that configure this script inline?
include all the bits in one file that's shared among all these HTML pages?
include each bit in a separate external file, one for each HTML page?
Thanks.
At the time this answer was originally posted (2008), the rule was simple: All script should be external. Both for maintenance and performance.
(Why performance? Because if the code is separate, it can be cached more easily by browsers.)
JavaScript doesn't belong in the HTML code and if it contains special characters (such as <, >) it even creates problems.
Nowadays, web scalability has changed. Reducing the number of requests has become a valid consideration due to the latency of making multiple HTTP requests. This makes the answer more complex: in most cases, having JavaScript external is still recommended. But for certain cases, especially very small pieces of code, inlining them into the site’s HTML makes sense.
Maintainability is definitely a reason to keep them external, but if the configuration is a one-liner (or in general shorter than the HTTP overhead you would incur by making those files external), it's better performance-wise to keep it inline. Always remember that each HTTP request generates some overhead in terms of execution time and traffic.
Naturally this all becomes irrelevant the moment your code is longer than a couple of lines and is not really specific to one single page. The moment you want to be able to reuse that code, make it external. If you don't, look at its size and decide then.
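For the validation scenario in the question, that split might look like this (a sketch assuming the jQuery Validation plugin and that jQuery itself is already loaded; the file name, form id and rules are made up):
<!-- The shared plugin stays external so every page reuses the cached copy -->
<script src="js/jquery.validate.min.js"></script>
<!-- The page-specific, one-line configuration is small enough to inline -->
<script>
  $(function () {
    $('#signup-form').validate({ rules: { email: { required: true, email: true } } });
  });
</script>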
If you only care about performance, most of the advice in this thread is flat out wrong, and is becoming more and more wrong in the SPA era, where we can assume that the page is useless without the JS code. I've spent countless hours optimizing SPA page load times and verifying these results with different browsers. Across the board, the performance increase from re-orchestrating your HTML can be quite dramatic.
To get the best performance, you have to think of pages as two-stage rockets. These two stages roughly correspond to <head> and <body> phases, but think of them instead as <static> and <dynamic>. The static portion is basically a string constant which you shove down the response pipe as fast as you possibly can. This can be a little tricky if you use a lot of middleware that sets cookies (these need to be set before sending http content), but in principle it's just flushing the response buffer, hopefully before jumping into some templating code (razor, php, etc) on the server. This may sound difficult, but then I'm just explaining it wrong, because it's near trivial. As you may have guessed, this static portion should contain all javascript inlined and minified. It would look something like
<!DOCTYPE html>
<html>
<head>
<script>/*...inlined jquery, angular, your code*/</script>
<style>/* ditto css */</style>
</head>
<body>
<!-- inline all your templates, if applicable -->
<script type='template-mime' id='1'></script>
<script type='template-mime' id='2'></script>
<script type='template-mime' id='3'></script>
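<!-- ...the server-rendered <dynamic> portion of the page follows from here -->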
Since it costs you next to nothing to send this portion down the wire, you can expect that the client will start receiving it somewhere around 5ms + latency after connecting to your server. Assuming the server is reasonably close, this latency could be between 20ms and 60ms. Browsers will start processing this section as soon as they get it, and the processing time will normally dominate transfer time by a factor of 20 or more, which is now your amortized window for server-side processing of the <dynamic> portion.
It takes about 50ms for the browser (chrome, rest maybe 20% slower) to process inline jquery + signalr + angular + ng animate + ng touch + ng routes + lodash. That's pretty amazing in and of itself. Most web apps have less code than all those popular libraries put together, but let's say you have just as much, so we would win latency+100ms of processing on the client (this latency win comes from the second transfer chunk). By the time the second chunk arrives, we've processed all js code and templates and we can start executing dom transforms.
You may object that this method is orthogonal to the inlining concept, but it isn't. If you, instead of inlining, link to CDNs or your own servers, the browser would have to open another connection (or several) and delay execution. Since this execution time is basically free (as the server side is talking to the database), it must be clear that all of these jumps would cost more than doing no jumps at all. If there were a browser quirk that said external JS executes faster, we could measure which factor dominates. My measurements indicate that extra requests kill performance at this stage.
I work a lot with optimization of SPA apps. It's common for people to think that data volume is a big deal, while in truth latency, and execution often dominate. The minified libraries I listed add up to 300kb of data, and that's just 68 kb gzipped, or 200ms download on a 2mbit 3g/4g phone, which is exactly the latency it would take on the same phone to check IF it had the same data in its cache already, even if it was proxy cached, because the mobile latency tax (phone-to-tower-latency) still applies. Meanwhile, desktop connections that have lower first-hop latency typically have higher bandwidth anyway.
In short, right now (2014), it's best to inline all scripts, styles and templates.
EDIT (MAY 2016)
As JS applications continue to grow, and some of my payloads now stack up to 3+ megabytes of minified code, it's becoming obvious that at the very least common libraries should no longer be inlined.
Externalizing JavaScript is one of Yahoo's performance rules:
http://developer.yahoo.com/performance/rules.html#external
While the hard-and-fast rule that you should always externalize scripts will generally be a good bet, in some cases you may want to inline some of the scripts and styles. You should however only inline things that you know will improve performance (because you've measured this).
I think the only defensible case for inline script is a short script that is specific to a single page.
Actually, there's a pretty solid case for using inline JavaScript. If the JS is small enough (a one-liner), I tend to prefer it inline because of two factors:
Locality. There's no need to navigate to an external file to validate the behaviour of some JavaScript.
AJAX. If you're refreshing some section of the page via AJAX, you may lose all of your DOM handlers (onclick, etc.) for that section, depending on how you bound them. For example, using jQuery you can use the live or delegate methods to circumvent this, but I find that if the JS is small enough, it is preferable to just put it inline.
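For reference, a minimal sketch of that delegation approach (the element ids are made up; live/delegate were later superseded by .on() with a selector argument in jQuery 1.7+):
// Bind on a stable ancestor so the handler survives AJAX replacement of #results' contents.
$('#results').on('click', '.item', function () {
  // Fires for .item elements present now and for ones inserted by later AJAX refreshes.
});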
Another reason why you should always use external scripts is for easier transition to Content Security Policy (CSP). CSP defaults forbid all inline script, making your site more resistant to XSS attacks.
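For example, a policy along these lines blocks inline script while still allowing your own external files and a CDN (the sources listed are placeholders for whatever your site actually uses):
<!-- Inline <script> blocks are refused unless 'unsafe-inline' (or a nonce/hash) is added -->
<meta http-equiv="Content-Security-Policy"
      content="script-src 'self' https://ajax.googleapis.com">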
I would take a look at the required code and divide it into as many separate files as needed. Every JS file would only hold one "logical set" of functions, e.g. one file for all login-related functions.
Then, during site development, on each HTML page you only include those that are needed.
When you go live with your site you can optimize by combining every js file a page needs into one file.
The only defense I can offer for inline JavaScript is that when using strongly typed views with .NET MVC, you can refer to C# variables mid-JavaScript, which I've found useful.
On the point of keeping JavaScript external:
ASP.NET 3.5 SP1 recently introduced functionality to create a composite script resource (merging a bunch of JS files into one). Another benefit is that when web server compression is turned on, downloading one slightly larger file will have a better compression ratio than many smaller files (also less HTTP overhead, fewer round trips, etc.). I guess this saves on the initial page load; then browser caching kicks in, as mentioned above.
ASP.NET aside, this screencast explains the benefits in more detail:
http://www.asp.net/learn/3.5-SP1/video-296.aspx
Three considerations:
How much code do you need (sometimes libraries are a first-class consumer)?
Specificity: is this code only functional in the context of this specific document or element?
Any code inside the document makes it longer and thus slower to load. Besides that, SEO considerations make it advisable to minimize internal scripting.
External scripts are also easier to debug using Firebug. I like to unit test my JavaScript, and having it all external helps. I hate seeing JavaScript mixed into PHP code and HTML; it looks like a big mess to me.
Another hidden benefit of external scripts is that you can easily run them through a syntax checker like jslint. That can save you from a lot of heartbreaking, hard-to-find, IE6 bugs.
In your scenario it sounds like writing the external stuff in one file shared among the pages would be good for you. I agree with everything said above.
During early prototyping keep your code inline for the benefit of fast iteration, but be sure to make it all external by the time you reach production.
I'd even dare to say that if you can't place all your JavaScript externally, then you have a bad design on your hands, and you should refactor your data and scripts.
Google has included load times in its page-ranking measurements. If you inline a lot, it will take longer for the spiders to crawl through your page; this may influence your page ranking if you have too much included. In any case, different strategies may have an influence on your ranking.
Well, I think you should use inline scripts when making single-page websites, as the scripts will not need to be shared across multiple pages.
Internal JS pros:
It's easier to manage and debug
You can see what's happening
Internal JS cons:
People can change it around, which can really annoy you.
External JS pros:
No changing around
You can look more professional (or at least that's what I think)
External JS cons:
Harder to manage
It's hard to know what's going on.
Always try to use external JS, as inline JS is difficult to maintain.
Moreover, keeping JS external is the professional approach, since the majority of developers recommend it.
I myself use external JS.
