I've searched far and wide, but I haven't found any source stating the pros or cons of these two methods, or what the "best" way to enqueue scripts is in WordPress with regard to performance and compatibility.
So, my question is the following:
What are the pros and cons of placing scripts inside a <script> tag in the footer in WordPress, compared to using wp_enqueue_script()?
My thoughts and an example script
I'm taking page/load speed very much into consideration, so I've followed the various "speed optimization" tips about reducing HTTP requests as far as possible. Because of that, I've mostly placed scripts "inline" (inside a <script> tag) in the footer, either by using the wp_footer() hook (a sketch of that approach follows the script below) or by placing them directly in the footer.php file. Most of my scripts are small pieces of JavaScript that accumulate over time, though. An example is shown below; I have many similarly sized snippets, as well as some smaller and some larger ones, probably amounting to about 200-300 lines of code in total (not minified).
So, with regard to performance, I'm guessing this is preferable, although the impact of one extra HTTP request for loading the script as a file might be completely insignificant. However, this is also where I'm starting to wonder: would it be better to load this JavaScript/jQuery from an extra file, asynchronously?
Furthermore, I wonder whether it would be better to load the script earlier in the page to avoid any window of missing functionality, compared to loading it as a file. I would rather not load it in the <head>, as jQuery needs to load first. At the same time, I'm in doubt whether it actually makes a difference, considering how fast an optimized page loads nowadays.
jQuery('.menu-toggle').click(function () {
    jQuery('#site-navigation .menu').slideToggle('slow');
    // Collapse any open submenus whenever the menu is toggled.
    if (jQuery('#site-navigation .menu .children').is(':visible')) {
        jQuery('#site-navigation .menu .children').slideUp();
    }
});

jQuery('.searchform').click(function (e) {
    jQuery('.selected').removeClass('selected');
    jQuery(this).parent('div').addClass('selected');
    e.stopPropagation();
    if (jQuery('#search-dropdown-content').hasClass('selected')) {
        jQuery('#search-dropdown').css('color', '#B87E28');
    }
});
What are your thoughts on this, and do you have more considerations or pros/cons?
Regards
It depends.
Firstly, there's of course the option to use wp_enqueue_script() and have your scripts printed inline in the footer, if you don't mind getting your hands dirty (or find a plugin to do it for you).
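For the record, a minimal sketch of the enqueue route (the handle and path are made up; wp_add_inline_script() requires WP 4.5+):

function mytheme_scripts() {
    // Load the theme script in the footer (last argument true),
    // declaring jQuery as a dependency so ordering is handled for you.
    wp_enqueue_script(
        'mytheme-nav',
        get_template_directory_uri() . '/js/nav.min.js',
        array( 'jquery' ),
        '1.0',
        true
    );
    // Small snippets can be printed inline right after that handle,
    // instead of being requested as a separate file:
    wp_add_inline_script( 'mytheme-nav', 'window.myThemeReady = true;' );
}
add_action( 'wp_enqueue_scripts', 'mytheme_scripts' );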
As for "which should I do", I'd say it depends, mostly on a) the kind of site you run and b) who your visitors are.
Is it a site where you don't expect many visitors to return (e.g. affiliate landing page), and you don't expect your visitors to have multiple page views ("I need to know this specific information. OK, thanks, bye")? Inline everything.
Is it a site where your users will return regularly, and will hit multiple pages? Then you really want to take advantage of caching static resources, as it will save bandwidth and you will require less server power.
Are your users primarily tech-savvy people on high-bandwidth landlines? You might profit quite a bit from using Google's (or another big player's) CDN, because they've probably encountered the jQuery lib from that CDN before and have it cached.
Are your users primarily mobile, possibly on the move, i.e. on high-latency, unstable connections? Every round trip will hurt, performance-wise, and inlining will be (and feel) much quicker.
If you don't load any JS externally and have all of it in your footer, you shouldn't experience any missing functionality, unless you hit very specific circumstances (e.g. a mobile user who has been throttled to GPRS and is now crawling along at 10 kB/s), in which case nothing will work regardless of what you do, unless the user waits a long time for everything to complete.
My advice: build both options (hey, it'll also give you new skills) and compare them under the scenarios you envision your users to be in. Use your browser's dev tools; they can usually simulate a slower connection and may be able to simulate a bad mobile connection. And don't consider it a final decision: as usage patterns change, the other option might become the better choice, so keep your mind (and an eye) open.
I use jQuery. Although I'd like to think of myself as a fairly good programmer in general, and specifically at JS, I don't trust my understanding of the DOM API and its variable behavior across browsers. Hence jQuery.
I use a small subset of jQuery, though:
1) Ajax
2) event handlers
3) Selectors/find/child/parent
I don't use anything else: no filters, no UI events, nothing! (Okay, fine: slideUp/slideDown, but I could do that myself with CSS.)
Are there any existing command-line or browser-based tools that would do a static analysis of my jQuery usage, so that I don't have to force the user to download the full 100 KB? If you're suggesting I do it myself: thank you, doing it manually would be the next step, and if there seems to be a lot of interest, I might consider writing such a tool.
Re: CDN: thanks for your suggestions; please see my comment to @Jonathan.
You can take individual functions from the GitHub repository, but since there are various dependencies, you will not save as much as you think. Instead of the 100 KB uncompressed development version, you'll do better using the 32 KB minified version from http://jquery.com/download/.
There are also three good reasons to use jQuery from Google's CDN (<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>; a local-fallback sketch follows the three reasons below):
1. Decreased Latency
In the case of Google’s AJAX Libraries CDN, what this means is that any users not physically near your server will be able to download jQuery faster than if you force them to download it from your arbitrarily located server.
2. Increased parallelism
To avoid needlessly overloading servers, browsers limit the number of connections that can be made simultaneously. Depending on which browser, this limit may be as low as two connections per hostname.
3. Better caching
[W]hen a browser sees references to CDN-hosted copies of jQuery, it understands that all of those references do refer to the exact same file. With all of these CDN references pointing to exactly the same URLs, the browser can trust that those files truly are identical and won't waste time re-requesting the file if it's already cached. Thus, the browser is able to use a single copy that's cached on-disk, regardless of which site the CDN references appear on.
Source of excerpt and further reading: http://encosia.com/3-reasons-why-you-should-let-google-host-jquery-for-you/
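If you go the CDN route, a common belt-and-braces addition is a local fallback for when the CDN is unreachable; a sketch (the local path is an assumption):

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script>
// If the CDN failed, window.jQuery is undefined; fall back to a local copy.
window.jQuery || document.write('<script src="/js/jquery-1.11.1.min.js"><\/script>');
</script>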
You can just use Google Hosted Libraries and stop caring about jQuery's size.
Most users' browsers will have it in cache already, so it's beneficial for both you and the user.
I've noticed that almost all of my browser's JavaScript CPU resources get spent on jquery.min.js, specifically the copy loaded from:
http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js
Are there any tools to minimize the resources consumed by JavaScript in general, and/or jQuery specifically, without outright blacklisting specific scripts?
I suppose the most obvious approach would be to dynamically reduce the number of timer and other events a script receives. In fact, you could probably halt all events to scripts not in the foreground page, except for a specific whitelist of sites you actually want to receive events in the background (a sketch of the page-author side of this idea follows below).
I'm perfectly happy with Javascript performance going way down so long as overall browser performance improves.
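To illustrate, the page-author side of that idea would look roughly like this with the Page Visibility API (a sketch; poll() stands in for whatever periodic work a script does):

var timer = null;

function startPolling() {
    if (timer === null) {
        timer = setInterval(poll, 1000); // poll() is a placeholder for the page's periodic work
    }
}

function stopPolling() {
    clearInterval(timer);
    timer = null;
}

// Stop firing timer events while the tab is in the background,
// and resume when it becomes the foreground page again.
document.addEventListener('visibilitychange', function () {
    if (document.hidden) {
        stopPolling();
    } else {
        startPolling();
    }
});

startPolling();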
It sounds more like there is another script using jQuery to do specific tasks. To my knowledge, the jQuery script itself does not consume additional resources after it has loaded in the browser.
Based on my assumption of what is happening, there is nothing you can do at the moment (specifically because you haven't provided enough information to help).
Change all the getElementsByClassName calls to getElementsByTagName. This will improve performance drastically, as getElementsByTagName is more efficient.
This post will probably need some modification. I'll do my best to explain...
Basically, as a tester, I have noticed that programmers who use template-based web back ends sometimes push a lot of stuff into onload handlers that then do things like load menu items, change display values in forms, etc.
For example, a page that displays your network configuration loads blank (or dummy values) for the IP info, then loads a block of variables in an onload function that sets the values when the page is rendered.
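A contrived sketch of the pattern being criticized: the markup ships with a dummy value and an onload handler swaps in the real one (the ID and values are made up):

<span id="ip-address">0.0.0.0</span>
<script>
// The real value only appears after the page has rendered once with the dummy.
window.onload = function () {
    document.getElementById('ip-address').innerHTML = '192.168.1.42';
};
</script>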
My experience (and gut feeling) is that this is a really bad practice, for a couple of reasons.
1- If the page is displayed in an environment where JavaScript is off (such as via "Send Page"), it will not display properly in that environment.
2- The HTML page becomes very hard to diagnose, because what is actually on screen needs to be pieced together by executing the JavaScript in your head (this problem is less prominent in Firefox because of Firebug).
3- Most of the time, this is not being done via a standard practice or feature of the environment. In other words, there isn't a service on the back end; the back-end code looks just as spaghetti-like as the resulting HTML.
and, not really a reason, more a correlation:
I have noticed that most coders who do this are generally the coders with a lot of code-related bugs or critical integration bugs.
So, I'm not saying we shouldn't use JavaScript. What I'm saying is: when you produce a page dynamically, the dynamic behavior should be isolated to the back end, and you should avoid changing the displayed information after the page has loaded and rendered.
I think what you're saying is what we should be doing is Progressive Enhancement with JavaScript.
Also related: Progressive Enhancement with CSS, Understanding Progressive Enhancement and Test-Driven Progressive Enhancement.
So the actual question is: what are the advantages/disadvantages of JavaScript content generation?
Here's one: a lot of the things designers want are hard in straight HTML/CSS, or not fully supported, e.g. using jQuery to do zebra tables with ":odd". Sometimes the server-side framework doesn't have a good way to accomplish this, so the cleanest code actually comes from splitting it up like that (see the sketch below).
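For example, zebra striping is a one-liner in jQuery when the server side can't interleave row classes (the selector and class name are assumptions):

// Add an "alt" class to every second row for striped styling.
jQuery('table.report tr:odd').addClass('alt');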
I would like to know when I should include external scripts and when I should write them inline with the HTML code, in terms of performance and ease of maintenance.
What is the general practice for this?
Real-world scenario: I have several HTML pages that need client-side form validation. For this I use a jQuery plugin that I include on all of these pages. But the question is, do I:
write the bits of code that configure this script inline?
include all the bits in one file that's shared among all these HTML pages?
include each bit in a separate external file, one for each HTML page?
Thanks.
At the time this answer was originally posted (2008), the rule was simple: All script should be external. Both for maintenance and performance.
(Why performance? Because if the code is separate, it can easier be cached by browsers.)
JavaScript doesn't belong in the HTML code, and if it contains special characters (such as <, >), it even creates problems.
Nowadays, web scalability has changed. Reducing the number of requests has become a valid consideration due to the latency of making multiple HTTP requests. This makes the answer more complex: in most cases, having JavaScript external is still recommended. But for certain cases, especially very small pieces of code, inlining them into the site’s HTML makes sense.
Maintainability is definitely a reason to keep them external, but if the configuration is a one-liner (or, in general, shorter than the HTTP overhead of making the file external), it's better performance-wise to keep it inline. Always remember that each HTTP request generates some overhead in terms of execution time and traffic.
Naturally, this all becomes irrelevant the moment your code is longer than a couple of lines and not really specific to one single page. The moment you want to be able to reuse the code, make it external. If you don't, look at its size and decide then (an illustration of the one-liner case follows).
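To illustrate the one-liner case: page-specific configuration like the following is cheaper inline than as a separate request (the function and option are hypothetical):

<script>
// Tiny, page-specific and non-reusable: not worth its own HTTP request.
initGallery({ startAt: 3 });
</script>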
If you only care about performance, most of the advice in this thread is flat-out wrong, and it is becoming more and more wrong in the SPA era, where we can assume that the page is useless without the JS code. I've spent countless hours optimizing SPA page-load times and verifying these results with different browsers. Across the board, the performance increase from re-orchestrating your HTML can be quite dramatic.
To get the best performance, you have to think of pages as two-stage rockets. These two stages roughly correspond to the <head> and <body> phases, but think of them instead as <static> and <dynamic>. The static portion is basically a string constant that you shove down the response pipe as fast as you possibly can. This can be a little tricky if you use a lot of middleware that sets cookies (these need to be set before sending HTTP content), but in principle it's just flushing the response buffer, ideally before jumping into any templating code (Razor, PHP, etc.) on the server. That may sound difficult, but it's near trivial. As you may have guessed, this static portion should contain all JavaScript, inlined and minified. It would look something like:
<!DOCTYPE html>
<html>
<head>
    <script>/*...inlined jquery, angular, your code*/</script>
    <style>/* ditto css */</style>
</head>
<body>
    <!-- inline all your templates, if applicable -->
    <script type='template-mime' id='1'></script>
    <script type='template-mime' id='2'></script>
    <script type='template-mime' id='3'></script>
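    <!-- ...the <dynamic> portion (server-rendered content plus the closing </body></html>) is flushed later as the second stage -->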
Since it costs you next to nothing to send this portion down the wire, you can expect the client to start receiving it somewhere around 5 ms + latency after connecting to your server. Assuming the server is reasonably close, that latency could be between 20 ms and 60 ms. Browsers will start processing this section as soon as they get it, and processing time will normally dominate transfer time by a factor of 20 or more, which is now your amortized window for server-side processing of the <dynamic> portion.
It takes about 50 ms for the browser (Chrome; the rest are maybe 20% slower) to process inlined jQuery + SignalR + Angular + ngAnimate + ngTouch + ngRoute + lodash. That's pretty amazing in and of itself. Most web apps have less code than all those popular libraries put together, but let's say you have just as much; then we win latency + 100 ms of processing on the client (this latency win comes from the second transfer chunk). By the time the second chunk arrives, we've processed all the JS code and templates, and we can start executing DOM transforms.
You may object that this method is orthogonal to the inlining concept, but it isn't. If you link to CDNs or your own servers instead of inlining, the browser has to open another connection (or several) and delay execution. Since the execution window is basically free (the server side is busy talking to the database), it should be clear that all of these jumps cost more than making no jumps at all. If there were a browser quirk that made external JS execute faster, we could measure which factor dominates. My measurements indicate that extra requests kill performance at this stage.
I work a lot on optimizing SPA apps. It's common for people to think that data volume is a big deal, while in truth latency and execution often dominate. The minified libraries I listed add up to 300 KB of data, but that's just 68 KB gzipped, or 200 ms of download on a 2 Mbit 3G/4G phone, which is exactly the latency it would take on the same phone to check IF it had the same data in its cache already, even if it were proxy cached, because the mobile latency tax (phone-to-tower latency) still applies. Meanwhile, desktop connections with lower first-hop latency typically have higher bandwidth anyway.
In short, right now (2014), it's best to inline all scripts, styles and templates.
EDIT (MAY 2016)
As JS applications continue to grow, and some of my payloads now stack up to 3+ megabytes of minified code, it's becoming obvious that at the very least common libraries should no longer be inlined.
Externalizing JavaScript is one of Yahoo's performance rules:
http://developer.yahoo.com/performance/rules.html#external
While the hard-and-fast rule that you should always externalize scripts is generally a good bet, in some cases you may want to inline some of the scripts and styles. You should, however, only inline things you know will improve performance (because you've measured it).
I think the short script that is specific to one page is the only defensible case for inline script.
Actually, there's a pretty solid case for inline JavaScript. If the JS is small enough (a one-liner), I tend to prefer it inline because of two factors:
Locality. There's no need to navigate to an external file to validate the behaviour of some small piece of JavaScript.
AJAX. If you're refreshing some section of the page via AJAX, you may lose all of your DOM handlers (onclick, etc.) for that section, depending on how you bound them. For example, with jQuery you can use the live or delegate methods to circumvent this, but I find that if the JS is small enough, it's preferable to just put it inline (see the delegation sketch below).
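A sketch of the delegation approach for the AJAX case (live is deprecated, and in jQuery 1.7+ both are subsumed by .on(); the selectors here are made up):

// The handler lives on a stable ancestor, so rows swapped in via AJAX
// still trigger it without any re-binding.
jQuery('#results').on('click', '.row', function () {
    jQuery(this).toggleClass('selected');
});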
Another reason why you should always use external scripts is for easier transition to Content Security Policy (CSP). CSP defaults forbid all inline script, making your site more resistant to XSS attacks.
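For example, a policy like the following permits scripts only from your own origin and Google's CDN, and refuses inline <script> blocks by default (adjust the sources to your site):

Content-Security-Policy: script-src 'self' https://ajax.googleapis.com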
I would take a look at the required code and divide it into as many separate files as needed. Every JS file would hold only one "logical set" of functions, e.g. one file for all login-related functions.
Then, during site development, each HTML page includes only the files it needs.
When you go live, you can optimize by combining every JS file a page needs into one file.
The only defense I can offer for inline JavaScript is that, when using strongly typed views with .NET MVC, you can reference C# variables mid-JavaScript, which I've found useful.
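A sketch of what is meant (a Razor view; Model.UserId is a hypothetical property of the strongly typed model):

<script>
    // The server interpolates the C# value into the inline script at render time.
    var userId = @Model.UserId;
    console.log('Current user: ' + userId);
</script>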
On the point of keeping JavaScript external:
ASP.NET 3.5 SP1 recently introduced functionality to create a composite script resource (merging a bunch of JS files into one). Another benefit is that when web-server compression is turned on, downloading one slightly larger file yields a better compression ratio than many smaller files (plus less HTTP overhead, fewer round trips, etc.). I guess this saves on the initial page load; then browser caching kicks in, as mentioned above.
ASP.NET aside, this screencast explains the benefits in more detail:
http://www.asp.net/learn/3.5-SP1/video-296.aspx
Three considerations:
How much code do you need (sometimes libraries are a first-class consumer)?
Specificity: is this code only functional in the context of this specific document or element?
Code inside the document makes it longer and thus slower to load. Besides that, SEO considerations make it obvious that you should minimize internal scripting.
External scripts are also easier to debug using Firebug. I like to unit test my JavaScript, and having it all external helps. I hate seeing JavaScript mixed into PHP code and HTML; it looks like a big mess to me.
Another hidden benefit of external scripts is that you can easily run them through a syntax checker like JSLint. That can save you from a lot of heartbreaking, hard-to-find IE6 bugs.
In your scenario it sounds like writing the external stuff in one file shared among the pages would be good for you. I agree with everything said above.
During early prototyping keep your code inline for the benefit of fast iteration, but be sure to make it all external by the time you reach production.
I'd even dare to say that if you can't place all your JavaScript externally, you have a bad design on your hands, and you should refactor your data and scripts.
Google has included load time in its page-ranking measurements. If you inline a lot, it will take longer for the spiders to crawl through your page, and this may influence your ranking if you have too much inlined. In any case, different strategies may influence your ranking.
Well, I think you should inline scripts when making single-page websites, as the scripts won't need to be shared across multiple pages.
Internal JS pros:
It's easier to manage & debug
You can see what's happening
Internal JS cons:
People can change it around, which really can annoy you.
External JS pros:
No changing around
You can look more professional (or at least that's what I think)
External JS cons:
Harder to manage
It's hard to know what's going on
Always try to use external JS, as inline JS is difficult to maintain.
Moreover, the majority of developers recommend using JS externally, so it's the professional norm.
I myself use external JS.