When should I use Inline vs. External JavaScript? - javascript

I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance.
What is the general practice for this?
Real-world scenario: I have several HTML pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I:
write the bits of code that configure this script inline?
include all bits in one file that's shared among all these HTML pages?
include each bit in a separate external file, one for each HTML page?
Thanks.

At the time this answer was originally posted (2008), the rule was simple: All script should be external. Both for maintenance and performance.
(Why performance? Because if the code is separate, it can be cached more easily by browsers.)
JavaScript doesn't belong in the HTML code, and if it contains special characters (such as <, >) it can even create problems.
Nowadays, web scalability has changed. Reducing the number of requests has become a valid consideration due to the latency of making multiple HTTP requests. This makes the answer more complex: in most cases, having JavaScript external is still recommended. But for certain cases, especially very small pieces of code, inlining them into the site’s HTML makes sense.

Maintainability is definitely a reason to keep them external, but if the configuration is a one-liner (or in general shorter than the HTTP overhead you would get for making those files external) it's performance-wise better to keep them inline. Always remember that each HTTP request generates some overhead in terms of execution time and traffic.
Naturally this all becomes irrelevant the moment your code is longer than a couple of lines and is not really specific to one single page. The moment you want to be able to reuse that code, make it external. If you don't, look at its size and decide then.
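Applied to the asker's scenario, that could look like the snippet below; validate.min.js and initValidation() are hypothetical names standing in for the real plugin:

<!-- shared by all pages; cached by the browser after the first request -->
<script src="/js/validate.min.js"></script>
<!-- page-specific one-liner: cheaper inline than as yet another request -->
<script>
  initValidation({ form: '#signup', rules: { email: 'required' } });
</script>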

If you only care about performance, most of the advice in this thread is flat-out wrong, and is becoming more and more wrong in the SPA era, where we can assume that the page is useless without the JS code. I've spent countless hours optimizing SPA page load times and verifying these results with different browsers. Across the board, the performance increase from re-orchestrating your HTML can be quite dramatic.
To get the best performance, you have to think of pages as two-stage rockets. These two stages roughly correspond to <head> and <body> phases, but think of them instead as <static> and <dynamic>. The static portion is basically a string constant which you shove down the response pipe as fast as you possibly can. This can be a little tricky if you use a lot of middleware that sets cookies (these need to be set before sending HTTP content), but in principle it's just flushing the response buffer, hopefully before jumping into some templating code (Razor, PHP, etc.) on the server. That may sound difficult, but then I'm just explaining it wrong, because it's near trivial. As you may have guessed, this static portion should contain all JavaScript, inlined and minified. It would look something like:
<!DOCTYPE html>
<html>
<head>
<script>/*...inlined jquery, angular, your code*/</script>
<style>/* ditto css */</style>
</head>
<body>
<!-- inline all your templates, if applicable -->
<script type='template-mime' id='1'></script>
<script type='template-mime' id='2'></script>
<script type='template-mime' id='3'></script>
Since it costs you next to nothing to send this portion down the wire, you can expect that the client will start receiving it somewhere around 5 ms + latency after connecting to your server. Assuming the server is reasonably close, this latency could be between 20 ms and 60 ms. Browsers will start processing this section as soon as they get it, and the processing time will normally dominate the transfer time by a factor of 20 or more, which is now your amortized window for server-side processing of the <dynamic> portion.
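To make the flush-early idea concrete, here is a minimal Node.js sketch; renderDynamicPortion() is a hypothetical stand-in for your real templating/database work:

const http = require('http');

// The <static> stage: one string constant, prepared once at startup,
// with all JS, CSS and templates inlined and minified.
const staticHead =
  '<!DOCTYPE html><html><head><script>/*...*/</script><style>/*...*/</style></head><body>';

http.createServer(function (req, res) {
  // Headers (and any cookies) must be set before the first write.
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.write(staticHead); // stage one: shoved down the pipe immediately
  // Stage two: database/templating runs while the client parses stage one.
  renderDynamicPortion(req).then(function (html) {
    res.end(html + '</body></html>');
  });
}).listen(8080);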
It takes about 50 ms for the browser (Chrome; the rest are maybe 20% slower) to process inlined jQuery + SignalR + Angular + ngAnimate + ngTouch + ngRoute + lodash. That's pretty amazing in and of itself. Most web apps have less code than all those popular libraries put together, but let's say you have just as much, so we would win latency + 100 ms of processing on the client (this latency win comes from the second transfer chunk). By the time the second chunk arrives, we've processed all the JS code and templates, and we can start executing DOM transforms.
You may object that this method is orthogonal to the inlining concept, but it isn't. If, instead of inlining, you link to CDNs or your own servers, the browser has to open more connections and delay execution. Since this execution is basically free (as the server side is talking to the database), it must be clear that all of these jumps cost more than doing no jumps at all. If there were a browser quirk that said external JS executes faster, we could measure which factor dominates. My measurements indicate that extra requests kill performance at this stage.
I work a lot with optimization of SPA apps. It's common for people to think that data volume is a big deal, while in truth latency and execution often dominate. The minified libraries I listed add up to 300 KB of data, and that's just 68 KB gzipped, or 200 ms of download on a 2 Mbit 3G/4G phone, which is exactly the latency it would take on the same phone to check IF it had the same data in its cache already, even if it was proxy cached, because the mobile latency tax (phone-to-tower latency) still applies. Meanwhile, desktop connections that have lower first-hop latency typically have higher bandwidth anyway.
In short, right now (2014), it's best to inline all scripts, styles and templates.
EDIT (MAY 2016)
As JS applications continue to grow, and some of my payloads now stack up to 3+ megabytes of minified code, it's becoming obvious that at the very least common libraries should no longer be inlined.

Externalizing JavaScript is one of the Yahoo performance rules:
http://developer.yahoo.com/performance/rules.html#external
While the rule that you should always externalize scripts is generally a good bet, in some cases you may want to inline some of the scripts and styles. You should, however, only inline things that you know will improve performance (because you've measured it).

I think the short, page-specific script is the only defensible case for inline script.

Actually, there's a pretty solid case to use inline JavaScript. If the JS is small enough (a one-liner), I tend to prefer it inline because of two factors:
Locality. There's no need to navigate to an external file to validate the behaviour of some JavaScript.
AJAX. If you're refreshing some section of the page via AJAX, you may lose all of your DOM handlers (onclick, etc.) for that section, depending on how you bound them. For example, using jQuery you can use the live or delegate methods to circumvent this, but I find that if the JS is small enough it is preferable to just put it inline.
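A small sketch of that difference (the selectors are hypothetical); direct handlers die with the replaced nodes, while a delegated handler on a stable ancestor survives the AJAX refresh:

// Direct binding: lost when #results' contents are replaced via AJAX.
jQuery('#results .row').click(function () { /* ... */ });

// Delegated binding on a stable ancestor survives the refresh
// (.on() with a selector in jQuery 1.7+; .live()/.delegate() in older versions):
jQuery('#results').on('click', '.row', function () { /* ... */ });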

Another reason why you should always use external scripts is an easier transition to a Content Security Policy (CSP). CSP defaults forbid all inline script, making your site more resistant to XSS attacks.
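For illustration, a response header like the following (a common strict baseline) makes browsers refuse to run any inline <script> block; only external scripts from your own origin execute:

Content-Security-Policy: script-src 'self'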

I would take a look at the required code and divide it into as many separate files as needed. Every JS file would hold only one "logical set" of functions, e.g. one file for all login-related functions.
Then, during site development, on each HTML page you include only those that are needed.
When you go live with your site you can optimize by combining every js file a page needs into one file.
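A sketch of that workflow, with hypothetical file names:

<!-- development: each page includes only the "logical sets" it needs -->
<script src="/js/login.js"></script>
<script src="/js/validation.js"></script>

<!-- production: one combined, minified file -->
<script src="/js/site.min.js"></script>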

The only defense I can offer for inline JavaScript is that when using strongly typed views with .NET MVC, you can reference C# variables mid-JavaScript, which I've found useful.
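For example, in a Razor view the server can interpolate a model value straight into an inline script; Model.UserId is a hypothetical property on the strongly typed view's model:

<script>
  // Razor renders the C# value into the page; external .js files never pass
  // through the view engine, so they can't do this.
  var userId = @Model.UserId;
</script>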

On the point of keeping JavaScript external:
ASP.NET 3.5 SP1 recently introduced functionality to create a composite script resource (merging a bunch of JS files into one). Another benefit is that when web-server compression is turned on, downloading one slightly larger file has a better compression ratio than many smaller files (also less HTTP overhead, fewer round trips, etc.). I guess this saves on the initial page load; then browser caching kicks in as mentioned above.
ASP.NET aside, this screencast explains the benefits in more detail:
http://www.asp.net/learn/3.5-SP1/video-296.aspx

Three considerations:
How much code do you need (sometimes libraries are a first-class consumer)?
Specificity: is this code only functional in the context of this specific document or element?
All code inside the document tends to make it longer and thus slower. Besides that, SEO considerations make it obvious that you should minimize internal scripting...

External scripts are also easier to debug using Firebug. I like to unit test my JavaScript, and having it all external helps. I hate seeing JavaScript in PHP code and HTML; it looks like a big mess to me.

Another hidden benefit of external scripts is that you can easily run them through a syntax checker like JSLint. That can save you from a lot of heartbreaking, hard-to-find IE6 bugs.

In your scenario it sounds like writing the external stuff in one file shared among the pages would be good for you. I agree with everything said above.

During early prototyping, keep your code inline for the benefit of fast iteration, but be sure to make it all external by the time you reach production.
I'd even dare to say that if you can't place all your JavaScript externally, then you have a bad design on your hands, and you should refactor your data and scripts.

Google has included load times in its page-ranking measurements. If you inline a lot, it will take longer for the spiders to crawl through your page, and this may influence your page ranking if you have too much included. In any case, different strategies may have an influence on your ranking.

Well, I think that you should use inline scripts when making single-page websites, as the scripts will not need to be shared across multiple pages.

Internal JS pros:
It's easier to manage and debug
You can see what's happening
Internal JS cons:
People can change it around, which can really annoy you
External JS pros:
No changing around
You can look more professional (or at least that's what I think)
External JS cons:
Harder to manage
It's hard to know what's going on

Always try to use external JS, as inline JS is always difficult to maintain.
Moreover, it is professionally expected that you use external JS, since the majority of developers recommend using JS externally.
I myself use external js.

Related

Pros/cons of placing scripts "inline" in footer or using wp_enqueue_script (wordpress)

I've searched far and wide but I haven't found any source stating either pros or cons of these two methods, or what the "best" way to enqueue scripts is in WordPress, with regards to performance and compatibility.
So, my question is the following:
What are the pros and cons of placing scripts inside a <script> tag in the footer in WordPress, compared to using wp_enqueue_script()?
My thoughts and example of script
I'm taking page load speed very much into consideration, and hence I've followed the various "speed optimization" tips of reducing HTTP requests as far as possible. Because of that I've mostly placed scripts "inline" (inside a <script> tag) in the footer, either by using the wp_footer() hook or placing them directly in the footer.php file. However, most of my scripts (my own) are small pieces of JavaScript that nevertheless add up over time. An example of a script can be seen below. I've got many similarly sized scripts, as well as some smaller and some larger. It's probably amounting to about 200-300 lines of code (not minified).
So, with regards to performance, I'm kind of guessing that this is preferable, although the impact of one extra HTTP request for loading the script as a file might be completely insignificant. However, this is also where I'm starting to wonder. Would it be better to load this JavaScript/jQuery part from an extra file asynchronously?
Furthermore, I also wonder whether it would be better to load the script earlier to avoid any "loss of functionality" compared to loading it as a file. I would rather not load it in the <head>, as jQuery is required to load first. At the same time, it should be noted that I'm in doubt whether it actually makes a difference, considering that page loads are quite fast nowadays, if optimized.
jQuery('.menu-toggle').click(function () {
  jQuery('#site-navigation .menu').slideToggle('slow');
  jQuery(function ($) {
    if (jQuery('#sitenavigation .menu .children').is(':visible')) {
      jQuery('#sitenavigation .menu .children').slideUp();
    }
  });
});
jQuery('.searchform').click(function (e) {
  jQuery('.selected').removeClass('selected');
  jQuery(this).parent('div').addClass('selected');
  e.stopPropagation();
  if (jQuery('#search-dropdown-content').hasClass('selected')) {
    jQuery('#search-dropdown').css('color', '#B87E28');
  }
});
What are your thoughts on this, and do you have more considerations or pros/cons?
Regards
It depends.
Firstly, there's of course the option to use wp_enqueue_script and have your scripts printed inline in the footer if you don't mind getting your hands dirty (or find a plugin to do it for you).
As for "which should I do", I'd say it depends, mostly on a) the kind of site you run and b) who your visitors are.
Is it a site where you don't expect many visitors to return (e.g. affiliate landing page), and you don't expect your visitors to have multiple page views ("I need to know this specific information. OK, thanks, bye")? Inline everything.
Is it a site where your users will return regularly, and will hit multiple pages? Then you really want to take advantage of caching static resources, as it will save bandwidth and you will require less server power.
Are your users primarily tech-savvy people on high-bandwidth landlines? You might profit quite a bit from using google's (or other big player's) CDN because they've probably encountered the jquery lib from that CDN before and have it cached.
Are your users primarily mobile, possibly on the move, aka on high-latency, unstable connections? Every back and forth will hurt, performance wise, and inlining will be/feel much quicker.
If you don't externalize any JS and have all your JS in your footer, you shouldn't experience any lacking functionality, unless you are encountering very specific circumstances (e.g. a mobile user who has been throttled to GPRS and is now crawling along at 10 kb/s), in which case whatever you do, nothing will work until the user waits a long time for everything to complete.
My advice: build both options (hey, it'll also give you new skills) and compare them for the scenarios you envision your users to be in. Use your browser's dev tools; they can usually simulate a slower connection, and might be able to simulate a bad mobile connection. And don't consider it a final decision. As usage patterns change, the other option might become the better decision, so keep your mind (and an eye) open.

Does using more external files, as opposed to cramming everything into one file, reduce run-time efficiency?

I am relatively new to web design and the world of jQuery, JavaScript and PHP. I guess this question would suit CSS style sheets as well. Is it better to have everything stuffed into one "external document"? Or does this not affect the run-time speeds?
Also, to go along with this: is it wrong, or less efficient, to use PHP in places where jQuery/JavaScript could be used? Which of the two languages is generally faster?
The way to look at it is to initially load only the minimum resources required on page load, not everything. Make sure you group all of these resources together into a single file, and minify them.
Once your page is loaded, you can thereafter load other resources on demand. For example, a modal which does not need to be immediately visible can be loaded at a later point in time, when the user does some action and it needs to be shown. This is called lazy loading. But when you do load any module on demand, make sure you load all of its resources together, and minified as well.
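A sketch of such on-demand loading; the element ID, the file name, and openModal() are hypothetical:

function loadScript(src) {
  return new Promise(function (resolve, reject) {
    var s = document.createElement('script');
    s.src = src;
    s.onload = resolve;
    s.onerror = reject;
    document.head.appendChild(s);
  });
}

var modalLoaded = false;
document.getElementById('open-modal').addEventListener('click', function () {
  if (modalLoaded) return openModal();
  // Load the modal's combined, minified resources only when first needed.
  loadScript('/js/modal.min.js').then(function () {
    modalLoaded = true;
    openModal(); // provided by the lazily loaded script
  });
});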
It's important to structure your code correctly and define the way you batch files together for concatenation and minification. It will help you save on performance by optimizing the number of calls made to the server.
About PHP and JavaScript, I would say in general JavaScript is faster than PHP, but it depends on your application, as one runs on the server and the other on the client. So if you are doing very heavy, memory-intensive operations, the browser might limit your capabilities. If that is not a problem, go ahead with JavaScript.
There are a lot of different factors that come into play here. Ultimately, it is better to call the least amount of resources possible to make the site run faster. Many sites that check page speed will dock points if you call a ton of resources. However, you don't want to go insane condensing and try to cram everything into a single file either... The best way to approach it is to use as few files as possible while maintaining a logical organization.
For example, maybe you're using a few different JS libraries... well merging those all into one would eventually get confusing and hard to update so it makes sense to keep them all separate. However, you can keep all your custom JS where you call those libraries in one separate file. This can even be applied to images. Let's say you're uploading 5 different social media icons and 5 different hover states for them. Well, instead of making the site call 10 different files, use a sprite and just call one.
You can also do things like use Google's hosted libraries: https://developers.google.com/speed/libraries/ Many sites use these, and therefore many users already have the resources cached, which means they don't need to freshly load the libraries when visiting your site. It's very helpful for things like jQuery.
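For example, including jQuery from Google's CDN looks like this (pick the version your site actually targets):

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>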
Another thing to keep in mind is minifying those files. Any library you use should have a minified version, and you should use that as opposed to the full version. While you should keep unminified copies of your work around, whatever ends up on the live site should be minified to help with page speed. Here are a few resources for that: https://cssminifier.com/ https://javascript-minifier.com/ If you're using WP, there are tons of plugins out there that have similar functions, like WP Fastest Cache.
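The idea in miniature; you keep the readable source in version control and ship only the minified form (toggleMenu is a hypothetical helper):

// Source you maintain:
function toggleMenu(menu) {
  menu.classList.toggle('open');
}

// What the minifier emits and the live site serves:
function toggleMenu(n){n.classList.toggle("open")}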
Your PHP/JS/jQuery question I can't really weigh in on too heavily. As mentioned, the base difference between PHP and JS is whether the requests are client-side or server-side. Personally, I use whatever is prevalent in the project and whatever works best for your changes. For example, if you're working with variables and transferring data, PHP can be a really great fit.

Should I inline CSS & JS in mobile sites to save bandwidth?

Is there a reason not to inline CSS & JS when I make a mobile-ONLY site, to save bandwidth?
The only possible benefit I can think of is a couple less HTTP requests, but you totally give up the benefits of having the files cached if you do so.
Caching is a good thing and it saves bandwidth, so I can't see why you'd want to lose that advantage.
Besides that (not related to performance), maintenance will be a nightmare with everything inline, as it would be with any site.
I wouldn't be the least bit surprised if there were even more compelling reasons not to.
Use separate files.
Yes. First of all, you'll either have to code like that or inline them dynamically. Dynamically = a waste of processing power. Code like that = hard to maintain and bad practice. And for what? You barely save any bandwidth at all, it makes caching impossible, and it might actually slow you down. Minification, on the other hand, is what you should do instead. Minify your CSS and JavaScript and combine them into one file each; it's okay to do this dynamically, because the benefits outweigh the problems.
Inlining everything has several effects:
Reduces the number of requests -- but increases your HTML file size
Increased HTML file size -- load time increases considerably
No caching -- you have lost a good opportunity
Maintenance is like hell -- unless you inline as a step of your development process
A good blog post you can read - Why inlining everything is not the answer
There he recommends inlining only very small files (less than 1 KB).
By the way, why not inline? Google does it on their homepage; anyone who has viewed Google's source has seen it. But still, it's your choice.
If you still intend to reduce the number of HTTP requests, it is better to use a build tool to do the inlining automatically. Otherwise you'll have to go through the 'maintenance hell'.
Yes, and the reason is named cache :-) CSS and JS that are not inlined can be cached (mobile browsers with HTML support do use caches).

JavaScript outside public_html?

Is there any way/reason to keep your JS/jQuery outside public_html? Are there security benefits?
All JavaScript is readable no matter where you put it on your server. The best you can do is obfuscate it:
How can I obfuscate (protect) JavaScript?
Plus, jQuery is open source, so there is no benefit to storing it anywhere in particular.
JavaScript is code that is executed by your visitors' browsers, so it must be public. It is good practice to compress your JavaScript code, because it will require less bandwidth to transfer and it will make it harder for others to read (i.e. to figure out how to exploit it). Yahoo offers a free JavaScript compressor.
Are there any reasons to keep your JavaScript outside public_html?
It allows for a more optimized server-side and client-side caching of the resources.
It allows for a more parallelized download of the resources from the server by the client.
It makes it easier for you to separate your UI's presentational layer (your HTML code) from its controls and logic layer (your JavaScript code), which also means:
It makes it easier for you to manage.
It's easier to share across your team (you won't have designers and developers working on the same pieces of code), and you separate functionality into smaller resources (even if you compile them together for a production server).
As a result, the design of your JS code will most probably be more amenable to testing.
It also allows you to not load some bits and pieces you may not need all the time. It thus also facilitates code reuse for some of these bits when they're needed somewhere else, without having to duplicate them in other huge files of stuff put together.
Are there any ways to keep your JavaScript outside public_html?
Sure. Just put the code in external JS files and include them normally using script tags. Or inline them with your build script or template engine. Be sure to decouple your HTML from your JS (no ugly JS injected directly with on<something>="javascript:doThisOrThat()" in your tags) and to use more standard and robust methods instead (use a system compliant with the W3C DOM Level 2 event model; jQuery is very practical for this anyway, as it's designed with progressive enhancement in mind).
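A tiny sketch of that decoupling, reusing the doThisOrThat() example; the button ID is hypothetical:

<!-- coupled: behavior wired directly into the markup -->
<button onclick="doThisOrThat()">Save</button>

<!-- decoupled: clean markup; the behavior lives in an external file -->
<button id="save-button">Save</button>

// In the external .js file (DOM Level 2 events, which jQuery wraps for you):
document.getElementById('save-button').addEventListener('click', doThisOrThat);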
Are there security benefits?
Short answer: no. But you do get a clear increase in code quality, which relates directly to your overall security. Having your JS within the same file is not a security flaw, though. If what you mean is that you're concerned about your JS code being visible to visitors, then as other answers have already mentioned: you cannot do anything about this. You can make it harder and more painful to look at (via obfuscation), but this won't prevent someone who really wants to dig into your code. My recommendation: don't waste your time and effort on that.
Unless it's in a writable location (which it shouldn't be), no.

Using DOMContentReady considered anti-pattern by Google

A Google Closure library team member asserts that waiting for DOMContentReady event is a bad practice.
The short story is that we don't want to wait for DOMContentReady (or worse, the load event) since it leads to bad user experience. The UI is not responsive until all the DOM has been loaded from the network. So the preferred way is to use inline scripts as soon as possible.
Since they don't provide more details on this, I wonder how they deal with the Operation Aborted dialog in IE. This dialog is the only critical reason I know of to wait for the DOMContentReady (or load) event.
Do you know any other reason?
How do you think they deal with that IE issue?
A little explanation first: the point with inline JavaScript is to include it as soon as possible. However, that "possible" depends on the DOM nodes the script requires having already been declared. For example, if you have some navigation menu that requires JavaScript, you would include the script immediately after the menu is defined in the HTML.
<ul id="some-nav-menu">
<li>...</li>
<li>...</li>
<li>...</li>
</ul>
<script type="text/javascript">
// Initialize menu behaviors and events
magicMenuOfWonder( document.getElementById("some-nav-menu") );
</script>
As long as you only address DOM nodes that you know have been declared, you won't run into DOM-unavailability problems. As for the IE issue, the developer must strategically include their scripts so that this doesn't happen. It's not really that big of a concern, nor is it difficult to address. The real problem with this is the "big picture", as described below.
Of course, everything has pros and cons.
Pros
As soon as a DOM element is displayed to the user, whatever functionality JavaScript adds to it is almost immediately available as well (instead of waiting for the whole page to load).
In some cases, Pro #1 can result in faster perceived page load times and an improved user experience.
Cons
At worst, you're mixing presentation and business logic; at best, you're scattering your script includes throughout your presentation. Both can be difficult to manage, and neither is acceptable in my opinion, or in that of a large portion of the community.
As eyelidlessness pointed out, if the scripts in question have external dependencies (a library, for example), then those dependencies must be loaded first, which will block page rendering while they are parsed and executed.
Do I use this technique? No. I prefer to load all script at the end of the page, just before the closing </body> tag. In almost every case, this is sufficiently fast for perceived and actual initialization performance of effects and event handlers.
Is it okay for other people to use it? Developers are going to do what they want/need to get the job done and to make their clients/bosses/marketing department happy. There are trade-offs, and as long as you understand and manage them, you should be okay either way.
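For reference, the end-of-page placement mentioned above is simply this (file name hypothetical):

  ...page content...
  <script src="/js/site.min.js"></script>
</body>
</html>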
The biggest problem with inline scripts is that they cannot be cached properly. If you have all your scripts stored away in one JS file that is minified (using a compiler), then that file can be cached just once, for the entire site, by the browser.
That leads to better performance in the long run if your site tends to be busy. Another advantage of having a separate file for your scripts is that you tend not to repeat yourself, and you declare reusable functions as much as possible. DOMContentReady does not lead to bad user experience. At least it provides the user with the content beforehand, rather than making the user wait for the UI to load, which might end up becoming a big turn-off.
Also, using inline scripts does not ensure that the UI will be more responsive than with DOMContentReady. Imagine a scenario where you are using inline scripts for making AJAX calls. If you have one form to submit, that's fine. Have more than one form and you end up repeating your AJAX calls... hence repeating the same script every time. In the end, it leads to the browser downloading more JavaScript code than it would have if it were separated out into a JS file loaded when the DOM is ready.
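In other words, one delegated handler in an external file replaces N copy-pasted inline snippets. A sketch, assuming forms marked with a hypothetical ajax class:

// Once, in an external file, instead of an inline snippet beside every form:
jQuery(document).on('submit', 'form.ajax', function (e) {
  e.preventDefault();
  jQuery.post(jQuery(this).attr('action'), jQuery(this).serialize());
});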
Another big disadvantage of having inline scripts is that you need to maintain two separate code bases: one for development and another for production. You must ensure that both code bases are kept in sync. The development version contains the non-minified version of your code, and the production version contains the minified version. This is a big headache in the development cycle. You have to manually replace all your code snippets hidden away in those bulky HTML files with the minified versions, and in the end hope that no code breaks! However, when maintaining a separate file during the development cycle, you just need to replace that file with the compiled, minified version in the production codebase.
If you use YSlow you see that:
Using external JavaScript and CSS files generally produces faster pages because the files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded each time the HTML document is requested. This reduces the number of HTTP requests but increases the HTML document size. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the HTML document size is reduced without increasing the number of HTTP requests.
I can vouch for inline scripts if and only if the code changes so often that having it in a separate JS file is immaterial and would in the end have the same impact as these inline scripts. Still, such scripts are not cached by the browser. The only way the browser can cache scripts is if they are stored in an external JS file with an ETag.
However, this is not jQuery vs. Google Closure in any manner. Closure has its own advantages. That said, the Closure library makes it hard to have all your scripts in external files (it's not impossible, it just makes it hard). You just tend to use inline scripts.
One reason to avoid inline scripts is that it requires you to place any dependency libraries before them in the document, which will probably negate the performance gains of inline scripts. The best practice I'm familiar with is to place all scripts (in a single HTTP request!) at the very end of the document, just before </body>. This is because script loading blocks the current request and all sub-requests until the script is completely loaded, parsed, and executed.
Short of a magic wand, we'll always have to make these trade-offs. Thankfully, the HTML document itself is increasingly going to become the least resource-intensive request made (unless you're doing silly stuff like huge data: URLs and huge inline SVG documents). To me, the trade-off of waiting for the end of the HTML document seems the most obvious choice.
I think this advice is not really helpful. DOMContentReady may only be bad practice because it is currently over-used (maybe because of jQuery's easy-to-use ready event). Many people use it as the "startup" event for any JavaScript action, though even jQuery's ready() event was only meant to be used as the starting point for DOM manipulations.
By that inference, DOM manipulations on page load lead to bad user experience! Because they are not necessary; the server side could just have completely generated the initial page.
So maybe the Closure team members are just trying to steer in the opposite direction, and prevent people from doing DOM manipulations on page load at all?
If the time to parse, load, and render layout (the point at which domready should fire), without consideration for image loads, is so long that it causes noticeable UI delay, you've probably got something really ugly building that page on the back end, and that's your real problem.
Furthermore, JavaScript before the end of the page halts HTML parsing/DOM evaluation until the JS is parsed and executed. Google should look to their Java before giving advice on JavaScript.
If you need to produce your HTML on your own, Google's approach seems very awkward. They use the compiled approach and can spend their brain cycles on other issues, which is sane given the complex UI of their apps. If you're heading for something with more relaxed requirements (read: almost everything else), maybe it's not worth the effort.
It's a pity that GWT only talks Java, though.
That's probably because Google doesn't care whether you have JavaScript; they require it for just about everything they do. If you use JavaScript as an addition on top of your already-functioning website, then loading scripts on DOMContentReady is just fine. The point is to use JavaScript to enhance the user's experience, not exclude users who don't have it.
