What is the cost of using external libraries? - javascript

These are some of the most commonly used front-end web development libraries:
jquery-min.js (95.9 kB)
angular.min.js (108.0 kB)
bootstrap.min.css (113.5 kB)
bootstrap-theme.min.css (19.8 kB)
bootstrap-fonts (185.7 kB)
bootstrap.min.js (35.6 kB)
Altogether, that adds 558.5 kB to every page of our website, plus a few more server requests. So, is that all? Are there other costs (performance or otherwise) to using external libraries on our website?

Each library you include has both costs and benefits throughout the stack
Costs
time/latency (performance) to load and render
bandwidth to serve to clients, bandwidth to deploy if bundled
build time and file size if you are bundling these into your deployment package
complexity to understand, modify, debug
complexity required to prove that a given CSS rule is in fact not involved
memory to load into your page
Benefits
generally more correct and complete implementations (like bootstrap's accessibility)
re-use of existing solutions, shorter development time
guided toward consistent structures and away from ad-hoc messes
better and easier APIs (vanilla XHR vs $http.get, etc; see the comparison below)
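To make the last point concrete, here is a rough comparison (a sketch; '/api/items' is a placeholder endpoint, and the $http example assumes it runs inside Angular code where $http has been injected):
// Vanilla XHR: verbose, with manual state handling
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/items');
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    console.log(JSON.parse(xhr.responseText));
  }
};
xhr.send();
// Angular's $http: promise-based and far more compact
$http.get('/api/items').then(function (response) {
  console.log(response.data);
});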
Given your specific examples: in general, if you are using Angular you should not need jQuery and should avoid it. You can also cherry-pick only the portions of Bootstrap your site actually uses, or only the ui.bootstrap Angular directives you actually use.

Thankfully, that 558.5 kB hits the client at most once (as long as the files keep being served from the same domain and protocol). After those files are first downloaded, the client loads them from its cache.
As @Felix Kling suggested, if you pull them from a CDN, it's very likely you can turn that 1-time hit into a 0-time hit, because the client has already downloaded them from some other website. This is important to think about if you're considering baking the Bootstrap CSS into your custom CSS file. It could be faster to let Bootstrap come from the CDN (or the local cache) and load your custom stuff on top.
Performance for fetching the files won't be that big of a deal, as modern browsers have no problem quickly pulling 6 files. However, the browser has to load all that code into memory, which is where the real hit comes in. For instance, jQuery has to load before anything can call jQuery, so all your on-page scripts end up waiting for it before they act.
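For example, script order alone creates this dependency (a minimal sketch; the file path is illustrative):
<script src="/js/jquery.min.js"></script>
<script>
  // This inline script only runs after jquery.min.js has been fetched,
  // parsed, and executed; $ would be undefined any earlier.
  $(function () {
    console.log('DOM ready, jQuery available');
  });
</script>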

Without more information it is hard to say anything tangible; however, using libraries is usually a tradeoff between ease of development and performance. Some libraries may perform worse than "native" JavaScript solutions, but provide more easily maintainable code.
So it is useful to test the website for potential bottlenecks in the JavaScript code; sometimes there is a valid reason to fall back to vanilla JavaScript instead of higher-level abstractions (and nicer code).
Another consequence of using JavaScript libraries can be that the website no longer functions at all with JavaScript disabled - this should be considered in some cases.
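If you do depend on JavaScript, a noscript block is a cheap way to at least tell those users what happened (a minimal sketch):
<noscript>
  <p>This site requires JavaScript. Please enable it in your browser settings.</p>
</noscript>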
PS: One simple optimization is putting all JavaScript into a bundle and minifying it to reach the lowest possible load cost. CDNs should also be considered.

Related

Does using more external files, opposed to cramming everything into one file, reduce run-time-efficiency?

I am relatively new to web design and the world of jQuery, JavaScript and PHP. I guess this question would also apply to CSS style sheets. Is it better to have everything stuffed into one "external document"? Or does this not affect run-time speeds?
Also, to go along with this: is it wrong, or less efficient, to use PHP in places where jQuery/JavaScript could be implemented? Which of the two languages is generally faster?
The way to look at it is this: initially load only the minimum resources required on page load, not everything. Make sure you group all of these resources together into a single file, and minify them.
Once your page is loaded, you can thereafter load other resources on demand. For example, a modal which does not need to be immediately visible can be loaded at a later point in time, when the user performs some action and it needs to be shown. This is called lazy loading (see the sketch below). But when you do load any module on demand, make sure you load all of its resources together, minified as well.
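A minimal sketch of that on-demand pattern, assuming a hypothetical modal.bundle.min.js that defines showModal():
document.getElementById('open-modal').addEventListener('click', function () {
  var script = document.createElement('script');
  script.src = '/js/modal.bundle.min.js'; // hypothetical on-demand bundle
  script.onload = function () {
    showModal(); // assumed to be defined by the loaded bundle
  };
  document.head.appendChild(script);
});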
It's important to structure your code correctly and define the way you batch files together for concatenation and minification. It will help performance by reducing the number of calls made to the server.
About PHP and JavaScript: I would say in general JavaScript is faster than PHP, but it depends on your application, as one runs on the server and the other on the client. So if you are doing heavy, memory-intensive operations, the browser might limit your capabilities. If that is not a problem, go ahead with JavaScript.
There are a lot of different factors that come into play here. Ultimately, it is better to call the fewest resources possible to make the site run faster. Many sites that check page speed will dock points if you call a ton of resources. However, you don't want to go insane condensing and try to cram everything into a single file either... The best way to approach it is to use as few files as possible while maintaining a logical organization.
For example, maybe you're using a few different JS libraries... well, merging those all into one would eventually get confusing and hard to update, so it makes sense to keep them all separate. However, you can keep all your custom JS, where you call those libraries, in one separate file. This can even be applied to images. Let's say you're uploading 5 different social media icons and 5 different hover states for them. Well, instead of making the site call 10 different files, use a sprite and just call one (see the CSS sketch below).
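A sketch of the sprite technique (file name and offsets are illustrative); the hover state just shifts the background position instead of requesting a second image:
.icon {
  background-image: url('/img/social-sprite.png'); /* one combined image */
  display: inline-block;
  width: 32px;
  height: 32px;
}
.icon-twitter { background-position: 0 0; }
.icon-twitter:hover { background-position: 0 -32px; } /* hover state, same file */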
You can also do things like use Google's hosted libraries: https://developers.google.com/speed/libraries/ Many sites use these, and therefore many users already have these resources cached, which means they don't need to freshly load the libraries when visiting your site. It's very helpful for things like jQuery.
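Using a hosted library is a one-line change, and a local fallback guards against the rare case that the CDN is unreachable (the version number here is just an example):
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.4/jquery.min.js"></script>
<script>
  // If the CDN failed, fall back to a copy on our own server
  window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>');
</script>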
Another thing to keep in mind is minifying those files. Any library you use should have a minified version, and you should use that as opposed to the full version. While you should keep unminified copies of your work around, whatever ends up on the live site should be minified to help with page speed. Here are a few resources for that: https://cssminifier.com/ https://javascript-minifier.com/ If you're using WP, there are tons of plugins out there with similar functions, like WP Fastest Cache.
Your PHP/JS/jQuery question I can't really weigh in on too heavily. As mentioned, the base difference between PHP and JS is whether the requests are server-side or client-side. Personally, I use whatever is prevalent in the project and whatever works best for the change at hand. For example, if you're working with variables and transferring data, PHP can be a really great fit.

Lightweight JS Library vs Google-hosted CDN

When page-load speed is the priority, is it better to use a minimal, lightweight javascript library (hosted on a CDN), or is it better to use something like jQuery, hosted on Google's CDN that the browser more than likely already has loaded?
Edit: What my question really boils down to is whether the cross-site caching effect of using jQuery hosted on Google's CDN outweighs the benefits of using an ultra-light library, also on a CDN.
jQuery is not heavy compared to other JavaScript libraries at present, considering the number of features and browsers it supports.
You should consider this factor while selecting plugins to use on the page, because they are written by various authors: some may write them intelligently with this factor in mind, and some may just write them for the sake of it.
Yes, if you use a CDN like Google's for jQuery, it is very likely that the library is already cached by the browser, and Google has a number of servers distributed by location, so you don't have to worry about it.
Decreased Latency
A CDN distributes your static content across servers in various, diverse physical locations. When a user’s browser resolves the URL for these files, their download will automatically target the closest available server in the network.
In the case of Google’s AJAX Libraries CDN, what this means is that any users not physically near your server will be able to download jQuery faster than if you force them to download it from your arbitrarily located server.
There are a handful of CDN services comparable to Google’s, but it’s hard to beat the price of free! This benefit alone could decide the issue, but there’s even more.
Increased parallelism
To avoid needlessly overloading servers, browsers limit the number of connections that can be made simultaneously. Depending on which browser, this limit may be as low as two connections per hostname.
Using the Google AJAX Libraries CDN eliminates one request to your site, allowing more of your local content to be downloaded in parallel. It doesn't make a gigantic difference for users with a six-concurrent-connection browser, but for those still running a browser that only allows two, the difference is noticeable.
Better caching
Potentially the greatest benefit of using the Google AJAX Libraries CDN is that your users may not need to download jQuery at all.
No matter how well optimized your site is, if you’re hosting jQuery locally then your users must download it at least once. Each of your users probably already has dozens of identical copies of jQuery in their browser’s cache, but those copies of jQuery are ignored when they visit your site.
However, when a browser sees references to CDN-hosted copies of jQuery, it understands that all of those references refer to the exact same file. With all of these CDN references pointing to exactly the same URLs, the browser can trust that those files truly are identical and won't waste time re-requesting the file if it's already cached. Thus, the browser is able to use a single copy that's cached on disk, regardless of which site the CDN references appear on.
This creates a potent "cross-site caching" effect which all sites using the CDN benefit from. Since Google's CDN serves the file with headers that attempt to cache the file for up to one year, this effect truly has amazing potential. With many thousands of the most trafficked sites on the Internet already using the Google CDN to serve jQuery, it's quite possible that many of your users will never make a single HTTP request for jQuery when they visit sites using the CDN.
Even if someone visits hundreds of sites using the same Google-hosted version of jQuery, they will only need to download it once!
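For illustration, a one-year cache lifetime corresponds to a response header along these lines (a sketch; the exact headers served may vary):
Cache-Control: public, max-age=31536000
(31,536,000 seconds is one year, so a cached copy stays valid across many sites and visits.)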
It's better to use the library that best suits the needs of your application and your development team. A super-lightweight library might save you a few hundred milliseconds of load time, but may end up costing you in development hours if your team has significantly more experience with jQuery/MooTools/Dojo etc.
If new feature implementation and bug fixing is hindered by using a second-rate tool solely to improve load times, your users are ultimately going to suffer.

Properly managing javascripts for large website / code concatenation

The site I am developing has a large amount of javascript that is shared across various functionality, and an equally large amount of feature-specific javascript. I've read all about using one monolithic javascript file vs. many smaller ones.
For my purposes, the huge-file approach would not only result in a script difficult to maintain, but contain a lot of unneeded javascript as well. At the same time, separating the javascript so that only the required code is included would result in an excessive number of files / HTTP requests. The idea of including even a moderate amount of unneeded code seems contrary to the concepts of proper software design, besides the additional file size overhead for the user.
I have found the mod_concat module for Apache which seems like it would solve my problem entirely - I could separate my javascripts into as many files as I want, include only those necessary, and take almost no hit on performance.
Is this actually the case? Is the only potential drawback the need to manage many files? I know mod_concat has not been around forever, so I'm also looking for a bit of background on a) how this was handled before, and b) if, even with code concatenation, including a moderate amount of unneeded javascript is considered acceptable (or even a best practice).
Thanks, Brian
I don't think you need an Apache module for that. Creating one minified JS file for production should be the best way to go, because it is only loaded once and then cached by the browser. For development, of course, it makes sense to have your application split into separate files.
My personal favorite for JavaScript module management and compression is Steal JS which is part of the great JavaScript MVC framework (could be generally interesting for larger JS applications). It can load module files dynamically during development and for production you can create one compressed JavaScript file (it can do CSS, too).
Another alternative is RequireJS, but I have only had a quick look at it.
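As a rough sketch of that approach with RequireJS (paths and module names are illustrative): modules load on demand during development, and the r.js optimizer can later concatenate them into one minified file for production.
<!-- index.html: require.js bootstraps the app from js/main.js -->
<script data-main="js/main" src="js/require.js"></script>
// js/main.js: declare dependencies; they are fetched only when needed
require(['jquery', 'app/login'], function ($, login) {
  login.init(); // 'app/login' is a hypothetical module of login-related functions
});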

Which JavaScript framework is generally used for high performance websites?

There are different JavaScript frameworks like jQuery, Dojo, mooTools, Google Web Toolkit(GWT), YUI, etc.
Which one from this is suitable for high performance websites?
(Full disclaimer: I am a Dojo developer and this is my unofficial perspective).
All major libraries can be used in high load scenarios. There are several things to consider:
Initial load
The initial load affects your response time: from requesting a web page to being responsive and in working mode. Trivial things to do are:
concatenate several JavaScript files together (works for CSS files too)
minify and/or compress your JavaScript
The idea is to send less — good for the server, and good for the client.
The less trivial thing to do:
structure your program in such a way that it is operational without all modules loaded
Example of the latter: divide your modules into essential (e.g., the core logic), and non-essential (e.g., helpers: tooltips, hints, verifiers, help facilities, various "gradual enhancers", and so on). The idea is that frequently there are things which are not important for frequent users, but nice for casual users ⇒ they can be delayed.
We can load essential modules first and load the rest asynchronously. Example: if the user wants to edit an object, we need to show it first; after that we have several hundred milliseconds to load the rest: lookup tables, hints, and so on.
Obviously it helps when asynchronous loading of modules is supported by the framework you use. Dojo has this facility built-in.
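A rough sketch of that essential-first pattern, using AMD-style loading as in Dojo 1.7+ (module ids other than dijit/Tooltip are hypothetical):
// Essential module first: the user sees the editor immediately...
require(['app/editor'], function (editor) {
  editor.show(currentObject); // currentObject: whatever the user asked to edit
  // ...then non-essential enhancers arrive asynchronously
  require(['dijit/Tooltip', 'app/hints'], function (Tooltip, hints) {
    hints.attach(editor);
  });
});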
Distribute files
Everybody knows that due to browser restrictions on number of parallel downloads from the same site it is beneficial to load resources (images, CSS, JavaScript) from different domains:
we can download more in parallel, if the user's line has enough bandwidth — these days that is almost always true
we can set up web servers optimized for serving static files: huge disk cache, small workers, keep-alive, async serving, and so on
we can remove all unnecessary features we don't need when serving static files: sessions, cookies, and so on
One frequently overlooked optimization in JavaScript applications is to use a CDN:
your web site can benefit from the geographical distribution of CDN (files can be served from the closest/fastest server)
user may have required files in her cache, if they were used by other application
intermediate/corporate caches increase the probability that required files are already cached
last but not least: these are files that you don't serve — think about it
Again, Dojo has supported CDNs for a long time and is distributed publicly by the AOL CDN and the Google CDN. The latter carries practically all popular JavaScript toolkits too. Obviously you can create your own CDN and your very own CDN- and app-specific Dojo build, if you feel you need it — it is trivial and well documented.
Communication bandwidth
How can that be different for different toolkits? XHR is XHR.
You need to reduce the load on your servers as much as possible. Analyze all traffic and consider how much static/immutable stuff is sent over the pipe. For example, typically a lot of HTML is redundant across several pages: a header, a footer, a menu, and so on. Do you really need all of these to be sent over every time?
One obvious solution is to move from static HTML + "gradual enhancements" with JavaScript to real "one page" JavaScript applications. Again, this is frequently overlooked, but it is one of the most rewarding optimizations.
While the idea sounds easy, in reality it is not as simple as it seems. As soon as we go from one-liners to apps we have a plethora of questions, and the biggest of them is the packaging: what your components are, what components are provided by the toolkit, and how to package and deliver them.
Dojo provides modules, good OOP for general classes, widgets (a combination of an optional HTML and related behaviors), and a lot of facilities to work with them. You can:
load modules on demand rather than in the head
load modules asynchronously
find all dependencies between modules automatically and create a "build" — one file in simple cases, or more, if your app is big and requires several layers
while doing the "build" it can inline all HTML snippets for your widgets, optimize CSS, and minify/compress JavaScript
Dojo can automatically find and instantiate widgets in HTML saving a lot of boilerplate code
and much much more
All these features help greatly when building applications on the client side. That's why I like Dojo.
Obviously there are more ways to optimize high load web sites but according to my practice these are the most specific for JavaScript frameworks.
Quite simply: all of them.
All frameworks have been built in order to provide the fastest performance possible and provide the developers with useful functions and tools. Your choice should be based on your requirements.
JavaScript runs on the client-side, so none will affect your server performance. The only difference server-side is the amount of bandwidth used to transfer the .js files to the client.
I'm personally fond of MooTools because it answers my requirements and also sticks to my coding ideals. A lot of people adopted jQuery (I personally don't like it, doesn't mean it's not great). I haven't used the other ones.
But none is better than the other, it's all a question of requirements and personal preference.
I do not really think it makes much difference. The big ones seem to use a mixture of jQuery & Prototype along with others.
Quite frankly, it makes no difference what you use for heavily visited websites, as we are talking about client technologies. After the file is loaded, there is not really any overhead. So, if you just want to do one simple thing and multiple frameworks support it, use whichever has the smallest file size (of course, if it performs really badly, use another!)
This being said, Google hosts a lot of the frameworks, so even this is really a non-issue. I use jQuery hosted by Google and am very happy.
http://code.google.com/apis/ajaxlibs/
Backend and what the server should be using is a whole different question where you will get a thousand different answers!
I'd recommend you look into Dojo.
Dojo 1.6 is also the first (and only) popular JavaScript Library that can be successfully used with the Closure Compiler's Advanced mode, with the massive size, performance and obfuscation benefits attached to it -- other than Google's own Closure Library, that is.
http://dojo-toolkit.33424.n3.nabble.com/file/n2636749/Using_the_Dojo_Toolkit_with_the_Closure_Compiler.pdf?by-user=t
In other words, a program using Dojo can be 100% obfuscated -- even the library itself.
Compiled code has exactly the same behavior as plain-text code, except that it is much smaller (on average 25% smaller than minifiers' output), runs much faster (especially on mobile devices), and is almost impossible to reverse-engineer, even after passing through a beautifier, because the entire code base (including the library) is obfuscated.
Code that is only "minified" (e.g. YUI compressor, Uglify) can be easily reverse-engineered after passing through a beautifier.
Well, as an example, Stack Overflow relies on jQuery (and uses the Google APIs link). It's one of the speediest and most popular libraries, and not only that, but I'd say it's the easiest to use. What type of behavior are you going to have on the site? It really all depends on your needs.
The answer, as always, is: it depends. What kind of performance are you talking about? Download speed? Use a minimiser and there's probably not a lot of difference. Or client-side performance, and what are you doing with it?
But if you're after raw performance, I would suggest not using a framework at all and writing low-level JavaScript instead, which will be far more difficult to maintain.
Some good information can be found on the YUI site.
As other answers already explained, the framework's not going to be the bottleneck in your site's performance -- rather, many other factors are. If you use popular frameworks and load them from popular URLs for them (e.g. AOL's or Google's) they're likely to be cached in your users' browsers, so you don't have to worry much about that, either.
If you care at all about performance, however, absolutely DO check out Steve Souders' work -- including both of his books, "High Performance Web Sites" and "Even Faster Web Sites".
I'm biased, as Steve is a friend and a colleague (and we share publishers as well), but I praised and admired his work even before we met in person and became colleagues. I'm mostly a back-end person, as he used to be, so I can't help admiring somebody who, coming from the same background, had the integrity and courage to switch almost entirely to a front-end focus once he realized that was by far the bottleneck for user-perceived performance. In other words, somebody who had the gumption to put user experience first -- something we all pay homage to, of course, but don't always practice when that "overriding priority" gets in the way of our own professional specialties, interests and skills...

When should I use Inline vs. External Javascript?

I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance.
What is the general practice for this?
Real-world-scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I:
write the bits of code that configure this script inline?
include all bits in one file that's shared among all these HTML pages?
include each bit in a separate external file, one for each html page?
Thanks.
At the time this answer was originally posted (2008), the rule was simple: All script should be external. Both for maintenance and performance.
(Why performance? Because if the code is separate, it can easier be cached by browsers.)
JavaScript doesn't belong in the HTML code and if it contains special characters (such as <, >) it even creates problems.
Nowadays, web scalability has changed. Reducing the number of requests has become a valid consideration due to the latency of making multiple HTTP requests. This makes the answer more complex: in most cases, having JavaScript external is still recommended. But for certain cases, especially very small pieces of code, inlining them into the site’s HTML makes sense.
Maintainability is definitely a reason to keep them external, but if the configuration is a one-liner (or in general shorter than the HTTP overhead you would get for making those files external) it's performance-wise better to keep them inline. Always remember, that each HTTP request generates some overhead in terms of execution time and traffic.
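For example, a page-specific one-liner like this (names are illustrative) costs a few dozen bytes inline, versus a whole extra HTTP request as an external file:
<script>var pageConfig = { validateOn: 'blur', locale: 'en' };</script>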
Naturally this all becomes irrelevant the moment your code is longer than a couple of lines and is not really specific to one single page. The moment you want to be able to reuse that code, make it external. If you don't, look at its size and decide then.
If you only care about performance, most of the advice in this thread is flat-out wrong, and it is becoming more and more wrong in the SPA era, where we can assume that the page is useless without the JS code. I've spent countless hours optimizing SPA page load times and verifying these results with different browsers. Across the board, the performance increase from re-orchestrating your html can be quite dramatic.
To get the best performance, you have to think of pages as two-stage rockets. These two stages roughly correspond to <head> and <body> phases, but think of them instead as <static> and <dynamic>. The static portion is basically a string constant which you shove down the response pipe as fast as you possibly can. This can be a little tricky if you use a lot of middleware that sets cookies (these need to be set before sending http content), but in principle it's just flushing the response buffer, hopefully before jumping into some templating code (razor, php, etc) on the server. That may sound difficult, but then I'm just explaining it wrong, because it's near trivial.
As you may have guessed, this static portion should contain all javascript inlined and minified. It would look something like
<!DOCTYPE html>
<html>
  <head>
    <script>/*...inlined jquery, angular, your code*/</script>
    <style>/* ditto css */</style>
  </head>
  <body>
    <!-- inline all your templates, if applicable -->
    <script type='template-mime' id='1'></script>
    <script type='template-mime' id='2'></script>
    <script type='template-mime' id='3'></script>
    <!-- the <dynamic> portion (server-rendered markup) continues from here -->
Since it costs you next to nothing to send this portion down the wire, you can expect that the client will start receiving it somewhere around 5ms + latency after connecting to your server. Assuming the server is reasonably close, this latency could be between 20ms and 60ms. Browsers will start processing this section as soon as they get it, and processing time will normally dominate transfer time by a factor of 20 or more, which is now your amortized window for server-side processing of the <dynamic> portion.
It takes about 50ms for the browser (Chrome; the rest are maybe 20% slower) to process inline jQuery + SignalR + Angular + ngAnimate + ngTouch + ngRoute + lodash. That's pretty amazing in and of itself. Most web apps have less code than all those popular libraries put together, but let's say you have just as much, so we would win latency + 100ms of processing on the client (this latency win comes from the second transfer chunk). By the time the second chunk arrives, we've processed all the JS code and templates, and we can start executing DOM transforms.
You may object that this method is orthogonal to the inlining concept, but it isn't. If you, instead of inlining, link to cdns or your own servers the browser would have to open another connection(s) and delay execution. Since this execution is basically free (as the server side is talking to the database) it must be clear that all of these jumps would cost more than doing no jumps at all. If there were a browser quirk that said external js executes faster we could measure which factor dominates. My measurements indicate that extra requests kill performance at this stage.
I work a lot with optimization of SPA apps. It's common for people to think that data volume is a big deal, while in truth latency and execution often dominate. The minified libraries I listed add up to 300 kB of data, and that's just 68 kB gzipped, or a 200ms download on a 2mbit 3g/4g phone - which is exactly the latency it would take on the same phone to check IF it had the same data in its cache already, even if it was proxy cached, because the mobile latency tax (phone-to-tower latency) still applies. Meanwhile, desktop connections that have lower first-hop latency typically have higher bandwidth anyway.
In short, right now (2014), it's best to inline all scripts, styles and templates.
EDIT (MAY 2016)
As JS applications continue to grow, and some of my payloads now stack up to 3+ megabytes of minified code, it's becoming obvious that at the very least common libraries should no longer be inlined.
Externalizing JavaScript is one of Yahoo!'s performance rules:
http://developer.yahoo.com/performance/rules.html#external
While the hard-and-fast rule that you should always externalize scripts will generally be a good bet, in some cases you may want to inline some of the scripts and styles. You should however only inline things that you know will improve performance (because you've measured this).
I think the page-specific, short-script case is the only defensible case for inline script.
Actually, there's a pretty solid case to use inline javascript. If the js is small enough (one-liner), I tend to prefer the javascript inline because of two factors:
Locality. There's no need to navigate an external file to validate the behaviour of some javascript
AJAX. If you're refreshing some section of the page via AJAX, you may lose all of your DOM handlers (onclick, etc.) for that section, depending on how you bound them. For example, using jQuery you can use the live or delegate methods to circumvent this, but I find that if the JS is small enough it is preferable to just put it inline (see the sketch below).
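A sketch of the difference (selector and handler names are illustrative; .on() with a delegation selector replaced live/delegate in jQuery 1.7+):
// Direct binding: the handler is lost when AJAX replaces #list's contents
$('#list .item').on('click', handleClick);
// Delegated binding: the handler lives on the stable ancestor and keeps
// working no matter how often the .item elements are replaced
$('#list').on('click', '.item', handleClick);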
Another reason why you should always use external scripts is for easier transition to Content Security Policy (CSP). CSP defaults forbid all inline script, making your site more resistant to XSS attacks.
I would take a look at the required code and divide it into as many separate files as needed. Each JS file would only hold one "logical set" of functions, e.g. one file for all login-related functions.
Then, during site development, on each HTML page you include only those that are needed.
When you go live with your site, you can optimize by combining every JS file a page needs into one file.
The only defense I can offer for inline JavaScript is that when using strongly typed views with .NET MVC, you can refer to C# variables mid-JavaScript, which I've found useful.
On the point of keeping JavaScript external:
ASP.NET 3.5 SP1 recently introduced functionality to create a composite script resource (merging a bunch of JS files into one). Another benefit is that when web server compression is turned on, downloading one slightly larger file will yield a better compression ratio than many smaller files (also less HTTP overhead, fewer round trips, etc...). I guess this saves on the initial page load; then browser caching kicks in, as mentioned above.
ASP.NET aside, this screencast explains the benefits in more detail:
http://www.asp.net/learn/3.5-SP1/video-296.aspx
Three considerations:
How much code do you need (sometimes libraries are a first-class consumer)?
Specificity: is this code only functional in the context of this specific document or element?
Any code inside the document tends to make it longer and thus slower. Besides that, SEO considerations make it obvious that you should minimize internal scripting...
External scripts are also easier to debug using Firebug. I like to unit test my JavaScript, and having it all external helps. I hate seeing JavaScript in PHP code and HTML; it looks like a big mess to me.
Another hidden benefit of external scripts is that you can easily run them through a syntax checker like jslint. That can save you from a lot of heartbreaking, hard-to-find, IE6 bugs.
In your scenario it sounds like writing the external stuff in one file shared among the pages would be good for you. I agree with everything said above.
During early prototyping keep your code inline for the benefit of fast iteration, but be sure to make it all external by the time you reach production.
I'd even dare to say that if you can't place all your JavaScript externally, then you have a bad design on your hands, and you should refactor your data and scripts.
Google has included load times in its page ranking measurements. If you inline a lot, it will take longer for the spiders to crawl through your page, and this may influence your ranking if you have too much included. In any case, different strategies may have an influence on your ranking.
Well, I think you should use inline scripts when making single-page websites, as the scripts will not need to be shared across multiple pages.
Internal JS pros:
It's easier to manage & debug
You can see what's happening
Internal JS cons:
People can change it around, which really can annoy you.
External JS pros:
no changing around
you can look more professional (or at least that's what I think)
External JS cons:
harder to manage
it's hard to know what's going on.
Always try to use external JS, as inline JS is difficult to maintain.
Moreover, it is considered professional practice to use external JS, since the majority of developers recommend it.
I myself use external JS.
