How to reduce jquery.min.js CPU time?

I've noticed that almost all my browser's JavaScript CPU resources get spent on jquery.min.js, specifically loaded from:
http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js
Are there any tools to minimize the resources consumed by JavaScript generally and/or jQuery specifically, without outright blacklisting specific scripts?
I suppose the most obvious approach would be dynamically reducing the number of timer and other events a script receives. In fact, you could probably halt all events to scripts not in the foreground page, except for a specific whitelist of sites you actually want to permit to receive events in the background.
I'm perfectly happy with Javascript performance going way down so long as overall browser performance improves.

It sounds more like there is another script using jQuery to do specific tasks. To my knowledge, the jQuery script itself does not consume any additional resources after it has loaded in the browser.
Based on my assumption of what is happening, there is nothing you can do at the moment (specifically because you haven't provided enough information to help).

Change all the getElementsByClassName calls to getElementsByTagName. This will improve performance drastically, as getElementsByTagName is more efficient.
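For illustration, here is a hedged sketch of what that swap can look like in raw DOM code (the 'tr' tag and 'row' class are made-up names; in browsers without getElementsByClassName you have to filter on the class yourself):
// Class-based lookup: unavailable in older IE, where libraries
// must emulate it with a full DOM scan.
var rowsByClass = document.getElementsByClassName('row');

// Tag-based lookup plus a manual class filter.
var cells = document.getElementsByTagName('tr');
var rowsByTag = [];
for (var i = 0; i < cells.length; i++) {
    if (/\brow\b/.test(cells[i].className)) {
        rowsByTag.push(cells[i]);
    }
}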

Related

Limit jquery file to required functions

I use jQuery. Although I'd like to think of myself as a fairly good programmer in general, and for JS specifically, I don't think I understand the DOM API and its variable behavior across browsers. Hence the use of jQuery.
I use a small subset of jquery, though:
1) Ajax
2) event handlers
3) Selectors/find/child/parent
I don't use anything else, no filter, no UI events, nothing! (Okay, fine, slideUp/slideDown, but I can do that myself using CSS.)
Are there any already existing tools, command-line or browser-based, that would do a static analysis on the jQuery script so that I don't have to force the user to download the full 100KB? If you're suggesting I do it myself: thank you, doing it manually would be the next step, and if I feel there's a lot of interest, I might consider writing such a tool.
Re: CDN - thanks for your suggestions; please see my comment to #Jonathan
You can take each function from the GitHub repository, but since there are various dependencies, you will not save as much as you think. Instead of the 100KB uncompressed development version, you'll do better using the 32KB minified version from http://jquery.com/download/.
There are also three good reasons to use jQuery from Google's CDN (<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js" ></script>):
1. Decreased Latency
In the case of Google’s AJAX Libraries CDN, what this means is that any users not physically near your server will be able to download jQuery faster than if you force them to download it from your arbitrarily located server.
2. Increased parallelism
To avoid needlessly overloading servers, browsers limit the number of connections that can be made simultaneously. Depending on which browser, this limit may be as low as two connections per hostname.
3. Better caching
[W]hen a browser sees references to CDN-hosted copies of jQuery, it understands that all of those references do refer to the exact same file. With all of these CDN references pointing to exactly the same URLs, the browser can trust that those files truly are identical and won't waste time re-requesting the file if it's already cached. Thus, the browser is able to use a single copy that's cached on disk, regardless of which site the CDN references appear on.
Source of excerpt and further reading: http://encosia.com/3-reasons-why-you-should-let-google-host-jquery-for-you/
You can just use Google hosted libraries and stop caring about jQuery's size.
I am sure every user's browser has it in cache already, so it's beneficial both for you and the user.
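A common belt-and-braces pattern (a sketch; the local path /js/jquery-1.11.1.min.js is a placeholder for wherever you host your fallback copy) is to load from the Google CDN and fall back to a local copy if the CDN is unreachable:
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script>
// If the CDN copy failed to load, window.jQuery is undefined,
// so write out a script tag pointing at your own server.
window.jQuery || document.write('<script src="/js/jquery-1.11.1.min.js"><\/script>');
</script>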

How to optimize javascript/jquery code to speed up its performance?

On one of my web projects I use a lot of JavaScript/jQuery code, which is pretty slow in browsers (Windows 7 x64), especially in IE.
I use 3 AJAX requests at the same time, on the Home page alone.
On the Search page I also use AJAX requests, which are fired on the scroll event, on any 'search tag' (a simple anchor tag) click event, etc., which in general makes data loading very slow.
I use jQuery plugins such as AnythingSlider, the jQuery cookies plugin, Raty (a rating plugin), Tipsy, jQuery coreUISelect, jScrollPane, mousewheel, etc. All those 3rd-party plugins I have minified and combined into a jquery.plugins.js file, which is almost 80KB.
I select a lot of DOM elements with jQuery. For example I use the following code:
$("#element")
instead of:
document.getElementById('element');
I also have one big CSS file of more than 5,000 lines, because I have combined all the 3rd-party jQuery plugins' CSS files into one file, for caching and fewer HTTP requests.
Well, I wonder: what can I do to optimize my code for better performance and faster page loads?
What kind of tools can I use to debug my JS code? I forgot to mention that when I refresh the page in Google Chrome, or in Firefox with Firebug or Chrome's native developer tools open, the page also loads very slowly. Sometimes Firefox even crashes.
Will selecting DOM elements with raw JS give me a better and faster way to work with the document, or should I keep the jQuery selecting? We're talking about roughly 50 elements.
Should I separate and then minify external plugins such as AnythingSlider, or is it better to have an 'all in one' JS file?
Is it better to separate the jQuery plugins' CSS from the main style.css? Because even hovering over an element and triggering its :hover state from the CSS file is pretty slow.
Well guys, I'm really counting on you.
I've been googling all night to find answers to my questions and really hope to find them here.
Thanks.
1) Minify it.
2) All the browsers come with built-in debugging tools.
3) Reduce access to the DOM by storing references in variables; don't look up the same element twice (see the sketch after this list).
4) Separate, and use a well-known CDN.
5) Separate, just because it's easier to manage.
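For point 3, a minimal sketch (the #results ID is made up): look the element up once, keep the reference, and chain.
// Wasteful: three separate DOM lookups for the same element.
$('#results').show();
$('#results').addClass('active');
$('#results').text('Loading...');

// Better: one lookup, one cached reference.
var $results = $('#results');
$results.show().addClass('active').text('Loading...');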
More jQuery tips here: jquery-performance-rules, and here: improve-your-jquery-25-excellent-tips.
Make sure all your static resources are cached with no disk lookup
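Presumably that means giving static files a far-future expiry, so the browser can reuse its cached copy without revalidating; a sketch of the response headers involved:
Cache-Control: public, max-age=31536000
Expires: Thu, 31 Dec 2037 23:55:55 GMT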
This library is pretty cool
You can compare selector performance here: http://jsperf.com/
Just set up your HTML code, include the jQuery lib, and add each selector you want to compare as a separate test case.
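For instance, two test cases comparing an ID selector against a class selector might look like this (the selector names are made up):
// Test case 1: ID selector, which maps to getElementById internally.
$('#search-box');

// Test case 2: class selector, slower in browsers that lack
// getElementsByClassName and force a full DOM scan.
$('.search-box');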
Many of the jquery-performance-rules still apply.
Also have a look at jquery-proven-performance-tips-tricks.
Since there are a lot of ways to improve code, especially with such big websites like yours, I think it will be more useful to just post the external links, since these are very nicely written and concise tutorials and tools. Here are some of them:
Yahoo's tutorial, one of the most complete tutorials I know
W3Schools' tutorial on using image sprites, especially useful when there are a lot of small images on the page
Tips on efficient jQuery usage
Firebug, Firefox plugin for debugging javascript, html, css
HTML validator, can be very useful to quickly find errors in markup
HTML compressor for minifying your HTML code (there are a lot of tools on the web for this purpose, it's just a matter of finding the best one)
CSS compressor, same as for HTML
I would also recommend an IDE for building web applications/websites, JetBrains' PhpStorm. It is commercial software, but definitely worth every cent, since it gives hints and tips for improvement as you type.
Raw performance may not be your issue. If you notice poor performance on page load, see this article I wrote specifically about minimizing javascript execution on page initialization.
http://blog.lavoie.sl/2013/12/optimizing-page-loads-by-reducing-javascript.html

Any significant reasons not to use AJAX?

I'm planning on making my web app quite AJAX heavy.
Before I do, I'm wondering what people think of such sites. Are there any significant reasons not to do this?
BTW, no need to mention SEO reasons. Also, I think the benefits make up for the fact that people without javascript will have a limited experience (though I'm open to being convinced otherwise).
It depends on how you plan to use it, IMO.
1) If the site will absolutely fail without it, you are excluding users with scripting disabled. I think it is fair in many scenarios to limit but not remove functionality for no-script users (for example, Google doesn't autocomplete searches if you have scripting disabled; it can't...but the basic search still works).
2) The right techniques need to be used in the right place. For example, an ASP.Net UpdatePanel will perform horribly if you dump thousands of elements into it.
3) I am becoming a bigger and bigger fan of content that is loaded in small blocks on the page that does not require a full refresh NOR does it require the whole page to be executed again. This lends itself to a SOA nicely, but is even more subject to the limits of #1.
4) EDIT: Don't create UI elements that (due to AJAX) behave unexpectedly. For example, I once built a dropdown list that only populated when it was toggled. Because of latency and DOM creation time, it wasn't responsive. Furthermore, the size would often change based on what elements were dynamically added. You could propose ways around these problems, but that was still an incorrect use of the technology.
AJAX is a tool for a job. If your application is best served by the tool, use it.
Edit -- just make sure the tradeoffs are well understood. Also, nothing in using AJAX prevents you from having a non-ajax backup ready for if you need it...
Obviously, there are many popular sites that rely on AJAX, so it certainly need not be avoided if used well. However, there are things to consider:
Will users need to be able to deep-link (i.e. save bookmarks to "pages" that have been created dynamically)? Will they need to use the back button to navigate? (Both of these things can be done using AJAX but they need to be explicitly considered, as naive implementations of AJAX can make them work poorly or not at all.)
Will the use of AJAX have a negative impact for disabled users (e.g. those using a screen reader)?
It depends on how you use AJAX. Pages where you have to wait while the page is rendered AND THEN wait another 10 seconds while the scripts execute and load the actual content make people angry. Pages which load fast and do a good job with AJAX are fine, of course.
Search engine optimization is one thing, but the ability to find things on your site is another. You will have to think of a way for Google to index your content. Hence, you will still have to have a "plain text" version, where links behave as links.
Deep linking can be an issue with AJAX-heavy sites. There are ways around it (e.g. using the url-hash technique), but those are not always fail-safe.
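A minimal sketch of the url-hash technique (loadSection is a hypothetical function that fetches and renders a section; older browsers without the hashchange event are one reason the technique is not fail-safe):
// Record the current view in the hash so it can be bookmarked.
function navigateTo(section) {
    window.location.hash = '#' + section;
}

// Restore the view on page load and on back/forward navigation.
function applyHash() {
    var section = window.location.hash.replace('#', '') || 'home';
    loadSection(section); // hypothetical fetch-and-render
}

window.onhashchange = applyHash;
applyHash();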
Users who are on the other side of the planet, with a 3×10⁸ m/s speed limit enforced by physics, will find your site sluggish and unresponsive if you have a lot of UI interaction going on with AJAX.
A typical packet turnaround time (round trip) from New Zealand to California is roughly 200 ms, and a user interface should respond within 100 ms to not feel sluggish.

Using DOMContentReady considered anti-pattern by Google

A Google Closure library team member asserts that waiting for the DOMContentReady event is bad practice.
The short story is that we don't want to wait for DOMContentReady (or worse, the load event) since it leads to bad user experience. The UI is not responsive until all the DOM has been loaded from the network. So the preferred way is to use inline scripts as soon as possible.
They still don't provide more details on this, so I wonder how they deal with the Operation Aborted dialog in IE. This dialog is the only critical reason I know to wait for the DOMContentReady (or load) event.
Do you know any other reason?
How do you think they deal with that IE issue?
A little explanation first: the point with inline JavaScript is to include it as soon as possible. However, that "possible" depends on the DOM nodes the script requires having been declared. For example, if you have a navigation menu that requires JavaScript, you would include the script immediately after the menu is defined in the HTML.
<ul id="some-nav-menu">
<li>...</li>
<li>...</li>
<li>...</li>
</ul>
<script type="text/javascript">
// Initialize menu behaviors and events
magicMenuOfWonder( document.getElementById("some-nav-menu") );
</script>
As long as you only address DOM nodes that you know have been declared, you won't run into DOM-unavailability problems. As for the IE issue, the developer must strategically include their scripts so that this doesn't happen. It's not really that big a concern, nor is it difficult to address. The real problem with this is the "big picture", as described below.
Of course, everything has pros and cons.
Pros
As soon as a DOM element is displayed to the user, whatever functionality that is added to it by JavaScript is almost immediately available as well (instead of waiting for the whole page to load).
In some cases, Pro #1 can result in faster perceived page load times and an improved user experience.
Cons
At worst, you're mixing presentation and business logic; at best, you're scattering your script includes throughout your presentation. Both can be difficult to manage, and neither is acceptable in my opinion, or in the opinion of a large portion of the community.
As eyelidlessness pointed out, if the scripts in question have external dependencies (a library, for example), then those dependencies must be loaded first, which will block page rendering while they are parsed and executed.
Do I use this technique? No. I prefer to load all script at the end of the page, just before the closing </body> tag. In almost every case, this is sufficiently fast for perceived and actual initialization performance of effects and event handlers.
Is it okay for other people to use it? Developers are going to do what they want/need to get the job done and to make their clients/bosses/marketing department happy. There are trade-offs, and as long as you understand and manage them, you should be okay either way.
The biggest problem with inline scripts is that they cannot be cached properly. If you have all your scripts stored away in one JS file that is minified (using a compiler), then that file can be cached just once, for the entire site, by the browser.
That leads to better performance in the long run if your site tends to be busy. Another advantage of having a separate file for your scripts is that you tend not to repeat yourself, declaring reusable functions wherever possible. DOMContentReady does not lead to bad user experience; at least it provides the user with the content up front, rather than making the user wait for the UI to load, which might end up being a big turn-off.
Also, using inline scripts does not ensure that the UI will be more responsive than with DOMContentReady. Imagine a scenario where you are using inline scripts for making AJAX calls. If you have one form to submit, that's fine; have more than one form and you end up repeating your AJAX calls, hence repeating the same script every time. In the end, the browser has to download more JavaScript code than it would have if the code were separated out into a JS file loaded when the DOM is ready (see the sketch below).
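For example (a sketch with made-up class names), one handler in an external, cacheable file covers any number of forms, where the inline approach would repeat the same script next to each form:
// One submit handler for every form carrying the ajax-form class,
// instead of an inline <script> block repeated after each form.
$('form.ajax-form').submit(function (e) {
    e.preventDefault();
    var $form = $(this);
    $.post($form.attr('action'), $form.serialize(), function (response) {
        $form.find('.result').html(response); // hypothetical result container
    });
});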
Another big disadvantage of inline scripts is that you need to maintain two separate code bases: one for development and another for production, and you must ensure that both are kept in sync. The development version contains the non-minified version of your code; the production version contains the minified version. This is a big headache in the development cycle: you have to manually replace all the code snippets hidden away in those bulky HTML files with their minified versions, and hope in the end that no code breaks! With a separate file, you just replace that one file with its compiled, minified version in the production codebase.
If you use YSlow, you see that:
Using external JavaScript and CSS files generally produces faster pages because the files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded each time the HTML document is requested. This reduces the number of HTTP requests but increases the HTML document size. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the HTML document size is reduced without increasing the number of HTTP requests.
I can vouch for inline scripts if and only if the code changes so often that keeping it in a separate JS file is immaterial and would in the end have the same impact as inline scripts. Still, such scripts are not cached by the browser. The only way the browser can cache scripts is if they are stored in an external JS file with an ETag.
However, this is not jQuery vs. Google Closure in any manner. Closure has its own advantages, but the Closure library makes it hard to have all your scripts in external files (not that it's impossible, it just makes it hard), so you tend to end up using inline scripts.
One reason to avoid inline scripts is that they require you to place any dependency libraries before them in the document, which will probably negate the performance gains of inlining. The best practice I'm familiar with is to place all scripts (in a single HTTP request!) at the very end of the document, just before </body>. This is because script loading blocks the current request and all sub-requests until the script is completely loaded, parsed, and executed.
Short of a magic wand, we'll always have to make these trade-offs. Thankfully, the HTML document itself is increasingly going to become the least resource-intensive request made (unless you're doing silly stuff like huge data: URLs and huge inline SVG documents). To me, the trade-off of waiting for the end of the HTML document seems the most obvious choice.
I think this advice is not really helpful. DOMContentReady may only be bad practice because it is currently over-used (maybe because of jQuery's easy-to-use ready event). Many people use it as the "startup" event for any JavaScript action, though even jQuery's ready() event was only meant to be the starting point for DOM manipulations.
By that logic, DOM manipulations on page load lead to bad user experience too! They are not necessary; the server side could have generated the initial page completely.
So maybe the Closure team members are just trying to steer in the opposite direction, to stop people from doing DOM manipulations on page load at all?
If the time it takes to parse, load, and render the layout (the point at which domready should fire), without considering image loads, is so long that it causes noticeable UI delay, you've probably got something really ugly building that page on the back end, and that's your real problem.
Furthermore, JavaScript before the end of the page halts HTML parsing/DOM evaluation until the JS is parsed and evaluated. Google should look to their Java before giving advice on JavaScript.
If you need to author your HTML on your own, Google's approach seems very awkward. They use the compiled approach and can spend their brain-cycles on other issues, which is sane given the complex UI of their apps. If you're heading for something with more relaxed requirements (read: almost everything else), maybe it's not worth the effort.
It's a pity that GWT only talks Java, though.
That's probably because Google doesn't care whether you have JavaScript; they require it for just about everything they do. If you use JavaScript as an addition on top of your already-functioning website, then loading scripts in DOMContentReady is just fine. The point is to use JavaScript to enhance the user's experience, not to exclude users who don't have it.

When should I use Inline vs. External Javascript?

I would like to know when I should include external scripts or write them inline in the HTML code, in terms of performance and ease of maintenance.
What is the general practice for this?
Real-world scenario: I have several HTML pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I:
write the bits of code that configure this script inline?
include all the bits in one file that's shared among all these HTML pages?
include each bit in a separate external file, one for each HTML page?
Thanks.
At the time this answer was originally posted (2008), the rule was simple: All script should be external. Both for maintenance and performance.
(Why performance? Because if the code is separate, it can easier be cached by browsers.)
JavaScript doesn't belong in the HTML code and if it contains special characters (such as <, >) it even creates problems.
Nowadays, web scalability has changed. Reducing the number of requests has become a valid consideration due to the latency of making multiple HTTP requests. This makes the answer more complex: in most cases, having JavaScript external is still recommended. But for certain cases, especially very small pieces of code, inlining them into the site’s HTML makes sense.
Maintainability is definitely a reason to keep them external, but if the configuration is a one-liner (or, in general, shorter than the HTTP overhead you would incur by making those files external), it's better performance-wise to keep them inline. Always remember that each HTTP request generates some overhead in terms of execution time and traffic.
Naturally this all becomes irrelevant the moment your code is longer than a couple of lines and is not really specific to one single page. The moment you want to be able to reuse that code, make it external. If you don't, look at its size and decide then.
If you only care about performance, most of the advice in this thread is flat-out wrong, and it is becoming more and more wrong in the SPA era, where we can assume that the page is useless without the JS code. I've spent countless hours optimizing SPA page-load times and verifying these results with different browsers. Across the board, the performance increase from re-orchestrating your HTML can be quite dramatic.
To get the best performance, you have to think of pages as two-stage rockets. These two stages roughly correspond to <head> and <body> phases, but think of them instead as <static> and <dynamic>. The static portion is basically a string constant which you shove down the response pipe as fast as you possibly can.
This can be a little tricky if you use a lot of middleware that sets cookies (these need to be set before sending HTTP content), but in principle it's just flushing the response buffer, hopefully before jumping into some templating code (Razor, PHP, etc.) on the server. That may sound difficult, but then I'm just explaining it wrong, because it's near trivial.
As you may have guessed, this static portion should contain all javascript inlined and minified. It would look something like
<!DOCTYPE html>
<html>
<head>
<script>/*...inlined jquery, angular, your code*/</script>
<style>/* ditto css */</style>
</head>
<body>
<!-- inline all your templates, if applicable -->
<script type='template-mime' id='1'></script>
<script type='template-mime' id='2'></script>
<script type='template-mime' id='3'></script>
Since it costs you next to nothing to send this portion down the wire, you can expect that the client will start receiving it somewhere around 5ms + latency after connecting to your server. Assuming the server is reasonably close, this latency could be between 20ms and 60ms. Browsers will start processing this section as soon as they get it, and the processing time will normally dominate the transfer time by a factor of 20 or more, which is now your amortized window for server-side processing of the <dynamic> portion.
It takes about 50ms for the browser (chrome, rest maybe 20% slower) to process inline jquery + signalr + angular + ng animate + ng touch + ng routes + lodash. That's pretty amazing in and of itself. Most web apps have less code than all those popular libraries put together, but let's say you have just as much, so we would win latency+100ms of processing on the client (this latency win comes from the second transfer chunk). By the time the second chunk arrives, we've processed all js code and templates and we can start executing dom transforms.
You may object that this method is orthogonal to the inlining concept, but it isn't. If you, instead of inlining, link to cdns or your own servers the browser would have to open another connection(s) and delay execution. Since this execution is basically free (as the server side is talking to the database) it must be clear that all of these jumps would cost more than doing no jumps at all. If there were a browser quirk that said external js executes faster we could measure which factor dominates. My measurements indicate that extra requests kill performance at this stage.
I work a lot with optimization of SPA apps. It's common for people to think that data volume is a big deal, while in truth latency, and execution often dominate. The minified libraries I listed add up to 300kb of data, and that's just 68 kb gzipped, or 200ms download on a 2mbit 3g/4g phone, which is exactly the latency it would take on the same phone to check IF it had the same data in its cache already, even if it was proxy cached, because the mobile latency tax (phone-to-tower-latency) still applies. Meanwhile, desktop connections that have lower first-hop latency typically have higher bandwidth anyway.
In short, right now (2014), it's best to inline all scripts, styles and templates.
EDIT (MAY 2016)
As JS applications continue to grow, and some of my payloads now stack up to 3+ megabytes of minified code, it's becoming obvious that at the very least common libraries should no longer be inlined.
Externalizing javascript is one of the yahoo performance rules:
http://developer.yahoo.com/performance/rules.html#external
While the hard-and-fast rule that you should always externalize scripts will generally be a good bet, in some cases you may want to inline some of the scripts and styles. You should however only inline things that you know will improve performance (because you've measured this).
I think the specific-to-one-page, short-script case is the only defensible case for inline script.
Actually, there's a pretty solid case for inline JavaScript. If the JS is small enough (a one-liner), I tend to prefer it inline, because of two factors:
Locality. There's no need to navigate to an external file to validate the behaviour of some JavaScript.
AJAX. If you're refreshing some section of the page via AJAX, you may lose all of your DOM handlers (onclick, etc.) for that section, depending on how you bound them. For example, using jQuery you can use the live or delegate methods to circumvent this (see the delegation sketch below), but I find that if the JS is small enough, it is preferable to just put it inline.
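A sketch of the delegation the answer mentions, using .on() (jQuery 1.7+, the successor to live and delegate; '.item-link' is a made-up selector):
// Direct binding: lost when the section's HTML is replaced via AJAX.
$('.item-link').click(function () { /* ... */ });

// Delegated binding: the handler lives on document, so links inside
// freshly inserted HTML still trigger it.
$(document).on('click', '.item-link', function () { /* ... */ });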
Another reason why you should always use external scripts is an easier transition to Content Security Policy (CSP). CSP defaults forbid all inline script, making your site more resistant to XSS attacks.
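For example, a policy header like the following (a sketch) allows scripts only from your own origin and a named CDN, and refuses any inline <script> block:
Content-Security-Policy: script-src 'self' https://ajax.googleapis.com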
I would take a look at the required code and divide it into as many separate files as needed. Every JS file would hold only one "logical set" of functions, e.g. one file for all login-related functions.
Then, during site development, on each HTML page you include only those that are needed.
When you go live with your site, you can optimize by combining every JS file a page needs into one file.
The only defense I can offer for inline JavaScript is that when using strongly-typed views with .NET MVC, you can refer to C# variables mid-JavaScript, which I've found useful.
On the point of keeping JavaScript external:
ASP.NET 3.5 SP1 recently introduced functionality to create a composite script resource (merging a bunch of JS files into one). Another benefit is that when web-server compression is turned on, downloading one slightly larger file gives a better compression ratio than many smaller files (also less HTTP overhead, fewer round trips, etc.). I guess this saves on the initial page load; after that, browser caching kicks in, as mentioned above.
ASP.NET aside, this screencast explains the benefits in more detail:
http://www.asp.net/learn/3.5-SP1/video-296.aspx
Three considerations:
How much code do you need (sometimes libraries are a first-class consumer)?
Specificity: is this code only functional in the context of this specific document or element?
Any code inside the document makes it longer and thus slower. Besides that, SEO considerations make it obvious that you should minimize internal scripting.
External scripts are also easier to debug using Firebug. I like to unit test my JavaScript, and having it all external helps. I hate seeing JavaScript in PHP code and HTML; it looks like a big mess to me.
Another hidden benefit of external scripts is that you can easily run them through a syntax checker like jslint. That can save you from a lot of heartbreaking, hard-to-find, IE6 bugs.
In your scenario it sounds like writing the external stuff in one file shared among the pages would be good for you. I agree with everything said above.
During early prototyping keep your code inline for the benefit of fast iteration, but be sure to make it all external by the time you reach production.
I'd even dare to say that if you can't place all your JavaScript externally, then you have a bad design on your hands, and you should refactor your data and scripts.
Google has included load times in its page-ranking measurements. If you inline a lot, it will take longer for the spiders to crawl through your page, and this may influence your ranking if you have too much inlined. In any case, different strategies may affect your ranking.
Well, I think you should use inline scripts when making single-page websites, as the scripts will not need to be shared across multiple pages.
Internal JS pros:
It's easier to manage and debug.
You can see what's happening.
Internal JS cons:
People can change it around, which can really annoy you.
External JS pros:
No changing around.
You can look more professional (or at least that's what I think).
External JS cons:
Harder to manage.
It's hard to know what's going on.
Always try to use external JS, as inline JS is always difficult to maintain.
Moreover, the majority of developers recommend using JS externally, so it is the professional choice.
I myself use external JS.
