I'm writing some tests for a CMS, and I need to know if a certain classname is in the document.
So I went to investigate what is the fastest way to check if a classname exists in the document. You can see my benchmarks here: http://jsperf.com/if-class-exists
If you run the test, you'll see that 'getElementsByClassName' is by far the fastest (around 99% faster). This made me wonder whether jQuery even checks if a native class selector is available.
This leaves me wondering what is the best approach, as it is crucial for me to test classnames really fast.
I think you've already answered your own question with the jsPerf. If speed is really important to you in a particular operation, and this test is a valid measure of what you need, then do your own feature test for getElementsByClassName and use it if available, since it shows as roughly 400x faster in your jsPerf.
jQuery calls have a reasonable amount of setup overhead that you can see if you ever step through one. I could imagine that in a small document this setup overhead might skew your jsPerf results in a way that wouldn't be seen as much in a document with a much larger DOM - so I'd suggest you verify your results with a much larger DOM that is more typical of the documents you will be calling this on.
According to this doc, jQuery should be using getElementsByClassName for a simple class selector.
Edit: I stepped through this function call in jQuery: $('.select'). It is using getElementsByClassName internally, but there is a LOT of overhead before it gets there (including running a complicated regular expression) because of jQuery's incredibly general nature (it has to test a lot of things before it figures out that what you want is a simple class name selector).
I thought that if you added a big DOM to your jsPerf, the performance gap might narrow because the jQuery setup overhead would be a much smaller part of the overall execution time, but I didn't see much change. A bare getElementsByClassName('selector') call (note: no leading dot - the native method takes a class name, not a CSS selector) is just way faster than jQuery('.selector').
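A minimal sketch of the feature test suggested above might look like this (the class name is just a placeholder, and the jQuery fallback assumes jQuery is loaded on the page):

```javascript
// A sketch of the suggested feature test: use the fast native method
// when the browser provides it, and fall back to jQuery otherwise.
// The class name passed in ("active", say) is just a placeholder.
function classExists(name) {
  if (document.getElementsByClassName) {
    // Note: the native method takes a bare class name, no leading dot.
    return document.getElementsByClassName(name).length > 0;
  }
  return jQuery('.' + name).length > 0;
}
```

This keeps the fast path for modern browsers while still working where the native method is missing.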
Related
So why are we supposed to cache jQuery objects?
In the following scenario:
var foo = $('#bar');
foo.attr('style','cool');
foo.attr('width','123');
$('#bar').attr('style','cool');
$('#bar').attr('width','123');
Why is the first option so much better than the second option?
If it's because of performance, how exactly does caching reduce the work done?
Because the jQuery function has a lot of code in it, which involves unnecessary overhead if you execute it more than once with the same inputs expecting the same outputs. By caching the result, you store a reference to the exact element or set of elements you're looking for so you don't have to search the entire DOM again (even if it's a fairly fast search). In many cases (simple pages with small amounts of code) you won't notice a difference, but in the cases where you do it can become a big difference.
You can see this in action by testing your example in jsPerf.
You can also think of it as an example of the Introduce Explaining Variable refactoring pattern for readability purposes, particularly with more complex examples than the one in the question.
The jQuery selector $('#foo') searches the entire DOM for the matching element(s) and then returns the result(s).
Caching these results means that jQuery doesn't have to search the DOM every time the selector is used.
EDIT: for an ID selector like this, document.getElementById() is what jQuery uses under the hood to search the DOM - but with jQuery's extra overhead on top.
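The cached pattern from the question can be sketched as a small helper (the id and attribute values are placeholders, and jQuery is assumed to be loaded):

```javascript
// Cached lookup: $('#' + id) runs once, and both .attr() calls reuse
// the same jQuery object instead of searching the DOM again.
// The id 'bar' and the attribute values are placeholders.
function styleCached(id) {
  var foo = $('#' + id); // one DOM search
  foo.attr('style', 'cool');
  foo.attr('width', '123');
}
```

The uncached version would trigger two DOM searches for the same element; the cached one triggers exactly one.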
Using jQuery, if I am writing code like this
$('blah').doSomething();
//bunch of code
$('blah').doSomethingElse();
//bunch of code
$('blah').doOtherStuff();
Is a new jQuery object being created each time I say $('blah') ?
If so, would it reduce object creation overhead to do something like this instead:
var blah = $('blah');
blah.doSomething();
//bunch of code
blah.doSomethingElse();
//bunch of code
blah.doOtherStuff();
Make sense?
Absolutely correct!
Why Cache
Another benefit of caching is code maintainability. If the selector appears in only one place, it only needs to be changed in one place. Trust me, that makes life easier when maintaining code.
Chain
Although you can go one step further (where relevant), and chain your calls:
$('blah')
.doSomething()
.doSomethingElse()
.doOtherStuff();
This is slightly better for two reasons. Firstly, you're not using the extra variable, so you save a tiny amount of memory. Secondly, the engine doesn't have to look the variable up again for each call, so it's marginally quicker.
So, why do people still [over]use $()
One reason people use lots of $() is because they don't know any better.
Another reason is that element selection used to be slower. But as with most things in the browser, querying elements with selectors is quickly being optimised (document.querySelectorAll), which means caching is not such a big deal any more - so maybe, with that in mind, people allow themselves not to cache.
Finally, there are benchmarks hanging around (one two from a quick google) that try to claim that it doesn't matter, or that it is maybe even faster not to cache. The problem with most of these benchmarks, and why I suggest you be very wary about drawing conclusions from them, is that the DOM in the example is not real-world; it's overly simplistic. Secondly, the selectors are simplistic as well. So of course querying will be lightning fast, and caching won't make a big difference - but I don't find them at all conclusive.
That said, if your example is similar to those of the benchmarks, then maybe you can draw your own circumstantial conclusions, and caching might just be a nuisance.
As I have learnt, it's better to cache values in objects which we need repeatedly. For example, doing
var currentObj = myobject.myinnerobj.innermostobj[i]
and using 'currentObj' for further operations is better for performance than just
myobject.myinnerobj.innermostobj[i]
everywhere, say in loops. I am told it saves the script from looking up inside the objects every time.
I have around 1000 lines of code, the only change I did to it with the intention of improving performance is this (at many locations) and the total time taken to execute it increased from 190ms to 230ms. Both times were checked using firebug 1.7 on Firefox 4.
Is what I learnt true (meaning either I am overusing it or mis-implemented it)? Or are there any other aspects to it that I am unaware of?
There is an initial cost for creating the variable, so you have to use the variable a few times (depending on the complexity of the lookup, and many other things) before you see any performance gain.
Also, how JavaScript is executed has changed quite a bit in only a few years. Nowadays most browsers compile the code in some form, which changes what's performant and what's not. It's likely that the performance gain from caching references is smaller now than when the advice was written.
The example you have given appears to simply be Javascript, not jQuery. Since you are using direct object property references and array indices to existing Javascript objects, there is no lookup involved. So in your case, adding var currentObj... could potentially increase overhead by the small amount needed to instantiate currentObj. Though this is likely very minor, and not uncommon for convenience and readability in code, in a long loop, you could possibly see the difference when timing it.
The caching you are probably thinking of has to do with jQuery objects, e.g.
var currentObj = $('some_selector');
Running a jQuery selector involves a significant amount of processing because it must look through the entire DOM (or some subset of it) to resolve the selector. So doing this, versus running the selector each time you refer to something, can indeed save a lot of overhead. But that's not what you're doing in your example.
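To make the distinction concrete, here is a plain-JavaScript sketch using the object names from the question (the numbers are made up):

```javascript
// Plain object access: caching the element in currentObj saves
// re-walking the property chain inside the loop body, but the saving
// is tiny compared to re-running a DOM query would be.
var myobject = { myinnerobj: { innermostobj: [10, 20, 30] } };

var total = 0;
for (var i = 0; i < myobject.myinnerobj.innermostobj.length; i++) {
  var currentObj = myobject.myinnerobj.innermostobj[i]; // cached once per iteration
  total += currentObj;
}
```

With a plain object like this, both the cached and uncached forms are fast; the caching advice really pays off when the lookup being cached is an expensive DOM search.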
See this fiddle:
http://jsfiddle.net/SGqGu/7/
In Firefox and Chrome (I didn't test IE), the time is identical in pretty much any scenario.
Is what I learnt true (meaning either I am overusing it or mis-implemented it)? Or are there any other aspects to it that I am unaware of?
It's not obvious if either is the case because you didn't post a link to your code.
I think most of your confusion comes from the fact that JavaScript developers are mainly concerned with caching DOM objects. DOM object lookups are substantially more expensive than looking up something like myobj.something.something2. I'd hazard a guess that most of what you've been reading about the importance of caching are examples like this (since you mentioned jQuery):
var myButton = $('#my_button');
In such cases, caching the DOM references can pay dividends in speed on pages with a complex DOM. With your example, it'd probably just reduce the readability of the code by making you have to remember that currentObj is just an alias to another object. In a loop, that'd make sense, but elsewhere, it wouldn't be worth having another variable to remember.
I have embarked on a mission to start using jQuery and JavaScript properly. I'm sad to say that historically I have fallen into the class of developer that makes a lot of really terrible mistakes with jQuery (polluting the global namespace, not caching jQuery selectors, and much more fun stuff - some of which I'm sure I have yet to discover).
The fact of the matter is that jQuery allows people to easily implement some really powerful functionality. However, because everything "just works", performance concerns and best practices immediately take a back seat.
As I've been reading articles on JavaScript and jQuery performance and best practices, I've learned just enough to fully realize how inexperienced I really am. I'm left feeling frustrated because I'm unsure of when I should be using jQuery or just plain JavaScript. The main reason jQuery is so appealing to me is that it takes care of browser compatibility. From what I understand though, there are things you can do with jQuery that you can also do with regular JavaScript that aren't subject to compatibility issues. Basically I'm looking for a guide that explains when using jQuery over regular JavaScript is wise.
A few questions to recap:
Are there parts of jQuery that you shouldn't use due to performance?
What are the parts of jQuery that you should always use to avoid browser inconsistencies?
What are the parts of jQuery that you shouldn't use because there is a reliable and faster way to do the same thing natively in JavaScript?
What are the parts of jQuery that offer multiple ways to do the same thing, with one way being more efficient? For example, the :not() selector versus the .not() method.
I'm looking for existing articles, blog posts, books, videos, etc. I know where the docs are. I read them frequently. I'm hoping for more of an overview that addresses the above issues.
Thanks!
EDIT:
Check out this very similar question: When to use Vanilla JavaScript vs. jQuery?
Wow, I simply cannot believe no one has mentioned storing objects in variables for future use.
Consider this scenario.
You have a complex menu that you're going to write 100 lines of jQuery for.
VERY OFTEN I see something like
$(".menu").fadeIn("slow");
$(".menu li").each(function () { /* bla bla */ });
$(".menu li").hover(function () { /* more bla bla */ });
if ($(".menu").attr('id') == 'menu1') { /* some hoo ha */ }
If you're going to reuse an element in jQuery, ALWAYS store it in a variable. It's also common practice to add a dollar sign ($) before the variable name to indicate a jQuery object.
var $menu = $(".menu"); // store once and reuse a million times
var $menu_item = $("li", $menu); // same here
$menu.fadeIn("slow");
$menu_item.each(function () { /* bla bla */ });
$menu_item.hover(function () { /* more bla bla */ });
if ($menu.attr('id') == 'menu1') { /* some hoo ha */ }
I definitely say
use the event model as it abstracts the differences across browsers and also provides a means to raise your own custom events too.
don't use .each() or $.each() unnecessarily. Sometimes it can help as it introduces a closure, but often a simple loop will suffice.
the only way to know whether a complicated selector string or a bunch of chained function calls is going to be faster is to benchmark all approaches.
use event delegation when binding the same event handler to more than three elements (I'll see if I can dig out the resource for this; I seem to remember an article that benchmarked direct binding versus delegation on a number of different factors and found three to be the magic number).
Above all else, don't worry about performance unless it's a problem. 200ms compared to 300ms, who'll know the difference? 200ms compared to 1000ms, maybe time to look at optimizing something :)
be as specific as possible with your selectors and help those poor older versions of IE out.
Several of your questions focus on performance.
As a rule, jQuery cannot possibly perform better than the underlying native Javascript. jQuery does not interact directly with the browser or operating system; it's just providing a wrapper around built-in Javascript functions. So at an absolute minimum calling a jQuery function incurs the overhead of an extra function call.
In many cases, jQuery is indeed doing quite a bit of heavy lifting in order to produce a result, when hand-written "pure" Javascript might be able to avoid that work.
The point of the framework is to make the programmer's life easier, and historically everything that's ever made programmers' lives easier cost performance. Hand-written machine language is almost universally more efficient than the best compiled code ever assembled.
So the best answer to your questions about performance is: don't worry about it. If you ever encounter a performance problem, then consider jQuery as one possible target for optimization.
As far as browser inconsistencies, one of the major purposes of the framework is to avoid them entirely. There have been bugs historically where certain features didn't work in one browser or another, but these were bugs specific to a particular version of the library. So avoiding them entirely wouldn't be quite the right solution. And trying to identify them here (rather than jQuery's own bug reports) would make this discussion almost instantly out of date.
Nowadays, the primary rule of thumb with JavaScript is that it has wicked-fast execution time (in non-IE modern browsers), but DOM access/manipulation is crazy slow. The faster the JS runtimes get, the more the DOM becomes the bottleneck.
As a rule, you shouldn't really worry about performance until it becomes an issue, since most code doesn't need to be fast, and you usually don't know where the problems will be until you test it. That being said, try to minimize DOM interaction wherever you can.
As a side note, idiomatic jQuery is just JavaScript used the right way. Favor composing functions over OOP, use closures for encapsulation, don't mix JavaScript handlers (or, heaven forbid, script blocks) into your HTML, and treat objects as property bags that you can attach or detach things to at will.
I'm no expert but I learnt a couple of things.
Don't abuse HTML attributes - that is, don't store your intended roll-over images in rel/rev attributes; use a background image instead. This also helps with the performance of roll-overs in IE, as IE doesn't like it when you change the src attribute on the fly.
Also, the hoverIntent plugin is very useful to have instead of just using .hover() :)
My two cents: do not underestimate the power of the jQuery team (Resig and Co.) - their intent is not to lead you easily into performance gotchas. That being said, here's one: when you use a selector (which is the query in jQuery), do ensure you use a [context].
So let's say you have a table with 243 rows - and you have not tagged each tr with an id (because you are cool). So you click, say, a button in a row, firing an event. The event handler for the button needs to search the current row for a select widget. The innards of the click() event might have these lines:
var tr = $(this).closest('tr'); //where $(this) is your button
$('td select', tr).css('color', 'red');
So the last line does a search for select elements in the context of a table row (tr). This search is meant to be faster than searching the entire table (or the entire page) for an id or something similar.
What is also implied here is that I'm putting my trust in the jQuery team that their implementation of the jQuery.closest() method is fast and efficient. So far, I've no reason not to have this trust.
This question already has answers here:
Closed 12 years ago.
Possible Duplicates:
Javascript (jQuery) performance measurement and best practices (not load time)
Good ways to improve jQuery selector performance?
Hello,
This might be a bit of a vague or general question, but I figure it might be able to serve as a good resource for other jQuery-ers.
I'm interested in common causes of slow running jQuery and how to optimize these cases.
We have a good amount of jQuery/JavaScript performing actions on our page... and performance can really suffer with a large number of elements.
What are some obvious performance pitfalls you know of with jQuery? What are some general optimizations a jQuery-er can do to squeeze every last bit of performance out of his/her scripts?
One example: a developer may use a selector to access an element that is slower than some other way.
Thanks
Not caching queries
I see something like this way too often (exaggerated to make a point):
$("div#container ul > li a.myselector").imagine();
$("div#container ul > li a.myselector").this();
$("div#container ul > li a.myselector").code();
$("div#container ul > li a.myselector").in();
$("div#container ul > li a.myselector").a();
$("div#container ul > li a.myselector").loop();
Binding events to all rows in a table...when the table has 1000+ rows
This smells bad:
$("table tr").click(function(){}).hover(function(){},function(){});
or worse (function declarations inside a loop [yes, each() is a loop]):
$("table tr").each(function(){
$(this).click(function(){});
$(this).hover(function(){},function(){});
});
instead you can do:
$("table").delegate("tr", "click", function(){}); //etc. (note: in .delegate(), the selector comes before the event type)
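The same delegation idea can be sketched without jQuery (a hedged sketch; the function name and callback wiring are hypothetical):

```javascript
// Event delegation without jQuery: one listener on the table element
// decides, per click, which row was actually hit - instead of binding
// a handler to every one of the 1000+ rows.
function delegateRowClicks(table, onRowClick) {
  table.addEventListener('click', function (event) {
    // Walk up from the clicked element to its containing row.
    var row = event.target.closest('tr');
    if (row) {
      onRowClick(row);
    }
  });
}
```

Whether there are 10 rows or 10,000, only a single handler is ever attached.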
jQuery performance usually comes down to selector performance. The following are guidelines I provide to the team I'm currently working with:
Cache your selectors
Try to use IDs instead of classes, e.g. $('#myDiv')
Qualify your classes with the type of element, e.g. $('div.content')
2019 edit:
Modern browsers expose a very efficient getElementsByClassName() method that returns the elements having a given class. As seen in this answer.
So in modern browsers, $('.content') is faster than $('div.content')
$('.content') // 925,600 ops/s ±0.9%
$('div.content') // 548,302 ops/s ±1.2% --- 40.76% slower
Provide a scope for your selector, especially if it is nested inside another element, e.g. $('div.content', this)
Use chaining on selected elements, e.g. $('div.content').css('display', 'block').show();
There are also non-selector based optimisations such as
Upgrade to the latest version of jQuery! Each release seems to bring more performance enhancements
Make sure you are using the minified version of jQuery.
Minify your own jQuery code (the Google Closure Compiler is the best, imho)
Beware of poorly written third party plug-ins
Move your jQuery script tags (including jQuery) to the bottom of the page - this gives a faster page load time.
Understand the difference between statically bound events and live events (using the live or delegate functions)
When appending to the DOM, try to group all the code into one insert instead of lots of little appends
Also, be sure to read up on general JavaScript performance optimisations, as the two go hand in hand and can make a huge difference.
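The "group your appends into one insert" tip above can be sketched with a document fragment (a hedged sketch; the function name and item texts are placeholders):

```javascript
// Build all the nodes off-DOM in a fragment, then insert them into
// the live document exactly once. The container is any element the
// caller already holds; the item texts are placeholders.
function appendItems(container, items) {
  var fragment = document.createDocumentFragment();
  items.forEach(function (text) {
    var li = document.createElement('li');
    li.textContent = text;
    fragment.appendChild(li); // off-DOM, so no reflow here
  });
  container.appendChild(fragment); // single DOM insertion
}
```

One insertion means at most one reflow, no matter how many items are added.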
In some cases, an overuse of jQuery itself can be cause for slower performance. For example, $('#selector').html(<VALUE>) has more overhead than document.getElementById('selector').innerHTML = <VALUE>;
Usually the biggest single thing you can do is improve your DOM selectors to limit the amount of querying/walk-throughs when carrying out actions. I'd suggest googling "improve jquery performance" for the tons of blog articles on the topic since the question is vague. Here are two that cover the points I mostly think about when doing my own jquery coding:
http://jonraasch.com/blog/5-performance-tuning-tricks-for-jquery
http://hungred.com/useful-information/jquery-optimization-tips-and-tricks/
The most obvious performance bottleneck is non-cached queries:
$('.selector').hide();
// and later
$('.selector').css("height", "23px");
// and even later still
$('.selector').attr("src", "http://blah.com");
This is a very primitive example, but re-matching many elements and looping each time can drastically reduce performance, especially in browsers that don't support querySelectorAll, or where complex selectors that the browser can't handle natively are used (thus requiring Sizzle to do all the DOM iteration). Storing the matched collection in a variable is the smart thing to do:
var $sel = $(".selector");
$sel.hide();
// and later
$sel.css("height", "23px");
// and even later still
$sel.attr("src", "http://blah.com");
Well, jQuery performance is synonymous with JavaScript performance. There are lots of articles about this. Check out some tips here.
There are also some good slides on this by Nicholas Zakas (a Yahoo! front-end engineer and the author of books on JavaScript performance) here
Here are the important tips:
Limit DOM manipulation and DOM parsing - parse by ID when you can, because it is fastest; parsing by class is way slower.
Limit what you do inside loops.
Check variable scope and closures, and use local variables.
Check your data access methods - it is best to access data from object literals or a local variable.
The deeper a property is within an object, the longer it takes to access.