Efficiency of repeated checks with jQuery selector - javascript

Does jQuery cache selector lookups?
I am wondering whether this code
if ($("#ItemID").length) {
doSomethingWith($("#ItemID"));
}
will be significantly slower than this code
item = $("#ItemID");
if (item.length) {
doSomethingWith(item);
}
What if you're extracting much more of the DOM, e.g. $("div") rather than just $("#ItemID")? Is the answer the same?
External references/explanation would be helpful, rather than just opinion.

According to the jQuery Learning Center, "jQuery doesn't cache elements for you. If you've made a selection that you might need to make again, you should save the selection in a variable rather than making the selection repeatedly."
This is particularly important when your selector is slow by its nature. For example, $("#foo") makes a call under the covers to document.getElementById() which should be very fast. However, a selector like $("a[rel$='thinger']") might be significantly slower on an older browser.
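For example, if you'll need the result of a slower selector more than once, run it once and keep it. A minimal sketch reusing the selector above; the "highlight" class is just an illustrative assumption:
var $thingers = $("a[rel$='thinger']"); // potentially slow attribute selector
if ($thingers.length) {
    $thingers.addClass("highlight");    // reuses the saved selection instead of re-querying
}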
Remember that jQuery supports chaining, so you don't always have to introduce a variable to save the selection, you can just chain sequential calls together. For example, in this code I don't introduce a variable just to save the result of .find("img"), rather I chain three calls to prop and one to show.
clusterBlock.find("img")
    .prop("src", clusterElt.imageUri)
    .prop("alt", clusterElt.description)
    .prop("id", clusterElt.imageId)
    .show();
Finally, remember that a saved selection is not updated as the DOM changes. If I get all elements with class "foo" and save that in a variable, then some other piece of code adds and removes a couple of "foo" elements, then my saved selection will be missing the new elements and contain the references to the removed elements.
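A small sketch of that staleness (the markup here is an illustrative assumption):
var $foos = $(".foo");                        // suppose this matches 3 elements
$("body").append('<div class="foo"></div>');  // some other code adds a new .foo element
console.log($foos.length);                    // still 3 -- the saved selection doesn't see the new element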

This
var item = $("#ItemID");
if (item.length) {
    doSomethingWith(item);
}
will be faster, but only marginally. doSomethingWith will simply follow the reference to where the jQuery object is already stored, instead of running the selector again.
Turns out people do cache selectors btw

Related

How should I use Variables and jQuery Dom navigation?

I was just wondering which is the correct or most efficient way of navigating through the DOM using variables.
For example, can I concatenate selectors
var $container = '.my-container';
$($container).addClass('hidden');
$($container + ' .button').on('click', function(){
    //something here
});
or should I use the jQuery traversal functions
var $container = $('.my-container');
$container.addClass('hidden');
$container.children('.button').on('click', function(){
    //something here
});
Is there a different approach, is one best, or can you use them at different times?
The $ prefix is usually used only for variables that hold an actual jQuery object. You generally shouldn't prefix a variable with it unless it really is a jQuery object.
Beyond that little bit though, performance-wise, your second bit of code is going to be faster. I made an example jsperf here: http://jsperf.com/test-jquery-select
The reason the second bit of code is faster is that (if I remember correctly) jQuery caches the selection, and then any actions performed on that selection are scoped to it. When you use .find (which is really what you meant in your code, not .children), instead of trying to find elements through the entire document, it only tries to find them within the scope of whatever .my-container is.
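In other words, something like this (a hedged sketch; the 'active' class is an assumption, the other names come from the question):
var $container = $('.my-container');            // one document-wide lookup
$container.find('.button').addClass('active');  // only searches inside .my-container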
The time when you wouldn't want to use the second pattern is when you expect the dom to change frequently. Using a previous selection of items, while efficient, is potentially a problem if more buttons are added or removed. Granted, this isn't a problem if you're simply chaining up a few actions on an item, then discarding the selection anyway.
Besides all of that, who really wants to continuously type $(...)? It's awkward.

Does jQuery cache elements internally?

I know jQuery doesn’t cache collections of elements; for example, calling:
$('.myclass').html('hello');
$('.myclass').html('bye');
Will make jQuery climb the DOM twice.
But how about cached DOM nodes?
var elems = document.querySelectorAll('.myclass');
$(elems).html('hello');
$(elems).html('bye');
Will jQuery cache those internally, or will they be equally slow as the first example?
To clarify: will jQuery keep a reference to elems and cache $(elems) internally so it won’t have to apply the same $() wrapper every time?
Something like:
cache = {}
constructor = function(collection)
    if collection in cache
        return cache[collection]
    else construct(collection)
Assuming I've understood your question correctly, then no, jQuery won't keep a reference to the selected nodes beyond the statement that uses them:
$('.myclass').html('hello'); //Select all .myclass elements, then change their HTML
$('.myclass').html('bye'); //Select all .myclass elements, then change their HTML again
If you maintain a reference to those selected nodes separately, it will be faster:
var elems = document.querySelectorAll('.myclass'); //Select all .myclass elements
$(elems).html('hello'); //Change their HTML (no need to select them)
$(elems).html('bye'); //Change their HTML (no need to select them)
The difference won't be massive (unless your DOM is very complicated), but there will be a difference.
Update
"will jQuery keep a reference to elems and cache $(elems) internally so it won’t have to apply the same $() wrapper every time?"
No, it won't. As stated above, the reference to the matched set of elements will not be maintained beyond the statement to which it applies. You can improve the performance of your code by keeping a reference to jQuery objects that are used throughout, rather than selecting them again each time, or even wrapping a stored set of native DOM nodes in a jQuery object each time.
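For example, wrapping the stored nodes once and reusing the resulting jQuery object (a sketch built from the question's own code):
var elems = document.querySelectorAll('.myclass'); // select once
var $elems = $(elems);                              // wrap once
$elems.html('hello');
$elems.html('bye');                                 // reuses the same jQuery object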
If "cache" means "keep the DOM elements in the internal collection for that jQuery object" .. then yes.
Imagine
jq = $(elementListOrSelector)
where jq[0] through jq[jq.length - 1] evaluate to the respective DOM elements. For instance, jq[0] evaluates to the first DOM element represented by the jQuery object, if any. Note that this collection is not magically changed once it has been built (and the way in which it was built does not matter).
However there is no "cache" outside of simply keeping the immediate results for that particular jQuery object.
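A short sketch of that internal collection (the class name is an assumption):
var jq = $('.myclass');
console.log(jq.length); // number of matched DOM elements in this object's collection
console.log(jq[0]);     // the first matched DOM element, or undefined if nothing matched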

CSS Selectors performance, DOM Parsing

My question is related to when DOM parsing gets triggered. I would like to know why it's faster to use a CSS ID selector than a class selector, when the DOM tree has to be parsed again, and what tricks and performance enhancements I should use. Also, someone told me that doing something like
var $p = $("p");
$p.css("color", "blue");
$p.text("Text changed!");
instead of
$("p").css("color", "blue");
$("p").text("Text changed!");
improves performance. Is this true for all browsers? Also, how do I know if my DOM tree has been re-parsed?
Well, an #id selector is faster than class selectors because: (a) there can only be one element with a given id value; (b) browsers can hold a map of id -> element, so the #id selector can work as quickly as a single map lookup.
Next, the first option suggested above is definitely faster, as it avoids the second lookup, thereby reducing the total selector-based lookup time by a factor of 2.
Last, you can use Chrome Developer Tools' Selector Profiler (in the Profiles panel) to profile the time it takes a browser to process selectors in your page (match + apply styles to the matching elements.)
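As a rough sketch of the id-map point above (the id and class names are assumptions used for illustration):
var header = document.getElementById('header');        // effectively a single lookup by id
var items  = document.getElementsByClassName('item');  // the engine may have to match many elements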
An ID selector is faster than a class selector because there is only one element with an ID but many elements could share a class and they have to be searched.
The code below needlessly queries the DOM twice, so of course it will be slower:
$("p").css("color", "blue");
$("p").text("Text changed!");
I encourage you to make your own performance tests whenever you have a doubt. You can get more info on how to do that here: How do you performance test JavaScript code? Once you've tested performance on your own, you'll never forget the results.
In particular, the execution of the $() function on a given jquery selector must obtain the matching DOM nodes. I'm not sure exactly how this works but I'm guessing it is a combination of document.getElementById(), document.getElementsByTagName() and others. This has a processing cost, no matter how small it may be, if you call it only once and then reuse it you save some processing time.
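If you want to measure this yourself, here is a quick sketch using console.time (the iteration count and selector are illustrative):
console.time('repeated selector');
for (var i = 0; i < 1000; i++) { $('p').css('color', 'blue'); }
console.timeEnd('repeated selector');

console.time('cached selector');
var $p = $('p');
for (var j = 0; j < 1000; j++) { $p.css('color', 'blue'); }
console.timeEnd('cached selector');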

Why should y.innerHTML = x.innerHTML; be avoided?

Let's say that we have a DIV x on the page and we want to duplicate ("copy-paste") the contents of that DIV into another DIV y. We could do this like so:
y.innerHTML = x.innerHTML;
or with jQuery:
$(y).html( $(x).html() );
However, it appears that this method is not a good idea, and that it should be avoided.
(1) Why should this method be avoided?
(2) How should this be done instead?
Update:
For the sake of this question let's assume that there are no elements with ID's inside the DIV x.
(Sorry I forgot to cover this case in my original question.)
Conclusion:
I have posted my own answer to this question below (as I originally intended). Now, I also planned to accept my own answer :P, but lonesomeday's answer is so amazing that I have to accept it instead.
This method of "copying" HTML elements from one place to another is the result of a misapprehension of what a browser does. Browsers don't keep an HTML document in memory somewhere and repeatedly modify the HTML based on commands from JavaScript.
When a browser first loads a page, it parses the HTML document and turns it into a DOM structure. This is a relationship of objects following a W3C standard (well, mostly...). The original HTML is from then on completely redundant. The browser doesn't care what the original HTML structure was; its understanding of the web page is the DOM structure that was created from it. If your HTML markup was incorrect/invalid, it will be corrected in some way by the web browser; the DOM structure will not contain the invalid code in any way.
Basically, HTML should be treated as a way of serialising a DOM structure to be passed over the internet or stored in a file locally.
It should not, therefore, be used for modifying an existing web page. The DOM (Document Object Model) has a system for changing the content of a page. This is based on the relationship of nodes, not on the HTML serialisation. So when you add an li to a ul, you have these two options (assuming ul is the list element):
// option 1: innerHTML
ul.innerHTML += '<li>foobar</li>';
// option 2: DOM manipulation
var li = document.createElement('li');
li.appendChild(document.createTextNode('foobar'));
ul.appendChild(li);
Now, the first option looks a lot simpler, but this is only because the browser has abstracted a lot away for you: internally, the browser has to convert the element's children to a string, then append some content, then convert the string back to a DOM structure. The second option corresponds to the browser's native understanding of what's going on.
The second major consideration is to think about the limitations of HTML. When you think about a webpage, not everything relevant to the element can be serialised to HTML. For instance, event handlers bound with x.onclick = function(); or x.addEventListener(...) won't be replicated in innerHTML, so they won't be copied across. So the new elements in y won't have the event listeners. This probably isn't what you want.
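Roughly speaking, the first option amounts to something like this (a sketch of the idea, not the browser's actual implementation):
var html = ul.innerHTML;                  // serialize every existing child to a string
ul.innerHTML = html + '<li>foobar</li>';  // re-parse the string and rebuild all the children
// Side effect: the old child nodes are discarded and recreated,
// so any event handlers bound to them are lost.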
So the way around this is to work with the native DOM methods:
for (var i = 0; i < x.childNodes.length; i++) {
    y.appendChild(x.childNodes[i].cloneNode(true));
}
Reading the MDN documentation will probably help to understand this way of doing things:
appendChild
cloneNode
childNodes
Now the problem with this (as with option 2 in the code example above) is that it is very verbose, far longer than the innerHTML option would be. This is when you appreciate having a JavaScript library that does this kind of thing for you. For example, in jQuery:
$('#y').html($('#x').clone(true, true).contents());
This is a lot more explicit about what you want to happen. As well as having various performance benefits and preserving event handlers, for example, it also helps you to understand what your code is doing. This is good for your soul as a JavaScript programmer and makes bizarre errors significantly less likely!
You can end up duplicating IDs, which need to be unique.
A jQuery clone call like $(element).clone(true) will clone data and event listeners, but IDs will also be cloned. So, to avoid duplicate IDs, don't use IDs on items that need to be cloned.
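For instance (the ids #widget and #target are assumptions used only for illustration):
var $copy = $('#widget').clone(true); // clones data and event handlers as well
$copy.appendTo('#target');            // the page now has two elements with id="widget" -- bad
// One workaround: drop or change the id on the clone before inserting it:
// $('#widget').clone(true).removeAttr('id').appendTo('#target');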
It should be avoided because then you lose any handlers that may have been on that DOM element.
You can try to get around that by appending clones of the DOM elements instead of completely overwriting them.
First, let's define the task that has to be accomplished here:
All child nodes of DIV x have to be "copied" (together with all its descendants = deep copy) and "pasted" into the DIV y. If any of the descendants of x has one or more event handlers bound to it, we would presumably want those handlers to continue working on the copies (once they have been placed inside y).
Now, this is not a trivial task. Luckily, the jQuery library (and all the other popular libraries as well I assume) offers a convenient method to accomplish this task: .clone(). Using this method, the solution could be written like so:
$( x ).contents().clone( true ).appendTo( y );
The above solution is the answer to question (2). Now, let's tackle question (1):
This
y.innerHTML = x.innerHTML;
is not just a bad idea - it's an awful one. Let me explain...
The above statement can be broken down into two steps.
1. The expression x.innerHTML is evaluated.
2. The return value of that expression (which is a string) is assigned to y.innerHTML.
The nodes that we want to copy (the child nodes of x) are DOM nodes. They are objects that exist in the browser's memory. When evaluating x.innerHTML, the browser serializes (stringifies) those DOM nodes into a string (HTML source code string).
Now, if we needed such a string (to store it in a database, for instance), then this serialization would be understandable. However, we do not need such a string (at least not as an end-product).
In step 2, we are assigning this string to y.innerHTML. The browser evaluates this by parsing the string which results in a set of DOM nodes which are then inserted into DIV y (as child nodes).
So, to sum up:
Child nodes of x --> stringifying --> HTML source code string --> parsing --> Nodes (copies)
So, what's the problem with this approach? Well, DOM nodes may contain properties and functionality which cannot and therefore won't be serialized. The most important such functionality are event handlers that are bound to descendants of x - the copies of those elements won't have any event handlers bound to them. The handlers got lost in the process.
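A concrete sketch of that loss (the button element inside x is an assumption):
var button = x.querySelector('button');
button.addEventListener('click', function () {
    console.log('clicked');
});
y.innerHTML = x.innerHTML; // the copied button inside y has no click handler attached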
An interesting analogy can be made here:
Digital signal --> D/A conversion --> Analog signal --> A/D conversion --> Digital signal
As you probably know, the resulting digital signal is not an exact copy of the original digital signal - some information got lost in the process.
I hope you understand now why y.innerHTML = x.innerHTML should be avoided.
I wouldn't do it simply because you're asking the browser to re-parse HTML markup that has already been parsed.
I'd be more inclined to use the native cloneNode(true) to duplicate the existing DOM elements.
var node, i = 0;
while( node = x.childNodes[ i++ ] ) {
    y.appendChild( node.cloneNode( true ) );
}
Well it really depends. There is a possibility of creating duplicate elements with the same ID, which is never a good thing.
jQuery also has methods that can do this for you.

Filtering elements out of a jQuery selector

I have a page that selects all the elements in a form and serializes them like this:
var filter = 'form :not([name^=ww],[id$=IDF] *,.tools *)';
var serialized = $(filter).serialize();
This works, unless the form has around 600+ elements. Then the user gets a JavaScript error saying that the script is running slowly and may make their browser unresponsive. It then gives them the option to stop running the script.
I have tried running the filters separately, and I have tried using .not on the selectors and then serializing, but I run into one of two problems: either it runs faster without the error but does not filter the elements, or it does filter the elements and gives me the slow-script error.
Any ideas?
With 600+ elements this is going to be dead slow. You need to offer Sizzle (jQuery's selector engine) some opportunities for optimisation.
First, consider the fact that jQuery can use the natively-supported querySelectorAll method (in modern browsers) if your selector complies with the CSS3 spec (or at least to the extent of what's currently supported in browsers).
With your case, that would mean passing only one simple selector to :not instead of 3 (1 simple, 2 complex).
form :not([name^=ww])
That would be quite fast... although you're not being kind to browsers that don't support querySelectorAll.
Look at your selector and think about how much Sizzle has to do with each element. First it needs to get ALL elements within the page (you're not pre-qualifying the :not selector with a tag/class/id). Then, on each element it does the following:
(assume that it exits if a result of a check is false)
Check that the parent has an ancestor with the nodeName.toLowerCase() of form.
Check that it does not have a name attribute starting with ww (basic indexOf operation).
Check that it does not have an ancestor with an id attribute ending in IDF. (expensive operation)
Check that it does not have an ancestor with a class attribute containing tools.
The last two operations are slow.
It may be best to manually construct a filter function, like so:
var jq = $([1]);
$('form :input').filter(function(){
    // Re-order conditions so that
    // most likely to fail is at the top!
    jq[0] = this; // faster than constructing a new jQ obj
    return (
        !jq.closest('[id$=IDF]')[0]
        // this can be improved. Maybe pre-qualify
        // attribute selector with a tag name
        && !jq.closest('.tools')[0]
        && this.name.indexOf('ww') !== 0
    );
});
Note: that function is untested. Hopefully you get the idea...
Could you maybe just serialize the whole form and do your filtering on the backend? Also, why-oh-why is the form growing to 600+ fields?
Use the :input selector to select only the applicable elements.
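Putting those suggestions together, one hedged sketch (it keeps the question's original filters but restricts the starting set to form controls):
var serialized = $('form :input')
    .not('[name^=ww]')
    .filter(function () {
        // drop anything inside an [id$=IDF] container or a .tools container
        return !$(this).closest('[id$=IDF], .tools').length;
    })
    .serialize();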
