Does jQuery cache elements internally? - javascript

I know jQuery doesn't cache collections of elements, e.g. calling:
$('.myclass').html('hello');
$('.myclass').html('bye');
Will make jQuery climb the DOM twice.
But how about cached DOM nodes?
var elems = document.querySelectorAll('.myclass');
$(elems).html('hello');
$(elems).html('bye');
Will jQuery cache those internally, or will they be equally slow as the first example?
To clarify: will jQuery keep a reference to elems and cache $(elems) internally so it won’t have to apply the same $() wrapper every time?
Something like:
var cache = {};
var constructor = function (collection) {
    if (collection in cache) {
        return cache[collection];
    }
    return construct(collection); // construct() stands for building the wrapper as usual
};

Assuming I've understood your question correctly, then no, jQuery won't keep a reference to the selected nodes beyond the statement that uses them:
$('.myclass').html('hello'); //Select all .myclass elements, then change their HTML
$('.myclass').html('bye'); //Select all .myclass elements, then change their HTML again
If you maintain a reference to those selected nodes separately, it will be faster:
var elems = document.querySelectorAll('.myclass'); //Select all .myclass elements
$(elems).html('hello'); //Change their HTML (no need to select them)
$(elems).html('bye'); //Change their HTML (no need to select them)
The difference won't be massive (unless your DOM is very complicated), but there will be a difference.
Update
will jQuery keep a reference to elems and cache $(elems) internally so
it won’t have to apply the same $() wrapper every time?
No, it won't. As stated above, the reference to the matched set of elements will not be maintained beyond the statement to which it applies. You can improve the performance of your code by keeping a reference to jQuery objects that are used throughout, rather than selecting them again each time, or even wrapping a stored set of native DOM nodes in a jQuery object each time.
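For example, a minimal sketch of that approach, storing the wrapped set once and reusing it:
var $myElems = $('.myclass'); // select and wrap once
$myElems.html('hello');       // reuse the same jQuery object
$myElems.html('bye');         // no re-selection, no re-wrapping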

If "cache" means "keep the DOM elements in the internal collection for that jQuery object" .. then yes.
Imagine
jq = $(elementListOrSelector)
where jq[0] through jq[jq.length - 1] evaluate to the respective DOM elements. For instance, jq[0] evaluates to the first DOM element represented by the jQuery object, if any. Note that this collection is not magically changed once it has been built (and the way in which it was built does not matter).
However there is no "cache" outside of simply keeping the immediate results for that particular jQuery object.

Related

Changing inner text value of tab through javascript

I'm learning Javascript right now, and attempting to change the text title of a particular tab. It's actually part of a larger Shiny dashboard project, but I want to add some custom functionality to a few tabs. Below are the tabs in question:
Simple enough. I first access my tabs in my Javascript file:
var tabScrub2 = $(document).find('[data-value="scrubTab2"]');
console.log(tabScrub2);
When I use Firefox's developer console, I see that the tab is an object:
Moreover, it looks like I need to change the innerText property of 0, whatever this is, since that corresponds to the title of my tab (the innerText of 1 corresponds to the text inside scrubTab2). However, I'm not familiar with the actual object type being returned here:
Simply put, how the heck do I access and manipulate properties from this? And am I actually accessing an array? When I type in
var scrub2 = tabScrub2["1"];
console.log(scrub2);
I get an HTML element. I've seen the a element in CSS and jQuery, but I'm not super familiar with how to manipulate its properties programmatically. How do I go about accessing and manipulating the innerText properties of this via JavaScript? For instance, how would I hide scrubTab2, or change its title to something else?
The first object you're seeing is jQuery's wrapper around the real DOM elements. It's not an actual array, but it does contain all of the elements that matched your query under zero-indexed properties (e.g. "0" and "1"), which allows you to access them via an array-like API (e.g. tabScrub2[1]).
Your method of grabbing a node using tabScrub2["1"] is correct (see this question in the jQuery FAQ). It's more common to see that done with a numeric key, though (i.e. tabScrub2[1]), because that matches the way you would access an element in a normal array.
As far as manipulating properties of the DOM node goes, the DOM's API is notoriously inconsistent and quirky (hence the need for things like jQuery in the first place). However, for your use case you can just assign a string to the innerText property directly (e.g. tabScrub2[1].innerText = "Tab title"). MDN is a great resource if you're looking for reference material on other parts of the DOM.
A side note: if you're looking for a specific element you should use a query that will only match that element. It's generally a bad sign if you're grabbing extra elements and then accessing the element you want at a key other than 0. If you're doing this then your code depends on other (potentially unrelated) nodes in the DOM existing before your node, and if/when you change those nodes your original code will break.
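For instance, a sketch assuming the usual Shiny/Bootstrap tab markup, where the tab title is an a element carrying the data-value attribute (so the selector matches only that link):
var $tabLink = $('a[data-value="scrubTab2"]'); // matches the tab link only, not the tab pane
$tabLink.text("New tab title");                // jQuery equivalent of setting innerText
$tabLink.closest("li").hide();                 // hide the whole tab item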
Just use jQuery's eq method to get the element at the relevant index from the matched set.
For example:
//Query and get first element.
var tabScrub2 = $(document).find('[data-value="scrubTab2"]:eq(0)');
//Hide
tabScrub2.hide();
//Change title
tabScrub2.attr("title", "New Title Text");
Learn more about jQuery's eq here:
https://api.jquery.com/eq/
Since you're using jQuery selectors, tabScrub2[0] returns the native DOM element rather than another jQuery object, so the hide function won't work on it: the native DOM element doesn't implement that kind of functionality. hide only works on a jQuery object, which is why you have to use the jQuery :eq() pseudo-selector (or the .eq() method) as above.
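A minimal sketch of that distinction:
var tabScrub2 = $('[data-value="scrubTab2"]');
// tabScrub2[0].hide();   // TypeError: raw DOM elements have no hide method
$(tabScrub2[0]).hide();   // works: the element is re-wrapped in a jQuery object
tabScrub2.eq(0).hide();   // equivalent: .eq() returns a jQuery object for that index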

Efficiency of repeated checks with jQuery selector

Does jQuery cache selector lookups?
I am wondering whether this code
if ($("#ItemID").length) {
doSomethingWith($("#ItemID"));
}
will be significantly slower than this code
item = $("#ItemID");
if (item.length) {
doSomethingWith(item);
}
What about if you're extracting much more of the DOM e.g., $("div") rather than just $("#ItemID")? Same answer?
External references/explanation would be helpful, rather than just opinion.
According to the jQuery Learning Center, "jQuery doesn't cache elements for you. If you've made a selection that you might need to make again, you should save the selection in a variable rather than making the selection repeatedly."
This is particularly important when your selector is slow by its nature. For example, $("#foo") makes a call under the covers to document.getElementById() which should be very fast. However, a selector like $("a[rel$='thinger']") might be significantly slower on an older browser.
Remember that jQuery supports chaining, so you don't always have to introduce a variable to save the selection; you can just chain sequential calls together. For example, in this code I don't introduce a variable just to save the result of .find("img"); rather, I chain three calls to prop and one to show.
clusterBlock.find("img").prop("src", clusterElt.imageUri)
.prop("alt", clusterElt.description)
.prop("id", clusterElt.imageId)
.show();
Finally, remember that a saved selection is not updated as the DOM changes. If I get all elements with class "foo" and save that in a variable, then some other piece of code adds and removes a couple of "foo" elements, then my saved selection will be missing the new elements and contain the references to the removed elements.
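A small sketch of that pitfall:
var $foos = $('.foo');       // saved selection
$('.foo').first().remove();  // remove one of those elements from the document
console.log($foos.length);   // unchanged; the saved set still references the removed node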
This
var item = $("#ItemID");
if (item.length) {
doSomethingWith(item);
}
will be faster, but only marginally; doSomethingWith just receives a reference to where the object is stored.
It turns out that people do cache selectors, by the way.

Why should y.innerHTML = x.innerHTML; be avoided?

Let's say that we have a DIV x on the page and we want to duplicate ("copy-paste") the contents of that DIV into another DIV y. We could do this like so:
y.innerHTML = x.innerHTML;
or with jQuery:
$(y).html( $(x).html() );
However, it appears that this method is not a good idea, and that it should be avoided.
(1) Why should this method be avoided?
(2) How should this be done instead?
Update:
For the sake of this question let's assume that there are no elements with ID's inside the DIV x.
(Sorry I forgot to cover this case in my original question.)
Conclusion:
I have posted my own answer to this question below (as I originally intended). Now, I also planned to accept my own answer :P, but lonesomeday's answer is so amazing that I have to accept it instead.
This method of "copying" HTML elements from one place to another is the result of a misapprehension of what a browser does. Browsers don't keep an HTML document in memory somewhere and repeatedly modify the HTML based on commands from JavaScript.
When a browser first loads a page, it parses the HTML document and turns it into a DOM structure. This is a relationship of objects following a W3C standard (well, mostly...). The original HTML is from then on completely redundant. The browser doesn't care what the original HTML structure was; its understanding of the web page is the DOM structure that was created from it. If your HTML markup was incorrect/invalid, it will be corrected in some way by the web browser; the DOM structure will not contain the invalid code in any way.
Basically, HTML should be treated as a way of serialising a DOM structure to be passed over the internet or stored in a file locally.
It should not, therefore, be used for modifying an existing web page. The DOM (Document Object Model) has a system for changing the content of a page. This is based on the relationship of nodes, not on the HTML serialisation. So when you add an li to a ul, you have these two options (assuming ul is the list element):
// option 1: innerHTML
ul.innerHTML += '<li>foobar</li>';
// option 2: DOM manipulation
var li = document.createElement('li');
li.appendChild(document.createTextNode('foobar'));
ul.appendChild(li);
Now, the first option looks a lot simpler, but this is only because the browser has abstracted a lot away for you: internally, the browser has to convert the element's children to a string, then append some content, then convert the string back to a DOM structure. The second option corresponds to the browser's native understanding of what's going on.
The second major consideration is to think about the limitations of HTML. When you think about a webpage, not everything relevant to the element can be serialised to HTML. For instance, event handlers bound with x.onclick = function(); or x.addEventListener(...) won't be replicated in innerHTML, so they won't be copied across. So the new elements in y won't have the event listeners. This probably isn't what you want.
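A small illustrative sketch, reusing the ul from the example above:
var li = document.createElement('li');
li.appendChild(document.createTextNode('click me'));
li.addEventListener('click', function () { console.log('clicked'); });
ul.appendChild(li);
// Appending via innerHTML serialises the existing children and re-parses the string,
// so the click listener bound above is lost on the re-created li.
ul.innerHTML += '<li>another item</li>';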
So the way around this is to work with the native DOM methods:
for (var i = 0; i < x.childNodes.length; i++) {
    y.appendChild(x.childNodes[i].cloneNode(true));
}
Reading the MDN documentation will probably help to understand this way of doing things:
appendChild
cloneNode
childNodes
Now the problem with this (as with option 2 in the code example above) is that it is very verbose, far longer than the innerHTML option would be. This is when you appreciate having a JavaScript library that does this kind of thing for you. For example, in jQuery:
$('#y').html($('#x').clone(true, true).contents());
This is a lot more explicit about what you want to happen. As well as having various performance benefits and preserving event handlers, for example, it also helps you to understand what your code is doing. This is good for your soul as a JavaScript programmer and makes bizarre errors significantly less likely!
You can duplicate IDs which need to be unique.
A jQuery clone call like $(element).clone(true) will copy data and event listeners, but IDs will also be cloned. So, to avoid duplicate IDs, don't use IDs on items that need to be cloned.
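An illustrative sketch (the #widget and #target ids are hypothetical):
var $copy = $('#widget').clone(true); // data and event handlers are copied, but so is the id
$copy.removeAttr('id');               // drop the id to avoid two elements with id "widget"
$copy.appendTo('#target');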
It should be avoided because you then lose any handlers that may have been bound to that DOM element.
You can try to get around that by appending clones of the DOM elements instead of completely overwriting them.
First, let's define the task that has to be accomplished here:
All child nodes of DIV x have to be "copied" (together with all its descendants = deep copy) and "pasted" into the DIV y. If any of the descendants of x has one or more event handlers bound to it, we would presumably want those handlers to continue working on the copies (once they have been placed inside y).
Now, this is not a trivial task. Luckily, the jQuery library (and all the other popular libraries as well I assume) offers a convenient method to accomplish this task: .clone(). Using this method, the solution could be written like so:
$( x ).contents().clone( true ).appendTo( y );
The above solution is the answer to question (2). Now, let's tackle question (1):
This
y.innerHTML = x.innerHTML;
is not just a bad idea - it's an awful one. Let me explain...
The above statement can be broken down into two steps:
1. The expression x.innerHTML is evaluated.
2. The return value of that expression (which is a string) is assigned to y.innerHTML.
The nodes that we want to copy (the child nodes of x) are DOM nodes. They are objects that exist in the browser's memory. When evaluating x.innerHTML, the browser serializes (stringifies) those DOM nodes into a string (HTML source code string).
Now, if we needed such a string (to store it in a database, for instance), then this serialization would be understandable. However, we do not need such a string (at least not as an end-product).
In step 2, we are assigning this string to y.innerHTML. The browser evaluates this by parsing the string which results in a set of DOM nodes which are then inserted into DIV y (as child nodes).
So, to sum up:
Child nodes of x --> stringifying --> HTML source code string --> parsing --> Nodes (copies)
So, what's the problem with this approach? Well, DOM nodes may contain properties and functionality which cannot and therefore won't be serialized. The most important such functionality are event handlers that are bound to descendants of x - the copies of those elements won't have any event handlers bound to them. The handlers got lost in the process.
An interesting analogy can be made here:
Digital signal --> D/A conversion --> Analog signal --> A/D conversion --> Digital signal
As you probably know, the resulting digital signal is not an exact copy of the original digital signal - some information got lost in the process.
I hope you understand now why y.innerHTML = x.innerHTML should be avoided.
I wouldn't do it simply because you're asking the browser to re-parse HTML markup that has already been parsed.
I'd be more inclined to use the native cloneNode(true) to duplicate the existing DOM elements.
var node, i = 0;
while (node = x.childNodes[i++]) {
    y.appendChild(node.cloneNode(true));
}
Well it really depends. There is a possibility of creating duplicate elements with the same ID, which is never a good thing.
jQuery also has methods that can do this for you.

jQuery object and DOM element

I would like to understand the relationship between a jQuery object and a DOM element.
When jQuery returns an element it shows up as [object Object] in an alert.
When getElementById returns an element it shows up as [object HTMLDivElement]. What does that mean exactly? I mean, are both of them objects, with a difference?
Also, what methods can operate on a jQuery object vs a DOM element? Can a single jQuery object represent multiple DOM elements?
I would like to understand the relationship between a jQuery object and a DOM element
A jQuery object is an array-like object that contains DOM element(s). A jQuery object can contain multiple DOM elements depending on the selector you use.
Also, what methods can operate on a jQuery object vs a DOM element? Can a single jQuery object represent multiple DOM elements?
jQuery functions (a full list is on the website) operate on jQuery objects and not on DOM elements. You can access the DOM elements inside a jQuery object using .get() or by accessing the element at the desired index directly:
$("selector")[0] // Accesses the first DOM element in this jQuery object
$("selector").get(0) // Equivalent to the code above
$("selector").get() // Retrieve a true array of DOM elements matched by this selector
In other words, the following should get you the same result:
<div id="foo"></div>
alert($("#foo")[0]);
alert($("#foo").get(0));
alert(document.getElementById("foo"));
For more information on the jQuery object, see the documentation. Also check out the documentation for .get()
When you use jQuery to obtain a DOM element, the jQuery object returned contains a reference to the element. When you use a native function like getElementById, you get the reference to the element directly, not contained within a jQuery object.
A jQuery object is an array-like object that can contain multiple DOM elements:
var jQueryCollection = $("div"); //Contains all div elements in DOM
The above line could be performed without jQuery:
var normalCollection = document.getElementsByTagName("div");
In fact, that's exactly what jQuery will do internally when you pass in a simple selector like div. You can access the actual elements within a jQuery collection using the get method:
var div1 = jQueryCollection.get(0); //Gets the first element in the collection
When you have an element, or set of elements, inside a jQuery object, you can use any of the methods available in the jQuery API, whereas when you have the raw element you can only use native JavaScript methods.
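For instance, a small sketch of that difference:
var $div = $('div').first();     // jQuery object: the jQuery API is available
$div.addClass('highlight');      // works
var div = $div.get(0);           // raw DOM element: only native APIs are available
div.classList.add('highlight');  // native equivalent
// div.addClass('highlight');    // would throw: addClass is not a DOM method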
I just barely started playing with jQuery this last month, and I had a similar question running around in my mind. All the answers you have received so far are valid and on the dot, but a very precise answer may be this:
Let's say you are in a function, and to refer to the calling element, you can either use this, or $(this); but what is the difference? Turns out, when you use $(this), you are wrapping this inside a jQuery object. The benefit is that once an object is a jQuery object, you can use all the jQuery functions on it.
It's pretty powerful, since you can even wrap a string representation of elements, var s = '<div>hello <a href="#">world</a></div><span>!</span>', inside a jQuery object just by literally wrapping it in $(): $(s). Now you can manipulate all those elements with jQuery.
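A minimal sketch of the this versus $(this) distinction inside an event handler:
$('button').on('click', function () {
    this.disabled = true;        // raw DOM element: native properties only
    $(this).addClass('clicked'); // wrapped in jQuery: the full jQuery API is available
});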
Most jQuery member functions do not return a raw value; rather, they return the current jQuery object or another jQuery object.
So,
console.log("(!!) jquery >> " + $("#id") ) ;
will print [object Object], i.e. a jQuery object which maintains the collection that results from evaluating the selector string ("#id") against the document,
while
console.log("(!!) getElementById >> " + document.getElementById("id") ) ;
will print [object HTMLDivElement] (or in fact [object Object] in IE) because, assuming the element exists, the return value is a div element.
Also what methods can operate on jQuery object vs DOM element? (1) Can a single jQuery object represent multiple DOM elements? (2)
(1) There is a host of member Functions in jQuery that pertain to DOM Objects. The best thing to do imo is search the jQuery API documentation for a relevant Function once you have a specific task (such as selecting Nodes or manipulating them).
jQuery documentation
(2) Yes, a single jQuery Object may maintain a list of multiple DOM Elements. There are multiple Functions (such as jQuery.find or jQuery.each) that build upon this automatic caching behaviour.
That's just your browser being clever. They're both objects, but DOM elements are special objects. jQuery just wraps DOM elements in a JavaScript object.
If you want to get more debug info I recommend you look at debugging tools like Firebug for Firefox and Chrome's built-in inspector (very similar to Firebug).
Besides what has been mentioned, I'd like to add something about why the jQuery object is important, according to the description at jquery-object:
Compatibility
The implementation of element methods varies across browser vendors and versions.
As an example, setting the innerHTML of an element (a table row, for instance) may not work in most versions of Internet Explorer.
You can set the inner HTML the jQuery way, and jQuery will hide the browser differences for you.
// Setting the inner HTML with jQuery.
var target = document.getElementById( "target" );
$( target ).html( "<td>Hello <b>World</b>!</td>" );
Convenience
jQuery provides a list of methods bound to the jQuery object to smooth the developer's experience; please check some of them at http://api.jquery.com/. The site also covers common DOM manipulation. Let's see how to insert an element stored in newElement after the target element in both ways.
The DOM way,
// Inserting a new element after another with the native DOM API.
var target = document.getElementById( "target" );
var newElement = document.createElement( "div" );
target.parentNode.insertBefore( newElement, target.nextSibling );
The jQuery way,
// Inserting a new element after another with jQuery.
var target = document.getElementById( "target" );
var newElement = document.createElement( "div" );
$( target ).after( newElement );
Hope this is a useful supplement.

Filtering elements out of a jQuery selector

I have a page that selects all the elements in a form and serializes them like this:
var filter = 'form :not([name^=ww],[id$=IDF] *,.tools *)';
var serialized = $(filter).serialize();
This works, unless the form gets around 600+ elements. Then the user gets a JavaScript error saying that the script is running slowly and may make their browser unresponsive. It then gives them the option to stop running the script.
I have tried running the filters separately, and I have tried using .not() on the selectors and then serializing them, but I run into one of two problems. Either it runs faster without the error but does not actually filter the elements, or it does filter the elements and gives me the slow-script error.
Any ideas?
With 600+ elements this is going to be dead slow. You need to offer Sizzle (jQuery's selector engine) some opportunities for optimisation.
First, consider the fact that jQuery can use the natively-supported querySelectorAll method (in modern browsers) if your selector complies with the CSS3 spec (or at least to the extent of what's currently supported in browsers).
With your case, that would mean passing only one simple selector to :not instead of 3 (1 simple, 2 complex).
form :not([name^=ww])
That would be quite fast... although you're not being kind to browsers that don't support querySelectorAll.
Look at your selector and think about how much Sizzle has to do with each element. First it needs to get ALL elements within the page (you're not pre-qualifying the :not selector with a tag/class/id). Then, on each element it does the following:
(assume that it exits if a result of a check is false)
Check that the element has an ancestor with a nodeName.toLowerCase() of form.
Check that it does not have a name attribute starting with ww (basic indexOf operation).
Check that it does not have an ancestor with an id attribute ending in IDF. (expensive operation)
Check that it does not have an ancestor with a class attribute containing tools.
The last two operations are slow.
It may be best to manually construct a filter function, like so:
var jq = $([1]);
$('form :input').filter(function () {
    // Re-order conditions so that
    // most likely to fail is at the top!
    jq[0] = this; // faster than constructing a new jQuery object
    return (
        !jq.closest('[id$=IDF]')[0]
        // this can be improved. Maybe pre-qualify
        // attribute selector with a tag name
        && !jq.closest('.tools')[0]
        && this.name.indexOf('ww') !== 0
    );
});
Note: that function is untested. Hopefully you get the idea...
Could you maybe just serialize the whole form and do your filtering on the backend? Also, why-oh-why is the form growing to 600+ fields?
Use the :input selector to only select applicable elements.
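:input matches form controls only (input, select, textarea, button), which cuts the candidate set down before any further filtering runs. A minimal sketch, assuming only the name-prefix exclusion from the question (the ancestor-based exclusions would still need the filter approach shown above):
var serialized = $('form :input').not('[name^=ww]').serialize();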
