jQuery class selector vs cached element + find - javascript

I have a situation where I'm wondering what the best approach is performance-wise.
I have a class name, let's call it .class-test.
I also have a cached element, $body.
I can either retrieve the .class-test element by:
$('.class-test')
or by
$body.find('.class-test')
In a worst-case scenario, does one of these approaches win out over the other? Also, it would be great if someone could describe what is being done under the hood by the second approach (i.e. I know that .find defers to Sizzle, but if the element is cached, does it already have a tree of its DOM elements stored so that it only needs to traverse that sub-tree to find the class, or is that tree only built as needed?).

The difference is how many times you dip into the DOM pool, so to speak. In the first query, jQuery searches from the document (the top level) and travels down the DOM tree, checking every level until it reaches the end, and then returns all of the matching elements.
In the second option, you specify the starting point: instead of starting at the very top and working its way down, jQuery starts at the body element. In this particular case you are only going one level lower, but here is the real plus: since you have the body cached, jQuery doesn't have to find it again; it can simply reference the cached element.
When you get deeper into the DOM tree this can be a big time saver; you can spare yourself tens to hundreds of level checks. You won't notice this much on small sites, but one day you may be working with enterprise-level code bases where these performance gains will be very beneficial to you.
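As an illustration, here is a minimal sketch (the container and class names are hypothetical) of caching a parent element once and scoping later lookups to it instead of re-querying the whole document:

// Cache the container once; jQuery only has to locate it a single time.
var $body = $('body');

// Searches the entire document from the top on every call.
function findUnscoped() {
  return $('.class-test');
}

// Searches only within the already-cached container, skipping the lookup
// of the starting element entirely.
function findScoped() {
  return $body.find('.class-test');
}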

Related

What kind of performance optimization is done when assigning a new value to the innerHTML attribute

I have a DOM element (let's call it #mywriting) which contains a bigger HTML subtree (a long sequence of paragraph elements). I have to update the content of #mywriting regularly (but only small things will change, the majority of the content remains unchanged).
I wonder what is the smartest way to do this. I see two options:
In my application code I find out which child elements of #mywriting have changed, and I update only those child elements.
I just update the innerHTML attribute of #mywriting with the new content.
Is it worth developing the logic of approach one to find the changed child nodes, or will the browser perform this kind of optimization on its own when I apply approach two?
No, the browser doesn't do such optimisation. When you reassign innerHTML, it will throw away the old contents, parse the HTML, and place the new elements in the DOM.
Doing a diff to only replace (or rather, update) the parts that need an update can be worth a lot, and is done with great success in rendering libraries that employ a so-called virtual DOM.
However, they're doing that diff on an element data structure, not an HTML string. Parsing that to find out which elements changed is going to be horribly inefficient. Don't use HTML strings. (If you're already sold on them, you might as well just use innerHTML).
Without considering the overhead of calculating which child elements have to be updated, option 1 seems to be much faster (at least in Chrome), according to this simple benchmark:
https://jsbench.github.io/#6d174b84a69b037c059b6a234bb5bcd0
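As a rough illustration of the two options, here is a minimal sketch (the element id and the changed-paragraph index are hypothetical) of updating a single changed child versus replacing the whole subtree:

var mywriting = document.getElementById('mywriting');

// Option 1: update only the child that actually changed.
// Assumes you already know which paragraph changed and what its new text is.
function updateChangedParagraph(index, newText) {
  mywriting.children[index].textContent = newText;
}

// Option 2: throw away the old subtree, re-parse the HTML string,
// and rebuild every child element from scratch.
function replaceAll(newHtml) {
  mywriting.innerHTML = newHtml;
}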

Removing an element - optimized way

What is the most efficient way of removing an element from the DOM (JS or jQuery)?
removeChild()
This is what I have always used. But I recently came across this:
The removed child node still exists in memory, but is no longer part of the DOM. With the first syntax-form shown, you may reuse the removed node later in your code, via the oldChild object reference.
So, if I don't want to preserve the removed element in memory (for better performance), what is the best method?
Or, as in Java, if the reference is null, is it automatically garbage collected so that there's no need to worry about performance? I am asking this specifically because I am dealing with SVG and many append/remove calls are made.
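A minimal sketch (the element id is hypothetical) of the behaviour the quoted documentation describes: keeping the return value of removeChild keeps the detached node reachable, while dropping the reference leaves it eligible for garbage collection like any other unreachable object.

var parent = document.getElementById('container'); // hypothetical id

// removeChild returns the detached node; keeping the return value
// keeps it reachable and reusable later.
var detached = parent.removeChild(parent.firstElementChild);

// Dropping the reference leaves the detached node eligible for
// garbage collection once nothing else points to it.
detached = null;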

Right to Left jQuery Selectors not working

In reading posts on Stack Overflow about jQuery selector performance, I keep seeing the same claim over and over: that jQuery uses a bottom-up, or right-to-left, approach to selectors.
Take this example...
$("#dnsTitle a.save").removeClass("disabled");
According to what I have been reading, it gives better performance to use this instead...
$("a.save #dnsTitle").removeClass("disabled");
The problem I am running into is that this does not work at all! Can someone clarify the best way to write selectors?
I am working on an existing project that has some really long selectors, and I am trying to improve them where I can, but it seems I am getting bad or outdated information. I am using jQuery 2.0.
The concept of "Bottom-Up / Right-to-Left / Leaf-to-Root" relates only to the implementation of the selector engine, not to the order in which you write the selectors.
Usage:
From the usage standpoint, selectors are "read" left to right, where your first selector is your root, and the succeeding selectors your descendants. The elements that match the last selector are returned. And so:
#dnsTitle a.save - looks for an element that has an id of dnsTitle and, from there, looks for a descendant a element with class save. You end up with the a elements that have the class save.
a.save #dnsTitle - looks for an a element with class save and, from that, finds a descendant with an id of dnsTitle. You end up with whatever elements have the id dnsTitle and sit inside an a.save element, which in typical markup is nothing; that is why the reversed selector appears not to work.
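As a concrete illustration (the markup is hypothetical, mirroring the selectors in the question), the written order of the selector determines what you get back; the right-to-left detail is purely internal:

// Hypothetical markup: <div id="dnsTitle"> ... <a class="save">Save</a> ... </div>

// Matches the a.save links inside #dnsTitle - this is the selector to keep.
$('#dnsTitle a.save').removeClass('disabled');

// Looks for elements with id "dnsTitle" *inside* an a.save element,
// which matches nothing in the markup above.
$('a.save #dnsTitle').removeClass('disabled');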
Parsing:
Now, from the parsing point of view, there are two common ways to evaluate a selector string: "top-down" and "bottom-up":
Top-down / Root to Leaves / Left to Right
If you've been through a data structures course, then this is how you normally parse a tree. You find the node where you want to start, which would be your first selector, then you work your way down, finding the succeeding nodes.
A problem with this approach is that it is recursive and uses a lot of memory, especially if your tree is huge. Back-tracking is also an issue: since the succeeding selectors are descendants, matches may vary in depth. The next selector might match a great^N grandchild, so the recursion goes N steps deep to find that great^N child and takes N steps to return back up.
Bottom-Up / Right to Left / Leaves to Root
With this approach, the engine first collects all elements that match the last (rightmost) selector, which gives you an array of candidates. It then filters that array, keeping only the candidates whose ancestors match the preceding selectors.
The advantage of this approach is that you work on a fixed array rather than a variable-depth tree. Filtering is also linear, because a node has only one parent, in contrast to top-down matching, which has to deal with multiple children. This also means you only need loops to do the job, not recursion: one loop goes over each candidate, and a nested loop walks up its ancestors, checking them against the preceding selectors.
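To make the bottom-up idea concrete, here is a rough sketch (not jQuery's actual Sizzle code, just the general shape) of matching "#dnsTitle a.save" right to left with standard DOM APIs: collect the rightmost matches first, then walk each candidate's ancestors.

// Rough sketch of right-to-left matching for "#dnsTitle a.save".
function matchDescendant(ancestorSelector, rightmostSelector) {
  var candidates = document.querySelectorAll(rightmostSelector); // e.g. 'a.save'
  var results = [];
  for (var i = 0; i < candidates.length; i++) {
    // Walk up the ancestor chain looking for a match for the left part.
    var node = candidates[i].parentElement;
    while (node) {
      if (node.matches(ancestorSelector)) { // e.g. '#dnsTitle'
        results.push(candidates[i]);
        break;
      }
      node = node.parentElement;
    }
  }
  return results;
}

var saveLinks = matchDescendant('#dnsTitle', 'a.save');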

What's a good way to figure out which code is causing runaway DOM node creation?

The Chrome Dev Tools have unearthed some problems similar to those posted here: more DOM nodes being created than I feel there should be, given my design choices.
What's a good way to figure out what area of code is causing runaway DOM node creation? The information is really useful but figuring out what to do with it seems much less straightforward than, for example, dealing with a CPU profile.
Try taking two heap snapshots (the Profiles panel), one with few DOM nodes and one with lots of them, then compare and see if many nodes are retained. If yes, you will be able to detect the primary retainers.
I would suggest creating code that walks the DOM and collects some statistics about what nodes are in the DOM (tag type, class name, id value, parent, number of children, textContent, etc...). If you know what is supposed to be in your page, you should be able to look at this data dump and determine what's in there that you aren't expecting. You could even run the code at page load time, then run it again after your page has been exercised a bit and compare the two.
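A minimal sketch of that idea (the choice of statistics is just an example): walk every element in the document, tally counts by tag and class, and compare dumps taken at different points in time.

// Collect simple statistics about what is currently in the DOM.
function collectDomStats() {
  var stats = {};
  var elements = document.querySelectorAll('*');
  for (var i = 0; i < elements.length; i++) {
    var el = elements[i];
    var classes = (typeof el.className === 'string' && el.className.trim())
      ? '.' + el.className.trim().replace(/\s+/g, '.')
      : '';
    var key = el.tagName.toLowerCase() + classes;
    stats[key] = (stats[key] || 0) + 1; // e.g. { "div.chat-line": 4231, ... }
  }
  return stats;
}

// Take one dump at load time and another after exercising the page, then compare.
var before = collectDomStats();
// ... use the page for a while ...
var after = collectDomStats();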

Javascript performance problems with too many dom nodes?

I'm currently debugging an Ajax chat that just endlessly fills the page with DOM elements. If you have a chat going for, say, 3 hours, you will end up with God knows how many thousands of DOM nodes.
What are the problems related to extreme DOM Usage?
Is it possible that the UI becomes totally unresponsive (especially in Internet Explorer)?
(And related to this question is of course the solution, if there are any solutions other than manual garbage collection and removal of DOM nodes.)
Most modern browsers should be able to deal pretty well with huge DOM trees. And "most" usually doesn't include IE.
So yes, your browser can become unresponsive, either because it needs too much RAM (which leads to swapping) or because its renderer is simply overwhelmed.
The standard solution is to drop elements, say after the page has 10,000 lines worth of chat. Even 100,000 lines shouldn't be a big problem. But I'd start to feel uneasy at numbers much larger than that (say millions of lines).
[EDIT] Another problem is memory leaks. Even though JS uses garbage collection, if you make a mistake in your code and keep references to deleted DOM elements in global variables (or in objects referenced from a global variable), you can run out of memory even though the page itself contains only a few thousand elements.
Just having lots of DOM nodes shouldn't be much of an issue (unless the client is short on RAM); however, manipulating lots of DOM nodes will be pretty slow. For example, looping through a group of elements and changing the background color of each is fine if you're doing this to 100 elements, but may take a while if you're doing it on 100,000. Also, some old browsers have problems when working with a huge DOM tree--for example, scrolling through a table with hundreds of thousands of rows may be unacceptably slow.
A good solution to this is to buffer the view. Basically, you only show the elements that are visible on the screen at any given moment, and when the user scrolls, you remove the elements that get hidden, and show the ones that get revealed. This way, the number of DOM nodes in the tree is relatively constant, but you don't really lose anything.
Another similar solution to this is to implement a cap on the number of messages that are shown at any given time. This way, any messages past, say, 100 get removed, and to see them you need to click a button or link that shows more. This is sort of what Facebook does with their profiles, if you need a reference.
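A minimal sketch of that capping approach (the container selector and the limit of 100 are hypothetical): after appending a message, trim the oldest nodes so the chat log never grows beyond the cap.

var MAX_MESSAGES = 100;        // hypothetical cap
var $log = $('#chat-log');     // hypothetical container

function appendMessage(html) {
  $log.append(html);
  // Remove the oldest messages once the cap is exceeded.
  var $messages = $log.children();
  if ($messages.length > MAX_MESSAGES) {
    $messages.slice(0, $messages.length - MAX_MESSAGES).remove();
  }
}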
Problems with extreme DOM usage can boil down to performance. DOM scripting is very expensive, so constantly accessing and manipulating the DOM can result in poor performance (and a poor user experience), particularly when the number of elements becomes very large.
Consider HTML collections such as document.getElementsByTagName('div'), for example. This is a query against the document, and it will be re-executed every time up-to-date information is required, such as the collection's length. This can lead to inefficiencies; the worst cases occur when accessing and manipulating collections inside loops.
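For example, a live HTMLCollection re-evaluates properties like length against the document on each access, so inside a loop the classic advice is to cache what you need up front (a minimal sketch):

var divs = document.getElementsByTagName('div'); // live collection

// Slower: divs.length is re-evaluated against the live collection
// on every iteration.
for (var i = 0; i < divs.length; i++) {
  // ... work with divs[i] ...
}

// Faster: cache the length (or copy the collection into a plain array)
// before looping.
for (var j = 0, len = divs.length; j < len; j++) {
  // ... work with divs[j] ...
}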
There are many considerations and examples, but like anything it depends on the application.
