I am curious as to which selector would be quicker for the browser to look up. I am interested in general, rather than in specific browsers.
$('accountDetailsTable').getElements('.toggler')
This will look up accountDetailsTable as if you were using document.getElementById('accountDetailsTable'), and then look for .toggler inside that element.
$$('.toggler')
Whereas this one will return all the matching elements directly. But ultimately they will both give me the same result.
So which one would be quicker? How could I test this?
Selector performance differs significantly between browsers, e.g. between those with querySelector and querySelectorAll (QSA) and those without, and, among those without QSA, between those with getElementsByClassName and those without.
The amount of work the selector engine needs to do varies accordingly.
You can write this in three ways, not just two:
1. $('accountDetailsTable').getElements('.toggler')
The anatomy of the above is:
get the element (1 call);
call the getElements method from the element prototype (or from the element directly in old IE);
look for all childNodes that match the class selector.
This is consistent across browsers in that it gets a root node and calls methods off it. Without QSA, it falls back to getElementsByClassName, or walks all childNodes and filters the className property until it has a list of matches. Performance-wise, this will be worse in modern browsers because it needs to chain two calls, whereas method 3 returns a direct result.
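For illustration, here is a rough sketch of that kind of non-QSA fallback (this is an assumption of the approach, not MooTools' actual code):
// hypothetical fallback: collect descendants of a root node that carry a given class
function getByClass(root, cls) {
    if (root.getElementsByClassName) {
        return root.getElementsByClassName(cls);
    }
    var matches = [];
    var all = root.getElementsByTagName('*'); // every element; text nodes are never included
    for (var i = 0; i < all.length; i++) {
        if ((' ' + all[i].className + ' ').indexOf(' ' + cls + ' ') > -1) {
            matches.push(all[i]);
        }
    }
    return matches;
}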
Because of how selectors work in JS, unlike in CSS, matching runs from left to right, and more qualified selectors improve performance. Something as bare as .toggler means an older browser needlessly has to consider every single DOM node in the tree (except text nodes). Always qualify selectors when you can, i.e. div.toggler or a.toggler.
2. $$('.toggler')
If QSA is available, rely on the browser to return the result (provided there are no extra hacks in the selector, like :not, :contains, :has, or ! reverse combinators).
If there is NO QSA, the Slick parser parses the expression and internally degrades to something like case 1 above. The biggest difference here is the lack of context: it runs against the whole document, as $$ internally does something like document.getElements('.toggler'), so there are far more nodes to consider. Always anchor your queries to a stable, upper-most common node, or pass it into the query string for qualification, as in case 3.
Once again, performance improves by making the selector more qualified, e.g.:
$$('a.toggler')
3. $$('#accountDetailsTable .toggler')
Similar to case 2, but when QSA is available, it will be faster.
When QSA is not there, it will run against the context of the #accountDetailsTable node, so it will do better than case 2.
Making this more qualified will make a difference:
$$('#accountDetailsTable td.control > a.toggler')
Big caveat: it will depend on how big your DOM is and how many matches are found and returned. On a simple DOM, the expected performance order may differ.
Performance optimisations of selectors are increasingly irrelevant these days.
The days of SlickSpeed comparisons of frameworks are over, and application performance has little to do with selector speed, or else you are doing something wrong.
If you do your job right, you won't need to be constantly selecting elements. You can cache things, reuse, be smart and reduce DOM lookups to a minimum.
Events etc. can be attached through smart event delegation, where appropriate, completely negating the need to select and attach events to multiple nodes. Use common sense and don't get hung up on theoretical performance benchmarks; instead, use a profiler, test your actual app, and see where you lose time / CPU cycles.
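For example, here is a minimal sketch of event delegation in MooTools (assuming the Element.Delegation extras are available; the toggleClass behaviour is made up for illustration):
// one listener on the table handles clicks on every current and future a.toggler inside it
document.id('accountDetailsTable').addEvent('click:relay(a.toggler)', function(event, target) {
    target.toggleClass('active'); // hypothetical behaviour
});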
You can run a simple selector like that over 50,000 times in the space of a second. That's micro-benchmarking; it does not measure actual real-world performance inside your app, DOM, or browser.
More thoughts on benchmarking performance and premature optimisations: http://www.youtube.com/watch?v=65-RbBwZQdU
Careful
Write for correctness and readability first. Even if you can squeeze out extra performance by doing something, don't do it at the expense of readability unless the code is critical or reused all the time. Selectors like .toggler, .new, .button etc. tend to be too generic and can be reused across the app in different parts of the DOM. Qualify your selectors to ensure your intended functionality keeps working when it gets moved into a different page / widget / DOM and your project gets an influx of new developers.
Just my two cents; I know you already accepted the answer.
My internal MooTools knowledge has gotten kind of rusty over the last two years, but generally speaking I expect the $$ selector to be much faster, since it only has to run through the elements once.
Why not give it a try? See this JSfiddle:
Rough HTML:
<div id="accountDetailsTable">
<div id=".toggler">1</div>
<div id=".toggler">2</div>
<div id=".toggler">3</div>
</div>
JavaScript (please don't comment on the bad code, it's just to demonstrate the functionality):
window.addEvent('domready', function() {
    var iter = 50000;

    // time the context-based lookup: $('accountDetailsTable').getElements('.toggler')
    var tstart = new Date().getTime();
    for (var i = 0; i < iter; i++) {
        var x = $('accountDetailsTable').getElements('.toggler');
    }
    var tend = new Date().getTime();
    var tdur = tend - tstart;
    console.log('$: ' + tdur + ' ms');

    // time the document-wide lookup: $$('.toggler')
    tstart = new Date().getTime();
    for (var i = 0; i < iter; i++) {
        var x = $$('.toggler');
    }
    tend = new Date().getTime();
    tdur = tend - tstart;
    console.log('$$: ' + tdur + ' ms');
});
That said, my tests with about 50,000 iterations led to roughly these results:
$('accountDetailsTable').getElements('.toggler') : ~ 4secs
$$('.toggler') : ~ 2secs
Results will vary across browsers, elements and much more, so this is only a rough approximation.
That said, you'll probably only feel a difference if you run that many selections in such a short period. Performance is worth thinking about, but if you only have a few selectors in your application, it shouldn't really matter.
I prefer $$(), not because of the better performance, but because of the better readability.
Related
When appending elements to a page with jQuery, it seems I have basically 3 options, summarized here:
function tripleAppend() {
    //1 - append using jQuery, like so:
    var withjquery = $("<p></p>").text("Appending with jquery");

    //2 - Using the html directly formatted:
    var withhtml = "<p>Appending html</p>";

    //3 - Create & append a DOM element:
    var withDOM = document.createElement("p");
    withDOM.innerHTML = "Appending a DOM element";
    //or withDOM.textContent = (....)

    $("body").append(withjquery, withhtml, withDOM);
}
Are there important differences between these 3 approaches, such as one being more portable or more limited for more advanced appends? If I'm going to pick one of these as my "go-to" solution, is there any reason to lean toward one or the other (save personal syntactic preferences)?
Whichever one fits your project.
It's both circumstantial and preferential.
Do you like shorter syntax and the features of the jQuery object? Then use jQuery.
On that note, jQuery is a library written in JavaScript, so in the background it is doing the same thing plain JavaScript does, just with an easier-to-understand, less cluttered syntax.
One difference between JavaScript's createElement and innerHTML/jQuery in your example is that jQuery and innerHTML both require the provided HTML to be parsed and turned into DOM nodes before appending. Pure JavaScript will always be faster in that scenario, because you skip that parsing step; however, it's most likely a negligible difference in performance.
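To make that concrete, here is a small sketch of the same <p> produced three ways (illustrative only):
var viaJquery = $('<p></p>').text('parsed by jQuery'); // string is parsed into a node
var holder = document.createElement('div');
holder.innerHTML = '<p>parsed by the browser</p>'; // string is parsed into a node
var direct = document.createElement('p'); // node created directly, no parsing step
direct.textContent = 'no parsing involved';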
It really all comes down to what you're comfortable with and how concerned you are with optimizing your code for performance. In this case, I would venture to say that choosing createElement purely for performance could be considered a superfluous optimization.
I know that in jQuery, selecting elements by ID is very efficient. I have a question about the following selectors.
Please consider these 3 selectors:
$('#MyElement')
$('#Mytbl #MyElement')
$('#Mytbl .MyClass')
Which one is faster, and why? How can I check the time elapsed to select my element in jQuery?
A direct ID selector will always be the fastest.
I've created a simple test case based on your question...
http://jsperf.com/selector-test-id-id-id-id-class
Selecting nested IDs is just wrong, because if an ID is unique (which it should be), then it doesn't matter whether it's nested or not.
This is the way to measure elapsed time between some JavaScript calls:
var selectorTimes = [];

var start = new Date().getTime();
$('#MyElement');
selectorTimes.push(new Date().getTime() - start);

start = new Date().getTime();
$('#Mytbl #MyElement');
selectorTimes.push(new Date().getTime() - start);

start = new Date().getTime();
$('#Mytbl .MyClass');
selectorTimes.push(new Date().getTime() - start);

console.log(selectorTimes);
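As an aside (not part of the original answer), console.time/console.timeEnd gives the same measurement with less boilerplate in browsers that support it:
console.time('id selector');
$('#MyElement');
console.timeEnd('id selector'); // logs something like "id selector: 0.15ms"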
I think the second selector is not efficient; if you have a DOM id, select it directly:
$('#MyElement')
The first one is the fastest, simply because it only has 1 property to look for. However,
document.getElementById("MyElement")
is even faster. It's native JavaScript, and unlike with jQuery, the browser immediately knows what you want to do, instead of having to run through a load of jQuery code to figure out what you're looking for in the first place.
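A common pattern (an aside, not from the answer itself) is to combine both: use the native lookup for speed, then wrap the result when you need jQuery methods:
var $el = $(document.getElementById('MyElement')); // native lookup, jQuery wrapper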
You can use jsPerf to run a speed test, to compare the functions: Test Case. Results:
$('#MyElement')
Ops/sec: 967,509
92% slower
$('#Mytbl #MyElement')
Ops/sec: 83,837
99% slower
$('#Mytbl .MyClass')
Ops/sec: 49,413
100% slower
document.getElementById("MyElement")
Ops/sec: 10,857,632
fastest
As expected, the native getter is the fastest, followed by the jQuery getter with only one selector at less than 10% of the native speed. The jQuery getters with two selector parts don't come close to the operations per second of native code, especially the class selector, since classes are usually applied to multiple elements, compared to IDs. (Native ID lookups stop searching after they've found one element; I'm not sure if jQuery does too.)
A couple things:
More selectors = slower search. If you can get your desired result with fewer predicates, do so.
Getting an element by ID is quicker than getting by a class. getElementById is a core function of JavaScript that is heavily optimised because it is used so frequently. Leverage this when possible.
The descendant (space) selector is much more costly than the child selector ('>'). If you can use the child selector, do so; see the sketch after this list.
These rules apply to CSS just as they do to JavaScript and jQuery.
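A quick illustration of these rules (the markup names are hypothetical):
$('#myTable .cell'); // descendant (space): matches at any depth, so far more nodes are considered
$('#myTable > .row'); // child (>): only direct children of #myTable are candidates
document.getElementById('myTable'); // the heavily optimised native ID lookup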
Here you are. See the comments on top of each example:
//fastest, because it is just one id lookup: document.getElementById("MyElement") with no further processing.
$('#MyElement')
//a little slower. jQuery interprets selectors from right to left, so it first looks up #MyElement and then checks whether it is hosted in #Mytbl.
$('#Mytbl #MyElement')
//the slowest of the three. jQuery interprets selectors from right to left, so it first finds all elements with .MyClass as their class and then filters to those hosted in #Mytbl.
$('#Mytbl .MyClass')
If you can, always use just the id (like the first example), but if you have to chain multiple selectors and classes together, try to put the most specific one at the right, for example an id, because jQuery interprets selectors from right to left.
Also, if you need nested selectors, it's faster to use $().find().find()
http://jsperf.com/selector-test-id-id-id-id-class/2
$('#Mytbl .MyClass')
$('#Mytbl').find('.MyClass')
The latter is about 65% faster.
Obviously the first one, $("#MyElement"), is faster than the other 2.
Accessing an element by its id is always faster, but sometimes we have to find an element inside some container. In that case we prefer .find() or .filter() (depending upon the situation).
The difference between selectors varies from browser to browser. E.g., accessing an element through a class is slower on IE than on FF; FF is faster when accessing an element by class rather than by ID.
In your second example, i.e. $("#mytbl #MyElement"), you are finding #MyElement under #mytbl, which is legal but not an appropriate way. Since you already know the ID of the element (assuming you have only one element with this id on your page), it's better to use $("#MyElement"). $("#mytbl #MyElement") will read #mytbl first and then traverse to find #MyElement under it, hence it is time-consuming and slow.
To test the different cases, you can write a small snippet that reads/accesses at least 10,000 elements in a loop; otherwise it will be hard to determine which way is faster.
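A rough sketch of such a snippet (using the selectors from the question):
var iterations = 10000;
var start = new Date().getTime();
for (var i = 0; i < iterations; i++) {
    $('#Mytbl #MyElement'); // swap in each selector you want to compare
}
console.log('elapsed: ' + (new Date().getTime() - start) + ' ms');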
The fastest will be:
$('#Mytbl', '#MytblContainer' );
because in this case, jQuery doesn't have to search the whole DOM tree to find '#Mytbl'. It will search only within the given scope, i.e. inside '#MytblContainer'.
I would say that the first one is the fastest because you're simply searching for one id.
And
$('#Mytbl .MyClass')
is the slowest because you're not specifying the type of element that has the class "MyClass"
My question is related to DOM parsing getting triggered. I would like to know why it's faster to use a CSS ID selector than a class selector. When does the DOM tree have to be parsed again, and what tricks and performance enhancements should I use? Also, someone told me that if I do something like
var $p = $("p");
$p.css("color", "blue");
$p.text("Text changed!");
instead of
$("p").css("color", "blue");
$("p").text("Text changed!");
it improves performance. Is this true for all browsers? Also, how do I know if my DOM tree has been re-parsed?
Well, an #id selector is faster than a class selector because: (a) there can only be one element with a given id value; (b) browsers can hold an id -> element map, so an #id selector can work as quickly as a single map lookup.
Next, the first option suggested above is definitely faster, as it avoids the second lookup, thereby reducing the total selector-based lookup time by a factor of 2.
Last, you can use the Chrome Developer Tools' Selector Profiler (in the Profiles panel) to profile the time it takes the browser to process the selectors in your page (match + apply styles to the matching elements).
An ID selector is faster than a class selector because there is only one element with a given ID, whereas many elements could share a class and all of them have to be searched.
The code below needlessly parses the DOM twice, so of course it will be slower:
$("p").css("color", "blue");
$("p").text("Text changed!");
I encourage you to make your own performance tests whenever you have a doubt. You can get more info on how to do that here: How do you performance test JavaScript code?. Once you've tested performance on your own, you'll never forget the results.
In particular, the execution of the $() function on a given jQuery selector must obtain the matching DOM nodes. I'm not sure exactly how this works, but I'm guessing it is a combination of document.getElementById(), document.getElementsByTagName() and others. This has a processing cost; no matter how small it may be, if you call it only once and then reuse the result, you save some processing time.
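For instance (a variation on the question's first snippet, not from the answer itself), caching and chaining avoid repeated lookups:
var $p = $('p'); // the lookup happens once
$p.css('color', 'blue').text('Text changed!'); // subsequent calls reuse the cached set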
In a piece of example code I wrote:
// toArray: a helper that converts the NodeList to a real array,
// e.g. Array.prototype.slice.call(list)
var as = toArray(document.getElementsByClassName("false")).filter(function (el) {
    return el.tagName === "A";
});
And I was thinking I could replace that with
var as = document.querySelectorAll("a.false");
Now, after reading the following facts:
Pretend browser support isn't an issue (we have shims and polyfills).
Pretend you're not in your generic jQuery mindset of using QSA to get every element.
I'm going to write qsa instead of document.querySelectorAll because I'm lazy.
Question: When should I favour QSA over the normal methods?
It's clear that if you're doing qsa("a"), qsa(".class") or qsa("#id") you're doing it wrong, because there are methods (byTagName, byClassName, byId) that are better.
It's also clear that qsa("div > p.magic") is a sensible use-case.
Question: But is qsa("tagName.class") a good use-case of QSA?
As a further aside, there are also these things called NodeIterator.
I've asked a question about QSA vs NodeIterator
You should use QSA when gEBI, gEBTN and gEBCN (getElementById, getElementsByTagName, getElementsByClassName) do not work because your selector is complex.
QSA vs the traditional DOM methods is a matter of preference and of what you're going to do with the returned data set.
If browser support were not an issue, I would just use it everywhere. Why use 4 different methods (...byId, ...byTagName, ...byClassName) if you could just use one?
QSA seems to be slower than ...byId, but it still only takes a few milliseconds or less. Most of the time you only call it a few times, so it's not a problem. When you hit a speed bottleneck, you can always replace QSA with the appropriate other method, as sketched below.
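A hypothetical helper along those lines, routing simple selectors to the fast native methods and falling back to QSA for anything complex:
function query(selector, context) {
    context = context || document;
    if (/^#[\w-]+$/.test(selector)) { // plain id
        var el = document.getElementById(selector.slice(1));
        return el ? [el] : [];
    }
    if (/^\.[\w-]+$/.test(selector)) { // plain class
        return context.getElementsByClassName(selector.slice(1));
    }
    if (/^[\w-]+$/.test(selector)) { // plain tag name
        return context.getElementsByTagName(selector);
    }
    return context.querySelectorAll(selector); // anything more complex
}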
I've set up some tests for you to mess around with. It appears that QSA is a lot slower. But if you are not calling it that much, it shouldn't be a problem.
http://jsfiddle.net/mxZq3/
EDIT - jsperf version
http://jsperf.com/qsa-vs-regular-js
This question already has answers here: Deep cloning vs setting of innerHTML: what's faster? (2 answers)
I'm looking at a problem that needs a complex block of divs to be created once for each element in a set of ~100 elements.
Each individual element is identical except for the content, and they look (in HTML) something like this:
<div class="class0 class1 class3">
<div class="spacer"></div>
<div id="content">content</div>
<div class="spacer"></div>
<div id="content2">content2</div>
<div class="class4">content3</div>
<div class="spacer"></div>
<div id="footer">content3</div>
</div>
I could either:
1) Create all the elements as innerHTML with string concatenation to add the content.
2) Use createElement, setAttribute and appendChild to create and add each div.
Option 1 gives a slightly smaller file to download, but option 2 seems to be slightly faster to render.
Other than performance is there a good reason to go via one route or the other? Any cross-browser problems / performance gremlins I should test for?
...or should I try the template and clone approach?
Many thanks.
Depends on what's "better" for you.
Performance
From a performance point of view, createElement + appendChild wins by a LOT. Take a look at this jsPerf I created to compare both; the results hit you in the face:
innerHTML: ~120 ops/sec
createElement+appendChild: ~145000 ops/sec
(on my Mac with Chrome 21)
Also, innerHTML triggers page reflow.
On Ubuntu with Chrome 39, the tests give similar results:
innerHTML: 120000 ops/sec
createElement: 124000 ops/sec
Probably some optimisations have taken place.
On Ubuntu with the QtWebkit-based browser Arora (wkhtml is also QtWebkit-based), the results are:
innerHTML: 71000 ops/sec
createElement: 282000 ops/sec
It seems createElement is faster on average.
Maintainability
From a maintainability point of view, I believe string templates help you a lot. I use either Handlebars (which I love) or Tim (for projects which need the smallest footprint). When you "compile" (prepare) your template and it's ready for appending, you use innerHTML to put the template string into the DOM. One trick I use to avoid reflow is to createElement a wrapper and put the template into that wrapper element with innerHTML. I'm still looking for a good way to avoid innerHTML altogether.
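A minimal sketch of that wrapper trick (assuming Handlebars is loaded; the template and data here are made up):
var template = Handlebars.compile('<div class="item">{{title}}</div>');
var wrapper = document.createElement('div'); // built off-document, so no reflow yet
wrapper.innerHTML = template({ title: 'Hello' }); // innerHTML touches only the detached wrapper
document.body.appendChild(wrapper); // a single insertion into the live DOM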
Compatibility
You don't have to worry here; both methods are fully supported by a broad range of browsers (contrary to what altCognito says). You can check the compatibility charts for createElement and appendChild.
Neither. Use a library like jQuery, Prototype, Dojo or MooTools, because both of these methods are fraught with trouble:
Did you know that innerHTML on tables in IE is read-only?
Did you know for the select element it's broken as well?
How about problems with createElement?
The writers of the major JavaScript libraries have spent a lot of time, and maintain entire bug-tracking systems, to make sure that when you call their DOM-modifying tools they actually work.
If you're writing a library to compete with the above tools (and good luck to you if you are), then I'd choose the method based on performance; innerHTML has always won out in the past, and since it is a native method, it's a safe bet that it will remain the fastest.
altCognito makes a good point: using a library is the way to go. But if I were doing it by hand, I would use option #2 and create elements with DOM methods. They are a bit ugly, but you can write an element factory function that hides the ugliness. Concatenating strings of HTML is ugly as well, and more likely to have security problems, especially XSS.
I would definitely not append the new nodes individually, though. I would use a DOM DocumentFragment. Appending nodes to a DocumentFragment is much faster than inserting them into the live page; when you're done building your fragment, it gets inserted all at once.
John Resig explains it much better than I could, but basically you just say:
var frag = document.createDocumentFragment();
frag.appendChild(myFirstNewElement);
frag.appendChild(mySecondNewElement);
...etc.
document.getElementById('insert_here').appendChild(frag);
Personally, I use innerHTML because it's what I'm used to and for something like this, the W3C methods add a lot of clutter to the code.
Just as a possible way to cut down on the number of divs: is there any reason you are using spacer elements instead of just editing the margins on the content divs?
I don't think there's much to choose between them. In the olden days (IE6, FF1.5), innerHTML was faster (benchmark), but now there doesn't seem to be a noticeable difference in most cases.
According to the Mozilla dev docs, there are a few situations where innerHTML behaviour varies between browsers (notably inside tables), so createElement will give you more consistency; innerHTML, however, is usually less to type.
Since you mentioned template and clone, you may be interested in this question:
Deep cloning vs setting of innerHTML: what’s faster?
Another option is to use a DOM wrapper, such as DOMBuilder:
DOMBuilder.apply(window);
DIV({"class": "class0 class1 class3"},
DIV({"class": "spacer"}),
DIV({id: "content"}, "content"),
DIV({"class": "spacer"}),
DIV({id: "content2"}, "content2"),
DIV({"class": "class4"}, "content3"),
DIV({"class": "spacer"}),
DIV({id: "footer"}, "content3")
);
Personally, if each item is going to need the exact same structure, I would go with the cloning approach. If there's any logic involved in creating the structure into which the content will go, I'd rather maintain something like the above than fiddle about with strings. If that approach turned out to be too slow, I'd fall back to innerHTML.
Neither. Use cloneNode.
var div = document.createElement('div');
var millionDivs = [div];
// each clone is a cheap shallow copy of the prototype node; no HTML parsing is involved
while (millionDivs.length < 1e6) millionDivs.push(div.cloneNode());
As far as I know, the fastest way is to avoid DOM editing as long as possible. That means it is better to build one big string and then assign it to innerHTML. But one remark: don't perform too many operations on big strings; it is faster to use an array of strings and then join them all.
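A small sketch of the array-join approach (the names are illustrative):
var parts = [];
for (var i = 0; i < 1000; i++) {
    parts.push('<div>item ' + i + '</div>'); // cheap array pushes instead of string concatenation
}
// one join and a single DOM write at the end
document.getElementById('container').innerHTML = parts.join('');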
For a complex problem like this, I usually use the innerHTML methods because they are a) easier to read and modify code-wise and b) more convenient to use in loops. As the top post says, they unfortunately fail in IE (6, 7, 8) on <table>, <thead>, <tbody>, <tr>, <tfoot>, <select> and <pre> elements.
1) Create all the elements as innerHTML with string concatenation to add the content.
2) Use createElement, setAttribute and appendChild to create and add each div.
3) Compromise: create all the elements in one go as innerHTML (which avoids a lot of child-node list manipulation slowness), then write the content that changes on each item using data/attribute access (avoiding all that nasty mucking around with having to HTML-escape content), e.g. something like:
var html = (
    '<div class="item">' +
        '<div class="title">x</div>' +
        '<div class="description">x</div>' +
    '</div>'
);
var items = [
    {'id': 1, 'title': 'Potato', 'description': 'A potato'},
    {'id': 2, 'title': 'Kartoffel', 'description': 'German potato'}
    // ... 100 other items ...
];

function multiplyString(s, n) {
    return new Array(n + 1).join(s);
}

var parent = document.getElementById('itemcontainer');
parent.innerHTML = multiplyString(html, items.length);
for (var i = 0; i < items.length; i++) {
    var item = items[i];
    var node = parent.childNodes[i];
    node.id = 'item' + item.id;
    node.childNodes[0].firstChild.data = item.title;
    node.childNodes[1].firstChild.data = item.description;
}
Can also be combined with Neall's tip about DocumentFragments.