This is for a single-page mobile web app.
For readability I've been concatenating my HTML, then injecting it. I'm pretty certain there's a more efficient way, and would like to get a JS expert's opinion!
Here's an example of one of my concatenated HTML strings:
var html_str = '';
$.each(events_array, function(k, ev_type){
if( localStorage.getItem('show_type'+ev_type.type_num) !== 'false' ){
$.each(ev_type, function(k2, e){
if(typeof e != 'string'){
if(fav_mode && last_date_num != e.date){
html_str += '<li class="date">'+e.date_text+'</li>';
last_date_num = e.date;
}
html_str += '<li';
if(fav_mode | (FAVOURITES && $.inArray(parseInt(e.event_id), FAVOURITES) >= 0) ){
html_str += ' class="fav"';
}
html_str += '>';
html_str += '<div class="l_'+e.gig_club+'"></div>';
html_str += '<p rel="'+e.event_id+'"><span>'+e.venue+' : </span>'+e.nameofnight+'</p>';
html_str += '</li>';
}
});
}
});
return html_str
There is no "fastest". There is only "fastest" for a given browser.
There are three common techniques: HTML string manipulation, templating, and DOM manipulation.
Because templating can use both HTML string manipulation and the DOM internally, I would recommend it for readability/maintainability.
Here are a few benchmarks
Templating
More templating
Templating with data for mobile platforms
Loads of templates
Dust js benchmark
If you're talking huge amounts of HTML and perf is paramount:
Definitely Don't:
Inject DOM nodes into the live document one at a time in a loop. This can hurt perf even in not-gigantic HTML scenarios.
Try to save work by reusing the existing live HTML, tweaking attributes and swapping out content across a large variety of elements. As with the previous point, this can get ugly even with far fewer than thousands of elements. Large tables that don't have table-layout set to fixed can get particularly nasty when you trigger a ton of reflow.
Use jQuery's direct string-to-DOM building, e.g. $('<li>...</li>'). IMO it's actually excellent for most uses, since it does a great job of catching nasty escape sequences when injecting data from sources that may one day not be that secure, but it isn't the fast path. If you're going with strings, use the html method to build and inject from them instead. Really, for max perf, I'd probably just drop jQuery altogether if I 100% trusted my data.
Do:
Build it all out in a document fragment and append with that, or build an HTML string and drop it all in at once (a rough sketch of both follows this list). I prefer the hybrid method described below under "Maybe", but I haven't re-tested it in 2-3 years. Either way the browser only has to repaint the page once. Google "CSS reflow" for more on that.
If you have to make lots of tweaks to a large volume of HTML, just change the data and rebuild the whole set, for the same reason as above.
If you build from strings, minimize concatenation with the '+' operator and use array joins instead. Templating with arrays works great in a pinch.
Worry about loops vs. iterators that take a function argument if IE<=8 is a concern; repeated function calls can get expensive without a JIT compiler. They can actually help you in JITs if they're inlined (defined inside another function without any references returned outside).
Test everything you try. There are gray areas in everything except the multiple-appends vs. one-giant-append rule: multiple will always be slower.
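A rough sketch of both approaches (items and the #list container are made-up names for illustration):
// 1. Build everything in a DocumentFragment off-document, then append once.
var frag = document.createDocumentFragment();
for (var i = 0; i < items.length; i++) {
    var li = document.createElement('li');
    li.appendChild(document.createTextNode(items[i].name));
    frag.appendChild(li);  // no reflow yet; the fragment isn't in the document
}
document.getElementById('list').appendChild(frag);  // one reflow

// 2. Build an array of strings, join once, inject once.
var parts = [];
for (var j = 0; j < items.length; j++) {
    parts[j] = '<li>' + items[j].name + '</li>';
}
document.getElementById('list').innerHTML = parts.join('');  // one parse, one reflow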
Maybe:
An excellent trick in legacy browsers was to innerHTML large strings into a detached element, move the result into a document fragment, and append that via DOM methods. This wasn't the fastest for every browser, but it was the best approach across the board back when IE7 mattered, and in modern JIT browsers the differences between one-block innerHTML, DOM-methods-only into a document fragment, and the hybrid approach are mostly negligible.
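Roughly, the hybrid looked like this (bigHtmlString and container are placeholder names):
var temp = document.createElement('div');
temp.innerHTML = bigHtmlString;            // let the parser build the nodes off-document
var frag = document.createDocumentFragment();
while (temp.firstChild) {
    frag.appendChild(temp.firstChild);     // move the parsed nodes into the fragment
}
container.appendChild(frag);               // a single append into the live DOM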
I would totally recommend templating too.
But I think you're already following best practice for injecting HTML: it's far better to build the HTML and then inject it all at once than to inject many small bits of HTML, as the browser may repaint/reflow the document on each injection.
An explicit for loop will definitely be much faster than $.each(), mainly because $.each() executes a function call for each element, but also for other reasons, e.g. with the new execution frame the lookup time for html_str is longer.
There is some empirical evidence to suggest (I think this was valid for older browsers; I'm not sure what is faster nowadays or on mobile devices, so it's worth checking) that adding the elements to an array (with an indexed assignment html_str[i], not html_str.push()) and then calling .join is faster than string concatenation.
As has been mentioned, adding one large DOM string is faster than small appends, and much faster than using DOM methods (appendChild, insertBefore, etc.).
A good templating engine would do these things for you (at a small extra cost), although I'm not sure if many of them do. And if it's only a small amount of "templating" then it might be overkill to use a library, when a simple loop does the trick.
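As a rough sketch of those two points applied to the loop in the question (the favourites/date logic is left out, and events is assumed to be a flat array here):
var parts = [], n = 0;
for (var i = 0, len = events.length; i < len; i++) {
    var e = events[i];
    parts[n++] = '<li><div class="l_' + e.gig_club + '"></div>';
    parts[n++] = '<p rel="' + e.event_id + '"><span>' + e.venue + ' : </span>' + e.nameofnight + '</p></li>';
}
return parts.join('');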
You might also consider using a DocumentFragment. It may not be as readable as an HTML string, but it is very effective performance-wise, and arguably readable in a more object-oriented way.
You can visit this page for details: http://ejohn.org/blog/dom-documentfragments/
I am confused about the following styles of writing code. I want to know which is the practical and efficient method to insert HTML tags into the document.
JavaScript:
document.getElementById('demoId').innerHTML='<div>hello world and <a>click me</a> also</div>';
OR
var itd = document.createElement('div'),
    ita = document.createElement('a'),
    idt = document.createTextNode('hello world and '),
    iat = document.createTextNode('click me'),
    idn = document.createTextNode(' also');
ita.appendChild(iat);
itd.appendChild(idt);
itd.appendChild(ita);
itd.appendChild(idn);
document.getElementById('demoId').appendChild(itd);
Which is the fastest and best method?
Well, just think about what each of them is doing. The first one takes the string, parses it, and then calls the equivalent of document.createElement, appendChild, etc. (or similar internal routines) based on the output of the parsed string. The second reduces the workload on the browser, as you're stating directly what you want it to do.
See a comparison on jsPerf
According to jsPerf option 1 is approximately 3 times slower than option 2.
On the topic of maintainability, option 1 is far easier to write, but call me old-fashioned: I'd prefer to use option 2, as it just feels much safer.
Edit
After a few results started coming in, the differing results piqued my curiosity. I ran the test twice in each of the browsers I have installed; here is a screenshot of the results from jsPerf after all the tests (operations/second, higher is better).
So browsers seem to differ greatly in how they handle the two techniques. On average option 2 seems to be faster, due to Chrome's and Safari's large gap in favour of option 2. My conclusion is that we should just use whichever feels the most comfortable or fits best with your organisation's standards, and not worry about which is more efficient.
The main thing this exercise has taught me is to not use IE10 despite the great things I've heard about it.
I am making an add-on for Firefox, and it loads an HTML page using AJAX (the add-on has its own XUL panel).
At this point I have not looked into creating a document object, placing the AJAX response contents into it, and then using XPath to find what I need.
Instead I am loading the contents and parsing them as text with a regular expression.
But I have a question: which would be better to use, XPath or regular expressions? Which is faster?
The HTML page consists of hundreds of elements containing the same text, and what I basically want to do is count how many of them there are.
I want my add-on to work as fast as possible, and I don't know the mechanics behind regexes or XPath, so I don't know which is more efficient.
Hope I was clear. Thanks.
Whenever you are dealing with XML, use XPath (or XSLT, XQuery, SAX, DOM, or any other XML-aware method to go through your data). Never use regular expressions for this task.
Why? XML processing is intricate, and dealing with all its oddities (external/parsed/unparsed entities, DTDs, processing instructions, whitespace handling and collapsing, Unicode normalization, CDATA sections, etc.) makes it very hard to create a reliable regex-based way of getting at your data. The fact that it has taken the industry years to learn how best to parse XML should be reason enough not to try to do it yourself.
Answering your question: when it comes to speed (which should not be your primary concern here), it depends heavily on the implementation of the XPath or regex compiler/processor. Sometimes XPath will be faster (e.g. when using keys, where possible, or compiled XSLT); other times regexes will be faster (if you can use a precompiled regex and your query is easy). But regexes are never easy with HTML/XML, simply because of the problem of matching nested tags, which cannot be reliably solved with regexes alone.
If the input is huge, regexes will tend to be faster, unless the XPath implementation can do streaming processing (which I believe is not what's used inside Firefox).
You wrote:
"which is more effective"*
the one that brings you quickest to a reliable and stable implementation that's comparatively speedy. Use XPath. It's what's used inside Firefox and other browsers as well if you need your code to run from a browser.
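For what it's worth, counting matching elements with the browser's XPath API is only a few lines. A minimal sketch, assuming doc is the document built from the AJAX response and the expression itself is just illustrative:
var result = doc.evaluate(
    'count(//div[contains(., "some text")])',  // count elements containing the text
    doc,
    null,
    XPathResult.NUMBER_TYPE,
    null
);
var howMany = result.numberValue;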
I have a big set of data that should be shown in a table.
I use JavaScript to fill the table instead of printing it in HTML.
Here is a sample of the data I use:
var aUsersData = [[1, "John Smith", "...."],[...],.......];
The problem is that Firefox warns me that "there is a heavy script running, should I continue or stop?"
I don't want my visitors to see the warning. How can I make performance better? jQuery? Pure script? Or another library you'd suggest?
You can use the method described here to show a progress bar and not have the browser lock up on you:
http://www.kryogenix.org/days/2009/07/03/not-blocking-the-ui-in-tight-javascript-loops
I am using almost that method on this page:
http://www.bacontea.com/bb/
to keep the browser from hanging and to show feedback while loading.
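The core idea of that technique, very roughly (the chunk size is arbitrary and addRow stands in for whatever builds one row):
function buildRowsInChunks(data, onDone) {
    var i = 0, CHUNK = 200;
    (function next() {
        var end = Math.min(i + CHUNK, data.length);
        for (; i < end; i++) {
            addRow(data[i]);      // build/append one row
        }
        if (i < data.length) {
            setTimeout(next, 0);  // yield so the browser can repaint and stay responsive
        } else if (onDone) {
            onDone();
        }
    }());
}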
jQuery doesn't usually make things faster, just easier. I use jQuery to populate tables, but they're pretty small (at most 2 columns by 40 rows). How much data are you populating into the table? This could be the limiting factor.
If you post some of your table-populating code we can see if it's possible to improve performance in any way.
My suspicion is that it won't make much difference either way, although sometimes adding a layer of abstraction like jQuery can impact performance. Alternatively, the jQuery team may have found an obscure, really fast way of doing something that you would have done in a more obvious, but slower, way if you weren't using it. It all depends.
Two suggestions that apply regardless:
First, since you're already relying on your users having JavaScript enabled, I'd use paging and possibly filtering as well. My suspicion is that it's building the table that takes the time. Users don't like to scroll through really long tables anyway; adding some paging and filtering features to the page, so they only see the entries from the array they really want, may help quite a lot.
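For example, if you keep the full array in memory, showing a page is just a slice of it (pageIndex and pageSize are made-up names here):
var start = pageIndex * pageSize;
var pageData = aUsersData.slice(start, start + pageSize);  // render only this page's rows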
Second, when building the table, the way you do it can have a big impact on performance. It almost seems a bit counter-intuitive, but with most browsers, building up a big string and then setting the innerHTML property of a container is usually faster than using the DOM createElement function over and over to create each row and cell. The fastest overall tends to be to push strings onto an array (rather than repeated concatenation) and then join the array:
var markup, rowString, index, length;
markup = [];
markup.push("<table><tbody>");
for (index = 0, length = array.length; index < length; ++index) {
rowString = /* ...code here to build a row as a string... */;
markup.push(rowString);
}
markup.push("</tbody></table>");
document.getElementById('containerId').innerHTML = markup.join("");
(That's raw JavaScript/DOM; if you're already using jQuery and prefer, that last line can be rewritten $('#containerId').html(markup.join(""));)
This is faster than using createElement all over the place because it allows the browser to process the HTML by directly manipulating its internal structures, rather than responding to the DOM API methods layered on top of them. And the join thing helps because the strings are not constantly being reallocated, and the JavaScript engine can optimize the join operation at the end.
Naturally, if you can use a pre-built, pre-tested, and pre-optimised grid, you may want to use that instead, as it can provide the paging, filters, and fast building required, if it's a good one.
You can try a JS templating engine like PURE (I'm the main contributor).
It is very fast on all browsers, and keeps the HTML totally clean and separated from the JS logic.
If you prefer the <%...%> type of syntax, there are plenty of other JS template engines available.
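For instance, with Underscore's _.template (just one option among many; the markup, events array, and #listing container here are made up):
var rowTmpl = _.template('<li><span><%= venue %> : </span><%= nameofnight %></li>');
var html = '';
for (var i = 0; i < events.length; i++) {
    html += rowTmpl(events[i]);  // fill the template with each event's data
}
$('#listing').html(html);        // inject once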
Too often I find myself building selectors with string manipulation (split, search, replace, concat, +, join).
Good or bad?
What's wrong with that? What are the alternatives, just hardcoding them as single strings? You may well have conventions on your site for how the layout is organized, and if you define the selector components in one place and use them to build your selectors, that sounds like less hassle than going through all the code and doing search-and-replace everywhere a selector shows up.
I'd say it's good assuming you have the strings otherwise organized (defined in one place, used in several places).
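For example (the names here are invented, just to show the shape of it):
// the only place these strings are defined
var SEL = {
    product: 'div.product',
    price:   'span.price'
};
// elsewhere: build compound selectors from the pieces
$(SEL.product + ' ' + SEL.price).addClass('on-sale');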
This is somewhat unrelated to your question, but:
One thing I would recommend is to be cautious with descendant or child selectors (e.g. 'div.product > span.price'). UI parts often get reorganized, moved around, or wrapped in something else, and when that happens, descendant-based selectors break.
Another thing to keep in mind is that attribute-based selectors (e.g.: 'input[value="Login"]') are often fragile when dealing with localized content (if attribute values are localized).
I know that accessing and manipulating the DOM can be very costly, so I want to do this as efficiently as possible. My situation is that a certain div will always contain a list of items; however, sometimes I want to refresh that list with a completely different set of items. In this case I can build the new list and append it to that div, but I also need to clear out the old list. What's the best way? Set the innerHTML to the empty string? Iterate over the child nodes and call removeChild? Something else?
Have a look at QuirksMode. It may take some digging, but there are timings for exactly this operation in various browsers. Although the tests were done more than a year ago, setting innerHTML to "" was the fastest in most browsers.
P.S. Here is the page.
Set innerHTML to an empty string.
Emphasis on the can be costly. It is not always costly; in fact, when dealing with a small number of elements it can be trivial. Remember that optimizations should always be done last, and only after you have demonstrable evidence that the specific aspect you want to improve really is your performance bottleneck.
I recommend avoiding mucking with innerHTML if possible; it's easy to mess up and do something nasty to the DOM that you can't recover from as gracefully.
This method is quite fast for 99.9% of cases, unless you are removing massive hierarchy chunks from the DOM:
while (ele.childNodes.length > 0) {
    // repeatedly remove the last child until none are left
    ele.removeChild(ele.childNodes[ele.childNodes.length - 1]);
}
I'd recommend testing a couple implementations on the browsers you care about. Different Javascript engines have different performance characteristics and different DOM manipulation methods have different performance on different engines. Beware of premature optimization and beware of differences in browsers.