Faster to append or to create HTML elements beforehand? -- jQuery - javascript

I am trying to make my application faster. I want some elements to appear only when an ajax request has succeeded. Is it faster to create the elements with .append() when the request has succeeded, or is it faster to have the element in the actual HTML already and simply insert the content into it with .html()?

According to this JSPerf: plain old innerHTML beats .html() (and .html() beats .append()).
However, according to this JSPerf: DOM beats innerHTML.
So you might want to look into documentFragment, specified in DOM 1 and supported even in IE6 (so there is no reason not to use it).
Since the document fragment is in memory and not part of the main DOM tree, appending children to it does not cause page 'reflow' (computation of element's position and geometry). Consequently, using document fragments often results in better performance.
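As a rough sketch of that idea (the list id and the items array here are made up for illustration), you build the new nodes inside a fragment and touch the live DOM only once:

// hypothetical example: build list items off-DOM, then append them in one go
var fragment = document.createDocumentFragment();
var items = ['a', 'b', 'c']; // pretend this came from the ajax response

for (var i = 0; i < items.length; i++) {
  var li = document.createElement('li');
  li.appendChild(document.createTextNode(items[i]));
  fragment.appendChild(li); // no reflow: the fragment lives off-DOM
}

// one insertion into the live DOM, so only one reflow/repaint
document.getElementById('list').appendChild(fragment);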
John Resig did a nice write-up about it here and concludes:
A method that is largely ignored in modern web development can provide
some serious (2-3x) performance improvements to your DOM manipulation.
You might want to combine some of these techniques depending on the case you want to optimize.
Hope this helps!

Related

Potential performance improvement from inserting chunks of DOM elements dynamically as needed?

I'm looking at ways to improve performance of my single page web app, which will need to run on a variety of devices including lower-end phones.
I've got 8 modals (Twitter Bootstrap, but this question applies to any framework) that add over 200 elements to my total page DOM element count (783). Is it worth having these as strings in JavaScript rather than as code in the HTML, injecting them into the DOM immediately before display when needed, and removing them afterward? That would strip the live DOM size down by a quarter, making e.g. jQuery element searches quicker, a lighter-weight page, etc.
I was thinking of using jQuery's .detach() and .append(), for example.
Any time you modify the DOM, you take a performance hit because the browser has to "reflow" and "repaint" the UI. Keeping those modifications to a minimum will help; doing modifications in batches also absorbs some of that performance hit (i.e. changing 3 DOM elements separately is more expensive than changing all 3 at once). So, group together your DOM changes as best you can.
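As a hedged sketch (the element ids and markup are assumptions, not from the question): three separate writes versus one batched write:

// three separate live-DOM writes: three chances to trigger reflow/repaint
$('#first').text('1');
$('#second').text('2');
$('#third').text('3');

// batched alternative: build the new content off-DOM,
// then swap it into the container in a single operation
var html = '<div id="first">1</div><div id="second">2</div><div id="third">3</div>';
$('#container').html(html);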
The actual mechanism you use to inject the new content could either be by:
1. Passing a string of HTML to the HTML parser and asking it to parse the string on demand. This is essentially the same process that happens when the page is parsed as it arrives from the server. The standard .innerHTML or jQuery's .html() accomplishes this.
2. Building up the DOM element in memory first and then injecting that node into the DOM at the right time (i.e. document.createElement or document.createDocumentFragment). I generally favor this approach as it is more programmatic, vastly reduces the possibility of string concatenation and quotation errors, and is cleaner to read. From a performance standpoint, it also gives you the benefit of getting some of the work done prior to DOM injection time. This is the equivalent of the DOM .appendChild() or jQuery .append() methods.
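A rough, non-authoritative sketch of both mechanisms (the container id and markup are made up):

// mechanism 1: hand a string to the HTML parser
document.getElementById('container').innerHTML = '<p>Hello</p>';
// or, with jQuery:
$('#container').html('<p>Hello</p>');

// mechanism 2: build the node programmatically, then inject it
var p = document.createElement('p');
p.appendChild(document.createTextNode('Hello'));
document.getElementById('container').appendChild(p);
// or, with jQuery:
$('#container').append($('<p/>').text('Hello'));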
In the end, today's user agents handle DOM changes much better than they used to, and either approach is viable. It's the really bad techniques (like modifying the DOM in a loop) that you want to stay away from; those, in the end, are what make the difference.

removing unused DOM elements for performance

I'm writing a single page application. When the page is initially served, it contains many DOM elements that contain json strings that I'm injecting into the page.
When the page loads on the client, the first thing that happens is that these DOM elements are parsed from json to javascript objects and then they're never used again.
Would there be a performance benefit in deleting them from the DOM and reducing its size? I haven't found any conclusive data about this aspect. For info, these elements are about 500K in size.
Thanks for your suggestions.
Would there be a performance benefit in deleting them from the DOM
and reducing its size?
In terms of the performance of the DOM, generally, the less you interact with your DOM, the better.
Remember that your selectors traverse the DOM, so inefficient selectors will perform poorly no matter what. If you select elements by ID rather than by class or attribute wherever possible, you can squeeze good performance out of the page, regardless of whether you've deleted those extraneous markup items.
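For example (the selectors here are hypothetical), preferring an ID lookup over a class or attribute selector:

// fast: an ID lookup maps straight onto the browser's internal id index
var $row = $('#order-row-42');

// slower: class and attribute selectors have to examine more of the tree
var $rows = $('.order-row[data-id="42"]');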
When the page is initially served, it contains many DOM elements that
contain json strings that I'm injecting into the page.... For info,
these elements are about 500K in size.
In terms of page load, your performance is already suffering from having to transfer all of that data. If you can find a way to transfer the JSON only, it could only help. Deleting after the fact wouldn't solve this particular problem, though.
Independent of whether you pull the JSON from the DOM, you might consider this technique for including your JSON objects:
<script type='text/json' id='whatever'>
{ "some": "json" }
</script>
Browsers will completely ignore the contents of those scripts (except, of course, for embedded strings that look like </script>, which a JSON serializer might want to deal with somehow), so you don't risk running into problems with embedded ampersands and the like. That's probably a performance win in and of itself: the browser probably has an easier time looking for the end of the <script> block than it does looking through stuff it thinks is actual content. You can get the content back verbatim with .innerHTML.
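A minimal sketch of reading that block back out (the id 'whatever' comes from the snippet above; the variable names are mine):

var raw = document.getElementById('whatever').innerHTML;
var data = JSON.parse(raw); // plain objects; the script element could now be removed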

fastest search on elements in html

I have a page with many divs on which I need to implement some JavaScript functions. Is there a way to tell the browser to sort all divs by, for example, id so that I can find an element quickly? I don't really know how browsers handle searching of elements; is there some sorting or not in the major browsers (Firefox, Chrome, IE)? How do elements get indexed?
Every browser already holds such an index on ids and classes for use with, for example, CSS.
You shouldn't worry about indexing the DOM, it's been done and ready for use.
If you want to hang events on elements, just do so, either by use of document.getElementById(id) or document.getElementsByClassName(class) (that last one might bump into IE issues).
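For instance (the id, class name and handlers are made up):

// hang a click handler on one element by id
document.getElementById('saveButton').onclick = function () {
  // handle the click
};

// or on every element of a class (mind the IE caveat above)
var panels = document.getElementsByClassName('panel');
for (var i = 0; i < panels.length; i++) {
  panels[i].onclick = function () { /* handle the click */ };
}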
I think jQuery will help you in that case, or with plain JavaScript you can use getElementById or getElementsByTagName.
The browser creates a tree-like structure named the DOM (Document Object Model), with the root being the html tag, for example, then its children, their children's children, etc.
There are functions that let you access the DOM and find the required elements, but this is handled by the browser's internal implementation. You cannot change how it handles the page elements; you just use the browser's API to locate them.

Accessing DOM after big innerHTML injection

I have some code that looks like this
//create a long string of html, which includes a div with id="mydiv"
someElement.innerHTML = s; //s is the string above
document.getElementById('mydiv')
Now, after I set the innerHTML, it is going to take a while for the browser to actually render the DOM that includes id="mydiv". So, will the JavaScript sit and wait for the DOM to be properly rendered after the innerHTML injection, or will it move right along and call getElementById, which would be unsafe since the DOM for that id may not have been created yet?
Here's an example that inserts nearly 12,000 elements into the DOM using innerHTML, then calls getElementById() to find the one element at the end that has an ID.
It successfully finds the element.
Example: http://jsfiddle.net/fzUUU/
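A simplified version of what that fiddle demonstrates (the markup and element count here are illustrative, not the fiddle's exact code):

// build a long HTML string ending in a div with an id
var s = '';
for (var i = 0; i < 12000; i++) {
  s += '<span>item ' + i + '</span>';
}
s += '<div id="mydiv">found me</div>';

document.body.innerHTML = s;

// runs synchronously after the assignment above and finds the element
alert(document.getElementById('mydiv').innerHTML);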
There is no standard for innerHTML. So literally anything is allowed to happen including not being implemented (I believe Opera toyed with this once). It's just a hack introduced by Microsoft in IE.
However: implementing innerHTML in any way that diverges from the way IE does it will break lots of pages on the web, so browser makers are forced to implement it the way IE does. And here's how IE does it: while innerHTML is running, the script interpreter stops. That is to say, innerHTML blocks until the DOM is fully parsed.
So it should be safe to access the DOM directly after innerHTML for all current browsers and, due to the number of pages on the web that require it, for the foreseeable future.
Additional answer:
It appears that the draft HTML5 spec specifies innerHTML: http://www.w3.org/TR/2008/WD-html5-20080610/dom.html#innerhtml0
It basically describes what IE already does. It should be noted that it doesn't specifically say whether the operation is blocking or non-blocking, but as mentioned earlier, a non-blocking implementation would break lots of pages on the web.

How efficient is element.cloneNode(true) (deep clone)?

I'm building the HTML code within an XML DOM object to be used as the contents of the innerHTML of a div element using an XSL template. Traditionally we create a new XML DOM document and add the input parameters as XML Elements for the transform via javascript. This is all very time-consuming as we are basically hand picking the data from another XML document that represents our current account and copying the data into a transient XML DOM document.
What I'd like to do is clone the relevant node of the account document (i.e. customer info) and use it as the basis for the transform. I don't want to use the account document directly as I'd like to be able to add transform specific input, without making changes to the account object.
How efficient is using .cloneNode(true) for a desired node of about typically less than 200 elements from a document of typically 2000+ elements? The target platform is IE6 with no external tools (i.e. ActiveX).
cloneNode is pretty efficient, but it will consume more memory doing it that way.
Another approach to consider is to use a Template object and a processor: pass your additional/changed data as parameters to the processor and the element that you would otherwise have cloned as the input element. This approach would require fairly significant mods to the XSL, though.
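A minimal sketch of the clone-then-augment idea from the question (the document structure and element names are invented; DOMParser stands in for however the account XML is actually loaded):

// a tiny stand-in for the real account document
var accountDoc = new DOMParser().parseFromString(
  '<account><customer><name>Jane</name></customer></account>',
  'application/xml'
);

// deep-clone just the customer subtree (~200 elements in the real case)
var customer = accountDoc.getElementsByTagName('customer')[0];
var copy = customer.cloneNode(true);

// add transform-specific input to the copy only; the account document is untouched
copy.appendChild(accountDoc.createElement('transformOptions'));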
IE will fail on certain things.
e.g. checked radio/checkboxes will not be checked when you add your copy to the DOM.
Example:
http://webbugtrack.blogspot.com/2008/03/bug-199-cant-clone-form-element-in-ie.html
http://webbugtrack.blogspot.com/2007/08/bug-242-setattribute-doesnt-always-work.html
To see what IE will actually return, try replacing the url with this in the Address Bar of one of your pages, and press enter.
javascript:'<xmp>'+window.document.body.outerHTML+'</xmp>';
If you are happy with the results, great! But I think you'll end up less than satisfied with what IE returns (both in the DOM and in this "string" value equivalent).
If you don't need form elements, cloneNode is a really reliable tool, and for inserting ajax data it is incredibly efficient.
However, since IE in particular has a history of problems with name attributes, it is inconvenient to address any of those if you insert data.
I don't really understand your use of XSL(T); to me it sounds like using a gas station as a (not!) convenient place to convert a 1960 VW into a 2008 Skoda. Surely they share some common technology, though it is not used in the same way; the computerization is in some ways a minor problem, while the major problems lie nearly everywhere else.
Have you got any need for form elements?
