I'm building the HTML code within an XML DOM object to be used as the contents of the innerHTML of a div element using an XSL template. Traditionally we create a new XML DOM document and add the input parameters as XML elements for the transform via JavaScript. This is all very time-consuming, as we are basically hand-picking the data from another XML document that represents our current account and copying it into a transient XML DOM document.
What I'd like to do is clone the relevant node of the account document (i.e. customer info) and use it as the basis for the transform. I don't want to use the account document directly, as I'd like to be able to add transform-specific input without making changes to the account object.
How efficient is using .cloneNode(true) for a desired node of about typically less than 200 elements from a document of typically 2000+ elements? The target platform is IE6 with no external tools (i.e. ActiveX).
cloneNode is pretty efficient, but it will consume more memory that way.
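As a sketch of the clone-and-augment route the question describes (MSXML in IE6 assumed; accountDoc, xslDoc, the node name and the added DisplayMode element are all made up for illustration):
// Deep-copy the customer node; the clone is detached, so the account document is untouched.
var input = accountDoc.selectSingleNode("//CustomerInfo").cloneNode(true);
// Add transform-specific input to the clone only.
var mode = accountDoc.createElement("DisplayMode");
mode.text = "summary";
input.appendChild(mode);
// Transform just the clone and use the result as the div's contents.
document.getElementById("customerPanel").innerHTML = input.transformNode(xslDoc);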
Another approach to consider is to use a Template object and a processor: pass your additional/changed data as parameters to the processor, and the element that you would otherwise have cloned as the input element. This approach would require fairly significant mods to the XSL, though.
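For example, with MSXML's XSLTemplate (which IE6 only exposes through ActiveXObject, so this assumes that is acceptable), the processor route might look roughly like this; the node name and parameter are invented, and xslDoc must be a free-threaded DOM document:
var template = new ActiveXObject("Msxml2.XSLTemplate.3.0");
template.stylesheet = xslDoc;                                     // FreeThreadedDOMDocument holding the XSL
var processor = template.createProcessor();
processor.input = accountDoc.selectSingleNode("//CustomerInfo");  // the node you would otherwise have cloned
processor.addParameter("displayMode", "summary");                 // transform-specific input
processor.transform();
document.getElementById("customerPanel").innerHTML = processor.output;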
IE will fail on certain things.
e.g. checked radio/checkboxes will not be checked when you add your copy to the DOM.
Example:
http://webbugtrack.blogspot.com/2008/03/bug-199-cant-clone-form-element-in-ie.html
http://webbugtrack.blogspot.com/2007/08/bug-242-setattribute-doesnt-always-work.html
To see what IE will actually return, try replacing the URL in the Address Bar of one of your pages with the following, and press Enter.
javascript:'<xmp>'+window.document.body.outerHTML+'</xmp>';
If you are happy with the results, great! But I think you'll end up less than satisfied with what IE returns (both in the DOM and in this "string" value equivalent).
If you don't need form elements, cloneNode is a really reliable tool,
and for inserting ajax data it is incredibly efficient.
However, as IE in particular has a history of problems with name attributes, it is inconvenient to address any of those elements after you insert the data.
I don't really understand your use of XSL(T); to me it sounds like using a gas station as a (not!) convenient place to change a 1960 VW into a 2008 Skoda.
Surely they have some technology in common, though it is not used in the same way; the computing side is just a minor problem, the major problems lie nearly everywhere else.
Do you actually have any need for form elements?
I'm looking at ways to improve performance of my single page web app, which will need to run on a variety of devices including lower-end phones.
I've got 8 modals (Twitter Bootstrap, but this question applies to any framework) that add over 200 elements to my total page DOM element count (783). Is it worth having these as strings in JavaScript rather than markup in the HTML, injecting them into the DOM immediately before display, then removing them afterward? That would strip the live DOM size down by a quarter, making e.g. jQuery element searches quicker, a lighter-weight page, etc.
I was thinking of using jQuery's .detach() and .append(), for example.
Anytime you modify the DOM, you take a performance hit because the browser has to "reflow" and "repaint" the UI. As such, keeping those modifications to a minimum will help; doing modifications in batches also absorbs some of that performance hit (i.e. changing 3 DOM elements separately is more expensive than changing all 3 at once). So, group together your DOM changes as best you can.
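For instance, appending three list items one at a time can force up to three reflows, while something like this (the names are illustrative) touches the live document only once:
var fragment = document.createDocumentFragment();
['One', 'Two', 'Three'].forEach(function (label) {
  var li = document.createElement('li');
  li.textContent = label;
  fragment.appendChild(li);   // no reflow yet; the fragment lives off-DOM
});
document.getElementById('list').appendChild(fragment);   // one reflow/repaint for all three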
The actual mechanism you use to inject the new content could be either of the following (a sketch of both appears after this list):
Passing a string of HTML to the HTML parser and asking it to parse the string on demand. This is essentially the same process that happens when the page is first parsed as it arrives from the server. Using the standard .innerHTML property or jQuery's .html() accomplishes this.
Building up the DOM element in memory first and then injecting that node into the DOM at the right time (i.e. document.createElement or document.createDocumentFragment). I generally favor this approach as it is more programmatic, vastly reduces the possibility of string concatenation and quotation errors, and is cleaner to read. From a performance standpoint, it gives you the benefit of getting some of the work done prior to DOM injection time. This is the equivalent of the DOM .appendChild() or jQuery .append() methods.
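A quick sketch of the same (made-up) modal injected both ways:
// 1. Hand the parser a string:
$('#modal-host').html('<div class="modal"><div class="modal-body">Hi</div></div>');
// 2. Build the node in memory first, then inject it:
var modal = document.createElement('div');
modal.className = 'modal';
var body = document.createElement('div');
body.className = 'modal-body';
body.textContent = 'Hi';
modal.appendChild(body);
document.getElementById('modal-host').appendChild(modal);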
In the end, today's user agents handle DOM changes much better than they used to, and either approach is viable. It's the really bad techniques (like modifying the DOM inside a loop) that you want to stay away from; those are what make a difference in the end.
I wanted to know if there is a way to filter the innerHTML of a DOM element so that it contains just the actual HTML and discards all the comment nodes.
Actually, I'm working with AngularJS and writing some tests with Selenium, and Angular litters the rendered HTML with a lot of comments such as:
<!-- ngSwitchWhen: join -->
<div data-ng-switch-when="leave">
<!-- ngIf: isNow -->
.
.
.
</div>
I'm trying this currently for matching the result (@client is the WebDriver instance):
@client.findElement(By.xpath("//*[@id='log']/li")).getAttribute('innerHTML').then (innerHtml) ->
html = innerHtml.trim()
expect(html).to.equal """
<div class="image"><i class="icon-refresh"></i></div>
<div class="fade-6 content">Getting more activities...</div>
"""
This creates a big problem when I'm trying to test the returned DOM's structure with Mocha. What do I test for? I can't possibly repeat all the useless comments in my expected value; that would be immensely wasteful.
Is there a better way?
Writing tests that rely on innerHTML is not a good idea at all.
When you fetch innerHTML, the browser serialises the information in the DOM into a new markup string which is not necessarily the same as the markup that was originally parsed to make the DOM.
Markup details such as:
what order attributes are in
what case tags are
what whitespace there is in tags
what quotes are used to delimit attribute values
what content characters are encoded as entity or character references
are not stored in the DOM information set so are not preserved. Different browsers can and will produce different output. In some cases IE even returns invalid markup, or markup that does not round-trip back to the same information set when parsed.
+1 katspaugh's answer demonstrates ways to get the information out of the DOM rather than relying on innerHTML, which avoids this problem.
However, more generally, it is usually a bad idea to write tests that depend strongly on the exact markup your application uses. This is too-tight coupling between the requirements in the test and the implementation details. Any little change you make to the markup, even for a trivial stylistic reason or textual update, means you have to update all your tests to match. Tests are a useful tool to catch things that you didn't mean to break; tests that break on every change give you no feedback on whether you actually broke something, so they are not useful.
Whilst there's no magic bullet for separating tests completely from application markup, you should generally reduce the test to the minimum that satisfies the user's requirement, and add signalling to catch those cases. I don't know exactly what your app is doing, but I would guess the requirement is something like: "When the user clicks the 'more' button, a busy-spinner should appear to let them know the information is being fetched".
To test this you might do a check like "does the element with id 'log' contain an element with class 'icon-refresh'?". If you wanted to be more specific that it's a spinner to do with fetching activities, you could add a class like "refresh-activities" to the "Getting more activities..." div, and detect the element with that class instead of relying on text which is likely to change (especially if you ever translate your app).
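For example, with the JavaScript selenium-webdriver bindings and chai's expect, a requirement-level check might look something like this (client stands in for the WebDriver instance from the question):
client.findElements(By.css("#log li .icon-refresh")).then(function (spinners) {
  // A busy-spinner is present in the log entry; the markup around it can change freely.
  expect(spinners.length).to.be.above(0);
});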
Comment nodes are DOM nodes, as you know. You can iterate over all nodes and filter comments out by their node type:
recursivelyIterate(container, function (subNode) {
if (subNode.nodeType == Node.COMMENT_NODE) {
subNode.parentNode.removeChild(subNode);
}
});
(I haven't included the code for the recursivelyIterate function, but it should be trivial to write; one possible version is sketched below.)
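Just as a sketch, it could look like this (it snapshots the child list first so that removals during the walk don't skip nodes):
function recursivelyIterate(node, callback) {
  // Copy the live childNodes list, because the callback may remove nodes as we go.
  var children = Array.prototype.slice.call(node.childNodes);
  for (var i = 0; i < children.length; i++) {
    recursivelyIterate(children[i], callback);
    callback(children[i]);
  }
}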
Alternatively, leave the comments be and don't work with DOM nodes; work with DOM elements: getElementsByTagName, querySelectorAll and friends.
I'm writing a single page application. When the page is initially served, it contains many DOM elements that contain json strings that I'm injecting into the page.
When the page loads on the client, the first thing that happens is that these DOM elements are parsed from JSON into JavaScript objects, and then they're never used again.
Would there be a performance benefit in deleting them from the DOM and reducing its size? I haven't found any conclusive data about this. For info, these elements are about 500K in size.
Thanks for your suggestions.
Would there be a performance benefit in deleting them from the DOM and reducing its size?
In terms of the performance of the DOM, generally, the less you interact with your DOM, the better.
Remember that your selectors traverse the DOM, so inefficient selectors will perform poorly no matter what; but if you select elements by ID rather than by class or attribute wherever possible, you can squeeze good performance out of the page regardless of whether you've deleted those extraneous markup items.
When the page is initially served, it contains many DOM elements that contain json strings that I'm injecting into the page.... For info, these elements are about 500K in size.
In terms of page load, your performance is already suffering from having to transfer all of that data. If you can find a way to transfer the JSON only, it could only help. Deleting after the fact wouldn't solve this particular problem, though.
Independent of whether you pull the JSON from the DOM, you might consider this technique for including your JSON objects:
<script type='text/json' id='whatever'>
{ "some": "json" }
</script>
Browsers will completely ignore the contents of those scripts (except, of course, for embedded strings that look like </script>, which, it occurs to me, is something a JSON serializer might want to deal with), so you don't risk running into problems with embedded ampersands and the like. That's probably a performance win in and of itself: the browser likely has an easier time looking for the end of the <script> block than it does looking through stuff that it thinks is actual content. You can get the content verbatim with .innerHTML.
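Pulling the data back out is then just a parse; a minimal sketch, assuming the block above with id 'whatever':
var holder = document.getElementById('whatever');
var data = JSON.parse(holder.textContent || holder.text);   // .text covers older IE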
I have a page with many divs on which I need to implement some JavaScript functions. Is there a way to tell the browser to sort all divs by, for example, id so that I can find an element quickly? I don't really know how browsers handle searching for elements; is there some sorting or not in the major browsers (Firefox, Chrome, IE)? How do elements get indexed?
Every browser already holds such an index on ids and classes, for use with CSS for example.
You shouldn't worry about indexing the DOM; it's already done and ready for use.
If you want to hang events on elements, just do so, either with document.getElementById(id) or document.getElementsByClassName(className) (that last one might bump into IE issues).
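For example (a trivial sketch; 'header' is a made-up id):
var header = document.getElementById('header');
header.onclick = function () {
  header.className = 'highlighted';   // whatever the handler actually needs to do
};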
I think jQuery will help you in that case...
Or with plain JavaScript you can use getElementById or getElementsByTagName.
The browser creates a tree-like structure called the DOM (Document Object Model), with the root being, for example, the html tag, then its children, their children's children, and so on.
There are functions that let you access the DOM and find the required elements, but that is handled by the browser's internal implementation. You cannot change how it manages the page's elements; just use the browser's API to locate them.
I'm trying to make an AJAXy submission and have the resulting partial be inserted into my list at the proper place. I can think of a few options, but none is terribly good:
Option 1: Return JSON, do rendering in Javascript. That seems like the wrong place to render this, especially since the list itself is rendered in my application server. It has the benefit, though, of making it easy to access the value to be sorted (response.full_name).
Option 2: Return an HTML fragment, parse the sort value out. Parsing HTML in JavaScript is probably worse than rendering it.
Option 3: Return an HTML fragment that also contains a <script> section that gets evaluated. This could add the DOM node to a master list and then make a JS call to insert itself at the right point. The downside here is that IE doesn't evaluate <script> tags when innerHTML or appendChild are called.
Personally I would do #1. Nothing is wrong with combining the server-side generated HTML with the client-side generated one, but if it is a complicated procedure it is better to keep it in one place (on the server in your case). So you may want to return (as JSON) two values: the sort value, and the HTML snippet.
After that it is simple: find the position, instantiate the snippet (e.g., using dojo.html.set()), and place it with dojo.place(). Or instantiate it directly in-place.
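A rough sketch of that flow with Dojo 1.x (the endpoint, the response field names, the list id and the data-sort attribute are all invented):
dojo.xhrPost({
  url: "/activities",    // assumed to return { "sortValue": "...", "html": "<li>...</li>" }
  handleAs: "json",
  load: function (response) {
    // Find the first existing item that should come after the new one.
    var items = dojo.query("#activity-list > li");
    var ref = null;
    for (var i = 0; i < items.length; i++) {
      if (items[i].getAttribute("data-sort") > response.sortValue) {
        ref = items[i];
        break;
      }
    }
    // Instantiate the snippet directly in place.
    if (ref) {
      dojo.place(response.html, ref, "before");
    } else {
      dojo.place(response.html, dojo.byId("activity-list"), "last");
    }
  }
});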