I am developing a site which creates many table rows dynamically. The total number of rows right now is 187. Everything works fine when creating the rows, but in IE there is a large amount of lag when I leave the page. I do not know whether this is somehow related to the heavy DOM manipulation I am doing on the page. I do not create any function closures when building the dynamic content's event handlers, so I do not believe this problem is related to memory leaks. Any insight is much appreciated.
Are you creating the element nodes by hand, or using innerHTML? Although I'm not sure, my suspicion is that IE has its own memory leaks related to HTML nodes.
I made a demo page that adds 187 rows to a table via jQuery. I believe jQuery.append() uses a clever little trick to turn a string into a set of nodes. It creates a div and sets the innerHTML of that div to your string, and then clones all the child nodes of that div into the node you specify before finally deleting the div it created.
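Roughly, the trick looks something like this (a simplified sketch of the idea, not jQuery's actual source; this version moves the parsed nodes rather than cloning them):

function htmlToNodes(html, target) {
    var div = document.createElement('div'); // temporary off-DOM container
    div.innerHTML = html;                    // let the browser parse the string
    while (div.firstChild) {
        target.appendChild(div.firstChild);  // move each parsed node into the target
    }
    // the temporary div is thrown away once it is empty
}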
http://www.andrewpeace.com/stackoverflow/rows/rows.html
I'm not getting any lag in IE8, but maybe it will lag in the version you're using. I'd love it if you'd let me know! Maybe I can help some more.
Peace
YUI (and probably some other popular JavaScript libraries) provides automatic listener cleanup, so I highly recommend using YUI or another library with this feature to minimize problems with IE. However, it sounds like you might be experiencing plain slowness rather than any kind of memory leak issue; you are attaching event handlers to a whole bunch of elements. IE6 is known to be less than optimized, so it might just be taking forever to clean everything up.
apeace also has a good point: innerHTML can get you into trouble and set you up with DOM weirdness. It sounds like jQuery has a fix for that.
Try taking advantage of event bubbling to replace all event handlers with just one.
I agree with porneL. Attach one event handler to the <table> and let bubbling work its magic. Most frameworks provide a way for you to find the element that caused the original event (usually referred to as a "target").
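As a rough sketch of that approach (the table id here is made up; adapt it to your markup):

var table = document.getElementById('myTable'); // hypothetical id for your table
table.onclick = function (e) {
    e = e || window.event;                 // old IE exposes the event globally
    var target = e.target || e.srcElement; // old IE uses srcElement
    // walk up from the clicked node to the containing row
    while (target && target.nodeName !== 'TR') {
        target = target.parentNode;
    }
    if (target) {
        // handle the click for this row here
    }
};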
If you're making lots of elements using document.createElement(), you can add them to a DOM fragment. When you append the fragment to the page, it appends all the child nodes attached to it. This operation is faster than appending each node one-at-a-time. John Resig has a great write-up on DOM document fragments: http://ejohn.org/blog/dom-documentfragments/
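For example, something along these lines (assuming tableBody is a reference to your table's tbody element):

var fragment = document.createDocumentFragment();
for (var i = 0; i < 187; i++) {
    var row = document.createElement('tr');
    var cell = document.createElement('td');
    cell.appendChild(document.createTextNode('Row ' + i));
    row.appendChild(cell);
    fragment.appendChild(row); // no reflow yet: the fragment lives off-DOM
}
tableBody.appendChild(fragment); // one insertion into the page, one reflow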
My way of thinking:
If we want to perform something on a DOM element, we can do it like this:
document.getElementById("someId").DoSomething();
document.getElementById("someId").DoSomethingElse();
In that situation the browser needs to search the entire DOM for the element with that id. Then it forgets the element and has to search for it again to perform DoSomethingElse().
To solve the "forgetting and searching again" problem, we can save our element as a JavaScript object.
var someElement = document.getElementById("someId");
someElement.DoSomething();
someElement.DoSomethingElse();
Going further, we can save an entire group of elements or entire nodes to achieve better performance. One more step and we have the whole DOM saved as a JavaScript object, called the virtual DOM.
Is that the correct way to understand the purpose of the virtual DOM?
Sorry for the noob question, I'm not a front end developer, I'm just curious :)
The main point of the virtual DOM is that, effectively, you're working on a copy of the real DOM. Working with that copy is much faster than working with the actual DOM, because it only holds the things React actually needs, leaving specific browser issues aside.
The main problem with working with the actual DOM is that it's slow. It's faster to work on that kind of copy, make your changes there, and then, once the changes are done, update the actual DOM.
Yes, it sounds a bit crazy, but it is faster to compute the differences between state changes and then change everything in "just one step" than to make those changes directly against the actual DOM.
Additionally, your example uses just a single DOM node; once you're working with changes across whole DOM subtrees, things are not that easy.
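As a toy illustration only (this is nothing like React's actual implementation), the idea is to compare two lightweight descriptions of the UI and touch the real DOM only where something changed:

function patchText(realNode, oldVNode, newVNode) {
    // compare the cheap JavaScript objects first...
    if (oldVNode.text !== newVNode.text) {
        realNode.textContent = newVNode.text; // ...and write to the real DOM only if needed
    }
}

var oldVNode = { text: 'Hello' };
var newVNode = { text: 'Hello world' };
patchText(document.getElementById('someId'), oldVNode, newVNode);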
For an explanation with more detail you can take a look at this article: http://reactkungfu.com/2015/10/the-difference-between-virtual-dom-and-dom/
I have a list with quite a few elements (each of them is a nested div). Each element has a custom onclick handler.
JS updates the list several times per second, which may result in:
adding or removing some elements
changing text in some elements
changing styles in some elements
changing height of some elements
etc.
Most of the time the update makes small changes to the majority of the elements.
To minimize reflows I should remove the list from the DOM, make the changes, and append it back. The problem I have with this approach is that when the user selects some text, the next update will reset the selection (and the next update comes within a second). If the user clicks a button, the click may fail to register if there was an update between mouse_down and mouse_up.
I understand why the selection resets on text that has been changed; that makes sense. But with this approach, any selection in the list will reset.
Is there any better way to do this? How would you implement such list?
This list is fully generated by JS. If I'm removing it from the DOM anyway, is there any benefit to modifying it instead of recreating it from scratch? Creating it anew each time would require less code.
This sounds like two-way data binding; there are a couple of good custom data-binding answers on here: Handy stack link. Alternatively, Backbone.js and Knockout.js have good techniques, among quite a few other frameworks (Angular etc.).
Additionally, if you want to have a go at it yourself (which I highly recommend to get a better understanding), you could use the proposed Object.observe function. There's some handy documentation with examples of how this works over at Mozilla, as well as the trusty HTML5 Rocks, which has a nice, simple tutorial on using the new Object.observe functionality; well worth a read.
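For illustration, the proposed API looked roughly like this (note that Object.observe was later withdrawn from the spec and only ever shipped in some Chrome versions, so treat this purely as a historical sketch; the element id is made up):

var model = { text: 'Hello' };
Object.observe(model, function (changes) {
    changes.forEach(function (change) {
        // re-render the bound element whenever the observed property changes
        document.getElementById('output').textContent = model[change.name];
    });
});
model.text = 'Hello world'; // the observer fires asynchronously after this assignment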
Hope this helps!
This question might be stupid, or basic.
Can someone explain which is the best method of adding DOM elements? We have these two ways of adding DOM elements.
Scenario: Need to add <strong>Hi</strong> inside an existing <div id="theEl"></div>.
By editing the HTML inside them.
document.getElementById("theEl").innerHTML = '<strong>Hi</strong>';
By using document.createElement().
var hi = document.createTextNode("Hi"),
strong = document.createElement("strong");
strong.appendChild(hi);
mydiv = document.getElementById("theEl");
document.body.insertBefore(strong, mydiv);
Questions
What is the best way to do this? One is a single line, the other is about five lines.
What is the performance aspect?
What is the right way or best practise?
Is there any difference between the codes as a whole?
If at all this question is not making sense, please let me know, I will be glad to close this or even remove this. Thanks.
For the close voter, this is not going to be a duplicate of that question. One thing I just noted is that using createElement() preserves the event handlers attached to the element. Even though that's a good point, almost any basic web page has jQuery in it anyway, which provides delegation and similar features that let me keep the event attached to the element even after the HTML changes.
There is no "best" or "best practice". They are two different methods of adding content that have different characteristics. Which one you select depends upon your particular circumstance.
For creating lots and lots of elements, setting a block of HTML all at once has generally been shown to be faster than creating and inserting lots of individual elements. Though if you really care about this aspect of performance, you would need to test your particular circumstance in a tool like jsperf.
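For example, something like this (the numbers are purely illustrative) builds one big string and assigns it in a single operation:

var rows = [];
for (var i = 0; i < 1000; i++) {
    rows.push('<tr><td>Row ' + i + '</td></tr>');
}
document.getElementById('theEl').innerHTML = '<table>' + rows.join('') + '</table>';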
For creating elements with lots of fine control (setting classes from variables, setting content from variables, etc.), it is generally much easier to do this via createElement(), where you have direct access to the properties of each element without having to construct a string.
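For instance, a quick sketch where the class and text come from variables (the values here are made up):

var cls = 'greeting'; // imagine these come from your data
var msg = 'Hi';
var strong = document.createElement('strong');
strong.className = cls;
strong.textContent = msg;
document.getElementById('theEl').appendChild(strong);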
If you really don't know the difference between the two methods and don't see any obvious reason to use one over the other in a particular circumstance, then use the one that's simpler and less code. That's what I do.
In answer to your specific questions:
There is no "best" way. Select the method that works best for your circumstance.
You will need to test the performance of your specific circumstance. Large amounts of HTML have in some cases been shown to be faster to insert by setting one large string with .innerHTML rather than by individually creating and inserting all the objects.
There is no "right way" or "best practice". See answer #1.
There need be no difference in the end result created by the two methods if they are coded to create the same end result.
I actually like a combination of both: createElement for the outer element so you won't be removing any event handlers, and innerHTML for the content of that element, for convenience and performance. For example:
var strong = document.createElement('strong');
strong.innerHTML = 'Hi';
document.getElementById('theEl').appendChild(strong);
Of course, this technique is more useful when the content of the thing you're adding is more complex; then you can use innerHTML normally (with the exception of the outer element) but you're not removing any event listeners.
1. What is the best way to do? One is a single line, another is about five lines.
It depends on context. You probably want to use innerHTML sparingly as a rule of thumb.
2. What is the performance aspect?
DOM manipulation significantly outperforms innerHTML, but browsers seem to keep improving innerHTML performance.
3. What is the right way or best practise?
See #1.
4. Is there any difference between the codes as a whole?
Yes. The innerHTML example will replace the contents of the existing element, while the DOM example will put the new element next to the old one. You probably meant to write mydiv.appendChild(strong), but even that is different: the existing element's child nodes are appended to rather than replaced.
What do you mean by best? For just one DOM operation everything is fine and shows the same performance. But when you need multiple DOM insertions, things go differently.
Background
Every time you insert a DOM node, the browser renders a new image of the page. So if you insert multiple children into a DOM node one at a time, the browser renders the page multiple times. That is the slowest operation you will see.
The solution
So we need to append as many children as possible in one go, using an empty holder node. The built-in way to do this is document.createDocumentFragment():
var holder = document.createDocumentFragment();
// append everything to the holder first
// then append the holder to the main DOM tree in a single operation
The real answer
In the case you describe, I would prefer the shortest solution, because there is no performance penalty for a single DOM operation.
Do you have any experience with the following problem: JavaScript has to run hundreds of performance-intensive function calls which cannot be skipped, causing the browser to feel frozen for a few seconds (e.g. no scrolling and clicking)? Example: imagine 500 calls for getting an element's height and then doing hundreds of DOM modifications, e.g. setting classes etc.
Unfortunately there is no way to avoid the performance-intensive tasks. Web workers might be an approach, but they are not very well supported (IE...). I'm thinking of a timeout- or callback-based step-by-step rendering that gives the browser time to do something in between. Do you have any experience you can share on this?
Best regards
Take a look at this topic; it is related to your question:
How to improve the performance of your JavaScript in your page?
If you're doing that much DOM manipulation, you should probably clone the elements in question or the DOM itself, do the changes on a cached version, and then replace the whole thing in one go or in larger sections, not one element at a time.
What takes time isn't so much the calculations and functions etc., but the DOM manipulation itself, and doing that only once, or a couple of times in sections, will greatly improve the speed of what you're doing.
As far as I know web workers aren't really for DOM manipulation, and I don't think there will be much of an advantage in using them, as the problem probably is the fact that you are changing a shitload of elements one by one instead of replacing them all in the DOM in one batch.
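A minimal sketch of that "work on a detached copy" idea (the container id is made up; note that handlers attached directly to the original nodes won't survive the swap, which is another argument for event delegation):

var list = document.getElementById('list'); // hypothetical container
var copy = list.cloneNode(true);            // deep clone, detached from the page
// ...make all your changes on `copy` here; nothing reflows while it is off-DOM...
list.parentNode.replaceChild(copy, list);   // swap it back in with a single update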
Here is what I can recommend in this case:
Checking the code again. Try to apply some standard optimisations as suggested, e.g. reducing lookups and making DOM modifications offline (e.g. with document.createDocumentFragment()). Working with DOM fragments only helps in a limited way here, though: retrieving element heights and doing complex formatting can't be done offline, because detached nodes have no layout.
If 1. does not solve the problem, create a rendering solution that runs on demand, e.g. triggered by a scroll event. Or: render step by step with timeouts, giving the browser time to do something in between, e.g. handling a button click or scrolling.
Short example of the step-by-step rendering in 2.:
var elt = $(...); // the starting element (selector elided)
function timeConsumingRendering() {
    // some rendering here related to the element "elt"
    elt = elt.next();
    if (elt.length) {
        // schedule the next step asynchronously so the browser can handle
        // clicks and scrolling in between
        window.setTimeout(timeConsumingRendering, 0);
    }
}
// start
timeConsumingRendering();
I'm currently debugging an Ajax chat that just endlessly fills the page with DOM elements. If you have a chat going for, say, 3 hours, you will end up with God knows how many thousands of DOM nodes.
What are the problems related to extreme DOM Usage?
Is it possible that the UI becomes totally unresponsive (especially in Internet Explorer)?
(And related to this question is of course the solution, if there are any solutions other than manual garbage collection and removal of DOM nodes.)
Most modern browsers should be able to deal pretty well with huge DOM trees. And "most" usually doesn't include IE.
So yes, your browser can become unresponsive, either because it needs too much RAM (which leads to swapping) or because its renderer is simply overwhelmed.
The standard solution is to drop elements, say after the page has 10'000 lines worth of chat. Even 100'000 lines shouldn't be a big problem. But I'd start to feel uneasy for numbers much larger than that (say millions of lines).
[EDIT] Another problem is memory leaks. Even though JS uses garbage collection, if you make a mistake in your code and keep references to deleted DOM elements in global variables (or in objects referenced from a global variable), you can run out of memory even though the page itself contains only a few thousand elements.
Just having lots of DOM nodes shouldn't be much of an issue (unless the client is short on RAM); however, manipulating lots of DOM nodes will be pretty slow. For example, looping through a group of elements and changing the background color of each is fine if you're doing this to 100 elements, but may take a while if you're doing it on 100,000. Also, some old browsers have problems when working with a huge DOM tree--for example, scrolling through a table with hundreds of thousands of rows may be unacceptably slow.
A good solution to this is to buffer the view. Basically, you only show the elements that are visible on the screen at any given moment, and when the user scrolls, you remove the elements that get hidden, and show the ones that get revealed. This way, the number of DOM nodes in the tree is relatively constant, but you don't really lose anything.
Another similar solution to this is to implement a cap on the number of messages that are shown at any given time. This way, any messages past, say, 100 get removed, and to see them you need to click a button or link that shows more. This is sort of what Facebook does with their profiles, if you need a reference.
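A minimal sketch of such a cap (the names and the limit are illustrative):

var MAX_MESSAGES = 100;
function appendMessage(chatLog, messageNode) {
    chatLog.appendChild(messageNode);
    // drop the oldest messages once we exceed the cap
    while (chatLog.children.length > MAX_MESSAGES) {
        chatLog.removeChild(chatLog.children[0]);
    }
}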
Problems with extreme DOM usage boil down to performance. DOM scripting is very expensive, so constantly accessing and manipulating the DOM can result in poor performance (and a poor user experience), particularly when the number of elements becomes very large.
Consider HTML collections such as document.getElementsByTagName('div'), for example. This is a query against the document, and it will be re-executed every time up-to-date information is required, such as the collection's length. This can lead to inefficiencies. The worst cases occur when accessing and manipulating collections inside loops.
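For example, the first loop below re-evaluates the live collection's length on every iteration, while the second caches it once (a small but common optimisation):

var divs = document.getElementsByTagName('div');

// slower: `divs.length` is re-evaluated against the live collection on each pass
for (var i = 0; i < divs.length; i++) {
    divs[i].className = 'boxed';
}

// faster: the length is read once and reused
for (var j = 0, len = divs.length; j < len; j++) {
    divs[j].className = 'boxed';
}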
There are many considerations and examples, but like anything it depends on the application.