At various points in my 1-page web app I want to do some fairly heavy DOM manipulation, moving various divs around (which each have lots of sub-elements). I don't want the browser trying to repeatedly redraw the page mid-manipulation. Is there a way to say to the browser "pause redrawing until I give the go ahead"?
requestAnimationFrame() seems like one candidate, but is that suitable for DOM rearranging, or just for animation? Are there any other things I could do?
Thanks
You can try using documentFragment.
Create the documentFragment.
Write everything into a documentFragment first.
When done, replace DOM content with documentFragment.
Then the manipulation does not take place on-the-fly, you use the documentFragment as a sort of buffer.
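A minimal sketch of that pattern; the container id and the loop contents are just placeholders:

var fragment = document.createDocumentFragment();

for (var i = 0; i < 100; i++) {
  var div = document.createElement("div");
  div.textContent = "Item " + i;
  fragment.appendChild(div);       // builds up off-DOM, no reflow yet
}

var container = document.getElementById("container");
container.innerHTML = "";          // clear the old content
container.appendChild(fragment);   // a single insertion, one reflow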
My way of thinking:
If we want to perform something on dom element we can do it by:
document.getElementById("someId").DoSomething();
document.getElementById("someId").DoSomethingElse();
In that situation the browser needs to search the entire DOM for the someId element. Then it forgets the element and searches again to perform DoSomethingElse().
To solve the "forgetting and searching again" problem we can save our element as a JavaScript object.
var someElement = document.getElementById("someId");
someElement.DoSomething();
someElement.DoSomethingElse();
Going further, we can save an entire group of elements or entire nodes to achieve better performance. One more step and we have the whole DOM saved as a JavaScript object, called the virtual DOM.
Is that the correct way to understand the purpose of the virtual DOM?
Sorry for the noob questions, I'm not a front-end developer, I'm just curious :)
The main point of the virtual DOM is that, effectively, you're working on a copy of the real DOM. But working with that copy is much faster than working with the actual DOM, because it only has the things that React actually needs, leaving specific browser issues aside.
The main problem with working with the actual DOM is that it's slow. It's faster to work with that kind of copy, do your work there, and once the changes have been made, update the actual DOM.
Yes, it sounds a bit crazy, but it is faster to compute the differences between state changes and then change everything in "just one step" than to make those changes directly on the actual DOM.
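To make the "diff, then patch in one step" idea concrete, here is a toy sketch; it is not React's algorithm, just the general shape, and the ids and values are invented for illustration:

// previous and next are plain objects mapping element ids to text content
function patch(previous, next) {
  Object.keys(next).forEach(function (id) {
    if (previous[id] !== next[id]) {
      // only touch the real DOM for values that actually changed
      document.getElementById(id).textContent = next[id];
    }
  });
}

var prev = { title: "Hello", count: "0" };
var curr = { title: "Hello", count: "1" };
patch(prev, curr); // only the "count" element is written to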
Additionally, your example uses just a single DOM node; when you're working on changes to whole DOM subtrees, things are not that easy.
For a more detailed explanation you can take a look at this article: http://reactkungfu.com/2015/10/the-difference-between-virtual-dom-and-dom/
I have several charts that I redraw every time I zoom/pan using d3 brushes.
But when I have tons of rendered elements, redrawing starts to get a little slow.
Instead of redrawing all elements every time I move my brush, I was wondering whether or not it's feasible to transform (translate) the already-drawn elements, and only redraw whenever I need to update my data.
I think it would improve my visualization performance a lot when panning right/left, wouldn't it?
Any insights?
In general, the less you touch the DOM the better your performance will be. The details are browser and platform specific, but in general this is the pecking order of performance at a very high level (ordered from most expensive to least):
Creating and removing DOM elements.
Modifying properties of existing DOM elements.
In-memory JavaScript (that is, not involving the DOM at all, e.g. array iteration).
So if you can get the result you want by simply modifying a targeted subset of existing elements with a transform attribute, I would guess you will be much better off.
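For example, if all the points are drawn inside a single group element, panning can be reduced to translating that group. A rough sketch; the svg and pointsGroup selections and the onPan hook are assumptions about your chart code, not part of d3's API:

// during the initial draw, append all points into one group
var pointsGroup = svg.append("g").attr("class", "points");
// ... circles, paths, etc. are appended to pointsGroup here ...

// on pan, translate the single group instead of rebinding data and redrawing
function onPan(offsetX) {
  pointsGroup.attr("transform", "translate(" + offsetX + ",0)");
}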
Of course, it's impossible to say anything with certainty without seeing the actual code and use case.
Do you have any experience with the following problem: JavaScript has to run hundreds of performance-intensive function calls which cannot be skipped, causing the browser to feel frozen for a few seconds (e.g. no scrolling or clicking)? Example: imagine 500 calls for getting an element's height and then doing hundreds of DOM modifications, e.g. setting classes etc.
Unfortunately there is no way to avoid the performance-intensive tasks. Web workers might be an approach, but they are not very well supported (IE...). I'm thinking of a timeout- or callback-based step-by-step rendering that gives the browser time to do something in between. Do you have any experience you can share on this?
Best regards
Take a look at this topic; it is related to your question:
How to improve the performance of your java script in your page?
If you're doing that much DOM manipulation, you should probably clone the elements in question (or the DOM itself), make the changes on a cached version, and then replace the whole thing in one go or in larger sections, not one element at a time.
What takes time isn't so much the calculations and functions etc. but the DOM manipulation itself, and doing that only once, or a couple of times in sections, will greatly improve the speed of what you're doing.
As far as I know, web workers aren't really for DOM manipulation, and I don't think there will be much of an advantage in using them, as the problem probably is the fact that you are changing a shitload of elements one by one instead of replacing them all in the DOM in one batch.
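A minimal sketch of the clone-and-swap approach; the container id and the class being added are just examples:

var container = document.getElementById("list");   // example id
var clone = container.cloneNode(true);              // deep clone, detached from the document

// do all the work against the clone; nothing here triggers a reflow
var items = clone.getElementsByTagName("li");
for (var i = 0; i < items.length; i++) {
  items[i].className += " processed";
}

// one DOM replacement instead of hundreds of individual updates
container.parentNode.replaceChild(clone, container);

Note that cloneNode does not copy event listeners added with addEventListener, so handlers may need to be re-attached afterwards or delegated to an ancestor that stays in the document.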
Here is what I can recommend in this case:
1. Check the code again. Try to apply some standard optimisations as suggested, e.g. reducing lookups and making DOM modifications offline (e.g. with document.createDocumentFragment()). Working with DOM fragments only helps in a limited way, though: retrieving element heights and doing complex formatting still won't be fast enough.
2. If 1. does not solve the problem, create a rendering solution that runs on demand, e.g. triggered by a scroll event. Or: render step by step with timeouts to give the browser time to do something in between, e.g. handle a button click or scrolling.
Short example of step-by-step rendering for 2.:
var elt = $(...);
function timeConsumingRendering() {
    // stop once there are no more elements to process
    if (!elt.length) { return; }
    // some rendering here related to the element "elt"
    elt = elt.next();
    // yield to the browser before rendering the next element
    window.setTimeout(timeConsumingRendering, 0);
}
// start
timeConsumingRendering();
I'm creating a new game engine for the web by the name of Engine1. I've currently produced a couple prototypes. So far I've been able to:
Map the transparent pixels of sprites using canvas.
Bind events to the opaque pixels of sprites.
Develop a game runtime with a set fps.
Animate sprites at variable frame timing.
Animate element movement, both
frame by frame
and with frame-based motion tweening
I'm happy with my progress, but I'm uncomfortable advancing further without consulting an expert in DOM performance.
Currently, when an element is created, it's appended to a DOM fragment I call the "Shadow DOM". Every frame, this "Shadow DOM"'s HTML is copied and inserted into the body of the page (or the current viewport).
I've set it up this way because I can add everything to the page in one reflow of the browser.
My concern is that the performance gained will be offset by the need to reflow the contents of the browser, even if only parts of the page are changed.
Also, event binding gets much more complicated.
Any thoughts?
Should I use a "Shadow DOM"?
Is there a better way to render a large number of elements?
Is there a way to only copy differences from the "Shadow DOM" to the browser body?
Replacing large chunks of the DOM may be expensive. In general, the DOM is where bottlenecks occur. It would be better to keep track of which parts of the DOM you are modifying and update only those. You can either do that in a separate data structure that you transform into DOM when updating, or use a shadow DOM like you said. If the changes are individually large, then it may be a good idea to use a shadow DOM. If they are small (such as just updating text values), then it would make more sense to use a separate type of data structure.
In either case you need a third object keeping track of changes.
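A minimal sketch of what that third, change-tracking object could look like; the names and the textContent-only updates are illustrative, not taken from any particular library:

var changeTracker = {
  pending: {},                  // element id -> new text value
  set: function (id, value) {
    this.pending[id] = value;   // record the change, don't touch the DOM yet
  },
  flush: function () {
    for (var id in this.pending) {
      document.getElementById(id).textContent = this.pending[id];
    }
    this.pending = {};          // reset for the next frame
  }
};

changeTracker.set("score", "120");
changeTracker.set("lives", "3");
changeTracker.flush();          // one batched DOM update, e.g. once per frame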
I wrote Cactus Templates a long time ago. You use it to bind a DOM structure together with a domain object, letting updates propagate from either side to the other. It automatically attaches events to the locations specified (key-value paths in the domain and HTML class names in the DOM). It may or may not be exactly what you're looking for, but perhaps you can get some ideas from it.
I am developing a site which creates many table rows dynamically. The total number of rows right now is 187. Everything works fine when creating the rows, but in IE, when I leave the page, there is a large amount of lag. I do not know if this is somehow related to the heavy DOM manipulation I am doing on the page. I do not create any function closures when building the dynamic content's event handlers, so I do not believe this problem is related to memory leaks. Any insight is much appreciated.
Are you creating the element nodes by hand, or using innerHTML? Although I'm not sure, my suspicion is that IE has its own memory leaks related to HTML nodes.
I made a demo page that adds 187 rows to a table via jQuery. I believe jQuery.append() uses a clever little trick to turn a string into a set of nodes. It creates a div and sets the innerHTML of that div to your string, and then clones all the child nodes of that div into the node you specify before finally deleting the div it created.
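Roughly, the trick looks like this; a simplified sketch, not jQuery's actual source (note that table fragments such as "<tr>" need extra wrapping before they can be parsed this way, which jQuery handles for you):

function htmlToNodes(html) {
  var div = document.createElement("div");
  div.innerHTML = html;                    // let the browser's parser build the nodes
  var fragment = document.createDocumentFragment();
  while (div.firstChild) {
    fragment.appendChild(div.firstChild);  // move the children out of the helper div
  }
  return fragment;                         // the helper div is simply discarded
}

document.body.appendChild(htmlToNodes("<p>Hello</p><p>World</p>"));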
http://www.andrewpeace.com/stackoverflow/rows/rows.html
I'm not getting any lag in IE8, but maybe it will lag in the version you're using. I'd love it if you'd let me know! Maybe I can help some more.
Peace
YUI (and probably some other popular JavaScript libraries) provides automatic listener cleanup, so I highly recommend using YUI or another library with this feature to minimize problems with IE. However, it sounds like you might be experiencing plain slowness rather than any kind of memory leak issue; you are attaching event handlers to a whole bunch of elements. IE6 is known to be less than optimized, so it might just be taking forever to clean everything up.
apeace also has a good point: innerHTML can get you into trouble and set you up with DOM weirdness. It sounds like jQuery has a fix for that.
Try taking advantage of event bubbling to replace all event handlers with just one.
I agree with porneL. Attach one event handler to the <table> and let bubbling work its magic. Most frameworks provide a way for you to find the element that caused the original event (usually referred to as a "target").
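A minimal plain-DOM sketch of that delegation pattern; the table id is just an example:

var table = document.getElementById("myTable");    // example id
table.onclick = function (event) {
  event = event || window.event;                   // old IE puts the event on window
  var target = event.target || event.srcElement;   // srcElement for old IE
  // walk up from whatever was clicked to the containing row
  while (target && target.nodeName !== "TR") {
    target = target.parentNode;
  }
  if (target) {
    // handle the click for this single row
    console.log("Row clicked: " + target.rowIndex);
  }
};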
If you're making lots of elements using document.createElement(), you can add them to a DOM fragment. When you append the fragment to the page, it appends all the child nodes attached to it. This operation is faster than appending each node one-at-a-time. John Resig has a great write-up on DOM document fragments: http://ejohn.org/blog/dom-documentfragments/
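A short sketch of that approach; the row count and the tbody id are illustrative:

var fragment = document.createDocumentFragment();

for (var i = 0; i < 187; i++) {
  var row = document.createElement("tr");
  var cell = document.createElement("td");
  cell.appendChild(document.createTextNode("Row " + i));
  row.appendChild(cell);
  fragment.appendChild(row);      // rows accumulate off-DOM
}

// one append inserts all 187 rows at once
document.getElementById("myTableBody").appendChild(fragment);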