I have an HTML page that uses AJAX to retrieve messages from a server. I'm appending these messages to a div as they are retrieved by setting its innerHTML property.
This works fine while the amount of text is small, but as it grows it causes Firefox to use all available CPU and the messages slow down to a crawl. I can't use a textbox, because I want some of the text to be highlighted in colour or using other HTML formatting. Is there any faster way to do this that wouldn't cause the browser to lock up?
I've tried using jQuery as well, but from what I've read setting .innerHTML is faster than its .html() function and that seems to be the case in my own experience, too.
Edit: Perceived performance is not an issue - messages are already being written as they are returned (using Comet). The issue is that the browser starts locking up. The amount of content isn't that huge - 400-500 lines seems to do it. There are no divs within that div. The entire thing is inside a table, but hopefully that shouldn't matter.
You specifically said that you were appending, meaning that you are attaching it to the same parent. Are you doing something like:
myElem.innerHTML += newMessage;
or
myElem.innerHTML = myElem.innerHTML + newMessage;
because this is extremely inefficient (see this benchmark: http://jsben.ch/#/Nly0s). It forces the browser to first do a very, very large string concat (which is never good), but then, even worse, to re-parse, insert and render everything you had previously appended. Much better would be to create a new div element, use innerHTML to put the message into it, and then call the DOM method appendChild to insert the newly created div with the message. Then the browser will only have to insert and render the new message.
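A minimal sketch of that approach (the element id and the message variable are placeholders):

var container = document.getElementById("messages"); // placeholder: the element you append to

function appendMessage(messageHtml) {
    // build the new message in a detached div
    var div = document.createElement("div");
    div.innerHTML = messageHtml;

    // only the new div is parsed, inserted and rendered; the existing content is untouched
    container.appendChild(div);
}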
Can you break up your text and append it in chunks? Wrap portions of the text in div tags, then add those smaller divs to the page one at a time.
While this won't speed up the overall process, the perceived performance will likely be better, as the user sees the first elements more quickly while the rest loads later, presumably off the visible page.
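A rough sketch of that, assuming the incoming text has already been split into an array of smaller HTML chunks (the names here are placeholders):

var container = document.getElementById("output"); // placeholder id

function appendInChunks(chunks, index) {
    if (index >= chunks.length) return; // all chunks added

    var div = document.createElement("div");
    div.innerHTML = chunks[index];
    container.appendChild(div);

    // let the browser render and respond to input before the next chunk
    setTimeout(function () {
        appendInChunks(chunks, index + 1);
    }, 0);
}

appendInChunks(["<p>first part</p>", "<p>second part</p>"], 0); // example usage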
Additionally, you should probably reconsider how much data you are sending down and embedding into the page. If the browser is slow, chances are the amount of data is huge - does that really make sense to the user? Or would a paging interface (or another incremental load mechanism) make more sense?
You can use insertAdjacentHTML("beforeend", "<HTML to append>") for this.
Here is an article about its performance, with benchmarks showing insertAdjacentHTML is ~150x faster than the .innerHTML += operation. This might have been optimised by browsers in the years since the article was written, though.
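In the message-appending case it would look something like this (the element id is a placeholder):

var container = document.getElementById("messages"); // placeholder id

// parses and inserts only the new fragment; the existing children are left alone
container.insertAdjacentHTML("beforeend", "<div class='message'>new message</div>");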
Going along with Rick's answer, why not pass the info back as JSON? Then you can go through it using setTimeout, display perhaps 2-5 messages, then call setTimeout again for the next batch, until the JSON array has been processed.
You should still use innerHTML, so your JavaScript can create the markup dynamically and add it to the div, but I would only do that for the first batch, to get everything up quickly.
After that I would go with cloning the first batch, changing the innerHTML for each of the other messages (along with other info), and adding that to the DOM tree.
Cloning will be faster than creating new elements, and you won't have problems if anything else changes the DOM tree while you are processing.
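A rough sketch of that approach under a few assumptions: the messages arrive as a parsed JSON array of HTML strings (called jsonMessages here), and the target div id and batch size are placeholders.

var container = document.getElementById("messages"); // placeholder id

function renderBatch(messages, start) {
    var batchSize = 5;
    var end = Math.min(start + batchSize, messages.length);

    for (var i = start; i < end; i++) {
        var div;
        if (container.firstChild) {
            // clone an existing message element rather than building a new one from scratch
            div = container.firstChild.cloneNode(false);
        } else {
            div = document.createElement("div");
        }
        div.innerHTML = messages[i];
        container.appendChild(div);
    }

    if (end < messages.length) {
        // give the browser time to render and handle input between batches
        setTimeout(function () { renderBatch(messages, end); }, 0);
    }
}

renderBatch(jsonMessages, 0); // jsonMessages: the parsed JSON array (assumption)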
"The entire thing is inside a table, but hopefully that shouldn't matter."
Actually it matters a lot. Due to the nature of tables, a cell can often not be rendered until the width and height of all cells in its column and row are calculated. table-layout: fixed overcomes this at the cost of locking cell width and height based on the first row.
In short, it may be best not to wrap the content in a table, or, if the data really is tabular, to try fixed layout rendering.
http://www.w3schools.com/Css/pr_tab_table-layout.asp
I wrote something similar, and it was challenging. First, you need to isolate the problem.
Is it the rendering code? Try commenting out all the rendering and see whether the AJAX by itself still slows Firefox down. If the rendering turns out to be the problem, try different approaches to it, as outlined above.
Is it the network? Try commenting out the AJAX and just set innerHTML periodically with test data. If the network is the problem, you may need to experiment with different timing settings.
I would like to know the maximum amount of data the Angular framework can handle. Say I am displaying a chart using Angular and a charting framework like chartjs. I'd like to know how much data the browser can display properly, at what point it becomes slow, and at what point it crashes.
Your question has no simple answer, but I will try to flatten it and give a simple answer, or at least simple things to consider...
Angular (at runtime), like many other frameworks, is simply JavaScript, so let us reduce the question to "the limitations of JavaScript and browsers with regard to the amount of data loaded".
JavaScript has no hard upper limit on the memory or storage it can handle; I've seen JavaScript applications that require more than 15GB of RAM, and they performed well too.
So assume data size itself is not an issue (unless your application is poorly implemented, leaking memory or just not very efficient, of course).
The main challenge, as I see it, is displaying and manipulating the information without causing unnecessary delay or unresponsiveness.
Displaying the information - let's say you have a list (or a table) containing 1,000,000 possible gifts which you then want to display for the user to select.
Adding the list items to the document one by one is tempting, but it forces the browser to repaint after each addition (causing a delay, or full unresponsiveness, until it is finished). A better way is to add the elements to some detached DOM element (call it N) that is kept only in memory, append all the elements corresponding to the list items to N (still just an in-memory operation), and finally add N, containing the entire list, to the document - this will be a much better solution for displaying the large amount of data.
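A minimal sketch of that approach, with placeholder names for the data and the target element:

var items = [];                        // placeholder: the list data (e.g. the gift names)
var n = document.createElement("ul");  // the detached, in-memory container "N"

for (var i = 0; i < items.length; i++) {
    var li = document.createElement("li");
    li.textContent = items[i];
    n.appendChild(li);                 // in-memory only; no repaint happens here
}

// a single insertion into the live document, so the browser lays out and paints once
document.getElementById("gift-list").appendChild(n); // placeholder id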
Manipulating the information - displaying is indeed not enough; you will also want to move, drag, sort and filter the data being displayed. And as mentioned before, it is a bad idea to remove many elements directly from the live DOM one by one. You should instead detach the container from the document's DOM into memory, manipulate the data in it, and then add the container right back to the document. Angular does this kind of magic for you.
(Toggling the 'display: none/block' CSS attribute of many elements has a similar blocking effect, as I recall.)
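A sketch of that detach-manipulate-reattach pattern (the element id and the sort criterion are placeholders):

var list = document.getElementById("gift-list"); // placeholder id
var parent = list.parentNode;
var next = list.nextSibling;                     // remember where to put it back

// take the container out of the live document first
parent.removeChild(list);

// manipulate it freely in memory, e.g. sort its children alphabetically
var children = Array.prototype.slice.call(list.children);
children.sort(function (a, b) {
    return a.textContent.localeCompare(b.textContent);
});
for (var i = 0; i < children.length; i++) {
    list.appendChild(children[i]); // appendChild moves each node to its new position
}

// one reinsertion, one repaint
parent.insertBefore(list, next);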
A good practice is implementing an application/page that shows only the amount of data a human can process at a single glance. The rest should live in the application's data layer, in memory, and be loaded into the display only when there is an appropriate need or request.
To conclude, you can deal with huge amounts of data as long as you provide a mechanism that efficiently filters the displayed information.
I hope it helps...
for further reading:
Slow and fast ways of adding elements to DOM
A question emphasizes the lack of memory limit used by JS
CSS display attribute performance
A good discussion about the reasons for slow DOM
About using HTML5 correctly - old but still true
Once the DOM creation procedure is understood - it is much easier to display data without affecting performance / user experience
I have a DOM element (let's call it #mywriting) which contains a bigger HTML subtree (a long sequence of paragraph elements). I have to update the content of #mywriting regularly (but only small things will change, the majority of the content remains unchanged).
I wonder what is the smartest way to do this. I see two options:
In my application code I find out which child elements of #mywriting have been changed, and I update only the changed child elements.
I just update the innerHTML attribute of #mywriting with the new content.
Is it worth developing the logic of approach one to find the changed child nodes, or will the browser perform this kind of optimization when I apply approach two?
No, the browser doesn't do such optimisation. When you reassign innerHTML, it will throw away the old contents, parse the HTML, and place the new elements in the DOM.
Doing a diff to only replace (or rather, update) the parts that need an update can be worth a lot, and is done with great success in rendering libraries that employ a so-called virtual DOM.
However, they're doing that diff on an element data structure, not an HTML string. Parsing that to find out which elements changed is going to be horribly inefficient. Don't use HTML strings. (If you're already sold on them, you might as well just use innerHTML).
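A minimal sketch of approach one, assuming the paragraphs are tracked as an array of strings (not one HTML string), so the changed positions can be found by comparison:

var container = document.getElementById("mywriting");
var current = [];   // the paragraph texts currently rendered, in order

function update(paragraphs) {
    for (var i = 0; i < paragraphs.length; i++) {
        if (i >= container.children.length) {
            // new paragraph appended at the end
            var p = document.createElement("p");
            p.textContent = paragraphs[i];
            container.appendChild(p);
        } else if (paragraphs[i] !== current[i]) {
            // only touch the children whose content actually changed
            container.children[i].textContent = paragraphs[i];
        }
    }
    // drop any leftover paragraphs if the new content is shorter
    while (container.children.length > paragraphs.length) {
        container.removeChild(container.lastChild);
    }
    current = paragraphs.slice();
}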
Without considering the overhead of calculating which child elements have to be updated, option 1 seems to be much faster (at least in Chrome), according to this simple benchmark:
https://jsbench.github.io/#6d174b84a69b037c059b6a234bb5bcd0
I have a use case where there are thousands of items on a list shown to the user. They are loaded a small batch at a time; I see the network traffic going in and out, and I see data getting loaded. I see the DOM getting bigger, but the list itself in the UI stops updating (Chrome).
When I examine it, I see thousands of items in the code; when I select the items through the console and count them, I get the proper number. But in the page itself, I don't see these items displayed. The list uses drag-and-drop to put items from it into another list (and to load additional data about them).
I'm not using jquery.datatables at the moment, but I've been meaning to migrate to it for a long time. I can't find a good example, though; everything I see uses pagination to split the data, but what if that is not an option?
How can I pinpoint what it is that is preventing the items from display? The number of entries will vary between 500 and 20 000.
Never mind, everything works as intended, duh. I was stupid and missed something very obvious: things had "display: none" for a very good reason which I had totally forgotten about (it has to do with the core logic of the application). Next time hit me with a stick so I remember to pay more attention.
I'm not sure what you mean by saying the 'DOM is getting bigger' but you 'don't see items get displayed'.
Typically JS has a main thread which handles functions/callbacks as well as view refreshes, so if your operation is blocking, the view will not refresh.
As for pagination not being an option, you can consider a DOM lazy-loading mechanism where you only put what should be in the current viewport into the DOM. As the user scrolls, you calculate the scroll height dynamically and add/remove items to/from the DOM. One thing to remember is that you typically need a fixed height for your rows so that you can do the calculation. This lazy-loading approach is a common way of solving this type of problem and is widely used by frameworks like GXT, angular-grid, etc.
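A rough sketch of that windowing idea, assuming a fixed row height; the element ids and the items array are placeholders:

var ROW_HEIGHT = 30;                                 // fixed height per row in px (assumption)
var viewport = document.getElementById("viewport");  // scrollable element, placeholder id
var content = document.getElementById("content");    // inner element that holds the rows
var items = [];                                      // placeholder: the full data set

content.style.position = "relative";
content.style.height = (items.length * ROW_HEIGHT) + "px"; // keeps the scrollbar proportions correct

function renderVisibleRows() {
    var first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
    var count = Math.ceil(viewport.clientHeight / ROW_HEIGHT) + 1;
    var last = Math.min(first + count, items.length);

    var html = "";
    for (var i = first; i < last; i++) {
        // each row sits at its logical offset; only the visible rows are ever in the DOM
        html += "<div style='position:absolute; top:" + (i * ROW_HEIGHT) + "px'>" + items[i] + "</div>";
    }
    content.innerHTML = html;
}

viewport.addEventListener("scroll", renderVisibleRows);
renderVisibleRows();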
Do you have any experience with the following problem: JavaScript has to run hundreds of performance-intensive function calls which cannot be skipped, and they make the browser feel frozen for a few seconds (e.g. no scrolling or clicking)? Example: imagine 500 calls for getting an element's height and then hundreds of DOM modifications, e.g. setting classes etc.
Unfortunately there is no way to avoid the performance-intensive tasks. Web workers might be an approach, but they are not very well supported (IE...). I'm thinking of a timeout- or callback-based step-by-step rendering that gives the browser time to do something in between. Do you have any experience you can share on this?
Best regards
Take a look at this topic; it is related to your question:
How to improve the performance of your java script in your page?
If you're doing that much DOM manipulation, you should probably clone the elements in question (or the relevant part of the DOM), do the changes on the cached version, and then replace the whole thing in one go or in larger sections, not one element at a time.
What takes time isn't so much the calculations and functions etc. but the DOM manipulation itself, and doing that only once, or a couple of times in sections, will greatly improve the speed of what you're doing.
As far as I know web workers aren't really for DOM manipulation, and I don't think there will be much of an advantage in using them, as the problem probably is the fact that you are changing a huge number of elements one by one instead of replacing them all in the DOM in one batch.
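A sketch of the clone-modify-replace idea (the container id and the class names are placeholders):

var original = document.getElementById("container"); // placeholder id

// work on a deep clone, so none of these changes touch the live page
var copy = original.cloneNode(true);
var rows = copy.querySelectorAll(".row");             // placeholder selector
for (var i = 0; i < rows.length; i++) {
    rows[i].className += " highlighted";              // example modification
}

// swap the modified clone in as a single DOM operation
original.parentNode.replaceChild(copy, original);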
Here is what I can recommend in this case:
1. Check the code again. Try to apply some standard optimisations as suggested, e.g. reducing lookups and making DOM modifications offline (e.g. with document.createDocumentFragment()...). Working with DOM fragments only helps in a limited way, though: retrieving element heights and doing complex formatting won't work well on detached elements, because they have no layout yet.
2. If 1. does not solve the problem, create a rendering solution that runs on demand, e.g. triggered by a scroll event. Or: render step by step with timeouts to give the browser time to do something in between, e.g. handle a button click or scrolling.
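A short sketch of the offline-modification part of point 1, using a document fragment (the target id, class and content are placeholders):

// build everything inside a fragment first; nothing here touches the live page
var fragment = document.createDocumentFragment();
for (var i = 0; i < 500; i++) {
    var div = document.createElement("div");
    div.className = "item";             // placeholder class
    div.textContent = "Item " + i;      // placeholder content
    fragment.appendChild(div);
}

// one append into the document, one reflow
document.getElementById("target").appendChild(fragment); // placeholder id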
Short example for step by step rendering in 2.:
var elt = $(...); // start with the first element to render

function timeConsumingRendering() {
    if (!elt.length) {
        return; // stop once there is no next element left
    }

    // some rendering here related to the element "elt"

    elt = elt.next();

    // yield back to the browser before the next step, so clicks and scrolling get handled in between
    window.setTimeout(timeConsumingRendering, 0);
}

// start
timeConsumingRendering();
Recently we redesigned one of our pages, and suddenly the page size has increased from 1MB to 1.98MB.
I compared the number of DOM elements and it has increased from 1600 to 2300. I found the number of elements with the command below:
document.getElementsByTagName('*').length
We did a load test and found the load time also increased, from 1.1 to 2 seconds. Is this the reason for all the problems?
I think the above line won't count any inline CSS and JS, right, as they are not DOM elements?
Can you please suggest?
Without knowing exactly what you redesigned, it's impossible to know what change caused the increase. But even a 1MB page is pretty large. JavaScript (and particularly jQuery) can change the number of DOM objects... consider this:
$('p').append('<span>Blah</span> <span>blah</span> <span>blah</span>');
That will add 3 DOM objects for each p tag on the page (which could be a lot!) and yet it adds only 71 bytes to your page. jQuery can similarly remove DOM objects. So I don't think the number of DOM objects is really much of a consideration.
The JavaScript that runs can manipulate the DOM and create new nodes, which would affect your count. However, it shouldn't make the page load any slower, as it's rendered on the client side.
I think you need to include more information if you expect to get a better answer.
Also, you should look into browser plugins (for Firefox) like YSlow or Firebug (Net tab) that show you all the files being loaded and how long they take to load.
Any time you have more information crossing the wire, it will take longer. Therefore, with more DOM elements in the page, loading will be slower. I hope this answers your question, because I'm not really sure what you are actually asking.