I am porting quite a large piece of software to an ExtJS grid. Lots of data (and I mean lots of data) is loaded on demand into spans that are placed inside the grid's cells.
Imagine grid cells having <span id="foo_bar"></span> as content, with special ajax handlers polling the backend for updated information; once it is available, the spans are filled with it.
Now, if I collapse some part of the grid and then re-expand it, I lose all the automatically filled cell content and am left with empty spans (which is what I started from in the first place).
I know the correct way is to set up a store and push all data into it. But as I mentioned above: I am porting quite a large piece of legacy software to ExtJS, and I do not really have much choice here.
Is there a way to automagically push grid cell values to the store?
Update:
A grid is loaded with, say, 2000 cells (this can vary tremendously). Every cell contains various grades of HTML; mostly this is a single <span>, but it can be pretty much anything (including several spans or divs in one cell). In the background there is a Comet process pushing new data to the HTML page almost in real time. This data is populated into the corresponding spans and divs based on their IDs, their class, or both.
What I want to achieve is that either:
a) the model for the grid is automagically updated with the new HTML content of the cells (how can I achieve this?), or
b) when collapsing/expanding the tree's nodes, the model data is NOT reloaded afresh.
Is either a or b possible? If so, how?
Just update your store when the 'special ajax handlers' successfully return values, instead of manually messing with the DOM.
So in the success callback, do something like loadData() on the store.
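A minimal sketch of that idea, assuming an ExtJS-style store behind the grid; the endpoint and field names here are placeholders:

// update the store instead of the DOM; the grid re-renders the cells itself
Ext.Ajax.request({
    url: '/poll-for-updates',  // hypothetical endpoint
    success: function (response) {
        var updates = Ext.decode(response.responseText);
        Ext.each(updates, function (u) {
            // u.id / u.value are assumed field names in the payload
            var record = grid.getStore().getById(u.id);
            if (record) {
                record.set('value', u.value);
            }
        });
    }
});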
Edit:
Seems like you are working with very bad legacy code :) if adding a simple line of code to the ajax handler is a tremendous effort. You can, however, attach DOM listeners (the DOM mutation events), but it's very bad practice.
Edit:
Here is an example of how you could listen to DOM events; it's rather a pure JS solution, but meh, it works.
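Since the original example isn't reproduced here, a rough equivalent using a MutationObserver (the modern replacement for the old DOM mutation events) could look like this; the store lookup assumes the span ids double as record ids:

// watch the grid body and copy span updates back into the store
var observer = new MutationObserver(function (mutations) {
    mutations.forEach(function (m) {
        var node = m.target.nodeType === 1 ? m.target : m.target.parentNode;
        if (node && node.id) {
            var record = grid.getStore().getById(node.id); // assumed id mapping
            if (record && record.get('value') !== node.innerHTML) {
                record.set('value', node.innerHTML); // guard avoids feedback loops
            }
        }
    });
});
observer.observe(grid.getEl().dom, { childList: true, subtree: true, characterData: true });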
Related
I'm currently trying to optimize our application that uses KnockoutJS for view binding and rendering. We discovered a huge bottleneck with a growing dataset size in the following scenario:
The dataset (observableArray) is displayed in an html table.
50 rows are displayed at the same time.
Each field in the row-model is an observable as the data is inline editable (input, select, ...)
Each row has 8 selects that are initialized with the Select2 widget (http://ivaynberg.github.io/select2) and a jQuery datepicker.
I already implemented these KnockoutJS performance tips, which I found on different sites:
The observable array is populated with the full dataset in a single call of the observable, not with multiple push() calls.
I'm using the template-binding in conjunction with the foreach option, as advised on multiple sites, and not both split up as separate binds.
Datepicker and Select2 are implemented as custom knockout bindingHandlers (a sketch follows this list).
ko.applyBindings is invoked directly with the table-element as second parameter, so that not the whole DOM is bound by knockout.
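For reference, a custom binding along these lines can be as small as the following sketch (the handler name and Select2 options are illustrative):

ko.bindingHandlers.select2 = {
    init: function (element, valueAccessor) {
        // initialize the widget once per element
        $(element).select2(ko.unwrap(valueAccessor()));
        // tear the widget down when knockout disposes of the element
        ko.utils.domNodeDisposal.addDisposeCallback(element, function () {
            $(element).select2('destroy');
        });
    }
};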
But the main bottleneck, in my opinion, seems to be the initialization of the additional widgets. I measured the creation of a Select2 widget, which takes ~15ms. Of course that accumulates quickly with 50 rows and 8 columns: a single call on the observable array to load the full dataset takes up to 10 seconds! During this time, the browser is under heavy load and becomes unresponsive, which is a no-go usability-wise.
This leads up to my questions:
Has someone experienced a similar scenario and how was it handled?
Is there a better way to initialize third-party widgets with knockout?
Are there alternative ways (using KnockoutJS) to solve this problem?
Has someone experienced a similar scenario and how was it handled?
I have an interactive table on my page, which I display using the Handsontable plugin. The table has about 15 columns and I need to display up to 250 rows.
I bind it to an observableArray, and, at first, the observableArray contained only observables.
I was not satisfied with its speed in the old versions of IE I am required to maintain (IE8/9; it worked OK in IE11 and recent Chrome), and made the decision to remove all observables inside the observableArray.
It is not really in the spirit of knockout and I am not so proud of this solution, but it does work much faster.
Handsontable lets you handle some events, in which I update the observableArray just like a regular array and then call valueHasMutated to notify subscribers that the array has changed.
Likewise, my custom binding handler watches changes on the whole observableArray and updates the table as needed.
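A minimal sketch of that pattern, with names as placeholders (the afterChange signature is Handsontable's):

// rows holds plain objects; the observables have been stripped out
var rows = ko.observableArray([]);

// wired up via Handsontable's afterChange setting
function onAfterChange(changes) {
    var underlying = rows();  // the plain array inside the observable
    changes.forEach(function (change) {
        // change = [row, prop, oldValue, newValue]
        underlying[change[0]][change[1]] = change[3];
    });
    rows.valueHasMutated();  // notify subscribers once, after all edits
}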
I think the other difference from your solution is that Handsontable switches into edit mode only when needed, so the plugins (like the datepicker) load only then.
Is there a better way to initialize third-party widgets with knockout?
Maybe you can enter edit mode only when the line/cell is selected. You could then use an if binding to initialize and display your widgets only then.
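A rough sketch of that idea, assuming a per-row isEditing observable, a placeholder dueDate field, and the custom datepicker binding from the question:

<td data-bind="click: function () { isEditing(true); }">
    <!-- ko if: isEditing -->
    <input data-bind="value: dueDate, datepicker: {}" />
    <!-- /ko -->
    <!-- ko ifnot: isEditing -->
    <span data-bind="text: dueDate"></span>
    <!-- /ko -->
</td>

Because the if binding only builds its contents when the condition turns true, the ~15ms widget cost is paid per edited cell instead of 400 times up front.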
I have a big HTML table (~10K rows, 4 columns, text only) dumped from a database. I'm experiencing poor performance when opening it in Chrome/Firefox.
I do not have direct access to the database, so it is impossible to load it page by page. All data is static HTML.
Does pagination with some jQuery plugin help improve performance in this case? Any other suggestions?
When applicable, setting table-layout: fixed helps in reducing rendering time a lot. The slowness is mostly caused by the fact that in the default table layout method, the browser has to parse and process the entire table, calculating width requirements for all cells, before it can render any part of the table.
Fixed layout tells the browser to set column widths according to the first row (paying attention to any CSS that may apply to it). This may, of course, result in a mess, if the widths are not suitable for other rows. For a data table where rows are generally very similar, the widths can probably be set suitably.
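For reference, it is a one-rule change (the selector is a placeholder); note that fixed layout only kicks in when the table has an explicit width:

.big-table {
    table-layout: fixed;  /* size columns from the first row, render incrementally */
    width: 100%;          /* fixed layout requires an explicit table width */
}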
This does not change the issue that very few people, if any, are going to read all the 10,000 rows. People will probably be looking for some specific data there. It might be better to set up a search form that lets the user get just what he wants.
I had a similar problem and made a jQuery plugin:
https://github.com/lperrin/infinitable
To use it, you load all your data with an Ajax call, turn it into a huge array, and pass it to the plugin.
It basically hides cells that are not visible, while making it easy to sort or filter them. There are other solutions to achieve this, but this one has no dependencies besides jQuery.
The project contains a demo with a table containing 100,000 elements.
Pagination would most certainly solve this problem. You could also try initially setting the table style to display: none, although the pagination should likely take effect before the browser attempts to render the table.
You could also store the table in a separate HTML file, fetch it with an Ajax call, and implement live scrolling. This depends on how you expect the user to explore the data, though. If jumping to a particular range like 100-199 is useful, a paginated table would be ideal.
I've developed a module at my work in jQuery; it is basically a table with the following functionality:
Cell level Edit
Row level edit
Drop n drop rows to change position
Show/hide columns
Column resize
Everything works fine on the latest browsers, like FF 9, IE9 and Chrome, but in older browsers like IE8 and FF 3.6 the performance of the page degrades significantly as the number of rows in the table increases.
I've tried many jQuery and DOM manipulation optimizations, but still see no effect on performance. Any idea if I'm missing something, or any tips to bring the performance to an acceptable level?
I haven't used any plugin; everything is my custom implementation. The JavaScript file is quite huge and I'm looking for some general good practices and tips.
There are two major ways to improve performance with large HTML pages: reduce the number of page reflows and reduce the number of handlers.
1. Reduce the number of page reflows
Every time you make a change to the DOM, the page needs to redraw itself. This is a reflow. Keep reflows to a minimum by building a string or DOM fragment containing all your DOM manipulations, then inserting it into your page; this triggers only one reflow. For example, if you're adding a table, create the whole table, with all its text in place, then insert it in a single operation.
JQuery allows you to create a DOM fragment like this:
var table = $('<table></table>');
You can manipulate your fragment in the standard ways:
var line = $('<tr><td>Some Data</td></tr>');
line.css('color','red');
table.append(line);
Then when the fragment is complete, add it to the DOM in a single step:
$('body').append(table);
You will trigger only one reflow, and the process will be orders of magnitude faster.
2. Reduce the number of handlers
If you have a lot of controls on each row, that's a lot of handlers. Instead, create only one handler, attach it to a common ancestor, and when it fires, inspect the event target to decide what to do. In jQuery you can use the delegated form of on() for this (the older live()-style handlers are deprecated).
for example:
$('table').on('click', 'td', function() {
    // do work here; a single delegated handler catches every td click
});
Only one handler will be created which will handle all of the td click events.
Do both of these and you should see dramatically improved performance.
The bad news is, there's not really much you can do, as it mostly depends on the JavaScript engine used in the browser.
If it's an option for you, try Google's Chrome Frame for IE8. On a public website that's most likely not a very nice solution, but it can be in a corporate environment where users can easily update software.
You could also try to render the table on-the-fly:
So you've got your table, and you've got an array with the information in JavaScript (or pullable via ajax), plus you know where you are in the table (row) and how long the table is (maxrows).
Then create a table that's only as big as the viewport, maybe a little bit bigger. Everything above and below the viewport is handled by a big <div> (or anything else) that is stretched to the height of the remaining rows, or of the rows before the topmost in-viewport row.
That way only a very limited number of DOM nodes is present at any time, which could noticeably improve performance.
When the user scrolls, remove table cells that are no longer in the viewport (adding their height to the whitespace block on that side) and add table cells that have freshly come into the viewport (removing their height from the whitespace on that side).
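A bare-bones sketch of that windowing idea, assuming a fixed row height and placeholder element ids:

var ROW_HEIGHT = 24;  // assumed fixed row height in px
var container = document.getElementById('scroller');    // scrollable wrapper
var topPad = document.getElementById('top-pad');        // whitespace above
var bottomPad = document.getElementById('bottom-pad');  // whitespace below
var tbody = document.getElementById('rows');
var data = window.tableData || [];                      // the full dataset

function render() {
    var first = Math.floor(container.scrollTop / ROW_HEIGHT);
    var visible = Math.ceil(container.clientHeight / ROW_HEIGHT) + 1;
    var last = Math.min(first + visible, data.length);
    topPad.style.height = (first * ROW_HEIGHT) + 'px';
    bottomPad.style.height = (Math.max(0, data.length - last) * ROW_HEIGHT) + 'px';
    var html = '';
    for (var i = first; i < last; i++) {
        html += '<tr><td>' + data[i] + '</td></tr>';
    }
    tbody.innerHTML = html;  // one insertion, one reflow per scroll step
}

container.onscroll = render;
render();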
I need some conceptual help:
I am trying to display a page that contains a single table with a lot of data (moderately big number of rows, very big number of columns), and I want that page to be as fast and smooth as possible from the user's point of view. What I am doing is the following:
Retrieve a list containing the database primary keys of the elements to be displayed in the table.
Iterate through the list, asynchronously request each element by its primary key, and, every time an element is retrieved, add it to the table.
Each of these retrieval operations is implemented as a Web service call.
Now my questions are the following:
How can I reorder the elements if they arrive in a different order than they were requested? (It is absolutely essential for me that these elements be inserted in the table in the same positions as their respective primary keys were in the original list; one approach is sketched below.)
Can this strategy be made compatible with any of the main JavaScript grid controls available out there? (Without me having to modify or understand how these controls internally work, of course.)
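For the ordering question, one approach that works with a plain table is to create a placeholder row per key up front, in list order, and fill each row in place when its response arrives; arrival order then no longer matters. A sketch, where fetchElement stands in for the web service call and the ids and field names are placeholders:

var keys = [101, 102, 103];  // primary keys, in the order to preserve
var tbody = document.getElementById('grid-body');  // placeholder id

keys.forEach(function (key, index) {
    var row = tbody.insertRow(index);  // placeholder at its final position
    row.id = 'row-' + key;
    row.insertCell(0).textContent = 'loading...';

    fetchElement(key, function (element) {
        // fill the pre-created row; element.name is an assumed field
        document.getElementById('row-' + key).cells[0].textContent = element.name;
    });
});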
I think you can look into the jQuery DataTables plugin. It is quite a powerful tool to display data in a tabular format.
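Getting started is typically a one-liner; in recent versions of DataTables the relevant performance options look like this (the selector is a placeholder):

// assumes jQuery and the DataTables script are already loaded
$('#grid').DataTable({
    paging: true,      // render only one page of rows at a time
    deferRender: true  // create row nodes only when they are displayed
});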
I have an HTML page that uses AJAX to retrieve messages from a server. I'm appending these messages to a <div> as they are retrieved by setting its innerHTML property.
This works fine while the amount of text is small, but as it grows it causes Firefox to use all available CPU and the messages slow down to a crawl. I can't use a textbox, because I want some of the text to be highlighted in colour or using other HTML formatting. Is there any faster way to do this that wouldn't cause the browser to lock up?
I've tried using jQuery as well, but from what I've read setting .innerHTML is faster than its .html() function and that seems to be the case in my own experience, too.
Edit: Perceived performance is not an issue - messages are already being written as they are returned (using Comet). The issue is that the browser starts locking up. The amount of content isn't that huge - 400-500 lines seems to do it. There are no divs within that div. The entire thing is inside a table, but hopefully that shouldn't matter.
You specifically said that you were appending, meaning that you are attaching it to the same parent. Are you doing something like:
myElem.innerHTML += newMessage;
or
myElem.innerHTML = myElem.innerHTML + newMessage;
because this is extremely inefficient (see this benchmark: http://jsben.ch/#/Nly0s). It causes the browser to first do a very, very large string concatenation (which is never good), but even worse, it then has to re-parse, insert, and render everything you had previously appended. Much better would be to create a new div element, use innerHTML to put in the message, and then call the DOM method appendChild to insert the newly created div. The browser will then only have to insert and render the new message.
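A minimal sketch of that append-only approach ('log' is a placeholder id):

function appendMessage(html) {
    var div = document.createElement('div');
    div.innerHTML = html;  // parse only the new message
    document.getElementById('log').appendChild(div);  // existing content untouched
}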
Can you break up your text and append it in chunks? Wrap portions of the text in div tags, then split them apart, adding the smaller divs to the page.
While this won't speed the overall process, the perceived performance likely will be better as the user sees the first elements more quickly while the rest loads later, presumably off the visible page.
Additionally, you should probably reconsider how much data you are sending down and embedding into the page. If the browser is slow, chances are the amount of data is huge; does that really make sense to the user? Or would a paging interface (or another incremental load mechanism) make more sense?
You can use insertAdjacentHTML("beforeend", "<HTML to append>") for this.
Here is an article about its performance which has benchmarks showing insertAdjacentHTML is ~150x faster than the .innerHTML += operation. This might have been optimised by browsers in the years after the article was written though.
Going along with Rick's answer, why not just pass the info back as JSON? Then you can go through it with setTimeout, displaying perhaps 2-5 messages per batch and calling setTimeout again for the next batch, until the JSON array has been processed.
You should still use innerHTML, so your JavaScript can create the content dynamically and add it to the div, but I would only do that for the first batch, to get everything up quickly.
After that, I would clone the first batch, change the innerHTML for each of the other messages (along with other info), and add that to the DOM tree.
Cloning will be faster than creating new elements and you won't have problems if anything else changes the dom tree while you are processing.
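A sketch of that batching idea, assuming the first batch is already in the page and messages is the parsed JSON array (the id and message shape are placeholders):

function renderBatch(messages, start) {
    var BATCH = 5;
    var log = document.getElementById('log');  // placeholder id
    var template = log.firstElementChild;      // first rendered message as a template
    var end = Math.min(start + BATCH, messages.length);
    for (var i = start; i < end; i++) {
        var node = template.cloneNode(true);   // clone instead of building anew
        node.innerHTML = messages[i].html;     // assumed message shape
        log.appendChild(node);
    }
    if (end < messages.length) {
        // yield so the browser can repaint between batches
        setTimeout(function () { renderBatch(messages, end); }, 0);
    }
}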
"The entire thing is inside a table, but hopefully that shouldn't matter."
Actually, it matters a lot. Due to the nature of tables, a cell often cannot be rendered until the width and height of all cells in its column and row have been calculated. table-layout: fixed overcomes this, at the cost of locking cell width and height based on the first row.
In short, it may be best not to wrap the content in a table, or, if the data really is tabular, to try the fixed layout rendering.
http://www.w3schools.com/Css/pr_tab_table-layout.asp
I wrote something similar, and it was challenging. First, you need to isolate the problem.
Is it the rendering code? Try commenting out all the rendering and see if the AJAX itself slows down Firefox. If so, try different approaches to the rendering, as outlined above.
Is it the network? Try commenting out the Ajax, and just run your innerHTML setting periodically. If this is the problem, you may need to experiment with different timing settings.