Handling large grid datasets in JavaScript

What are some of the better solutions for handling large datasets (100K rows) on the client with JavaScript? In particular, if you have multi-column sort and search capabilities, how do you handle fetching (and pre-fetching) the data, client-side model binding (for display), and caching the data?
I would imagine a good solution would do some thoughtful work in the background. For instance, if the table displays N items initially, it might fetch 2N items, return the data to the user, and then go fetch the next 2N items in the background (even before the user has asked for them). As the user makes search/sort changes, it would throw out the prefetched data (or perhaps keep the initial base case cached) and repeat the process, roughly as in the sketch below.
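To make that concrete, here is a rough sketch of that prefetching idea; the /api/rows endpoint, the page size, and render() are all placeholders for whatever your app actually uses:

```javascript
// Rough sketch of the prefetch idea above; /api/rows, PAGE, and render() are placeholders.
const PAGE = 50;               // N rows visible in the grid
const cache = new Map();       // offset -> array of rows, for the current sort/search state

async function fetchRows(offset, count, query) {
  const res = await fetch(`/api/rows?offset=${offset}&count=${count}&q=${encodeURIComponent(query)}`);
  return res.json();
}

async function showPage(offset, query) {
  if (!cache.has(offset)) {
    // Fetch 2N rows: the requested page plus the next one.
    const rows = await fetchRows(offset, PAGE * 2, query);
    cache.set(offset, rows.slice(0, PAGE));
    cache.set(offset + PAGE, rows.slice(PAGE));
  }
  render(cache.get(offset));

  // Then quietly prefetch the following 2N rows in the background.
  fetchRows(offset + PAGE * 2, PAGE * 2, query).then(rows => {
    cache.set(offset + PAGE * 2, rows.slice(0, PAGE));
    cache.set(offset + PAGE * 3, rows.slice(PAGE));
  });
}
```

When the user changes the sort or search, the cache would be cleared (or swapped out for the base case) and the same routine would run again.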
Can you share the best solutions you have seen?
Thanks

Use a jQuery table plugin like DataTables: http://datatables.net/
It supports server-side processing for sorting, filtering, and paging. And it includes pipelining support to prefetch the next x pages of records: http://www.datatables.net/examples/server_side/pipeline.html
Actually the DataTables plugin works 4 different ways:
1. With an HTML table, so you could send down a bunch of HTML and then have all the sorting, filtering, and paging work client-side.
2. With a JavaScript array, so you could send down a 2D array and let it create the table from there.
3. Ajax source - which is not really applicable to you.
4. Server-side, where you send data in JSON format to an empty table and let DataTables take it from there (a minimal configuration is sketched below).
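For the server-side mode (option 4), a minimal setup looks roughly like this; the endpoint URL is a placeholder, the server has to implement the DataTables request/response protocol, and the pipelining example linked above adds the prefetching on top of this:

```javascript
// Rough sketch of DataTables server-side processing (endpoint is a placeholder).
$('#orders').DataTable({
  processing: true,     // show the "processing" indicator during requests
  serverSide: true,     // delegate sorting, filtering and paging to the server
  ajax: '/api/orders'   // server must speak the DataTables request/response format
});
```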

SlickGrid does exactly what you're looking for. (Demo)
Using the AJAX data store, SlickGrid can handle millions of rows without flinching.
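Roughly, the remote setup looks like the sketch below. The "loader" object stands in for the remote data model that ships with SlickGrid's AJAX example (its methods are assumptions), while the grid calls themselves are standard SlickGrid API; "columns" is assumed to be defined elsewhere:

```javascript
// Sketch of SlickGrid with on-demand loading; "loader" mimics the AJAX example's remote model.
var grid = new Slick.Grid('#grid', loader.data, columns, { enableCellNavigation: true });

// Whenever the visible row range changes, ask the loader for just those rows.
grid.onViewportChanged.subscribe(function () {
  var vp = grid.getViewport();
  loader.ensureData(vp.top, vp.bottom);
});

// When rows arrive, repaint only the rows that changed.
loader.onDataLoaded.subscribe(function (e, args) {
  for (var i = args.from; i <= args.to; i++) {
    grid.invalidateRow(i);
  }
  grid.updateRowCount();
  grid.render();
});
```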

Since you tagged this with Ext JS, I'll point you to Ext.ux.LiveGrid if you haven't already seen it. The source is available, so you might have a look and see how they've addressed this issue. This is a popular and widely-used extension in the Ext JS world.
With that said, I personally think (virtually) loading that much data is useless as a user experience. Manually pulling a scrollbar around (jumping hundreds of records per pixel) is a far inferior experience to simply typing what you want. I'd much prefer some robust filtering/searching instead of presenting that much data to the user.
What if you went to Google and instead of a search box, it just loaded the entire internet into one long virtual list that you had to scroll through to find your site... :)

It depends on how the data will be used.
For a large dataset, where the browser's Find function was adequate, just returning a straight HTML table was effective. It takes a while to load, but the display is responsive on older, slower clients, and you never have to worry about it breaking.
When the client did the sorting and searching, and we weren't showing the entire table at once, I had the server send tab-delimited tables through XMLHttpRequest, parsed them in the browser with lines = text.split('\n'), and updated the display with repeated calls to $('node').innerHTML = '...'. The JS engine can store long strings pretty efficiently. That ran a lot faster on the client than showing, hiding, and rearranging DOM nodes; creating and destroying DOM nodes on the fly turned out to be really slow. Splitting each line into fields on demand seems to work; I haven't experimented much with that degree of freedom.
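Something along these lines, with the URL, the number of rendered rows, and the target element as placeholders:

```javascript
// Sketch of the tab-delimited approach: keep the data as long strings,
// split into fields only when rendering, and write rows in one innerHTML assignment.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/data.tsv');   // placeholder URL
xhr.onload = function () {
  var lines = xhr.responseText.split('\n');            // one string per row, stored cheaply
  var html = '';
  for (var i = 0; i < lines.length && i < 100; i++) {  // render only the visible slice
    var fields = lines[i].split('\t');                 // split on demand
    html += '<tr><td>' + fields.join('</td><td>') + '</td></tr>';
  }
  document.getElementById('grid-body').innerHTML = html;  // one DOM write beats per-node creation
};
xhr.send();
```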
I've never tried the obvious pre-fetch & background trick, because these other methods worked well enough.

Check out this comprehensive list of data grids and spreadsheets.
For filtering/sorting/pagination purposes you may be interested in the excellent Handsontable, or DataTables as a free alternative.
If you simply need to display a huge list without any additional features, Clusterize.js should be sufficient.
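A minimal Clusterize.js sketch, assuming an already-loaded data array and the usual scroll/content container pair (the ids and row markup are placeholders):

```javascript
// Minimal Clusterize.js sketch; "data", the element ids, and the row markup are placeholders.
var clusterize = new Clusterize({
  rows: data.map(function (d) { return '<li>' + d.name + '</li>'; }),  // pre-rendered row strings
  scrollId: 'scrollArea',     // the scrollable container element
  contentId: 'contentArea'    // the element that receives only the visible rows
});
```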

Related

How to handle an extremely big table in a search?

I'm looking for suggestions on how to handle the following use case with the Python Django framework; I'm also open to using JavaScript libraries/AJAX.
I'm working with a pre-existing table/model called revenue_code with over 600 million rows of data.
The user needs to search three fields within one search (code, description, room) and be able to select multiple search results, similar to Kendo's multi-select control. I first started off by combining the codes in django-filters as shown below, but my application became unresponsive; after waiting 10-15 minutes I was able to view the search results but couldn't select anything.
https://simpleisbetterthancomplex.com/tutorial/2016/11/28/how-to-filter-querysets-dynamically.html
I've also tried Kendo controls, select2, and Chosen, because the user needs to be able to select as many rev codes as they need (upward of 10-20), but all gave the same unresponsive page when they attempted to load the data into the control/multi-select.
Essentially what I'm looking for is something like the example below, which lets the user make multiple selections and handles a massive amount of data without becoming unresponsive. Ideally I'd like to be able to run my search without displaying all the data.
https://petercuret.com/add-ajax-to-django-without-writing-javascript/
Is the Django framework meant to handle this type of volume? Would it be better to export this data into a file and read the file? I'm not looking for code, just some pointers on how to handle this use case.
What is the basic mechanism of "searching 600 million rows"? Databases do this by building an index before search time, general enough to serve different types of queries; at search time you search the index instead, which is much smaller (so it fits in memory) and faster. But no matter what, "searching" by its nature has no "pagination" concept, and if 600 million records cannot fit in memory at the same time, then parts of those records have to be swapped in and out repeatedly; the more parts, the slower the operation. These details are hidden behind the algorithms in databases like MySQL, etc.
There are very compact representations, like bitmap indexes, which can allow you to search data like male/female very fast, or any data where you can use one bit per piece of information.
So whether you use Django or not does not really matter. What matters is the tuning of the database, the design of the tables to facilitate the queries (types of indexes), and the total amount of memory at the server end to keep the data in memory.
Check this out:
https://dba.stackexchange.com/questions/20335/can-mysql-reasonably-perform-queries-on-billions-of-rows
https://serverfault.com/questions/168247/mysql-working-with-192-trillion-records-yes-192-trillion
How many rows are 'too many' for a MySQL table?
You can't load all the data into your page at once. 600 million records is too many.
Since you mentioned select2, have a look at their example with pagination.
The trick is to limit your SQL results to maybe 100 or so at a time. When the user scrolls to the bottom of the list, it can automatically load in more.
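As a sketch, assuming a select2 v4-style setup and a hypothetical endpoint that returns an { items, hasMore } payload:

```javascript
// Sketch of select2 with remote data and infinite scroll. The URL and the
// { items, hasMore } response shape are assumptions about your endpoint.
$('#rev-codes').select2({
  multiple: true,
  ajax: {
    url: '/api/revenue-codes',
    delay: 250,                                  // debounce keystrokes
    data: function (params) {
      return { q: params.term, page: params.page || 1 };   // search term + page number
    },
    processResults: function (data, params) {
      params.page = params.page || 1;
      return {
        results: data.items,                     // [{ id, text }, ...]
        pagination: { more: data.hasMore }       // true -> select2 loads the next page on scroll
      };
    }
  }
});
```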
Send the search query to the server, and do the filtering in SQL (or NoSQL or whatever you use). Database engines are built for that. Don't try filtering/sorting in JS with that many records.

"fragmenting" HTTP requests

I have an Angular app pulling data from a REST server. Each item we pull has some "core" data - what's needed to display its basic representation - and then what I call "secondary" data: comments and other things that the user might want to see and might not.
I'm trying to optimize our request pattern to minimize the overall amount of time the user spends looking at a loading spinner: Pulling all (core/secondary) data at once causes the initial request to return far too slowly, but pulling only the bare essentials until the user asks for something we haven't requested yet also creates unnecessary load time, at least inasmuch as I could've anticipated them wanting to see it and loaded it while they were busy reading the core content.
So, right now I'm doing a "core content" pull first and then initiating a "secondary" pull at the end of the success callback from the first. This is going to be an experimental process, but I'm wondering what (if any) best practices have been established in this situation. (I'm sure a good answer to that is a google away, but in this instance I'm not quite sure what to google - thus the quotation marks in this question's title)
A more concrete question: Am I better off initiating many small HTTP transactions or a few large ones? My instinct is to do many small ones, particularly if I can anticipate a few things the user is likeliest to want to see first and get those loaded as soon as possible. But surely there's an asymptote here? Or am I off-base in this line of thinking entirely?
I use the same approach as you, and it works pretty well for a many-keyed, 10,000+ item collection.
The collection is paginated with ui.bootstrap.pagination, only a maximum of 10 items are displayed at once. It can be searched on title.
So my approach is to retrieve only id and title, for the whole collection, so the search can be used straight away.
Then, as the items displayed on screen are in an array, I place a $watch on that array. The job of the $watch is to go fetch full details of the items in the array (secondary pull), but of course only when the array is changed.
So, in the worst case scenario, you are pulling the full details of only 10 items.
Results are cached for more efficiency. It displays instant results, as the $watch acts as a pre-loader.
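A rough sketch of that pattern follows; the endpoint and detailCache are assumptions, and the $watch fires whenever pagination replaces the visibleItems array:

```javascript
// Rough sketch of the $watch-based secondary pull (AngularJS).
var detailCache = {};

$scope.$watch('visibleItems', function (items) {
  if (!items) { return; }
  items.forEach(function (item) {
    if (detailCache[item.id]) {            // already fetched: serve instantly from cache
      item.details = detailCache[item.id];
      return;
    }
    $http.get('/api/items/' + item.id).then(function (res) {
      detailCache[item.id] = res.data;     // cache so paging back is instant
      item.details = res.data;
    });
  });
});
```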
Am I better off initiating many small HTTP transactions or a few large ones?
I believe larger transactions, covering just a few items (the ones that are clickable on screen), are very efficient.
Regarding the best practice bit: I suppose there are many ways to achieve your goals; however, the technique you are using works extremely well, as it retrieves only what is needed, and only just before it is needed.
Besides, it is simple enough to implement.
Also, like you, I would have thought many smaller pulls were surely better than several large ones. However, I was advised to go for a large pull in a comment on this question: Fetching subdocuments with angular $http
To answer your question about which keywords to search for, I suggest:
progressive loading
An alternative could be using WebSockets or streaming loading; Oboe.js does this quite well:
http://oboejs.com/examples
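For example (the URL, the node pattern, and renderItem are assumptions):

```javascript
// Sketch of streaming the response with Oboe.js: each item is handled as soon as it
// arrives rather than waiting for the whole payload.
oboe('/api/items')
  .node('items.*', function (item) {
    renderItem(item);   // called once per item while the response is still streaming in
  })
  .done(function (everything) {
    console.log('finished loading', everything.items.length, 'items');
  });
```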

JavaScript/jQuery tree renderer

I need some help in choosing a "tree renderer" similar to jsTree. Since my requirements might be a little different, I would like to ask you to recommend a (preferably jQuery based) plugin for me.
I am trying to (re)write an organization browser, which shows who reports to whom, etc. The catch is that my server can only throw back JSON (or whatever format) one time to the client. That is, on page load, my HTML would have a JSON map containing about 2000 employees, and jsTree/whatever-solution-you-recommend would render this as a tree.
By one-time, I mean that the client JS cannot repeatedly / progressively ask the server for more data in an AJAX-y fashion. (This is due to some complexities at the server side).
Do you think jsTree is fit for this? I don't want the entire user experience to become sluggish with a 1500-2000 employee org-chart (2000 is the limit here).
Any pointers would help, thanks!
You can try the jQuery Treeview plugin:
http://docs.jquery.com/Plugins/Treeview/treeview

Processing a large (12K+ rows) array in JavaScript

The project requirements are odd for this one, but I'm looking to get some insight...
I have a CSV file with about 12,000 rows of data, approximately 12-15 columns. I'm converting that to a JSON array and loading it via JSONP (it has to run client-side). It takes many seconds to do any kind of querying on the data set to return a smaller, filtered data set. I'm currently using JLINQ to do the filtering, but I'm essentially just looping through the array and returning a smaller set based on conditions.
Would webdb or indexeddb allow me to do this filtering significantly faster? Any tutorials/articles out there that you know of that tackles this particular type of issue?
http://square.github.com/crossfilter/ (no longer maintained, see https://github.com/crossfilter/crossfilter for a newer fork.)
Crossfilter is a JavaScript library for exploring large multivariate datasets in the browser. Crossfilter supports extremely fast (<30ms) interaction with coordinated views, even with datasets containing a million or more records...
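A minimal Crossfilter sketch, with field names standing in for whatever is actually in your CSV:

```javascript
// Minimal Crossfilter sketch; "rows" is the parsed 12K-record array and the
// field names are placeholders.
var cf = crossfilter(rows);
var byCategory = cf.dimension(function (d) { return d.category; });
var byPrice    = cf.dimension(function (d) { return d.price; });

byCategory.filter('hardware');        // exact-match filter on one dimension
byPrice.filterRange([10, 100]);       // range filter on another
var visible = byPrice.top(50);        // top 50 records satisfying all current filters
```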
This reminds me of an article John Resig wrote about dictionary lookups (a real dictionary, not a programming construct).
http://ejohn.org/blog/dictionary-lookups-in-javascript/
He starts with server side implementations, and then works on a client side solution. It should give you some ideas for ways to improve what you are doing right now:
Caching
Local Storage
Memory Considerations
If you require loading an entire data object into memory before you apply some transform on it, I would leave IndexedDB and WebSQL out of the mix as they typically both add to complexity and reduce the performance of apps.
For this type of filtering, a library like Crossfilter will go a long way.
Where IndexedDB and WebSQL can come into play in terms of filtering is when you don't need to load, or don't want to load, an entire dataset into memory. These databases are best utilized for their ability to index rows (WebSQL) and attributes (IndexedDB).
With in-browser databases, you can stream data into a database one record at a time and then cursor through it, one record at a time. The benefit for filtering is that this means you can leave your data on "disk" (a .leveldb in Chrome and a .sqlite database in Firefox) and filter out unnecessary records either as a pre-filter step or as the filter itself.
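As a rough sketch (the database and store names are placeholders, and render() is hypothetical):

```javascript
// Cursor through an IndexedDB store one record at a time, keeping only matching rows.
var open = indexedDB.open('grid-db', 1);

open.onupgradeneeded = function (e) {
  e.target.result.createObjectStore('rows', { keyPath: 'id' });
};

open.onsuccess = function (e) {
  var db = e.target.result;
  var matches = [];
  var cursorReq = db.transaction('rows').objectStore('rows').openCursor();
  cursorReq.onsuccess = function (ev) {
    var cursor = ev.target.result;
    if (!cursor) {               // no more records: render the filtered subset
      render(matches);
      return;
    }
    if (cursor.value.status === 'open') {   // pre-filter without loading everything into memory
      matches.push(cursor.value);
    }
    cursor.continue();
  };
};
```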

Best Practices for displaying large lists

Are there any best practices for returning large lists of orders to users?
Let me try to outline the problem we are trying to solve. We have a list of customers that have 1-5,000+ orders associated with each. We pull these orders directly from the database and present them to the user in a paginated grid. The view we have is a very simple "select columns from orders", which worked fine when we were first starting, but as we grow it's causing performance/contention problems. It seems like there are a million and one ways to skin this cat (return only a page's worth of data, only return the last 6 months of data, etc.), but as I said, I'm just wondering if there are any resources out there that provide a little more hand-holding on how to solve this problem.
We use SQL Server as our transaction database and select the data out in XML format. We then use a mixture of XSLT and Javascript to create our grid. We aren't married to the presentation solution but are married to the database solution.
My experience.
Always set default values in the UI for the user that are reasonable. You don't want them clicking "Retrieve" and getting everything.
Set a limit to the number of records that can be returned.
Only return from the database the records you are going to display.
If forward/backward consistency is important, store the entire result set from the query in a temp table and return just the page you need to display. When paging up/down, retrieve the next set from the temp table.
Make sure your indexes cover your queries.
Use different queries for different purposes. Think "Open Orders" vs. "Closed Orders". These might perform much better as different queries instead of one generic query.
Set parameter defaults in the stored procedures. Protect your query from a UI that is not setting reasonable limits.
I wish we did all these things.
I'd recommend doing some profiling to find the actual bottlenecks. Perhaps you have access to Visual Studio Profiler? http://msdn.microsoft.com/en-us/magazine/cc337887.aspx There are plenty of good profilers out there.
Otherwise, my first stop would be pagination to bring back fewer records from the db, which is easier on the connection and the memory footprint. Take a look at this (I'm assuming you're on SQL Server >= 2005):
http://www.15seconds.com/issue/070628.htm
I"m not sure from the question exactly what UI problem you are trying to solve.
If it's that the customer can't work with a table that is just one big amorphous blob, then let him sort on the fields: order date, order number, your SKU number, maybe his SKU number, and I guess others, too. He might find it handy to do a multi-column stable sort as well.
If it's that the table headers scroll up and disappear when he scrolls down through his orders, that's more difficult. Read the SO discussion to see if the method there gives a solution you can use.
There is also a jQuery mechanism for keeping the header within the viewport.
HTH
EDIT: Plus, I'll second @Iain's answer: do some profiling.
Another EDIT: @Scott Bruns's answer reminded me that when we started designing the UI, the biggest issue by far was limiting the number of records the user had to look at. So yes, I agree with Scott that you should give the user some way to see only a limited number of records right from the start; that is, before he ever sees a table, he has told you a lot about what he wants to see.
Stupid question, but have you asked the users of your application for input on which records they would like to see initially?
