Handling huge amount of data in Fixed Data Tables - javascript

I am trying to use Fixed Data Tables in my web application, and I am dealing with a large amount of data: hundreds of thousands of records. I am trying to load all the data at once to make the best use of the search and sort functionality of the data table.
Here is the link to the data table which I am using.
It takes a long time to load the data, which is expected, but after the data loads I get glitches in the browser: it becomes stuck and unresponsive.
How do I handle a huge amount of data in data tables while keeping full functionality?

The main advantage of using the fixed data table is that you can render the entire table from an array or an object.
The official example for the fixed data table is here:
http://schrodinger.github.io/fixed-data-table-2/example-object-data.html
The linked example renders the table from object data. Additional features like client-side sorting and filtering can also be added, which matters since, as you mentioned, your dataset is huge.
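For illustration, here is a minimal sketch of rendering a large array with fixed-data-table-2 in React. The `rows` array and the `name` field are placeholder assumptions; the key point is that the table virtualizes rendering, so only the visible rows touch the DOM while the full array stays in memory for your own sorting and filtering:

```javascript
import React from 'react';
import {Table, Column, Cell} from 'fixed-data-table-2';
import 'fixed-data-table-2/dist/fixed-data-table.css';

// `rows` is assumed to be the pre-loaded array of record objects.
// Only the rows visible in the 500px viewport are actually mounted.
function RecordsTable({rows}) {
  return (
    <Table
      rowHeight={40}
      headerHeight={40}
      rowsCount={rows.length}
      width={800}
      height={500}>
      <Column
        columnKey="name"
        header={<Cell>Name</Cell>}
        cell={({rowIndex}) => <Cell>{rows[rowIndex].name}</Cell>}
        width={200}
      />
    </Table>
  );
}
```

If the browser still stalls with this, the remaining cost is usually downloading the data and sorting/filtering the full array in JavaScript, not the rendering itself.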

Related

Storing and retrieving data using MySQL for NxN structure in the most optimized and less time consuming way

I have a form like the one below, which contains dynamic columns and rows that the user can add and delete.
Assuming 50 to 60 rows and up to 10 columns, there are a lot of calculations taking place in the JavaScript.
I am using a MySQL database here and PHP (Laravel) as the backend.
Currently, my table structure for storing the above is:
Table 1 - row_headers (row_id, row_name)
Table 2 - column_headers (column_id, column_name)
Table 3 - book_data (book_id, row_id, column_id, data_value)
This set of tables suffices for storing the data, but it is extremely slow on both the store call and the get call. Fetching the complete data back to the UI puts a heavy load on the database, and rendering it in HTML is just as slow (the nested for loops eat all the time), so the whole process is tedious.
I want to understand how to optimize this. What table structure should be used instead, and what is the best way to reduce the load on the backend as well as the frontend?
Help appreciated.
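To illustrate one frontend optimization: the nested-loop rendering can be replaced by a single pass over the flat book_data rows. A minimal sketch, assuming the JSON payloads mirror the three tables above (field names are assumptions):

```javascript
// Pivot the flat book_data rows into a 2D grid in one O(n) pass,
// instead of scanning book_data once per cell with nested for loops.
function buildGrid(rowHeaders, columnHeaders, bookData) {
  // Index each row_id / column_id to its position in the grid.
  const rowIndex = new Map(rowHeaders.map((r, i) => [r.row_id, i]));
  const colIndex = new Map(columnHeaders.map((c, i) => [c.column_id, i]));

  // Pre-allocate the grid so each cell write is a direct assignment.
  const grid = rowHeaders.map(() => new Array(columnHeaders.length).fill(null));

  for (const cell of bookData) {
    const r = rowIndex.get(cell.row_id);
    const c = colIndex.get(cell.column_id);
    if (r !== undefined && c !== undefined) grid[r][c] = cell.data_value;
  }
  return grid; // grid[r][c] holds the value for rowHeaders[r] / columnHeaders[c]
}
```

The backend equivalent is the same idea: fetch each table in one query and pivot in code, rather than querying per row or per cell.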

How do I parse large amounts of data in XLSX with Javascript?

I want to process some 48,000 rows to build a dashboard and show some stats based on the data in those rows. One particular field, which has a length of 30 characters, also carries some data in the form of substrings. How do I parse all of this data, row by row, to come up with the end result? There are plenty of examples out there, but I couldn't relate them to my case.
I'm using the "js-xlsx" library in one of my applications. Its performance seems to be quite good.
Here is the github URL.
https://github.com/SheetJS/js-xlsx
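As a concrete starting point, here is a minimal sketch using js-xlsx's documented API (readFile and sheet_to_json, for Node; in a browser you would use XLSX.read on an ArrayBuffer instead). The 'code' column name and the substring positions stand in for the 30-character field from the question:

```javascript
const XLSX = require('xlsx');

// Read the workbook and convert the first sheet to an array of row objects,
// keyed by the header row.
const workbook = XLSX.readFile('data.xlsx');
const sheet = workbook.Sheets[workbook.SheetNames[0]];
const rows = XLSX.utils.sheet_to_json(sheet);

// Walk the ~48,000 rows once, pulling the embedded substring out of the
// 30-character field and aggregating stats for the dashboard.
const stats = {};
for (const row of rows) {
  const field = String(row.code || '');
  const key = field.substring(0, 5); // placeholder positions for the substring
  stats[key] = (stats[key] || 0) + 1;
}
console.log(stats);
```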

Datatables - Local Server Side Processing

One of the queries I currently run to populate a Datatable is pretty resource intensive, so I am trying to reduce the load on my database by reducing the number of ajax calls for pagination, sorting, searching and so on.
I currently do a single ajax call to get a JSON array of the entire dataset (received by the browser in a couple of seconds), plug that into Datatables and have it render the whole set.
A problem occurs when there are tens of thousands of records: the browser hangs for close to 20 seconds before rendering the data.
Is there some sort of middle ground where Datatables doesn't load the whole set from the JSON array, and instead uses the JSON array as a sort of local server-side source for the data? In practice it would retrieve the first 10 rows from the JSON array and render them, and when the next page is clicked or a search is initiated, it would go back to the JSON array for the data instead of the server.
This sounds like a pretty simple solution, but I have not managed to find a function for this looking through the documentation. Is there a way to accomplish this with Datatables, natively or not?
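DataTables can do this natively by keeping serverSide: true but supplying ajax as a function, so the server-side protocol is answered from the in-memory array. A minimal sketch, assuming array-of-arrays row data and handling only single-column ordering:

```javascript
// `allRows` is the JSON array from your single up-front ajax call.
let allRows = []; // e.g. filled once by $.getJSON('/all-data', ...)

$('#example').DataTable({
  serverSide: true, // DataTables requests one page at a time...
  ajax: function (request, callback) { // ...and we answer from memory
    let filtered = allRows;

    // Global search across all cells.
    const term = request.search.value.toLowerCase();
    if (term) {
      filtered = allRows.filter(row =>
        row.some(cell => String(cell).toLowerCase().includes(term)));
    }

    // Sort on the first ordered column only (extend as needed).
    if (request.order.length) {
      const {column, dir} = request.order[0];
      filtered = filtered.slice().sort((a, b) =>
        String(a[column]).localeCompare(String(b[column])));
      if (dir === 'desc') filtered.reverse();
    }

    // Hand back just the requested page.
    callback({
      draw: request.draw,
      recordsTotal: allRows.length,
      recordsFiltered: filtered.length,
      data: filtered.slice(request.start, request.start + request.length)
    });
  }
});
```

Only the visible page is ever rendered, so the long hang goes away while the database still sees a single query.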

Insert large volume of data to create a tree with 25k nodes

We are creating a tree structure with the help of a custom tool developed in JavaScript/jQuery.
It works great. Now we have to create that tree from a feed file (a CSV file).
I am working on a POC to understand how the JS tool behaves with 25k nodes.
The problem is how to insert such a volume of data into my database so that I can check the behavior in the browser.
Let me brief you on our approach for inserting the tree in the DB. We create the left/right values using the nested set model (NSM), then insert them into two tables: one holds the node names, the other holds the left/right values and some other attributes. So I need to insert at least that volume of data (10K nodes) with its left/right values.
We supply a JSON object for rendering the tree on the client side, then recursively call the function to redraw the structure.
The question is not entirely clear, but whenever I need to insert a large amount of data into SQL Server I use BCP. Since your data is in CSV format, it should be easy:
http://msdn.microsoft.com/en-us/library/ms162802.aspx
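For generating the test data itself, here is a minimal Node.js sketch (node shape, column order, and file name are assumptions) that builds a random tree, assigns nested-set left/right values with a depth-first walk, and writes a CSV that BCP or a similar bulk loader can ingest:

```javascript
const fs = require('fs');

// Build a random tree of `count` nodes by attaching each new node
// to a randomly chosen existing node.
function makeTree(count) {
  const root = {id: 1, name: 'node-1', children: []};
  const nodes = [root];
  for (let i = 2; i <= count; i++) {
    const node = {id: i, name: 'node-' + i, children: []};
    nodes[Math.floor(Math.random() * nodes.length)].children.push(node);
    nodes.push(node);
  }
  return root;
}

// Assign nested set model left/right values via a depth-first walk.
function assignNestedSet(node, counter = {value: 1}) {
  node.lft = counter.value++;
  for (const child of node.children) assignNestedSet(child, counter);
  node.rgt = counter.value++;
  return node;
}

const root = assignNestedSet(makeTree(25000));

// Flatten to CSV lines: id,name,lft,rgt
const lines = [];
(function walk(node) {
  lines.push([node.id, node.name, node.lft, node.rgt].join(','));
  node.children.forEach(walk);
})(root);
fs.writeFileSync('tree.csv', lines.join('\n'));
```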

Handling large grid datasets in JavaScript

What are some of the better solutions to handling large datasets (100K records) on the client with JavaScript? In particular, if you have multi-column sort and search capabilities, how do you handle fetching (and pre-fetching) the data, client-side model binding (for display), and caching the data?
I would imagine a good solution would do some thoughtful work in the background. For instance, if the table initially displayed N items, it might fetch 2N items, return the data for the user, and then go fetch the next 2N items in the background (even if the user hasn't requested them). As the user makes search/sort changes, it would throw out the prefetched data (or maybe keep the initial base case cached) and repeat the same approach.
Can you share the best solutions you have seen?
Thanks
Use a jQuery table plugin like DataTables: http://datatables.net/
It supports server-side processing for sorting, filtering, and paging. And it includes pipelining support to prefetch the next x pages of records: http://www.datatables.net/examples/server_side/pipeline.html
Actually the DataTables plugin works 4 different ways:
1. With an HTML table, so you could send down a bunch of HTML and then have all the sorting, filtering, and paging work client-side.
2. With a JavaScript array, so you could send down a 2D array and let it create the table from there.
3. Ajax source - which is not really applicable to you.
4. Server-side, where you send data in JSON format to an empty table and let DataTables take it from there.
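A minimal sketch of the pipelining idea linked above, combined with option 4: each server request fetches several pages, later page requests are served from the cache, and the cache is invalidated when the sort or filter changes ('/data' and the POST format are assumptions):

```javascript
// Returns an ajax function for DataTables' serverSide mode that prefetches
// `pagesToPrefetch` pages per request and serves cache hits locally.
function pipelineAjax(url, pagesToPrefetch) {
  let cache = null; // {start, data, recordsTotal, recordsFiltered, key}

  return function (request, callback) {
    // Any change in ordering or search invalidates the cached block.
    const key = JSON.stringify([request.order, request.search]);
    const hit = cache && cache.key === key &&
      request.start >= cache.start &&
      request.start + request.length <= cache.start + cache.data.length;

    if (hit) {
      const offset = request.start - cache.start;
      callback({
        draw: request.draw,
        recordsTotal: cache.recordsTotal,
        recordsFiltered: cache.recordsFiltered,
        data: cache.data.slice(offset, offset + request.length)
      });
      return;
    }

    // Cache miss: request a block of several pages starting at this row.
    const block = Object.assign({}, request, {
      length: request.length * pagesToPrefetch
    });
    $.ajax({url: url, data: block, dataType: 'json', type: 'POST'})
      .done(function (json) {
        cache = {
          start: request.start,
          data: json.data,
          recordsTotal: json.recordsTotal,
          recordsFiltered: json.recordsFiltered,
          key: key
        };
        json.draw = request.draw; // keep DataTables' draw counter in sync
        json.data = json.data.slice(0, request.length);
        callback(json);
      });
  };
}

// Usage: $('#example').DataTable({serverSide: true, ajax: pipelineAjax('/data', 5)});
```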
SlickGrid does exactly what you're looking for. (Demo)
Using the AJAX data store, SlickGrid can handle millions of rows without flinching.
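For scale, a basic SlickGrid setup looks like the sketch below; the grid virtualizes its rendering, so the length of the data array barely matters. The column definitions and element id are placeholders, and the AJAX data store mentioned above would replace the plain array with a provider that fetches row ranges on demand:

```javascript
// Column definitions map grid columns to fields on each data object.
const columns = [
  {id: 'id',   name: 'Id',   field: 'id'},
  {id: 'name', name: 'Name', field: 'name'}
];

// A million in-memory rows; only the visible slice is ever rendered.
const data = [];
for (let i = 0; i < 1000000; i++) {
  data.push({id: i, name: 'Record ' + i});
}

const grid = new Slick.Grid('#myGrid', data, columns, {rowHeight: 25});
```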
Since you tagged this with Ext JS, I'll point you to Ext.ux.LiveGrid if you haven't already seen it. The source is available, so you might have a look and see how they've addressed this issue. This is a popular and widely-used extension in the Ext JS world.
With that said, I personally think (virtually) loading that much data is useless as a user experience. Manually pulling a scrollbar around (jumping hundreds of records per pixel) is a far inferior experience to simply typing what you want. I'd much prefer some robust filtering/searching instead of presenting that much data to the user.
What if you went to Google and instead of a search box, it just loaded the entire internet into one long virtual list that you had to scroll through to find your site... :)
It depends on how the data will be used.
For a large dataset, where the browser's Find function was adequate, just returning a straight HTML table was effective. It takes a while to load, but the display is responsive on older, slower clients, and you never have to worry about it breaking.
When the client did the sorting and searching, and we weren't showing the entire table at once, I had the server send tab-delimited tables through XMLHttpRequest, parsed them in the browser with list = text.split('\n'), and updated the display with repeated calls to $('node').innerHTML = 'blah'. The JS engine can store long strings pretty efficiently. That ran a lot faster on the client than showing, hiding, and rearranging DOM nodes; creating and destroying DOM nodes on the fly turned out to be really slow. Splitting each line into fields on demand seems to work; I haven't experimented with that degree of freedom.
I've never tried the obvious pre-fetch & background trick, because these other methods worked well enough.
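A minimal sketch of that tab-delimited approach (the URL, element id, and page size are assumptions, and real code should also HTML-escape cell text):

```javascript
const xhr = new XMLHttpRequest();
xhr.open('GET', '/table.tsv');
xhr.onload = function () {
  // One long string per row; cheap for the JS engine to hold.
  const list = xhr.responseText.split('\n');

  function renderPage(start, count) {
    let html = '<table>';
    for (let i = start; i < Math.min(start + count, list.length); i++) {
      // Split each line into fields only when it is actually displayed.
      html += '<tr><td>' + list[i].split('\t').join('</td><td>') + '</td></tr>';
    }
    // One innerHTML assignment instead of per-cell DOM manipulation.
    document.getElementById('grid').innerHTML = html + '</table>';
  }

  renderPage(0, 50);
};
xhr.send();
```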
Check out this comprehensive list of data grids and spreadsheets.
For filtering/sorting/pagination purposes you may be interested in the excellent Handsontable, or in DataTables as a free alternative.
If you simply need to display a huge list without any additional features, Clusterize.js should be sufficient.
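A minimal Clusterize.js sketch for that plain-list case; the two container ids follow the structure its documentation requires (a fixed-height scrolling element wrapping a content element):

```javascript
// Build 100,000 row strings; Clusterize keeps only the visible
// "cluster" of them in the DOM as you scroll.
const rows = [];
for (let i = 0; i < 100000; i++) {
  rows.push('<div class="row">Record #' + i + '</div>');
}

new Clusterize({
  rows: rows,               // array of HTML strings, one per row
  scrollId: 'scrollArea',   // outer element: fixed height, overflow: auto
  contentId: 'contentArea'  // inner element Clusterize fills with the cluster
});
```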
