I am loading content from a server and putting it on a web page:
$("#mydiv").load("/content");
The content is a very long HTML table (60k rows with 2 columns, simple strings, no CSS).
The content string is ~3.5 MB and loading it takes about 1 second.
The problem is that it then takes ~1 minute for the browser to render it.
Is there any way I can speed up the rendering without rethinking my presentation (e.g. a paginated list)?
You cannot speed this up while receiving all the data at once.
Instead, get the total number of records, then fetch and append rows to the table in batches, looping over ajax requests.
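A minimal sketch of that batched approach, assuming a hypothetical endpoint that accepts offset/limit parameters and returns one slice of <tr> markup per request (all names here are illustrative):
var BATCH = 1000;   // rows per request (tune to taste)
var TOTAL = 60000;  // total record count, fetched up front

function loadBatch(offset) {
    if (offset >= TOTAL) return; // all rows appended
    $.get('/content', { offset: offset, limit: BATCH }, function (html) {
        $('#mydiv table tbody').append(html); // append this slice only
        loadBatch(offset + BATCH);            // then request the next one
    });
}
loadBatch(0);
Because each batch renders as soon as it arrives, the user sees the first rows almost immediately instead of waiting a minute for the whole table.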
Related
I have a tricky problem creating a table with a lot of data in a React application.
The table was built using only HTML/CSS/JavaScript, without any third-party libraries.
It is displayed on a single page and loads 20 more rows whenever the user scrolls down to the end.
If there aren't many rows, the loading speed is fine.
But as the rows grow to 1000, 1500, 2000, and so on, performance degrades badly.
This happens because many rows re-render again and again whenever a new page of rows loads.
To improve performance, I would like to render only the elements inside the visible area of the table.
Is there a way?
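A minimal windowing sketch of that "render only what's visible" idea, assuming fixed-height rows; the markup, names, and data shape are all illustrative:
// Assumed markup:
//   <div id="container" style="height:400px; overflow:auto; position:relative">
//     <div id="spacer"></div>
//     <div id="viewport" style="position:absolute; top:0; left:0; right:0"></div>
//   </div>
var ROW_HEIGHT = 30; // px, must match the CSS row height
var rows = [];       // the full data set lives in memory, not in the DOM
var container = document.getElementById('container');
var spacer = document.getElementById('spacer');
var viewport = document.getElementById('viewport');

spacer.style.height = (rows.length * ROW_HEIGHT) + 'px'; // full virtual height

function render() {
    var first = Math.floor(container.scrollTop / ROW_HEIGHT);
    var count = Math.ceil(container.clientHeight / ROW_HEIGHT) + 1;
    var last = Math.min(first + count, rows.length);
    var html = '';
    for (var i = first; i < last; i++) {
        html += '<div class="row">' + rows[i] + '</div>';
    }
    viewport.style.top = (first * ROW_HEIGHT) + 'px'; // line up with the scroll position
    viewport.innerHTML = html;                        // only the visible slice is in the DOM
}

container.addEventListener('scroll', render);
render();
The spacer keeps the scrollbar proportional to the full data set while the viewport only ever holds a couple of dozen nodes, so the per-scroll work stays constant regardless of row count.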
I'm developing a single page app that fetches data from the back-end into an AngularJS controller and then shows it in grids, charts and the like using KendoUI. I also want to generate print-friendly reports of these grids; so far, so good.
The problem is that I want to add a footer row at the end of each printed page containing the sum of all rows on that page, which I can't do, because I have no idea how many pages the report will take or how many rows fit on a page once printed.
I tried different approaches, like generating PDFs, but most PDF libraries don't support UTF encodings, and that's something I need.
What I want is to know the number of row elements on each "printed" page; then I would calculate the sum row and add it to that "page". In effect I need something like a "page-break event", since the whole job has to happen in the JS runtime. But there seems to be no way to do this when printing HTML directly, because the browser is the one that deals with the printer.
So the only way I have found is to fix the row heights, calculate how many rows each page contains, and hope that users don't change the paper size.
Is there any way?
PS. I wonder why I haven't found anyone else with this issue, because calculating per-page totals when printing is very common in report generators.
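For what it's worth, a minimal sketch of that fixed-row-height fallback, in plain DOM for brevity: with the row height fixed in CSS, pick a rows-per-page constant by trial printing, then insert a subtotal row after every page's worth of rows and force a break after it. The column index and the constant are assumptions, and browser support for page breaks inside tables varies.
var ROWS_PER_PAGE = 40; // tuned by trial printing; remember the sum row takes height too

function addPageSums(table) {
    var rows = Array.prototype.slice.call(table.querySelectorAll('tbody tr'));
    var pageSum = 0;
    rows.forEach(function (tr, i) {
        pageSum += parseFloat(tr.cells[1].textContent) || 0; // assumed value column
        if ((i + 1) % ROWS_PER_PAGE === 0 || i === rows.length - 1) {
            var sumRow = table.insertRow(tr.rowIndex + 1); // insert right after this row
            sumRow.className = 'page-sum';
            sumRow.insertCell(0).textContent = 'Page total';
            sumRow.insertCell(1).textContent = pageSum;
            sumRow.style.pageBreakAfter = 'always'; // break after the sum row
            pageSum = 0;
        }
    });
}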
I am working on a project that fetches at least 50,000 data sets that have to be displayed as tabular data. The data sets are fetched from our MySQL database. The users have the following requirements...
The users don't want to set up pagination but they want to render more and more table rows as they scroll down the page.
If they scroll up, the users should not see the table rows that were previously displayed when they scrolled down. That means that I have to either delete the table rows that were previously created or hide them.
The lag or load time should be low enough that users don't notice any obvious latency.
Our project uses a LAMP (Python) stack with Django as the framework. We use Django templates, which are server-side templates. I don't know how to translate the same experience to the client side. I have an approach in mind; can someone either validate or correct it?
My intended approach is given below:
1. Fetch a certain subset of rows of the original data set on load (say 5,000). This will be cached server side with memcached. On page load, only a subset of that (say 100 rows) will be displayed. Each page will have certain state associated with it, such as page number or page size, initialized using the HTML5 history pushState API.
2. When an ajax request is made, additional data sets will be fetched and additional table rows will be appended to the existing table rows.
3. When the user scrolls up and reaches what would have been a previous page, the table rows get deleted or hidden.
I don't know how to implement step 3. What event should I listen for, and what preconditions should I check to trigger it? I also don't know whether step 2 is the correct approach.
Here's what I would do:
Add a class (scrolled-past) to any row that is scrolled past the window top. (If you're using jQuery, jQuery Waypoints is a great tool to detect this.)
Fetch more data when reaching bottom.
When the data has been fetched, hide all .scrolled-past elements, append the new rows, and scroll to the top of the table (so that the first row is now the one that was previously the uppermost visible row).
There might be glitches when hiding, appending and scrolling, but I bet you'll nail it with an hour of tweaking. Adding the exact top offset of the uppermost visible row to the scroll offset in step 3 is one way of making it neater.
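A rough sketch of those three steps with a plain scroll handler (jQuery Waypoints would replace the manual top-edge check); the /rows endpoint, table id, and markup are all assumptions:
var loading = false; // guard so the bottom check fires one request at a time

$(window).on('scroll', function () {
    var top = $(window).scrollTop();

    // Step 1: mark rows whose bottom edge is above the window top.
    $('#myTable tbody tr').each(function () {
        var $row = $(this);
        $row.toggleClass('scrolled-past', $row.offset().top + $row.height() < top);
    });

    // Step 2: near the bottom, fetch the next batch of rows.
    if (!loading && top + $(window).height() > $(document).height() - 200) {
        loading = true;
        $.get('/rows', { after: $('#myTable tbody tr').length }, function (html) {
            // Step 3: hide scrolled-past rows, append, re-anchor the view.
            $('.scrolled-past').hide();
            $('#myTable tbody').append(html);
            $(window).scrollTop($('#myTable').offset().top);
            loading = false;
        });
    }
});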
I am using an ajax call to get JSON data. The data contains an array of, say, 10,000 items.
data.LocationQuality.forEach(function (datum) {
    // build a <tr> and fill all its <td>s from datum
});
I am using this loop to go through all the elements and create a tr, filling in all the tds from each datum.
The problem I am facing is that, because of the large amount of data, the loop takes a long time, and ultimately the browser shows "not responding" and I have to kill the page.
How to solve this problem?
There are a few things to be said here.
Firstly, as mentioned, pagination would definitely be the best solution for this. Keeping 10,000 rows in memory is not a problem for browsers nowadays, but displaying 10,000 rows in a table often does take ages. Displaying only 20 rows at a time, for example, is much faster.
A solution like jqGrid does not require ajax (but does support it). You can load your data once and the grid will keep the data in memory and only display a number of items at a time, with navigation buttons to flip through pages.
You can get it here: http://www.trirand.com/blog/?page_id=6
Secondly, something should be said about the DOM. I expect you are currently adding a row to the DOM 10,000 times. This is much, much slower than building the table in memory and then adding the one table with 10,000 rows in a single step. It will still be slow, but not as slow as hitting the DOM that many times.
So rather than
$.each(data, function (index, row) {
    $('#yourTable').append('your row definition'); // hitting the DOM for each row
});
do this instead
var table = $('<table></table>');
$.each(data, function (index, row) {
    table.append('your row definition'); // adding it to the table in memory
});
$('#yourTable').replaceWith(table); // hit the DOM once
Without code this is a stab in the dark:
1: Optimise your page-creation loop. DOM operations are very slow, and you might be able to improve things by doing them in a different way.
2: Download your data bit by bit. As the user scrolls the page, make another AJAX call to get the next portion of the data.
3: Download all the data, but render only one page at a time.
You also might want to look at this: https://developer.mozilla.org/en-US/docs/Web/API/document.createDocumentFragment?redirectlocale=en-US&redirectslug=DOM%2Fdocument.createDocumentFragment
It allows you to build large parts of the DOM without attaching them yet. That way you can build your table and append it to the DOM in one go, which should be a lot faster than appending rows one at a time.
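Applied to the loop from the question, a small sketch of the fragment approach (the field names are illustrative):
var fragment = document.createDocumentFragment();

data.LocationQuality.forEach(function (datum) {
    var tr = document.createElement('tr');
    var name = document.createElement('td');
    name.textContent = datum.name;   // assumed field
    var value = document.createElement('td');
    value.textContent = datum.value; // assumed field
    tr.appendChild(name);
    tr.appendChild(value);
    fragment.appendChild(tr); // no reflow: the fragment is off-DOM
});

document.querySelector('#yourTable tbody').appendChild(fragment); // one DOM hit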
You have a problem displaying a large amount of data, and you can't get the data in multiple queries?
So get all the data in one call, parse it and place it in an array, then do pagination against that "local" array.
Instead of requesting data from the server for each page, just read it from the "local" array.
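A minimal sketch of that client-side paging, assuming a hypothetical endpoint that returns everything at once and illustrative field names:
var all = [];        // the full data set, fetched once
var PAGE_SIZE = 100;

$.getJSON('/all-data', function (data) {
    all = data;
    showPage(0);
});

function showPage(n) {
    var html = all.slice(n * PAGE_SIZE, (n + 1) * PAGE_SIZE)
        .map(function (row) {
            return '<tr><td>' + row.name + '</td><td>' + row.value + '</td></tr>';
        })
        .join('');
    $('#yourTable tbody').html(html); // swap the whole page in one DOM hit
}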
I have a JavaScript application which opens an ExtJS Window, containing an ExtJS TabPanel, which has a tab containing a Data Grid, showing approximately 900 - 1000 rows, each with 7 columns of text fields. The grid may also be filtered to show about 100 rows at a time. When the window opens, navigating to this tab can cause Firefox or Safari to spin/lock up for over 60 seconds...
This is Ext 2.2
I know it's very hard to say without seeing code, but my question is: should ExtJS be capable of displaying a grid with this much data? When optimizing, should I be looking at my own code, or is ExtJS itself the problem? Is anyone using ExtJS to display such large grids?
ExtJS itself can handle that many rows - we have a grid that we've capped at 1000 rows on the server, but the page renders with no problems - certainly not taking 60 seconds.
Some other questions:
Are you sending the data in XML or JSON format? We're using JSON, loading it directly into a JsonStore (roughly as sketched below).
Are you doing any processing of the data before it's rendered?
What specific grid class are you using?
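For comparison, this is roughly the shape of our setup; a hedged Ext 2.x sketch, with the URL and field names as assumptions:
var store = new Ext.data.JsonStore({
    url: '/grid-data',       // server returns { rows: [...], total: n }
    root: 'rows',
    totalProperty: 'total',
    fields: ['name', 'value']
});
store.load();

var grid = new Ext.grid.GridPanel({
    store: store,
    columns: [
        { header: 'Name',  dataIndex: 'name' },
        { header: 'Value', dataIndex: 'value' }
    ],
    height: 400,
    renderTo: 'grid-container'
});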
I'm not doing it within a Window/TabPanel (the grid is shown directly within the main page body), but I doubt that matters. Yes, Ext 2.2 should and does handle 1000 rows reasonably well (there's some delay, but it's certainly not 60 seconds).
Things to consider:
How are you reading the data? Does it actually take 60 seconds to render, or is a significant part of that spent loading the data?
Can you paginate in, say, increments of 100 (see the sketch after this list)? Or, if not, lazy load?
Is there anything else happening on this page perhaps that results in this delay?
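On the pagination point, a hedged Ext 2.x sketch: attach a PagingToolbar so the store requests one page at a time. This assumes the server honours start/limit parameters, and reuses the store and columns from the sketch above.
var grid = new Ext.grid.GridPanel({
    store: store,       // a JsonStore as sketched earlier
    columns: columns,   // your existing column definitions
    bbar: new Ext.PagingToolbar({
        store: store,
        pageSize: 100,  // the "increments of 100" suggested above
        displayInfo: true
    }),
    height: 400,
    renderTo: 'grid-container'
});

store.load({ params: { start: 0, limit: 100 } }); // first page only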
Not sure, as I have not used it myself, but I did come across this when looking at grid components:
Buffer ExtJS Grid
Can you also limit pages to rendering fewer rows, or is there a need to have 1000 per page (quite a lot for users to look at)? You might find, for example, that 250 is more usable and more efficient at the same time.