I need to fetch a large amount of data (maybe around 10K records) from the DB and show it as a report (I use DataTable), with data filtering/searching and pagination.
Question: which of the options below is the best/recommended way?
Option 1: Fetch all the records at once and store them on the front end (as an object). If a filter is applied, I filter the object and display the result. Likewise, I won't interact with the DB for pagination, since I already have all the records locally.
Option 2: Contact the DB every time a filter/search is applied. Likewise for pagination: for example, if I select page 5, I send a query to the DB to get only that page's data and display it. Note: the number of records per page is also selectable.
If there is any other, better way, please guide me.
Thanks,
I am not familiar with DataTable, but it appears to be similar to jqGrid, which I'm familiar with.
I prefer your proposed solution #2. You are better off fetching only what you need. If you're only displaying, say, 100 rows at a time, it's wasteful (in terms of both bandwidth and local memory usage) to fetch all 10k rows at once.
Use LIMIT on the MySQL side to fetch only the records you need. If you want, say, records 200 through 299 for page 3, you'd add LIMIT 200, 100 to the end of your query (the first parameter to LIMIT says "start at offset 200" and the second says "fetch 100 rows"). If DataTable works like jqGrid, you should be able to re-query the database and repopulate your table when the user changes pages, and this fetch will be done in the background with AJAX, which conserves bandwidth. Your query will be identical except for the range specified by the LIMIT at the end.
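For reference, a minimal sketch of that flow in client-side JavaScript (the /records endpoint and the renderTable helper are hypothetical; the server is assumed to append the LIMIT clause to the otherwise identical query):

```javascript
// Compute the LIMIT offset from the selected page and page size, then fetch
// only that slice via AJAX and repopulate the table with it.
async function loadPage(page, pageSize) {
  const offset = (page - 1) * pageSize;   // page 3 with 100 rows/page -> offset 200
  // Server side: append "LIMIT offset, pageSize" (e.g. LIMIT 200, 100) to the query.
  const response = await fetch(`/records?offset=${offset}&limit=${pageSize}`);
  const rows = await response.json();
  renderTable(rows);                      // hypothetical helper that redraws the table body
}
```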
Think of it this way: say you use GMail and you never archive your messages, so your inbox contains 20,000 emails, but only shows 100 per page. Do you think Google has designed the GMail front-end so that all 20k subject and from lines are fetched at once and stored locally, or is the server queried again when the user changes pages? (It's the latter.)
I have been assigned a task to handle a large amount of data and show it on a web page in tabular form. I'm using HTML/JSP and JS for the frontend and Java as the backend.
The business logic is to query the database (Oracle, in my case) and get the data.
The query looks something like:
Select field1, field2 etc.. from table where field1 = "SearchString"
Limit 30
The search string will be given by the user.
So, each time the query gets executed I'm getting 30 rows and storing them in a bean.
Then, with the field2 data from iteration 1, I execute the query again, which gives another 30 rows; I append those to the bean, and the loop continues until there are no matching records. After that I need to display the bean data in the UI in tabular form.
The problem arises when the data is huge. For example, the iteration runs 1000 times, giving 30k records. The code then stays stuck in this loop for a long time and the UI screen just shows a loading state.
Is there a better approach to my situation?
Note: I can't perform any operation on the query, because that's forbidden.
Also, the query is a pseudo query, not the actual one. If the first record has 30k matching rows, I need to take 30 of them in each iteration.
I agree with the comments that this is not the best practice when you are trying to present thousands and thousands of rows to the UI...
It really sounds like you should implement pagination in your UI. This is done with queries... I don't know what DB system you are using, but here is a guide on pagination for SQL Server.
You can explain to the business that using pagination is better for the user. Use the example of how Google search gives you pages of search results instead of showing you millions of websites of cat pictures all on one page.
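As a rough sketch of what the UI side of that could look like (the /search endpoint and parameter names are hypothetical; the Oracle syntax in the comment assumes 12c or later):

```javascript
// The UI only ever asks for one page of results; the server turns the page
// request into a paginated query, e.g. on Oracle 12c+ by appending
// "OFFSET :offset ROWS FETCH NEXT :pageSize ROWS ONLY".
async function searchPage(searchString, page, pageSize) {
  const params = new URLSearchParams({
    q: searchString,
    offset: String((page - 1) * pageSize),
    limit: String(pageSize),
  });
  const response = await fetch(`/search?${params}`);
  return response.json();   // only pageSize rows travel to the browser per request
}
```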
I have an application that uses view panels to display data. One view panel displays unprocessed records and the other displays processed records. The user chooses an unprocessed record (using the "show values in this column as links" option) and is directed to a page where they input information. They then click a button that updates the documents using doc.replaceItemValue statements in JavaScript. The user is then directed back to the view panel that displays the unprocessed records. In order to have the just-processed record not show up in the unprocessed records, I have to reindex the database. I am using database.updateFTIndex(false) to accomplish this.
Is there a better way to accomplish this? If two or more users are submitting records, will their individual indexes step on each other?
I never had to worry about this when using mysql.
Thanks for any advice.
I've used that technique for a while in production and not been notified of any issues. Updating an index via the Database Properties or a View gives the message that it has been queued for update on the server, but I'm not sure if the same happens with the programmatic call. It may well do.
In my scenario, I'm consolidating a lot of data into individual documents, so although usage is intensive periodically, it's not a huge number of documents being updated at any one time.
I'm also running the update to the index via sessionAsSigner; I had assumed that would be needed for authority purposes.
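For reference, a rough SSJS sketch of that pattern, assuming the standard Domino back-end classes (names as I recall them; verify against your own code):

```javascript
// Re-open the current database via sessionAsSigner so the index update runs
// with the signer's authority, then refresh the existing full-text index.
var signerDb = sessionAsSigner.getDatabase(database.getServer(), database.getFilePath());
signerDb.updateFTIndex(false);   // false = update the existing index rather than create a new one
```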
Hi, I was able to implement infinite scroll when the data is static, but my data is dynamic in nature. Let's say I have a table TABLE with fields Name and Last_modified. My requirement is to load the results batch by batch in a web page, sorted in decreasing order of the Last_modified column. Say my page size is 10: I load the 10 most recently modified items in the web page, and if the user scrolls down to the bottom of the page, I am supposed to load the next 10 most recent items.
If the data were static this would be easy to implement, but since my table data is dynamic (i.e. the Last_modified column can change at any time), a row that has already been served to the user may become a recently updated row, or a row that was not updated before may be modified by the time we make the subsequent request.
I thought I could solve the problem by telling the server which rows have already been served, so that the server can filter those out and return the recently updated rows from the rest of the data.
But this is not scalable, as the number of already-served rows eventually piles up and I would have to send this huge list of already-served row ids to the server, which hits the network!
Is there any cleaner way of solving this issue?
Maybe you can serve each row with its data plus an id (MySQL's own ids, directly).
Then the client's request can send the last served item ID, so you know which data to send back to the client (id + 1, + 2, ...).
Show some code for a more precise answer :)
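A small sketch of that idea in browser JavaScript (the /items endpoint and appendRows helper are hypothetical): instead of sending every already-served row id, the client only sends the last id it has, and the server returns the next batch after it.

```javascript
let lastServedId = 0;   // highest id the client has already rendered

async function loadNextBatch(pageSize = 10) {
  // Server side this maps to something like: WHERE id > :after ORDER BY id LIMIT :limit
  const response = await fetch(`/items?after=${lastServedId}&limit=${pageSize}`);
  const rows = await response.json();
  if (rows.length > 0) {
    lastServedId = rows[rows.length - 1].id;   // remember where this batch ended
    appendRows(rows);                          // hypothetical helper that renders the batch
  }
}
```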
Our website provides various data services to our clients; one of which is gauge data. Some gauges log information every 15 minutes, some every minute. This data is sent to our SQL database.
All of this data is displayed via a graph (generated server side via PHP and JPGraphs) with each individual log entry being displayed as a row in a collapsible table (jquery 1.10.2).
When a client wants to view the data, they select a date range and which gauges they would like to view. If they want to view the last 3 days of a gauge that logs every minute, then it loads pretty quickly. If they want to view 2 of those, then it takes around 15-30 seconds to load. The real problem comes when they want to view a month's worth of data, especially for more than 1 gauge. This can take upwards of 15-20 minutes to load, and the browser repeatedly asks if we want to stop the script that is populating the collapsible table rows (jQuery).
Obviously this is a problem since clients want a relatively fast response (1-5 min max). Ideally, we would also like to be able to pull gauge data from several months at a time. The only way we can do that now is to pull data 2 weeks at a time and compile the total manually.
For reference: if I wanted to pull a month's data for 2 of our once-a-minute-logging gauges, then there would be 86,400 rows added via jQuery to a collapsible table. The page takes approx. 5 minutes to load and the browser is terribly slow during this time period.
My question is: what is the best way to pull/graph/populate data using a PHP-based server (Symfony 1.4 framework) and JavaScript?
Should we look into upgrading our allotted processing power/RAM (we are hosted by GoDaddy)? Is there a faster way to populate collapsibles than with jQuery? All of our calculations are done server side. Should we just pull the raw data and let the client side do the data processing? Should we split the data processing between client and server?
Here's a screenshot of the web page, cropped so that client-sensitive information is not displayed:
In response to my comment.
Since you need the entire data set only on the server side (you create your graph on the server), you don't actually need to send the entire data set to the client.
Instead, send a small portion to the client, say the first 200 results. Then cache the rest of the result set in a JSON file (or a lite database, whatever you want really) and create an interface where the user can request more data. Infinite scroll is nice but has its own problems; maybe just a button that says "load more data". As people have said, anything more than a few hundred data points in a table at one time is pointless, because people won't look at it anyway. Then, when they hit the button to get more data, you send an AJAX request to the server with the correct parameters for what data you want.
For example, the first time they click getMoreData() you want to get the next 200 data points, so you send getMoreData(start=200, length=200). Your server picks up the AJAX request and finds the correct data in the JSON file or the lite database, wherever you have cached the results. The user can keep requesting more data (make sure you update your start parameter), and you only ever return a small subset. The user doesn't even realize that they don't have the whole data set in front of them, because it looks like they do.
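A minimal sketch of that "load more" button, assuming a hypothetical /gauge-data endpoint that reads the requested slice from the cached result set (appendToTable is a hypothetical render helper):

```javascript
let start = 200;          // the first 200 points were sent with the initial page load
const length = 200;       // how many points each click requests

async function getMoreData() {
  const response = await fetch(`/gauge-data?start=${start}&length=${length}`);
  const points = await response.json();   // server reads this slice from its cache
  appendToTable(points);                  // append the new rows to the collapsible table
  start += points.length;                 // advance the window for the next click
}
```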
One thing that is complicated about this is sorting and searching. If you want to implement those, then you need to make sure you go to the server side and sort/search through the cached results.
So basically you have a system where you can create the entire graph on the server side, which shouldn't take long. What does take long is loading the entire data set onto the client side, so you break that up into small chunks. You can even easily create pagination and the like with this method.
I am working on an AJAX application which will display about a million records in an HTML table. A web service returns records from the server; I build a long string by concatenating data and tags and then insert this string using innerHTML (not using the DOM, for better performance).
For testing I have put 6000 records in the database (the stored procedure takes about 4 seconds to complete).
While testing on my local system (database and application on the same machine), it took about 5 minutes to display the records in the page. After deploying to the web server it did not respond even after waiting longer. The performance looks very low. I put the records in a CSV file and its size was less than 2 MB. I can't understand why the string concatenation to build the HTML table and setting it via innerHTML takes such a huge amount of time (if that is the issue). The requirement is to show about a million records in the web page, but performance on just 6000 records is disappointing. I don't know what to do to increase performance.
Kindly guide me and help me.
You're trying to display a million records on a single page? No matter how you optimize your server code, that's a LOT of html to parse/render, especially if it's in a table.
Even using .innerHTML isn't going to "save" you any time. The rendering engine is still going to have to parse/style/render/position many millions of table rows/cells and you WILL have to wait while it's working.
If you absolutely HAVE to show all those records on a single page, try to break things up into manageable chunks. Have the AJAX call return (say) 100 records at a time, put those into the table, then fetch another 100 records, etc... At least that way you'll see the content of the page growing, rather than having to sit there and wait for 1,000,000 table rows to get displayed in a single shot.
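A rough sketch of that incremental approach (the /records endpoint and field names are hypothetical): fetch 100 records at a time and append them, so the page grows visibly instead of blocking on one enormous innerHTML assignment.

```javascript
async function loadInChunks(chunkSize = 100) {
  const tbody = document.querySelector('#report tbody');
  let offset = 0;
  while (true) {
    const response = await fetch(`/records?offset=${offset}&limit=${chunkSize}`);
    const rows = await response.json();
    if (rows.length === 0) break;                    // no more data to fetch
    const html = rows
      .map(r => `<tr><td>${r.field1}</td><td>${r.field2}</td></tr>`)
      .join('');
    tbody.insertAdjacentHTML('beforeend', html);     // append, don't rebuild the whole table
    offset += rows.length;
  }
}
```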
A better option would be to do pagination, where only 100 records are shown at a time and you present standard navigation with << first / prev / next / last >> buttons to flip through "pages" of data.
As Marc stated, you need pagination. See if this helps - How do I do pagination in ASP.NET MVC?
In addition to this, you could optimize the result by employing a master-detail pattern: fetch only the summary of the record (master), and on some action on the master, fetch the details and display them on the screen. This will reduce the amount of data being transferred from the server.
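A small sketch of the master-detail idea (the /records/{id}/details endpoint and renderDetailPane helper are hypothetical): the table only carries summary rows, and the details are fetched on demand when a row is clicked.

```javascript
async function showDetails(recordId) {
  // Only the clicked record's details travel over the wire.
  const response = await fetch(`/records/${recordId}/details`);
  const details = await response.json();
  renderDetailPane(details);   // hypothetical helper that displays the detail view
}
```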