I am looking for a proper solution to implement the scenario below.
I have a large amount of JSON data to process. Previously, the whole data set was sent to the server using REST APIs, the server performed a certain action on each JSON row, and after a very long time it sent back a response with the processing status of each row. But this approach always leaves the user confused about whether the data is being processed or not (because they are looking at a loading screen for over 10 minutes), and since I am using REST APIs I can't get any status from the server while it is processing the data.
I am looking for answers to the following:
Is there any good way to send the data in small batches?
How can I get the status from the server while it is processing the data?
My frontend is ReactJS and my backend is Node.js.
Given your situation, you can use pagination and send the data in a limited number of packets.
If the user wants more data, they can click through to the next or previous page. For this I would suggest the AntD library; it works perfectly for pagination. For your reference, here is a link for you to try.
You can also refer to this CodeSandbox by Andrew Guo.
https://codesandbox.io/s/m4lj7l2yq8?file=/src/index.js
https://ant.design/components/pagination/
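For a rough idea of how this could look in React, here is a minimal sketch using AntD's Pagination component. The /api/rows endpoint, its page/pageSize parameters, and the { rows, total } response shape are assumptions for illustration, not part of your existing API.

import React, { useEffect, useState } from 'react';
import { Pagination } from 'antd';
import axios from 'axios';

const PAGE_SIZE = 50;

function PagedTable() {
  const [page, setPage] = useState(1);
  const [rows, setRows] = useState([]);
  const [total, setTotal] = useState(0);

  useEffect(() => {
    // Fetch only the current page instead of the whole data set.
    axios
      .get('/api/rows', { params: { page, pageSize: PAGE_SIZE } })
      .then(({ data }) => {
        setRows(data.rows);
        setTotal(data.total);
      });
  }, [page]);

  return (
    <div>
      <ul>
        {rows.map((r) => (
          <li key={r.id}>{r.name}</li>
        ))}
      </ul>
      <Pagination
        current={page}
        pageSize={PAGE_SIZE}
        total={total}
        onChange={(p) => setPage(p)}
      />
    </div>
  );
}

export default PagedTable;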
As you mentioned, the data is taking too long to load, so I would also suggest indexing in the database. And you can show a Suspense fallback (or Ant Design's Spin) until the whole data set is loaded; this may help the user understand that the data is being loaded.
Here is the link for your reference
https://ant.design/components/spin/
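As a small sketch of the spinner suggestion, wrapping the slow content in AntD's Spin could look like this; the loading flag and component name are placeholders you would wire to your own fetch logic.

import React from 'react';
import { Spin } from 'antd';

// Wrap the slow-loading content so the user sees a spinner instead of a frozen page.
function ResultsPanel({ loading, children }) {
  return (
    <Spin spinning={loading} tip="Processing your data...">
      {children}
    </Spin>
  );
}

export default ResultsPanel;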
Related
Problem
I have a scenario where I need to call an API multiple times until I get the full data from the server. Since the data is huge, the server sends it chunk by chunk.
Current Implementation
The current implementation exists in GWT (Google Web Toolkit), and the scenario is handled like this: if the server is taking time to collect the full data, it sends a response with a token ID and the first chunk of data, so the client does not time out and can make another request with the latest token ID. This process continues until the client has the full data.
Challenge in the React part
Currently I am planning a Redux and Redux-Saga implementation. But the challenge is how to hold the data without updating the state, make the repeated API calls, and wait until the server has responded with the full data.
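One possible way to handle this with Redux-Saga is to accumulate the chunks in a local variable inside the saga and only dispatch to the store once the server signals the last chunk. The sketch below assumes a hypothetical /api/data endpoint that responds with { token, chunk, isLast }, loosely matching the token-based protocol you describe; the action names are also placeholders.

import axios from 'axios';
import { call, put, takeLatest } from 'redux-saga/effects';

// Hypothetical API helper: sends the latest token back to the server.
const fetchChunk = (token) =>
  axios.get('/api/data', { params: { token } }).then((res) => res.data);

function* loadFullData() {
  let token = null;
  const rows = []; // accumulate chunks locally, outside the Redux store

  try {
    while (true) {
      const { token: nextToken, chunk, isLast } = yield call(fetchChunk, token);
      rows.push(...chunk); // hold the data without touching state
      token = nextToken;   // reuse the latest token for the next request
      if (isLast) break;   // server signals the final chunk
    }
    // Only now update the store, once the full data set has arrived.
    yield put({ type: 'DATA_LOAD_SUCCESS', payload: rows });
  } catch (err) {
    yield put({ type: 'DATA_LOAD_FAILURE', error: err.message });
  }
}

export function* watchLoadData() {
  yield takeLatest('DATA_LOAD_REQUEST', loadFullData);
}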
I use Axios to get a response from an endpoint and then put that response into a variable as an object. The response I get from the server has more than 10,000 records.
My questions are:
How can I process the data so it doesn't take up too much memory and can still be inserted into a table?
Is there a request handler that can process that response better?
Or is there a method to partially download the response so that it can be partially consumed by the user from the frontend?
There are many things to look at here:
Every site has a memory limit (assigned by the browser), and the site crashes if that memory is exceeded.
The short and ugly fix would be taking all of that data and keeping only some part of it.
The right and correct fix would be implementing some kind of pagination and limiting the number of records per request. In this case you'll have proper data and a good user experience with the help of tabs/pages and limited scrolling. Most importantly, the site's memory usage will be optimized.
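As a rough sketch of that approach, assuming the backend can accept page and limit query parameters and return one slice plus a total count (those names are illustrative, not from your API):

import axios from 'axios';

const PAGE_SIZE = 100;

// Request one slice of the data instead of all 10,000+ records at once.
async function fetchPage(page) {
  const { data } = await axios.get('/api/items', {
    params: { page, limit: PAGE_SIZE },
  });
  return data; // e.g. { items: [...], total: 12345 }
}

fetchPage(1).then(({ items, total }) => {
  console.log(`showing ${items.length} of ${total} records`);
});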
Consider I have a zoo app that shows all the zoos for each city. Each city is a page with a list of zoos.
In my current solution, on each page I have an ajax call to the server that pulls the list of zoos for that particular city.
Performance is extremely important to me, and my thought was to remove the ajax call and replace it with a JSON object that lives in the app. That way I save a call to the server, and I believe the data will arrive faster.
Does this solution make sense? There are around 40 cities with ~50 zoos each.
Consider the data is static and will never change.
Since 900 records is not much**, you can get all the records at once during the initial load and filter the all-records array by city. That way your user experience will be much smoother, since client-side JS processing is far cheaper than the network latency of an extra call.
** Note: strictly considering a data set size of ~900.
Another solution: cache the data in the session scope, and whenever there is a request for a specific city, check whether it is available in the session scope; if it's not there, make a network call.
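A minimal sketch combining both suggestions: load the full ~900-record list once, keep it in sessionStorage for the session, and filter by city on the client. The /api/zoos endpoint and the record shape are assumptions.

// Load all zoos once, cache them for the session, then filter locally.
async function getZoosByCity(city) {
  let all = JSON.parse(sessionStorage.getItem('zoos') || 'null');

  if (!all) {
    // Only hit the network when the session cache is empty.
    const res = await fetch('/api/zoos');
    all = await res.json();
    sessionStorage.setItem('zoos', JSON.stringify(all));
  }

  // Client-side filtering is cheap for a data set of this size.
  return all.filter((zoo) => zoo.city === city);
}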
I think the correct question is: what are my performance requirements?
You can write all your data into a JSON object and do everything on the client side without any ajax call, but in that case every client visiting your page will download all of the data, and that is another question mark.
I've developed a web application around the concept of a Single Page Application, but with none of the modern techs and frameworks.
So I have a jQuery page that dynamically requests data from localhost - a Laravel instance that compiles the entries in the DB (within a given time interval).
When the client wants to see all the entries for the last week, the app works fine. But if he wants to see the results for the whole last month... well, there are so many that PHP's default execution time (30 seconds) isn't enough to process all the data. I can easily override this, of course, but then the jQuery client will loop through these arrays of objects and do stuff with them (sort, find, sum...), so I'm not even sure jQuery can handle that much data.
So my question can be broken in two:
Can Laravel's ->paginate() be used so that the jQuery ajax request can also chunk the data? How does this work (hopefully in a manner that doesn't force me to rewrite all the code)?
How can I store large amounts of information on the client? It's only temporary, but the users will hang around on my webpage for a considerable amount of time, and I don't want them to wait 5 minutes every time they press a button.
Thanks.
If you want to provide an interface to a large amount of data stored in a backend, you should paginate the data. This is a standard approach, so I'm sure your client will be OK with that.
Using pagination is pretty simple - see the docs for Laravel 5.0 here: http://laravel.com/docs/5.0/pagination
In order to paginate results in the backend, you need to call paginate($perPage) on your query instead of get() in your controller, like this:
$users = User::whereIsActive(true)->paginate(15);
This will return a paginated result with 15 records per page. The page number will be taken from the page parameter of the request. In order to get the 3rd page of users, your frontend jQuery app needs to send a request to a URL like:
/users?page=3
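On the jQuery side, the request for that page could look roughly like this, assuming the controller returns the paginator as JSON (Laravel's paginator serializes to an object with data, current_page and last_page fields); renderUsers and renderPager stand in for your own rendering code.

function loadUsersPage(page) {
  $.getJSON('/users', { page: page }, function (response) {
    // The records are in response.data; the rest is paging metadata.
    renderUsers(response.data);
    renderPager(response.current_page, response.last_page);
  });
}

loadUsersPage(3); // requests /users?page=3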
I don't recommend caching data in the frontend application. The data can be changed by another user and you won't even know about it. And with pagination, your requests should be lightweight enough that you can stop worrying about the cost of fetching each page of results.
Not sure if you're subscribed to Laracasts, but Jeffrey Way is amazing at explaining features of Laravel, and I highly recommend his videos.
In short, you can paginate the results, and then in the view, when you call foreach on your items, you can array_chunk() the results to display them however you need. But the paginated results are going to be fetched using a query string in the URL, and I'm not sure that is what you want if you're already using a lot of jQuery to keep everything on the same page.
https://laracasts.com/lessons/crazy-simple-pagination
But assuming you're already paginating the results with whatever jQuery you've already written for the JSON data...
You could also use query scopes to fetch only the data for the requested span of time, creating a simple API to use with ajax. I think that's probably what you're looking for.
So here's what I would do assuming you're already doing some pagination manually with your javascript.
Create a few query scopes to filter the data for different lengths of time
Create simple routes to fetch results from a URI using the query scopes
Get the JSON data from those routes by performing ajax requests to the URIs you created (see the sketch after the link below)
More information on Query Scopes: http://laravel.com/docs/5.1/eloquent#query-scopes
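To illustrate the last step, the ajax side could look something like this; the /entries/last-week and /entries/last-month routes are hypothetical names for whatever routes you expose over your query scopes, and renderTable stands in for your existing code.

function loadEntries(range) {
  // e.g. range = 'last-week' or 'last-month'
  return $.getJSON('/entries/' + range);
}

loadEntries('last-month').done(function (entries) {
  // Sort, find and sum on the smaller, pre-filtered result set client-side.
  renderTable(entries);
});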
Our website provides various data services to our clients; one of which is gauge data. Some gauges log information every 15 minutes, some every minute. This data is sent to our SQL database.
All of this data is displayed via a graph (generated server side via PHP and JPGraphs) with each individual log entry being displayed as a row in a collapsible table (jquery 1.10.2).
When a client wants to view the data, they select a date range and which gauges they would like to view. If they want to view the last 3 days of a gauge that logs every minute, it loads pretty quickly. If they want to view 2 of those, it takes around 15-30 seconds to load. The real problem comes when they want to view a month's worth of data, especially for more than one gauge. This can take upwards of 15-20 minutes to load, and the browser repeatedly asks if we want to stop the script that is populating the collapsible table rows (jQuery).
Obviously this is a problem, since clients want a relatively fast response (1-5 min max). Ideally, we would also like to be able to pull gauge data for several months at a time. The only way we can do that now is to pull data 2 weeks at a time and compile the totals manually.
For reference: if I wanted to pull a month's data for 2 of our once-a-minute-logging gauges, there would be 86,400 rows added via jQuery to a collapsible table. The page takes approx. 5 minutes to load, and the browser is terribly slow during this time.
My question is: what is the best way to pull/graph/populate data using a PHP-based server (Symfony 1.4 framework) and JavaScript?
Should we look into upgrading our allotted processing power/RAM (we are hosted by GoDaddy)? Is there a faster way to populate collapsibles than with jQuery? All of our calculations are done server side. Should we just pull the raw data and let the client side do the data processing? Should we split the data processing between client and server?
Here's a screenshot of the web page. It's cropped so that client-sensitive information is not displayed:
In response to my comment.
Since you need the entire data set only on the server side (you create your graph on the server), you don't actually need to send the entire data set to the client.
Instead, send a small portion to the client - say the first 200 results. Then cache the rest of the result set in a JSON file (or a lite database, whatever you want really) and create an interface where the user can request more data. Infinite scroll is nice but has its own problems; maybe just a button that says "load more data". As people have said, anything more than a few hundred data points in a table at one time is pointless, because people won't look at them anyway. Then, when the user hits the button to get more data, you send an AJAX request to the server with the correct parameters for the data you want.
For example, the first time they click getMoreData() you want to get the next 200 data points, so you send getMoreData(start=200, length=200). Your server picks up the AJAX request and finds the correct data in the JSON file or lite database, wherever you have cached the results. The user can keep requesting more data (making sure you update your start parameter), and you only ever return a small subset. The user doesn't even realize they don't have the whole data set in front of them, because it looks like they do.
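A bare-bones sketch of that flow on the client side; getMoreData, the /gauge-data endpoint and appendRowsToTable are illustrative names, not part of your existing app:

var start = 200;    // the first 200 rows were rendered with the page
var LENGTH = 200;   // how many rows to pull per click

function getMoreData() {
  $.getJSON('/gauge-data', { start: start, length: LENGTH }, function (rows) {
    appendRowsToTable(rows);  // your existing row-building code
    start += rows.length;     // advance the window for the next click
  });
}

$('#load-more').on('click', getMoreData);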
One thing that is complicated about this is sorting and searching. If you want to implement those, you need to make sure they are done on the server side, against the cached results.
So basically you have a system where you can create the entire graph on the server side, which shouldn't take long. What does take long is loading the entire data set on the client side, so you break that up into small chunks. You can even easily add pagination and the like with this method.