My customer gave me a table with 12k records (about 20-30 MB) and wants to show all of them on screen; for now they don't want pagination.
When the page loads, I call the API and update the component state, but it takes about 10 seconds to finish rendering, and scrolling the list is slow too.
My question is: how do I make it faster?
Here is the second case: when I try with 33k records (about 51 MB), a memory leak occurs and a white screen appears.
My question is: is there a limit on how much data state can hold? Can I update state with data this large?
First of all, what you need is infinite scroll.
It's what Netflix or Prime Video does.
First you fetch 20 records, and when you scroll to the bottom it fetches 20 more, and so on.
So it starts with 20, and as soon as you are about to hit the bottom of the scrollbar you call the API to fetch 20 more and append them to the existing list.
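A rough sketch of that idea in React (the /api/records endpoint, the response shape, and the page size of 20 are assumptions, not something from the question):

```tsx
import React, { useEffect, useRef, useState } from "react";

const PAGE_SIZE = 20; // assumed page size

export function RecordList() {
  const [records, setRecords] = useState<string[]>([]);
  const [page, setPage] = useState(0);
  const loading = useRef(false);

  // Fetch one page and append it to the records already in state.
  useEffect(() => {
    loading.current = true;
    fetch(`/api/records?offset=${page * PAGE_SIZE}&limit=${PAGE_SIZE}`) // hypothetical endpoint
      .then((res) => res.json())
      .then((next: string[]) => {
        setRecords((prev) => [...prev, ...next]);
        loading.current = false;
      });
  }, [page]);

  // When the user scrolls near the bottom of the container, request the next page.
  function onScroll(e: React.UIEvent<HTMLDivElement>) {
    const el = e.currentTarget;
    const nearBottom = el.scrollTop + el.clientHeight >= el.scrollHeight - 100;
    if (nearBottom && !loading.current) setPage((p) => p + 1);
  }

  return (
    <div style={{ height: 600, overflowY: "auto" }} onScroll={onScroll}>
      {records.map((r, i) => (
        <div key={i}>{r}</div>
      ))}
    </div>
  );
}
```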
Now, if you have scrolled a lot and you have 2000+ records and it slows down, use the react-window or react-virtualized package; these only render the content you are currently viewing in the DOM.
Check this video for reference: https://www.youtube.com/watch?v=QhPn6hLGljU
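For reference, a minimal react-window sketch, assuming the records have already been loaded into an array (the row height and list dimensions are made up):

```tsx
import React from "react";
import { FixedSizeList, ListChildComponentProps } from "react-window";

// In the real app this would be the 12k records returned by the API.
const records: string[] = [];

// Only the rows that fit in the 600px viewport (plus a small overscan)
// are mounted in the DOM; everything else is virtualized away.
const Row = ({ index, style }: ListChildComponentProps) => (
  <div style={style}>{records[index]}</div>
);

export function VirtualizedRecords() {
  return (
    <FixedSizeList height={600} width="100%" itemCount={records.length} itemSize={35}>
      {Row}
    </FixedSizeList>
  );
}
```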
For the first question, the reason it becomes slow is that the DOM you are rendering is huge, so it consumes far too much memory and the browser starts hogging your RAM. You should implement virtual scrolling so that only the visible elements are loaded into the DOM.
I searched the internet for these new features of Angular 7 but didn't fully understand them.
I went through drag and drop and virtual scrolling.
Could someone please shed some light on these?
Now consider a case where you have to display a huge chunk of data: either you implement pagination, which involves an API call per page (if the data changes frequently), or you load everything at once, which will slow down or kill the UI process.
Virtual scrolling is about loading a huge chunk of data into the DOM without hampering performance.
Its key features are:
Data is displayed according to the size of the viewport, i.e. if your container div is 500 px it will show around 10-15 rows at a time.
As you scroll, these rows are swapped out, but the number of elements in the DOM stays constant.
This is handy when you have to show a huge chunk of data without implementing pagination.
Thus it improves UI performance.
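In Angular 7 this is available out of the box through the CDK's ScrollingModule; a minimal sketch, assuming ScrollingModule is added to the module's imports and using made-up item data and sizes:

```typescript
// Requires ScrollingModule from '@angular/cdk/scrolling' in the NgModule imports.
import { Component } from '@angular/core';

@Component({
  selector: 'app-virtual-list',
  template: `
    <!-- Only the rows that fit the 500px viewport are kept in the DOM. -->
    <cdk-virtual-scroll-viewport itemSize="50" style="height: 500px">
      <div *cdkVirtualFor="let item of items" style="height: 50px">{{ item }}</div>
    </cdk-virtual-scroll-viewport>
  `,
})
export class VirtualListComponent {
  // A large array; the viewport renders only the visible slice of it.
  items = Array.from({ length: 100000 }, (_, i) => `Item #${i}`);
}
```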
I implemented a virtual list displaying multiple columns, with an array length of 1 million, which is a huge amount of data to display at a time.
The virtual list is implemented on top of virtual scroll and supports multiple columns.
Check out a detailed explanation and the code here:
https://www.codeproject.com/Articles/5260356/Virtual-List-in-Angular
I currently make some async AJAX calls and create rows in a table based on the returned data. If there are around 400-500 rows, the page hangs after the DOM creation, e.g. if I click any text box or drop-down, it gets stuck forever. If there are around 100-200 rows and I then click a text box, it is still slow, but at least it doesn't get stuck.
So I think the problem is that too many DOM nodes are being created, and this causes problems in the browser or the page.
Do you have any ideas or solutions to improve this performance?
You need to lazy load your data somehow. Ever noticed that on sites like Twitter, Facebook, and others, when you scroll to the bottom of the page they begin loading more records from the server? Good apps will also start to garbage collect the old records that have been scrolled up.
When you scroll through your Facebook news feed, it isn't loading all your friends' posts since 2007 into the browser at the same time. Once a maximum number of posts exists in the DOM, Facebook starts removing the oldest ones you scrolled past to make room for more, and grabs fresh posts from the server so you can continue scrolling. You can even see your browser's scroll bar jump up as you scroll down, because more posts are being added to the DOM.
No browser is going to be able to handle that much data. You're going to have to sit down and think of a better way to show that data to the user. Only you will know what experience your users actually want, but no matter what, you'll definitely have to reduce the number of elements you're including on the page.
Example:
Notice how the browser scroll bar jumps up a bit when it gets to the bottom. Twitter reaches the bottom and then loads more data to scroll through. It will eventually start cleaning up data at the top of the page as well if I scroll far enough.
The simplest solution is probably going to be to send a page number with your AJAX requests and have your server return only the results for that page of data.
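A rough sketch of that idea, using a sentinel element below the last rendered row to trigger the next request (the /items endpoint, the response shape, and the table markup are all assumptions):

```typescript
// A <div id="sentinel"> sits just below the last rendered row; when it
// scrolls into view we ask the server for the next page of results.
let page = 1;
let loading = false;

const tbody = document.querySelector<HTMLTableSectionElement>("#results tbody")!;
const sentinel = document.querySelector<HTMLDivElement>("#sentinel")!;

async function loadPage(n: number): Promise<void> {
  loading = true;
  const res = await fetch(`/items?page=${n}`); // hypothetical endpoint returning one page of rows
  const rows: { id: number; name: string }[] = await res.json();
  for (const row of rows) {
    const tr = document.createElement("tr");
    tr.innerHTML = `<td>${row.id}</td><td>${row.name}</td>`;
    tbody.appendChild(tr);
  }
  loading = false;
}

new IntersectionObserver((entries) => {
  if (entries[0].isIntersecting && !loading) {
    loadPage(++page);
  }
}).observe(sentinel);

loadPage(1);
```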
I'm trying to learn more about how Chrome (and other browsers) render pages. I now have a reasonable understanding of how the DOM and CSSOM are constructed, and I know a bit about the Layout and Paint phases.
I tried the following idea:
render 1000 items created by regular calls to the DOM API
each item is placed inside a <div> element (floated left) and appended to <body>
this takes around 1000 ms according to the output of timing.js (a tool from Addy Osmani), and it is also visible in the Timeline in Chrome
A JSFiddle is available here. Sorry for the quality of this example, but I drafted it in a hurry.
Based on the Timeline analysis, I found that every <div> is laid out before the first Paint occurs. This is certainly not what I want, because only the first 10 are visible on screen after the page loads.
So I figured I would first render the initial 10 items, which I hoped would cause a Paint. Right after that (wrapping the call in setTimeout(function(){}, 0)), I append the remaining 990 items to the body.
I was hoping the Paint would happen right after the initial 10 items are laid out, but that doesn't happen. I can see two Layout phases, and the Paint happens only after all 1000 items are laid out. Any idea where I went wrong?
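Since the fiddle itself isn't reproduced here, this is a rough reconstruction of the two-phase approach described above; the item markup, styling, and use of a DocumentFragment are guesses:

```typescript
// Append `count` floated <div> items to <body>.
function appendItems(count: number): void {
  const fragment = document.createDocumentFragment();
  for (let i = 0; i < count; i++) {
    const item = document.createElement("div");
    item.style.cssText = "float: left; width: 100px; height: 100px;";
    item.textContent = `item ${i}`;
    fragment.appendChild(item);
  }
  document.body.appendChild(fragment);
}

// Phase 1: the ten items that are actually visible after load.
appendItems(10);

// Phase 2: the remaining 990, deferred with setTimeout(..., 0) in the hope
// that a Paint happens between the two batches.
setTimeout(() => appendItems(990), 0);
```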
I am working on a project that fetches at least 50,000 data sets that have to be displayed as tabular data. The data sets are fetched from our MySQL database. The users have the following requirements:
The users don't want pagination; instead, more and more table rows should be rendered as they scroll down the page.
If they scroll up, the users should not see the table rows that were previously displayed when they scrolled down. That means I have to either delete the previously created table rows or hide them.
The lag or load time should be small enough that users don't notice any obvious latency.
Our project uses a LAMP (Python) stack with Django as the framework. We use Django templates, which are server-side templates. I don't know how to translate the same experience to the client side. I have an idea for an approach; can someone either validate or correct it?
My intended approach is given below:
1. Fetch a certain subset of rows of the original data set on load (say 5000). This will be cached server side with memcached. On page load, only a certain subset of it (say 100) will be displayed. Each page will have certain state associated with it, such as page number or page size, initialized using the HTML5 History pushState API.
2. When an AJAX request is made, additional data sets will be fetched and additional table rows will be appended to the existing table rows.
3. When the user scrolls up and reaches what would have been a previous page, the table rows get deleted or hidden.
I don't know how to implement step 3. What event should I listen for? What preconditions should I check in order to trigger step 3? I also don't know whether step 2 is the correct approach.
Here's what I would do:
1. Add a class (scrolled-past) to any row that is scrolled past the window top. (If you're using jQuery, jQuery Waypoints is a great tool for detecting this.)
2. Fetch more data when reaching the bottom.
3. When the data has been fetched, hide all .scrolled-past elements, append the new rows, and scroll to the top of the table (where the first row is now the one that was previously the uppermost visible row).
There might be glitches when hiding, appending, and scrolling, but I bet you'll nail it with an hour of tweaking. Adding the exact top offset of the uppermost visible row to the scroll offset in step 3 is one way of making it smoother.
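A rough sketch of those three steps with a plain scroll listener instead of Waypoints (the /rows endpoint, the table markup, and the thresholds are all assumptions):

```typescript
const container = document.querySelector<HTMLDivElement>("#table-container")!;
const tbody = document.querySelector<HTMLTableSectionElement>("#data-table tbody")!;
let nextPage = 2;
let loading = false;

container.addEventListener("scroll", async () => {
  // Step 1: tag rows whose bottom edge has scrolled past the top of the container.
  const containerTop = container.getBoundingClientRect().top;
  for (const row of Array.from(tbody.rows)) {
    if (row.getBoundingClientRect().bottom < containerTop) {
      row.classList.add("scrolled-past");
    }
  }

  // Step 2: fetch more data when we are near the bottom.
  const nearBottom =
    container.scrollTop + container.clientHeight >= container.scrollHeight - 50;
  if (!nearBottom || loading) return;
  loading = true;
  const res = await fetch(`/rows?page=${nextPage++}`); // hypothetical endpoint
  const pageRows: string[][] = await res.json();

  // Step 3: hide everything scrolled past, append the new rows, then scroll so
  // that the previously uppermost visible row sits at the top again.
  const firstVisible = tbody.querySelector<HTMLElement>("tr:not(.scrolled-past)");
  tbody.querySelectorAll<HTMLElement>(".scrolled-past").forEach((el) => (el.style.display = "none"));
  for (const cells of pageRows) {
    const tr = document.createElement("tr");
    tr.innerHTML = cells.map((c) => `<td>${c}</td>`).join("");
    tbody.appendChild(tr);
  }
  firstVisible?.scrollIntoView();
  loading = false;
});
```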
I am working on some custom jQuery/JavaScript navigation for a site, and I am curious about the performance implications of a design decision.
The way it works, every option has up to 8 child options, and this hierarchy can go 4 levels deep. I believe this makes for 8^4, or 4096, possible navigation items (probably fewer, but this is the max). These relationships are defined on the server side.
Currently I am working with test data, so there are only about 50 navigation items. When the page loads, I create every navigation item and then display only what is needed for the current selection.
Should I consider rewriting this to load only the items that are needed when a selection is made, via an AJAX call or something? I am concerned that my current approach may not scale well if it goes up to 4096 navigation items.
If having 4096 navigation items is a real possibility, then you'll have to do something like what you're describing. Simply loading the items into the DOM will take considerable time, and further processing will cause greater delays and a poor experience.
For a small number of items, it probably isn't worth your while to over-engineer the solution. However, the performance gains on a large number of items would be significant.
Here is an example of on-demand loading in a Telerik TreeView. I'm not advocating purchasing the controls (great controls, but expensive); however, it is an excellent example of what is possible. Coding this on your own wouldn't be difficult and, as you can see, it makes for a great user experience.
My two cents: if you have the time, do it now, before things get even more complicated and difficult to change later.
Downloading them all at the same time is definitely an option, though loading them into the DOM is another story. If you really reach the 4096-item limit, you could be looking at pushing down 1-2 megabytes per page load (not too much, considering image sizes). Only if you were looking at much more data (say 16 nodes, 8 levels deep: 16^8) would it be a real concern.
You could always load 2 levels deep (8^2 = 64), then when the user opens a panel, load everything for that panel. The second layer they need to click through should give you enough time to load the rest of the values.
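A sketch of that on-demand pattern, with a made-up /nav/children endpoint and markup: fetch and cache a node's children the first time its panel is opened.

```typescript
// Cache children per parent node so each panel is only fetched once.
const childCache = new Map<string, { id: string; label: string }[]>();

async function openPanel(nodeId: string, panel: HTMLElement): Promise<void> {
  if (!childCache.has(nodeId)) {
    const res = await fetch(`/nav/children?parent=${nodeId}`); // hypothetical endpoint
    childCache.set(nodeId, await res.json());
  }
  // Rebuild the panel from the cached child options.
  panel.innerHTML = "";
  for (const child of childCache.get(nodeId)!) {
    const li = document.createElement("li");
    li.textContent = child.label;
    li.dataset.id = child.id;
    panel.appendChild(li);
  }
  panel.hidden = false;
}
```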