Limitation of Angular and browsers with regards to data loaded - javascript

I would like to know the maximum amount of data that the Angular framework can handle. Say I am displaying a chart using Angular and some charting library like Chart.js. I'd like to know how much data the browser can display properly, at what point it becomes slow, and at what point it crashes.

Your question has no simple answer, but I will try to flatten it and give a simple answer, or at least simple things to consider.
Angular (at runtime), like many other frameworks, is simply JavaScript, so let us reduce the question to "Limitations of JavaScript and browsers with regard to the data loaded".
JavaScript itself has no fixed upper limit on the amount of memory or data it can handle. I've seen JavaScript applications that require more than 15 GB of RAM, and they performed well too. So assume data size itself is not an issue (unless your application is poorly implemented, leaking memory or just not very efficient, of course).
The main challenge, as I see it, is displaying and manipulating the information without causing unnecessary delay or unresponsiveness.
Displaying the information - let's say you have a list (or a table) containing 1,000,000 possible gifts which you then want to display for the user to select.
Adding the list items to the document one by one is tempting, but it forces the browser to repaint after each addition (causing a delay or full unresponsiveness until it finishes). A better way is to build a DOM element (call it N) that is kept only in memory, append all of the elements corresponding to the list items to N (still just an in-memory operation), and finally append N, containing the entire list, to the document. This is a much better way to display a large amount of data; see the sketch below.
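For illustration, here is a minimal vanilla-JavaScript sketch of that approach, using a DocumentFragment as the in-memory element N (the gifts array and the #gift-list element are invented for the example):

// Assume `gifts` is the array of 1,000,000 items already held in memory.
var list = document.getElementById('gift-list');   // hypothetical target <ul>
var fragment = document.createDocumentFragment();  // the in-memory element "N"

for (var i = 0; i < gifts.length; i++) {
  var li = document.createElement('li');
  li.textContent = gifts[i].name;
  fragment.appendChild(li);  // purely in memory: no layout, no repaint
}

// One insertion into the document, so the browser lays out and paints once.
list.appendChild(fragment);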
Manipulating the information - displaying is indeed not enough; you will also want to move, drag, sort and filter the data being displayed. As mentioned before, it is a bad idea to remove many elements directly from the DOM. Instead, detach the container from the document, manipulate the data inside it, and then add the container right back to the document (sketched below). Angular does this kind of magic for you.
(Toggling the display: none/block CSS property of many elements has a similar blocking effect, as I recall.)
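For instance, a rough jQuery sketch of that detach, manipulate, reattach pattern (the selectors and the data field are invented):

// Take the container out of the live document (keeps data and event handlers).
var $container = $('#big-list').detach();

// Manipulate freely while detached: no layout or repaint happens per change.
$container.children('li').each(function () {
  $(this).toggleClass('selected', $(this).data('price') < 10); // hypothetical field
});

// Re-insert once; the browser recalculates layout a single time.
$container.appendTo('#list-wrapper');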
A good practice is to implement an application/page that shows only the amount of data a human can process at a single glance. The rest should be kept in the application's data layer, in memory, and loaded into the display when it is actually needed or requested.
To conclude, you can deal with huge amounts of data as long as you provide a mechanism that efficiently filters the information being displayed.
I hope it helps...
for further reading:
Slow and fast ways of adding elements to DOM
A question emphasizes the lack of memory limit used by JS
CSS display attribute performance
A good discussion about the reasons for slow DOM
About using HTML5 correctly - old but still true
Once the DOM creation procedure is understood, it is much easier to display data without affecting performance or the user experience

Related

Reduce Javascript memory leaks?

I have been trying to reduce the JavaScript memory used by our web application (a single-page application without an MVC framework, built with jQuery & Bootstrap and a lot of plugins) and to remove the memory leaks we have. In our web application there is a dashboard filled with Highcharts; depending on the user's choice there can be anywhere from 0 to 50 of them. There are plenty of pages with DataTables, and there are a lot of forms to create and update data throughout the app.
Now I have been asked to reduce memory usage. I have already reduced some memory leaks using the Chrome developer tools, with heap snapshot profiling in the Memory tab. On every refresh of the charts I was adding a new chart without deleting the old one, so removing the old chart when it is replaced made a good impact. Still, we want to dig deeper and see what else we can improve.
Right now I am seeing the same thing with our DataTables: we add new DataTables whenever you go to a new page. After checking, I found out that we are not destroying them afterwards; however, destroying them does not make the kind of impact that removing the old Highcharts did.
Here we make a jQuery Ajax call to change the content of the page. We use .html(newContent) on our main container, as well as on every HTML element whose content we want to change. From what I have read, this should remove all the event listeners and the references to those elements, but the DataTables are still present in memory, and maybe many other things are too.
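Roughly, the page swap plus the DataTables teardown we tried looks like this (a simplified sketch; the selectors are invented):

function replacePageContent(newContent) {
  var $container = $('#main-container'); // hypothetical main container

  // Explicitly tear down any DataTables inside the container before wiping it,
  // so their internal caches stop referencing the old table nodes.
  $container.find('table').each(function () {
    if ($.fn.dataTable.isDataTable(this)) {
      $(this).DataTable().destroy();
    }
  });

  // .html() removes jQuery data/handlers on the old children and inserts the new markup.
  $container.html(newContent);
}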
Furthermore, I tried removing the BODY tag and taking memory snapshots. The size of the snapshot remains the same. It shows that a lot of elements have been detached, but the impact on JavaScript memory usage and JavaScript memory allocation is close to zero.
Am I on the right path to reducing memory leaks, or is there nothing more I can do?
Here are a few screenshots.
All of these screenshots were taken after removing all of the children of the body tag. If we remove all of the elements, JavaScript memory usage should drop, but here I see a change close to zero.
Here you can see we are still using 50 MB of JavaScript memory.
We still have DataTables in memory.
There is an increase in the snapshot size and also an increase in memory allocation. Yes, further down there are decreases in memory allocation where it shows detached elements, but they are nominal.
How should all of this data be interpreted?
I have read a lot of Stack Overflow questions and blog posts about memory leaks, but nothing is making a big impact.

React-like programming without React

I grew up using jQuery and have been following a programming pattern which one could say is "React-like", but without using React. I would like to understand why my rendering performance is nonetheless so good.
As an example, in my front-end, I have a table that displays some "state" (in React terms). However, this "state" for me is just kept in global variables. I have an update_table() function which is the central place where updates to the table happen. It takes the "state" and renders the table with it. The first thing it does is call $("#table").empty() to get a clean start and then fills in the rows with the "state" information.
I have some dynamically changing data (the "state") every 2-3 seconds on the server side which I poll using Ajax and once I get the data/"state", I just call update_table().
This is the perfect problem for solving with React, I know. However, after implementing this simple solution with JQuery, I see that it works just fine (I'm not populating a huge table here; I have a max of 20 rows and 5 columns).
I expected to see flickering because of the $("#table").empty() call followed by adding rows one by one inside the update_table() function. However, the browser (Chrome/Safari) somehow seems to be doing a very good job of updating only the elements that have actually changed (almost as if the browser had an implementation of a virtual DOM with diffing, like React!).
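For reference, a stripped-down sketch of the pattern described above (the field names and the polling endpoint are invented):

// Global "state", refreshed by polling.
var rows = [];

function update_table() {
  var $table = $("#table");
  $table.empty(); // clean start

  // Rebuild every row from the current state. Nothing yields to the browser
  // between empty() and the appends, so the page is painted only once at the end.
  rows.forEach(function (row) {
    $table.append("<tr><td>" + row.name + "</td><td>" + row.value + "</td></tr>");
  });
}

// Poll the server every 2 seconds and re-render.
setInterval(function () {
  $.getJSON("/api/state", function (data) { // hypothetical endpoint
    rows = data;
    update_table();
  });
}, 2000);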
I guess your question is why you can have such a good graphics performance without React.
What you see as a "good graphics performance" really is a matter of definition or, worse, opinion.
The classic Netscape processing cycle (which all modern browsers inherit) has basically four main stages. Here is the full-blown Gecko engine description.
As long as you manipulate the DOM, you're in the "DOM update" stage and no rendering is performed AT ALL. Only when your code yields does the next stage start. Because of the DOM changes, the sizes or positions of some elements may have changed too, so this stage recomputes the layout. After that comes rendering, where the pixels are redrawn.
This means that if your code changes a very large number of elements in the DOM, they are all still rendered together, not in an incremental fashion. So the empty() call does not trigger any rendering if you repopulate the table immediately afterwards.
Now, when you see the pixels of an element like "13872", the rendering stage may render those at the exact same position with the exact same colors. You don't have any change in pixel color, and thus there is no flickering you could see.
That said, your graphics performance is excellent -- yes. But how did you measure it? You just looked at it and decided that it's perfect. Visually it really may be very good, because all you need is to keep the layout stage from sizing or positioning anything differently.
But actual performance is not measured with the lazy eyes of us humans (there are many usability studies in that field; let's say that one frame at 60 Hz takes 16.6 ms, so it is enough to render in less than that). It is measured with an actual metric (updates per second or whatever). Consider that on older machines with older browsers and slower graphics cards your "excellent" performance may look shameful. How do you know it is still good on an old Toshiba tablet with 64 MB of graphics memory?
And what about scaling? If you have 100x the elements you have now, are you sure it will scale well? What if some data takes more (or less) space and changes the whole layout? All of these edge conditions may not be covered by your simple approach.
A library like React takes into account those cases you may not have encountered yet, and offers a uniform pattern to approach them.
So if you are happy with your solution, you don't need React. I often avoid jQuery because ES5/ES6 is already pretty good these days and I can just jot down 3-4 lines of code using document.getElementById() and the like. But I realize that on larger projects or in complex cases jQuery is the perfect tool.
Look at React like that: a tool that is useful when you realize you need it, and cumbersome when you think you can do without. It's all up to you :)
When you have something like this:
$("#table").empty()
.html("... new content of the table ... ");
then the following happens:
.empty() removes the content and marks the rendering tree / layout as invalid.
.html() adds the new content and marks the rendering tree / layout as invalid.
Marking as invalid, among other things, calls InvalidateRect() (on Windows), which causes the window to receive a WM_PAINT event at some point in the future.
While handling WM_PAINT, the browser calculates the layout and renders the result.
Therefore multiple change requests are collapsed into a single window painting operation.

Maximum number of element IDs you'd want on a page?

Having trouble searching on this. The only thing I've found so far was related to a mapping API where (for map markers) JavaScript bogs down badly around 500 IDs. I'm pretty sure that was related to the particular scripts in use though and not a general rule.
I have a page with a complicated list. Each item in the list has about 10 distinct IDs on it, and there are several JavaScript scripts in play. The list is paginated and the user has a choice of how many items to show per page. Considering each item in the list has ~10 IDs, I'm wondering what I should set the maximum number of items per page to (the total number of records the user can choose to view at one time on a page).
I mean, aside from increased load times based on the raw number of records, is there a known number of IDs where a limit is hit or where CSS/JavaScript performance starts to degrade sharply?
Edit: Pardon, each 'item' is a complex business card (to paint a picture) with three divs in play and a small form. That's why each item (record) has about 10 IDs.
I think you will find that it will ultimately depend on the user's browser and personal system.
What would probably make the most sense is to do some research on your intended user base: which browsers they use and how much RAM they have available. You can profile your application in your local environment (and there has already been a discussion about doing just that). That would give you an idea of how many system resources you're using versus how much the user has available.
But given the amount of highly graphical JavaScript stuff going on with HTML5 etc., I would wager you're going to need a ridiculous amount of data on the page before JavaScript performance becomes your biggest sticking point (at least in the most modern browsers; if you have to support something like IE6, all bets are off).
You didn't answer my comment and you've already accepted an answer, but I'll give an answer anyway, because I've got a feeling you may not need all the IDs you have.
Let's take following example code:
<div id="card-1" onclick="doSomethingWithCard('card-1')">
  <h2 id="card-1-name">John Doe</h2>
</div>
Both IDs here can be unnecessary. Many people try to get a reference to the element that triggered an event by passing its ID to the event handler and using getElementById. Instead, this already holds that reference:
<div onclick="doSomethingWithCard(this)">
...
</div>
And once you have that reference, you can use getElementsByTagName, a CSS selector (usually via a JavaScript framework/library like jQuery) or XPath to access the elements inside, instead of giving them separate IDs. This works best if all items share the same inner structure, using the appropriate elements (h2, h3, ul, form, etc.) and classes.
jQuery example:
function doSomethingWithCard(card) {
  alert($(card).find("h2").text());
}

Javascript performance problems with too many dom nodes?

I'm currently debugging an Ajax chat that just endlessly fills the page with DOM elements. If you have a chat going for something like 3 hours, you will end up with god knows how many thousands of DOM nodes.
What are the problems related to extreme DOM Usage?
Is it possible that the UI becomes totally unresponsive (especially in Internet Explorer)?
(And related to this question is, of course, the solution, if there are any solutions other than manual garbage collection and removal of DOM nodes.)
Most modern browsers should be able to deal pretty well with huge DOM trees. And "most" usually doesn't include IE.
So yes, your browser can become unresponsive, either because it needs too much RAM (which leads to swapping) or because its renderer is simply overwhelmed.
The standard solution is to drop old elements, say after the page has 10,000 lines worth of chat. Even 100,000 lines shouldn't be a big problem, but I'd start to feel uneasy for numbers much larger than that (say millions of lines).
[EDIT] Another problem is memory leaks. Even though JS uses garbage collection, if you make a mistake in your code and keep references to removed DOM elements in global variables (or in objects referenced from a global variable), you can run out of memory even though the page itself contains only a few thousand elements.
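A contrived sketch of the kind of mistake meant here (all names are invented):

// A global cache that outlives the chat lines it points to.
var lineCache = [];

function addChatLine(text) {
  var div = document.createElement('div');
  div.textContent = text;
  document.getElementById('chat').appendChild(div);
  lineCache.push(div); // reference kept forever
}

function trimChat() {
  var chat = document.getElementById('chat');
  while (chat.childNodes.length > 10000) {
    chat.removeChild(chat.firstChild);
    // Bug: the removed node is still referenced from lineCache, so it can never
    // be garbage collected. The fix would be to drop it there too: lineCache.shift();
  }
}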
Just having lots of DOM nodes shouldn't be much of an issue (unless the client is short on RAM); however, manipulating lots of DOM nodes will be pretty slow. For example, looping through a group of elements and changing the background color of each is fine if you're doing this to 100 elements, but may take a while if you're doing it on 100,000. Also, some old browsers have problems when working with a huge DOM tree--for example, scrolling through a table with hundreds of thousands of rows may be unacceptably slow.
A good solution to this is to buffer the view. Basically, you only show the elements that are visible on the screen at any given moment, and when the user scrolls, you remove the elements that get hidden, and show the ones that get revealed. This way, the number of DOM nodes in the tree is relatively constant, but you don't really lose anything.
Another similar solution to this is to implement a cap on the number of messages that are shown at any given time. This way, any messages past, say, 100 get removed, and to see them you need to click a button or link that shows more. This is sort of what Facebook does with their profiles, if you need a reference.
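A minimal sketch of such a cap (the container id and the limit of 100 are arbitrary):

var MAX_MESSAGES = 100;

function appendMessage(html) {
  var container = document.getElementById('messages'); // hypothetical chat container
  container.insertAdjacentHTML('beforeend', html);

  // Drop the oldest messages once the cap is exceeded.
  while (container.children.length > MAX_MESSAGES) {
    container.removeChild(container.firstElementChild);
  }
}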
Problems with extreme DOM usage can boil down to performance. DOM scripting is very expensive, so constantly accessing and manipulating the DOM can result in a poor performance (and user experience), particularly when the number of elements becomes very large.
Consider HTML collections such as document.getElementsByTagName('div'), for example. This is a query against the document, and it will be re-executed every time up-to-date information is required, such as the collection's length. This can lead to inefficiencies; the worst cases occur when accessing and manipulating collections inside loops.
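For example, a small illustration of the loop problem (the class name is arbitrary):

var divs = document.getElementsByTagName('div'); // a live HTMLCollection

// Slow: divs.length is answered against the live document on every iteration,
// and if the loop body added <div>s the collection would keep growing.
for (var i = 0; i < divs.length; i++) {
  divs[i].className = 'highlight';
}

// Better: take a static snapshot (or at least cache the length) before looping.
var snapshot = Array.prototype.slice.call(divs);
for (var j = 0; j < snapshot.length; j++) {
  snapshot[j].className = 'highlight';
}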
There are many considerations and examples, but like anything it depends on the application.

I have a couple thousand javascript objects which I need to display and scroll through. What are my options?

I'm working off of designs which show a scrollable box containing a list of a user's "contacts".
Users may have up to 10,000 contacts.
For now assume that all contacts are already in memory, and I'm simply trying to draw them. If you want to comment on the question of how wise it is to load 10k items of data in a browser, please do it here.
There are two techniques I've seen for managing a huge list like this inside a scrollable box.
Just Load Them All
This seems to be how Gmail approaches displaying contacts. I currently have 2k contacts in Gmail. If I click "all contacts", I get a short delay, then the scrollable box at the right begins to fill with contacts. It looks like they're breaking the task into chunks, probably separating the DOM additions into smaller steps and putting those steps into timeouts in order not to freeze the entire interface while the process completes.
pros:
Simple to implement
Uses native UI elements the way they were designed to be used
Google does it, it must be ok
cons
Not totally snappy -- there is some delay involved, even on my development machine running Firefox. There will probably be quite a lot of delay for a user on a slower machine running IE6
I don't know what sort of limits there are in how large I can allow the DOM to grow, but it seems to me there must be some limit to how many nodes I can add to it. How will an older browser on an older machine react if I ask it to hold 10k nodes in the DOM?
Draw As Needed
This seems to be how Yahoo deals with displaying contact lists. They create a scrollable box, put a super-tall, empty placeholder inside it, and draw contacts only when the user scrolls to reveal them.
pros:
DOM nodes are drawn only as needed, so there's very little delay while loading, and much less risk of overloading the browser with too many DOM nodes
cons:
Trickier to implement, and more opportunity for bugs. For example, if I scroll quickly in the yahoo mail contact manager as soon as the page loads, I'm able to get contacts to load on top of one another. Of course, bugs can be worked out, but obviously this approach will introduce more bugs.
There's still the potential to add a huge number of DOM nodes. If the user scrolls slowly through the entire list, every item will get drawn, and we'll still end up with an enormous DOM
Are there other approaches in common use for displaying a huge list? Any more pros or cons with each approach to add? Any experience/problems/success using either of these approaches?
I would chunk up the DOM-writing into handle-able amounts (say, 25 or 50), then draw the chunks on demand. I wouldn't worry about removing the old DOM elements until the amount drawn gets quite large.
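A rough sketch of that chunked approach (the chunk size, field name and timeout are arbitrary):

var CHUNK_SIZE = 50;

function renderContacts(contacts, container) {
  var index = 0;

  function renderChunk() {
    var fragment = document.createDocumentFragment();
    var end = Math.min(index + CHUNK_SIZE, contacts.length);

    for (; index < end; index++) {
      var li = document.createElement('li');
      li.textContent = contacts[index].name; // hypothetical field
      fragment.appendChild(li);
    }
    container.appendChild(fragment);

    // Yield back to the browser between chunks so the UI stays responsive.
    if (index < contacts.length) {
      setTimeout(renderChunk, 0);
    }
  }

  renderChunk();
}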
I would divide the contacts into chunks, and keep a sort of view buffer alive that changes which chunks are written to the DOM as the user scrolls through the list. That way the total number of dom elements never rises above a certain threshold. Could be fairly tricky to implement, however.
Using this method you can also dynamically modify the size of chunks and the size of the buffer, depending on the browser's performance (dynamic performance optimization), which could help out quite a bit.
It's definitely non-trivial to implement, however.
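To make the idea concrete, here is a hedged sketch of such a view buffer with a fixed row height (the markup, ids and numbers are all invented):

// Assumed markup: a scrollable #viewport containing a tall #spacer (which gives
// the scrollbar its full height) and an absolutely positioned #window that
// holds only the rows currently on screen.
var ROW_HEIGHT = 30; // px; every contact row is assumed to be the same height

function initContactList(contacts) {
  var viewport = document.getElementById('viewport');
  var spacer = document.getElementById('spacer');
  var win = document.getElementById('window');

  spacer.style.height = (contacts.length * ROW_HEIGHT) + 'px';

  function render() {
    var first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
    var count = Math.ceil(viewport.clientHeight / ROW_HEIGHT) + 1;

    // Move the window to the scrolled position and rebuild only the visible
    // rows; the DOM never holds more than `count` row elements.
    win.style.top = (first * ROW_HEIGHT) + 'px';
    win.innerHTML = contacts.slice(first, first + count)
      .map(function (c) { return '<div class="row">' + c.name + '</div>'; })
      .join('');
  }

  viewport.addEventListener('scroll', render);
  render();
}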
The bug you see in Yahoo may be due to absolutely positioned elements: if you keep your CSS simple and avoid absolutely/relatively positioning your contact entries, it shouldn't happen.
