Angular 5 growing browser memory - javascript

I have an Angular 5 project. It's an admin web app with many tabs; every tab contains a complicated filter and a grid. After frequently switching tabs, memory grows (the Chrome JS heap size), but the number of DOM nodes isn't growing. It's not a problem for memory to grow to 1 GB within 5 minutes. IE has a limit of about 1.5 GB and then crashes due to lack of memory.
At first I thought it was only a problem with our application or with Angular, so I created a simple HTML test page with two buttons and some plain JavaScript: one button adds a textbox and the other removes it. Then I wrote a simple E2E test that added and removed the textbox 1000 times, and memory in the browser grew as well. I tested in Chrome (66) and IE11 with the same symptoms as the Angular application: the JS heap keeps increasing, but the number of DOM nodes is fine.
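The test page logic was essentially the following (a minimal sketch of what is described above; the element ids are assumptions):

    // Two buttons and a container; one adds a textbox, the other removes it.
    var container = document.getElementById('container');

    document.getElementById('add').addEventListener('click', function () {
      var input = document.createElement('input');
      input.type = 'text';
      input.id = 'the-textbox';
      container.appendChild(input);
    });

    document.getElementById('remove').addEventListener('click', function () {
      var input = document.getElementById('the-textbox');
      if (input) {
        container.removeChild(input);
      }
    });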
Then I observed, for example, facebook.com, which is one of the best-known single-page applications. I saw the same behavior there, and it is not hard to bring the app down within moments.
I think this problem is caused by adding and removing many HTML elements over and over again.
Does anyone have similar experience or a solution?

Related

Rendering a view blocks the event loop when using Angular Material

We're using Angular together with Angular Material. We noticed that while rendering some of the more complicated views, the scripting and rendering block the event loop, which makes the whole page unresponsive for a few seconds. I created a simple page that demonstrates this problem: https://quirky-hugle-91193d.netlify.com (source code: https://github.com/lukashavrlant/angular-lag-test). If you click on the button, it will generate 200 rows, each with a single mat-checkbox component. There is also a counter that is refreshed every 10 ms as long as everything is fine. You can clearly see that the counter stops during the rendering because the event loop is blocked by the *ngFor rendering the rows. I tried to measure the length of the block: it blocks the event loop for about 400 ms on my machine.
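The counter works roughly like this (a sketch of the idea rather than the exact code from the repository; the element id is an assumption): while the main thread is blocked, the displayed value visibly stops updating.

    // Increment a visible counter every 10 ms; when rendering blocks the
    // event loop, the timer callback cannot run and the counter freezes.
    var count = 0;
    var counterElement = document.getElementById('counter');

    setInterval(function () {
      count++;
      counterElement.textContent = count;
    }, 10);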
My question is: is this completely "normal" behavior? Can it be avoided? Is it really so much data that it has to block the event loop for that long? The delay is easily noticeable by the user.
The only solution we found was to render the rows incrementally, i.e. render one row, wait 1 ms, render the next row, wait 1 ms, and so on (see the sketch below). Isn't there a better solution?
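The incremental approach boils down to something like this (a simplified, framework-agnostic sketch; in the real app the rows are rendered by Angular):

    // Render rows in small batches, yielding to the event loop between
    // batches so timers, input events and repaints can still run.
    function renderIncrementally(rows, renderRow, batchSize) {
      var index = 0;

      function renderBatch() {
        var end = Math.min(index + batchSize, rows.length);
        for (; index < end; index++) {
          renderRow(rows[index]);
        }
        if (index < rows.length) {
          setTimeout(renderBatch, 1); // wait ~1 ms, then continue
        }
      }

      renderBatch();
    }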
EDIT 1: I also tried Chrome's Performance tool. It says most of the time is spent in the "scripting" phase. I tried to look at the stack traces there, but wasn't able to identify the problem.
Adding elements to the DOM has a cost that you will never be able to eliminate.
However, if you look at the Performance tab of the devtools while you are adding the elements, you will notice a few things.
Right now it looks like you are building in dev mode, not prod. A prod build should be significantly faster.
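With the Angular CLI (assuming that's how the project is built), a production build is typically produced with:

    ng build --prod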
Secondly, while there's an associated cost to adding elements to the DOM, there's also a much higher cost to instantiating those elements, especially when it comes to Angular Material elements. They are really pretty and well made, but they are also on the heavy side in terms of both code and HTML nodes. If you have to include lots of checkboxes at once, using regular HTML elements with a bit of CSS to make them look similar, instead of an actual Angular Material element, might be the way to go.
In my case, the actual operation of inserting everything into the DOM takes ~90 ms out of the whole 350 ms, which includes the click event triggering the creation of all the elements in the ngFor loop.
The last point is further supported by the ongoing cost of the elements after they have been added, which you can also easily notice when looking at the performance profile. This is exacerbated by dev mode once again, but it's present nonetheless.
So I would recommend trying it out in a more optimized build mode, and if that's still not enough, maybe swap the Material component for a basic checkbox with CSS (maybe even a bit of SVG).
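To illustrate that suggestion (a sketch only; the class name is made up and the styling is left out), the row template could use a native checkbox instead of mat-checkbox:

    <!-- heavy: one Angular Material component instantiated per row -->
    <div *ngFor="let row of rows">
      <mat-checkbox>{{ row.label }}</mat-checkbox>
    </div>

    <!-- lighter: a native checkbox, styled with CSS to look similar -->
    <div *ngFor="let row of rows">
      <label class="fake-mat-checkbox">
        <input type="checkbox">
        {{ row.label }}
      </label>
    </div>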

Reduce Javascript memory leaks?

I have been trying to reduce the JavaScript memory used by our web application (a single-page application without an MVC framework, built with jQuery & Bootstrap and a lot of plugins), and to remove the memory leaks we have. In our web application there is a dashboard filled with Highcharts; the number of charts can be anything from 0 to 50, based on the user's choice. There are plenty of pages with DataTables, and there are a lot of forms for creating and updating data throughout the app.
Now I have been asked to reduce memory usage. I have reduced memory leaks using the Google Chrome developer tools, using heap snapshot profiling in the Memory tab. On each refresh of the charts I was adding a new chart without deleting the old one, so destroying the old chart before it is replaced made a good impact. Still, we want to dig deeper and see what else we can improve.
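The fix was essentially this pattern (a simplified sketch; the variable and container names are made up):

    // Keep a reference to the current chart and destroy it before creating
    // a replacement, so the old chart and its event handlers can be freed.
    var currentChart = null;

    function refreshChart(containerId, options) {
      if (currentChart) {
        currentChart.destroy(); // Highcharts' own cleanup
        currentChart = null;
      }
      currentChart = Highcharts.chart(containerId, options);
    }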
Right now I am seeing the same thing with our DataTables: we create new DataTables whenever you go to a new page. After checking, I found out that we were not destroying them afterwards; however, destroying them does not have the same impact that destroying the old charts had with Highcharts.
Here we make a jQuery AJAX call to change the content of the page. We use ".html( newContent )" to change the content of our main container, as well as of every HTML element whose content we want to change. From what I have read, this should remove all the event listeners, and references to those elements should be released, but the DataTables are still present in memory, and maybe many other things are too.
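What we do now is roughly the following (a sketch; the selectors are made up): destroy the DataTable instance before wiping its container with .html().

    // Destroy the DataTable before replacing the container's content,
    // otherwise DataTables keeps internal references to the old table.
    function loadPage(url) {
      $.ajax({ url: url }).done(function (newContent) {
        if ($.fn.dataTable.isDataTable('#main-container table.dataTable')) {
          $('#main-container table.dataTable').DataTable().destroy();
        }
        $('#main-container').html(newContent);
      });
    }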
Furthermore, I tried removing the BODY tag and then taking heap snapshots. The size of the snapshot remains the same. It shows that a lot of elements have become detached, but the impact on JavaScript memory usage and JavaScript memory allocation is close to 0.
Am I on the right path to reducing memory leaks, or is there nothing more I can do?
Here are a few screenshots.
All of these screenshots were taken after removing all of the children of the body tag. If we remove all of the elements, JavaScript memory usage should be low, but here I see a change close to 0.
Here you can see we are still using 50 MB of JavaScript memory.
We still have DataTables in memory.
There is an increase in the snapshot size and also an increase in memory allocation. Yes, there are decreases in memory allocation further down, where it shows detached elements, but they are nominal.
What is the correct interpretation of all this data?
I have read a lot of Stack Overflow questions and blog posts about memory leaks, but nothing is making a big impact.

Chrome App Poor Performance

I am developing a Chrome App based on the same code as the normal web-based version. It's a Web Audio app, so it is quite performance-critical for timing purposes.
I have noticed that while holding down the mouse button within the app and wiggling it around, performance drops significantly, enough to mess up the timing. This issue does not occur in the ordinary browser-based version, which runs the same code.
I have recorded the activity with the Chrome developer tools, and the only thing I can spot that does not happen with the browser-based version is a function call to updateAppWindowProperties, which is a built-in Chrome App function.
I have attached a screenshot of the dev tools recording where you can see 3 big spikes in the activity, these are the bits where I am holding down the mouse button and moving it around.
Does anyone know what the cause of this could be? Is it something to do with the Chrome App checking the window size?
It seems to have been fixed by reducing the number of CSS classes. I had a LOT of classes that were used only as jQuery selectors; I changed them to data attributes, and that seems to have fixed the performance issues.
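The change was along these lines (a sketch with made-up names; the handler stands in for whatever the real listener does):

    // Hypothetical handler; in the real app this is the existing mousedown logic.
    function startDrag(event) {
      console.log('drag started at', event.pageX, event.pageY);
    }

    // Before: the element carried a class used only as a jQuery hook:
    //   <div class="drag-handle"></div>
    //   $('.drag-handle').on('mousedown', startDrag);

    // After: the hook is a data attribute, so the class list stays small:
    //   <div data-role="drag-handle"></div>
    $('[data-role="drag-handle"]').on('mousedown', startDrag);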

Performance issues in IE11 using ExtJS 4.2.1

I am facing performance issues with a tab panel in IE9 (and above) even if I open just 2-3 tabs with HTML documents varying in size from 1 MB to 20 MB. Switching between tabs then takes around 3-4 seconds (when the page has only the tab panel) and around 5-6 seconds (when the page has a lot of other ExtJS components) on IE11. The response is about 1-2 seconds on IE8, which is very surprising.
I also tried different hideMode options (display, offsets, visibility, asclass), but without much benefit, although hideMode='asclass' is comparatively faster than the other options.
I also created a sample page to confirm whether it is really a document-size issue or an ExtJS component issue. I created simple tabs (with divs) without any JS library and just changed their CSS z-index (instead of CSS display or visibility), and they switched instantly (on all IE versions). Trying the same approach on a separate page with ExtJS doesn't help on IE11 (it is somewhat faster on IE8), so it seems to be an ExtJS-specific issue.
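The library-free test page did essentially this (a sketch; the class name and ids are assumptions):

    // All tab panels are absolutely positioned on top of each other; switching
    // only changes z-index, so nothing is hidden, re-laid-out or re-rendered.
    function showTab(tabId) {
      var panels = document.querySelectorAll('.tab-panel');
      for (var i = 0; i < panels.length; i++) {
        panels[i].style.zIndex = (panels[i].id === tabId) ? 2 : 1;
      }
    }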
Has anyone else ever faced this issue? Any comments or solutions would be greatly appreciated.
You can use the profiler for IE to see whether any scripts related to ExtJS 4.2.1 take a long time to execute. If you discover such a script, you can then proceed to file a bug report for the library.
Here's some detailed information on how to do JavaScript profiling in IE:
Profiling JavaScript performance

How to structure a HTML/JS iPad application for best performance?

I'm developing an HTML-based iPad application that makes heavy use of JavaScript for its UI. The GUI is going to be magazine-like, i.e. chopped into screens/views that the user navigates between with touch events and WebKit transitions. All of this runs locally on the iPad (via a native wrapper such as PhoneGap, etc.).
Let's say the application is going to have 50-100 of those screens, filled with standard web elements like text, images, tables and forms.
How should this be structured for best performance? Which of the following two methods is preferable, and why?
keeping only the 3 immediate (current, previous, next) views/screens in the DOM and appending new ones (and deleting the old ones) as the user navigates forward/backward
generating all 50-100 HTML screens at startup and then hiding (display: none) all of them except the above 3
So basically, what works better memory/performance-wise? On the one hand, continuous DOM operations might be costly (and, worse, might make the transitions between app screens jerky); on the other hand, I don't know whether having up to 100 HTML screens pre-generated in a single document's DOM would make Mobile Safari choke to death. Of course, even though those screens are in the DOM, most of them are display: none most of the time, but is the Mobile Safari garbage collector that good? Has anyone tried this out?
BTW, please note that this is not an image-memory problem/leak type of question. I'm aware of that problem and will handle it via the small-dummy-image unloading trick no matter which way I go. This is only about the HTML views, skeletons if you will.
If the content is all going to be downloaded to the device at the time of launch, and everything is locally cached, then definitely go with the first method, as it will have a significantly smaller memory footprint.
If the content is downloaded on the fly, however, I would go with a mix between 1 and 2... probably prefetching and rendering the next two or three slides in either direction, or maybe one in each direction and one for each link on the page.
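A minimal sketch of the first method's sliding window (the container id, screen markup and getScreenHtml source are assumptions; the real code would also hook up the transitions):

    // Keep only the previous, current and next screens in the DOM; on
    // navigation, drop the screen that left the window and append the new one.
    var container = document.getElementById('screens');

    function renderScreen(index) {
      var div = document.createElement('div');
      div.className = 'screen';
      div.id = 'screen-' + index;
      div.innerHTML = getScreenHtml(index); // assumed source of screen markup
      return div;
    }

    function goTo(index, totalScreens) {
      var keep = {};
      for (var i = Math.max(0, index - 1); i <= Math.min(totalScreens - 1, index + 1); i++) {
        keep['screen-' + i] = true;
        if (!document.getElementById('screen-' + i)) {
          container.appendChild(renderScreen(i));
        }
      }
      var children = Array.prototype.slice.call(container.children);
      children.forEach(function (child) {
        if (!keep[child.id]) {
          container.removeChild(child);
        }
      });
    }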
