I have a medium-sized Angular app which uses angular-1.2.10 and ui-router-0.2.8. When I transition to a particular state I get frame-rate issues on the animation, regardless of whether I am using $animate on an ng-show or animating it manually.
When I dig into the profiler I can see that the $apply after the XHR takes up to 200ms, which I presume to be the cause of the lag. When I remove the code in the state I am transitioning to, the problem goes away as expected.
There is no large ng-repeat, and the bindings are fast:
This leaves me a bit stuck, as I can't see where the issue originates. If anyone can see something to point me in the right direction, that would be great.
UPDATE
I have gone into incognito mode and run the same tests with the $digest counter. The $digest runs 40 times and produces the following.
Lots of things seem to take a long time (30ms+), but I still can't find a cause.
UPDATE
Looking at the timeline, there seem to be a lot of DOMSubtreeModified events.
Angular uses $digest cycles to see if anything needs updating. The mere fact that you've counted a lot of them is probably just another symptom of missing optimization. The true problem lies in the time they take, and in the processing bottleneck, since it's slowing down animations. So there are a couple of things to try:
Make sure you are not deep-watching anything, which means you shouldn't be passing true for objectEquality. Deep watching is more processor-intensive and uses more memory as well.
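A plain-JS sketch of why a deep watch costs more (this is an illustration, not Angular's implementation; JSON.stringify stands in for the deep compare that angular.equals performs on every $digest):

```javascript
// Build a moderately large model, the kind a deep watch would have to walk.
var items = [];
for (var i = 0; i < 1000; i++) items.push({ id: i, done: false });

// Reference watch (objectEquality omitted/false): one pointer comparison per digest.
function referenceDirty(lastRef) {
  return items !== lastRef;
}

// Deep watch (objectEquality === true): walks the whole structure per digest,
// and Angular also keeps a deep copy of the last value around.
function deepDirty(lastJson) {
  return JSON.stringify(items) !== lastJson;
}
```

The trade-off: the reference watch misses mutations inside the array, while the deep watch catches them at the cost of a full traversal on every cycle.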
Use isolate scope if directives are involved - if you can. An inherited scope re-digests all shared scopes whenever the parent controller changes, so an isolate scope will reduce that $digest chatter.
Try replacing $watch statements with an event handler for data that is rendered in the DOM. By $broadcasting a single event once the data has been processed (at the end of the XHR call), you reduce the number of times the DOM is re-rendered, instead of re-rendering each time you modify a property.
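A rough sketch of the pattern with a stub scope (this is not real Angular; the Scope class and event name are made up to mimic $scope.$on/$scope.$broadcast): mutate everything first, then notify listeners once.

```javascript
// Minimal stand-in for Angular's scope event API, for illustration only.
function Scope() { this.listeners = {}; }
Scope.prototype.$on = function (name, fn) {
  (this.listeners[name] = this.listeners[name] || []).push(fn);
};
Scope.prototype.$broadcast = function (name, data) {
  (this.listeners[name] || []).forEach(function (fn) { fn(data); });
};

var scope = new Scope();
var renders = 0;
scope.$on('tasks:loaded', function (tasks) { renders += 1; }); // re-render handler

// Simulate the end of the XHR callback: build all the data, then broadcast once.
var tasks = [{ id: 1 }, { id: 2 }, { id: 3 }];
scope.$broadcast('tasks:loaded', tasks); // one notification, not one per property
```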
Can you animate via CSS using hardware-accelerated properties (transform, opacity) to smooth it out?
Multiple $digests mean you have cascading model changes: changing a triggers a $watch('a') that in turn changes b, which triggers another digest that might trigger a $watch('c'), which triggers another digest that might (heaven forbid) trigger a $watch('a') again.
That cascade can take a long time even if each individual $watch evaluation is fast. If you can make all of your changes in one go, without them propagating between watches, you'll cut down your digest count.
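A toy digest loop (deliberately simplified, not Angular's real implementation) shows why cascades are expensive: every pass re-runs every watcher until a full pass changes nothing, so chained updates force extra full passes over all watchers.

```javascript
// Toy digest: keep re-running all watchers until a pass is clean.
function digest(scope, watchers) {
  var passes = 0, dirty = true;
  while (dirty && passes < 10) { // Angular similarly gives up after 10 passes
    dirty = false;
    passes += 1;
    watchers.forEach(function (w) {
      var val = w.get(scope);
      if (val !== w.last) {
        w.last = val;
        dirty = true;
        w.fn(scope); // listener may mutate the model, dirtying the pass
      }
    });
  }
  return passes;
}

var scope = { a: 1, b: 0, c: 0 };
var passes = digest(scope, [
  // The watcher on b runs before the one on a, so a's change lands a pass late:
  { get: function (s) { return s.b; }, fn: function (s) { s.c = s.b + 1; } },
  { get: function (s) { return s.a; }, fn: function (s) { s.b = s.a + 1; } }
]);
// The single change to a ripples a -> b -> c across three passes
// instead of settling in one.
```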
It's hard to help without the code and the bindings in your markup. If you have parts of the UI which are read-only and don't depend on multiple digest cycles, try using bindonce: https://github.com/Pasvaz/bindonce. It might help you reduce the number of watchers and unexpected digest cycles.
Related
We're using Angular together with Angular Material. We noticed that while rendering some of the more complicated views, scripting and rendering block the event loop, which makes the whole page unresponsive for a few seconds. I created a simple page that demonstrates the problem: https://quirky-hugle-91193d.netlify.com (source code: https://github.com/lukashavrlant/angular-lag-test). If you click the button, it generates 200 rows, each with a single mat-checkbox component. There is also a counter that refreshes every 10 ms if everything is fine. You can clearly see that the counter stops during rendering because the event loop is blocked by the *ngFor rendering the rows. I tried to measure the length of the block: it blocks for 400 ms on my machine.
My question is: is this completely "normal" behavior? Can it be avoided? Is it really so much data that it has to block the event loop for that long? The delay is easily noticeable by the user.
The only solution we found was to render it incrementally, i.e. render one row, wait 1 ms, render the next row, wait 1 ms, and so on. Isn't there a better solution?
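The incremental approach can be sketched framework-agnostically; rendering in chunks rather than one row per tick pays the yield cost less often (renderRow is a hypothetical callback that appends one row, and the chunk size is a tuning knob):

```javascript
// Render a large list in chunks, yielding to the event loop between chunks
// so timers, input handling, and paint can run in between.
function renderInChunks(rows, renderRow, chunkSize, done) {
  var i = 0;
  (function next() {
    var end = Math.min(i + chunkSize, rows.length);
    for (; i < end; i++) renderRow(rows[i]);
    if (i < rows.length) setTimeout(next, 0); // let the event loop breathe
    else if (done) done();
  })();
}

// Usage sketch: the first chunk renders synchronously, the rest on later ticks.
var rendered = [];
renderInChunks([1, 2, 3, 4, 5], function (row) { rendered.push(row); }, 2);
```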
EDIT 1: I also tried Chrome's Performance tool. It says most of the time is spent in the "scripting" phase. I tried to look at the stack traces there but wasn't able to identify the problem.
Adding elements to the DOM has a cost that you will never be able to eliminate entirely.
However, if you look at the Performance tab of the devtools while you are adding the elements, you will notice a few things.
First, it looks like you are running a dev build, not a prod build. A prod build should be significantly faster.
Secondly, while there's a cost to adding elements to the DOM, there's a much higher cost to instantiating those elements, especially when it comes to Angular Material components. They are really pretty and well made, but they are also on the heavy side in terms of both code and HTML nodes. If you have to include lots of checkboxes at once, using regular HTML elements with a bit of CSS to make them look similar, instead of an actual Angular component, might be the way to go.
In my case, the actual operation of inserting everything into the DOM takes ~90 ms of the whole 350 ms, which spans everything from the click event to the creation of all the elements in the ngFor loop.
The last point is further supported by the cost those elements keep incurring after they have been added, which you can also notice easily when looking at the performance profile. This is exacerbated by dev mode once again, but it's present nonetheless.
So I would recommend trying a more optimized build mode first, and if that's still not enough, swapping the Material component for a basic checkbox styled with CSS (maybe even a bit of SVG).
I'm developing an Angular app for my company, but it has become extremely slow, so I tried tuning it by using one-time bindings everywhere I can, track by, and so on. It loads faster at first but is still laggy. The model is a pretty huge set of nested objects: I've counted 680 objects at the start, and it can go up to 6000+ in normal use of the app. I should mention that the app generates a form, and 90%+ of the objects in the scope belong to an input and are updated each time the client clicks (radio) or types (keyup/change on text).
It also has 5 or 6 arrays of objects, and the arrays grow and shrink according to the client's choices; that's where it gets laggy. Each time I add an object to an array, it takes about a second to render. I tried using nested controllers, thinking that if the child of an object is updated, Angular would render only that child and not all the others, but somehow the app got even slower and laggier. (It's a bit faster when I use ng-show instead of ng-if, but the memory used jumps from ~50 MB to ~150 MB.)
I should also mention that the form is in a wizard style, and not all the inputs are displayed at once; only 10%-20% of the total inputs are displayed at any time.
Has anyone encountered this problem before? Does anyone know how to deal with big scopes?
Sad to say, but that's intrinsic to view rendering in Angular.
An update to the model triggers a potential redraw of the entire view, no matter whether elements are hidden or not. Two-way data binding can really kill performance. You could evaluate whether you need to render the view only once - in that case there are optimizations - but I'm assuming your form changes dynamically, so two-way data binding is necessary.
You can try to work around this limitation by encapsulating sub-parts of the entire MVC. That way, a contained controller only updates the specific view associated with its scope.
You may also want to consider using React (whose first goal is to address exactly your use case).
Have a look at this blog post for a comparison of the rendering pipelines of Angular and React:
http://www.williambrownstreet.net/blog/2014/04/faster-angularjs-rendering-angularjs-and-reactjs/
I have a directive in AngularJS that renders a table with lots of rows (over 1000), so my boss said that I shouldn't use bindings for the grid's content (because of Angular's ~2000 binding limit) and that I should instead create the DOM elements on the fly.
And I did that with angular.element(...), and it works.
BUT now I am wondering whether there could be a performance boost if I used the native document.createElement instead.
So is jqLite slower than pure JS? How much impact does it have when creating over 1000 rows of HTML?
Is jQuery faster, slower, or equal to jqLite?
UPDATE:
@Joe Enzminger: +1 for one-time binding; it would be good for a report/print view, which is just for viewing. BUT the grid has inline editing, so it needs the two-way bindings. It has 19 columns, each with an input and 2 buttons, plus a save button in the last column. Each button has ng-show, and the save button has ng-class to change its icon based on the row's status. So (19*3)+1 two-way bindings per row.
This grid is a data-entry form of some kind :D and all rows should be visible; it cannot have pagination.
UPDATE2:
I forgot to mention that right now I have an empty tbody element in my template, and all its content is generated as plain DOM and injected into it with absolutely no data bindings of any kind. All the interactions are handled manually with good old-fashioned JS :D
I'm sure one of them is "faster". However, probably only marginally so - I don't think there is much performance to gain using one vs. the other.
However, from a maintainability standpoint, I'd suggest using Angular's one time binding feature. The mythical "~2000 binding limit" really applies to $watches, not bindings, and is not really a limit as much as a guideline. Using {{::var}} inside of an ng-repeat is going to yield much more maintainable code, with comparable performance, than building custom DOM on the fly, and it will not create $watches that could affect performance.
There are two things to consider here: DOM rendering performance, and angular $watch($digest) performance. In both cases trying to optimise document.createElement vs angular.element isn't worth it.
For DOM rendering the bottleneck is not JavaScript execution speed, but browser repaints (see: html5rocks) and reflows (see: Google developers). Whether you use document.createElement or angular.element is insignificant because the performance hit and UI blocking come when you add to or modify elements on the page, not when creating DOM elements in memory. This is why most modern UI frameworks batch DOM updates rather than making lots of tiny updates (e.g. ReactJS, Meteor, EmberJS).
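A minimal illustration of the batching idea in plain JS (the `tbody` variable and `name` field are hypothetical, and HTML escaping is omitted for brevity): build the markup in memory, then touch the live DOM once.

```javascript
// Build all the row markup off-DOM, then assign it in one operation,
// so the browser does a single reflow instead of one per row.
function buildRowsHtml(rows) {
  var parts = [];
  for (var i = 0; i < rows.length; i++) {
    parts.push('<tr><td>' + rows[i].name + '</td></tr>');
  }
  return parts.join('');
}

// One DOM touch for the whole grid (in a browser):
// tbody.innerHTML = buildRowsHtml(rows);
```

Appending 1000 rows one at a time can trigger up to 1000 style/layout invalidations; building the string (or a DocumentFragment) first reduces that to one.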
For $watch and $digest performance, the performance hit comes from the number and complexity of binding expressions (e.g. {{things}}, ng-bind, ng-show, ng-class, etc) that Angular has to evaluate in each $digest cycle. If you keep in mind that in most cases, a simple action like a click will trigger a $digest cycle, thousands of bindings could be evaluated on each click. Using one-time bindings to minimise the number of $watches and keeping watches as simple as possible is recommended.
In the default ng-repeat directive (presumably what you are using to create a grid) you don't really have fine control over these two considerations aside from preferring one-time over two-way bindings as much as possible or butchering your data model. This is why performance sensitive developers bypass ng-repeat completely and create their own directives. If performance is key to you, you should look into doing something similar.
I have a timer that runs in a web worker with a 10 millisecond interval. Each time the timer ticks, a function is called in the controller which increments a variable. This variable is used by a bootstrap progress bar on my page.
The problem that I'm encountering is that the progress bar doesn't update unless I call $scope.$apply() in the function call where the value gets updated.
Meanwhile, I have an array of fairly complex objects (100+) on the $scope. Since I need to call $scope.$apply() for the view to pick up the changes every time my timer ticks, the view is also re-checking this list of objects (every 10ms), which is slowing down my application.
Does anyone have any ideas as to how I could potentially resolve this issue? Please let me know if I can provide additional details.
If the elements for the 100+ objects aren't all actually visible on screen at any one time, you can include only the ones on screen in the DOM (and so only have watchers for them) by using something like https://github.com/kamilkp/angular-vs-repeat (a colleague of mine had to hack it slightly to get it to do exactly what was needed; I forget the details).
If you know the variables only need to be updated in a certain $scope, you can call $scope.$digest() on that scope. As opposed to $apply(), $digest() will only run the watchers on that scope (and its children) rather than throughout the application.
An update every 10 milliseconds is extremely frequent. If everything keeps up, this would be 100 updates a second: around 4 times the frame rate of a lot of video formats. A very simple way of speeding things up is to reduce this considerably.
One way of doing this that would probably fit most architectures is to use a throttle function, such as `_.throttle` from lodash:
var throttledApply = _.throttle(function () { $scope.$apply(); }, 500);
And then when you receive a message, you would have something like:
worker.onmessage = function (e) {
    // ... other processing code ...
    throttledApply();
};
If you're using the Bootstrap progress bar, it should still give a smooth transition between displayed values, even if the differences between them are large.
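If you'd rather not pull in lodash for a single call, a minimal leading-edge throttle is only a few lines (note this sketch drops calls inside the window entirely, whereas lodash's default also queues a trailing call):

```javascript
// Leading-edge throttle: run fn at most once per waitMs window;
// calls that land inside the window are simply dropped.
function throttle(fn, waitMs) {
  var last = 0;
  return function () {
    var now = Date.now();
    if (now - last >= waitMs) {
      last = now;
      fn.apply(this, arguments);
    }
  };
}

// Usage sketch: three calls in the same window collapse into one.
var applies = 0;
var throttledApply = throttle(function () { applies += 1; }, 500);
throttledApply(); throttledApply(); throttledApply();
```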
I've got an ng-repeat directive with some filters on it and massive DOM inside each repetition. For example:
<ul>
  <li ng-repeat='task in tasks'>
    <!-- Some DOM with many bindings -->
  </li>
</ul>
I want to improve performance a bit, but I want to keep two-way binding.
One way to do it is to insert track by:
ng-repeat='task in tasks track by task.id'
The other way is to use native bind once in bindings:
{{::task.name}}
Obviously I cannot use both of them, because in that case two-way binding will not work.
How can I measure DOM rebuild speed? Which way is more effective?
These are not mutually exclusive constructs and both have different uses.
Using track by simply allows Angular to better manage the DOM when items are added or removed. By default, it uses a hash of the entire object, which can be slow and inefficient compared to a simple atomic value.
Using one-time binding syntax, however, simply reduces the total number of watches in the application. This makes the app more responsive when performing updates because it has fewer things to watch.
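For example, the two can be combined in the same repeater, with the usual caveat that a one-time bound field will not reflect later model changes (this assumes task names are stable while statuses change):

```html
<ul>
  <li ng-repeat="task in tasks track by task.id">
    <!-- ::task.name is rendered once and drops its watch;
         task.status stays a live two-way binding -->
    {{::task.name}} ({{task.status}})
  </li>
</ul>
```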
Great question.
The answer: it depends, but mostly one-time bindings are the better option unless your app is very small.
Why?
Because if your app is mid-sized or large, you will have a watch-count problem. If you let the number of watches grow to more than 2000, your app will feel sluggish on less powerful devices no matter what you do. Watches slow your application all the time, on every digest cycle. So your primary performance concern should be to keep that watch count down, and removing watches from stuff inside ng-repeat obviously helps the most.
On the other hand, track by will speed up the refresh of a table a little, but it's an optimization that you should take only if you know your app will stay small (below 2000 watches).