I've got an ng-repeat directive with some filters on it and a massive DOM inside each repetition. For example:
<ul>
  <li ng-repeat='task in tasks'>
    <!--Some DOM with many bindings-->
  </li>
</ul>
I want to improve performance a bit, but I want to keep two-way binding.
One way to do it is to insert track by:
ng-repeat='task in tasks track by task.id'
The other way is to use the native one-time binding syntax in expressions:
{{::task.name}}
Obviously I cannot use both of them, because in that case two-way binding will not work.
How can I measure DOM rebuild speed? Which way is more effective?
These are not mutually exclusive constructs and both have different uses.
Using track by simply allows Angular to better manage the DOM when items are added or removed. By default, it uses a hash of the entire object, which can be slow and inefficient compared to a simple atomic value.
Using one-time binding syntax, however, simply reduces the number of total watches in the application. This makes the app more responsive when performing updates because it has fewer things to watch.
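In fact you can combine them in the same repeater. A minimal sketch (assuming each task has a stable id field): track rows by id for cheap DOM reuse, and bind one-time only the fields that never change after render.

<ul>
  <li ng-repeat='task in tasks track by task.id'>
    <!-- one-time: task.id never changes once rendered -->
    <span>{{::task.id}}</span>
    <!-- two-way: task.name still updates on every digest -->
    <span>{{task.name}}</span>
  </li>
</ul>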
Great question.
The answer: it depends, but mostly one-time bindings are the better option unless your app is very small.
Why?
Because if your app is medium-sized or large, you will have a watch-count problem. If you let the number of watches grow beyond roughly 2,000, your app will feel sluggish on less powerful devices no matter what else you do. Watches slow your application down all the time, on every digest cycle. So your primary performance concern should be keeping that watch count down, and removing watches from the content inside ng-repeat obviously helps the most.
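To see where you stand, you can count watches from the console. A rough sketch, relying on the undocumented AngularJS 1.x $$watchers internals and on debug info being enabled (the default), so treat it as a debugging aid only:

function countWatchers() {
  var total = 0, seen = {};
  angular.forEach(document.querySelectorAll('.ng-scope, .ng-isolate-scope'), function (el) {
    var $el = angular.element(el);
    angular.forEach([$el.scope(), $el.isolateScope()], function (scope) {
      if (scope && !seen[scope.$id]) {          // count each scope once
        seen[scope.$id] = true;
        total += (scope.$$watchers || []).length;
      }
    });
  });
  return total;
}
console.log('watch count:', countWatchers());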
On the other hand, track by will speed up refreshing a table a little, but it's an optimization you should settle for only if you know your app will stay small (below 2,000 watches).
Related
I have already read 10 articles about React and Virtual DOM.
I understand that the virtual DOM uses a diffing algorithm and only updates the parts of the UI that changed. But I still don't understand why that is faster than updating the actual DOM.
Following is an example:
<div id="test">
Hello React!
</div>
Let's say we created a component and changed it using React. Let's say we changed the text to Hello World!
I can do the same thing using plain JS, right? document.getElementById('test').innerHTML = 'Hello World!'
My question:
Why is React faster? I feel like React is doing exactly the same thing under the hood, right?
I feel like I am missing something fundamental here.
In your case the plain JS function will definitely be faster. React is just very good for really complicated UIs. The more complicated your UI gets, the more code you need to write to update it, or else you rebuild the whole UI on every rerender. However, those DOM updates are quite slow. React allows you to completely rerender your data without actually rerendering the whole DOM, updating just the parts of it that changed.
Actually, the virtual DOM is not faster than the actual DOM. The real DOM itself is fast: it can search, remove, and modify elements in the DOM tree quickly. What is slow is laying out and painting the elements in the HTML tree. The real benefit of the virtual DOM is that it allows calculating the difference between each change and making only minimal changes to the HTML document.
Now, why is React better when it comes to manipulating the DOM? Your browser does a lot of work to update the DOM. Changing the DOM can trigger reflows and repaints: when one thing changes, the browser has to re-calculate the position of other elements in the flow of the page, and also has to do the work of re-drawing.
The browser has its own internal optimization to reduce the impact of DOM changes (e.g. doing repaints on the GPU, isolating some repaints on their own layers, etc), but broadly speaking, changing a few things can trigger expensive reflows and repaints.
It's common even when writing everything from scratch to build UI off the DOM and then insert it all at once (e.g. document.createElement a div and build a whole tree under it before attaching it to the main DOM), but React is engineered to watch changes and intelligently update small parts of the DOM to minimize the impact of reflows and repaints.
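A minimal sketch of that "build off-DOM, insert once" pattern (the list element id is hypothetical):

// Build 1000 rows detached from the document, then attach them in one operation.
var fragment = document.createDocumentFragment();
for (var i = 0; i < 1000; i++) {
  var li = document.createElement('li');
  li.textContent = 'Row ' + i;
  fragment.appendChild(li);   // no reflow yet: the fragment is off-DOM
}
document.getElementById('list').appendChild(fragment); // one insert, one reflow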
A few reasons off the top of my head:
React uses a virtual DOM, which is just JS objects, to represent the DOM. The "current" version of the virtual DOM consists of objects holding references to the actual DOM elements, while the "next" vDOM consists of plain objects. Objects are incredibly fast to manipulate because they are just memory changes, whereas real DOM changes require expensive style, layout, paint, and rasterization steps.
React diffs the current vDOM against the next vDOM to produce the smallest number of changes required to make the real DOM reflect the next vDOM. This is called reconciliation (see the toy sketch after this list). The fewer changes you make to the DOM, the faster layout calculations will be.
React batches DOM changes together so that it touches the real DOM as few times as possible. It also uses requestAnimationFrame everywhere to ensure that real DOM changes play "nicely" with the browser's layout calculation cycles.
Finally (probably React's least appreciated feature), React has an increasingly sophisticated scheduling step to distinguish between low- and high-priority updates. Low-priority updates are UI updates that can afford to take longer, e.g. data fetched from servers, whereas high-priority updates are things the user will notice right away, e.g. user input fields. Low-priority updates use the fairly new requestIdleCallback API to ensure that they run when the browser's main thread is actually idle and that they frequently yield back to the main thread to avoid locking up the UI.
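To make the reconciliation point concrete, here is a toy sketch (not React's actual algorithm; it assumes both virtual trees have the same shape): it walks the two trees and writes to the real DOM only where text actually changed.

function patch(domNode, oldVNode, newVNode) {
  if (typeof newVNode === 'string') {
    if (newVNode !== oldVNode) domNode.textContent = newVNode; // the only real DOM write
    return;
  }
  for (var i = 0; i < newVNode.children.length; i++) {
    patch(domNode.childNodes[i], oldVNode.children[i], newVNode.children[i]);
  }
}
// The question's example: only the text node inside #test is touched.
patch(document.getElementById('test'),
      {tag: 'div', children: ['Hello React!']},
      {tag: 'div', children: ['Hello World!']});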
why that is faster than updating the actual DOM.
It is not faster. Explicit and controllable DOM updates are faster than anything else.
React may schedule better update graphs on sites similar to Facebook, but at the cost of diff processing, O(D*N). On other sites React could be just a waste of CPU power.
No silver bullet here: each framework is optimal for the site it was created for initially. For other sites you will be lucky if a particular framework is at least sub-optimal.
Real and complex web app UIs are not built on any "framework" but on their own primitives: GMail, Google Docs, etc.
You might just use vanilla-js as you described:
document.getElementById('test').innerHTML = 'Hello World!'
That's great, but it gets super hard for medium/big projects.
Why? Because React handles all your DOM interaction and minimizes it (as much as it can). With vanilla JS most of your code would be just manipulation of the DOM, extracting data and inserting data; with React you can put those worries aside and put all your effort into creating the best site/project.
Moreover, the virtual DOM does all the calculation behind the scenes. If you tried to do it manually you would have to deal with all that calculation every time (when you update one list and then another); most probably, at some point, most of your code would be calculations about DOM updates.
Sound familiar? Well, just for that you already have React.
Don't Repeat Yourself
if someone has already done it, reuse it.
Separation Of Concerns
one part of your project should handle the UI, another should handle the logic
And the list can go on and on. But in conclusion, the most important thing: vanilla JS is faster. In the end, with or without the virtual DOM, you will be using vanilla JS underneath. Make your life simpler; make your code readable.
React is NOT faster than PURE JavaScript.
The big difference between them is:
Pure JavaScript: if you have mastered the JavaScript language and have time to spend finding the best solution (with the lowest impact on browser performance), you can definitely build a UI rendering system more powerful than React (because you have created a specific engine oriented to your needs).
React: if you want to spend more time on your data structures with no worries about UI update performance, React (or Vue.js, the upcoming candidate for UI development) is the best choice.
I'm developing an Angular app right now for my company, but I reached a point where the app became extremely slow, so I tried tuning it by using one-time bindings everywhere I can, track by, and so on. It loads faster at first but is still laggy. The app is composed of pretty huge nested objects; I've counted the total number of objects, and it starts at 680 and can go up to 6,000+ in normal use of the app. I should also mention that the app generates a form, and 90%+ of the objects in the scope belong to an input and are updated each time the client clicks (radio) or on keyup/change (text).
It also has 5 or 6 arrays of objects, and the arrays grow and shrink according to the client's choices, and that's where it gets laggy: each time I add an object to an array, it takes about a second to render it. So I tried using nested controllers, thinking that if the child of an object is updated, Angular would render only that child and not all the others, but somehow the app got even slower and laggier :s (it's a bit faster when I use ng-show instead of ng-if, but the memory used jumps from ~50 MB to ~150 MB).
I should also mention that the form is in a wizard style, and not all the inputs are displayed at once; the inputs displayed at any time are between 10% and 20% of the total.
Has anyone encountered this problem before? Does anyone know how to deal with big scopes?
Sad to say, but that's intrinsic to view rendering in Angular.
An update in the model triggers a potential redraw of the entire view, no matter whether you have elements hidden or not. Two-way data binding can really kill performance. You could evaluate whether you need to render the view only once; in that case there are optimizations, but I'm assuming your form changes dynamically, so two-way data binding is necessary.
You can try to work around this limitation by encapsulating sub-parts of the entire MVC. That way, a contained controller only updates the specific view associated with its scope.
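One possible shape of that encapsulation (a sketch; PlanningCtrl and SectionCtrl are hypothetical names): give each repeated section its own controller, so its bindings live on a child scope rather than on one giant form scope.

<div ng-controller="PlanningCtrl">
  <div ng-repeat="section in sections" ng-controller="SectionCtrl">
    <!-- bindings here watch SectionCtrl's child scope -->
    <input ng-model="section.value">
  </div>
</div>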
You may want to consider using React (which has as its first goal to address exactly this use case).
Have a look at this blog post for a comparison of the rendering pipeline between Angular and React:
http://www.williambrownstreet.net/blog/2014/04/faster-angularjs-rendering-angularjs-and-reactjs/
I have a directive in AngularJS that will have a table with lots of rows (over 1,000), so my boss said I shouldn't use bindings to build the grid's content (because of Angular's ~2,000 binding limit) and that instead I should create the DOM elements on the fly.
And I did that with angular.element(...), and it works.
BUT now I am wondering whether there could be a performance boost if I used the native JS document.createElement.
So is jqLite slower than pure JS? How much impact would it have when creating over 1,000 rows of HTML?
Is jQuery faster, slower, or equal to jqLite?
UPDATE:
@Joe Enzminger: +1 for one-time binding; it would be good for a report/print view, which is view-only. BUT the grid has inline editing, so it needs the two-way bindings. It has 19 columns, each with an input and 2 buttons, plus a save button in the last column. Each button has ng-show, and the save button has ng-class to change its icon based on the row's status. So (19*3)+1 two-way bindings per row.
This grid is a data-entry form of some kind :D and all rows should be visible, so it cannot have pagination.
UPDATE 2:
I forgot to mention that right now I have an empty tbody element in my template, and all its content is generated as plain DOM and injected into it with absolutely no data bindings of any kind. All the interactions are handled manually with good old-fashioned JS :D.
I'm sure one of them is "faster". However, probably only marginally so; I don't think there is much to gain performance-wise using one vs. the other.
However, from a maintainability standpoint, I'd suggest using Angular's one-time binding feature. The mythical "~2000 binding limit" really applies to $watches, not bindings, and it is not really a limit so much as a guideline. Using {{::var}} inside of an ng-repeat is going to yield much more maintainable code, with comparable performance, than building custom DOM on the fly, and it will not create $watches that could affect performance.
There are two things to consider here: DOM rendering performance, and angular $watch($digest) performance. In both cases trying to optimise document.createElement vs angular.element isn't worth it.
For DOM rendering the bottleneck is not JavaScript execution speed, but browser repaints (see: html5rocks) and reflows (see: Google developers). Whether you use document.createElement or angular.element is insignificant because the performance hit and UI blocking come when you add to or modify elements on the page, not when creating DOM elements in memory. This is why most modern UI frameworks batch DOM updates rather than making lots of tiny updates (e.g. ReactJS, Meteor, EmberJS).
For $watch and $digest performance, the performance hit comes from the number and complexity of binding expressions (e.g. {{things}}, ng-bind, ng-show, ng-class, etc) that Angular has to evaluate in each $digest cycle. If you keep in mind that in most cases, a simple action like a click will trigger a $digest cycle, thousands of bindings could be evaluated on each click. Using one-time bindings to minimise the number of $watches and keeping watches as simple as possible is recommended.
In the default ng-repeat directive (presumably what you are using to create the grid) you don't really have fine control over these two considerations, aside from preferring one-time over two-way bindings as much as possible or butchering your data model. This is why performance-sensitive developers bypass ng-repeat completely and create their own directives. If performance is key for you, you should look into doing something similar.
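For illustration, a sketch of that kind of custom directive (fastRows, app, and row.name are hypothetical names, and real code should escape values): it rebuilds the rows in a detached fragment and touches the live DOM once per data change, registering a single watcher instead of one per cell.

// usage: <tbody fast-rows="gridRows"></tbody>
app.directive('fastRows', function () {
  return {
    restrict: 'A',
    scope: { rows: '=fastRows' },
    link: function (scope, element) {
      // One $watchCollection instead of thousands of per-cell watches.
      scope.$watchCollection('rows', function (rows) {
        var fragment = document.createDocumentFragment();
        (rows || []).forEach(function (row) {
          var tr = document.createElement('tr');
          tr.innerHTML = '<td>' + row.name + '</td>'; // build cells by hand
          fragment.appendChild(tr);
        });
        element.empty().append(fragment); // single DOM update per change
      });
    }
  };
});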
I'm building a backbone.js application. It has quite a few interactive calculator type forms. I need values in these forms to update live when users type values into inputs. I was thinking two approaches:
Strategies:
1) rerender the view upon each interaction using underscore templates
OR
2) render once, find every point of display in jQuery and update them on each interaction
My question: what do you think is best practice, both from a maintainability point of view and from a browser performance standpoint? Minimizing repaints etc. seems like a good idea to me, but attaching all those listeners and pairing them to bits of the view also seems a bit yuck.
Any advice greatly appreciated,
Jack
Well, this is certainly a somewhat subjective question and cannot be answered either way; it really depends on the complexity of both strategies, which in turn depends on your app logic. Still, here are a few thoughts:
You can have multiple elements sitting parallel to each other, each element with its own content (view). Depending on the logic, hide the rest of them and show only one. This avoids re-rendering the view multiple times, but you do have to write some logic for maintaining multiple views (hide all, show one). This approach is fine as long as you have 2-3 views and your app needs to switch between these views multiple times.
Re-rendering a view need not be a costly operation, depending on how you do it. Just cache the selector, make all the changes off-DOM, and update the DOM only once, which is the most common practice (see the sketch below). You are fine re-rendering a view if it costs only one DOM operation.
From a code maintainability perspective, definitely go with Strategy #1, re-rendering the view.
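A minimal sketch of that cached, single-write re-render inside a Backbone view (the template id and .results class are hypothetical):

// e.g. inside a Backbone view's initialize:
// compile the underscore template once, cache the target node once.
var template = _.template($('#calc-template').html());
var $results = this.$('.results');
this.listenTo(this.model, 'change', function (model) {
  // Build the new HTML off-DOM, then swap it in with one DOM write.
  $results.html(template(model.toJSON()));
});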
tl;dr: I wonder if having lots (100+ for the moment, potentially up to 1,000-2,000 or more) of Backbone views (one per cell of a table) is too heavy or not.
The project I'm working on revolves around a planning view. There is one row per user covering 6 hours of a day, each hour split into 4 slots of 15 mn. This planning view is used to add "reservations" by clicking on a slot; it should handle hovering over the correct slots, and also handle when it is NOT possible to make a reservation, i.e. prevent the user from clicking on an "unavailable" slot.
There are many reasons why a slot can't be clicked on: the user is not available at this time, or the user is in a reservation, or the app needs to "force" a delay slot between two reservations. Reservations (a div) are rendered in a slot (a cell of a table) and, by toying with dimensions, cover the right number of slots.
All of this screen is handled with Backbone. So for each slot I'm hovering over, I need to check whether I can make a reservation there or not. At the moment I handle this by toying with the data attributes on the slots: when a reservation object is added, the slots covered are "enhanced" with (among other things) the reservation object (the Backbone view object).
But in some cases I don't quite have a grasp on yet, it gets mixed up, and when the reservation view is removed the slots are not "cleaned up": the previous class is not reset correctly. It is probably something I've done wrong or badly, but this is only going to get heavier. I think I should use another class of Backbone views here, but I'm afraid the number of slots, and therefore of view objects, will be high and cause performance issues. I don't know much about JS performance, so I'd like some feedback before jumping on that train. Any other advice on how to do this would be quite welcome too.
Thanks for your time. If this is not clear enough, tell me, I'll try and rephrase it.
We have a pretty complex backbone.js app with potentially thousands of views at a given time. Our bottlenecks are mostly memory leaks from views that have not been properly removed, or event-driven rendering that re-renders views needlessly. That said, the actual number of views doesn't seem to matter much. Backbone views are pretty lightweight, so as long as you don't bind too many events to them, it's not that big of a problem.
You might run into performance issues, but the views probably aren't the problem.
What you might want to do, though, if you do use thousands of views, is to batch the initial rendering into one big .html() call. You can do this by giving each view element an id based on the view's cid, concatenating the views' HTML strings, and afterwards calling setElement on each view to find its element again.
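A sketch of that batching (views is a hypothetical array of Backbone views, each with a compiled template): render everything as one string, insert it with a single .html() call, then re-attach each view to its element by cid.

var html = views.map(function (view) {
  return '<div id="view-' + view.cid + '">' +
         view.template(view.model.toJSON()) + '</div>';
}).join('');
$('#container').html(html);               // one big insert instead of N
views.forEach(function (view) {
  view.setElement('#view-' + view.cid);   // re-bind the view to its element
});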
Maybe this helps a little bit:
https://stackoverflow.com/a/7150279/1067061
btw: I like your avatar ;)
Backbone handles everything in the least performant way for the browser. If it's not fast enough, you need to stop creating and destroying templates; caching a template is almost no help at all.
Instead of calling the render function again, save a variable with the jQuery selector, and when the model changes use the cached selector to update the DOM. This will give you the largest jump in performance off the bat.
initialize: function () {
  this.$title = this.$el.find('h1');  // cache the node once
  this.listenTo(this.model, 'change:title', this.onTitleChange);
},
onTitleChange: function (model, value) { this.$title.html(value); }
I'm working on a Backbone view that is performant. It can render a collection with 1,000,000 models at over 120 FPS. The code is up on GitHub: https://github.com/puppybits/BackboneJS-PerfView There are a lot of other optimizations that can be done. If you check out the PerfView, it has comments on most of the optimizations.