AngularJS performance in creating DOM elements: pure JS vs. jqLite

I have a directive in AngularJS that will contain a table with lots of rows (over 1000), so my boss said I shouldn't use binding to generate the grid's content (because of Angular's ~2000 binding limit) and should instead create the DOM elements on the fly.
And I did that with angular.element(...), and it works.
BUT now I am wondering whether there would be a performance boost if I used the native JS document.createElement instead.
So is jqLite slower than pure JS? How much impact would it have when creating over 1000 rows of HTML?
Is jQuery faster, slower, or equal to jqLite?
UPDATE:
@Joe Enzminger +1 for one-time binding; it would be good for a report/print view, which is view-only. BUT the grid has inline editing, so it needs the two-way bindings. It has 19 columns, each with an input and 2 buttons, and a save button in the last column. Each button has ng-show, and the save button has ng-class to change its icon based on the row's status. So that's (19*3)+1 two-way bindings per row.
This grid is a data-entry form of some kind :D and all rows must be visible; it cannot have pagination.
UPDATE2:
I forgot to mention that right now I have an empty tbody element in my template, and all its content is generated as plain DOM and injected into it with absolutely no data bindings of any kind. All the interactions are handled manually with good old-fashioned JS :D.

I'm sure one of them is "faster". However, probably only marginally so - I don't think there is much to gain performance-wise from using one vs. the other.
From a maintainability standpoint, though, I'd suggest using Angular's one-time binding feature. The mythical "~2000 binding limit" really applies to $watches, not bindings, and it is a guideline rather than a hard limit. Using {{::var}} inside an ng-repeat will yield much more maintainable code, with comparable performance, than building custom DOM on the fly, and it will not create $watches that could affect performance.
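For illustration, a minimal sketch of that approach (the rows collection, row.id, and the field names are assumed):
<tr ng-repeat="row in rows track by row.id">
  <td>{{::row.name}}</td>   <!-- one-time: the $watch is removed once the value stabilizes -->
  <td>{{::row.status}}</td> <!-- so 1000 rows leave no lasting watches -->
</tr>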

There are two things to consider here: DOM rendering performance, and Angular $watch ($digest) performance. In both cases, trying to optimise document.createElement vs angular.element isn't worth it.
For DOM rendering, the bottleneck is not JavaScript execution speed but browser repaints (see: html5rocks) and reflows (see: Google developers). Whether you use document.createElement or angular.element is insignificant, because the performance hit and UI blocking come when you add elements to the page or modify them, not when you create DOM elements in memory. This is why most modern UI frameworks batch DOM updates rather than making lots of tiny ones (e.g. ReactJS, Meteor, EmberJS).
For $watch and $digest performance, the hit comes from the number and complexity of binding expressions (e.g. {{things}}, ng-bind, ng-show, ng-class, etc.) that Angular has to evaluate in each $digest cycle. Keep in mind that in most cases a simple action like a click triggers a $digest cycle, so thousands of bindings could be evaluated on every click. Using one-time bindings to minimise the number of $watches, and keeping the watches you do need as simple as possible, is recommended.
In the default ng-repeat directive (presumably what you are using to create the grid) you don't really have fine control over either of these considerations, aside from preferring one-time over two-way bindings as much as possible or butchering your data model. This is why performance-sensitive developers bypass ng-repeat completely and create their own directives. If performance is key for you, you should look into doing something similar.
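As a rough sketch of what such a directive might look like (the module, the fastGrid name, and scope.rows are all hypothetical; it renders once and registers no watches):
angular.module('app').directive('fastGrid', function () {
  return {
    link: function (scope, element) {
      // Build all rows as one HTML string, then write it in a single
      // DOM operation; no bindings or $watches are created.
      // Assumes the directive is placed on the <tbody>.
      var html = '';
      (scope.rows || []).forEach(function (row) {
        html += '<tr><td>' + row.name + '</td></tr>';
      });
      element.html(html);
    }
  };
});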

Related

Limitation of Angular and browsers with regards to data loaded

I would like to know the maximum amount of data the Angular framework can handle. Say I am displaying a chart using Angular and some charting framework like Chart.js. I'd like to know roughly how much data the browser can display properly, at what point it becomes slow, and at what point it crashes.
Your question has no simple answer, but I will try to flatten it and give a simple answer, or at least simple things to consider...
Angular (at runtime), like many other frameworks, is simply JavaScript, so let us reduce the question to "Limitation of JavaScript and browsers with regards to data loaded". JavaScript has no fixed upper limit on the memory or storage it can handle; I've seen JavaScript applications that require more than 15GB of RAM, and they performed well too. So assume data size itself is not an issue (unless your application is poorly implemented, leaking memory, or just not very efficient, of course).
The main challenge, as I see it, is displaying and manipulating the information without causing unnecessary delay or unresponsiveness.
Displaying the information - let's say you have a list (or a table) containing 1,000,000 possible gifts which you then want to display for the user to select. Adding the list items to the document one by one is tempting, but it forces the browser to repaint after each addition (causing a delay or full unresponsiveness until finished). A better way is to add the elements to some DOM element (call it N) that is kept only in memory, append all the elements corresponding to the list items to N (still just an in-memory operation), and finally add N to the document containing the entire list - a much better solution for displaying a large amount of data (see the sketch after these points).
Manipulating the information - displaying is indeed not enough; you will also want to move, drag, sort and filter the data being displayed. And as mentioned before, it is a bad idea to remove many elements directly from the DOM. You should instead remove the container from the document's DOM into memory, manipulate the data within it, and then add the container right back to the document. Angular does this kind of magic for you.
(Toggling the 'display:none/block' CSS attribute of many elements has a similar blocking effect, as I recall.)
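A minimal sketch of the in-memory approach described above, using a DocumentFragment in the role of the element N (the gifts array and the #gift-list element are assumed):
var fragment = document.createDocumentFragment(); // lives only in memory
gifts.forEach(function (gift) {
  var li = document.createElement('li');
  li.textContent = gift.name;
  fragment.appendChild(li); // in-memory operation: no repaint yet
});
document.getElementById('gift-list').appendChild(fragment); // one insertion, one reflow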
A good practice is implementing an application/page that shows only the amount of data a human can process at a single glance. The rest should live in the application's data layer, in memory, and be loaded for display on the appropriate need or request.
To conclude, you can deal with huge amounts of data as long as you provide a mechanism that efficiently filters the displayed information.
I hope it helps...
For further reading:
Slow and fast ways of adding elements to DOM
A question emphasizes the lack of memory limit used by JS
CSS display attribute performance
A good discussion about the reasons for slow DOM
About using HTML5 correctly - old but still true
Once the DOM creation procedure is understood, it is much easier to display data without affecting performance / user experience

What actually makes updates using React faster than regular UI updates?

I have already read 10 articles about React and the virtual DOM.
I understand that the virtual DOM uses a diffing algorithm and only updates the parts of the UI that changed. But I still don't understand why that is faster than updating the actual DOM.
Following is an example:
<div id="test">
Hello React!
</div>
Let's say we created a component and changed it using React. Let's say we changed the text to Hello World!
I can do the same thing using plain JS, right? document.getElementById('test').innerHTML = 'Hello World!'
My question:
Why is React faster? I feel like React is doing exactly the same thing under the hood, right?
I feel like I am missing something fundamental here.
In your case the plain JS version will definitely be faster. React is just very good for really complicated UIs. The more complicated your UI gets, the more code you have to write to update it, or you just rebuild the whole UI on every rerender. However, those DOM updates are quite slow. React allows you to completely rerender your data without actually rerendering the whole DOM, updating just parts of it.
Actually the virtual DOM is not faster than the actual DOM. The real DOM itself is fast; it can search, remove and modify elements in the DOM tree quickly. What is slow is laying out and painting elements in the HTML tree. The real benefit of the virtual DOM is that it allows calculating the difference between changes and making minimal edits to the HTML document.
Now why is React better when it comes to manipulating the DOM? Your browser does a lot of work to update the DOM. Changing the DOM can trigger reflows and repaints: when one thing changes, the browser has to recalculate the position of other elements in the flow of the page, and also has to do the work of redrawing.
The browser has its own internal optimizations to reduce the impact of DOM changes (e.g. doing repaints on the GPU, isolating some repaints on their own layers, etc.), but broadly speaking, changing a few things can trigger expensive reflows and repaints.
It's common even when writing everything from scratch to build UI off the DOM, then insert it all at once (e.g. document.createElement a div and build a whole tree under it before attaching it to the main DOM), but React is engineered to watch changes and intelligently update small parts of the DOM to minimize the impact of reflows and repaints.
A few reasons off the top of my head:
React uses a virtual DOM, which is just JS objects, to represent the DOM. The "current" version of the virtual DOM consists of objects holding references to the actual DOM elements, while the "next" vDOM is just plain objects. Objects are incredibly fast to manipulate because changing them is purely a memory operation, whereas real DOM changes require expensive style, layout, paint and rasterization steps.
React diffs the current vDOM against the next vDOM to produce the smallest number of changes required to make the real DOM reflect the next vDOM. This is called reconciliation. The fewer changes you make to the DOM, the faster layout calculations will be.
React batches DOM changes together so that it touches the real DOM as few times as possible. It also uses requestAnimationFrame everywhere to ensure that real DOM changes play "nicely" with the browser's layout calculation cycles.
Finally (probably React's least appreciated feature), React has an increasingly sophisticated scheduling step to distinguish between low- and high-priority updates. Low-priority updates are UI updates that can afford to take longer, e.g. data fetched from servers, whereas high-priority updates are things the user will notice right away, e.g. user input fields. Low-priority updates use the very new requestIdleCallback API to ensure that they run when the browser's main thread is actually idle, and they frequently yield back to the main thread to avoid locking up the UI.
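To make the principle concrete, here is a toy sketch - not React's actual algorithm - of diffing plain objects and writing only the difference to the real DOM:
function patchText(domNode, oldVNode, newVNode) {
  // Comparing the in-memory objects is cheap...
  if (oldVNode.text !== newVNode.text) {
    domNode.textContent = newVNode.text; // ...the real DOM is touched only on change
  }
}
patchText(document.getElementById('test'),
          { text: 'Hello React!' },
          { text: 'Hello World!' });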
why that is faster than updating the actual DOM.
It is not faster. Explicit and controllable DOM updates are faster than anything else.
React may schedule better update graphs on sites similar to Facebook, but at the cost of diff processing, O(D*N). On other sites React could be just a waste of CPU power.
There is no silver bullet here - each framework is optimal for the site it was created for initially. For other sites you will be lucky if a particular framework is even sub-optimal.
Real and complex web app UIs do not use any "framework" but their own primitives: GMail, Google Docs, etc.
You might just use vanilla JS as you described:
document.getElementById('test').innerHTML = 'Hello World!'
That's great, but super hard for medium/big projects.
Why? Because React handles all your DOM interaction and minimizes it (as much as it can). When you use vanilla JS, most of your code ends up being DOM manipulation, extracting data and inserting data; with React you can put those worries aside and put all your efforts into creating the best site/project.
Moreover, the virtual DOM does all the calculation behind the scenes. If you tried to do it manually you would have to deal with all that calculation every time (when you update one list and then another); most probably, from some point on, most of your code would be calculation about DOM updates.
Sound familiar? Well, just for that you already have React.
Don't Repeat Yourself
if someone has already done it, reuse it.
Separation Of Concerns
one part of your project should manipulate the UI, another should handle the logic.
And the list can go on and on. But in conclusion, the most important thing: vanilla JS is faster, and with or without the virtual DOM you will be using vanilla JS in the end. Make your life simpler; make your code readable.
React is NOT faster than PURE JavaScript.
The big difference between them is :
Pure JavaScript: if you can master the JavaScript language and have the time to find the best solution (with the lowest impact on browser performance), you can definitely build a UI render system more powerful than React (because you have created a specific engine oriented to your needs)
React: if you want to spend more time on data structure, with no worries about UI update performance, React (or Vue.js, the upcoming candidate for UI development) is the best choice

Angular bind once vs. track by performance

I've got an ng-repeat directive with some filters on it and massive DOM inside each repetition. For example:
<ul>
<li ng-repeat='task in tasks'>
<!--Some DOM with many bindings-->
</li>
</ul>
I want to improve performance a bit, but I want to keep two-way binding.
One way to do it is to insert track by:
ng-repeat='task in tasks track by task.id'
The other way is to use native bind once in bindings:
{{::task.name}}
Obviously I cannot use both on the same binding, because then two-way binding will not work.
How can I measure DOM rebuild speed? Which way is more effective?
These are not mutually exclusive constructs, and they have different uses.
Using track by simply allows Angular to better manage the DOM when items are added or removed. By default, it uses a hash of the entire object, which can be slow and inefficient compared to a simple atomic value.
Using one-time binding syntax, on the other hand, simply reduces the total number of watches in the application. This makes the app more responsive when performing updates because it has fewer things to watch.
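So you can use both in the same repeater, each where it fits; a sketch assuming task.id is stable and task.name is editable:
<li ng-repeat="task in tasks track by task.id">
  <span>{{::task.id}}</span>   <!-- never changes: bind once, no lasting watch -->
  <input ng-model="task.name"> <!-- editable: keeps its two-way binding -->
</li>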
Great question.
The answer: it depends, but mostly one-time bindings are the better option unless your app is very small.
Why?
Because if your app is medium-sized or large, you will have a watch-count problem. If you let the number of watches grow beyond 2000, your app will feel sluggish on less powerful devices whatever you do. Watches slow your application down all the time, on every digest cycle. So your primary concern regarding performance should be keeping that watch count down, and removing watches from the content inside ng-repeat obviously helps the most.
On the other hand, track by will speed up refreshes of the table a little, but it's an optimization you should rely on only if you know your app will stay small (below 2000 watches).

How to modify a large number of elements in DOM without sacrificing usability?

I have a list with quite a few elements (each of them is a nested div). Each element has a custom onclick handler.
JS updates the list several times per second; this may result in:
adding or removing some elements
changing text in some elements
changing styles in some elements
changing height of some elements
etc.
Most of the time the update makes small changes to the majority of the elements.
To minimize reflows I should remove the list from the DOM, make the changes, and append it back. The problem I have with this approach is that when the user selects some text, the next update will reset the selection (and the next update comes within a second). If the user clicks a button, the click may fail to register if there was an update between mouse_down and mouse_up.
I understand the selection resetting on text that has changed; that makes sense. But with this approach any selection in the list will reset.
Is there any better way to do this? How would you implement such a list?
This list is fully generated by JS. If I'm removing it from the DOM anyway, is there any benefit to modifying it instead of recreating it from scratch? Creating it anew each time would require less code.
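For reference, a minimal sketch of the detach-update-reattach pattern mentioned in the question (the list id and the applyUpdates helper are hypothetical):
var list = document.getElementById('list');
var parent = list.parentNode;
var next = list.nextSibling;     // remember the position
parent.removeChild(list);        // detached: changes below trigger no reflow
applyUpdates(list);              // hypothetical batch of add/remove/text/style changes
parent.insertBefore(list, next); // reattach: a single reflow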
This sounds like two-way data binding; there are a couple of good custom solutions to data binding in answers on here: Handy stack link. Alternatively, Backbone.js and Knockout.js have good techniques, amongst quite a few other frameworks (Angular etc.).
Additionally, if you want to have a go at it yourself (which I highly recommend to get a better understanding), you could use the proposed Object.observe function. There is some handy documentation with examples of how it works over at Mozilla, as well as the trusty HTML5 Rocks, which has a nice simple tutorial on using the new Object.observe functionality - well worth a read.
Hope this helps!

Backbone.js render performance and repaints

I'm building a Backbone.js application. It has quite a few interactive calculator-type forms. I need values in these forms to update live as users type values into the inputs. I was thinking of two approaches:
Strategies:
1) rerender the view upon each interaction using underscore templates
OR
2) render once, find every point of display with jQuery, and update them on each interaction
My question: what do you think is best practice, both from a maintainability point of view and from a browser-performance standpoint? Minimizing repaints etc. seems like a good idea to me, but attaching all those listeners and pairing them to bits of the view also seems a bit yuck.
Any advice greatly appreciated,
Jack
Well, this is certainly a somewhat subjective question and cannot be answered definitively either way. It really depends upon the complexity of both strategies, which again depends on your app logic. Still, here are a few thoughts:
You can have multiple elements sitting parallel to each other, each with its own content (view). Depending upon the logic, hide the rest of them and show only one. This avoids re-rendering the view multiple times, but you do have to write some logic for maintaining multiple views (hide all, show one). So this approach is fine as long as you have 2-3 views and your app needs to switch between them multiple times.
Re-rendering a view need not be a costly operation, depending upon how you do it: cache the selector, make all the changes on it, and update the DOM only once, which is the most common practice. So you are fine re-rendering a view if it costs only one DOM operation.
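A minimal sketch of that cached-selector pattern in a Backbone view (the .total element and the model attribute are illustrative):
var CalculatorView = Backbone.View.extend({
  initialize: function () {
    this.$total = this.$('.total'); // cache the selector once, not per keystroke
    this.listenTo(this.model, 'change', this.update);
  },
  update: function () {
    this.$total.text(this.model.get('total')); // one DOM write per change
  }
});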
From a code-maintainability perspective, definitely go with Strategy #1, re-rendering the view.
