How can I most easily identify bottlenecks in React render performance?

I'm having an issue with identifying bottlenecks in render performance while working on a JSON viewer. With few elements, it performs well, but at a certain point it becomes annoyingly slow.
Checking the profiler, it seems that elements are rendering fast enough, but I've noticed a few issues that I'm not sure how to pursue.
Overview
The app is a JSON viewer which allows you to expand / minimize all elements at once, as well as individual elements.
Performance is fine with few elements, but seems to decrease dramatically as the number of elements increases.
When profiling my object filter method with performance.now() and checking the render times in React DevTools, the figures seem okay, though I could be interpreting them wrong.
I've tried using React.memo() on stateless elements (particularly the key/value pair, which is the most frequently rendered component), but it doesn't seem to improve performance noticeably. Admittedly, I'm not sure I understand the reasoning behind memoizing React components well enough to implement it usefully.
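For reference, the memoization attempt looked roughly like this (a simplified sketch; the component and prop names are stand-ins, not the actual code):

    import React from "react";

    // React.memo skips re-rendering when a shallow comparison of the
    // props finds no changes.
    const KeyValue = React.memo(function KeyValue({ name, value }) {
      return (
        <span>
          {name}: {String(value)}
        </span>
      );
    });

    // Caveat: memo only helps if the props are referentially stable;
    // passing a freshly created object, array, or callback on every
    // parent render defeats the shallow comparison.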
Implementation
Currently, my app loads data into a parent which feeds into a component that loads the JSON tree using a recursive element.
Loading a JSON feed from a URL changes the state of the parent component, and the data is filtered by a helper method using values entered into an input field.
Issues
Two interactions reproduce a slow response time with (not so big) JSON documents:
The expand all button
The first few keypresses on a filter query
With the current implementation, both filtering and expanding all trigger a display: none change on the child elements, and the behavior leads me to believe I'm handling this use case inefficiently.
Reproduction Steps
The code is available here: https://codesandbox.io/s/react-json-view-4z348
With a production build here (not performing any better): https://csb-4z348.vercel.app/
To reproduce the issue, play around with the Expand All function (plus sign next to filter input) and some filter inputs.
Then, try loading a JSON feed with more elements (you can test on my GitHub API feed) and try filtering/expanding all. Notice the major performance hit.
What I've noticed
When logging useEffect, minimizing seems to cause ~2x as many rerenders as expanding all.
As the filter input becomes more specific, performance (logically) improves as fewer elements are rendered.
Question
While I would appreciate a nudge in the right direction for this specific case, what I'm most curious about is how best to identify what is causing these performance issues.
I've looked into windowing the output, but it's not my first choice, and I'm pretty sure I'm doing something wrong, rather than the cause being too many elements rendered.
I appreciate your time, and thank you in advance for any tips you could provide!

It seems I've answered my own question. The problem was a reconciliation issue due to using UUID as a key prop in my child components, which caused them to re-render every time the minimize state changed. From the docs:
Keys should be stable, predictable, and unique. Unstable keys (like those produced by Math.random()) will cause many component instances and DOM nodes to be unnecessarily recreated, which can cause performance degradation and lost state in child components.
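In code terms, the mistake looked roughly like this (a minimal sketch; JsonNode and node.path are hypothetical stand-ins for the real components and data):

    import React from "react";
    import { v4 as uuid } from "uuid";

    // Hypothetical leaf component standing in for the real one.
    const JsonNode = ({ node }) => <div>{node.label}</div>;

    // BAD: the key changes on every render, so each expand/minimize
    // unmounts and remounts every child instead of updating in place.
    function TreeBad({ nodes }) {
      return nodes.map((node) => <JsonNode key={uuid()} node={node} />);
    }

    // GOOD: a key derived from stable data lets React match children
    // across renders and patch only what changed.
    function TreeGood({ nodes }) {
      return nodes.map((node) => <JsonNode key={node.path} node={node} />);
    }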
I'll leave the steps here for anyone else who runs into this issue.
After (too long) digging around in the performance profiler, I noticed that each time I minimized or expanded the elements, each child was being mounted again. After consulting Google with a more specific query, I found this blog post and realized that I was committing this flagrant performance error.
Once I found the source of the problem, I found many other references to it.
After fixing the key prop, interaction time got ~60% faster for minimize/expand all.
Finally, I memoized some other components related to the instant filter, and it now seems to perform as well as I'd like for the time being.
Thanks to anyone who took a look at this in the meantime, and I hope it's helpful for anyone who might come across this.

Related

Javascript Chrome profiler granularity - Go deeper

I'm currently debugging an Angular (JS) based app. I have some speed issues at runtime (client side) and want to analyze why.
I use the DevTools profiler in Chrome. I can see that some events (e.g. keypress, blur) take a lot of time.
Now I would like to go deeper and find which source code contains these event listeners and causes my application to slow down like this.
For information, the app is very slow when I write text in an input, and when I focus/blur the input; I know that some watchers could cause the slowdown, but I'm not sure.
I hope deeper profiler analysis can help!
--- Edit 25 Feb 2020 ---
I think my problem is linked to the digest cycle ($apply/$digest, etc.).
I found this plugin: digest-hud. After several tries, it seems that a binding (which is used in a lot of components) called "source" was taking all the digest resources.
Digest-hud was really helpful, though I couldn't find a way to pinpoint the function's initial calls on the call stack. Like Kresimir Pendic said, it was probably a map issue.
But I found a lot of bindings/watchers on "source", and one of them was called on every single focus/blur/typing event. So I removed it, found another way to signal changes within the input, and it works.
So don't hesitate to check with digest-hud (disclaimer: I'm not related in any way to the digest-hud developer(s)) if you have performance issues with your AngularJS app; it'll give you some hints to solve the problem.

Using MutationObservers to detect changes in the results of a fetch request

I'm working on a narrowcast display that shows an amount of tickets (an integer, the total added up) from a 3rd-party API. I want to display a notification when this amount increases. I've read about MutationObservers, and that they are good for similar tasks, like reacting when something gets added or deleted.
The app has a Vue frontend, and a Laravel backend which does the requesting/authenticating. The index blade loads in a Vue component which contains the other components (and distributes the data from the API to child components).
I'm not quite sure whether MutationObservers are good for this specific job, though. Googling really didn't give me great alternatives.
In conclusion, I want to know if MutationObservers are the right tool for this task and which property would work. Better-suited alternatives are also welcome.
Using Vue, you can use a watcher function to watch for changes in a particular variable (amount). A MutationObserver only watches for DOM updates; it won't give you what you want.
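A minimal sketch of that idea (Vue 2 options API; the data and method names here are hypothetical):

    new Vue({
      el: "#app",
      data: { amount: 0 },
      watch: {
        // Runs whenever `amount` changes, e.g. after each polled fetch
        // from the 3rd-party API updates it.
        amount(newValue, oldValue) {
          if (newValue > oldValue) {
            this.notify(newValue - oldValue);
          }
        },
      },
      methods: {
        notify(diff) {
          console.log(diff + " new ticket(s)");
        },
      },
    });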

Multiple render variants with React?

I've built a web application using React which is up and running and working well. I should probably just leave it alone, but there's one area which is troubling me, where I think I need to do a bit of refactoring because what I'm doing doesn't seem to me to be going with the flow of React. I'd be interested in others' views.
I have a React class, Product, which I use to keep track of products on the page. The only property stored in state is 'quantity', but I have various functions which do things like update a basket by means of pub/sub. Depending on how and where this Product class is used (whether in a table or for a detail view, whether on mobile or desktop), the necessary display is quite different. So in my render function, I call variously 'renderForDetailOnMobile', 'renderForTableOnMobile', 'renderForDetailOnDesktop' and 'renderForTableOnDesktop'.
As I say, this doesn't feel very React-y to me, as if I've got the whole thing upside down (although the rest of the app is, I would say, much more idiomatic). So how should I be thinking this through in order to break it down into separate smaller classes, which is what I imagine I should be doing? Sorry, for privacy reasons it's not possible to post actual code, so I hope this description makes the situation clear enough.
You should be using reducers or stores, depending on whether you have a Flux or Redux application. This would help you understand your state and how it changes.
I see you are using state in your Product, while you should be using stores as mentioned above.
So, the way I see the issue is that you have a data source and you need to transform it based on the device requirements.
In such a case I would make a container which loads other components in charge of transforming and presenting the data for different devices.
The container should be rather simple, just returning the correct component based on the conditional being met, as in the sketch below.
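A rough sketch of that container idea (all component names here are hypothetical):

    import React from "react";

    // Hypothetical presentational variants (stubs for illustration).
    const ProductDetailMobile = ({ product }) => <div>{product.name}</div>;
    const ProductDetailDesktop = ({ product }) => <div>{product.name}</div>;
    const ProductRowMobile = ({ product }) => <tr><td>{product.name}</td></tr>;
    const ProductRowDesktop = ({ product }) => <tr><td>{product.name}</td></tr>;

    // The container just picks the right variant; each variant is a
    // small, focused component instead of a render* method.
    function ProductView({ product, context, isMobile }) {
      if (context === "detail") {
        return isMobile
          ? <ProductDetailMobile product={product} />
          : <ProductDetailDesktop product={product} />;
      }
      return isMobile
        ? <ProductRowMobile product={product} />
        : <ProductRowDesktop product={product} />;
    }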

React + Redux performance optimization with shouldComponentUpdate

I have a react/redux application which has become large enough to need some performance optimizations.
There are approximately 100 unique components which are updated via WebSocket events. When many events occur (say ~5/second), the browser starts to slow down significantly.
Most of the state is kept in a redux store as Immutable.js objects. The entire store is converted to a plain JS object and passed down as props through the component tree.
The problem is when one field updates, the entire tree updates and I believe this is where there is most room for improvement.
My question:
If the entire store is passed through all components, is there an intelligent way to prevent components updating, or do I need a custom shouldComponentUpdate method for each component, based on which props it (and its children) actually use?
You really don't want to do things that way. First, as I understand it, Immutable's toJS() is fairly expensive. If you're doing that for the entire state every time, that's not going to help.
Second, calling toJS() right away wastes almost the entire benefit of using Immutable.js types in the first place. You really would want to keep your data in Immutable-wrapped form down until your render functions, so that you get the benefit of the fast reference checks in shouldComponentUpdate.
Third, doing things entirely top-down generally causes a lot of unnecessary re-rendering. You can get around that if you stick shouldComponentUpdate on just about everything in your component tree, but that seems excessive.
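To illustrate the reference-check idea (a hedged sketch; the component and field names are hypothetical, and the items prop is assumed to be an Immutable.js List of Maps):

    import React from "react";

    class ItemList extends React.Component {
      // Immutable.js returns the same reference when nothing changed,
      // so an identity check is enough to decide whether to re-render.
      shouldComponentUpdate(nextProps) {
        return nextProps.items !== this.props.items;
      }

      render() {
        return (
          <ul>
            {this.props.items
              .map((item) => <li key={item.get("id")}>{item.get("label")}</li>)
              .toArray()}
          </ul>
        );
      }
    }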
The recommended pattern for Redux is to use connect() on multiple components, at various levels in your component tree, as appropriate. That will simplify the amount of work being done, on several levels.
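For example, something along these lines (a sketch with hypothetical names):

    import React from "react";
    import { connect } from "react-redux";

    // Hypothetical presentational row component.
    const ItemRow = ({ item }) => <div>{item.label}</div>;

    // Subscribe each row to just the slice of state it renders; only
    // rows whose slice actually changed will re-render.
    const mapStateToProps = (state, ownProps) => ({
      item: state.items[ownProps.itemId],
    });

    const ConnectedItemRow = connect(mapStateToProps)(ItemRow);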
You might want to read through some of the articles I've gathered on React and Redux Performance. In particular, the recent slideshow on "High Performance Redux" is excellent.
update:
I had a good debate with another Redux user a couple days before this question was asked, over in Reactiflux's #redux channel, on top-down vs multiple connections. I've copied that discussion and pasted it in a gist: top-down single connect vs multiple lower connects.
Also, yesterday there was an article posted that conveniently covers exactly this topic of overuse of Immutable.js's toJS() function: https://medium.com/@AlexFaunt/immutablejs-worth-the-price-66391b8742d4. Very well-written article.

Why is React's concept of Virtual DOM said to be more performant than dirty model checking?

I saw a React dev talk (Pete Hunt: React: Rethinking Best Practices -- JSConf EU 2013) where the speaker mentioned that dirty-checking of the model can be slow. But isn't calculating the diff between virtual DOMs actually even less performant, since the virtual DOM, in most cases, should be bigger than the model?
I really like the potential power of the Virtual DOM (especially server-side rendering) but I would like to know all the pros and cons.
I'm the primary author of a virtual-dom module, so I might be able to answer your questions. There are in fact two problems that need to be solved here:
When do I re-render? Answer: When I observe that the data is dirty.
How do I re-render efficiently? Answer: Using a virtual DOM to generate a real DOM patch
In React, each of your components has state. This state is like an observable you might find in Knockout or other MVVM-style libraries. Essentially, React knows when to re-render the scene because it is able to observe when this data changes. Dirty checking is slower than observables because you must poll the data at a regular interval and check all of the values in the data structure recursively. By comparison, setting a value on the state signals to a listener that some state has changed, so React can simply listen for change events on the state and queue up re-rendering.
The virtual DOM is used for efficient re-rendering of the DOM. This isn't really related to dirty checking your data. You could re-render using a virtual DOM with or without dirty checking. You're right in that there is some overhead in computing the diff between two virtual trees, but the virtual DOM diff is about understanding what needs updating in the DOM and not whether or not your data has changed. In fact, the diff algorithm is a dirty checker itself but it is used to see if the DOM is dirty instead.
We aim to re-render the virtual tree only when the state changes. So using an observable to check if the state has changed is an efficient way to prevent unnecessary re-renders, which would cause lots of unnecessary tree diffs. If nothing has changed, we do nothing.
A virtual DOM is nice because it lets us write our code as if we were re-rendering the entire scene. Behind the scenes we want to compute a patch operation that updates the DOM to look how we expect. So while the virtual DOM diff/patch algorithm is probably not the optimal solution, it gives us a very nice way to express our applications. We just declare exactly what we want and React/virtual-dom will work out how to make your scene look like this. We don't have to do manual DOM manipulation or get confused about previous DOM state. We don't have to re-render the entire scene either, which could be much less efficient than patching it.
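To make the render/diff/patch loop concrete, here is a condensed sketch along the lines of the virtual-dom module's basic usage (illustrative, not a drop-in snippet):

    var h = require("virtual-dom/h");
    var diff = require("virtual-dom/diff");
    var patch = require("virtual-dom/patch");
    var createElement = require("virtual-dom/create-element");

    // Describe the whole scene as a virtual tree.
    function render(count) {
      return h("div", ["Count: " + count]);
    }

    var count = 0;
    var tree = render(count);
    var rootNode = createElement(tree);
    document.body.appendChild(rootNode);

    setInterval(function () {
      count += 1;
      var newTree = render(count);
      // diff finds what changed; patch applies the minimal DOM ops.
      rootNode = patch(rootNode, diff(tree, newTree));
      tree = newTree;
    }, 1000);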
I recently read a detailed article about React's diff algorithm here: http://calendar.perfplanet.com/2013/diff/. From what I understand, what makes React fast is:
Batched DOM read/write operations.
Efficient update of sub-tree only.
Compared to dirty-check, the key differences IMO are:
Model dirty-checking: a React component is explicitly marked dirty whenever setState is called, so no comparison (of the data) is needed. With dirty-checking, the comparison (of the models) always happens on each digest loop.
DOM updating: DOM operations are very expensive because modifying the DOM also triggers CSS style and layout recalculation. The time saved by avoiding unnecessary DOM modification can outweigh the time spent diffing the virtual DOM.
The second point is even more important for non-trivial models, such as one with a huge number of fields or a large list. A change to one field of a complex model results in only the operations needed for the DOM elements involving that field, instead of re-rendering the whole view/template.
I really like the potential power of the Virtual DOM (especially server-side rendering) but I would like to know all the pros and cons. -- OP
React is not the only DOM manipulation library. I encourage you to understand the alternatives by reading this article from Auth0 that includes detailed explanation and benchmarks. I'll highlight here their pros and cons, as you asked:
React.js' Virtual DOM
PROS
Fast and efficient "diffing" algorithm
Multiple frontends (JSX, hyperscript)
Lightweight enough to run on mobile devices
Lots of traction and mindshare
Can be used without React (i.e. as an independent engine)
CONS
Full in-memory copy of the DOM (higher memory use)
No differentiation between static and dynamic elements
Ember.js' Glimmer
PROS
Fast and efficient diffing algorithm
Differentiation between static and dynamic elements
100% compatible with Ember's API (you get the benefits without major updates to your existing code)
Lightweight in-memory representation of the DOM
CONS
Meant to be used only in Ember
Only one frontend available
Incremental DOM
PROS
Reduced memory usage
Simple API
Easily integrates with many frontends and frameworks (meant as a template engine backend from the beginning)
CONS
Not as fast as other libraries (this is arguable; see the benchmarks in the linked article)
Less mindshare and community use
Here's a comment by React team member Sebastian Markbåge which sheds some light:
React does the diffing on the output (which is a known serializable format, DOM attributes). This means that the source data can be of any format. It can be immutable data structures and state inside of closures.
The Angular model doesn't preserve referential transparency and therefore is inherently mutable. You mutate the existing model to track changes. What if your data source is immutable data or a new data structure every time (such as a JSON response)?
Dirty checking and Object.observe does not work on closure scope state.
These two things are very limiting to functional patterns obviously.
Additionally, when your model complexity grows, it becomes increasingly expensive to do dirty tracking. However, if you only do diffing on the visual tree, like React, then it doesn't grow as much since the amount of data you're able to show on the screen at any given point is limited by UIs. Pete's link above covers more of the perf benefits.
https://news.ycombinator.com/item?id=6937668
The virtual DOM was not invented by React; other libraries implement the same idea. It is a lightweight, in-memory representation of the real DOM, detached from browser-specific implementation details.
We can think of the virtual DOM as React's local, simplified copy of the HTML DOM. It allows React to do its computations within this abstract world and skip the "real" DOM operations, which are often slow and browser-specific. Structurally, there is no big difference between the DOM and the virtual DOM.
Below are the reasons why a virtual DOM is used (source: Virtual DOM in ReactJS):
When you do:

    document.getElementById('elementId').innerHTML = "New Value";

the following happens:
The browser needs to parse the HTML.
It removes the child elements of elementId.
It updates the DOM with the new value.
It recalculates the CSS for the parent and children.
It updates the layout, i.e. each element's exact coordinates on the screen.
It traverses the render tree and paints it on the browser display.
Recalculating the CSS and updating the layout use complex algorithms, and they affect performance. Updating the DOM properties (i.e. values) also follows its own algorithm.
Now, suppose you update the DOM 10 times directly: all the above steps will run one by one for each update, and the DOM-updating algorithms will take that time for every value. This is why the real DOM is slower than the virtual DOM, which batches the changes and updates only the affected parts of the real DOM.
