Performance issue in react js - javascript

I am currently using ReactJS version "15.0.1" in my web application. One of the features needs to keep polling some information continuously, every 2 seconds. We receive a response which is a list of objects (700-1000 items) that we update and show in the React web application. The problem is that after some time the application becomes unresponsive and takes too much time for any operation. On profiling I found it is render, batch updates and dispatch events in React that take the longest time. Is there any recommended way to resolve this performance issue in React? The feature needs to be refreshed every 2 seconds and the list size is more than 1000 items each time.
The performance issue is observed in both IE and Chrome.

It's hard to tell without seeing your code; maybe you have a memory leak? You could try marking your objects for garbage collection at the end of your methods:
listOfSomeObject = null; // drop the reference so the garbage collector can reclaim it
Here is a good article capturing some methods to identify and fix memory leaks.
https://auth0.com/blog/four-types-of-leaks-in-your-javascript-code-and-how-to-get-rid-of-them/
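Beyond leaks, with a ~1000-item list re-rendered every 2 seconds, much of the render/batch-update time usually goes into reconciling rows that have not changed. One common mitigation that works in React 15 is giving each row a stable key and skipping re-renders of unchanged rows. A minimal sketch (component and field names are made up, and it assumes list items are replaced immutably when they change):
var Row = React.createClass({
  shouldComponentUpdate: function(nextProps) {
    // re-render this row only when its own item object was replaced
    return nextProps.item !== this.props.item;
  },
  render: function() {
    return <li>{this.props.item.name}</li>;
  }
});

var List = React.createClass({
  render: function() {
    return (
      <ul>
        {this.props.items.map(function(item) {
          return <Row key={item.id} item={item} />; // stable keys avoid remounting rows
        })}
      </ul>
    );
  }
});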

Related

How to solve concurrency issues in a React Native application?

Let's add a <FlatList/> into our application.
The first requirement we have is to render a predefined set of 5 items. We define a constant in our component, pass it into the list via the data prop and it works just fine...
... until we decide to store this data on a server and expose it via an API. OK, no problem, we will fetch the data in our componentDidMount() method, put it into the state when it finishes loading, pass the state to the data prop, and it also works just fine...
... until we notice that we have a huge delay before we can show the first item of the list. That is because the amount of data we're loading from the API grew significantly over time. Maybe now it is some REST resource collection consisting of thousands of items.
Naturally, we decide to implement pagination in our API. And that is when things start to get interesting... When do we load the next page of the resource collection? We reach for the wonderful React Native API reference, examine the FlatList part of it, and figure out that it has a very handy onEndReached callback prop. Wonderful! Let's load the next page of our collection every time this callback is called! It works like a charm...
... until we receive a bug report. In it, a user tells us that the data is not sorted properly in the list, that some items are duplicated, and some items are just missing.
After some quick debugging we are able to reproduce the issue and figure out what causes it. Just set onEndReachedThreshold = { 5 } and scroll the list very fast: the onEndReached callback fires again, asynchronously, before the previous invocation has finished.
Inside our component we have a variable pageId storing the last page ID we loaded. Each time onEndReached fires we use it to construct the next page URL and then increment it. The problem is that this callback runs concurrently, so the same value of pageId is used multiple times.
I have done a bit of multithreaded programming before; I've heard of mutexes, semaphores, and atomicity. I would like to be able to acquire an exclusive lock on pageId to use it in this concurrent callback.
But after a quick Internet search, it seems that JS does not provide such tools out of the box. I found some libraries like this one, but it doesn't look like a good candidate for a dependency: it's not very actively developed, it's not made by a major vendor, etc. It looks more like a hobby project.
The question is: what are the industry-standard rock-solid tools or patterns for thread-safe React Native programming? How can I solve the described concurrency issue in a React Native application?
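For what it's worth, because all of this JavaScript runs on a single thread, a flag set synchronously before the first await usually serves as the lock; a true mutex is rarely needed. A minimal sketch (component shape, URL, and state names are illustrative, not from the question):
class Feed extends React.Component {
  pageId = 0;
  isLoading = false;

  handleEndReached = async () => {
    if (this.isLoading) return;  // later onEndReached calls bail out
    this.isLoading = true;       // set before any await, so there is no race
    const page = this.pageId++;  // reserve the page id synchronously
    try {
      const res = await fetch('https://example.com/items?page=' + page);
      const items = await res.json();
      this.setState(prev => ({ items: prev.items.concat(items) }));
    } finally {
      this.isLoading = false;    // allow the next page load
    }
  };
}
Alternatively, chaining each load onto a stored promise serializes the requests without dropping any of them.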

How to share a pouchDB between main thread and a web worker

In a web app, using pouchDB, I have a slow running function that finishes by updating a document in the DB. I want to move it off the main UI thread and into a web worker. However, we have lots of other code using pouchDB still in the main thread (e.g. the change event listener, but also code that deals with other documents). (For reference the database size is on the order of 100MB; Vue2 is used so, in general, the UI can update when the data changes.)
This is where I seem to come unstuck immediately:
Shared memory is basically out, as all the browsers disable it by default
Even if it wasn't, pouchDB is a class, and cannot be transferred(?).
Isolating all the db code, including the changes handler, into one web worker is a huge refactor; and then we still have the issue of having to pass huge chunks of data in and out of that web worker.
Moving all the code that uses the data into the web worker too, and just having the UI thread pass messages back and forth, is an even bigger refactor, and I've not thought through how it might interfere with Vue.
That seems to leave us with a choice of two extremes. Either rewrite the whole app from the ground up, possibly dropping Vue, or just do the slow, complex calculation in a web worker, then have it pass back the result, and continue to do the db.put() in the main UI thread.
Is it really an all or nothing situation? Are there any PouchDB "tricks" that allow working with web workers, and if so will we need to implement locking?
You're missing an option, the one I would choose in your situation: write a simple adapter that allows your worker code to query the DB in the main thread via messages. Get your data, process it in the worker, and send it back.
You only need to "wrap" the methods that you need in the worker. I recommend writing a class or a set of async functions in your worker to keep the code readable.
You don't need to worry about the amount of data passed. Serialization and deserialization are quite fast, and the transfer is basically a memcpy, so it does not take any appreciable time.
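A minimal sketch of such an adapter (all names are illustrative; the PouchDB methods you wrap would depend on what the worker actually needs):
// Main thread: owns the real PouchDB and answers the worker's requests.
const db = new PouchDB('appdb');
const worker = new Worker('worker.js');
worker.onmessage = async ({ data: { id, method, args } }) => {
  try {
    const result = await db[method](...args);   // e.g. 'get', 'allDocs'
    worker.postMessage({ id, result });
  } catch (err) {
    worker.postMessage({ id, error: err.message });
  }
};

// worker.js: a thin async proxy that forwards calls and matches replies by id.
let nextId = 0;
const pending = new Map();
self.onmessage = ({ data: { id, result, error } }) => {
  const { resolve, reject } = pending.get(id);
  pending.delete(id);
  error ? reject(new Error(error)) : resolve(result);
};
function dbCall(method, ...args) {
  return new Promise((resolve, reject) => {
    pending.set(nextId, { resolve, reject });
    self.postMessage({ id: nextId++, method, args });
  });
}
// usage inside the worker: const doc = await dbCall('get', 'some-doc-id');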
I found this adapter plugin, which I guess counts as the "PouchDB trick" I was after: https://github.com/pouchdb-community/worker-pouch
It was trivial to add (see below), and has been used in production for 6-7 weeks, and appears to have fixed the problems we saw. (I say appears, as it is quite hard to see it having any effect, and we didn't have a good way to reproduce the slowdown problems users were seeing.)
const PouchDB = require('pouchdb-browser').default
const pouchdbWorker = require('worker-pouch')
PouchDB.adapter('worker', pouchdbWorker)
The real code is like this, but usePouchDBWorker has always been kept as true:
const PouchDB = require('pouchdb-browser').default
// const pouchdbDebug = require('pouchdb-debug')
if (usePouchDBWorker) {
  const pouchdbWorker = require('worker-pouch')
  PouchDB.adapter('worker', pouchdbWorker)
}
This code is used in both web app and Electron builds. The web app is never used with older web browsers, so read the github site if that might be a concern for your own use case.
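For reference, once the adapter is registered as above, the database itself is opened by selecting that adapter; a sketch following the worker-pouch README (database and document names invented):
const db = new PouchDB('appdb', { adapter: 'worker' })
// The usual PouchDB API is unchanged; the work now happens off the main thread.
db.get('some-doc-id').then(doc => console.log(doc))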

React/Redux Large Real Time Data List Performance

Currently I have a large list of data (500 rows) in which many records can be updated each second. I'm using Firebase's Realtime Database. I am using React and Redux, and basically whenever a record is changed, I fire a dispatch to update the state in my app. When many records are being updated it slows down and almost crashes the browser.
I've narrowed my performance issues down to dispatching 200+ actions at once. But since the updates arrive over websockets/Firebase, I have no way of receiving them in groups.
I am wondering if there is a library to use that will queue the dispatch requests and update the state one at a time, in order. Instead of trying to do it all at once.
Are these issues by any chance occurring in development with Redux dev tools also running?
Redux is fairly optimised to handle large data sets (particularly if you normalise your data structure). However, if you are dispatching a large number of actions and also have a large amount of data in your Redux store, using Redux dev tools can give a somewhat false impression of poor performance.
In a production build of your application there would only ever be one instance of your Redux state at a particular moment. Hence the first of the three Redux principles: single source of truth.
The difference whilst using Redux dev tools in development, however, is that the dev tools keep a history of your actions and of the state after each dispatch you trigger. This can lead to large amounts of data filling up your browser's memory and thus give the impression of poor performance.
You can also take a look at the Redux documentation on performance which has several further suggestions for how you can optimise your application.
If you would also like to show us how your data is structured in your reducer, or how you are handling your action dispatches, perhaps we can make further suggestions to improve your performance.
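On the queueing idea from the question: rather than a dedicated library, a small buffer that collapses a burst of socket updates into one dispatched action is often enough. A minimal sketch (action type and handler names are made up):
var buffer = [];
var flushScheduled = false;

function onRecordChanged(record) {  // called once per Firebase update
  buffer.push(record);
  if (!flushScheduled) {
    flushScheduled = true;
    requestAnimationFrame(function() {  // flush at most once per frame
      store.dispatch({ type: 'RECORDS_UPDATED', payload: buffer.splice(0) });
      flushScheduled = false;
    });
  }
}
The reducer then applies the whole array in one pass, so the store notifies subscribers once per frame instead of once per record.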

Javascript memory leak in recursive polling method

Actual problem that needs solving: I'm working on a large application, the initial release of which was about two years ago. We're now adding a new page to the application and noticing some odd behavior. The new screen is an "always-on" status screen, meaning it is the default screen in the app, and the dedicated PC the app runs on will always display it by default. After a certain amount of time (only a few minutes in IE, usually much longer in Chrome) things start misbehaving. First, the animation of the scrolling messages (if any) becomes choppy and slow, eventually to the point that they appear to move about 1 pixel/second. The choppiness begins within minutes in IE on the machines we use, and within a couple of hours it has slowed to a crawl. By that time, the other odd behavior has started: the browser itself is slow to react. There is a menu/login button on this screen, and there is a delay of 3-4 seconds before anything happens when it is clicked. Other visual elements have a similar delay before updating, even though they have no interaction with the user.
Others on the team and I have spent several days looking at everything on this page, and we think we have narrowed the cause down to what appears to be a memory leak within the service we use for polling data. It appears on every page on which we use the service, but we believe the symptoms are only an issue on the new screen because of the large number of visual cues (scrolling, updating icons/colors, etc.), many of which involve intensive processing/graphing that runs every cycle when the data is updated, and because no one is likely to have left the other screens up and running for the length of time it may take to start seeing symptoms on a less hard-working page.
I took a timeline screenshot in Chrome's developer tools, and it looks similar on each page that uses our polling service.
I created a demo of just the polling and got a similar graph. It appears a little less severe in the demo, but the pattern is obviously similar, and it looks like a memory leak. How can we resolve the memory leak and eliminate our other issues?
Some relevant code:
var timer;

var reload = function() {
  $http({
    method: 'GET',
    url: 'api.txt',
    timeout: 5000
  })
  .success(function(response) {
    //do stuff
  })
  .error(function(data) {
    //do other stuff
  })
  .finally(function() {
    // schedule the next poll only after the current one settles
    timer = $timeout(reload, 1000);
  });
};
Are you using closures somewhere in your code?
These types of problems often occur with heavy use of closures, because a closure is one of the places in JavaScript where objects are kept from being destroyed; if you accumulate too many closures, you will eventually run short of memory.
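One concrete thing worth checking, sketched on the assumption that the code above lives in an AngularJS controller or directive with access to a scope: if the view can be torn down and recreated, each instance leaves its own reload chain running forever unless the pending $timeout is cancelled.
$scope.$on('$destroy', function() {
  $timeout.cancel(timer);  // stop the polling chain so its closures can be collected
});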

Save or destroy data/DOM elements? Which takes more resources?

I've been getting more and more into high-level application development with JavaScript/jQuery. I've been trying to learn more about the JavaScript language and dive into some of its more advanced features. I was just reading an article on memory leaks when I came across this section:
JavaScript is a garbage collected language, meaning that memory is allocated to objects upon their creation and reclaimed by the browser when there are no more references to them. While there is nothing wrong with JavaScript's garbage collection mechanism, it is at odds with the way some browsers handle the allocation and recovery of memory for DOM objects.
This got me thinking about some of my coding habits. For some time now I have been very focused on minimizing the number of requests I send to the server, which I feel is just good practice. But I'm wondering if sometimes I go too far. I am largely unaware of the kinds of efficiency issues/bottlenecks that come with the JavaScript language.
Example
I recently built an impound management application for a towing company. I used the jQuery UI dialog widget and populated a datagrid with specific ticket data. Now, this sounds very simple on the surface... but there is a LOT of data being passed around here.
(and now for the question... drumroll please...)
I'm wondering what the pros/cons are for each of the following options.
1) Make only one request for a given ticket and store it permanently in the DOM, simply showing/hiding the modal window; this means only one request is sent out per ticket.
2) Make a request every time a ticket is open and destroy it when it's closed.
My natural inclination was to store the tickets in the DOM, but I'm concerned that this will eventually start to hog a ton of memory if the application goes a long time without being reset (which it will).
I'm really just looking for pros/cons for both of those two options (or something neat I haven't even heard of =P).
The solution here depends on the specifics of your problem, as the 'right' answer will vary based on length of time the page is left open, size of DOM elements, and request latency. Here are a few more things to consider:
Keep only the newest n items in the cache. This works well if you are only likely to redisplay items in a short period of time.
Store the data for each element instead of the DOM element, and reconstruct the DOM on each display.
Use HTML5 Storage to store the data instead of DOM or variable storage. This has the added advantage that data can be stored across page requests.
Any caching strategy will need to consider when to invalidate the cache and re-request updated data. Depending on your strategy, you will need to handle conflicts that result from multiple editors.
The best approach is to start with the simplest method, and add complexity to improve speed only where necessary.
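To make the first two suggestions concrete, a sketch (URL, names, and the cache size are hypothetical): cache the data for the newest n tickets rather than their DOM nodes, evicting the oldest entry once the cache is full.
var MAX_CACHED = 20;
var ticketCache = {};  // ticket id -> ticket data
var cacheOrder = [];   // ids in the order they were cached, oldest first

function getTicket(id, callback) {
  if (ticketCache[id]) {
    callback(ticketCache[id]);  // cache hit: no request needed
    return;
  }
  $.getJSON('/tickets/' + id, function(data) {
    ticketCache[id] = data;
    cacheOrder.push(id);
    if (cacheOrder.length > MAX_CACHED) {
      delete ticketCache[cacheOrder.shift()];  // evict the oldest ticket
    }
    callback(data);
  });
}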
The third path would be to store the data associated with a ticket in JS, and create and destroy DOM nodes as the modal window is summoned/dismissed (jQuery templates might be a natural solution here.)
That said, the primary reason you avoid network traffic seems to be user experience (the network is slower than RAM, always). But that experience might not actually be degraded by making a request every time, if it's something the user intuits involves loading data.
I would say number 2 would be best. Because that way if the ticket changes after you open it, that change will appear the second time the ticket is opened.
One important factor is the number of redraws/reflows that are triggered by DOM manipulation. It's much more efficient to build up your content changes and insert them in one go than to do it incrementally, since each increment causes a redraw/reflow.
See: http://www.youtube.com/watch?v=AKZ2fj8155I to better understand this.
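As a small illustration of that advice (the element id and data shape are invented): build the new rows in a detached DocumentFragment and append them once, so the browser reflows a single time instead of once per row.
var fragment = document.createDocumentFragment();
rows.forEach(function(row) {
  var li = document.createElement('li');
  li.textContent = row.label;
  fragment.appendChild(li);  // detached: no reflow yet
});
document.getElementById('ticket-list').appendChild(fragment);  // one reflow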
