I'm currently debugging an AngularJS-based app. I have some runtime speed issues on the client side and want to analyze why.
I use the Chrome DevTools profiler. I can see that some events (e.g. keypress, blur) take a lot of time (see screenshot below).
Now I would like to go deeper and find out which source code contains these event listeners and causes my application to slow down like this.
For information, the app is very slow when I type text into an input and when I focus/blur that input; I suspect that some watchers could be causing the slowdown, but I'm not sure.
I hope a deeper profiler analysis can help!
--- Edit 25 Feb 2020 ---
I think my problem is linked to the digest cycle ($apply/$digest, etc.).
I found this plugin: digest-hud. After several tries, it seems that a binding called "source" (which is used in a lot of components) is taking up almost all of the digest time.
Digest-hud was really helpful. I couldn't find a way to trace exactly where the function was initially called from on the call stack; as Kresimir Pendic said, it was probably a map issue.
But I found a lot of bindings/watchers on "source", and one of them was called on every single focus/blur/typing event. So I removed it, found another way to signal changes from within the input, and it works.
So don't hesitate to check with digest-hud (disclaimer: I'm not related in any way to the digest-hud developer(s)) if you have performance issues with your AngularJS app; it will give you some hints for solving the problem.
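To give an idea of the shape of the fix (the names below are made up for illustration, not my actual code): the expensive deep watcher on "source" went away, and the input now signals changes explicitly and debounced, so focus/blur/typing no longer trigger extra work on every digest.

// Before: a deep watch whose comparison ran on every digest
// $scope.$watch('source', function (newVal) { /* react to change */ }, true);

// After: an explicit, debounced change handler on the input itself
angular.module('app').component('sourceEditor', {
  template:
    '<input ng-model="$ctrl.source" ' +
    'ng-model-options="{ debounce: 250 }" ' +
    'ng-change="$ctrl.onSourceChange()">',
  controller: function () {
    this.onSourceChange = function () {
      // react to the new value of this.source here
    };
  }
});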
What is the advantage of using shared module over rewriting code in each component/module in Angular?
In my project I have approximately 30-40 modules. In every module's service file, the same API code is written. As per Angular standards, we should use a SharedModule so that code can be reused. I want to update my Angular project, but before that I wanted to understand: what is the advantage of using a shared module over rewriting the code? How will it help my Angular project?
As per Angular standards, we should use a SharedModule
This isn't just an Angular standard. It's standard practice in any field, let alone in software development.
The phrase exists: "don't reinvent the wheel".
Literally - car needs new tyres? You're not going to design whole new ones; you'll grab some off the shelf and shove them on.
Same applies - 7 places in your app that need to make API requests? Don't design and write 7 whole new ones, use the one you've already made.
Design principle: DRY - Don't Repeat Yourself.
This is especially important with code. You say you have 30-40 modules. Each with their own copy/paste version of some API service.
What happens when authentication is added/removed/modified for that API? Suddenly need to add some token into the header for your requests?
30-40 copy/paste jobs after you've made the change. 30-40... you can't even give us an exact number! How do you know you replaced ALL of them successfully?
Why on Earth would you do that to yourself when you can just keep reusing the one original thing you made?
30-40 modules all use that one API service. One place to make any fixes/changes. One service to test.
Oh lawd the testing - of which I'm nearly 100% certain you have zero tests, and any you do have are likely ineffectual and definitely don't cover nearly as much as you should have covered.
That's 30-40 test classes that you need to update as well (let me guess - copy paste those too?).
And that's just a single mentioned API service. What do you do if you write yourself some kind of helper methods for something in your app?
"Oh, I got fed up of writing these same 5 lines to do x, so I wrote a method to do it for me, it makes it much faster".
Cool - copy paste that another 30-40 times for me into all our other modules so that we can use it too. Thanks.
Put that shizzle into your shared module. One helper class. One class to write tests around. One class to change for additions/fixes. Zero copying and pasting and wasting time and missing things.
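To be concrete, that "one thing" can be as small as this (the names below are made up; adjust to your own app):

// shared/api.service.ts - the ONE place your HTTP calls live
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' }) // a single instance, available app-wide
export class ApiService {
  constructor(private http: HttpClient) {}

  get<T>(path: string): Observable<T> {
    // base URL, auth headers, error handling: change it HERE, once
    return this.http.get<T>(`/api/${path}`);
  }
}

// shared/shared.module.ts - reusable components, pipes and directives go here
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

@NgModule({
  imports: [CommonModule],
  // declarations: [YourReusableComponent],
  // exports: [YourReusableComponent, CommonModule],
})
export class SharedModule {}

Every other module just imports SharedModule and injects ApiService; not one of them contains a copy.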
Ignoring alllllll of this, how the bejeesus have you managed to spend days/weeks/months repeating yourself over and over, copying/pasting over and over, and god knows what else over and over... and not once thought "this is a lot of effort, maybe I can save some here by doing something smarter"?!
This isn't even a thought-provoking or discussion-inspiring question. It's a question that comes down to one's basic common sense and the long-standing human desire to do as much or more with the same or less effort.
Why'd we figure out farming? Because hunting around the whole area for a few berries was more effort.
Why'd we hook animals up to our ploughs? Because it's hard work and we're lazy.
Why'd we replace animals with tractors? Because they can do it better.
Why're we replacing traditional farms with those swanky 'vertical' farm things? Because they're more efficient, can be automated more, etc.
Stop copying and pasting chunks of anything.
The millisecond you do anything for a second time, you refactor that away into a single thing that both can use.
I sincerely hope that you are currently a student and/or just starting out (self taught?). If so, welcome! Keep asking questions, keep hitting Google for your answers (where you'll find better than I can provide), and keep learning. My code was just as bad (worse, likely) back at uni.
If you're not, and are actually a 'seasoned' software developer of some kind, where people are paying you to do this... Please stop, take up farming, and let us all know what you've worked on to date so that we can immediately stop using any of it.
I'm having an issue with identifying bottlenecks in render performance while working on a JSON viewer. With few elements, it performs well, but at a certain point it becomes annoyingly slow.
Checking the profiler, it seems that elements are rendering fast enough, but I've noticed a few issues that I'm not sure how to pursue.
Overview
The app is a JSON viewer which allows you to expand / minimize all elements at once, as well as individual elements.
Performance is fine with few elements, but seems to decrease dramatically as the number of elements increases.
When I profile my object filter method with performance.now() and check the render time in React DevTools, the figures seem okay, though I could be interpreting them wrong.
I've tried using React.memo() on stateless elements (particularly the key/value pair, which is the most frequently rendered component), but it doesn't seem to improve the performance noticeably. Admittedly, I'm not sure I understand the reasoning behind memoizing React components well enough to implement it usefully.
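For reference, this is roughly what I tried (the component and prop names here are simplified, not my exact code):

import React from 'react';

// Leaf component that renders a single key/value pair
const JsonEntry = ({ name, value }: { name: string; value: string }) => (
  <li>
    <span>{name}: </span>
    <span>{value}</span>
  </li>
);

// Re-render only when name/value actually change (shallow prop comparison)
export default React.memo(JsonEntry);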
Implementation
Currently, my app loads data into a parent which feeds into a component that loads the JSON tree using a recursive element.
Loading a JSON feed from a URL changes the state of the parent component, and that state is filtered using a helper method based on the values entered into an input field.
Issues
Two interactions reproduce the slow response time, even with not-so-big JSON documents:
The expand all button
The first few keypresses on a filter query
With the current implementation, both filtering and expanding all trigger a display: none change on the child elements, and the behavior leads me to believe I'm handling this use case inefficiently.
Reproduction Steps
The code is available here: https://codesandbox.io/s/react-json-view-4z348
With a production build here (not performing any better): https://csb-4z348.vercel.app/
To reproduce the issue, play around with the Expand All function (plus sign next to filter input) and some filter inputs.
Then, try loading a JSON feed with more elements (you can test on my GitHub API feed) and try filtering/expanding all. Notice the major performance hit.
What I've noticed
When logging from useEffect, minimizing seems to cause ~2x as many re-renders as expanding all.
As the filter input becomes more specific, the performance (logically) improves because fewer elements are being rendered.
Question
While I would appreciate a nudge in the right direction for this specific case, what I'm most curious about is how best to identify what is causing these performance issues.
I've looked into windowing the output, but it's not my first choice, and I'm pretty sure I'm doing something wrong, rather than the cause being too many elements rendered.
I appreciate your time, and thank you in advance for any tips you could provide!
It seems I've answered my own question. The problem was a reconciliation issue caused by using a UUID as the key prop in my child components, which made React remount them every time the minimize state changed. From the docs:
Keys should be stable, predictable, and unique. Unstable keys (like those produced by Math.random()) will cause many component instances and DOM nodes to be unnecessarily recreated, which can cause performance degradation and lost state in child components.
I'll leave the steps here for anyone else who runs into this issue.
After (too long spent) digging around in the performance profiler, I noticed that each time I minimized or expanded the elements, each child was being mounted again. After consulting Google with a more specific query, I found this blog post and realized that I was committing this flagrant performance error.
Once I found the source of the problem, I found many other references to it.
After fixing the key prop, interaction time got ~60% faster for minimize/expand all.
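Conceptually, the fix was this kind of change (names are illustrative; JsonNode stands in for my recursive child component):

import React from 'react';

// Stand-in for my recursive child component
const JsonNode = ({ entry }: { entry: { path: string; value: unknown } }) => (
  <li>{entry.path}: {String(entry.value)}</li>
);

// Before: key={uuid()} generated a fresh key on every render, so React
// unmounted and remounted every child whenever the minimize state changed.
// After: a key that is stable across renders, e.g. the node's JSON path.
function JsonList({ entries }: { entries: { path: string; value: unknown }[] }) {
  return (
    <ul>
      {entries.map((entry) => (
        <JsonNode key={entry.path} entry={entry} />
      ))}
    </ul>
  );
}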
Finally, I memoized some other components related to the instant filter, and it now seems to perform as well as I'd like for the time being.
Thanks to anyone who took a look at this in the meantime, and I hope it's helpful for anyone who might come across this.
In my script, I am trying to locate and click one of the many document links, with this syntax:
cy.wait(3000); cy.get('a[href^="/articleDetail/"]').first().click();
I got this error:
CypressError: Timed out retrying: Expected to find element: 'a[href^="/articleDetail/"]', but never found it.
The issue is that this happens only some of the time, not every time - roughly 3 out of 5 runs. How should I solve this?
Testing it via the Selector Playground (as N. suggested) is a good step. What you can also do is investigate the screenshots Cypress takes on failure. They show the exact state of the application at the moment the failure happened, which usually gives a good hint about the problem.
Besides that, you can also try setting the wait to an absurdly high value like 10000. If Cypress can find the element in that case, the application is slow and Cypress simply isn't waiting long enough.
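For example, instead of a blind cy.wait(3000), give that one query more time to succeed (10000 here is just a test value):

// Retry this query for up to 10 seconds before failing
cy.get('a[href^="/articleDetail/"]', { timeout: 10000 }).first().click();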
For various reasons (internet speed, CPU, memory, errors), your page could take longer to load, or not load at all. As a good practice, your page should have a loading indicator that is shown until the page is completely rendered. That way you can use something like cy.get('your-loading-element').should('not.be.visible'), which will hold the next command until the loading indicator disappears.
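A rough sketch of that pattern, assuming '.loading-spinner' stands in for whatever loading indicator your app shows:

// Hold the test until the loader has gone away, then interact with the link
cy.get('.loading-spinner').should('not.be.visible');
cy.get('a[href^="/articleDetail/"]').first().click();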
A fixed wait is not the right approach, as you never know exactly how long the page will take, and raising the time will only delay your tests.
It is very important to think of your tests the same way a test analyst would execute them, and one of their steps would be to wait for the page to be rendered.
Here are some good testing practices: UI test automation good practices
Since Angular 2 i18n is not ready yet, most people build it themselves. So do I. I know there's a package on GitHub, but I'd rather wait until the framework comes up with something nice, or maybe I'll even help. But for now:
What has a bigger impact on the performance of a big Angular 2 application?
<p [innerHTML]="l["textKey1"]"></p>
or
<p>{{l["textKey1"]}}</p>
or
<p>{{l.textKey1}}</p>
As I understand it, the 2nd and 3rd ones cause two-way binding, while the first one is one-directional. I'm sure that makes for a performance increase, but does the innerHTML binding hurt it again in some way? If not, could we write a directive that would then look like this:
<p [lang]="l.textKey1"></p>
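Something along these lines is what I have in mind - a completely untested sketch, where TranslationService is just a placeholder for whatever holds the strings (and in practice a selector that doesn't shadow the native lang attribute would probably be wiser):

import { Directive, ElementRef, Input } from '@angular/core';
import { TranslationService } from './translation.service'; // placeholder lookup service

@Directive({ selector: '[lang]' })
export class TranslateDirective {
  constructor(private el: ElementRef, private translations: TranslationService) {}

  @Input()
  set lang(key: string) {
    // Look up the text once per key change and write it straight to the DOM,
    // so there is no interpolation expression to re-evaluate afterwards.
    this.el.nativeElement.textContent = this.translations.get(key);
  }
}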
Has anyone experimented with this before?
We are using Backbone.js and having issues when running our WebDriver tests. We are getting the following error:
org.openqa.selenium.StaleElementReferenceException: Error Message => 'Element does not exist in cache'
Our understanding is that this is caused when we find an element and then execute an action on it (e.g. click()), but the element we found has gone 'stale'; we suspect the element has been re-rendered or modified.
We have seen lots of solutions that we are not keen on:
Use Thread.sleep(...). We don't want explicit sleeps in our code
Using a retry strategy, either as a loop or try-catching the StaleElementReferenceException. We feel this is not the right/clean solution, and is prone to breaking in the future
Some people are using WebDriverWait and waiting until some JavaScript function execution returns true. We have seen people wait for notifyWhenNoOutstandingRequests(callback) in Angular, but can't find anything obvious for Backbone.
We are hoping there is a clean solution that does not involve explicit sleeping, or some form of looping. Any thoughts?
I looked into WebDriverWait a bit more, and I think I've come up with a combination of expectations that works for us:
wait.until(refreshed(elementToBeClickable(...)));
The refreshed expectation is a wrapper for other expectations that deals with StaleElementReferenceException, and the elementToBeClickable expectation checks the element is clickable. What is interesting is that looking at the source for the built in expectations, some of them deal with StaleElementReferenceExceptions, while others don't (e.g. presenceOfElementLocated) and need to be wrapped in the refreshed expectation, so I think that's what initially threw me off when I first looked at WebDriverWaits.