I'm looking for help automating performance testing of my single-page Angular application. We are using Protractor for E2E tests and would like to add performance tests. Our first target is to be able to measure simple timings, e.g. between a button click and an SVG finishing loading. (We have requirements stating that the load time must be less than 2 seconds, so we need to assert these things.)
My first idea was to use browser-perf/protractor-perf. Unfortunately, protractor-perf doesn't seem to work with the latest Chrome version, and in general browser-perf just measures page load times, which won't change in a single-page application.
My latest idea is to simply use performance.now() and measure the times "manually". This has the big disadvantage that it is not supported on iOS Safari. (I need the tests to run on an iPad, too.)
So my question is: does anyone have a good idea how I can include performance measurements in my Protractor tests, measuring time intervals like the one mentioned above?
The latest version of browser-perf now works with the latest version of Chrome. You should simply reinstall protractor-perf and it should start working. This was fixed recently - https://github.com/axemclion/browser-perf/issues/31.
Also note that browser-perf measures things like frame rates, area painted, etc., which may be useful for single-page apps.
I wish to create a script that can interact with any webpage with as little delay as possible, except for network delay. This is to be used for arbitrage trading, so any tips in this direction are welcome.
My current approach has been to use Selenium with Python, as I am a beginner in web scraping. However, I have investigated some approaches, and from my findings it seems that Selenium has considerable delay (see benchmark results below). I mention other things besides it in this question, but the focus is on Selenium.
In general, I need to send a BUY or SELL request to whatever broker I am currently working with. Using the web browser client, I have found a few approaches to do this:
1. Actually clicking the button
a. Use Selenium to click the button with the corresponding request
b. Use Puppeteer to click
c. Use Pynput or other direct mouse-input manipulation
2. Reverse-engineering the request and sending it directly without clicking any buttons
Now, the delay in sending this request needs to be minimized as much as possible. Network delay is out of our control for the purpose of this optimization.
I have benchmarked the approaches for 1 by opening a page in the standard way with either Puppeteer or Selenium, then waiting for a few seconds. While the script waits, I injected the following code into the browser:
$x('//*[@id="id_demo_webfront"]/iframe')[0].contentDocument.body.addEventListener('click', (data => console.log(new Date().getTime())), true);
The purpose of this code is to log to the console the time at which the click is registered by the browser.
Then, in my Python (Selenium, pynput) / JavaScript (Puppeteer) script, I log the current time right before issuing the click. I am running on Ubuntu 18.04 as opposed to Windows, so my system clock should have good resolution. Additional reading about this here and here.
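Once you have the two timestamp streams (issue times from the driving script, registration times from the injected listener), the per-click delays and their average can be computed with a couple of small helpers; the numbers in the usage below are made up for illustration.

```javascript
// Pair click-issue timestamps (logged by the driving script) with
// click-registered timestamps (logged by the injected in-page listener)
// and summarize the per-click delay.
function clickDelays(issuedTimes, registeredTimes) {
  if (issuedTimes.length !== registeredTimes.length) {
    throw new Error('timestamp streams have different lengths');
  }
  return issuedTimes.map((t, i) => registeredTimes[i] - t);
}

function meanDelay(delays) {
  return delays.reduce((sum, d) => sum + d, 0) / delays.length;
}
```

For example, issue times [100, 200] and registration times [180, 270] give per-click delays of [80, 70] and a mean of 75 ms.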
Actual results (all runs were on the same webpage, multiple times, roughly 10 clicks each):
1. a. ~80ms
1. b. ~10-30ms
1. c. ~5-10ms
For 2, I haven't reliably tested the delay. The idea behind this approach is to inject a function that, when fired, sends a request that exactly resembles the one that would be sent when the button is clicked. I have not tested the delay between issuing the command to run such an injected function and it actually running, but I expect this approach to be even faster, basically creating my own client to interact with whatever API the broker has on the backend.
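A sketch of what approach 2 might look like. The endpoint URL and payload fields below are invented; the real ones would come from inspecting the request the button's click handler actually sends (e.g. in the browser's network tab). The transport is injectable so the sketch can be exercised without a network.

```javascript
// Sketch of approach 2: build the same payload the button's click handler
// would send and POST it directly. URL and field names are hypothetical.
function buildOrder(side, symbol, qty) {
  if (side !== 'BUY' && side !== 'SELL') {
    throw new Error('side must be BUY or SELL');
  }
  return { side, symbol, qty, ts: Date.now() };
}

// fetchImpl is injectable so the sketch is testable without a network;
// in the browser you would pass window.fetch.
async function sendOrder(order, fetchImpl) {
  return fetchImpl('https://broker.example/api/orders', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(order),
  });
}
```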
From the results, it is pretty clear that issuing mouse commands is the quickest, but it is also the hardest for me to implement reliably. Also, I find Puppeteer running faster across the board, but I prefer Selenium in Python for ease of development, and I was wondering whether there are any tips or ideas to speed up the delays I am experiencing.
Summary:
Why does Selenium with Python have such a delay to issue commands and can it be improved?
Why does Puppeteer seem to have a lower delay when interacting with the same browser?
Some code snippets of my approach:
class DMMPageInterface:
    # These are part of the interface script; self.page is the webpage after initialization
    def __init__(self, config):
        self.bid_price = self.page.get_element('xpath goes here')
        ...

    # Mostly only the speed of this operation matters, sending the trade request
    def bid_click(self):
        logger.debug("Clicking bid")
        if USE_MOUSE:
            mouse.position = (720, 390)
            mouse.click(Button.left, 1)
        else:
            self.bid_price.click()
Quite a few questions there!
"Why does Selenium with Python have such a delay to issue commands"
I don't have any directly referenceable evidence for this, but the Selenium delay is probably down to the amount of work it has to do. It was created for testing, not for performance. The difference is that the most valuable property of a test is that it MUST run reliably; if it's slower and does more checks in order to be more reliable, then so be it. You still get a massive performance gain over manual testing, and that's what we're replacing.
This is where you'll need the reliability, and I see you mention this need in your question too. It's great if you have a trading script that can complete an action in <1 s, but what if it fails 5% of the time? How much will that cause problems?
Selenium also needs to render the GUI, and then, depending on how you identify your objects, it needs to scan the entire DOM for what you want. Generic locators will logically take longer.
Selenium's greater power is its simplicity and the synchronisation it provides. There's lots of talk in lots of SO articles about WebDriverWait (which is great for in-page script sync), but there's also good recognition of page-load wait times; i.e. there's no point pushing the button before all the source is loaded.
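The synchronisation WebDriverWait provides boils down to polling a condition until a deadline. A stripped-down sketch of that core idea, with the browser specifics removed (the names here are mine, not Selenium's):

```javascript
// Poll-until-true helper: the core idea behind WebDriverWait and
// ExpectedConditions, minus the browser. Polls `condition` every
// `intervalMs` until it returns a truthy value or the deadline passes.
async function waitFor(condition, timeoutMs = 2000, intervalMs = 20) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const value = await condition();
    if (value) return value;
    if (Date.now() > deadline) {
      throw new Error(`waitFor: condition not met within ${timeoutMs} ms`);
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}
```

In a real test the condition would be something like "the element is present and visible"; here it can be any async predicate.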
"Why does Puppeteer seem to have a lower delay when interacting with the same browser?"
Puppeteer works with the Chrome DevTools protocol; Selenium works with the DOM. So the two interact with the browser in different ways. Obviously, Puppeteer is Chrome-only, but Selenium can drive almost any browser. That might not be a problem for you if you don't care about cross-browser capability.
However - how much faster is it really?
In your execution timings, you might also want to factor in the whole life cycle of what you aim to do, namely browser load time and page render time. From your story, you want to buy or sell something. So you kick off the script: how long does it take from pressing GO to the end result? You will most likely need synchronisation, and that's where Puppeteer might not be as strong (or may take you much longer to get a handle on).
You say you want to "interact with any webpage with as little delay as possible", but you have to get to the page first and get it into the right state. I wouldn't be concerned about milliseconds per action. Kicking open Chrome isn't the quickest action in itself, and your overall millisecond test becomes seconds to minutes.
When you have a full Selenium script taking 65 seconds and a Puppeteer script taking 64 seconds, the overall consideration changes :-)
Some other options to think about if you really need speed:
Try running Selenium headless: no GUI, fewer resources, runs faster. (Google it.) :-)
Try other browsers
If you're all about speed, drop the GUI entirely. Create an API script, or start investigating the use of performance tools (for example JMeter or LoadRunner).
Another option that isn't Selenium or Puppeteer is JavaScript directly in the browser
Try your speed tests with other free web tools like Cypress or Microsoft's Playwright; they interact with the JS in the browser directly and boast some excellent inbuilt stability features.
If it were me, and it was about a single performant action, my first port of call would be doing it at the API level. There are lots of resources and practice sites out there to help you get started, and with no browser to render there's barely anything more than the network delay. That's the real way to keep it a sub-second script.
Final thought, when you ask "can it be improved": if you have some code, there may be other ways to make it faster. Share your script so far and I'm sure folks will love to chip in their 2 cents on how you can streamline it :-)
My Angular app runs terribly slowly on mobile, so I ran a test and found that JavaScript execution time (after bundle.js gets loaded) is the main bottleneck. There's a huge difference between desktop and mobile.
What might be causing the problem and what might be the possible solutions?
Test for desktop
Test for mobile (Moto G)
You can see the JS execution time, denoted by the purple bar.
Please have a look at the page speed suggestions by Google for your site.
https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Faiesec.org&tab=mobile
Try to make the changes mentioned and test again to see if it makes it any better.
Move the render-blocking JS to the footer. That's the one thing I noticed when I checked the source from my mobile.
Please have a look at the below question
How can I improve load performance of Angular2 apps?
I have seen this initial slowness on many Angular apps even when they have been optimized with a production build. Please have a look; it may help.
Update 2018-03-01:
This issue has apparently been fixed in version 66. I tested a number of pages in Android Chrome Dev and Canary, and observed no crashes even during long and/or heavy runs. I never did find a workaround; however, 66 should be hitting Chrome Beta soon, and Stable not too long after, so normal users should be able to run these pages again in the regular version of Chrome on Android within a few(ish) weeks. Incidentally, this is the same update in which Chrome will fully discontinue trust in Symantec-CA-based HTTPS/SSL certs. I have noticed it seems slightly janky in comparison to before, but I will have to test, as this could be merely subjective on my part, or due to changes in my code from looking for workarounds. Further info can be found in the Chromium bug report linked below.
** Note: If you've followed this post... I have to admit I thought I had a workaround by rounding values to only two decimals out. However, I later realized this was causing a side effect (because of an optimization I had made) of scale3d-ing some elements to 0,0,0, essentially causing them not to be rendered, and thus not triggering the problem for a long time. When I temporarily removed the scaling, those elements rendered again and the bug was triggered. Deep and humble apologies, but the problem is not solved. So, I have removed my answer. Here are some things I have tried, none of which were successful:
Rounding transform decimal values to x.xx, two digits out.
Rounding transform values to straight integers.
Using setInterval instead of RAF to see if that was a factor (ugh!).
Using a style tag and .innerHTML to set dynamic transforms via CSS instead of the DOM (this was surprisingly efficient, but still triggered a crash just the same).
I've tried narrowing it down to particular transform parameters, numerical patterns, element nesting depths, just about anything I can think of to get around triggering a crash, and it looks like there is only one pattern: every time you do a transform, there is a chance of a crash. It's a seemingly tiny probability, but doing scripted animations, even if you optimize to prevent repeats of identical transforms, makes the crash inevitable. I can make it take longer, but I can't prevent it.
Now, I've seen similar reports all over the web regarding recent problems with CSS in Chrome since November, when the version 62 update was pushed to the stable channel. This last test, using a dynamically generated, embedded stylesheet to update the transforms, was especially disheartening. There doesn't seem to be a way around it except to wait for version 67, when chromium.org says the fix should come out. That is months away for the typical Android user.
This problem is not limited to scripted animations either. I've seen similar issues reported with CSS animations and transitions as well.
I've gone to the extent of trying a number of my animation engines from the past, all of which worked beautifully on version 61 and back to when RAF first became available. I've even written new, simple test engines just for this. I've tried a number of other developers' engines. Still crashes.
At this point, I think the only thing that can be done is to wait for the fix, and possibly get enough attention from enough people to hope they will up the priority.
They said in the ticket that it's a problem with a code optimization. I would really like to see them revert that portion in version 65 before it hits the stable channel, so the average user will not see this problem soon. There's still time to generate another build and get it out before then.
Anyone who would like to see this happen, please go to the link below and put your two cents in on the ticket at chromium.org.
PREVIOUS UPDATE: This issue has now been tested and confirmed at chromium.org as a bug in the GPU rasterization code, affecting Chrome for Android up to version 66 and appearing after build 62.0.3197.0 (the last unaffected build). Their engineers are now working on a fix.
I am leaving this open for anyone who has run into this issue, so they don't think it's a problem with their code, and in case anyone reads it who can either contribute to the fix or offer a reasonable workaround until a patched build is released. If anyone does find a workaround, please provide it as an answer here.
For those interested, the link to the ticket is below. Here is an excerpt from the bug report:
Tested the issue in Android and able to reproduce the issue.
Steps Followed:
1. Launched the Chrome Browser.
2. Navigate to the URL: https://keithclark.co.uk/labs/css-fps/
3. Tap on any of the tab and try to access the game
4. Observed that Chrome gets hanged.
Chrome versions tested:
64.0.3282.116 Stable/Beta, 65.0.3325.16 (dev) 66.0.3328.0(Canary)
OS: Android 8.1.0
Android Devices: Pixel
Using the per-revision bisect, providing the bisect results: Good Build - 62.0.3197.0 (497604), Bad Build - 62.0.3198.0 (497674).
You are looking for a change made after 497614 (GOOD), but before 497615 (BAD).
Previously:
Not quite sure if this is a JavaScript, CSS, or Android problem, but here goes...
Basic question: What do I need to do to keep my pages with scripted RAF 3D transforms from locking up Chrome versions 62 and above on Android? Is anyone else having this problem?
**Note: This problem may actually be a Chromium issue. I have opened a bug report with Chromium Project. Those interested can view the ticket here: https://bugs.chromium.org/p/chromium/issues/detail?id=805525
I am leaving this open for now so they can view the full description, and in case it turns out I can fix my own pages with a change to my code. But please, read on... ;)
Background:
My pages worked great on 61 and below. I manually updated Chrome on my Samsung Note 3 to version 63 stable recently, and found that every page (not just mine) with complex, nested 3D transforms running on a requestAnimationFrame loop would lock up the browser. A perfect example is a page from Keith Clark, who has a CSS/HTML/JavaScript proof-of-concept demo of a first-person shooter. The mobile version worked great on my phone before; after the update, it locks up. My pages worked really well, even on weak devices, until this update.
I've narrowed it down a bit. If I clear the cache and uninstall updates, the stock Chrome (41) runs these things great. Install updates to 63, 62, or 64 (beta): same problem. I can't remember if I tested 60 or 61, but update to version 59 and we're still golden. Firefox is fine. Opera is, well, Opera. The same updates on desktop run great.
What I'm not totally sure of is whether it's a problem with my phone. No other problems, ever. I know it's a little old, but it still blows most mobile devices off the shelf. Android version 5, Lollipop. Rooted. If any of that matters. Malware scans have always come back clean with every AV/M app I've tried. I'm very careful.
Anyway, if anyone else has had this problem, or knows of it...
What do I need to change in my code to make it compatible with current Chrome on Android? Is this a problem I need to solve with code? I've looked everywhere but can't seem to find info specifically on it. All I can say is that it's breaking my animations. I can't even use dev tools to figure it out, as running a perf check from my computer crashes my phone so badly that Chrome dies, losing the connection and any performance data gained, and taking my phone's wallpaper along with it!
I'm not picking up any errors from my script; it's fairly basic. I stripped it down to bare bones because I thought I had a runtime error, but nothing. Is there a change in the way Chrome for Android interprets the CSS, or does the layering, or something?
Sincere apologies if this ends up being off-topic, as I'm not totally sure whether this is a coding issue or just a problem with my phone in particular. If it's a coding issue, that's what I need to know, and how to fix it.
I've searched a bit for this and tried to implement a self-made solution, but so far I haven't found one I'm confident in.
What I need is to write integration tests in Ruby on Rails which interact with JavaScript and provide programmatic ways to assert some behaviors. I'm using Test::Unit for the controller/model part, but I'm struggling to test some jQuery/JavaScript behaviors used by my app. Mainly it consists of Ajax calls and interactions in the UI which update some sets of information.
I haven't found a solution which makes me confident and which integrates nicely with autotest and the whole red-green process, so for now most of my client-side code is untested, and that's making me nervous (as it should be :P).
So, does anyone have suggestions for best practices on this issue? Unit testing JS is a bit tricky, as Crockford points out, because it depends heavily on the current state of the UI, and AFAIK even he hasn't found a good way to implement decent testing...
In short: I need to implement tests for some UI behavior which depends on Ajax, integrating with autotest or some other CI tool, and I haven't found a good, elegant way to do it.
Thanks all for the attention,
Best Regards
AFAIK, outside of a combination of Capybara with Selenium WebDriver, there are very few options for automated testing of JS code.
I use Cucumber with Capybara and selenium-webdriver, and because selenium-webdriver actually launches Firefox or Chrome to go through testing a particular page with Ajax calls, it does take significantly longer to run through a suite of tests.
There are some alternatives, but they don't work all the time or for every situation.
For instance: Capybara with envjs
In April 2011 the thoughtbot guys updated their quest for javascript testing.
Akephalos has fallen out of favor for the following reasons:
Bugs: as previously mentioned, there are bugs in htmlunit, specifically with jQuery's live. Although all browser implementations have bugs, it's more useful if tests experience the same bugs as actual browsers.
Compatibility: htmlunit doesn't fully implement the feature set that modern browsers do. For example, it doesn't fully handle DOM ranges or Ajax file uploads.
Rendering: htmlunit doesn't actually render the page, so tests that depend on CSS visibility or positioning won't work.
Performance: when most of your tests use Javascript, test suites with htmlunit start to crawl. It takes a while to start up a test with Akephalos, and a large test suite can easily take 10 or 15 minutes.
So they rolled their own solution, which is open source: capybara-webkit. It's still fairly new, but it looks like the way to go now.
This article recommends Akephalos.
I have used Cucumber and Capybara with Selenium. This was very frustrating because Selenium did not seem to be able to see dynamically generated JavaScript, despite the fact that Capybara was supposed to be waiting for it. That was in January 2011; things may be different now.
Currently, I am using Cucumber and Capybara with Akephalos. So far, it has been very difficult because: 1. it is headless, so you can't see progress (Capybara's "save_and_open" call has helped to a degree); 2. jQuery and Akephalos don't seem to play that nicely together. For instance, triggering a radio button with jQuery's .change() works fine in Chrome but doesn't in Akephalos. Maybe this is intentional, because I later heard somewhere that it doesn't work in IE. I fixed the issue by using .click() instead of .change() for the radio button, but since the .change function was set up to run on a bunch of questions, I had to code specifically to get it to work for the test.
The bottom line for me is that automated JavaScript acceptance testing in a Rails environment is still immature, and possibly more work than it is worth.
After an hour or two of heavy use on the site I'm developing, Firebug develops the following problems:
Breakpoints get glitchy: it becomes difficult to add/remove breakpoints. Sometimes I click on a line multiple times, see nothing, move to the console tab and back, and then see my breakpoints again.
Console stops logging XHRs, or stops logging debug statements.
Script files become non-viewable.
I'm working with a JavaScript file which is quite large (over 10k lines). I don't think this is a memory-leak issue in my own code; I'm refreshing the page all the time. It looks like an issue on the Firebug side. Is my logic sound? Is there anything I can do to get Firebug to behave better, or do I just need to restart Firefox every hour?
Keep in mind that Firefox, while wonderful, has always had issues handling memory. You should take a look at your task manager to see Firefox's memory footprint. Additionally, I'd break that JS file up into smaller chunks if you can (for many reasons aside from this as well) to make it more readable and easier to work with the segments. Finally, turn off plugins you're not using or that may conflict with Firebug.
I've spent hours using Firebug without restarting Firefox and it never crashed. Try a clean profile, install only Firebug on it, and check whether everything works fine.
I use a separate development profile with Firebug and other dev-oriented extensions installed.
How to configure a profile is described on many sites; on my wiki you'll find a brief description.
I have similar problems! I think it is partially to do with the massive JS files. I just restart Firefox every once in a while. No big deal.