Headless HTML5 performance capture as in chrome dev tools - javascript

I've recently been working with the Chrome dev tools Performance tab to reduce my page's processing burden. I'm hoping to add a check to my CI so that this isn't an exclusively manual process going forward.
Captures provide a summary breakdown of where time went: Loading, Scripting, Rendering, Painting, and Idle (the Summary pie in dev tools).
This gives me the amount of time spent idle, and since I know how long I've run the test for, I can compute %idle. I'd like to run a headless process that evaluates the page and ultimately lets me determine %idle in the same way.
Is there a tool that can do this? I'd be happy to go as far as installing a headless Chrome browser if there's a way to get this information into a report generated via command line arguments.
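There's no single command-line flag I know of that emits the Summary pie directly, but one approach is to capture a DevTools trace headlessly with Puppeteer and post-process it. The sketch below is an approximation, not a turnkey tool: it assumes the trace file is in Chrome's JSON trace format (an object with a traceEvents array and microsecond ts/dur fields) and that top-level main-thread work shows up as complete ('X') events whose name matches RunTask, which varies across Chrome versions.

    // Sketch: capture a trace headlessly, then approximate %idle as
    // 1 - (main-thread busy time / wall-clock test duration).
    const puppeteer = require('puppeteer');
    const fs = require('fs');

    (async () => {
      const browser = await puppeteer.launch(); // headless by default
      const page = await browser.newPage();

      await page.tracing.start({ path: 'trace.json' });
      const start = Date.now();
      await page.goto('https://example.com', { waitUntil: 'networkidle0' });
      await new Promise(r => setTimeout(r, 5000)); // exercise the page for a fixed window
      const wallMs = Date.now() - start;
      await page.tracing.stop();
      await browser.close();

      const { traceEvents } = JSON.parse(fs.readFileSync('trace.json', 'utf8'));
      // Assumption: complete ('X') events named *RunTask* approximate top-level
      // main-thread tasks; dur is in microseconds. Filtering by the renderer's
      // pid/tid is omitted here, so treat the number as a rough estimate.
      const busyUs = traceEvents
        .filter(e => e.ph === 'X' && /RunTask/.test(e.name))
        .reduce((sum, e) => sum + (e.dur || 0), 0);

      console.log('%idle ~= ' + (100 * (1 - busyUs / 1000 / wallMs)).toFixed(1) + '%');
    })();

This runs fine from a CI shell since headless Chrome needs no display; fail the build if the computed %idle drops below a threshold.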

Related

Puppeteer works headless but not headful (JS)

I am running puppeteer on a Linux VM, which does not have a display; however, I am using xvfb to remedy that.
For some background: The overall goal of this program is to navigate to a page and authenticate using puppeteer. From there, I audit several webpages using Lighthouse.
I have been running this script for several weeks in headless mode with no issues. I've recently found that the performance metrics of the headless Lighthouse runs are far better than reality, which I verified by running manual tests on my local machine through Chrome developer tools. If it's relevant, I specifically noticed a huge gap in the number of recorded DOM nodes. Long story short, I now need to run this script in headful mode.
After installing and implementing xvfb in my code, I have been able to start a display within my script without hitting any errors. I used the answer from this question as reference: Running Puppeteer with xfvb headless : false
With all of that out of the way, onto the actual issue: this script, including the xvfb implementation, still runs flawlessly in headless mode with absolutely no timeouts despite repeated runs. When I try running it in headful mode, it consistently times out on the puppeteer launch, even when I extended the timeout to 5 minutes. I've tried an absurd number of launch-option combinations to no avail. Any and all help would be greatly appreciated, even if it means significantly altering my approach to the problem. Thank you!
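For reference, the xvfb-plus-headful setup being described looks roughly like this; it's a sketch assuming the xvfb npm package (which sets process.env.DISPLAY when started), not a known fix for the timeout:

    const Xvfb = require('xvfb');
    const puppeteer = require('puppeteer');

    const xvfb = new Xvfb({ silent: true });
    xvfb.startSync(); // starts the virtual display and sets process.env.DISPLAY

    (async () => {
      const browser = await puppeteer.launch({
        headless: false,        // headful, rendered into the xvfb display
        args: ['--no-sandbox'], // commonly needed on a Linux VM; adjust as required
      });
      const page = await browser.newPage();
      await page.goto('https://example.com');
      await browser.close();
      xvfb.stopSync();
    })();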

How to get chrome performance metrics in javascript

If I use Chrome Dev Tools I can do the following:
Open chrome dev tools (right click on the page in chrome => inspect)
Navigate to the "performance" tab
Click the record button
Click on a button in my web app
Stop performance recording
Then I get a nice little pie chart in the "Summary" tab of Chrome.
My question is:
How can I start recording, stop recording and get those summary values (Loading, Scripting etc.) in JavaScript?
It would be really nice if someone could give me a little code example.
My question is not about how to handle page navigation; for that I am using Selenium with C#. What I want to do is start performance recording, execute some steps with the WebDriver, stop recording and measure the performance.
There are two ways you could do it:
First one:
I would recommend looking into Puppeteer.
It's a project by the Google Chrome team, and it has support for tracing. As you can see here https://pptr.dev/#?product=Puppeteer&version=v1.13.0&show=api-class-tracing, it provides a way to retrieve the generated trace, which you can then write to disk to use later.
The call tracing.start({}) accepts a path option which specifies the file to write the trace to.
The call tracing.stop() can easily be combined with the fs module to write the returned Buffer to a file you can later open in the Chrome dev tools, in case you don't want to pass the path option to start().
The only downside is that you can't really reuse your Selenium script; you would have to start more or less from scratch, even though Puppeteer claims to be easier.
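To make that concrete, a minimal sketch of both variants (the button selector is a made-up example):

    const puppeteer = require('puppeteer');
    const fs = require('fs');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto('https://example.com');

      // Variant 1: let Puppeteer write the trace to disk via the path option.
      await page.tracing.start({ path: 'trace.json' });
      await page.click('#my-button'); // hypothetical button in your web app
      await page.tracing.stop();

      // Variant 2: omit path and write the returned Buffer yourself with fs.
      await page.tracing.start({});
      await page.click('#my-button');
      const buffer = await page.tracing.stop();
      fs.writeFileSync('trace-2.json', buffer);

      await browser.close();
    })();

Either file can then be loaded back into the Performance tab of the Chrome dev tools.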
Second one (a little more difficult):
Use something similar to this library: https://github.com/paulirish/automated-chrome-profiling
It's written in JS and works exactly as expected with the bundled example: follow the package's installation steps, run node get-timeline-trace.js, then load the generated file (profile-XXXXXXXX.devtools.trace) into the Chrome profiler and you will get a very nice report.
The only problem I see is that you will have to find a way to run your Selenium scripts against the Chrome instance it spawns, and I don't know how easy that would be (maybe the PID might do?).

How to profile javascript in PhantomJS

We use PhantomJS 2.0 to take screenshots of web pages. We've found that one particular page takes several minutes to process. This page does not appear to have this issue (or at least not of any comparable magnitude) when loaded in Chrome.
I believe that this is because the JavaScript is hanging or running very slowly. During the hang, Phantom uses a lot of CPU (though only one core). It does not appear to be taking up an abnormal amount of memory. I am fairly confident that JavaScript is the culprit, because I can see from logging that all requests complete quickly, but after the page loads Phantom hangs for a while and won't run anything (I think this is because Phantom is single-threaded, so if the page is still running JavaScript my Phantom script won't run).
I'd like to debug and try to understand what part of the JS is taking so long, but I can't figure out how to get at this in Phantom. For example, I can't seem to collect any output from console.profile/console.profileEnd. How can I profile the javascript running in Phantom to find the bottleneck?
I use Phantomas, via grunt-phantomas. It's a tool that integrates with PhantomJS to profile a wide variety of performance-related metrics. Definitely worth checking out. If it doesn't give you exactly what you need, you can look at the source and see how they integrate with PhantomJS and get data out.
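For reference, a minimal Gruntfile sketch for grunt-phantomas; the option names (url, indexPath) are as I remember them from the plugin's README, so double-check there:

    // Gruntfile.js (sketch)
    module.exports = function (grunt) {
      grunt.initConfig({
        phantomas: {
          slowPage: {
            options: {
              url: 'http://example.com/slow-page/', // hypothetical problem page
              indexPath: './phantomas/',            // where the reports land
            },
          },
        },
      });
      grunt.loadNpmTasks('grunt-phantomas');
      grunt.registerTask('perf', ['phantomas']);
    };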

Can node-inspector debug an app without pausing it?

Node-inspector is a fantastic tool for debugging server-side code just as one would use the Chrome developer tools. I'm using it to debug a Meteor server, as in https://stackoverflow.com/a/19438774/586086.
One thing that would be even better would be the ability to use the debugging console to inspect objects while the app is running, without pausing it, as the Chrome developer tools allow. Currently, if one tries to do this without pausing, an error is displayed.
It seems that there should be a way to replicate the client-side debugging functionality that Chrome has, by inserting the inspection code into the Node event loop, instead of pausing to do it. Does anyone know if this is possible?
Disclaimer: I am the maintainer of Node Inspector.
The V8 debugger protocol used by Node Inspector does not support inspecting objects while the program is running. (Well, it allows you to inspect an object, but you can't inspect the result of that inspection.) Chrome Developer Tools applies a workaround: it injects custom JavaScript code into the web page and uses this injected code to perform inspections.
It should be possible to inject the same code from Node Inspector and rewrite Node Inspector's inspections to call the injected code instead of using the V8 debugger protocol. The change is probably not too difficult, but it still requires a decent amount of time.
If you would like to contribute this feature, I am happy to help. Please open a GitHub issue to discuss implementation details.

Testing browser extensions

I'm going to write a bunch of browser extensions (the same functionality for each popular browser). I hope that some of the code will be shared, but I'm not sure about this yet. Some of the extensions will certainly use native APIs. I don't have much experience with TDD/BDD, and I thought this project would be a good time to start following those ideas.
The problem is, I have no idea how to handle it. Should I write different tests for each browser? How far should I go with these tests? These extensions will be quite simple - some data in local storage, refreshing a page and listening through web sockets.
My observation on why this is hard for me: there is a lot of behaviour and not many models, and the models are also platform-dependent.
I practise two different ways of testing my browser extensions:
Unit tests
Integration tests
Introduction
I will use the cross-browser YouTube Lyrics by Rob W extension as an example throughout this answer. The core of this extension is written in JavaScript and organized with AMD modules. A build script generates the extension files for each browser. With r.js, I streamline the inclusion of browser-specific modules, such as the one for cross-origin HTTP requests and persistent storage (for preferences), and a module with tons of polyfills for IE.
The extension inserts a panel with lyrics for the currently played song on YouTube, Grooveshark and Spotify. I have no control over these third-party sites, so I need an automated way to verify that the extension still works well.
Workflow
During development:
Implement / edit feature, and write a unit test if the feature is not trivial.
Run all unit tests to see if anything broke. If anything is wrong, go back to 1.
Commit to git.
Before release:
Run all unit tests to verify that the individual modules are still working.
Run all integration tests to verify that the extension as a whole is still working.
Bump versions, build extensions.
Upload the updates to the official extension galleries and my website (Safari and IE extensions have to be self-hosted) and commit to git.
Unit testing
I use mocha + expect.js to write tests. I don't test every method of each module, just the ones that matter. For instance (a small test sketch follows this list):
The DOM parsing method. Most DOM parsing methods in the wild (including jQuery) are flawed: Any external resources are loaded and JavaScript is executed.
I verify that the DOM parsing method correctly parses DOM without negative side effects.
The preference module: I verify that data can be saved and returned.
My extension fetches lyrics from external sources. These sources are defined in separate modules. These definitions are recognized and used by the InfoProvider module, which takes a query (black box) and outputs the search results.
First I test whether the InfoProvider module functions correctly.
Then, for each of the 17 sources, I pass a pre-defined query to the source (via InfoProvider) and verify that the results are as expected:
The query succeeds
The returned song title matches (by applying a word similarity algorithm)
The length of the returned lyrics falls within the expected range
The UI is not obviously broken, e.g. the Close button still works when clicked
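A minimal mocha + expect.js test in this spirit; the preferences module path and its get/set API are hypothetical stand-ins for the real AMD modules:

    // test/preferences.test.js (sketch)
    var expect = require('expect.js');
    var preferences = require('../src/preferences'); // hypothetical module

    describe('preferences', function () {
      it('returns what was saved', function () {
        preferences.set('panelColor', '#fff0a0');
        expect(preferences.get('panelColor')).to.be('#fff0a0');
      });
    });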
These tests can be run directly from a local server, or within a browser extension. The advantage of the local server is that you can edit the test and refresh the browser to see the results. If all of these tests pass, I run the tests from the browser extension.
By passing an extra parameter debug to my build script, the unit tests are bundled with my extension.
Running the tests within a web page is not sufficient, because the extension's environment may differ from a normal page's. For instance, in an Opera 12 extension, there's no global location object.
Remark: I don't include the tests in the release build. Most users don't take the effort to report and investigate bugs; they will just give a low rating and say something like "Doesn't work". Make sure that your extension functions without obvious bugs before shipping it.
Summary
View modules as black boxes. You don't care what's inside, as long as the output matches what is expected for a given input.
Start with testing the critical parts of your extension.
Make sure that the tests can be built and run easily, possibly in a non-extension environment.
Don't forget to run the tests within the extension's execution context, to ensure that no constraint or unexpected condition inside the extension's context breaks your code.
Integration testing
I use Selenium 2 to test whether my extension still works on YouTube, Grooveshark (3x) and Spotify.
Initially, I just used the Selenium IDE to record tests and see if it worked. That went well, until I needed more flexibility: I wanted to conditionally run a test depending on whether the test account was logged in or not. That's not possible with the default Selenium IDE (it's said to be possible with the FlowControl plugin - I haven't tried).
The Selenium IDE offers an option to export the existing tests to other formats, including JUnit 4 tests (Java). Unfortunately, the result wasn't satisfying: many commands were not recognized.
So I abandoned the Selenium IDE and switched to Selenium proper.
Note that when you search for "Selenium", you will find information about Selenium RC (Selenium 1) and Selenium WebDriver (Selenium 2). The former is old and deprecated; the latter (Selenium WebDriver) should be used for new projects.
Once you've discovered how the documentation works, it's quite easy to use.
I prefer the documentation at the project page, because it's generally concise (the wiki) and complete (the Java docs).
If you want to get started quickly, read the Getting Started wiki page. If you've got spare time, look through the documentation at SeleniumHQ, in particular the Selenium WebDriver and WebDriver: Advanced Usage.
Selenium Grid is also worth reading. This feature allows you to distribute tests across different (virtual) machines. Great if you want to test your extension in IE8, 9 and 10, simultaneously (to run multiple versions of Internet Explorer, you need virtualization).
Automating tests is nice. What's even nicer? Automating the installation of extensions!
The ChromeDriver and FirefoxDriver support the installation of extensions, as seen in this example.
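The linked example is Java; a rough equivalent with the Node selenium-webdriver bindings looks like this (the .crx path is a placeholder):

    const { Builder } = require('selenium-webdriver');
    const chrome = require('selenium-webdriver/chrome');

    (async () => {
      // Loads the packed extension into the browser before the session starts.
      const options = new chrome.Options().addExtensions('./my-extension.crx');
      const driver = await new Builder()
        .forBrowser('chrome')
        .setChromeOptions(options)
        .build();

      await driver.get('https://example.com');
      // ... assert against the DOM that the extension modifies ...
      await driver.quit();
    })();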
For the SafariDriver, I've written two classes to install a custom Safari extension. I've published it and sent in a PR to Selenium, so it might be available to everyone in the future: https://github.com/SeleniumHQ/selenium/pull/87
The OperaDriver does not support installation of custom extensions (technically, it should be possible though).
Note that with the advent of Chromium-powered Opera, the old OperaDriver doesn't work any more.
There's an Internet Explorer Driver, and this one definitely does not allow one to install a custom extension. Internet Explorer doesn't have built-in support for extensions; extensions are installed through MSI or EXE installers, which are not even integrated into Internet Explorer. So, in order to automatically install your extension in IE, you need to be able to silently run an installer which installs your IE plugin. I haven't tried this yet.
Testing browser extensions posed some difficulty for me as well, but I've settled on implementing tests in a few different areas that I can invoke simultaneously from browsers driven by Selenium.
The steps I use are:
First, I write test code integrated into the extension that can be activated simply by going to a specific URL (see the sketch after this list). When the extension sees that URL, it begins running the tests.
Then, in the page that activates the testing in the extension, I execute server-side tests to be sure the API performs, and record and log issues there. I record the methods invoked, the time they took, and any errors, so I can see the method the extension invoked, the web performance, the business logic performance, and the database performance.
Lastly, I automatically point browsers at that specific URL and record their performance, along with other test information, errors, etc., on any given client system using Selenium:
http://docs.seleniumhq.org/
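The "magic test URL" pattern from the first step boils down to something like this in a content script; the URL and the runSelfTests helper are made-up illustrations:

    // content-script.js (sketch)
    if (location.href === 'https://example.com/extension-test') {
      runSelfTests().then(function (results) {
        // Expose results in the DOM so Selenium (or a support engineer) can read them.
        document.body.setAttribute('data-extension-test-results', JSON.stringify(results));
      });
    }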
This way I can break down the tests in terms of browser, extension, server, application, and database and link them all together according to specific test sets. It takes a bit of work to put it all together, but once it's done you can have a very nice extension testing framework.
Typically, to maintain a single code-base for cross-browser extension development, I use Crossrider, but you can do this with any framework or with native extensions as you wish; Selenium won't care, it is just driving the browser to a particular page and allowing you to interact and perform tests.
One nice thing about this approach is that you can use it for live users as well. If you are providing support for your extension, have a user go to your test URL and you will immediately see the extension and server-side performance. You won't get the Selenium tests, of course, but you will capture a lot of issues this way - very useful when you are coding against a variety of browsers and browser versions.
