I am trying to use the Accessibility plugin that comes with Protractor. From what I can see, it only runs the a11y checks against the last page I land on.
Is there a way to have two test scripts executed one after another and either produce separate reports, or put everything in one report but kept separate?
Example:
access.js
access1.js
Output file:
resultJsonOutputFile: 'result/result.json'
I tried it this way in conf.js:
specs: ['../test/access.js', '../test/access1.js'],
or
specs: ['../test/access*.js'],
but I still only get results for the last script executed.
I tried also creating suites:
suites: {
homepage: '../test/homepage/access.js',
catalogpage: '../test/catalogpage/access1.js'
},
but when I check the JSON file after both scripts have executed, the first one is reported as OK with no issues and errors only show up for the second script. However, if I run the first script alone, Protractor does report errors for it.
I also tried putting everything in one js file as different scenarios, but I still hit the same issue.
With the current implementation, the accessibility plugin is set to run exactly once per invocation of the Protractor runner, on the last page. So unfortunately, no modification of the suites or test files will make it run more than once.
You can create separate configuration files for each set of test files you'd like to run, or use shardTestFiles to make sure that each file is run in its own process (see the sketch below). See the referenceConf for more details on sharding.
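A minimal sketch of what that sharding setup might look like in conf.js - the spec paths come from the question, the rest is an assumption to adapt:
// conf.js (excerpt) - sketch: run each spec file in its own Protractor process,
// so the accessibility plugin fires once per file rather than once per run.
exports.config = {
    specs: ['../test/access.js', '../test/access1.js'],
    capabilities: {
        browserName: 'chrome',
        shardTestFiles: true, // give every spec file its own browser session/process
        maxInstances: 1       // run the shards one after another instead of in parallel
    }
    // plugins, resultJsonOutputFile, etc. as before
};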
Alternatively, you could use aXe to do your accessibility testing. To use it with e2e tests in Protractor and WebDriver, do the following:
npm install --save-dev axe-webdriverjs
Then in your e2e test files, you do:
var AxeBuilder = require('path_to_the/axe-webdriverjs');
to get hold of the AxeBuilder and then wherever you need to run a test, you:
AxeBuilder(browser.driver)
    .analyze(function (results) {
        // results.violations lists every accessibility rule that failed on the page
        expect(results.violations.length).toBe(0);
    });
The above example is using Jasmine but you can extrapolate for any other assertion library.
Also: there is a sample project you can clone and run here https://github.com/dylanb/UITestingFramework
Disclaimer: I am associated with the aXe project and therefore not neutral
I ran into that problem too - as another poster says, the plugin isn't really designed to operate that way.
I wrote a derivative of that plugin which does what you're looking for - protractor-axe-report-plugin.
You make a call to runAxeTest (or runAxeTestWithSelector) whenever you have a page open in the browser that you want to test, and it generates reports using the aXe engine.
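Roughly, a spec using it might look like the sketch below; the exact argument order of runAxeTest is an assumption on my part, so check the plugin's README:
describe('home page', function () {
    it('has no accessibility violations', function () {
        browser.get('/'); // placeholder URL
        // Assumed signature: a report/test name plus the underlying WebDriver instance.
        runAxeTest('home page', browser.driver);
    });
});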
Continuum can be used for your use case where it seems the accessibility plugin that comes with Protractor cannot. Here's some documentation on a Protractor-based sample project that uses Continuum. It can be downloaded from webaccessibility.com under 'Continuum for Protractor'. If you look at the source code of the sample project, it basically just boils down to this:
const continuum = require('../js/Continuum.js').Continuum;
continuum.setUp(driver, "../js/AccessEngine.community.js");
continuum.runAllTests().then(() => {
const accessibilityConcerns = continuum.getAccessibilityConcerns();
// accessibilityConcerns.length will be 0 if no accessibility concerns are found
});
(For more information on the above, you can check out the API documentation.)
You can execute that continuum.runAllTests bit wherever you like in your tests, including multiple times from within the same test, which if I understand correctly is ultimately what you're after.
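For instance (a sketch reusing only the calls shown above; the page URLs are placeholders, and it assumes continuum.setUp has already run):
it('checks the homepage and the catalog page', async () => {
    await browser.get('/');                // placeholder URL
    await continuum.runAllTests();
    expect(continuum.getAccessibilityConcerns().length).toBe(0);

    await browser.get('/catalog');         // placeholder URL
    await continuum.runAllTests();
    expect(continuum.getAccessibilityConcerns().length).toBe(0);
});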
Of course, no automated accessibility testing tool is a replacement for manual accessibility testing. It seems like you're just looking to get a baseline level of compliance right now though, so Continuum seems appropriate for your use case to tackle the low-hanging fruit.
Related
I'm trying to access the "browser" environment from the console, e.g. browser.cookies.getAll, but it is not defined anywhere except in the extension environment.
If I make a simple Firefox add-on (extension) with one .js file that makes a browser API request:
browser.cookies.getAll({}).then(console.log);
then executing it from the extension gets me an array with an interactive preview, but executing the same command in the console does not work.
How do I access the "browser" namespace from the console?
This is not possible: browser.* or chrome.* are not available in the developer console because they need an extension's context to run, and the developer console runs commands in the context of the current page.
The following approach requires some knowledge of unit testing and integration testing in JavaScript and Node.js; the example provided is over-simplified and is by no means production-ready code.
A better approach for testing and debugging your extension is to write tests for it.
Choose a testing framework (Jest, Mocha + chai, etc) and set it up according to your needs
Install sinon-chrome package which provides you with stubs for browser.* methods/apis by running npm install --save-dev sinon-chrome
(Optional) Install webextensions-api-fake which provides you with mocks for browser.* methods/apis by running npm install --save-dev webextensions-api-fake
(Optional) Install webextensions-jsdom which helps you to write tests for your browser_action default_popup, sidebar_action default_panel or background page/scripts
Start writing tests by following the example below
In order to debug your extension, set a breakpoint in your IDE/editor of choice and run the tests; execution will stop at the breakpoint and you will have access to the state of objects and variables at that point in the execution. This will help you see exactly what is executing, how, and what happens to the data you pass around in functions. There is no need to write console.log statements everywhere to check your output or variables; a debugger can do that for you.
(Optional) webextensions-toolbox is another great tool for writing cross-browser extensions (your extension will work on Chrome, Firefox, Opera and Edge) with the same code base. It also comes with hot-reloading of your extension page, so you don't have to hit refresh every time you make a change.
Following this approach will improve your development workflow and reduce the number of times you have to hit refresh in your browser.
Example usage of sinon-chrome stubs with the Jest testing framework.
Let's say you have written your code in yourModule.js; then, to test/verify that it works, in yourModule.test.js you write:
import browser from 'sinon-chrome';
import yourModule from './lib/yourModule';
describe('moduleName', () => {
beforeAll(() => {
// To make sure yourModule uses the stubbed version
global.browser = browser;
});
it('does something', async () => {
await yourModule();
// Let's assume your module creates two tabs
expect(browser.tabs.create.calledTwice).toBe(true);
// If you want to test how those browser methods were called:
expect(browser.tabs.create.firstCall.calledWithExactly({
url: 'https://google.com',
})).toBe(true);
// Notice the usage of `.firstCall` here; it checks that the first call to
// `browser.tabs.create` was made with exactly the given args.
});
});
When you run this test using Jest, yourModule expects a global variable browser to exist with the APIs it uses, which is only possible in a real browser; but since we faked it using the sinon-chrome package, your module executes in the Node.js environment as expected.
You don't need to run it in the browser to see changes. You just write tests and write code to pass those tests; when all tests pass, check your extension by running it in the browser, and at that point your extension will run as you'd expect it to. If you add another feature to yourModule and your tests fail, you know exactly what went wrong.
However, the above example only verifies how the browser.* methods/APIs were called; to test the behavior of yourModule you'd need to mock those methods/APIs, which is where the webextensions-api-fake package comes in. You can find an example in its repo on GitHub.
Examples for testing your browser_action default_popup, sidebar_action default_panel or background page/scripts are also provided in the webextensions-jsdom repo on GitHub.
My goal is to debug one of my tests. I'm using Mocha as a base, and SinonJS for spies, stubs and mocks. For some unknown reason my stub of the ajax method has stopped working: it worked a week ago, but now the requests are sent and the stub does not track the calls.
I have these lines inside the outermost describe
let sandbox = sinon.sandbox.create();
let ajaxStub = undefined;
and then this:
beforeEach(function () {
ajaxStub = sandbox.stub($, 'ajax');
});
afterEach(function () {
sandbox.restore();
});
Anyway, my question is not what's wrong with this; I'm probably doing something extremely stupid elsewhere, and some debugging could probably solve it. My problem is with the debugging itself.
mocha --debug-brk --inspect ./test/mytest.js
This is what I run on the command line to get the debugging session going.
My problem is that to run the tests I currently use Gulp, which loads all my framework dependencies and all my globals - the libraries added this way also include jQuery and sinon.
And of course, if I debug my tests using that command line, NodeJS does not load the required files into the environment, and at the first reference to sinon I get an exception.
I could create an HTML page in which I load the required files and the tests, run the test there and debug it manually with the browser inspector - but that's something I would like to avoid. Is there anything more automated?
I'm not a NodeJS expert, I just roughly understand what it is and how it works, so I'm pretty confident there could be something I missed that can be of help :)
What I'm thinking about right now is a batch script to find the required files, but that's all I have.
Just an additional note: the code base is really old and big, and I do not really have permission to refactor the existing code into ES6 modules.
I found a solution: I'm going to create a testDebugLoader.js script in which I specify which test I want to debug, plus an array of paths to the scripts I need to load.
Then I loop through the array, load each needed file, and call eval on the retrieved text.
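A rough sketch of what such a loader might look like; every path below is a placeholder for whatever my Gulp setup normally loads:
// testDebugLoader.js - sketch of the idea, not production code.
var fs = require('fs');
var path = require('path');

// Scripts that Gulp normally loads before the tests (placeholder paths).
var scriptsToLoad = [
    'lib/first-framework-dependency.js',
    'lib/second-framework-dependency.js'
];

// Load each needed file and eval the retrieved text in place
// (note: this only exposes things the scripts attach to the global object).
scriptsToLoad.forEach(function (relativePath) {
    var source = fs.readFileSync(path.resolve(__dirname, relativePath), 'utf8');
    eval(source);
});

// Finally, pull in the single test to debug.
require('./test/mytest');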
I am using the following stack to run several tests:
NodeJs
Selenium standalone
geckodriver (though I use Chrome)
webdriver.io
mocha
chai
So, putting it all together, my first_test.js is:
describe('Website url test ', () => {
    it('should have a title ', () => {
        browser.call((done) => {
            browser.url('http://webdriver.io');
            var title = browser.getTitle();
            expect(title).to.be.equal('WebdriverIO - WebDriver bindings for Node.js');
            done();
        });
    });
});
And the output in the console is: [screenshot: incorrect console output]
But it should look like this for the passing tests as well: [screenshot: correct console output]
Is there something in the Mocha config that I should change so that the passing tests produce the same visual output?
This behavior was caused by the reporter chosen (in my case dot).
I changed it to spec and I get very verbose output now.
WebdriverIO supports a great variety of reporters:
Dot: the default reporter for WDIO, a lightweight console reporter that outputs a green dot ('.') for each passing test case and a red one for each failing test case;
Spec: outputs a step-by-step breakdown in the console of the test cases you just ran. This output resides strictly in the console, unless you pipe your entire console log stack to a file via the logOutput: './<yourLogFolderPath>/' attribute in the wdio.conf.js file;
Json: generates a .json report of the tests you previously ran. It's very well suited for people who already have a test results dashboard where they analyze their regression results (passing tests, failing tests, run-time, etc.) and just need to parse the data from somewhere (see the config sketch further below). You can configure the path where you want the .json report to be generated via:
reporterOptions: {
outputDir: './<yourLogFolderPath>'
}
Note: The Json reporter will populate the path given with WDIO-<timestamp>.json reports. If you want to pipe said .json to some other software for parsing, then you will need to go inside the library and change the naming convention so that you always get your results in the same file as opposed to a dynamically generated one.
Allure: Allure is one of the best reporter choices, especially if you don't have the makings of a test results dashboard in place as it generates one for you. You can check out this answer for a step-by-step breakdown;
!!! BUT as a best practice, no reporter should outweigh the importance of setting your logLevel (inside the wdio.conf.js file) to debug (logLevel: 'debug') for wdio-v5, or verbose (logLevel: 'verbose') for wdio-v4.
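For reference, a minimal sketch of the relevant wdio.conf.js entries - the output folder is a placeholder, and the logLevel value depends on your WDIO version as noted above:
// wdio.conf.js (excerpt) - sketch only
exports.config = {
    // 'spec' gives the step-by-step console breakdown; 'json' writes WDIO-<timestamp>.json reports
    reporters: ['spec', 'json'],
    reporterOptions: {
        outputDir: './logs'
    },
    // 'verbose' on wdio-v4, 'debug' on wdio-v5
    logLevel: 'verbose'
    // ...rest of your configuration
};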
When debugging (I presume that was the purpose of the reporting), it's crucial that you get to the root of the problem as fast as possible, and that means looking at the REST calls made by your tests during run-time.
Hope this gives a clearer overview to people starting with WebdriverIO who need more info regarding which of these reporters is best suited for which scenario/situation.
Cheers!
I am working on a WYSIWYG animation editor for designing sliders / ad banners. It includes a lot of dependencies, which also means a lot of extra bloated code that is never used. I am hoping to run a report on the code that helps me identify the important things. I have a couple of cool starts. One will search through JavaScript for all functions and return each function by parts:
https://regex101.com/r/sXrHLI/1
Then some PHP that will sort it by size:
Sort preg_match_all by named group size
The thought is that by identifying large functions that aren't being used, we can remove them. My next step is to identify the function tree of what functions are invoked on document load, and then which are loaded and invoked on actions such as clicks / mouseovers and so on.
While I have this handy function that tells me all functions loaded in the DOM, it isn't enough:
var functionArray;
$(document).ready(function () {
    // Collect the name of every function attached to window
    var objs = [];
    for (var obj in window) {
        if (window.hasOwnProperty(obj) && typeof window[obj] === 'function') objs.push(obj);
    }
    functionArray = objs;
    console.log(functionArray);
});
I am looking for a solution that I can script in PHP / shell to emulate a page load - and here is where my knowledge of terminology fails me: am I looking for a "call stack"? Do I need a timeline, interpreter, framework, engine, or a parser?
I next need to emulate a click / hover event on all elements, or all elements that match something like this regex:
(?|\$\(['"](\.\w*)["']|getElementsByClassName\('(\w*)'\))
(?|\$\(['"](\#\w*)["']|getElementsById\('(\w*)'\))
to find any events that trigger functions so I can make a master list of functions that need to be in the final code.
I was watching a talk from a Google developer and I thought of your post. The following link has more intel on the Dev Tools Coverage Profiler, but here is the high level.
Google Dev Tools ships a neat feature for generating reports on used and unused JS and CSS code -- which is right along the lines of what you were trying to do (just a slightly different medium -- it'd be a bit harder to automate, but it otherwise contains, I believe, exactly what you were looking for).
Open Dev Tools, open the ellipsis menu in the bottom left corner (see image), and click the record button [see image 1]. Go through the steps you want to capture. You'll get an interactive screen in which you can go through all the code and see what was used (green) and what was not used (red) [see image 2].
Image 1 - Ellipsis drop-down to get to the coverage tool
Image 2 - Full screenshot of the interactive report for this StackOverflow page while editing this post.
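If you do want to script the same measurement rather than click through Dev Tools by hand, one option (my suggestion, not from the talk) is Puppeteer, which exposes the coverage profiler as page.coverage; the URL and selector below are placeholders:
// coverage.js - sketch: drive the Dev Tools coverage profiler from a script.
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    await page.coverage.startJSCoverage();
    await page.goto('http://localhost:8080/editor.html'); // placeholder URL
    await page.click('.play-button');                     // placeholder interaction

    const coverage = await page.coverage.stopJSCoverage();
    for (const entry of coverage) {
        const usedBytes = entry.ranges.reduce((sum, range) => sum + (range.end - range.start), 0);
        console.log(entry.url + ': ' + usedBytes + ' of ' + entry.text.length + ' bytes used');
    }

    await browser.close();
})();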
I'd suggest you take a look at this tool:
Istanbul
With it you can do the following:
create an instrumented version of your code
deploy it on the server and run the manual test (coverage information is collected in one of the global variables inside the browser)
copy the coverage information into a file (it's lcov data, as far as I remember)
generate a code coverage report from it (see the sketch after this list)
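As a rough sketch of steps 3 and 4, assuming an Istanbul-instrumented build (which keeps its counters on a global __coverage__ object) and nyc on the command line; the paths are placeholders:
// In the browser's console, after manually exercising the instrumented app:
copy(JSON.stringify(window.__coverage__)); // Dev Tools helper that copies the coverage JSON to the clipboard

// Save that JSON as .nyc_output/coverage.json, then generate the report, e.g.:
//   npx nyc report --reporter=lcov --reporter=html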
If you feel like going further, you can actually use something like jvm-cucumber with selenium to automate UI tests. You will need to dump the coverage data every time you reload the page, however. Then you will have to combine coverage from different runs and use this combined lcov data to generate the overall report.
The whole process is a bit heavy, but it is way better than starting to reinvent it.
Even more, you can combine this data with unit test coverage information to produce joint reports.
As a step further, you may want to set up a Sonar server so that you can store multiple versions of the coverage reports and compare differences between tests.
I have a script that I run by calling node filename. I'm trying to unit test a particular function in this file. The problem is that when I require the file like app = require('../app'), it runs the entire script, and I only want to access the exported functions. Is there any way to prevent the script from running when the file is being imported?
The answer to the question of 'can I stop the script from running' to do unit tests is essentially no. However, there are a number of options that you might find helpful. The first is to make your node module more modular by taking out the actions that run on require. My suggestion on that is to change it so that it takes an argument:
node filename run
When the 'run' argument is present, the regular actions fire. Requiring the file still executes its top-level code, but without 'run' it will not execute the actions you don't want. You could even have a 'test' option to run it in test mode; a minimal sketch follows below. A second option, which I think is less helpful, is to move the function you want to test into its own file and require that into the main script. You can then make it available to test separately.
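A minimal sketch of that first option (the function name is a placeholder):
// app.js - sketch: gate the top-level actions behind a command-line argument.
function doWork() {
    // ...whatever normally runs when you call `node app`
}

// Only fire the regular actions when invoked as `node app run`;
// a plain require('../app') from a test will skip this block.
if (process.argv[2] === 'run') {
    doWork();
}

module.exports = { doWork };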
What you basically want is to partially execute the module you require.
So the answer is no unless you patch the code of this module.