WebDriver.io no console output

I am using the following stack to run several tests:
NodeJs
Selenium standalone
geckodriver (though I use Chrome)
webdriver.io
mocha
chai
After all that, my first_test.js is:
describe('Website url test', () => {
    it('should have a title', () => {
        browser.call((done) => {
            browser.url('http://webdriver.io');
            var title = browser.getTitle();
            expect(title).to.be.equal('WebdriverIO - WebDriver bindings for Node.js');
            done();
        });
    });
});
And the output in the console is: [screenshot: incorrect console output]
But it should be like this for the passing tests as well: [screenshot: correct console output]
Is there something in the Mocha config that I should change so that the passing tests produce the same visual output?

This behavior was caused by the chosen reporter (in my case, dot).
I changed it to spec and now get a very verbose output.

WebdriverIO supports a great variety of reporters:
Dot: the default reporter for WDIO, a lightweight console reporter that outputs a green dot ('.') for each passing test case and a red one for each failing test case;
Spec: outputs a step-by-step breakdown in the console of the test cases you ran. This output stays strictly in the console, unless you pipe your entire console log stack via the logOutput: './<yourLogFolderPath>/' attribute in the wdio.conf.js file;
Json: generates a .json report of the tests you previously ran. It's very well suited for people who already have a test results dashboard where they analyze their regression results (passing tests, failing tests, run-time, etc.) and just need to parse the data from somewhere. You can configure the path where you want the .json report to be generated via:
reporterOptions: {
    outputDir: './<yourLogFolderPath>'
}
Note: The Json reporter will populate the path given with WDIO-<timestamp>.json reports. If you want to pipe said .json to some other software for parsing, then you will need to go inside the library and change the naming convention so that you always get your results in the same file as opposed to a dynamically generated one.
Allure: Allure is one of the best reporter choices, especially if you don't have the makings of a test results dashboard in place as it generates one for you. You can check out this answer for a step-by-step breakdown;
!!! BUT as a best practice, no reporter should outweigh the importance of setting your logLevel (inside the wdio.conf.js file) to debug (logLevel: 'debug') for wdio-v5, or verbose (logLevel: 'verbose') for wdio-v4.
When debugging (I presume that was the purpose of the reporting), it's crucial that you get to the root of the problem as fast as possible, and that means looking at the REST calls made by your tests during run-time.
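For reference, here is a minimal wdio.conf.js sketch combining the reporter and logLevel settings discussed above; the output directory is a placeholder and the reporterOptions shape follows the wdio-v4 style used earlier:
// wdio.conf.js -- a minimal sketch, not a complete configuration.
exports.config = {
    // ...the rest of your existing configuration
    reporters: ['spec', 'json'],
    reporterOptions: {
        outputDir: './logs'   // placeholder: where the WDIO-<timestamp>.json reports land
    },
    logLevel: 'verbose'       // 'debug' on wdio-v5, 'verbose' on wdio-v4
};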
Hope this gives a clearer overview to people starting with WebdriverIO who need more info on which reporter is best suited for which scenario.
Cheers!

Related

Firefox extension. How to access "browser" namespace from console?

I'm trying to access the "browser" namespace from the console, e.g. browser.cookies.getAll, but it is not defined anywhere except in the extension environment.
If I make a simple Firefox add-on (extension) with one .js file containing a browser API request:
browser.cookies.getAll({}).then(console.log);
I get an array with an interactive preview.
[screenshot: executing from the extension]
[screenshot: executing this command in the console]
How to access "browser" namespace from console?
This is not possible: browser.* and chrome.* are not available in the developer console because they need an extension's context to run, and the developer console runs commands in the context of the current page.
The following approach requires some knowledge of unit testing and integration testing in JavaScript and Node.js. The example provided is over-simplified and is by no means production-ready code.
A better approach to testing and debugging your extension is to write tests for it.
Choose a testing framework (Jest, Mocha + chai, etc) and set it up according to your needs
Install sinon-chrome package which provides you with stubs for browser.* methods/apis by running npm install --save-dev sinon-chrome
(Optional) Install webextensions-api-fake which provides you with mocks for browser.* methods/apis by running npm install --save-dev webextensions-api-fake
(Optional) Install webextensions-jsdom which helps you to write tests for your browser_action default_popup, sidebar_action default_panel or background page/scripts
Start writing tests by following the example below
In order to debug your extension, set a breakpoint in your IDE/editor of choice and run the tests; execution will stop at the breakpoint and you will have access to the state of objects and variables at that point in the run. This helps you see what exactly is executing and what happens to the data you pass around between functions. There is no need to scatter console.log statements everywhere to check your output or variables; a debugger handles that.
(Optional) webextensions-toolbox is another great tool for writing cross-browser extensions (your extension will work on Chrome, Firefox, Opera and Edge) from the same code base. It also comes with hot-reloading of your extension page, so you don't have to hit refresh every time you make a change.
Following this approach will improve your development workflow and reduce the number of times you have to hit refresh in your browser.
Example usage of sinon-chrome stubs with the Jest testing framework.
Let's say you have written your code in yourModule.js; then, to test/verify that it works, in
yourModule.test.js you write:
import browser from 'sinon-chrome';
import yourModule from './lib/yourModule';

describe('moduleName', () => {
  beforeAll(() => {
    // Make sure yourModule uses the stubbed version
    global.browser = browser;
  });

  it('does something', async () => {
    await yourModule();

    // Let's assume your module creates two tabs
    expect(browser.tabs.create.calledTwice).toBe(true);

    // If you want to test how those browser methods were called,
    // use `.firstCall` to assert on the arguments of the first call
    // to `browser.tabs.create`.
    expect(browser.tabs.create.firstCall.calledWithExactly({
      url: 'https://google.com',
    })).toBe(true);
  });
});
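For context, a hypothetical yourModule.js that the test above would pass against might look like this (the URLs and tab logic are made up for illustration):
// yourModule.js -- hypothetical module under test.
// It relies on a global `browser`; in the test above that global
// is the sinon-chrome stub.
export default async function yourModule() {
  await browser.tabs.create({ url: 'https://google.com' });
  await browser.tabs.create({ url: 'https://example.com' });
}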
When you run this test with Jest, yourModule expects a global variable browser with the APIs it uses to exist, which is normally only the case in a real browser; but since we faked it with the sinon-chrome package, your module executes in the Node.js environment as expected.
You don't need to run it in the browser to see changes. You just write tests and write code to pass those tests; when all tests pass, check your extension by running it in the browser, and at that point it will behave as you'd expect. If you add another feature to yourModule and your tests fail, you know exactly what went wrong.
However, the above example only verifies how browser.* methods/APIs were called; to test the behavior of yourModule you'd need to mock those methods/APIs, and this is where the webextensions-api-fake package comes in. You can find examples in its repo on GitHub.
Examples for testing your browser_action default_popup, sidebar_action default_panel or background page/scripts are also provided in the webextensions-jsdom repo on GitHub.

Passing cmd parameters to Jasmine tests

I would like to pass command line parameters to my Node.js/Jasmine tests. At the moment I have a basic Jasmine file/dir structure set up (jasmine.json file and specs, as in the example from the Jasmine documentation).
I run specs from the command line by executing the following command: jasmine.
I would like to pass some command line parameter so that I can use it in my specs, running tests with a command like: jasmine --param value or jasmine param_value.
Is this possible, and how can I do it?
The parameter I want to pass is a password and I don't want to hardcode it - maybe you can suggest any better solution?
Thanks in advance!
First off, in general, if you want to send parameters to a test in Jasmine, you would use jasmine -- --param value or jasmine -- param_value. No, that extra -- isn't a typo: it tells Jasmine to ignore everything after it and pass it on.
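Inside a spec, those passed-through values end up in process.argv, so a minimal sketch (assuming the -- passthrough described above, with --param as a hypothetical flag name) could look like:
// spec/param.spec.js -- a minimal sketch; the flag name is hypothetical.
// Run with: jasmine -- --param value
describe('parameterized spec', () => {
    it('can read a CLI parameter', () => {
        const args = process.argv;
        const idx = args.indexOf('--param');
        const value = idx !== -1 ? args[idx + 1] : undefined;
        expect(value).toBeDefined();
    });
});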
That said, passwords and command-line parameters don't mix. Virtually every shell has some form of history, and that history records every command just as it's typed. This is a major security no-no: it leaves an unencrypted password recorded on disk.
The test itself should ask for the password if it needs to be supplied without hard-coding. At most this creates a transient in-memory temporary holding the value, which is much safer than having it sit in a history file somewhere. Node has modules such as readline that can prompt for the password without generating visible output as it is entered.
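As a sketch of that idea, the snippet below prompts for a password; note that hiding the typed characters relies on overriding readline's internal _writeToOutput method, a common but unofficial trick:
// prompt-password.js -- a minimal sketch of prompting without echoing input.
const readline = require('readline');

function askPassword(prompt) {
    return new Promise((resolve) => {
        const rl = readline.createInterface({
            input: process.stdin,
            output: process.stdout,
            terminal: true
        });
        rl.question(prompt, (answer) => {
            rl.close();
            process.stdout.write('\n');
            resolve(answer);
        });
        // Mute further output so the password is not echoed as it's typed.
        // _writeToOutput is internal API, hence the hedge above.
        rl._writeToOutput = () => {};
    });
}

// Usage inside a spec, e.g. in an async beforeAll:
// const password = await askPassword('Password: ');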

Get JSON data/report from jasmine tests

I am creating an editable tester. I am running Jasmine tests and I would like to get the results (JSON data from the tests) and present them on my web page.
Is it possible, and if so, how?
I do not want to save the results somewhere and then read them back. I want the tests to return a response that tells me what happened.
You are asking about Jasmine reporters. There are many of them by now, for instance:
jasmine-reporters
karma-jasmine-html-reporter
jasmine-jsreporter
jasmine-json-test-reporter
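If you'd rather not pull in a package, Jasmine's custom reporter interface lets you collect the results as plain JSON in-process; here is a minimal sketch (the result shape shown is the standard one Jasmine passes to specDone):
// A minimal sketch of an in-process reporter collecting JSON results.
const results = [];

jasmine.getEnv().addReporter({
    specDone(result) {
        results.push({
            name: result.fullName,
            status: result.status,   // 'passed' | 'failed' | 'pending'
            failures: result.failedExpectations.map((e) => e.message)
        });
    },
    jasmineDone() {
        // `results` is now JSON-ready data you can hand to your web page,
        // e.g. over an HTTP endpoint or a WebSocket.
        console.log(JSON.stringify(results, null, 2));
    }
});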

Protractor Accessibility reporting

I am trying to use the Accessibility plugin that comes with Protractor. From what I see, it only checks a11y on the last page I land on.
Is there a way to have 2 test scripts executed one after another and get separate reports, or put everything in one report but kept separated?
Example:
access.js
access1.js
Output file:
resultJsonOutputFile: 'result/result.json'
I tried it this way in conf.js:
specs: ['../test/access.js', '../test/access1.js'],
or
specs: ['../test/access*.js'],
but I still get results only for the last script executed.
I also tried creating suites:
suites: {
    homepage: '../test/homepage/access.js',
    catalogpage: '../test/catalogpage/access1.js'
},
but when I check the JSON file, if both scripts are executed, the first one shows no issues and only the second script's errors are reported. However, if I run the first script alone, Protractor does report errors.
I also tried putting everything in one js file as different scenarios, but I hit the same issue.
With the current implementation, the accessibility plugin is set to run exactly once per invocation of the Protractor runner, on the last page. So unfortunately, no modification of the suites or test files will make it run more than once.
You can create separate configuration files for each set of test files you'd like to run, or use shardTestFiles to make sure that each file runs in its own process (a sketch follows below). See the referenceConf for more details on sharding.
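A minimal sketch of the sharding approach, assuming your accessibility specs match the glob used earlier:
// conf.js -- a minimal sketch; only the sharding-related bits are shown.
exports.config = {
    specs: ['../test/access*.js'],
    capabilities: {
        browserName: 'chrome',
        shardTestFiles: true,   // each spec file gets its own process
        maxInstances: 2         // how many run in parallel
    }
    // ...the rest of your existing configuration
};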
Alternatively, you could use aXe to do your accessibility testing. In order to use it with e2e tests in protractor and Webdriver, do the following:
npm install --save-dev axe-webdriverjs
Then in your e2e test files, you do:
var AxeBuilder = require('path_to_the/axe-webdriverjs');
to get hold of the AxeBuilder and then wherever you need to run a test, you:
AxeBuilder(browser.driver)
    .analyze(function (results) {
        expect(results.violations.length).toBe(0);
    });
The above example is using Jasmine but you can extrapolate for any other assertion library.
Also: there is a sample project you can clone and run here https://github.com/dylanb/UITestingFramework
Disclaimer: I am associated with the aXe project and therefore not neutral
I ran into that problem too - as another poster says, the plugin isn't really designed to operate that way.
I wrote a derivative of that plugin which does what you're looking for - protractor-axe-report-plugin.
You make a call to runAxeTest (or runAxeTestWithSelector) whenever you have a page open in the browser that you want to test, and it generates reports using the aXe engine.
Continuum can be used for your use case where it seems the accessibility plugin that comes with Protractor cannot. Here's some documentation on a Protractor-based sample project that uses Continuum. It can be downloaded from webaccessibility.com under 'Continuum for Protractor'. If you look at the source code of the sample project, it basically just boils down to this:
const continuum = require('../js/Continuum.js').Continuum;

continuum.setUp(driver, "../js/AccessEngine.community.js");
continuum.runAllTests().then(() => {
    const accessibilityConcerns = continuum.getAccessibilityConcerns();
    // accessibilityConcerns.length will be 0 if no accessibility concerns are found
});
(For more information on the above, you can check out the API documentation.)
You can execute that continuum.runAllTests bit wherever you like in your tests, including multiple times within the same test, which, if I understand correctly, is ultimately what you're after.
Of course, no automated accessibility testing tool is a replacement for manual accessibility testing. It seems like you're just looking to get a baseline level of compliance right now though, so Continuum seems appropriate for your use case to tackle the low-hanging fruit.

Protractor Test Result variations

I am working with Protractor on some projects, and when I run them there are some differences in the messages at the end of a successful test run. In one project I have written the tests normally, and when the tests run this message comes up: [screenshot: green output showing the assertions]
But in the other I have written them using Page Objects, with one test and 4 assertions in it. When it runs successfully this message comes up: [screenshot: plain output without the assertions]
What I want to know is why, in the second scenario, the assertions are not shown and why the output is not green. What is the reason for this difference? Is it an issue, and if so, how can I fix it?
That's configured in jasmineNodeOpts in your Protractor configuration file. It's the configuration for the default reporter.
The properties you are looking for are:
jasmineNodeOpts: {
    silent: false,
    showColors: true
}
The values above are the defaults. In your second screenshot they are inverted.
Take a look at jasmine-spec-reporter for another reporter with more options.
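For example, here is a minimal sketch of wiring jasmine-spec-reporter into Protractor's onPrepare (option names can vary between versions of the reporter):
// conf.js -- a minimal sketch, not a complete configuration.
const { SpecReporter } = require('jasmine-spec-reporter');

exports.config = {
    // ...the rest of your existing configuration
    onPrepare() {
        jasmine.getEnv().addReporter(new SpecReporter({
            spec: {
                displaySuccessful: true   // also list passing specs
            }
        }));
    }
};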
