Protractor Test Result variations - javascript

I am working with Protractor on a few projects, and there are differences in the message printed at the end of a successful test run. In one project I have written the tests normally, and when they run this message appears:
But in the other I have written them using Page Objects; there is one test with 4 assertions in it, yet when it runs successfully this message appears:
What I want to know is why, in the second scenario, the assertions are not shown and why the output is not green. What is the reason for this difference, is it an issue, and if so, how can I fix it?

That's configured in jasmineNodeOpts in your Protractor configuration file. It's the configuration for the default reporter.
The properties you are looking for are:
jasmineNodeOpts: {
  silent: false,
  showColors: true
}
The values above are the defaults. In your second screenshot they are inverted.
Take a look at jasmine-spec-reporter for another reporter with more options.
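For example, here is a minimal sketch of wiring jasmine-spec-reporter into the Protractor config; the exact option shape (spec.displayStacktrace) varies between versions of the reporter, so treat that part as an assumption:
// protractor.conf.js (sketch)
var SpecReporter = require('jasmine-spec-reporter').SpecReporter;

exports.config = {
  // ...
  onPrepare: function () {
    // register the richer spec reporter
    jasmine.getEnv().addReporter(new SpecReporter({
      spec: { displayStacktrace: true } // option name/shape may differ per reporter version
    }));
  },
  jasmineNodeOpts: {
    silent: false,
    showColors: true,
    print: function () {} // optionally silence the default dot output
  }
};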

Related

When carrying out my unit tests, how can I execute custom code if at least one of my unit tests fails?

In a test file, I've added several unit tests using the Tape test harness. What I'd like to do now is ensure that, if at least one of my unit tests fails (screenshot), some custom JS code is executed. How would I approach that?
In this case, the custom code I'd like to carry out will play a sound (which I plan to do using the sound-play Node package).
If it matters, I'm running the unit tests in VSCode, and the Tape output is currently printed to VSCode's output panel.
Thanks.
I figured out that Tape has an onFailure method, which fires a callback whenever at least one test fails. This was exactly what I needed.
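A minimal sketch of that approach, assuming sound-play's play(filePath) API and a placeholder fail.mp3 file:
const test = require('tape');
const sound = require('sound-play');

// invoked when a test failure occurs
test.onFailure(() => {
  sound.play('fail.mp3'); // the file path is a placeholder
});

test('example', (t) => {
  t.equal(1 + 1, 2);
  t.end();
});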

Log to terminal when QUnit test suite completes?

When my test suite completes, I need to output some stats, i.e. meta info about the tests collected during test execution.
I'm trying this:
QUnit.done(() => console.log("some meta info here"))
This works when I run tests in the browser.
But when I run tests in the terminal, the console.log output is not displayed.
There's probably some debug flag, but it will enable all console.log messages and pollute the output greatly.
Instead, I need to output one specific message to the terminal, so that it's logged to CI.
PS console.log messages sent during test execution seem to make it into the terminal successfully.
PPS Using QUnit in an Ember CLI app, against Chrome headless.
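For reference, a sketch of the same hook using the stats argument that QUnit.done passes to its callback (failed/passed/total/runtime, per the QUnit docs):
QUnit.done(function (details) {
  // details: { failed, passed, total, runtime }
  console.log("Meta info: " + details.passed + "/" + details.total +
              " passed in " + details.runtime + "ms");
});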
This was a tricky one, as I've never had a need to interact with QUnit like this, but here are my findings at each step of the way:
Attempt 1:
That's a weird error, I thought I was passing a callback function :-\
Attempt 2:
After looking up the documentation for QUnit.log, I could see I was using it wrong. Switching to console.log shows the beginning message -- but not the ending message.
Attempt 3:
moduleDone will print something at the end -- but it also fires for every module() you use (after everything inside it finishes running). So, I guess as a hack, if QUnit.done never ends up working, you could keep track of the number of modules started and modules completed, make sure every started module completes, and if that count reaches zero at the end, your test suite is done?
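A rough sketch of that counting idea, using QUnit's moduleStart/moduleDone callbacks, with the same caveat that the check also passes after every intermediate module:
var startedModules = 0;
var finishedModules = 0;

QUnit.moduleStart(function () { startedModules++; });
QUnit.moduleDone(function () {
  finishedModules++;
  // note: this also triggers after each intermediate module, as described above
  if (finishedModules === startedModules) {
    console.log("All " + finishedModules + " modules have finished");
  }
});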
Attempt 4:
Turns out, this is only actually helpful if you want to know that the outermost module is done, because it seems like multiple tests don't run in parallel (which is probably better anyway for test stability).
Attempt 5:
https://github.com/qunitjs/qunit/issues/1308
It looks like an issue with the testem adapter :(

WebDriver.io no console output

I am using the following stack to run several tests:
NodeJs
Selenium standalone
geckodriver (though I use Chrome)
webdriver.io
mocha
chai
So, after all that, my first_test.js is:
describe('Website url test', () => {
  it('should have a title', () => {
    browser.call((done) => {
      browser.url('http://webdriver.io');
      var title = browser.getTitle();
      expect(title).to.be.equal('WebdriverIO - WebDriver bindings for Node.js');
      done();
    });
  });
});
And the output in the console is: Incorrect console output
But it should be like this for the passing tests as well: Correct console output
Is there something in the Mocha config that I should change so that the passing tests produce the same visual output?
This behavior was caused by the chosen reporter (in my case dot).
I changed it to spec and now I have very verbose output.
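For context, a minimal sketch of where that lives in wdio.conf.js (option names as in wdio v4, which the logOutput/reporterOptions keys mentioned below also assume):
// wdio.conf.js (sketch)
exports.config = {
  // ...
  reporters: ['spec'],  // 'dot' is the default; 'json' and 'allure' are also available
  logLevel: 'verbose'   // 'debug' in wdio v5, as noted below
};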
WebdriverIO supports a great variety of reporters:
Dot: the default reporter for WDIO; a lightweight console reporter that outputs a green dot ('.') for a passing test case and a red one for a failing test case;
Spec: outputs a step-by-step breakdown in the console of the test cases you ran. This output stays strictly in the console, unless you pipe your entire console log stack via the logOutput: './<yourLogFolderPath>/' attribute in the wdio.conf.js file;
Json: generates a .json report of the tests you ran. It's well suited for people who already have a test results dashboard where they analyze their regression results (passing tests, failing tests, run time, etc.) and just need to parse the data from somewhere. You can configure the path where you want the .json report to be generated via:
reporterOptions: {
  outputDir: './<yourLogFolderPath>'
}
Note: the Json reporter will populate the given path with WDIO-<timestamp>.json reports. If you want to pipe those .json files to some other software for parsing, you will need to go inside the library and change the naming convention so that you always get your results in the same file, as opposed to a dynamically generated one.
Allure: Allure is one of the best reporter choices, especially if you don't have the makings of a test results dashboard in place as it generates one for you. You can check out this answer for a step-by-step breakdown;
BUT, as a best practice, no reporter should outweigh the importance of setting your logLevel (inside the wdio.conf.js file) to debug (logLevel: 'debug') for wdio v5, or verbose (logLevel: 'verbose') for wdio v4.
When debugging (I presume that was the purpose of the reporting), it's crucial that you get to the root of the problem as fast as possible, and that means looking at the REST calls made by your tests at run-time.
Hope this gives a clearer overview to people starting out with WebdriverIO who need more info on which of these reporters is best suited for which scenario.
Cheers!

Getting intermittent failures of tests in Chrome

Update 2:
After forgetting about this for a week (and being sick), I am still out of my depth here. The only news is that I reran the tests in Safari and Firefox, and now Safari always fails on these tests, and Firefox always times out. I assume I've changed something somewhere, but I have no idea where.
I'm also more and more certain there's a timing issue somewhere. Possibly simply code going async where it shouldn't, but more likely it's something being interrupted.
Update:
I'm less interested in finding the actual bug, and way more interested in why it's intermittent. If I can find out why that is, I can probably find the bug, or at least rewrite the code so it's avoided.
TL;DR:
I'm using Karma (with Webpack and Babel) to run tests in Chrome, and most of them are fine, but for some reason 7 tests get intermittent failures.
Details:
So! To work!
The first six tests MOSTLY succeed when I run them in the debug tab, but MIGHT fail. The failure percentage seems higher when running them normally, though. These six tests are related, as they all fail after running a specific method that functions as a safe delete() for some Backbone models. Basically it's meant to check and clear() all linked models in the model to be deleted, and return false if it's not able to do that.
Had the failures been 100%, I am sure I would have found the error and fixed it, but the only thing I know is that it has to do with trying to access or change a model that has already been deleted, which seems like a timing thing...? Something being run asynchronously that shouldn't be, perhaps...? I have no idea how to fix it...
The seventh test is a little easier. It uses Jasmine-jQuery to check whether a DOM element (which starts out empty) gets another div inside it after I change something. It's meant to test whether Bootstrap's alert system is implemented correctly, but it has been simplified heavily in order to try to find out why it fails. This test always fails if I run it as a gulp task, but always succeeds if I open the debug tab and rerun the test manually. So my hypothesis is that Chrome doesn't render the DOM correctly the first time, but fixes it if I rerun it in the debug tab...?
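The simplified seventh test is roughly of this shape; the trigger call is a hypothetical stand-in, and toBeEmpty is the jasmine-jquery matcher that the failure message further down refers to:
it('adds an alert to the message field', function () {
  triggerTheChange(); // hypothetical stand-in for "after I change something"
  expect($('#messagefield')).not.toBeEmpty(); // jasmine-jquery matcher
});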
TMI:
When I say I open the debug tab and rerun the test manually, I am still inside the same 'gulp test' task, of course. I also use a 'gulp testonce', but the only change there is that it has singleRun enabled and the HTML reporter enabled. It shows the exact same pattern, though I can't check the debug page there, since the browser exits after the tests.
Output from one of the first 6 tests using the html reporter.
Chrome 47.0.2526 (Mac OS X 10.11.2) model library: sentences: no longer has any elements after deleting the sentence and both elements FAILED
Error: No such element
at Controller._delete (/Users/tom/dev/Designer/test/model.spec.js:1344:16 <- webpack:///src/lib/controller.js:107:12)
at Object.<anonymous> (/Users/tom/dev/Designer/test/model.spec.js:143:32 <- webpack:///test/model.spec.js:89:31)
Output from test 7 using the html reporter.
Website tests » Messaging system
Expected ({ 0: HTMLNode, length: 1, context: HTMLNode, selector: '#messagefield' }) not to be empty.
at Object.<anonymous> (/Users/tom/dev/Designer/test/website.spec.js:163:39 <- webpack:///test/website.spec.js:109:37)
Now, the first thing you should know is that I have of course tried other browsers, but Safari shows the exact same pattern as Chrome, and Firefox gives me the same errors, except the error messages end up taking 80MB of disk space in my HTML reporter and SO MUCH TIME to finish, if it even finishes. Most of the time it just disconnects - which ends up being faster.
So I ended up just using Chrome to try to find this specific bug, which has haunted my dreams now for a week.
Source
Tests:
https://dl.dropboxusercontent.com/u/117580/model.spec.js.html
https://dl.dropboxusercontent.com/u/117580/website.spec.js.html
Test output (Since the errors are intermittent, this is really just an example): https://dl.dropboxusercontent.com/u/117580/output.html
OK, all tests now succeed. I THINK this was the answer:
Some tests called controller, and some tests called window.controller. This included some reset() and remove() commands.
After doing a rewrite, I still had failures, so I did another rewrite. As part of that rewrite, I decided to make all calls through window.*, and after that rewrite all tests succeeded.
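To illustrate the kind of mismatch described (the controller/reset()/remove() identifiers come from the description above; everything else is an assumption about the setup):
// Before: some specs used a module-local binding required into the spec file...
var controller = require('./controller'); // hypothetical path
controller.reset();

// ...while others went through the global the app attached, which under webpack
// is not necessarily the same instance:
window.controller.reset();

// After the rewrite, every call goes through window.*, so all specs share one instance.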

Protractor Accessibility reporting

I am trying to use the accessibility plugin that comes with Protractor. From what I can see, it only checks accessibility on the last page I end up on.
Is there a way to have two test scripts executed one after the other and produce separate reports, or put everything in one report but kept separate?
Example:
access.js
access1.js
Output file:
resultJsonOutputFile: 'result/result.json'
I tried it this way in conf.js:
specs: ['../test/access.js', '../test/access1.js'],
or
specs: ['../test/access*.js'],
but I still get results only for the last script executed.
I also tried creating suites:
suites: {
  homepage: '../test/homepage/access.js',
  catalogpage: '../test/catalogpage/access1.js'
},
but when I check the JSON file, if the two scripts are executed, the first one comes back with no issues and errors are reported only for the second script. However, if I run the first script alone, Protractor does report errors for it.
I also tried putting everything in one js file as different scenarios, but I still have the same issue.
With the current implementation, the accessibility plugin is set to run exactly once per invocation of the Protractor runner, on the last page. So unfortunately, no modification of the suites or test files will make it run more than once.
You can create separate configuration files for each set of test files you'd like to run, or use shardTestFiles to make sure that each file is run in its own process. See the referenceConf for more details on sharding.
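A sketch of the sharding options in the Protractor config (the capabilities shape follows the standard conf; the spec glob matches the example above):
// conf.js - run each matching spec file in its own process
exports.config = {
  specs: ['../test/access*.js'],
  capabilities: {
    browserName: 'chrome',
    shardTestFiles: true,
    maxInstances: 2
  }
};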
Alternatively, you could use aXe to do your accessibility testing. In order to use it with e2e tests in protractor and Webdriver, do the following:
npm install --save-dev axe-webdriverjs
Then in your e2e test files, you do:
var AxeBuilder = require('path_to_the/axe-webdriverjs');
to get hold of the AxeBuilder and then wherever you need to run a test, you:
AxeBuilder(browser.driver)
  .analyze(function (results) {
    expect(results.violations.length).toBe(0);
  });
The above example is using Jasmine but you can extrapolate for any other assertion library.
Also: there is a sample project you can clone and run here https://github.com/dylanb/UITestingFramework
Disclaimer: I am associated with the aXe project and therefore not neutral
I ran into that problem too - as another poster says, the plugin isn't really designed to operate that way.
I wrote a derivative of that plugin which does what you're looking for - protractor-axe-report-plugin.
You make a call to runAxeTest (or runAxeTestWithSelector) whenever you have a page open in the browser that you want to test, and it generates reports using the aXe engine.
Continuum can be used for your use case, where it seems the accessibility plugin that comes with Protractor cannot. Here's some documentation on a Protractor-based sample project that uses Continuum; it can be downloaded from webaccessibility.com under 'Continuum for Protractor'. If you look at the source code of the sample project, it basically boils down to this:
const continuum = require('../js/Continuum.js').Continuum;
continuum.setUp(driver, "../js/AccessEngine.community.js");
continuum.runAllTests().then(() => {
  const accessibilityConcerns = continuum.getAccessibilityConcerns();
  // accessibilityConcerns.length will be 0 if no accessibility concerns are found
});
(For more information on the above, you can check out the API documentation.)
You can execute that continuum.runAllTests bit wherever in your tests that you like. That includes multiple times from within the same test too, if desired, which if I understand correctly is ultimately what you're after.
Of course, no automated accessibility testing tool is a replacement for manual accessibility testing. It seems like you're just looking to get a baseline level of compliance right now though, so Continuum seems appropriate for your use case to tackle the low-hanging fruit.
