Tests reported as successful although errors are found during tests - javascript

I have set up my test environment with QUnit + PhantomJS + Grunt as described here: http://jordankasper.com/blog/2013/04/automated-javascript-tests-using-grunt-phantomjs-and-qunit/
Everything works fine, except that my Grunt task finishes without errors even though test failures are reported. This is crucial for my build process: the build should fail or succeed depending on the test results, but in my case it always succeeds. Any ideas why Grunt doesn't exit with a failure when errors are found?
The qunit task of the Gruntfile:
module.exports = {
  services: {
    options: {
      urls: [
        'http://localhost:8000/tests/services.html'
      ],
      timeout: 20000,
      force: true
    }
  },
  gui: {
    options: {
      urls: [
        'http://localhost:8000/tests/gui.html'
      ],
      timeout: 20000,
      force: true
    }
  }
};
Output:
Please consider that I can't upload more info due to confidentiality issues.

You are asking 'why does Grunt continue when the tests fail?' The answer is 'because you are asking it to'.
The force option controls whether the qunit task fails when there are failing tests. Setting it to true, as you have done, tells Grunt to continue even if tests fail. Try setting it to false, or removing it altogether, since false is the default.
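For illustration, the same task config from the question with force dropped (false being the default), so that failing assertions make the qunit task, and therefore the build, fail:
module.exports = {
  services: {
    options: {
      urls: [
        'http://localhost:8000/tests/services.html'
      ],
      timeout: 20000
      // force removed: defaults to false, so failing tests now fail the task
    }
  },
  gui: {
    options: {
      urls: [
        'http://localhost:8000/tests/gui.html'
      ],
      timeout: 20000
    }
  }
};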

Related

How to add a screenshot on a test failure using Appium and WebdriverIO

I currently have multiple wdio config files due to the multiple apps in my end-to-end suite. All of these files have the Allure reporting configuration in them and my reports are working fine. Every file has an Allure reporter entry like this:
reporters: [['allure', {
  outputDir: 'allure-results',
  disableWebdriverStepsReporting: true,
  disableWebdriverScreenshotsReporting: true,
}]],
and also the desired capabilities like this:
capabilities: {
  App: {
    port: 4723,
    capabilities: {
      platformName: 'iOS',
      'appium:platformVersion': '13.6',
      'appium:orientation': 'PORTRAIT',
      'appium:noReset': true,
      'appium:newCommandTimeout': 240,
      'appium:platformName': 'iOS',
      'appium:deviceName': 'iPhone 8',
      'appium:bundleId': 'com.app',
    }
  },
},
I also have a separate, general wdio file to which I have not added much; it has this hook:
afterStep: function (test, context, { error, result, duration, passed, retries }) {
  if (error) {
    browser.takeScreenshot();
  }
}
I have tried adding the after hook in the custom wdio config file as well, but I am not able to get a screenshot on failure. I have also used the App.takeScreenshot(); command instead of browser.takeScreenshot(); but no luck.
Not sure about the Appium part, but the code below, placed in wdio.conf.js (or maybe your custom wdio file), works in WebdriverIO. I have used it in my framework.
afterTest: function (test, context, { error, result, duration, passed, retries }) {
  if (!passed) {
    browser.takeScreenshot();
  }
},
This should work:
afterTest: async function (test) {
  await browser.saveScreenshot("browserfull.png")
  // OR
  await driver.saveScreenshot("driverfull.png")
}
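To also get the screenshot into the Allure report the question is using, a sketch along these lines may work, assuming the @wdio/allure-reporter package and a recent async WebdriverIO setup (the attachment name is arbitrary):
const allureReporter = require('@wdio/allure-reporter').default;

// in wdio.conf.js (or the per-app config)
afterTest: async function (test, context, { error, result, duration, passed, retries }) {
  if (!passed) {
    // takeScreenshot resolves to a base64-encoded PNG string
    const screenshot = await browser.takeScreenshot();
    allureReporter.addAttachment('Failure screenshot', Buffer.from(screenshot, 'base64'), 'image/png');
  }
},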

grunt-run remove console colors

I have a small grunt task to clean my coverage folders and then run my tests, like so:
grunt.registerTask('test', [
  'clean:test',
  'run:test'
]);
The run task itself looks like so:
module.exports = {
  options: {},
  test: {
    cmd: 'npm',
    args: [
      'test'
    ]
  }
};
Note: this is using the grunt-run task - https://www.npmjs.com/package/grunt-run
These tasks do their job just fine; however, when run through the grunt task, the color is removed from the test output in the console. When I just run npm test, the color is there. Is there any way to get around this? After googling a bit, I tried adding this to the run task:
options: {
  'no-color': false
},
but this seemed to do nothing. Is there any way to enable the color here? Thanks!
Try using colors in grunt.log if the coloring in your tests is under your control.
E.g.,
var grunt = require("grunt");
grunt.log.writeln("Test failure !"["red"]);
grunt.log.writeln("5 tests failed !"["red"].bold);
I had this same problem and resolved it by configuring the grunt task with an additional argument: "--colors"
uiTests: {
  cmd: "node",
  args: [
    "node_modules/codeceptjs/bin/codecept.js",
    "run",
    "--steps",
    "--colors"
  ]
},
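Applied to the npm-based run task from the question, that idea might look like this; the bare '--' is what makes npm forward the extra flag to the underlying script, and '--colors' itself is an assumption about the test runner behind npm test (mocha, for example, accepts it):
test: {
  cmd: 'npm',
  args: [
    'test',
    '--',       // npm passes everything after this through to the script behind "npm test"
    '--colors'  // flag name depends on your test runner
  ]
}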

Avoid one test out of multiple test files in firefox protractor

I have multiple tests in my tests folder, where the naming convention for all the tests ends with spec.js. I am running all the tests from the config file with a *.spec.js pattern.
I want to skip running one test in FF as it is not supported in that browser. This is what I am attempting to do, but it is not skipping that test. Please advise.
multiCapabilities: [{
  'browserName': 'chrome',
  'chromeOptions': {
    args: ['--window-size=900,900']
  },
},
{
  'browserName': 'firefox',
  'chromeOptions': {
    args: ['--window-size=900,900']
  },
}],
specs: [
  '../tests/*.spec.js'
],
I have the following in my onPrepare function:
browser.getCapabilities().then(function (cap) {
  browser.browserName = cap.caps_.browserName;
});
In one of the test files, where I am looking to skip running the test in FF, I am doing this:
if (browser.browserName == 'firefox') {
  console.log("firefox cannot run *** tests")
} else {
  // ...rest of the tests, which I want to execute for Chrome and IE, go in this block
}
But the test which I wanted to skip in FF still runs.
Please advise.
An easy way to do this is to update your Firefox entry in multiCapabilities to exclude that particular spec using the exclude option. This avoids the if condition and additional lines of code. Here's how:
multiCapabilities: [{
  browserName: 'chrome',
  chromeOptions: {
    args: ['--window-size=900,900']
  },
},
{
  browserName: 'firefox',
  // Spec files to be excluded on this capability only.
  exclude: ['spec/doNotRunInChromeSpec.js'], // YOUR SPEC NAME THAT YOU WANT TO EXCLUDE/SKIP
}],
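With the specs glob from the question, the exclude entry would presumably follow the same relative-to-the-config-file path style (the filename below is only a placeholder):
{
  browserName: 'firefox',
  // exclude patterns are resolved like specs, relative to the config file
  exclude: ['../tests/someFirefoxUnsupported.spec.js']
},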
Hope it helps.
Since browser.getCapabilities() is asynchronous and based on Promises, your code inside .then() may be executed later than the rest of the code. I guess your if condition is placed inside a describe block, which actually runs before the value for browser.browserName is set; as a result you get undefined for it and the condition fails. To make sure that your tests run after all the preparations are done, you should return the promise from onPrepare:
onPrepare: function() {
  return browser.getCapabilities().then(function (cap) {
    browser.browserName = cap.caps_.browserName;
  });
}
Protractor will explicitly wait until it resolves and then start executing the tests:
describe('Suite', function () {
  console.log(browser.browserName); // 'firefox'
  it('spec', function () {
    expect(true).toBe(true);
  });
});
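If cap.caps_ happens to be undefined on your Protractor version, the capabilities object returned by getCapabilities() also exposes a get() accessor, so a sketch of the same onPrepare using it would be:
onPrepare: function () {
  return browser.getCapabilities().then(function (cap) {
    // get() is the public accessor on selenium-webdriver's Capabilities object
    browser.browserName = cap.get('browserName');
  });
}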

grunt-browserify alias stops working

I have a simple grunt-browserify config. This configuration works perfectly until I change any JavaScript file; then "watchify" compiles the build again, and from that moment build.js fails in the browser with the exception: Uncaught Error: Cannot find module 'i18n'
It seems like "watchify" ignores the alias option, or am I doing something wrong?
browserify: {
  client: {
    src: ['app/app.js'],
    dest: 'app/build.js',
    options: {
      browserifyOptions: {
        debug: true
      },
      alias: [
        './app/dispatchers/appDispatcher.js:appDispatcher',
        './app/models/i18n.js:i18n'
      ],
      watch: true
    }
  }
}
Thank you.
Adding cache: false solves the problem. My only concern is that caching helps to speed up the process, so by turning it off I'm slowing down the rebuild.
...
browserifyOptions: {
  ...
  cache: false
}
...
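In the context of the config from the question, that would presumably sit here:
browserify: {
  client: {
    src: ['app/app.js'],
    dest: 'app/build.js',
    options: {
      browserifyOptions: {
        debug: true,
        cache: false  // works around the aliases being lost on watchify rebuilds, at the cost of slower rebuilds
      },
      alias: [
        './app/dispatchers/appDispatcher.js:appDispatcher',
        './app/models/i18n.js:i18n'
      ],
      watch: true
    }
  }
}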
The problem comes from the "module-deps" package. The following commit fixes the issue; wait for an official release and then the caching option can be removed.
link to commit

Karma.js: Does anyone know how to make Karma return the filename of logging / errors

I'm using requirejs for a not-super-complicated project. The problem is that I have some utility methods that log information to the console, and it has brought to my attention a question I've had for a while but never asked:
Say you have karma running unit tests on roughly a few billion files and one of them is logging to the console...
Without using a stack trace, how can you determine the name / location of that ONE file?
or
What would be the easiest way to filter / refine / define karma's output (other than adjusting logLevel)?
I've looked into different reporters and will be trying to write one for Karma soon; I'm just trying to make sure I know what's available (if applicable).
karma.conf.js:
module.exports = function (config) {
  config.set({
    basePath: '../',
    frameworks: ['mocha', 'requirejs', 'chai'],
    files: [
      { pattern: 'tests/_*.js', included: false },
      'tests/test-main.js'
    ],
    reporters: ['dots', 'growl'],
    port: 9876,
    logLevel: config.LOG_DEBUG,
    // autoWatch: true,
    autoWatch: false,
    plugins: [
      'karma-requirejs',
      'karma-mocha',
      'karma-chai',
    ],
    singleRun: false
  });
};
I see that you are using Mocha: I would suggest taking more of a "test-title" approach rather than a "filename" one.
What about a solution like the following? It could be implemented in Karma as well with a custom reporter:
afterEach(function () {
  // use a global variable here, or
  // append it as a property on the runner
  if (variableToLookAt) {
    console.log(this.currentTest.fullTitle() + ': ' + your_message_here);
  }
});
The snippet above can be inserted inside a describe suite block, or better, in the case of nested suites, in every block that contains a test (it).
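Building on that idea, a rough sketch (the capturedLogs variable and the output format are made up here) that captures console.log calls per test and prints them with the test title in afterEach:
var originalLog = console.log;
var capturedLogs = [];

beforeEach(function () {
  capturedLogs = [];
  console.log = function () {
    capturedLogs.push(Array.prototype.slice.call(arguments).join(' '));
    originalLog.apply(console, arguments);
  };
});

afterEach(function () {
  // restore the original console.log before reporting
  console.log = originalLog;
  if (capturedLogs.length) {
    originalLog(this.currentTest.fullTitle() + ' logged: ' + capturedLogs.join(' | '));
  }
});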
