I am writing UI tests and I'm struggling to break my test up into smaller tests instead of having one long test run. I use faker to generate all the data needed, so that I don't have to add fixtures.
My current working test looks like this:
/// <reference types="cypress" />
import "cypress-audit/commands";
describe("Full system test", () => {
  before(function () {
    this.User = require("../../faker/user");
  });

  it("Registers and creates profile", function () {
    cy.visit(Cypress.env("home_page"));
    cy.Register({});
    cy.UpdateProfile({});
    cy.AddCompanyDetails({});
    cy.AddTeamMember({});
    cy.CreateVacancy({});
    cy.AddCandidate({});
    cy.AddAdditionalDocs({});
  });
});
What I would like to do is have the tests like this:
/// <reference types="cypress" />
import "cypress-audit/commands";
describe("Full system test", () => {
  before(function () {
    this.User = require("../../faker/user");
  });

  it("Registers and creates profile", function () {
    cy.visit(Cypress.env("home_page"));
    cy.Register({});
    cy.contains("Vacancies").click();
    cy.UpdateProfile({});
    cy.contains("Vacancies").click();
    cy.AddCompanyDetails({});
    cy.contains("Vacancies").click();
    cy.AddTeamMember({});
  });

  it("Creates a vacancy and adds candidates", function () {
    cy.CreateVacancy({});
    cy.AddCandidate({});
    cy.AddAdditionalDocs({});
  });
});
The issue I'm having is that faker will generate new data if I break the test up as in my second example. I also need to sign back in every time I add a new test. Is there a way for me to continue where the last test ended?
The reason I want to do this is that I want the tests broken up in the testing tool, so it's easier to see exactly what's failing instead of having to work it out every time. Is there maybe an easier way to do this?
I would like it to look like this in the testing tool:
You can declare the data you want at the top of the spec file to make sure that the same data is used across all the tests.
/// <reference types="cypress" />
import "cypress-audit/commands";
var faker = require('faker');
var randomName = faker.name.findName();
var randomEmail = faker.internet.email();

describe("Full system test", () => {
  before(function () {
    this.User = require("../../faker/user");
  });

  it("Registers and creates profile", function () {
    cy.visit(Cypress.env("home_page"));
    // Pass the same generated values into your custom commands:
    cy.Register({ name: randomName, email: randomEmail });
    cy.contains("Vacancies").click();
    cy.UpdateProfile({});
    cy.contains("Vacancies").click();
    cy.AddCompanyDetails({});
    cy.contains("Vacancies").click();
    cy.AddTeamMember({});
  });

  it("Creates a vacancy and adds candidates", function () {
    cy.CreateVacancy({});
    cy.AddCandidate({});
    cy.AddAdditionalDocs({});
  });
});
Another option would be to move the before() hook with the faker.js code to cypress/support/index.js. Note that this only helps for "Run all specs" under cypress open, where the index file is evaluated once. With cypress run, however, index.js is evaluated once before every spec file, so the data will change between specs.
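To make the "generate once, share everywhere" idea concrete, here is a minimal sketch of a data module with a module-level cache. The names (`makeUser`, `getUser`) and the fields are hypothetical stand-ins for your faker-based `../../faker/user` module, and `Math.random` stands in for faker so the sketch is self-contained:

```javascript
// Sketch: generate the user data once and cache it at module level, so
// every it() block that requires this file sees the same values.
// "makeUser"/"getUser" are hypothetical names; swap the body of makeUser
// for your real faker calls.
function makeUser() {
  const id = Math.random().toString(36).slice(2, 8);
  return {
    name: 'user-' + id,
    email: 'user-' + id + '@example.com',
  };
}

let cachedUser = null;

// Every caller gets the same object for the lifetime of the spec run.
function getUser() {
  if (!cachedUser) {
    cachedUser = makeUser();
  }
  return cachedUser;
}

module.exports = { getUser };
```

In the spec file you would then require this module once and pass `getUser()` into each custom command, so both `it` blocks operate on the same generated user. For staying signed in between tests, depending on your Cypress version, `cy.session()` (or cookie preservation in older versions) can carry the login across `it` blocks.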
I have written a function like this:
const myFunction = () => {
  return 'text';
};

exports.myFunction = myFunction;

if (require.main === module) {
  console.log(myFunction());
}
and this is my test:
const { myFunction } = require('../myFunction');

describe('test', () => {
  it('should return the text', () => {
    expect(myFunction()).toMatch('text');
  });
});
According to code coverage tools, every line in the code is covered except for this line:
console.log(myFunction());
Based on comments, I think maybe the reality is that this line cannot be tested, so I'm updating my question:
How can I:
1. Test this line with Jest, understanding that it may not actually tick the "covered" box, but so that I can literally test it. Not every one of my files has such trivial code in that block; sometimes I do want to test it for real.
2. Cause the coverage statistic to show the file as 100% covered? Not because I am pedantic, but because I use the coverage report to find things I need to add tests for, and having dozens of "false negatives" in my report makes that more difficult.
Based on a suggestion in the comments, I found that I can use a child_process exec call within the test to test the output from the command line like this:
const util = require('util');
const exec = util.promisify(require('child_process').exec);
const { myFunction } = require('../myFunction');

describe('test', () => {
  it('should return the text', () => {
    expect(myFunction()).toBe('text');
  });

  it('should return the text when called via command line too', async () => {
    const { stdout } = await exec('node myFunction', {
      encoding: 'utf8',
    });
    expect(stdout).toBe('text\n');
  });
});
Further comments pointed out that without exporting that section of code, Jest can never see it and hence never test it, meaning it will never show as "covered". Therefore, once I am satisfied that it is tested well enough, I can exclude it from my report by adding /* istanbul ignore next */ before the offending line, like this:
const myFunction = () => {
  return 'text';
};

exports.myFunction = myFunction;

if (require.main === module) {
  /* istanbul ignore next */
  console.log(myFunction());
}
As explained here, Node.js require wraps script contents with wrapper code specified in Module.wrapper and evaluates it with vm.runInThisContext. This can be implemented in a test. It can be something like:
const fs = require('fs');
const path = require('path');
const vm = require('vm');
const Module = require('module');
...
jest.resetModules();
jest.spyOn(console, 'log');

const myModPath = require.resolve('../myFunction');
const wrapper = Module.wrap(fs.readFileSync(myModPath, 'utf8'));
const compiledWrapper = vm.runInThisContext(wrapper, {});

const mockModule = new Module(myModPath);
const mockExports = mockModule.exports;
const mockRequire = Module.createRequire(myModPath);
// Make require.main === module evaluate to true inside the wrapped code:
mockRequire.main = mockModule;

compiledWrapper(mockExports, mockRequire, mockModule, path.basename(myModPath), path.dirname(myModPath));
expect(console.log).toBeCalledWith('text');
I have the following JavaScript class and am writing unit tests using mocha and sinon. When I run the test case I see uncovered lines for 'return this._agentId;' and 'this._agentId = value;'. I am not sure how to cover these lines under test. I am using the Istanbul coverage tool to see coverage.
// Agentmessage.js
class AgentMessage {
  constructor(agentId, message) {
    this._agentId = agentId;
    this._message = message;
  }

  get agentId() {
    return this._agentId;
  }

  set agentId(value) {
    this._agentId = value;
  }
}

module.exports = AgentMessage;
// Agentmessage.test.js
'use strict';
const chai = require('chai');
const sinon = require('sinon');
var chaiAsPromised = require('chai-as-promised');
chai.use(chaiAsPromised).should();
const expect = chai.expect;
const agentMessage = require('../src/model/agentMessage');

describe('agentMessage test', function () {
  let sandbox;
  let agentMessageObj;

  beforeEach(() => {
    agentMessageObj = new agentMessage('agentId', 'message');
    sandbox = sinon.sandbox.create();
  });

  afterEach(() => {
    sandbox.restore();
  });

  it('agentMessage set agentId Test', () => {
    agentMessageObj.agentId = 'agentId';
    expect(agentMessageObj.agentId).to.deep.equal('agentId');
  });

  it('agentMessage get agentId Test', () => {
    expect(agentMessageObj.agentId).to.equal('agentId');
  });
});
I am not seeing the same issue you are. I get 100% coverage.
You say istanbul, but you are in fact using the nyc package, correct? I think you'll find that the istanbul project suggests you use the nyc runner if you are not already.
Consider refreshing your environment if you are able.
rm -rf .nyc_output && rm -rf coverage && rm -rf node_modules
npm i --save-dev nyc mocha chai
If that does not clear things up, consider removing, temporarily at least, the things you are not using in these particular tests (sinon and chai-as-promised, for example). Isolate the code and see if there are conflicts there.
Try this similar code. I get full coverage.
./node_modules/.bin/nyc --reporter html ./node_modules/.bin/mocha test.js
test.js
const { expect } = require('chai');
const AgentMessage = require('./index');

describe('agentMessage test', function () {
  let agentMessage;

  beforeEach(function () {
    agentMessage = new AgentMessage('agentId01', 'message02');
  });

  it('agentMessage set agentId Test', async function () {
    agentMessage.agentId = 'agentId02';
    expect(agentMessage.agentId).to.deep.equal('agentId02');
  });
});
If, after all of that, it is still a problem and you're using a more advanced configuration of nyc/istanbul, start stripping away that configuration and using default properties. See if you can find the troubled part.
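If the accessor bodies still show as missed, one way to be certain they actually execute is to invoke them explicitly through the property descriptor rather than via property syntax. This is a stand-alone sketch using a copy of the AgentMessage class above; `Object.getOwnPropertyDescriptor` is standard JavaScript, so no extra tooling is assumed:

```javascript
// Copy of the class under test.
class AgentMessage {
  constructor(agentId, message) {
    this._agentId = agentId;
    this._message = message;
  }
  get agentId() {
    return this._agentId;
  }
  set agentId(value) {
    this._agentId = value;
  }
}

// Accessors live on the prototype; grab their descriptor and call the
// getter and setter functions directly on an instance. This guarantees
// both accessor bodies run, whatever the coverage tool reports.
const desc = Object.getOwnPropertyDescriptor(AgentMessage.prototype, 'agentId');
const msg = new AgentMessage('agentId01', 'message02');
desc.set.call(msg, 'agentId02');
const readBack = desc.get.call(msg);
```

Inside a mocha test you would wrap the last three lines in an `it()` and assert on `readBack` with chai.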
I am automating an AngularJS website which has login functionality. All I want is to click on the sign-in link and enter a username and password. But somehow my scripts execute faster than the page loads. Please advise me on how I can handle this.
My Login Page object is:
'use strict';
// Normal Login
require('../page-objects/loginPage.js');

var customerPortalHome = function () {
  this.clickSignIn = function () {
    browser.sleep(5000);
    element(by.linkText('Sign In')).click();
    browser.waitForAngular();
    browser.sleep(2000);
    return require('./loginPage.js');
  };
};

module.exports = new customerPortalHome();
My test spec is:
var co = require('co');
var UI = require('./ui.js');
var ui = new UI();
var CustomerPage = require('../page-objects/customerPortalHome.js');

describe("Smoke Test Login to the application", function () {
  it("test", co.wrap(function* () {
    var EC = protractor.ExpectedConditions;
    browser.get(ui.createStartLink());
    expect(browser.getTitle()).toContain("Portal");

    // Verify the user is able to log into the application
    var loginPage = CustomerPage.clickSignIn();
    loginPage.switchToFrame('account-sdk');
    var reportPage = loginPage.clickLogin('$$$$$#gmail.com', '$$$$$');
    expect(browser.getCurrentUrl()).toContain('reports');

    reportPage.clickSignOut();
    expect(browser.getCurrentUrl()).toContain("?signout");
    browser.sleep(800);
  }));
});
Whenever I execute the test The browser opens for a sec and then closes.
My Onprepare method looks like this:
beforeLaunch: function () {
  return new Promise(function (resolve) {
    reporter.beforeLaunch(resolve);
  });
},
onPrepare: function () {
  browser.manage().timeouts().implicitlyWait(5000);
  afterEach(co.wrap(function* () {
    var remote = require('protractor/node_modules/selenium-webdriver/remote');
    browser.driver.setFileDetector(new remote.FileDetector());
  }));
},
Using browser.sleep is never a good idea. The best thing to do is to wait for an element, using the then function to chain the wait after the click:
element(by.xpath("xpath")).click().then(function () {
  var list = element(by.id('id'));
  var until = protractor.ExpectedConditions;
  browser.wait(until.presenceOf(list), 80000, 'Message: took too long');
});
Usually, I use this wait:
browser.wait(protractor.ExpectedConditions.visibilityOf($$('.desk-sidebar > li').get(num - 1)), 60000);
Are you using ignoreSynchronization without setting it back to false somewhere in your page-object helpers?
Be careful: the login can sometimes break waitForAngular when there are a lot of redirections. I ended up using a dirty sleep to wait for the page to load when logging in (no other solution was found; ignoreSync, an ExpectedCondition for a change of URL, and waiting for an element were all not working).
You should also share the error you get.
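The core idea behind `browser.wait` and ExpectedConditions, stripped of Protractor specifics, is a promise-based poll: re-evaluate a condition until it is truthy or a timeout elapses. This generic sketch (not Protractor's actual implementation) shows why such waits compose better with slow page loads than a fixed `browser.sleep`:

```javascript
// Generic polling wait: resolves as soon as `condition` returns (or
// resolves to) a truthy value, rejects after `timeoutMs`. This is the
// same shape browser.wait(EC.visibilityOf(...), timeout) has.
function waitFor(condition, timeoutMs, intervalMs = 50) {
  const deadline = Date.now() + timeoutMs;
  return new Promise((resolve, reject) => {
    (function poll() {
      Promise.resolve(condition()).then((ok) => {
        if (ok) {
          resolve(ok);
        } else if (Date.now() >= deadline) {
          reject(new Error('waitFor: condition not met within ' + timeoutMs + 'ms'));
        } else {
          setTimeout(poll, intervalMs);
        }
      }, reject);
    })();
  });
}
```

Unlike a fixed sleep, this returns as soon as the condition holds, so fast runs are not artificially slowed down and slow runs are not cut short until the timeout.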
I am trying to refactor my code. I know that if I have several expectations they should be isolated in separate 'it' blocks. I am trying to understand how I can rewrite this:
describe('my scenario should make', function () {
  var config = browser.params;
  var url = config.listOfReferencesUrl,
    grid,
    numberField;

  it('test1', function () {
    browser.get(url);
    browser.executeScript("icms.go('WEB_INQ_PROC', 'InquiryList', null, 0)");
    grid = psGrid(by.css("table[class='n-grid']"));
    numberField = grid.getQuickFilter(1);
    numberField.click().sendKeys("Hello!");
    since('fail1').expect(numberField.getInputText()).toEqual("");
  });

  it('test2', function () {
    since('fail2').expect(numberField.getInputText()).toEqual("Hello!");
  });
});
Something like this:
describe('my scenario should make', function () {
  var config = browser.params;
  var url = config.listOfReferencesUrl,
    grid,
    numberField;

  // ******** Run this part of the code ONCE before all tests in the spec ****
  browser.get(url);
  browser.executeScript("icms.go('WEB_INQ_PROC', 'InquiryList', null, 0)");
  grid = psGrid(by.css("table[class='n-grid']"));
  numberField = grid.getQuickFilter(1);
  numberField.click().sendKeys("Hello!");
  // *************************************************************************

  it('test1', function () {
    since('fail1').expect(numberField.getInputText()).toEqual("");
  });

  it('test2', function () {
    since('fail2').expect(numberField.getInputText()).toEqual("Hello!");
  });
});
Maybe somebody has an idea how I can do this?
To answer your question: if you want to run your code once before all tests, use the beforeAll() function available in Jasmine 2. Here's a sample:
beforeAll(function () {
  // Write the code here that you need to run once before all specs
});
You can use the beforeEach() function available in Jasmine to run code each time before a test spec. Here's a sample:
beforeEach(function () {
  // Write the code here that you need to run every time before each spec
});
If you are facing issues getting these functions to work, update your plugins to the latest version and try running it again. Also set framework: 'jasmine2' in your conf.js file.
Hope this helps.
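The difference between the two hooks can be demonstrated with a tiny hand-rolled suite runner (this is a sketch of the semantics, not Jasmine's actual code): beforeAll runs once per suite, while beforeEach runs before every spec.

```javascript
// Minimal sketch of Jasmine-style hook ordering.
function runSuite(specs, hooks) {
  if (hooks.beforeAll) hooks.beforeAll(); // once, before any spec
  for (const spec of specs) {
    if (hooks.beforeEach) hooks.beforeEach(); // once per spec
    spec();
  }
}

let beforeAllRuns = 0;
let beforeEachRuns = 0;

runSuite(
  [function specOne() {}, function specTwo() {}],
  {
    beforeAll: function () { beforeAllRuns += 1; },
    beforeEach: function () { beforeEachRuns += 1; },
  }
);
// With two specs: beforeAllRuns is 1, beforeEachRuns is 2.
```

So the browser.get/executeScript setup from the question belongs in beforeAll, since repeating it before every spec would reload the page and reset numberField.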
Given the following code, I have managed to write a test using QUnit for the first part, but was unable to test finder.doRoutefinding. How can I 'mock' the function finder.doRoutefinding? (Mockjax cannot be used here since no AJAX calls are involved.)
finder.doSelectDestination = function (address) {
  finder.destination = address;
  finder.doRoutefinding(
    finder.departure,
    finder.destination,
    finder.whenRouteLoaded,
    finder.showRoute);
};

test('Destination Selector', function () {
  address = "London";
  finder.doSelectDestination(address);
  equal(finder.destination, address, "Successful Destination Selection");
});
There are caveats, but you could simply replace the function with your mock:
var originalDoRoutefinding = finder.doRoutefinding;
finder.doRoutefinding = function () {
  // Mock code here.
};

// Test code here.

finder.doRoutefinding = originalDoRoutefinding;
If that kind of thing works for you, you might consider using a library like Sinon.JS.
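One of the caveats is that if the test code throws, the original function is never restored. Wrapping the swap in try/finally avoids that. This is a self-contained sketch where `finder` is a small stand-in object, not the asker's real code:

```javascript
// Stand-in for the asker's finder object, just enough to demo the pattern.
const finder = {
  departure: 'Paris',
  destination: null,
  doRoutefinding: function (from, to) {
    // Pretend this calls a routing service.
    return 'route:' + from + '->' + to;
  },
  doSelectDestination: function (address) {
    this.destination = address;
    return this.doRoutefinding(this.departure, this.destination);
  },
};

let mockCalled = false;
const original = finder.doRoutefinding;
try {
  // Swap in a mock that just records the call.
  finder.doRoutefinding = function () {
    mockCalled = true;
    return 'mocked-route';
  };
  finder.doSelectDestination('London');
} finally {
  finder.doRoutefinding = original; // always restored, even on failure
}
```

Sinon's `sinon.stub(finder, 'doRoutefinding')` with `restore()` automates exactly this swap-and-restore bookkeeping, which is why the library is worth considering once you have more than a couple of mocks.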