I'm using Playwright to automate our UI tests. Currently I face the following issue:
In a single spec.ts file I have several test suites. Those suites should run in parallel with each other, but the tests within each suite should not. I cannot split the suites into separate files because they are created dynamically. The reason I want to run the tests of each suite serially is to reuse the existing page: it's much faster to just navigate on an existing page than to do a complete reload all the time. I'll try to explain my problem with some pseudo-code:
catalogs.forEach((objectIdsOfCatalog, catalogId) => {
  // Each suite could run in parallel because the suites
  // do not depend on each other.
  test.describe('Test catalog "' + catalogId + '"', () => {
    let newVersion: PageObject;
    let actualVersion: PageObject;

    test.beforeAll(async ({ browser }) => {
      console.log('New page for', catalogId);
      const { actualUrl, newUrl } = getConfig();
      const context = await browser.newContext();
      actualVersion = new PageObject(await context.newPage(), actualUrl);
      newVersion = new PageObject(await context.newPage(), newUrl);
    });

    test.afterAll(async () => {
      console.log('Close page for', catalogId);
      await actualVersion.close();
      await newVersion.close();
      actualVersion = null;
      newVersion = null;
    });

    // These tests should run serially because it's faster
    // to just navigate on the existing page, thanks to the
    // client-side caching of the web app.
    for (const objectId of objectIdsOfCatalog) {
      test('Testing "' + objectId + '"', async () => {
      });
    }
  });
});
Is there some way to achieve this behavior in Playwright, or do I have to rethink my approach?
I don't know whether multiple test.describe.serial blocks can be nested inside a test.describe.parallel block (and whether that works), but maybe that's worth a try.
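If nesting is allowed, it might look roughly like this (a minimal, untested sketch reusing the names from the question):

test.describe.parallel('All catalogs', () => {
  catalogs.forEach((objectIdsOfCatalog, catalogId) => {
    // Each serial group should then be free to run in its own worker,
    // while the tests inside it keep their order.
    test.describe.serial('Test catalog "' + catalogId + '"', () => {
      for (const objectId of objectIdsOfCatalog) {
        test('Testing "' + objectId + '"', async () => {
          // navigate on the pages created in beforeAll, as in the question
        });
      }
    });
  });
});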
Another option could be not to generate real tests, but to generate steps (test.step) inside tests inside a test.describe.parallel block.
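That variant could be sketched like this (one real test per catalog; each object check becomes a step, so the page and its client-side cache are naturally reused):

test.describe.parallel('All catalogs', () => {
  catalogs.forEach((objectIdsOfCatalog, catalogId) => {
    test('Test catalog "' + catalogId + '"', async ({ page }) => {
      for (const objectId of objectIdsOfCatalog) {
        await test.step('Testing "' + objectId + '"', async () => {
          // navigate and assert on the existing page
        });
      }
    });
  });
});

The trade-off is granularity: a failing step fails the whole catalog test instead of a single object's test.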
And where do the catalogs come from? Maybe instead of generating describe blocks you could generate projects in playwright.config.ts? Projects run in parallel by default, I think. But I don't know whether that approach would work if the data comes from some async source.
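A sketch of the project idea, assuming the catalog ids are available synchronously at config time (the catalogIds import is hypothetical):

// playwright.config.ts
import { defineConfig } from '@playwright/test';
import { catalogIds } from './catalogs'; // hypothetical synchronous source

export default defineConfig({
  projects: catalogIds.map((catalogId) => ({
    name: 'catalog-' + catalogId,
    // Run only the suite whose title matches this catalog.
    grep: new RegExp('Test catalog "' + catalogId + '"'),
  })),
});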
Related
Is loading multiple JSON files in a single test suite good practice in Cypress?
Something like this:
let recipeData;
let loginData;

before(() => {
  cy.fixture('productCatalogData').then((datajson) => {
    recipeData = datajson.recipes;
  });
  cy.fixture('loginData').then((datajson) => {
    loginData = datajson;
  });
});
If you need both JSON files in one suite, I don't think there should be a problem with using fixtures multiple times. However, you can shorten the cy.fixture() code a bit, like this:
before(() => {
  cy.fixture('productCatalogData').as('productCatalogData')
  cy.fixture('loginData').as('loginData')
})

it('Access fixtures data', function () {
  // The test has to use a "function" callback to make sure "this" points at the Mocha context.
  // Access the fixture data via this.productCatalogData and this.loginData.
})
But one thing to add here: Cypress removes aliases after every test. So if you have just one test inside the suite, before() will work fine, but if you have multiple tests you have to use beforeEach().
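With multiple tests, the same shorthand just moves into a beforeEach():

beforeEach(() => {
  cy.fixture('productCatalogData').as('productCatalogData')
  cy.fixture('loginData').as('loginData')
})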
I am trying to write test code for a project management software using Jest. The software is written in JavaScript and uses MongoDB for the database. The project uses an object model hierarchy that goes:
User => Project => Backlog => Story => Task
I use an external script to populate the test database before running the tests in the test file, using a beforeEach block. So far, the populate script makes a couple of users, then assigns a couple of projects to a chosen user, then a couple of backlogs to a chosen project, then a couple of stories to a chosen backlog.
This method has worked for the tests from the User model down to the Backlog model. Now I am writing the test for the Story model, and I am running into a problem where, by the time the code in the test block executes, the test database is not completely populated.
I have used breakpoints and MongoDB Compass to see what is in the database by the time the code is in the test block, and it appears that the database is populated to varying extents on each run. It seems as though the code populating the database is lagging behind Jest's execution queue. Is there any way I can ensure the database population is done before I enter the test block?
The beforeEach block in the Story model test file:
beforeEach(async () => {
  await User.deleteMany();
  await Project.deleteMany();
  await Backlog.deleteMany();
  await Story.deleteMany();
  populate.makeUsers();
  populate.assignProjects(userXId, userX.userName);
  populate.assignBacklogs(populate.projects[1]);
  await populate.assignStories();
  await new User(userX).save();
});
The function populating the database with Stories
this.assignStories = async function () {
  const pBacklog = await Backlog.findOne({ project: this.projects[1]._id, name: "Product Backlog" });
  const temp = this;
  if (pBacklog != undefined) {
    let pBacklogID = pBacklog._id;
    this.stories = createStories(14, this.projects[1]._id, pBacklogID, this.backlogs[0]._id);
    this.stories.forEach(async (story, index) => {
      let newStory = new Story(story);
      this.stories[index] = newStory;
      await newStory.save();
    });
  } else {
    setTimeout(async function () { await temp.assignStories(); }, 200);
  }
};
I have excluded the functions for populating the other models to keep this short, but I can add them if that will help with the problem.
Thank you.
Thank you @Bergi. The problem was using forEach with async/await callbacks. I refactored my code following "Do not use forEach with an asynchronous callback".
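For reference, a sketch of that refactor: the forEach call inside assignStories becomes a loop that the surrounding async function actually awaits, so all saves finish before assignStories() resolves:

// Instead of: this.stories.forEach(async (story, index) => { ... })
for (const [index, story] of this.stories.entries()) {
  const newStory = new Story(story);
  this.stories[index] = newStory;
  await newStory.save();
}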
I am creating tests in TestCafe. The goal is to have the tests written in Gherkin. I looked at some GitHub repositories which integrate Cucumber and TestCafe, but I am trying a different angle.
I would like to use the Gherkin parser and skip Cucumber. Instead, I will create my own implementation to run the test steps. But currently I am stuck trying to get TestCafe to run the tests.
If I am correct, the issue is that TestCafe runs my test file and then sees no fixtures or tests anywhere. That makes sense, because the Gherkin parser delivers its data via the stream API (it uses a separate Go process to parse the feature files), which means that in my current code the Promise is still pending when TestCafe quits; or, if I remove that part, the end callback hasn't fired yet.
Is my analysis correct? If yes, how can I get all the data from the stream and create my tests so that TestCafe will run them?
gherkin_executor.js
var Gherkin = require('gherkin');

console.log('start');

const getParsedGherkin = new Promise((resolve, reject) => {
  let stream = Gherkin.fromPaths(['file.feature']);
  let data = [];
  stream.on('data', (chunk) => {
    if (chunk.hasOwnProperty('source')) {
      data.push({ source: chunk.source, name: null, pickles: [] });
    } else if (chunk.hasOwnProperty('gherkinDocument')) {
      data[data.length - 1].name = chunk.gherkinDocument.feature.name;
    } else {
      data[data.length - 1].pickles.push(chunk.pickle);
    }
  });
  stream.on('end', () => {
    resolve(data);
  });
});

let data = getParsedGherkin.then((data) => { return data; });
console.log(data); // logs a still-pending Promise, not the parsed data

function createTests(data) {
  for (let feature of data) {
    fixture(feature.name);
    for (let testcase of feature.pickles) {
      test(testcase.name, async t => {
        console.log('test');
      });
    }
  }
}
file.feature
Feature: A test feature

  Scenario: A test case
    Given some data
    When doing some action
    Then there is some result
Nice initiative!
To go further with your approach, the createTests method must generate the TestCafe code into at least one JavaScript or TypeScript file, and then you must start the TestCafe runner on those files. In other words, you must write a TestCafe source-code generator.
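A sketch of that second step using TestCafe's programmatic API (the file name is made up, and generatedSource is assumed to be the output of your generator):

const fs = require('fs');
const createTestCafe = require('testcafe');

async function runGenerated(generatedSource) {
  // Write the generated fixtures/tests to disk first...
  fs.writeFileSync('generated-tests.js', generatedSource);
  // ...then point the TestCafe runner at that file.
  const testcafe = await createTestCafe('localhost');
  try {
    const failedCount = await testcafe
      .createRunner()
      .src(['generated-tests.js'])
      .browsers(['chrome:headless'])
      .run();
    console.log('Failed tests:', failedCount);
  } finally {
    await testcafe.close();
  }
}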
Maybe the hdorgeval/testcafe-starter repo on GitHub could be an alternative until Cucumber is officially supported by the TestCafe team.
I am using Istanbul for code coverage, but I'm getting a very low coverage percentage, particularly in the model files.
Consider that the following is the model file:
ModelA.js
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
var app = require('../server');
var db = require('../db/dbConnection');
var config = require('../configs/config');

const Schema1 = new Schema({ 'configurations': [] });

exports.save = function (aa, data, callback) {
  var logMeta = {
    file: 'models/modelA',
    function: 'save',
    data: {},
    error: {}
  };
  if (!aa) {
    return callback('aa is required');
  }
  global.logs[aa].log('info', 'AA: ' + aa, logMeta);
  db.connectDatabase(aa, function (error, mongoDB) {
    if (error) {
      logMeta.data['error'] = error;
      global.logs[aa].log('error', 'error', logMeta);
      return callback(error);
    }
    const ModelA = mongoDB.model('bbb', Schema1);
    ModelA.findOneAndUpdate({}, data, { upsert: true, new: true, runValidators: true }, function (error, result) {
      if (error) {
        logMeta.data['error'] = error;
        global.logs[aa].log('error', 'error', logMeta);
      } else {
        logMeta.data = {};
        logMeta.data['result'] = JSON.parse(JSON.stringify(result));
        global.logs[aa].log('info', 'result', logMeta);
      }
      callback(error, result);
    });
  });
};
TestA.js:
var should = require('should'),
    sinon = require('sinon'),
    ModelA = require("../models/ModelA");

describe('Model test', function () {
  it('Should save Model', function (done) {
    var todoMock = sinon.mock(new ModelA({ 'configurations': [] }));
    var todo = todoMock.object;

    todoMock
      .expects('save')
      .yields(null, 'SAVED');

    todo.save(function (err, result) {
      todoMock.verify();
      todoMock.restore();
      should.equal('SAVED', result, "Test fails due to unexpected result");
      done();
    });
  });
});
But I am getting a code coverage percentage of 20, so how can I increase it?
Also:
1. Do I have to mock db.connectDatabase? If yes, how can I achieve that?
2. Do I have to use a test DB to run all my unit tests, or do I have to assert?
3. Will code coverage work for unit tests or for integration tests?
Please share your ideas. Thanks.
I have been using Istanbul to fully cover most of my client/server projects, so I might have the answers you are looking for.
How does it work
Whenever you require some local file, it gets instrumented all over the place so that Istanbul can understand whether every one of its parts is reached by your code.
Not only is the required file instrumented, your running test is too.
However, while it's easy to cover the running test file, mocked classes and their code might never be executed.
todoMock.expects('save')
According to the Sinon documentation:
Overrides todo.save with a mock function and returns it.
If Istanbul instrumented the real save method, anything within that scope won't ever be reached, so you are actually testing that the mock works, not that your real code does.
This should answer your question "Will code coverage work for unit tests or for integration tests?":
The answer is that Istanbul covers whatever code is actually executed, which is the only thing you're interested in from a code-coverage perspective. Covering Sinon JS is nobody's goal.
No need to assert ... but
Once you've understood how Istanbul works, it follows naturally that it doesn't matter whether you assert or not; all that matters is that you reach the code for real and execute it.
Asserting is just your guard against failures, not a mechanism that is interesting per se to Istanbul. When an assertion fails, your test does too, so it's good for you to know that things didn't work, and there's no need to keep testing the rest of the code (early failure, faster fixes).
Whether you have to mock db.connectDatabase
Yes, at least for the code you posted. You can assign db as a generic object mock on the global context and expect its methods to be called, but you can also simplify your life by writing something like this:
function createDB(err1, err2) {
  return {
    connectDatabase(aa, callback) {
      callback(err1, {
        model(name, schema) {
          return {
            findOneAndUpdate(query, data, options, fn) {
              fn(err2, { any: 'object' });
            }
          };
        }
      });
    }
  };
}

global.db = createDB(null, null);
This code in your test file can be used to create a global db that behaves differently according to the errors you pass along, giving you the ability to run the same test file several times with different expectations.
How to run the same test more than once
Once your test is completed, delete require.cache[require.resolve('../test/file')] and then require('../test/file') again.
Do this as many times as you need.
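In code, the re-run cycle might look like this (a sketch; '../test/file' is the placeholder path from above):

// First run, e.g. with a working mock...
global.db = createDB(null, null);
require('../test/file');

// ...then again with a failing connection.
global.db = createDB(new Error('connect failed'), null);
delete require.cache[require.resolve('../test/file')];
require('../test/file');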
When there is conditional feature detection
I usually run the tests several times, deleting global constructors in case these are patched with a fallback. I also usually store them so that I can put them back later on.
When the code is obvious but shouldn't be reached
In case you have if (err) process.exit(1);, you rarely want to reach that part of the code. There are various comments understood by Istanbul that help you skip parts of the coverage, like /* istanbul ignore if */ or /* istanbul ignore else */, or even the generic /* istanbul ignore next */.
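For example:

/* istanbul ignore if */
if (err) process.exit(1);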
Please consider thinking twice about whether it's just you being lazy or that part really can, safely, be skipped ... I got bitten a couple of times by a badly handled error, which is a disaster, since the moment it happens is exactly when you most need your code to keep running and/or give you all the info you need.
What is being covered?
Maybe you know this already, but the coverage/lcov-report/index.html file, which you can open right away in any browser, will show you all the parts that aren't covered by your tests.
I would like to call my test function multiple times, but after the first time jasmine.onComplete is called, the program exits. I already know that I can't run multiple tests in parallel, but I thought I might be able to queue them; however, if Jasmine exits Node, I am done. Therefore:
Is there a way to prevent jasmine to exit node?
const toCall = {}

jasmine.onComplete(function (passed) {
  toCall[varReporter.last.name](passed, varReporter.last.result)
  toCall[varReporter.last.name] = null
});

function test(folder, file, callback) {
  toCall[file] = callback
  jasmine.execute(['JS/' + folder + '/tests/' + file + '.js'])
}

// User saves a file, a test gets triggered.
test('prototype', 'Array', function (passed, result) {
  console.log(util.inspect(result, { colors: true, depth: null }))
})

// User saves another file and another test should get triggered, but can't.
My tests will not be called in groups, but one after another, based on the user's interactions with files. I need to run a test after each save so that I can determine whether I should process the file or not.
You could override Jasmine's exit function:
jasmine.exit = () => {};
But that causes various glitches.
I'd rather run the whole script in another process:
run-test.js
const path = require('path'),
      Jasmine = require('jasmine/lib/jasmine.js');

const jasmine = new Jasmine({ projectBaseDir: path.resolve() });
jasmine.execute(process.argv.slice(2));
watch-tests.js
const fork = require('child_process').fork;

function test(folder, file) {
  fork('run-test.js', ['JS/' + folder + '/tests/' + file + '.js']);
}

// User saves a file, a test gets triggered.
test('prototype', 'Array')

// User saves another file; another test can now be triggered in its own process.
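If you also need the result back in the watcher, like the callback in the original test() function, a sketch using the fork IPC channel could look like this (the message shape is made up):

// In run-test.js, report completion to the parent process:
jasmine.onComplete(function (passed) {
  if (process.send) process.send({ passed: passed });
});

// In watch-tests.js, receive it:
function test(folder, file, callback) {
  const child = fork('run-test.js', ['JS/' + folder + '/tests/' + file + '.js']);
  child.on('message', (msg) => callback(msg.passed));
}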