I am creating tests in TestCafe. The goal is to have the tests written in Gherkin. I looked at some GitHub repositories which integrate Cucumber and TestCafe, but I am trying a different angle.
I would like to use the Gherkin parser and skip Cucumber; instead I will create my own implementation to run the test steps. Currently, however, I am stuck trying to get TestCafe to run the tests.
If I am correct, the issue is that TestCafe runs my test file and then sees no fixtures or tests anywhere. That makes sense, because the Gherkin parser delivers its data through the stream API (it uses a separate Go process to parse the feature files), which means that in my current code the Promise is still pending when TestCafe quits, or, if I remove the Promise, the stream's end callback hasn't fired yet.
Is my analysis correct? If so, how can I get all the data from the stream and create my tests so that TestCafe will run them?
gherkin_executor.js
var Gherkin = require('gherkin');

console.log('start')

const getParsedGherkin = new Promise((resolve, reject) => {
    let stream = Gherkin.fromPaths(['file.feature'])
    let data = []
    stream.on('data', (chunk) => {
        if (chunk.hasOwnProperty('source')) {
            data.push({source: chunk.source, name: null, pickles: []})
        }
        else if (chunk.hasOwnProperty('gherkinDocument')) {
            data[data.length - 1].name = chunk.gherkinDocument.feature.name
        }
        else {
            data[data.length - 1].pickles.push(chunk.pickle)
        }
    })
    stream.on('end', () => {
        resolve(data)
    })
})

let data = getParsedGherkin.then((data) => {return data})
console.log(data)
function createTests(data) {
    for (let feature of data) {
        fixture(feature.name)
        for (let testcase of feature.pickles) {
            test(testcase.name, async t => {
                console.log('test')
            })
        }
    }
}
file.feature
Feature: A test feature

  Scenario: A test case
    Given some data
    When doing some action
    Then there is some result
Nice initiative!
To go further with your approach, the createTests method must generate the TestCafe code in at least one JavaScript or TypeScript file, and then you must start the TestCafe runner on those files.
So, in short, the next step is to write a TestCafe source code generator.
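As a rough illustration of that idea, here is a minimal sketch (not a drop-in solution): the generateTestFile helper, the generated_tests.js file name and the chrome browser are assumptions made for the example, and the actual step lookup is left out. It reuses the getParsedGherkin promise from your code together with TestCafe's programmatic runner API.

const fs = require('fs');
const createTestCafe = require('testcafe');

// Hypothetical helper: turn the parsed Gherkin data into TestCafe source code.
function generateTestFile(data, outputPath) {
    let source = '';
    for (const feature of data) {
        source += `fixture('${feature.name}');\n`;
        for (const pickle of feature.pickles) {
            // Real step implementations would be resolved here; this only logs the pickle name.
            source += `test('${pickle.name}', async t => { console.log('${pickle.name}'); });\n`;
        }
    }
    fs.writeFileSync(outputPath, source);
}

getParsedGherkin.then(async (data) => {
    generateTestFile(data, 'generated_tests.js');

    // Run the generated file with TestCafe's programmatic API.
    const testcafe = await createTestCafe('localhost');
    try {
        const failedCount = await testcafe
            .createRunner()
            .src(['generated_tests.js'])
            .browsers(['chrome'])
            .run();
        console.log('Failed tests:', failedCount);
    } finally {
        await testcafe.close();
    }
});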
Maybe the hdorgeval/testcafe-starter repo on GitHub could be an alternative until Cucumber is officially supported by the TestCafe Team.
I'm using Playwright.dev to automate our UI tests. Currently I face the following issue:
In a single spec.ts file I have different test suites. Those test suites should run in parallel, but not the tests inside them. I cannot split the test suites into separate files because they are created dynamically. The reason I want the tests of each suite to run serially is to reuse the existing page: it is much faster to keep navigating on the same page than to do a complete reload all the time. I'll try to explain my problem with some pseudo-code:
catalogs.forEach((objectIdsOfCatalog, catalogId) => {
    // each suite could run in parallel because they do not
    // depend on each other
    test.describe('Test catalog "' + catalogId + '"', () => {
        let newVersion: PageObject;
        let actualVersion: PageObject;

        test.beforeAll(async ({browser}) => {
            console.log('New page for', catalogId);
            const {actualUrl, newUrl} = getConfig();
            const context = await browser.newContext();
            actualVersion = new PageObject(await context.newPage(), actualUrl);
            newVersion = new PageObject(await context.newPage(), newUrl);
        });

        test.afterAll(async () => {
            console.log('Close page for', catalogId);
            actualVersion.close();
            newVersion.close();
            actualVersion = null;
            newVersion = null;
        });

        // these tests should run serially because it's faster
        // if we just navigate on the existing page due to the
        // client-side caching of the web app
        for (const objectId of objectIdsOfCatalog) {
            test('Testing "' + objectId + '"', async () => {
            });
        }
    });
});
Is there some way to achieve the following behavior in Playwright or do I have to rethink my approach?
I don't know whether multiple test.describe.serial blocks can be nested inside a test.describe.parallel block, but maybe that's worth a try.
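A minimal, untested sketch of that nesting, reusing PageObject and getConfig from your example:

test.describe.parallel('All catalogs', () => {
    catalogs.forEach((objectIdsOfCatalog, catalogId) => {
        // Each catalog gets its own serial suite, so its tests share the pages
        // created in beforeAll and run one after another.
        test.describe.serial('Test catalog "' + catalogId + '"', () => {
            let newVersion;
            let actualVersion;

            test.beforeAll(async ({browser}) => {
                const {actualUrl, newUrl} = getConfig();
                const context = await browser.newContext();
                actualVersion = new PageObject(await context.newPage(), actualUrl);
                newVersion = new PageObject(await context.newPage(), newUrl);
            });

            test.afterAll(async () => {
                actualVersion.close();
                newVersion.close();
            });

            for (const objectId of objectIdsOfCatalog) {
                test('Testing "' + objectId + '"', async () => {
                    // navigate on the existing pages here
                });
            }
        });
    });
});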
Another option could be to not generate real tests, but just to generate steps (test.step) inside tests inside a test.describe.parallel block.
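A rough sketch of that variant, again assuming PageObject and getConfig from your example; each object check becomes a test.step inside a single test per catalog:

test.describe.parallel('All catalogs', () => {
    catalogs.forEach((objectIdsOfCatalog, catalogId) => {
        test('Testing catalog "' + catalogId + '"', async ({browser}) => {
            const {actualUrl, newUrl} = getConfig();
            const context = await browser.newContext();
            const actualVersion = new PageObject(await context.newPage(), actualUrl);
            const newVersion = new PageObject(await context.newPage(), newUrl);

            for (const objectId of objectIdsOfCatalog) {
                // Steps run serially within the test and reuse the same pages.
                await test.step('Testing "' + objectId + '"', async () => {
                    // navigate on actualVersion / newVersion here
                });
            }

            actualVersion.close();
            newVersion.close();
        });
    });
});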
And where do the catalogs come from? Maybe instead of generating describe blocks, you could generate projects in playwright.config.ts? Projects run in parallel by default, I think, but I don't know whether that approach would work if the data comes from some async source.
I am trying to write test code for project management software using Jest. The software is written in JavaScript and uses MongoDB for the database. The project uses an object model hierarchy that goes:
User => Project => Backlog => Story => Task
I use an external script to populate the test database before running the tests in the test file, via a beforeEach block. So far, the populate script makes a couple of users, then assigns a couple of projects to a chosen user, then a couple of backlogs to a chosen project, then a couple of stories to a chosen backlog.
This method has worked for the tests from the user model down to the backlog model. Now I am writing the tests for the story model, and I am running into a problem where, by the time the code in the test block executes, the test database is not completely populated.
I have used breakpoints and MongoDB Compass to see what is in the database by the time the code is in the test block, and it appears that the database is populated to varying extents on each run. It seems as though the code populating the database is lagging behind Jest's execution queue. Is there any way I can ensure the database population is done before I enter the test block?
The beforeEach block in the story model test file:
beforeEach(async () => {
    await User.deleteMany();
    await Project.deleteMany();
    await Backlog.deleteMany();
    await Story.deleteMany();
    populate.makeUsers();
    populate.assignProjects(userXId, userX.userName);
    populate.assignBacklogs(populate.projects[1]);
    await populate.assignStories();
    await new User(userX).save();
});
The function populating the database with Stories
this.assignStories = async function () {
    const pBacklog = await Backlog.findOne({project: this.projects[1]._id, name: "Product Backlog"})
    const temp = this
    if (pBacklog != undefined) {
        let pBacklogID = pBacklog._id
        this.stories = createStories(14, this.projects[1]._id, pBacklogID, this.backlogs[0]._id);
        this.stories.forEach(async (story, index) => {
            let newStory = new Story(story);
            this.stories[index] = newStory;
            await newStory.save();
        })
    } else {
        setTimeout(async function () {await temp.assignStories()}, 200)
    }
}
I have excluded the functions for populating the other models to keep this short but I can add it if it will help with the problem.
Thank you.
Thank you @Bergi. The problem was using forEach with async/await functions. I refactored my code following the advice in "Do not use forEach with an asynchronous callback".
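For reference, a minimal sketch of that refactor: replacing the forEach with a loop whose awaits actually delay the surrounding async function, so the caller's await populate.assignStories() only resolves once every story is saved (the retry branch from the original function is omitted here).

this.assignStories = async function () {
    const pBacklog = await Backlog.findOne({project: this.projects[1]._id, name: "Product Backlog"});
    this.stories = createStories(14, this.projects[1]._id, pBacklog._id, this.backlogs[0]._id);
    for (const [index, story] of this.stories.entries()) {
        const newStory = new Story(story);
        this.stories[index] = newStory;
        await newStory.save();   // each save completes before the loop continues
    }
};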
I am using Jest to test my API and when I run my tests, my JSON file results.json gets written to due to the following line in my API app.js (which I don't want happening):
fs.writeFile('results.json', JSON.stringify(json), (err, result) => {
    if (err) console.log('error', err);
});
This is what my Jest file looks like:
const request = require('supertest');
const app = require('./app');

// Nico Tejera at https://stackoverflow.com/questions/1714786/query-string-encoding-of-a-javascript-object
function serialise(obj) {
    return Object.keys(obj).map(k => `${encodeURIComponent(k)}=${encodeURIComponent(obj[k])}`).join('&');
}

describe('Test /addtask', () => {
    test('POST /addtask Successfully redirects if newDate and newTask filled in correctly', () => {
        const params = {
            newTask: 'Example',
            newDate: '2020-03-11'
        };

        return request(app)
            .post('/addtask')
            .send(serialise(params))
            .expect(301);
    });
});
I tried creating a mock of the JSON file and placed it outside the describe statement to prevent the actual results.json file being written to:
jest.mock('./results.json', () => ({ name: 'preset1', JSONtask: [], JSONcomplete: [] }, { name: 'preset2', JSONtask: [], JSONcomplete: [] }));
But this doesn't change anything. Does anyone have any suggestions?
I have seen other solutions to similar problems but they don't provide the answer I'm looking for.
EDIT: Although not a very good method, one solution to my problem is to wrap the fs.writeFile within the statement
if (process.env.NODE_ENV !== 'test') {
//code
};
although this would mean that the fs.writeFile call itself cannot be tested.
NOTE: I am still accepting answers!
Your issue is that the code you want to test has a hard-coded I/O operation in it, which always makes things harder to test.
What you'll want to do is to isolate the dependency on fs.writeFile, for example into something like a ResultsWriter. That dependency can then be injected and mocked for your test purposes.
I wrote an extensive example on a very similar case with NestJS yesterday under "How to unit test a class extending an abstract class reading environment variables", which you can hopefully adapt to your needs.
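Roughly, and only as a sketch (the resultsWriter module name and shape are placeholders, not your actual code), the idea looks like this:

// resultsWriter.js - the only place that touches the file system
const fs = require('fs');

exports.write = function write(json, callback) {
    fs.writeFile('results.json', JSON.stringify(json), callback);
};

// app.js - calls the writer instead of fs directly, e.g. inside the /addtask handler:
// resultsWriter.write(json, err => { if (err) console.log('error', err); });

// app.test.js - the writer module can now be mocked away entirely
jest.mock('./resultsWriter', () => ({
    write: jest.fn((json, callback) => callback && callback(null))
}));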
jest.mock(path, factory) is for mocking JS modules, not file content.
You should instead mock fs.writeFile and check that it has been called with the expected arguments. The docs explain how to do it.
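For instance, a minimal sketch along those lines, assuming app.js calls fs.writeFile exactly as shown in the question:

const fs = require('fs');
const request = require('supertest');
const app = require('./app');

// Replace fs.writeFile for the test run so no real file is written.
jest.spyOn(fs, 'writeFile').mockImplementation((path, data, callback) => callback(null));

test('POST /addtask writes results.json', async () => {
    await request(app)
        .post('/addtask')
        .send('newTask=Example&newDate=2020-03-11')
        .expect(301);

    expect(fs.writeFile).toHaveBeenCalledWith(
        'results.json',
        expect.any(String),
        expect.any(Function)
    );
});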
I am using Istanbul for code coverage, but I'm getting a very low coverage percentage, particularly for the model files.
Consider the following model file:
ModelA.js
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
var app = require('../server')
var db = require('../db/dbConnection');
var config = require('../configs/config')

const Schema1 = new Schema({ 'configurations': [] });

exports.save = function (aa, data, callback) {
    var logMeta = {
        file: 'models/modelA',
        function: 'save',
        data: {},
        error: {}
    }
    if (!aa) {
        return callback('aa is required')
    }
    global.logs[aa].log('info', 'AA: ' + aa, logMeta);
    db.connectDatabase(aa, function (error, mongoDB) {
        if (error) {
            logMeta.data['error'] = error
            global.logs[aa].log('error', 'error', logMeta);
            return callback(error)
        }
        const ModelA = mongoDB.model('bbb', cccc);
        ModelA.findOneAndUpdate({}, data, {upsert: true, new: true, runValidators: true}, function (error, result) {
            if (error) {
                logMeta.data['error'] = error
                global.logs[aa].log('error', 'error', logMeta);
            }
            else {
                logMeta.data = {}
                logMeta.data['result'] = JSON.parse(JSON.stringify(result))
                global.logs[aa].log('info', 'result', logMeta);
            }
            callback(error, result);
        });
    })
}
TestA.js:
var should = require('should'),
    sinon = require('sinon'),
    ModelA = require("../models/ModelA");

describe('Model test', function () {
    it('Should save Model', function (done) {
        var todoMock = sinon.mock(new ModelA({'configurations': []}));
        var todo = todoMock.object;

        todoMock
            .expects('save')
            .yields(null, 'SAVED');

        todo.save(function (err, result) {
            todoMock.verify();
            todoMock.restore();
            should.equal('SAVED', result, "Test fails due to unexpected result")
            done();
        });
    });
});
But I am getting a code coverage percentage of 20, so how can I increase it?
Also:
1. Do I have to mock db.connectDatabase? If yes, how can I achieve that?
2. Do I have to use a test DB to run all my unit tests, or do I have to assert?
3. Will code coverage work for unit tests or for integration tests?
Please share your ideas. Thanks.
I have been using Istanbul to 100% code cover most of my client/server projects so I might have the answers you are looking for.
How does it work
Whenever you require some local file, it gets wrapped all over the place so Istanbul can understand whether every one of its parts is reached by your code.
Not only is the required file tainted; your running test is too.
However, while it's easy to code cover the running test file, mocked classes and their code might never be executed.
todoMock.expects('save')
According to the Sinon documentation:
Overrides todo.save with a mock function and returns it.
If Istanbul tainted the real save method, anything within that scope won't ever be reached, so you are actually testing that the mock works, not that your real code does.
This should answer your question: will code coverage work for unit tests or for integration tests?
The answer is that it covers the code, which is the only thing you're interested in from a code coverage perspective. Covering Sinon JS is nobody's goal.
No need to assert ... but
Once you've understood how Istanbul works, it follows naturally that it doesn't matter whether you assert or not; all that matters is that you reach the code for real and execute it.
Asserting is just your guard against failures, not a mechanism that is interesting per se for an Istanbul run. When your assertion fails, your test does too, so it's good for you to know that things didn't work, and there's no need to keep testing the rest of the code (early failure, faster fixes).
Whether you have to mock the db.connectDatabase
Yes, at least for the code you posted. You can assign db as a generic object mock to the global context and expect its methods to be called, but you can also simplify your life by writing this:
function createDB(err1, err2) {
    return {
        connectDatabase(aa, callback) {
            callback(err1, {
                model(name, value) {
                    return {
                        findOneAndUpdate($0, $1, $3, fn) {
                            fn(err2, {any: 'object'});
                        }
                    };
                }
            });
        }
    };
}

global.db = createDB(null, null);
This code in your test file can be used to create a global db that behaves differently according to the errors you pass along, giving you the ability to run the same test file several times with different expectations.
How to run the same test more than once
Once your test is completed, delete require.cache[require.resolve('../test/file')] and then require('../test/file') again.
Do this as many times as you need.
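A small sketch of that pattern, reusing the createDB helper above (the '../test/file' path is just a placeholder):

// First pass: no errors anywhere.
global.db = createDB(null, null);
require('../test/file');

// Drop the cached module so it can be executed again.
delete require.cache[require.resolve('../test/file')];

// Second pass: connectDatabase fails, covering the error branch.
global.db = createDB(new Error('connection failed'), null);
require('../test/file');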
When there is conditional feature detection
I usually run the test several times, deleting global constructors in case these are patched with a fallback. I also usually store them so I can put them back later on.
When the code is obvious but shouldn't be reached
In case you have if (err) process.exit(1);, you rarely want to reach that part of the code. There are various comments understood by Istanbul that help you skip parts of the code, such as /* istanbul ignore if */, ignore else, or even the generic ignore next.
Please think twice about whether it's just you being lazy or whether that part can really, safely, be skipped ... I got bitten a couple of times by a badly handled error, which is a disaster, since when it happens is exactly when you most need your code to keep running and/or give you all the info you need.
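For example, a tiny sketch of that pragma applied to the process.exit case mentioned above:

/* istanbul ignore if */
if (err) {
    // deliberately excluded from coverage: hard to reach from a unit test
    process.exit(1);
}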
What is being covered?
Maybe you know this already, but the coverage/lcov-report/index.html file, which you can open right away with any browser, will show you all the parts that aren't covered by your tests.
While implementing a few test scripts based on a cucumber-protractor-typescript setup, I ran into a problem: my Gherkin file cannot find the declarations of the steps:
The step definitions are written in TypeScript. However, all my tests compile and run successfully.
There is a question describing the same problem as mine, but it didn't solve my problem.
When I try to create a step definition file manually, there is no option to create a TypeScript file, only a JavaScript one:
Here is my example of step definition:
import { defineSupportCode } from 'cucumber';
import { browser } from 'protractor';
import { expect } from 'chai';
import { Search } from '../../pages/search';

defineSupportCode(({Given, When, Then, Before}) => {
    let search: Search;

    Before(() => {
        search = new Search();
    });

    Given(/^User on the angular site$/, async () => {
        let title = await browser.getTitle();
        return expect(title).to.equal('Angular');
    });

    When(/^User type "(.*?)" into the search input field$/, async (text: string) => {
        await search.enterSearchInput(text);
    });

    Then(/^User should see some results in the search overlay$/, async () => {
        await search.getSearchResultItems();
        let count = await search.getCountOfResults();
        return expect(count).to.be.above(0);
    });
});
And my cucumber file:
Feature: Search
  As a developer using Angular
  User need to look-up classes and guidelines
  So that User can concentrate on building awesome apps

  @SearchScenario
  Scenario: Type in a search-term
    Given User on the angular site
    When User type "foo" into the search input field
    Then User should see some results in the search overlay
My repositories structure:
/features
    /steps
        searchSteps.ts
    search.feature
/pages
    search.ts
Does somebody know how to solve this problem?
Unfortunately, WebStorm provides no support for writing Cucumber.js tests in TypeScript. Please vote for WEB-22516 and WEB-29665 to be notified of any progress with TypeScript support.