Is there a conventional way to document Jest/Puppeteer test suites? [closed] - javascript

Summary of problem: I'm writing several test suites using Jest and Puppeteer to automate end-to-end testing of my AngularJS app. I'm big on documentation, since it helps future developers get up to speed quickly. Unfortunately, I don't know of a conventional, widely accepted method for documenting test suites written with Jest. I've already written an extensive README that explains the tools we're using, how my team configured Jest/Puppeteer, and how to get started writing tests. What I'm specifically wondering is how to document WITHIN each test suite, and whether it's even worth spending time on that (on the latter question, I'm leaning towards yes, it definitely is).
Here's some sample code that I'd like to document:
// index.spec.js
// Insert comment here describing test file (aka test suite)
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({headless: false});
  const page = await browser.newPage();

  // Insert comment here describing test suite
  describe('load startpage', async () => {
    // Insert comment here describing test
    test('page loads', async () => {
      await page.goto('https://my-site.com', {waitUntil: 'networkidle0'});
    });
  });

  // Insert comment here describing test suite
  describe('complete form', async () => {
    // Insert comment here describing test
    test('populate form', async () => {
      let formSelector = 'form[name="form1"]';
      await page.waitForSelector(formSelector, {timeout: 3000});
      await page.waitForSelector(formSelector + ' input', {timeout: 3000});
      await page.click(formSelector + ' input');
      await page.keyboard.type('Hello World');
      let submitButtonSelector = 'form[name="form1"] button[type="submit"]';
      await page.click(submitButtonSelector);
    });

    // Insert comment here describing test
    test('submit form', async () => {
      let submitButtonSelector = 'form[name="form1"] button[type="submit"]';
      await page.waitForSelector(submitButtonSelector, {timeout: 3000});
      await page.click(submitButtonSelector);
    });
  });

  await browser.close();
})();
What I've already tried:
I've researched a little about the conventional method for documenting JavaScript via JSDoc, but I don't think it really applies here, because I'm using the Jest and Puppeteer APIs, which I assume are wrappers for native JavaScript functions.
Question: Do any of you Jest/Puppeteer hackers know the proper way to document tests? Thanks in advance!

I'll try to tailor this response so it doesn't sound completely opinionated.
Yes, documentation is essential, but too much of it doesn't play well either.
In the case of tests, be it unit, integration, or e2e, the frameworks already give you all the constructs you'll ever need to specify (and thereby document) your tests. To my knowledge, there aren't any other conventions for documenting test suites.
The describe/it/test blocks should be thought of as documentation: they should guide any developer through the intentions of whoever wrote the test.
In rare cases, any other essential commentary can be added inline.
The beauty of tests is that when their specification is well written, the run output reads like a book. And yes, describing your tests in short phrases is hard, just like naming a variable. It takes practice, but it's doable.
Any documentation needed beyond that, you've already covered in your README.
Apart from having good test specifications, you'll gain much more value from ensuring the tests are written consistently than from trying to explain in detail what's going on in each test. Let the code do that.
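As an illustration, here's a minimal sketch (reusing the page object and selectors from the question; the block names are invented for the example) of how descriptive describe/test names can carry the documentation on their own:

describe('startpage', () => {
  test('loads completely over the network', async () => {
    await page.goto('https://my-site.com', {waitUntil: 'networkidle0'});
  });
});

describe('form1', () => {
  test('accepts text input into its first field', async () => {
    const formSelector = 'form[name="form1"]';
    await page.waitForSelector(formSelector + ' input', {timeout: 3000});
    await page.click(formSelector + ' input');
    await page.keyboard.type('Hello World');
  });
});

Read in the runner's output, "startpage > loads completely over the network" already tells the next developer what is being verified, without a single comment.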

Related

Is it possible to create custom commands in Playwright?

I'm looking for a way to write custom commands in Playwright like it's done in Cypress. The Playwright issue tracker has one page related to it, but I have never seen any code example.
I'm working on one test case where I'm trying to improve code reusability. Here's the code:
// assuming `config` comes from dotenv, since the original import was not shown
import { config } from 'dotenv';
import { test, chromium } from '@playwright/test';

config();

let context;
let page;

test.beforeEach(async () => {
  context = await chromium.launchPersistentContext(
    'C:\\Users\\User\\AppData\\Local\\Microsoft\\Edge\\User Data\\Default',
    {
      headless: false,
      executablePath: 'C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe',
    }
  );
  page = await context.newPage();
  await page.goto('http://localhost:3000/');
});

test.afterEach(async () => {
  await page.close();
  await context.close();
});

test('Sample test', async () => {
  await page.click('text=Open popup');
  await page.click('_react=Button >> nth=0');
  await page.click('text=Close popup');
});
I'm trying to create a function that wraps the test.beforeEach() and test.afterEach() hooks and the code inside them.
The Playwright issue page says I should move this into a separate Node module so it can be reused, but I'm struggling to understand how to do that.
The example you're giving can be solved by implementing a custom fixture.
Fixtures are @playwright/test's solution for customizing and extending the test framework. You can define your own objects (similar to browser, context, page) that get injected into your test, so the test has access to them. Fixtures can also do things before and after each test, such as setting up preconditions and tearing them down, and you can also override existing fixtures.
For more information, including examples, have a look here:
https://playwright.dev/docs/test-fixtures
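For instance, here is a minimal sketch of such a fixture (the file name fixtures.ts is illustrative; the paths and URL are the ones from the question). It overrides the built-in page fixture, so the beforeEach/afterEach pair becomes unnecessary:

// fixtures.ts
import { test as base, chromium } from '@playwright/test';

export const test = base.extend({
  // replace the built-in `page` fixture with one backed by a persistent context
  page: async ({}, use) => {
    const context = await chromium.launchPersistentContext(
      'C:\\Users\\User\\AppData\\Local\\Microsoft\\Edge\\User Data\\Default',
      {
        headless: false,
        executablePath: 'C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe',
      }
    );
    const page = await context.newPage();
    await page.goto('http://localhost:3000/');
    await use(page);        // the test body runs here
    await context.close();  // teardown after each test
  },
});

export { expect } from '@playwright/test';

A test then imports test from that module instead of from @playwright/test:

import { test } from './fixtures';

test('Sample test', async ({ page }) => {
  await page.click('text=Open popup');
  await page.click('_react=Button >> nth=0');
  await page.click('text=Close popup');
});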

What is a good way to make screenshot tests with Playwright?

What is a good way to make a screenshot test with Playwright?
If I understand correctly, I need to take a screenshot, like below:

it('Some test', async () => {
  await page.screenshot({ path: 'screenshot.png' });
});

But how can I compare it with baseline screenshots?
If I missed something in the docs, please let me know.
Judging by the fact that the Playwright team started developing their own test runner that can compare screenshots, it seems they do not plan to add such functionality directly to Playwright itself:
playwright-test#visual-comparisons

import { it, expect } from "@playwright/test";

it("compares page screenshot", async ({ page, browserName }) => {
  await page.goto("https://stackoverflow.com");
  const screenshot = await page.screenshot();
  expect(screenshot).toMatchSnapshot(`test-${browserName}.png`, { threshold: 0.2 });
});
Playwright's toHaveScreenshot and toMatchSnapshot are great if you want to compare a current screenshot to a screenshot from a previous test run, but if you want to compare two screenshots that you have as Buffers in memory you can use the getComparator method that Playwright uses behind the scenes:
import { getComparator } from 'playwright-core/lib/utils';

await page.goto('my website here');

const beforeImage = await page.screenshot({
  path: `./screenshots/before.png`
});

//
// some state changes implemented here
//

const afterImage = await page.screenshot({
  path: `./screenshots/after.png`
});

const comparator = getComparator('image/png');
expect(comparator(beforeImage, afterImage)).toBeNull();
The advantage of using getComparator is that it fuzzy-matches: you can set a threshold for how many pixels are allowed to differ. If you just want to check that the PNGs are exactly identical, a dead simple way to check equality between the two screenshots is:

expect(Buffer.compare(beforeImage, afterImage)).toEqual(0)

Beware though: this simpler method is flaky and sensitive to a single-pixel difference in rendering (such as when animations/transitions haven't completed, or when there are differences in anti-aliasing).
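As a side note, newer Playwright versions (1.23+) ship the toHaveScreenshot assertion mentioned above, which handles the baseline bookkeeping for you. A minimal sketch:

import { test, expect } from '@playwright/test';

test('page matches its baseline screenshot', async ({ page }) => {
  await page.goto('https://stackoverflow.com');
  // the first run records the baseline; later runs compare against it
  await expect(page).toHaveScreenshot('home.png', { maxDiffPixelRatio: 0.02 });
});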

Jasmine Async test generation

Let's imagine we have a promise that does a large amount of work and returns helper functions.
A contrived example:
const testPromise = testFn => () => {
  // I/O promise that returns a helper for testing
  const helper = Promise.resolve({testHelper: () => 'an helper function'});
  return helper.then(testFn).finally(() => console.log('tear down'));
};

// This describe would work as expected
describe('Async test approach', () => {
  it('A test', testPromise(async ({testHelper}) => {
    expect(testHelper()).toBe('an helper function');
  }));
});

// This part doesn't work
describe('Async describe approach', testPromise(async ({testHelper}) => {
  it('Test 1', () => {
    expect(testHelper()).toBe('an helper function');
  });
  it('Test 2', () => {
    expect(testHelper()).not.toBe('A chair');
  });
}));
What I would like to achieve is something like the second example, where I can use async code within describe without re-evaluating testPromise for every test.
describe doesn't handle async callbacks, so I'm not even able to loop and create dynamic tests properly.
I've read many comments saying that describe should only be a simple way to group tests, but then how can someone generate tests asynchronously based on an I/O result?
Thanks
= ADDITIONAL CONSIDERATION =
Regarding all the comments you kindly added, I should have given a few additional details...
I am well aware that tests must be defined synchronously :), and that is exactly where the problem starts. I totally disagree with that constraint, and I am trying to find an alternative that avoids before/after hooks and doesn't rely on an external variable. There was an open Jest issue to address this; it seems they agreed that an async describe would be useful, but they won't implement it, the reason being that Jest uses Jasmine's implementation of describe, so the "fix" would have to be made there.
I wanted to avoid beforeAll and afterAll as much as I could. My goal was an easy (and neat) way to define integration tests tailored to my needs, without making users worry about initialization and teardown. I will keep using the style of Example 1 above; that seems the best solution to me, even if it is clearly more verbose.
Take a look at Defining Tests. The doc says:
Tests must be defined synchronously for Jest to be able to collect your tests.
This is the principle for defining test cases: the it function must be called synchronously. That's why your second example doesn't work.
Any I/O operations should be done in the beforeAll, afterAll, beforeEach, or afterEach hooks to prepare your test doubles and fixtures. A test should be isolated from the external environment as much as possible.
If you must do this, maybe you can write the dynamically obtained testHelper function to a static .js file, and then test it in a synchronous way.
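A rough, hypothetical sketch of that idea (the file and function names are invented): a small script, run before Jest, serializes the helpers into a plain synchronous module that tests can require normally:

// generate-helpers.js: run this before the test suite
const fs = require('fs');

// stands in for the real async I/O that produces the helpers
const loadHelpers = () => Promise.resolve({ testHelper: () => 'an helper function' });

loadHelpers().then(({ testHelper }) => {
  // only works for self-contained functions, since toString() drops closures
  fs.writeFileSync(
    './helpers.static.js',
    `module.exports = { testHelper: ${testHelper.toString()} };`
  );
});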
As it was noted, describe serves to group tests.
This can be achieved with beforeAll. Since beforeAll would have to be called anyway, it can be moved into testPromise:
const prepareHelpers = (testFn) => {
  beforeAll(() => {
    ...
    return helper.then(testFn);
  });
};

describe('Async describe approach', () => {
  let testHelper;
  prepareHelpers(helpers => { testHelper = helpers.testHelper });
  ...
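Filling in the elisions for illustration (the helper value follows the question's example), the complete pattern might look like this:

const prepareHelpers = (testFn) => {
  beforeAll(() => {
    const helper = Promise.resolve({ testHelper: () => 'an helper function' });
    return helper.then(testFn); // Jest waits for the returned promise
  });
};

describe('Async describe approach', () => {
  let testHelper;
  prepareHelpers(helpers => { testHelper = helpers.testHelper; });

  it('Test 1', () => {
    expect(testHelper()).toBe('an helper function');
  });
});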

How to increase the code coverage using istanbul in node.js

I am using Istanbul for code coverage, but I'm getting a very low coverage percentage, particularly in my model files.
Consider the following model file:
ModelA.js
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
var app = require('../server');
var db = require('../db/dbConnection');
var config = require('../configs/config');

const Schema1 = new Schema({ 'configurations': [] });

exports.save = function (aa, data, callback) {
  var logMeta = {
    file: 'models/modelA',
    function: 'save',
    data: {},
    error: {}
  };
  if (!aa) {
    return callback('aa is required');
  }
  global.logs[aa].log('info', 'AA: ' + aa, logMeta);
  db.connectDatabase(aa, function(error, mongoDB){
    if (error) {
      logMeta.data['error'] = error;
      global.logs[aa].log('error', 'error', logMeta);
      return callback(error);
    }
    const ModelA = mongoDB.model('bbb', cccc);
    ModelA.findOneAndUpdate({}, data, {upsert: true, new: true, runValidators: true}, function(error, result){
      if (error) {
        logMeta.data['error'] = error;
        global.logs[aa].log('error', 'error', logMeta);
      }
      else {
        logMeta.data = {};
        logMeta.data['result'] = JSON.parse(JSON.stringify(result));
        global.logs[aa].log('info', 'result', logMeta);
      }
      callback(error, result);
    });
  });
};
TestA.js:
var should = require('should'),
    sinon = require('sinon'),
    ModelA = require("../models/ModelA");

describe('Model test', function () {
  it('Should save Model', function (done) {
    var todoMock = sinon.mock(new ModelA({'configurations': []}));
    var todo = todoMock.object;
    todoMock
      .expects('save')
      .yields(null, 'SAVED');
    todo.save(function(err, result) {
      todoMock.verify();
      todoMock.restore();
      should.equal('SAVED', result, "Test fails due to unexpected result");
      done();
    });
  });
});
But I am getting a code coverage percentage of 20, so how can I increase it?
Also:
1. Do I have to mock db.connectDatabase? If yes, how can I achieve that?
2. Do I have to use a test DB to run all my unit tests, or do I have to assert?
3. Does code coverage work for unit tests or integration tests?
Please share your ideas. Thanks.
I have been using Istanbul to 100% code-cover most of my client/server projects, so I might have the answers you are looking for.
How does it work
Whenever you require a local file, that file gets instrumented all over the place so Istanbul can understand whether every part of it is reached by your code.
The required file is not the only thing instrumented; your running test is too.
However, while it's easy to cover the running test file, mocked classes and their code might never be executed.
todoMock.expects('save')
According to the Sinon documentation:
Overrides todo.save with a mock function and returns it.
Since Istanbul instrumented the real save method, anything within its scope won't ever be reached, so you are actually testing that the mock works, not that your real code does.
This should answer your question: does code coverage work for unit tests or integration tests?
The answer is that it covers the code, which is the only thing you're interested in from a code coverage perspective. Covering Sinon JS is nobody's goal.
No need to assert ... but
Once you've understood how Istanbul works, it follows naturally that it doesn't matter whether you assert or not; all that matters is that you reach the code for real and execute it.
Asserting is just your guard against failures, not a mechanism that is interesting per se in any Istanbul test. When an assertion fails, your test does too, so it's good for you to know that things didn't work, and there's no need to keep testing the rest of the code (early failure, faster fixes).
Whether you have to mock db.connectDatabase
Yes, at least for the code you posted. You can assign db as a generic object mock on the global context and expect its methods to be called, but you can also simplify your life by writing something like this:
function createDB(err1, err2) {
  return {
    connectDatabase(aa, callback) {
      callback(err1, {
        model(name, value) {
          return {
            findOneAndUpdate($0, $1, $2, fn) {
              fn(err2, {any: 'object'});
            }
          };
        }
      });
    }
  };
}

global.db = createDB(null, null);
This code, placed in your test file, creates a global db that behaves differently according to the errors you pass along, giving you the ability to run the same test file several times with different expectations.
How to run the same test more than once
Once your test is completed, delete require.cache[require.resolve('../test/file')] and then require('../test/file') again.
Do this as many times as you need.
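A minimal sketch of that re-run loop, reusing the createDB mock above (the module path is the answer's placeholder):

// first pass: everything succeeds
global.db = createDB(null, null);
require('../test/file');

// second pass: the database connection fails
global.db = createDB(new Error('connection failed'), null);
delete require.cache[require.resolve('../test/file')];
require('../test/file');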
When there is conditional feature detection
I usually run the test several times, deleting global constructors in case they are patched with a fallback. I also usually store them first so I can put them back later on.
When the code is obvious but shouldn't be reached
In case you have if (err) process.exit(1); you rarely want to reach that part of the code. There are several comments understood by Istanbul that let you skip parts of the coverage, such as /* istanbul ignore if */, /* istanbul ignore else */, or the generic /* istanbul ignore next */.
Please think twice about whether it's just you being lazy or that part can really, safely, be skipped. I got bitten a couple of times by a badly handled error, which is a disaster, since the moment it happens is exactly when you most need your code to keep running and/or giving you all the information you need.
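For example, a minimal sketch of the ignore-if comment in context:

/* istanbul ignore if */
if (err) {
  // intentionally uncovered: only reachable when the process must die
  process.exit(1);
}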
What is being covered?
Maybe you know this already, but the coverage/lcov-report/index.html file, which you can open right away in any browser, will show you all the parts that aren't covered by your tests.

"Undefined step reference" in WebStorm while trying refer to TypeScript step definition code

While implementing a few test scripts based on the cucumber-protractor-typescript stack, I ran into a problem: my Gherkin feature file cannot find the declarations of the steps:
The step definitions are written in TypeScript. However, all my tests compile and run successfully.
There is a question with the same problem as mine, but it didn't solve my problem.
When I try to create a step definition file manually, there is no option to create a TypeScript file, only JavaScript:
Here is my example of step definition:
// imports are assumed, since the original snippet didn't show them
import { defineSupportCode } from 'cucumber';
import { browser } from 'protractor';
import { expect } from 'chai';
import { Search } from '../../pages/search';

defineSupportCode(({Given, When, Then, Before}) => {
  let search: Search;

  Before(() => {
    search = new Search();
  });

  Given(/^User on the angular site$/, async () => {
    let title = await browser.getTitle();
    return expect(title).to.equal('Angular');
  });

  When(/^User type "(.*?)" into the search input field$/, async (text: string) => {
    await search.enterSearchInput(text);
  });

  Then(/^User should see some results in the search overlay$/, async () => {
    await search.getSearchResultItems();
    let count = await search.getCountOfResults();
    return expect(count).to.be.above(0);
  });
});
And my Cucumber feature file:
Feature: Search
  As a developer using Angular
  User need to look-up classes and guidelines
  So that User can concentrate on building awesome apps

  #SearchScenario
  Scenario: Type in a search-term
    Given User on the angular site
    When User type "foo" into the search input field
    Then User should see some results in the search overlay
My repositories structure:
/features
/steps
searchSteps.ts
search.feature
/pages
search.ts
Does somebody know how to solve this problem?
Unfortunately, WebStorm provides no support for writing Cucumber.js tests in TypeScript. Please vote for WEB-22516 and WEB-29665 to be notified of any progress with TypeScript support.
