Summary of problem: I'm writing several test suites (using Jest and Puppeteer) to automate tests for my AngularJS app's index.html page. Unfortunately, I'm seeing some odd behavior when I run my tests, which I think is related to the order in which Jest runs my various test suites.
Background: I'm using Jest (v24.8.0) as my testing framework. I'm using Puppeteer (v1.19.0) to spin up and control a Chromium browser on which to perform my tests.
My Code:
<!-- index.html -->
<div ng-app="myApp" ng-controller="myCtrl as ctrl">
<form name="form1">
<md-input-container>
<label class="md-required">Name</label>
<input ng-model="ctrl.name1">
</md-input-container>
<md-dialog-actions>
<button class="md-primary md-button" ng-transclude type="submit">Submit</button>
</md-dialog-actions>
</form>
<form name="form2">
<md-input-container>
<label class="md-required">Name</label>
<input ng-model="ctrl.name2">
</md-input-container>
<md-dialog-actions>
<button class="md-primary md-button" ng-transclude type="submit">Submit</button>
</md-dialog-actions>
</form>
</div>
// index.spec.js
const puppeteer = require('puppeteer');
(async () => {
const browser = await puppeteer.launch({headless: false});
const page = await browser.newPage();
await page.goto('https://my-site.com');
describe('form 1', async () => {
test('populate form 1', async () => {
let formSelector = 'form[name="form1"]';
await page.waitForSelector(formSelector+' input', {timeout: 3000});
await page.click(formSelector+' input');
await page.keyboard.type('casey');
let submitButtonSelector = 'form[name="form1"] button[type="submit"]';
await page.click(submitButtonSelector);
});
});
describe('form 2', async () => {
test('populate form 2', async () => {
let formSelector = 'form[name="form2"]';
await page.waitForSelector(formSelector+' input', {timeout: 3000});
await page.click(formSelector+' input');
await page.keyboard.type('jackie');
let submitButtonSelector = 'form[name="form2"] button[type="submit"]';
await page.click(submitButtonSelector);
});
});
await browser.close();
})();
Test Behavior:
Sometimes when I run npm test, it seems that my two test suites, 'form 1' and 'form 2' (which I defined with describe), are run in parallel (although I know that's not possible in JavaScript, so I assume Jest runs different test suites asynchronously). Either way, when I run my tests in non-headless mode, I can see that form1's name input is populated with 'jackie', even though it should be 'casey'. After that, form2 is never filled out (even though my second test suite is supposed to do just that), the tests complete, and Jest reports that 'populate form 2' has failed. This doesn't happen every time I run my tests, so you may not be able to reproduce the problem.
My Questions:
Does Jest run test suites in parallel/asynchronously? Note: I'm not talking about individual tests defined with test within the test suites; I know those are run asynchronously if I pass them an async function.
If Jest does run test suites asynchronously, how do I disable that? Or better yet, is it smart/conventional/optimal to give different test suites their own browser instances so that they run in completely separate windows? Are there any other methods for ensuring test suites run separately and/or synchronously?
If Jest does not run test suites asynchronously, then why do you think I'm seeing this behavior?
I'm asking because I'd like to find a way to ensure that all my tests pass all of the time, instead of just some of the time. This will make it easier in the long run to determine whether or not my changes during development have broken anything.
Thanks in advance to all you Jest/Puppeteer hackers out there!
By default, Jest runs test files in parallel across a pool of workers (up to maxWorkers), but runs all describe and test blocks serially within a single file.
If you want all files to run serially, use --runInBand; this removes the worker pool.
However, I recommend a refactor: you can nest describe blocks.
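For instance, the worker count can be pinned in the config, or the whole run forced in band from the CLI (a sketch; the values are illustrative):

```javascript
// jest.config.js — run test files one after another instead of in parallel
module.exports = {
  maxWorkers: 1,
};
// or from the command line, in the current process:
//   npx jest --runInBand
```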
// instead of an IIFE, nest describes so Jest can collect the tests; share the
// browser via beforeAll/afterAll, since describe callbacks must stay synchronous
const puppeteer = require('puppeteer');

describe('forms', () => {
  let browser;

  beforeAll(async () => {
    browser = await puppeteer.launch({headless: false});
  });

  afterAll(async () => {
    await browser.close();
  });

  describe('form 1', () => {
    test('populate form 1', async () => {
      // give every test its own page to avoid leaking state between tests
      const page = await browser.newPage();
      await page.goto('https://my-site.com');
      const formSelector = 'form[name="form1"]';
      await page.waitForSelector(formSelector + ' input', {timeout: 3000});
      await page.click(formSelector + ' input');
      await page.keyboard.type('casey');
      await page.click(formSelector + ' button[type="submit"]');
    });
  });

  describe('form 2', () => {
    test('populate form 2', async () => {
      const page = await browser.newPage();
      await page.goto('https://my-site.com');
      const formSelector = 'form[name="form2"]';
      await page.waitForSelector(formSelector + ' input', {timeout: 3000});
      await page.click(formSelector + ' input');
      await page.keyboard.type('jackie');
      await page.click(formSelector + ' button[type="submit"]');
    });
  });
});
Update
In Node.js you can spawn processes; in the case of Jest, each worker process is a Node instance, and they communicate through standard I/O. You can tell Jest how many workers to spawn with the CLI options. However, I've encountered performance degradation when using too many workers with Jest, because Node.js instances are heavyweight, and spawning and synchronizing them can cost more than just running in band or with a few workers.
Also, there is no guarantee on how they run: if you spawn 10 processes, your OS might schedule them all onto the same CPU thread, and they would effectively run in series.
Related
I'm using Playwright.dev to automate our UI tests. Currently I face the following issue:
In a single spec.ts file I have different test suites. Those test suites should run in parallel, but not each individual test. I cannot split the test suites into separate files because they are created dynamically. The reason I want the tests within each suite to run serially is to reuse the existing page: it's much faster to just navigate on the existing page than to do a complete refresh every time, thanks to the web app's client-side caching. I'll try to explain my problem with some pseudo-code:
catalogs.forEach((objectIdsOfCatalog, catalogId) => {
// each suite could run in parallel because they do not
// depend on each other
test.describe('Test catalog "' + catalogId + '"', () => {
let newVersion: PageObject;
let actualVersion: PageObject;
test.beforeAll(async ({browser}) => {
console.log('New page for', catalogId);
const {actualUrl, newUrl} = getConfig();
const context = await browser.newContext();
actualVersion = new PageObject(await context.newPage(), actualUrl);
newVersion = new PageObject(await context.newPage(), newUrl);
});
test.afterAll(async () => {
console.log('Close page for', catalogId);
actualVersion.close();
newVersion.close();
actualVersion = null;
newVersion = null;
});
// those tests should ran serial because it's faster
// if we just navigate on the existing page due to the
// client side caching of the web app
for (const objectId of objectIdsOfCatalog) {
test('Testing "' + objectId + '"', async () => {
});
}
});
});
Is there some way to achieve the following behavior in Playwright or do I have to rethink my approach?
I don't know whether multiple test.describe.serial blocks can be nested in a test.describe.parallel block (and whether that works), but maybe that's worth a try.
Another option could be to not generate real tests, but to generate steps (test.step) inside tests inside a test.describe.parallel block.
And where do the catalogs come from? Maybe instead of generating describe blocks, you could generate projects in playwright.config.ts? Projects run in parallel by default, I think. But I don't know whether that approach would work if the data comes from some async source.
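If the catalog list can be computed synchronously in the config, the projects idea might look roughly like this (a sketch; the project names, the testMatch pattern, and the catalogId option are assumptions):

```javascript
// playwright.config.js — a sketch: one project per catalog, so catalogs can run
// in parallel while each spec file stays serial; catalogId would be read in the
// spec via a custom option fixture (names here are illustrative)
const catalogs = ['catalogA', 'catalogB']; // assumed to be known synchronously

module.exports = {
  projects: catalogs.map((catalogId) => ({
    name: `catalog-${catalogId}`,
    testMatch: /catalog\.spec\.ts/,
    use: { catalogId },
  })),
};
```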
I'm looking for a way to write custom commands in Playwright like it's done in Cypress. Playwright Issues has one page related to it but I have never seen any code example.
I'm working on one test case where I'm trying to improve code reusability. Here's the code:
import { test, chromium } from '@playwright/test';
config();
let context;
let page;
test.beforeEach(async () => {
context = await chromium.launchPersistentContext(
'C:\\Users\\User\\AppData\\Local\\Microsoft\\Edge\\User Data\\Default',
{
headless: false,
executablePath: 'C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe',
}
);
page = await context.newPage();
await page.goto('http://localhost:3000/');
});
test.afterEach(async () => {
await page.close();
await context.close();
});
test('Sample test', async () => {
await page.click('text=Open popup');
await page.click('_react=Button >> nth=0');
await page.click('text=Close popup');
});
I'm trying to create a function that will call hooks test.beforeEach() and test.afterEach() and the code inside them.
In the Playwright Issue page, it says that I need to move it to a separate Node module and then I would be able to use it but I'm struggling to understand how to do it.
The example you're giving can be solved by implementing a custom fixture.
Fixtures are @playwright/test's solution to customizing/extending the test framework. You can define your own objects (similar to browser, context, page) that are injected into your tests, so each test has access to them. Fixtures can also do things before and after each test, such as setting up preconditions and tearing them down. You can also override existing fixtures.
For more information including examples have a look here:
https://playwright.dev/docs/test-fixtures
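For example, the beforeEach/afterEach pair from the question could become a fixture along these lines (a sketch; the fixture name appPage, the empty user-data directory, and the URL are assumptions taken from the question's code):

```javascript
// fixtures.js — a sketch: the setup/teardown from the question moved into a
// custom fixture; "appPage" is a hypothetical fixture name
const { test: base, chromium } = require('@playwright/test');

exports.test = base.extend({
  appPage: async ({}, use) => {
    // setup: runs before each test that uses appPage
    const context = await chromium.launchPersistentContext('', { headless: false });
    const page = await context.newPage();
    await page.goto('http://localhost:3000/');
    await use(page);        // the test body runs here
    await context.close();  // teardown: runs after the test
  },
});

// usage in a spec file:
// const { test } = require('./fixtures');
// test('Sample test', async ({ appPage }) => {
//   await appPage.click('text=Open popup');
// });
```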
I am trying to write test code for project management software using Jest. The software is written in JavaScript and uses MongoDB for the database. The project uses an object model hierarchy that goes:
User => Project => Backlog => Story => Task
I use an external script to populate the test database before running the tests in the test file using a beforeEach block. So far, the populate script makes a couple of users. Then assigns a couple of projects to a chosen user. Then assigns a couple of backlogs to a chosen project. Then assigns a couple of stories to a chosen backlog.
This method has worked for the tests from the User model down to the Backlog model. Now I am writing the test for the Story model, and I am running into a problem where, by the time the code in the test block executes, the test database is not completely populated.
I have used breakpoints and MongoDB Compass to see what is in the database by the time the code reaches the test block, and it appears the database is populated to varying extents from run to run. It seems as though the code populating the database lags behind Jest's execution queue. Is there any way I can ensure the database population is done before I enter the test block?
Before each block in the story model test file
beforeEach( async () => {
await User.deleteMany();
await Project.deleteMany();
await Backlog.deleteMany();
await Story.deleteMany();
populate.makeUsers();
populate.assignProjects(userXId, userX.userName);
populate.assignBacklogs(populate.projects[1]);
await populate.assignStories();
await new User(userX).save();
});
The function populating the database with Stories
this.assignStories = async function () {
const pBacklog = await Backlog.findOne({project: this.projects[1]._id, name: "Product Backlog"})
const temp = this
if (pBacklog != undefined) {
let pBacklogID = pBacklog._id
this.stories = createStories(14, this.projects[1]._id, pBacklogID, this.backlogs[0]._id);
this.stories.forEach(async (story, index) =>{
let newStory = new Story(story);
this.stories[index] = newStory;
await newStory.save();
})
} else {
setTimeout(async function () {await temp.assignStories()}, 200)
}
}
I have excluded the functions for populating the other models to keep this short but I can add it if it will help with the problem.
Thank you.
Thank you @Bergi. The problem was using forEach with async/await callbacks. I refactored my code following the advice in "Do not use forEach with an asynchronous callback".
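A minimal sketch of the failure mode (plain Node, no Mongoose): Array.prototype.forEach fires each async callback and returns immediately, ignoring the promises they return, while a for...of loop awaits each one in turn.

```javascript
// forEach vs for...of with async callbacks — a minimal sketch
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function saveAllWithForEach(items, saved) {
  // BUG: forEach ignores the promises returned by the async callback,
  // so this function resolves before any "save" has finished
  items.forEach(async (item) => {
    await delay(10); // stand-in for newStory.save()
    saved.push(item);
  });
}

async function saveAllAwaited(items, saved) {
  // for...of awaits each save in turn (await Promise.all(items.map(...)) also works)
  for (const item of items) {
    await delay(10);
    saved.push(item);
  }
}

(async () => {
  const a = [];
  await saveAllWithForEach([1, 2, 3], a);
  console.log(a.length); // 0 — nothing saved yet when the function returns

  const b = [];
  await saveAllAwaited([1, 2, 3], b);
  console.log(b.length); // 3
})();
```

In the populate script, replacing this.stories.forEach(async …) with a for...of loop (or await Promise.all over a .map) makes assignStories actually wait for every save before resolving.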
I have a simple test:
beforeEach(function () {
lib.startApp(constants.ENVIRONMENT, browser);//get url
loginPageLoc.loginAs(constants.ADMIN_LOGIN,constants.ADMIN_PASSWORD,
browser);// log in
browser.driver.sleep(5000); //wait
});
afterEach(function() {
browser.restart(); //or browser.close()
});
it('Test1' , async() => {
lib.waitUntilClickable(adminManagersPage.ButtonManagers, browser);
adminManagersPage.ButtonManagers.click();
expect(element(by.css('.common-popup')).isPresent()).toBe(false);
});
it('Test2' , async() => {
lib.waitUntilClickable(adminManagersPage.ButtonManagers, browser);
adminManagersPage.ButtonManagers.click();
expect(element(by.css('.common-popup')).isPresent()).toBe(false);
});
The first iteration looks fine, but after .restart() I get:
Failed: This driver instance does not have a valid session ID (did you
call WebDriver.quit()?) and may no longer be used. NoSuchSessionError:
This driver instance does not have a valid session ID (did you call
WebDriver.quit()?) and may no longer be used.
If I use .close() I get:
Failed: invalid session id
But if I change Test2 on simple console.log('case 1'); it looks fine.
Please explain what am I doing wrong?
You are declaring your functions as async but are not awaiting any of the actions within them. If you are not setting SELENIUM_PROMISE_MANAGER to false in your config, you will see unexpected behavior throughout your tests when declaring async functions. This async behavior is likely the cause of your issue, so I would set SELENIUM_PROMISE_MANAGER: false and ensure you're awaiting your actions in each function.
The reason your test passes if you change the second test to just console.log() is that you are not interacting with the browser, so the Selenium session ID is not required. Every time the browser is closed, the Selenium session ID is destroyed, and a new one is created when a new browser window is launched.
Also you should be aware that there is a config setting you can enable so you do not need to do it manually in your test.
Update: Adding code examples of what I have described:
Note: if you have a lot of code already developed, it will take serious effort to convert your framework to async/await syntax. For a quicker solution, you could try removing the async keywords from your it blocks.
Add these to your config
SELENIUM_PROMISE_MANAGER:false,
restartBrowserBetweenTests:true
and change your spec to
beforeEach(async function () {
await lib.startApp(constants.ENVIRONMENT, browser);//get url
await loginPageLoc.loginAs(constants.ADMIN_LOGIN, constants.ADMIN_PASSWORD,
browser);// log in
await browser.driver.sleep(5000); //wait
});
it('Test1', async () => {
await lib.waitUntilClickable(adminManagersPage.ButtonManagers, browser);
await adminManagersPage.ButtonManagers.click();
expect(await element(by.css('.common-popup')).isPresent()).toBe(false);
});
it('Test2', async () => {
await lib.waitUntilClickable(adminManagersPage.ButtonManagers, browser);
await adminManagersPage.ButtonManagers.click();
expect(await element(by.css('.common-popup')).isPresent()).toBe(false);
});
There is a relevant configuration option:
// If true, protractor will restart the browser between each test.
restartBrowserBetweenTests: true,
Add the above in your config to restart browser between your tests.
Hope it helps you.
I am creating tests in TestCafe. The goal is to have the tests written in Gherkin. I looked at some GitHub repositories which integrate Cucumber and TestCafe but I am trying a different angle.
I would like to use the Gherkin parser and skip Cucumber. Instead I will create my own implementation to run the teststeps. But currently I am stuck trying to get TestCafe to run the tests.
If I am correct, the issue is that TestCafe runs my test file and then sees no fixtures or tests anywhere. That makes sense, because the Gherkin parser uses the stream API (it spawns a separate Go process to parse the feature files) to deliver the data, which means that in my current code the Promise is still pending when TestCafe quits (or, if I remove that part, the end callback hasn't fired yet).
Is my analysis correct? If yes how can I get all the data from the stream and create my tests such that TestCafe will run it?
gherkin_executor.js
var Gherkin = require('gherkin');
console.log('start')
const getParsedGherkin = new Promise((resolve, reject) => {
let stream = Gherkin.fromPaths(['file.feature'])
let data = []
stream.on('data', (chunk) => {
if(chunk.hasOwnProperty('source')){
data.push({source: chunk.source, name: null, pickles: []})
}
else if (chunk.hasOwnProperty('gherkinDocument')){
data[data.length-1].name = chunk.gherkinDocument.feature.name
}
else {
data[data.length-1].pickles.push(chunk.pickle)
}
})
stream.on('end', () => {
resolve(data)
})
})
let data = getParsedGherkin.then((data) => {return data})
console.log(data)
function createTests(data){
for(let feature of data){
fixture(feature.name)
for(let testcase of feature.pickles){
test(testcase.name, async t => {
console.log('test')
})
}
}
}
file.feature
Feature: A test feature
Scenario: A test case
Given some data
When doing some action
Then there is some result
Nice initiative!
To go further with your approach, the createTests method must generate the TestCafe code into at least one JavaScript or TypeScript file, and then you must start the TestCafe runner on those files.
In other words, you must write a TestCafe source-code generator.
Maybe the hdorgeval/testcafe-starter repo on GitHub could be an alternative until Cucumber is officially supported by the TestCafe Team.