What is a good way to make a screenshot test with Playwright?
If I understand correctly, I need to take a screenshot, like below:
it('Some test', async () => {
  await page.screenshot({ path: 'screenshot.png' });
});
But how can I compare it against reference (baseline) screenshots?
If I missed something in the docs, please let me know.
Judging by the fact that the Playwright team started developing their own test runner, which can compare screenshots:
playwright-test#visual-comparisons
import { it, expect } from "@playwright/test";
it("compares page screenshot", async ({ page, browserName }) => {
await page.goto("https://stackoverflow.com");
const screenshot = await page.screenshot();
expect(screenshot).toMatchSnapshot(`test-${browserName}.png`, { threshold: 0.2 });
});
it seems they do not plan to add such functionality directly to the Playwright library itself.
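For what it's worth, newer versions of @playwright/test also ship a dedicated web-first assertion for screenshot comparison. A minimal sketch, assuming a recent Playwright release (the baseline file name and diff ratio here are illustrative):
import { test, expect } from "@playwright/test";

test("compares page to a stored baseline", async ({ page }) => {
  await page.goto("https://stackoverflow.com");
  // Writes the named baseline on the first run, then fails the test
  // if more than 2% of pixels differ from it on later runs.
  await expect(page).toHaveScreenshot("test.png", { maxDiffPixelRatio: 0.02 });
});
Baselines can be refreshed with npx playwright test --update-snapshots.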
Playwright's toHaveScreenshot and toMatchSnapshot are great if you want to compare a current screenshot to one from a previous test run. But if you want to compare two screenshots that you have as Buffers in memory, you can use the getComparator method that Playwright uses behind the scenes:
import { getComparator } from 'playwright-core/lib/utils';
await page.goto('my website here');
const beforeImage = await page.screenshot({
path: `./screenshots/before.png`
});
//
// some state changes implemented here
//
const afterImage = await page.screenshot({
path: `./screenshots/after.png`
});
const comparator = getComparator('image/png');
expect(comparator(beforeImage, afterImage)).toBeNull();
The advantage of using getComparator is that it fuzzy matches: you can set a threshold for how many pixels are allowed to differ. If you just want to check that the PNGs are exactly identical, a dead simple equality check is:
expect(Buffer.compare(beforeImage, afterImage)).toEqual(0);
Beware though: this simpler method is flaky and sensitive to a single pixel of difference in rendering (such as when animations/transitions have not completed, or when anti-aliasing differs).
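If you need finer control, the comparator also accepts a third options argument. Note this is an internal, undocumented API, so treat the option name below as an assumption based on current Playwright sources (it mirrors the documented toHaveScreenshot options):
// assumed option name; the comparator returns null on a match
const result = comparator(beforeImage, afterImage, { maxDiffPixels: 100 });
if (result) {
  // a non-null result means the images differ beyond the allowance;
  // result.errorMessage describes the mismatch
  console.log(result.errorMessage);
}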
I'm looking for a way to write custom commands in Playwright like it's done in Cypress. The Playwright issue tracker has one page related to it, but I have never seen any code example.
I'm working on one test case where I'm trying to improve code reusability. Here's the code:
import { config } from 'dotenv'; // assumption: config() below comes from dotenv
import { test, chromium } from '@playwright/test';

config();
let context;
let page;
test.beforeEach(async () => {
context = await chromium.launchPersistentContext(
'C:\\Users\\User\\AppData\\Local\\Microsoft\\Edge\\User Data\\Default',
{
headless: false,
executablePath: 'C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe',
}
);
page = await context.newPage();
await page.goto('http://localhost:3000/');
});
test.afterEach(async () => {
await page.close();
await context.close();
});
test('Sample test', async () => {
await page.click('text=Open popup');
await page.click('_react=Button >> nth=0');
await page.click('text=Close popup');
});
I'm trying to create a function that will call the test.beforeEach() and test.afterEach() hooks and the code inside them.
The Playwright issue page says that I need to move it to a separate Node module and then I would be able to use it, but I'm struggling to understand how to do that.
The example you're giving can be solved by implementing a custom fixture.
Fixtures are @playwright/test's solution to customizing/extending the test framework. You can define your own objects (similar to browser, context, page) that you inject into your test, so the test has access to them. They can also do things before and after each test, such as setting up preconditions and tearing them down. You can also override existing fixtures.
For more information including examples have a look here:
https://playwright.dev/docs/test-fixtures
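For example, the persistent-context setup from the question could become a fixture roughly like this. A sketch, assuming @playwright/test is installed (the paths are copied from the question; the file name fixtures.js is illustrative):
// fixtures.js
import { test as base, chromium, expect } from '@playwright/test';

export const test = base.extend({
  // Override the built-in page fixture with one backed by a persistent Edge context.
  page: async ({}, use) => {
    const context = await chromium.launchPersistentContext(
      'C:\\Users\\User\\AppData\\Local\\Microsoft\\Edge\\User Data\\Default',
      {
        headless: false,
        executablePath: 'C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe',
      }
    );
    const page = await context.newPage();
    await page.goto('http://localhost:3000/');
    await use(page);        // the test runs here, receiving this page
    await context.close();  // teardown: runs after each test
  },
});
export { expect };
Your spec then imports test from this file and drops the beforeEach/afterEach hooks entirely:
import { test } from './fixtures';

test('Sample test', async ({ page }) => {
  await page.click('text=Open popup');
  await page.click('_react=Button >> nth=0');
  await page.click('text=Close popup');
});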
I'm trying to take a screenshot of a failed test case using Jest and Playwright. The handling function is defined in a custom environment, as seen below:
const PlaywrightEnvironment = require('jest-playwright-preset/lib/PlaywrightEnvironment')
.default
class CustomEnvironment extends PlaywrightEnvironment {
async setup() {
await super.setup()
}
async teardown() {
await super.teardown()
}
async handleTestEvent(event, state) {
if (event.name === 'test_done' && event.test.errors.length > 0) {
const parentName = event.test.parent.name.replace(/\W/g, '-')
const specName = event.test.name.replace(/\W/g, '-')
await this.global.page.screenshot({
path: `screenshots/${parentName}_${specName}.png`,
})
}
}
}
module.exports = CustomEnvironment;
However, I am closing the pages after each test:
afterEach(async () => {
await page.close();
});
This leads to the page being closed before the screenshot is captured:
Test suite failed to run
Error: page.screenshot: Target page, context or browser has been closed
18 | const specName = event.test.name.replace(/\W/g, '-')
19 |
> 20 | await this.global.page.screenshot({
| ^
21 | path: `screenshots/${parentName}_${specName}.png`,
22 | })
23 | }
Is there a way to either pass the event to afterEach so that it doesn't close the page when an error occurred, or to take the screenshot more synchronously so that afterEach does not run before the screenshot is taken?
There are deliberate, specialized cases where you may want to use page.close() in a custom fixture/config, but I wouldn't say that's the case for you based on what you're trying to do. By default, the browser/context will close automatically after each test, and will therefore also close the page being tested. You can see the default teardown here.
Removing your afterEach method should resolve the issue. :)
Other notable features:
Check out @playwright/test to see how custom configs/fixtures can help you make consistent test environments (i.e. if you wanted some environments to take screenshots only on failure but not others; see the config sketch below).
Playwright also supports page object models, a feature that makes your tests much more readable and maintainable. I also have an example here that you can reference for page object models with the new version of Playwright, if that helps.
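Here is a minimal playwright.config.ts sketch of the only-on-failure behavior mentioned above (just the relevant option; everything else is left at its default):
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // automatically capture a screenshot, but only when a test fails
    screenshot: 'only-on-failure',
  },
});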
I have a simple test:
beforeEach(function () {
lib.startApp(constants.ENVIRONMENT, browser);//get url
loginPageLoc.loginAs(constants.ADMIN_LOGIN,constants.ADMIN_PASSWORD,
browser);// log in
browser.driver.sleep(5000); //wait
});
afterEach(function() {
browser.restart(); //or browser.close()
});
it('Test1' , async() => {
lib.waitUntilClickable(adminManagersPage.ButtonManagers, browser);
adminManagersPage.ButtonManagers.click();
expect(element(by.css('.common-popup')).isPresent()).toBe(false);
});
it('Test2' , async() => {
lib.waitUntilClickable(adminManagersPage.ButtonManagers, browser);
adminManagersPage.ButtonManagers.click();
expect(element(by.css('.common-popup')).isPresent()).toBe(false);
});
The first iteration looks fine, but after .restart() I get:
Failed: This driver instance does not have a valid session ID (did you call WebDriver.quit()?) and may no longer be used.
NoSuchSessionError: This driver instance does not have a valid session ID (did you call WebDriver.quit()?) and may no longer be used.
If I use .close() I get:
Failed: invalid session id
But if I change Test2 to a simple console.log('case 1'); it looks fine.
Please explain: what am I doing wrong?
You are declaring your functions as async but are not awaiting any actions within them. If you are not setting SELENIUM_PROMISE_MANAGER to false in your config, you will see unexpected behavior throughout your tests when declaring async functions. This async behavior is likely the cause of your issue, so I would set SELENIUM_PROMISE_MANAGER: false and ensure you're awaiting your actions in each function.
The reason your test passes if you change the second test to just be console.log() is because you are not interacting with the browser and therefore the selenium session ID is not required. Every time the browser is closed the selenium session id will be destroyed and a new one created when a new browser window is launched.
Also, you should be aware that there is a config setting you can enable so you do not need to restart the browser manually in your tests.
Update: Adding code examples of what I have described:
Note: if you have a lot of code already developed, it will take serious effort to convert your framework to async/await syntax. For a quicker solution you could try removing the async keywords from your it blocks.
Add these to your config
SELENIUM_PROMISE_MANAGER: false,
restartBrowserBetweenTests: true
and change your spec to:
beforeEach(async function () {
await lib.startApp(constants.ENVIRONMENT, browser);//get url
await loginPageLoc.loginAs(constants.ADMIN_LOGIN, constants.ADMIN_PASSWORD,
browser);// log in
await browser.driver.sleep(5000); //wait
});
it('Test1', async () => {
await lib.waitUntilClickable(adminManagersPage.ButtonManagers, browser);
await adminManagersPage.ButtonManagers.click();
expect(await element(by.css('.common-popup')).isPresent()).toBe(false);
});
it('Test2', async () => {
await lib.waitUntilClickable(adminManagersPage.ButtonManagers, browser);
await adminManagersPage.ButtonManagers.click();
expect(await element(by.css('.common-popup')).isPresent()).toBe(false);
});
There is a relevant configuration option:
// If true, protractor will restart the browser between each test.
restartBrowserBetweenTests: true,
Add the above to your config to restart the browser between your tests.
Hope it helps you.
I am using Istanbul for code coverage, but I am getting a very low coverage percentage, particularly in my model files.
Consider the following model file:
ModelA.js
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
var app = require('../server')
var db = require('../db/dbConnection');
var config = require('../configs/config')
const Schema1 = new Schema({ 'configurations': [] });
exports.save = function (aa, data, callback) {
var logMeta = {
file: 'models/modelA',
function: 'save',
data: {},
error: {}
}
if (!aa) {
return callback('aa is required')
}
global.logs[aa].log('info', 'AA: ' + aa, logMeta);
db.connectDatabase(aa, function(error, mongoDB){
if(error){
logMeta.data['error'] = error
global.logs[aa].log('error', 'error', logMeta);
return callback(error)
}
const ModelA = mongoDB.model('bbb', Schema1);
ModelA.findOneAndUpdate({}, data, {upsert: true, new: true, runValidators: true}, function(error ,result){
if (error) {
logMeta.data['error'] = error
global.logs[aa].log('error', 'error', logMeta);
}
else {
logMeta.data = {}
logMeta.data['result'] = JSON.parse(JSON.stringify(result))
global.logs[aa].log('info', 'result', logMeta);
}
callback(error, result);
});
})
}
TestA.js:
var should = require('should'),
sinon = require('sinon'),
ModelA= require("../models/ModelA");
describe('Model test', function () {
it('Should save Model', function (done) {
var todoMock = sinon.mock(new ModelA({'configurations': []}));
var todo = todoMock.object;
todoMock
.expects('save')
.yields(null, 'SAVED');
todo.save(function(err, result) {
todoMock.verify();
todoMock.restore();
should.equal('SAVED', result, "Test fails due to unexpected result")
done();
});
});
});
But I am getting a code coverage percentage of 20, so how can I increase it?
Also:
1. Do I have to mock db.connectDatabase? If yes, how can I achieve that?
2. Do I have to use a test DB to run all my unit tests, or do I have to assert?
3. Will code coverage work for unit tests or integration tests?
Please share your ideas. Thanks.
I have been using Istanbul to 100% code cover most of my client/server projects, so I might have the answers you are looking for.
How does it work
Whenever you require some local file, it gets wrapped all over the place so Istanbul can understand whether every one of its parts is reached by your code.
Not only is the required file tainted; your running test is too.
However, while it's easy to code cover the running test file, mocked classes and their code might never be executed.
todoMock.expects('save')
According to the Sinon documentation:
Overrides todo.save with a mock function and returns it.
If Istanbul tainted the real save method, nothing within that scope will ever be reached, so you are actually testing that the mock works, not that your real code does.
This should answer your question: will code coverage work for unit tests or integration tests?
The answer is that it covers the code, which is the only thing you're interested in from a code coverage perspective. Covering Sinon JS is nobody's goal.
No need to assert ... but
Once you've understood how Istanbul works, it follows naturally that it doesn't matter whether you assert or not; all that matters is that you reach the code for real and execute it.
Asserting is just your guard against failures, not a mechanism interesting per se to Istanbul. When your assertion fails, your test does too, so it's good for you to know that things didn't work, and there's no need to keep testing the rest of the code (early failure, faster fixes).
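To make that concrete (sum is a hypothetical function under test):
const assert = require('assert');
const sum = (a, b) => a + b; // hypothetical function under test

// coverage-wise these two lines are equivalent: both execute sum() fully
const result = sum(1, 2);          // reached and covered, but unguarded
assert.strictEqual(sum(1, 2), 3);  // reached, covered, and guarded against regressions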
Whether you have to mock the db.connectDatabase
Yes, at least for the code you posted. You can assign db as a generic object mock to the global context and expect methods to be called, but you can also simplify your life by writing this:
function createDB(err1, err2) {
return {
connectDatabase(aa, callback) {
callback(err1, {
model(name, value) {
return {
findOneAndUpdate($0, $1, $2, fn) {
fn(err2, {any: 'object'});
}
};
}
});
}
};
}
global.db = createDB(null, null);
This code in your test file can be used to create a global db that behaves differently according to the errors you pass along, giving you the ability to run the same test file multiple times with different expectations.
How to run the same test more than once
Once your test is completed, delete require.cache[require.resolve('../test/file')] and then require('../test/file') again.
Do this as many times as you need.
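A sketch of that re-run loop, building on the createDB helper above (the spec path is illustrative):
const specPath = require.resolve('../test/file'); // illustrative path

// first pass: no errors anywhere, covers the happy path
global.db = createDB(null, null);
require(specPath);

// second pass: connectDatabase fails, covering the error branch
delete require.cache[specPath]; // forget the cached module so it re-executes
global.db = createDB(new Error('connect failed'), null);
require(specPath);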
When there is conditional feature detection
I usually run the test multiple times, deleting global constructors in case these are patched with a fallback. I also usually store them so I can put them back later on.
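A sketch of that store/delete/restore dance (Promise and the module path are just illustrative):
const OriginalPromise = global.Promise;              // store it first
delete global.Promise;                               // force the module down its fallback path
delete require.cache[require.resolve('../src/lib')]; // illustrative module path
require('../src/lib');                               // re-evaluate without the global
global.Promise = OriginalPromise;                    // put it back for the rest of the suite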
When the code is obvious but shouldn't be reached
In case you have if (err) process.exit(1); you rarely want to reach that part of the code. There are various comments understood by Istanbul that help you skip parts of the test, like /* istanbul ignore if */ or /* istanbul ignore else */, or even the generic /* istanbul ignore next */.
Please consider thinking twice about whether it's just you being lazy or whether that part can really, safely, be skipped. I got bitten a couple of times by a badly handled error, which is a disaster, since the moment it happens is exactly when you most need your code to keep running and/or giving you all the info you need.
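For example (the error branch here is illustrative):
/* istanbul ignore if */
if (err) {
  // deliberately unreached in tests: a fatal startup error
  process.exit(1);
}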
What is being covered?
Maybe you know this already, but the coverage/lcov-report/index.html file, which you can open right away in any browser, will show you all the parts that aren't covered by your tests.
Over the past couple of months I've used Puppeteer to drive automation for a couple of small projects. Now I want to scale the framework for a medium/large complex application.
I want to use the famed Page Object Model, wherein the locators and page methods live in separate files and I call them from the corresponding page execution code.
My directory structure is like this
e2e_tests
- locators
- common-locators.js
- page1locators.js
- page2locators.js
- constants
- config.js
- utils
- base_functions.js
- page1methods.js
- page2methods.js
- urls
- urls.json
- screenshots
- test
- bootstrap.js
- page1.js
- page2.js
The problem I'm facing right now is that I am not able to get the page to initialise in the method body for that particular page.
For example, if I have an input box on page1, I want to define a method inside utils/page1methods.js which can take care of this, something like:
module.exports = {
fillFirstInputBox(){
await page.type(locator, "ABCDEFG");
}
}
And then I want to call this inside the page1.js it block, something like this:
const firstPage = require('../utils/page1methods.js').
.
.
.
it('fills first input box', async function (){
firstPage.fillFirstInputBox();
});
I've tried this approach and ran into all kinds of .js errors about page not being defined in the page1methods.js file. I can copy-paste the errors if that's necessary.
What can I do so that:
1. I am able to achieve this kind of modularisation?
2. If I can improve on this structure, what should my approach be?
You can export an arrow function that returns the module's set of functions with the page variable in scope. Be sure to wrap the object literal in parentheses, or return it explicitly.
module.exports = (page) => ({ // <-- to have page in scope
async fillFirstInputBox(){ // <-- make this function async
await page.type(locator, "ABCDEFG");
}
})
And then pass the page variable in:
// make page variable
const firstPage = require('../utils/page1methods.js')(page)
That's it. Now all the functions have access to the page variable. There are other ways, like extending classes, binding page, etc., but as you can see this is the easiest; you can split it up if you need.
We are halfway there, though: that by itself won't solve the problem, since the module still won't work due to the async-await and class issues noted above.
Here is a full working example,
const puppeteer = require("puppeteer");
const extras = require("./dummy"); // the page-methods module ("./dummy" is sketched below)
puppeteer.launch().then(async browser => {
const page = await browser.newPage();
await page.goto("https://www.example.com");
const title = await extras(page).getTitle(); // use it here
console.log({ title }); // prints { title: 'Example Domain' }
await browser.close();
});
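For reference, a dummy.js matching that example could look like this under the same pattern (getTitle is assumed, since the example above calls it):
// dummy.js
module.exports = (page) => ({
  async getTitle() {
    return page.title(); // page stays in scope through the closure
  },
});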