Goal: Use Playwright for testing without having to log in multiple times, as per the documentation here.
Problem: Playwright tests are not picking up the session token from Next-Auth, so I'm not able to reuse an authenticated state across multiple tests.
My understanding is that Next-Auth checks for next-auth.session-token to validate the auth status. However, following the above docs, I've noticed that the session token never gets stored in storageState.json. If I manually add the session token to the JSON, then the tests work as expected.
Here is a very barebones repo I've set up with just a login button and one test.
Here's the typical [...nextAuth].js
import NextAuth from "next-auth";
import GithubProvider from "next-auth/providers/github";
export const authOptions = {
providers: [
GithubProvider({
clientId: process.env.GITHUB_ID,
clientSecret: process.env.GITHUB_SECRET,
}),
],
secret: process.env.NEXTAUTH_SECRET,
debug: true,
};
export default NextAuth(authOptions);
Here's the global-setup.js for Playwright with email/password swapped out
const { chromium } = require('@playwright/test');
module.exports = async () => {
const browser = await chromium.launch({
headless: false,
});
const page = await browser.newPage();
await page.goto('http://localhost:3000/');
await page.getByRole('button', { name: 'Sign in' }).click();
await page.getByRole('button', { name: 'Sign in with GitHub' }).click();
await page
.getByLabel('Username or email address')
.fill('email');
await page.getByLabel('Password').fill('password');
await page.getByRole('button', { name: 'Sign in' }).click();
await page.context().storageState({ path: 'storageState.json' });
await browser.close();
};
Here's the e2e test I'm running
const { test, expect } = require("@playwright/test");
test.use({
storageState: "storageState.json",
});
test("sign in through github", async ({ page }) => {
await page.goto("http://localhost:3000");
await expect(page.getByText("Signed in as")).not.toBeEmpty();
});
I expect the auth state to be stored in storageState.json by the code in global-setup.js. It does indeed store information there, but it's missing the next-auth.session-token for some reason, which causes the tests I'm running to fail.
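For reference, the manual workaround boils down to appending a cookie entry for next-auth.session-token to storageState.json, for example with a small Node script like this sketch (the token value is a placeholder that has to be copied from a real signed-in browser session, and the domain assumes localhost):

const fs = require('fs');

// Sketch only: inject a next-auth.session-token cookie into Playwright's storageState.json.
// The token value below is a placeholder; copy a real one from a signed-in browser session.
const storageState = JSON.parse(fs.readFileSync('storageState.json', 'utf8'));

storageState.cookies.push({
  name: 'next-auth.session-token',
  value: '<valid-session-token-goes-here>', // placeholder
  domain: 'localhost',
  path: '/',
  expires: -1, // session cookie
  httpOnly: true,
  secure: false,
  sameSite: 'Lax',
});

fs.writeFileSync('storageState.json', JSON.stringify(storageState, null, 2));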
Related
I'm trying to use Playwright to automate authentication in my web application.
When I did the authentication test in a typical .spec.ts file, it succeeded:
test('bnblnlnnl', async ({ page }) => {
await page.goto('/');
await page.getByTestId('auth-github-auth-button').click();
await page.getByLabel('Username or email address').fill('automations@blabla');
await page.getByLabel('Password').fill('sdfgsdgsdfgfgf');
await page.getByRole('button', { name: 'Sign in' }).click();
const authorizeElement = page.getByRole('button', { name: 'Authorize blabla' });
const shouldAuthorize = await authorizeElement.isVisible();
if (shouldAuthorize) {
await authorizeElement.click();
}
const navElement = page.getByTestId('nav');
await expect(navElement).toBeVisible();
await expect(page).toHaveURL('/');
});
So this test successfully completes. Then, according to this documentation: https://playwright.dev/docs/auth
I can authenticate up front in the global setup script, instead of authenticating before each test block. To do so, I have this script as my global setup file:
import { chromium } from '@playwright/test';
const globalSetup = async () => {
const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto('http://localhost:8080/');
await page.getByTestId('auth-github-auth-button').click();
await page.getByLabel('Username or email address').fill('gfsdagdf');
await page.getByLabel('Password').fill('sadfsdfsdfs');
await page.getByRole('button', { name: 'Sign in' }).click();
const authorizeElement = page.getByRole('button', { name: 'Authorize dfssd' });
const shouldAuthorize = await authorizeElement.isVisible();
if (shouldAuthorize) {
await authorizeElement.click();
}
await page.context().storageState({ path: 'storageState.json' });
await browser.close();
};
export default globalSetup;
But when I run playwright test I get a timeout from this statement: await page.getByTestId('auth-github-auth-button').click();.
The error message:
{
"name": "TimeoutError"
}
So I checked during the test process: I browsed to http://localhost:8080 and saw that my web app is running and the element with test id auth-github-auth-button is present, including its data-test-id attribute. So why does Playwright fail to locate it?
This is my playwright.config.ts file:
import { defineConfig } from '@playwright/test';
const configuration = defineConfig({
testDir: './tests',
testIgnore: 'scripts',
globalSetup: './tests/scripts/global-setup.ts',
globalTeardown: './tests/scripts/global-teardown.ts',
reporter: [['html', { open: 'never' }]],
use: {
testIdAttribute: 'data-test-id',
baseURL: 'http://localhost:8080',
storageState: 'storageState.json',
},
});
export default configuration;
As you noted in your answer, the issue was that the config doesn’t affect the global setup, and so Playwright tried to use the default data-testid attribute instead of your custom attribute.
While one solution would be to switch to using data-testid attributes to match the default, I wanted to offer an alternative that keeps your custom attribute. According to the Playwright docs on setting a custom test id attribute, “you can configure it in your test config or by calling selectors.setTestIdAttribute().” While the config option won’t automatically apply to the global setup, as you mentioned in your answer, you should be able to read it from the config object passed into your setup and call selectors.setTestIdAttribute() so your custom attribute works as expected.
So this suggested change to the top of your setup file should theoretically make it work as you expected:
import { chromium, selectors, FullConfig } from '@playwright/test';
const globalSetup = async (config: FullConfig) => {
const { testIdAttribute } = config.projects[0].use;
selectors.setTestIdAttribute(testIdAttribute);
const browser = await chromium.launch();
See the docs about global setup for their example of using the config object inside the setup to reuse values. Theirs uses baseURL and storageState, which you may find value in as well.
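For instance, here is a sketch of a global setup that reads baseURL, storageState, and testIdAttribute from the config shown above (the undefined-check on testIdAttribute is just defensive, and the sign-in steps are elided):

import { chromium, selectors, FullConfig } from '@playwright/test';

const globalSetup = async (config: FullConfig) => {
  // Reuse the values defined once in playwright.config.ts.
  const { baseURL, storageState, testIdAttribute } = config.projects[0].use;
  if (testIdAttribute) {
    selectors.setTestIdAttribute(testIdAttribute);
  }
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(baseURL ?? 'http://localhost:8080/');
  // ...perform the GitHub sign-in steps here, as in your current setup...
  await page.context().storageState({ path: storageState as string });
  await browser.close();
};

export default globalSetup;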
Hope that helps!
The issue is that I'm using data-test-id, but in the global setup script only the default data-testid will work, since the config isn't applied there. Changing all my attributes to data-testid solved it.
I'm super new to coding and building a script that navigates to a page and takes a screenshot of it.
So far, I've figured out just from googling how to log in if the credentials are hard-coded in the Puppeteer script. However, I want more than one person to be able to use this with their own credentials without having to go into the code and change it, so I decided to make a prompt.
I found some basic information and code on Google that prompts for a user ID and password in the command line and put that into my code. It asks for a username and password and then just saves it.
I need help figuring out how to take the prompted user ID and password and have Puppeteer enter them into a website login. Eventually I'd like to generate a popup prompt that allows the user to input the ID and password, if possible. Any ideas?
Here's my code:
const fs = require("fs");
const puppeteer = require("puppeteer");
async function captureScreenshot() {
// creates a screenshot directory
if (!fs.existsSync("screenshots")) {
fs.mkdirSync("screenshots");
}
let browser = null;
try {
// launch headless Chromium browser
browser = await puppeteer.launch({ headless: true });
// new tab/page
const page = await browser.newPage();
//viewport width and height
await page.setViewport({ width: 1440, height: 1080 });
await page.goto("https://users.nexusmods.com/");
//Prompt for User ID/Password:
var prompt = require('prompt');
prompt.start();
prompt.get([{
name: 'username',
required: true
}, {
name: 'password',
hidden: true,
conform: function (value) {
return true;
}
}, {
name: 'passwordMasked',
hidden: true,
replace: '*',
conform: function (value) {
return true;
}
}], function (err, result) {
//
// Log the results.
//
console.log('Command-line input received:');
console.log(' username: ' + result.username);
console.log(' password: ' + result.password);
console.log(' passwordMasked: ' + result.passwordMasked);
});
// login
await page.waitForSelector("input[name='user[login]']")
await page.type("input[name='user[login]']", 'username')
await page.waitForSelector("input[name='user[password]']")
await page.type("input[name='user[password]']", 'password')
await page.click("input[type='submit']")
await page.waitForSelector("input[name='real_name']")
await page.click('a.d-none.d-md-flex')
await page.waitForTimeout(3000)
// capture screenshot and store it into screenshots directory.
await page.screenshot({ path: `screenshots/SEG_Report.jpeg` });
} catch (err) {
console.log(`❌ Error: ${err.message}`);
} finally {
await browser.close();
console.log(`\n SEG Report Captured and Saved!`);
}
}
captureScreenshot();
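One possible way to wire the prompted values into the login, sketched here under the assumption that the same prompt package and selectors are used, is to collect the credentials first (for example by wrapping prompt.get in a Promise) and then pass the result into page.type instead of the literal strings:

const puppeteer = require("puppeteer");
const prompt = require("prompt");

// Wrap the callback-based prompt.get in a Promise so it can be awaited.
function getCredentials() {
  return new Promise((resolve, reject) => {
    prompt.start();
    prompt.get(
      [
        { name: "username", required: true },
        { name: "password", hidden: true, replace: "*" },
      ],
      (err, result) => (err ? reject(err) : resolve(result))
    );
  });
}

(async () => {
  const { username, password } = await getCredentials();
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto("https://users.nexusmods.com/");
  await page.waitForSelector("input[name='user[login]']");
  await page.type("input[name='user[login]']", username);
  await page.type("input[name='user[password]']", password);
  await page.click("input[type='submit']");
  await browser.close();
})();

The same idea carries over to a GUI prompt later: whatever collects the credentials just has to resolve before the page.type calls run.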
I just started coding, and I was wondering if there was a way to open multiple tabs concurrently with one another. Currently, my code goes something like this:
const puppeteer = require("puppeteer");
const rand_url = "https://www.google.com";
async function initBrowser() {
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();
await page.goto(rand_url);
await page.setViewport({
width: 1200,
height: 800,
});
return page;
}
async function login(page) {
await page.goto("https://www.google.com");
await page.waitFor(100);
await page.type("input[id ='user_login'", "xxx");
await page.waitFor(100);
await page.type("input[id ='user_password'", "xxx");
}
This is not my exact code (I've replaced things with different aliases), but you get the idea. I was wondering whether anyone out there knows how to open multiple instances of this same browser flow, replacing only the respective login info. Of course, it would be great to prevent my IP from getting banned too, so if there were a way to apply a proxy to each respective "browser"/instance, that would be perfect.
Lastly, I would like to know whether Playwright or Puppeteer is superior in the way it handles these multiple instances. I don't even know if this is a possibility, but please enlighten me. I want to learn more.
You can use multiple browser windows with different logins/cookies.
For simplicity, you can use the puppeteer-cluster module by Thomas Dondorf.
This module launches and queues your Puppeteer tasks so you can use it to automate your logins, and even save the login cookies for the next launches.
Feel free to go to the Github: https://github.com/thomasdondorf/puppeteer-cluster
const { Cluster } = require('puppeteer-cluster');
(async () => {
const cluster = await Cluster.launch({
concurrency: Cluster.CONCURRENCY_CONTEXT,
maxConcurrency: 2, // <= the number of parallel tasks running simultaneously
});
// You could instead set maxConcurrency to the number of CPUs, for example:
const cpuNumber = require('os').cpus().length;
await cluster.task(async ({ page, data: [username, password] }) => {
await page.goto('https://www.example.com')
await page.waitForTimeout(100)
await page.type('input[id="user_login"]', username)
await page.waitForTimeout(100)
await page.type('input[id="user_password"]', password)
const screen = await page.screenshot()
// Store screenshot, Save Cookies, do something else
});
cluster.queue(['myFirstUsername', 'PassW0Rd1'])
cluster.queue(['anotherUsername', 'Secr3tAgent!'])
// cluster.queue([username, password])
// username and password array passed into cluster task function
// many more pages/account
await cluster.idle()
await cluster.close()
})()
For Playwright, which is sadly still unsupported by the module above, you can use a browser pool (cluster) module to automate the Playwright launches.
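Alternatively, without any extra module, Playwright can run several isolated browser contexts in parallel, each with its own login; a rough sketch (the URL, selectors, and credentials are placeholders):

const { chromium } = require('playwright');

const accounts = [
  { username: 'myFirstUsername', password: 'PassW0Rd1' },
  { username: 'anotherUsername', password: 'Secr3tAgent!' },
];

(async () => {
  const browser = await chromium.launch();
  await Promise.all(
    accounts.map(async ({ username, password }) => {
      // Each context has its own cookies/storage, like a separate profile.
      const context = await browser.newContext();
      const page = await context.newPage();
      await page.goto('https://www.example.com');
      await page.fill('input#user_login', username);
      await page.fill('input#user_password', password);
      // ...submit the form, take screenshots, save context.storageState(), etc.
      await context.close();
    })
  );
  await browser.close();
})();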
And for proxy usage, I recommend the Puppeteer library as the legendary one.
Don't forget to choose my answer as the right one, if this helps you.
There are profile (persistent context) and proxy options; you could combine them to achieve your goal:
Profile, https://playwright.dev/docs/api/class-browsertype#browser-type-launch-persistent-context
import { chromium } from 'playwright'
const userDataDir = '/tmp/' + process.argv[2]
const browserContext = await chromium.launchPersistentContext(userDataDir)
// ...
Proxy, https://playwright.dev/docs/api/class-browsertype#browser-type-launch
import { chromium } from 'playwright'
const proxy = { /* secret */ }
const browser = await chromium.launch({
proxy: { server: 'per-context' } // dummy value; the real proxy is set per context below
})
const browserContext = await browser.newContext({
proxy: {
server: `http://${proxy.ip}:${proxy.port}`,
username: proxy.username,
password: proxy.password,
}
})
// ...
EDIT for Mission Clarity: In the end I am pulling inventory data and customer data from Postgres to render and send a bunch of PDFs to customers, once per month.
These PDFs are dynamic in that the cover page will have a varying customer name/address. The next page(s) are also dynamic, as they are lists of a particular customer's expiring inventory with item/expiring date/serial number.
I had made a client-side React page with print CSS to render some print-layout letters that could be printed off/saved as a pretty PDF.
Then the waterfall spec came in that this was to be an automated process on the server. Basically, the PDF needs to be attached to an email alerting customers of expiring product (we're in the medical industry, where everything needs to be audited).
I thought using Puppeteer would be a nice and easy switch. Just add a route that processes all customers, looking up whatever may be expiring, and then passing that into the dynamic react page to be rendered headless to a PDF file (and eventually finish the whole rest of the plan, sending email, etc.). Right now I just grab 10 customers and their expiring stock for PoC, then I have basically: { customer: {}, expiring: [] }.
I've attempted POSTing to the page with request interception, but I guess it makes sense that I cannot get the POST data in the browser.
So I switched my approach to using cookies. I would expect this to work, but I can never read the cookie(s) in the page.
Here is a simple route, a simple Puppeteer script that writes the cookies out to a JSON file and takes a screenshot just for proof, and a simple HTML page with a script that I'm using just to try to prove I can pass data along.
server/index.js:
app.get('/testing', async (req, res) => {
console.log('GET /testing');
res.sendFile(path.join(__dirname, 'scratch.html'));
});
scratch.js (run at the command line with node ./scratch.js):
const puppeteer = require('puppeteer')
const fs = require('fs');
const myCookies = [{name: 'customer', value: 'Frank'}, {name: 'expiring', value: JSON.stringify([{a: 1, b: 'three'}])}];
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('http://localhost:1234/testing', { waitUntil: 'networkidle2' });
await page.setCookie(...myCookies);
const cookies = await page.cookies();
const cookieJson = JSON.stringify(cookies);
// Writes expected cookies to file for sanity check.
fs.writeFileSync('scratch_cookies.json', cookieJson);
// FIXME: Cookies never get appended to page.
await page.screenshot({path: 'scratch_shot.png'});
await browser.close();
})();
server/scratch.html:
<html>
<body>
</body>
<script type='text/javascript'>
document.write('Cookie: ' + document.cookie);
</script>
</html>
The result is just a PNG with the word "Cookie:" on it. Any insight appreciated!
This is the actual route I'm using where makeExpiryLetter is utilizing puppeteer, but I can't seem to get it to actually read the customer and rows data.
app.get('/create-expiry-letter', async (req, res) => {
// Create PDF file using puppeteer to render React page w/ data.
// Store in Db.
// Email file.
// Send final count of letters sent back for notification in GUI.
const cc = await dbo.getConsignmentCustomers();
const result = await Promise.all(cc.rows.map(async x => {
// Get 0-60 day consignments by customer_id;
const { rows } = await dbo.getExpiry0to60(x.customer_id);
if (rows && rows.length > 0) {
const expiryLetter = await makeExpiryLetter(x, rows); // Uses puppeteer.
// TODO: Store in Db / Email file.
return true;
} else {
return false;
}
}));
res.json({ emails_sent: result.filter(x => x === true).length });
});
Thanks to the samples from @ggorlen I've made huge headway in using cookies. In my inline script in expiry.html I'm grabbing the values by wrapping my render function in a function main() and adding an onload to the body tag: <body onload='main()'>.
Inside the main function I can grab the values I needed:
const customer = JSON.parse(document.cookie.split('; ').find(row => row.startsWith('customer')).split('=')[1]);
const expiring = JSON.parse(document.cookie.split('; ').find(row => row.startsWith('expiring')).split('=')[1]);
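Put together, the relevant part of expiry.html looks roughly like this sketch (everything after the cookie parsing is only a placeholder for the real render logic):

<html>
  <body onload='main()'>
    <div id='root'></div>
    <script type='text/javascript'>
      function main() {
        // The cookies are set before navigation, so they are readable here.
        const customer = JSON.parse(document.cookie.split('; ').find(row => row.startsWith('customer')).split('=')[1]);
        const expiring = JSON.parse(document.cookie.split('; ').find(row => row.startsWith('expiring')).split('=')[1]);
        // customer drives the cover page; expiring drives the list pages.
        // Placeholder for the real render: just prove the data arrived.
        document.getElementById('root').textContent = 'Expiring items: ' + expiring.length;
      }
    </script>
  </body>
</html>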
FINALLY (and yes, of course this will all be used in an automated worker in the end) I can get my beautifully rendered PDF like so:
(async () => {
const browser = await puppeteer.launch();
const [page] = await browser.pages();
await page.setCookie(...myCookies);
await page.goto('http://localhost:1234/testing');
await page.pdf({ path: `scratch-expiry-letter.pdf`, format: 'letter' });
await browser.close();
})();
The problem is here:
await page.goto('http://localhost:1234/testing', { waitUntil: 'networkidle2' });
await page.setCookie(...myCookies);
The first line says, go to the page. Going to a page involves parsing the HTML and executing scripts, including your document.write('Cookie: ' + document.cookie); line in scratch.html, at which time there are no cookies on the page (assuming a clear browser cache).
After the page is loaded, await page.goto... returns and the line await page.setCookie(...myCookies); runs. This correctly sets your cookies and the remaining lines execute. const cookies = await page.cookies(); runs and pulls the newly-set cookies out and you write them to disk. await page.screenshot({path: 'scratch_shot.png'}); runs, taking a shot of the page without the DOM updated with the new cookies that were set after the initial document.write call.
You can fix this problem by turning your JS on the scratch.html page into a function that can be called after page load and cookies are set, or injecting such a function dynamically with Puppeteer using evaluate:
const puppeteer = require('puppeteer');
const myCookies = [
{name: 'customer', value: 'Frank'},
{name: 'expiring', value: JSON.stringify([{a: 1, b: 'three'}])}
];
(async () => {
const browser = await puppeteer.launch();
const [page] = await browser.pages();
await page.goto('http://localhost:1234/testing');
await page.setCookie(...myCookies);
// now that the cookies are ready, we can write to the document
await page.evaluate(() => document.write('Cookie' + document.cookie));
await page.screenshot({path: 'scratch_shot.png'});
await browser.close();
})();
A more general approach is to set the cookies before navigation. This way, the cookies will already exist when any scripts that might use them run.
const puppeteer = require('puppeteer');
const myCookies = [
{
name: 'expiring',
value: '[{"a":1,"b":"three"}]',
domain: 'localhost',
path: '/',
expires: -1,
size: 29,
httpOnly: false,
secure: false,
session: true,
sameParty: false,
sourceScheme: 'NonSecure',
sourcePort: 80
},
{
name: 'customer',
value: 'Frank',
domain: 'localhost',
path: '/',
expires: -1,
size: 13,
httpOnly: false,
secure: false,
session: true,
sameParty: false,
sourceScheme: 'NonSecure',
sourcePort: 80
}
];
(async () => {
const browser = await puppeteer.launch();
const [page] = await browser.pages();
await page.setCookie(...myCookies);
await page.goto('http://localhost:1234/testing');
await page.screenshot({path: 'scratch_shot.png'});
await browser.close();
})();
That said, I'm not sure if cookies are the easiest or best way to do what you're trying to do. Since you're serving HTML, you could pass the data along with it statically, expose a separate API route to collect a customer's data which the front end can use, or pass GET parameters, depending on the nature of the data and what you're ultimately trying to accomplish.
You could even have a file upload form on the React app, then have Puppeteer upload the JSON data into the app programmatically through that form.
In fact, if your final goal is to dynamically generate a PDF, using React and Puppeteer might be overkill, but I'm not sure I have a better solution to offer without some research and additional context about your use case.
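For example, here is a sketch of the GET-parameter idea mentioned above (the route name and port are made up): the data is serialized into the query string, and the page script can read it back with something like JSON.parse(new URLSearchParams(location.search).get('data')).

const puppeteer = require('puppeteer');

// Placeholder data shaped like { customer: {}, expiring: [] } from the question.
const data = { customer: { name: 'Frank' }, expiring: [{ a: 1, b: 'three' }] };

(async () => {
  const browser = await puppeteer.launch();
  const [page] = await browser.pages();
  // Pass the data in the query string instead of cookies.
  const qs = new URLSearchParams({ data: JSON.stringify(data) }).toString();
  await page.goto(`http://localhost:1234/testing?${qs}`);
  await page.pdf({ path: 'scratch-letter.pdf', format: 'letter' });
  await browser.close();
})();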
Currently I have my Puppeteer script running with a proxy on Heroku. Locally the proxy relay works totally fine; on Heroku, however, I get the error Error: net::ERR_TUNNEL_CONNECTION_FAILED. I've set all the .env info in the Heroku config vars, so they are all available.
Any idea how I can fix this error and resolve the issue?
I currently have
const browser = await puppeteer.launch({
args: [
"--proxy-server=https=myproxy:myproxyport",
"--no-sandbox",
'--disable-gpu',
"--disable-setuid-sandbox",
],
timeout: 0,
headless: true,
});
page.authenticate
The correct format for the proxy-server argument is:
--proxy-server=HOSTNAME:PORT
If it's an HTTPS proxy, you can pass the username and password using page.authenticate before even doing a navigation:
page.authenticate({username:'user', password:'password'});
The complete code would look like this:
const puppeteer = require('puppeteer');
(async () => {
const browser = await puppeteer.launch({
headless:false,
ignoreHTTPSErrors:true,
args: ['--no-sandbox','--proxy-server=HOSTNAME:PORT']
});
const page = await browser.newPage();
// Authenticate Here
await page.authenticate({ username: 'user', password: 'password' });
await page.goto('https://www.example.com/');
})();
Proxy Chain
If somehow the authentication does not work using the above method, you might want to handle the authentication somewhere else.
There are multiple packages to do that; one is proxy-chain. With it, you can take one proxy and use it to create a new proxy server.
proxyChain.anonymizeProxy(proxyUrl) will take a proxy with username and password and create a new local proxy URL which you can use in your script.
const puppeteer = require('puppeteer');
const proxyChain = require('proxy-chain');
(async() => {
const oldProxyUrl = 'http://username:password@hostname:8000';
const newProxyUrl = await proxyChain.anonymizeProxy(oldProxyUrl);
// Prints something like "http://127.0.0.1:12345"
console.log(newProxyUrl);
const browser = await puppeteer.launch({
args: [`--proxy-server=${newProxyUrl}`],
});
// Do your magic here...
const page = await browser.newPage();
await page.goto('https://www.example.com');
})();