Looking for a way to write custom Puppeteer commands (JavaScript)

Previously using Nightwatch.js I was able to create custom Nightwatch commands: https://github.com/nightwatchjs/nightwatch-docs/blob/master/guide/extending-nightwatch/custom-commands.md
I'm wondering if anything like this exists for Puppeteer. The closest thing I've seen is: "Is there a way to add script to add new functions in evaluate() context of chrome+puppeteer?"
But it's still far from what I want. I would like to be able to call page.commonAction(...) instead of
page.x();
page.y();
page.z();

You can always create your own script with functions you can call. For example, I have a myFunctions.js file in the same folder, which I require as
const mf = require('./myFunctions.js');
Inside the myFunctions.js I have, for example, this function:
// "found" and "notfound" are assumed to be console-colouring helpers defined elsewhere in myFunctions.js
async function verifyElementPresent(page, selector) {
  const verifySelector = await page.$(selector);
  if (verifySelector !== null) {
    console.log(found('> OK - Element present'));
  } else {
    console.log(notfound('>>> ERROR - Element not present: ' + selector));
  }
}

module.exports = { verifyElementPresent };
So now, in my Puppeteer script, all I have to do is write:
await mf.verifyElementPresent(page, '#headerTitle');
And it will print the result to the console.
Hope this helps.
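Another option, if you really want the page.commonAction(...) call style from the question, is to attach the helper directly to the page instance. This is a minimal sketch (the URL, selector, text and the commonAction name are all illustrative):
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // attach a custom "command" to this particular page instance
  page.commonAction = async (selector, text) => {
    await page.waitForSelector(selector);
    await page.click(selector);
    await page.type(selector, text);
  };

  await page.goto('https://www.example.com'); // illustrative URL
  await page.commonAction('#headerTitle', 'some text');

  await browser.close();
})();
Every helper added this way reads like a built-in page method, although it only exists on that particular page object.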

Related

Testing an immediately invoked function

I'm looking to create a test suite for a JavaScript file that contains an immediately invoked function, e.g.:
(function(context) {
  context.setVariable("a", context.getVariable("a") + 1);
})(context);
I am not able to change the file to be tested.
My current attempt is using jest and the following:
context = null;
test('test1', () => {
  context = new context_mock();
  context.setVariable("a", "1");
  require("./My-Javascript");
  // Check result
  helper.check(context.getVariable("a"), 2);
});
test('test2', () => {
  context = new context_mock();
  context.setVariable("a", "2");
  require("./My-Javascript");
  // Check result
  helper.check(context.getVariable("a"), 3);
});
In this case, test 2 always fails. I assume it's because the JavaScript file can't be required twice.
Edit: Yes, context is a global variable that is operated on when the script is required. I'm aware this is an ugly solution, but I'm unable to change the original file.
Any suggestions are appreciated.
Jest adds an extra level of module caching, which prevents the require from executing the file more than once.
Adding the following to clear the module cache before each test allows the script to run multiple times:
beforeEach(() => {
  jest.resetModules();
});
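An alternative, if you prefer to keep the reset local to each test instead of using a global beforeEach, is jest.isolateModules, which runs the require with a fresh module registry. A sketch, assuming the same context_mock and helper setup as in the question:
test('test2', () => {
  context = new context_mock();
  context.setVariable("a", "2");
  jest.isolateModules(() => {
    require("./My-Javascript"); // executed again, with a fresh module registry
  });
  // Check result
  helper.check(context.getVariable("a"), 3);
});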

Page Object Model structure for a complex application

Over the past couple of months I've used Puppeteer to drive automation for a couple of small projects. Now I want to scale the framework for a medium/large, complex application.
I want to use the famed Page Object Model, in which the locators and page methods live in separate files and are called from the corresponding page test code.
My directory structure is like this:
e2e_tests
- locators
  - common-locators.js
  - page1locators.js
  - page2locators.js
- constants
  - config.js
- utils
  - base_functions.js
  - page1methods.js
  - page2methods.js
- urls
  - urls.json
- screenshots
- test
  - bootstrap.js
  - page1.js
  - page2.js
The problem I'm facing right now is that I'm not able to get the page object to initialise inside the method bodies for a particular page.
For example, if I have an input box on page1, I want to define a method inside utils/page1methods.js that takes care of it, something like:
module.exports = {
  fillFirstInputBox(){
    await page.type(locator, "ABCDEFG");
  }
}
And then I want to call this inside the it block in page1.js, something like this:
const firstPage = require('../utils/page1methods.js');
...
it('fills first input box', async function (){
  firstPage.fillFirstInputBox();
});
I've tried this approach and ran into all kinds of JS errors about page not being defined in the page1methods.js file. I can copy and paste the errors if that's necessary.
What can I do so that:
1. I am able to achieve this kind of modularisation?
2. If I can improve on this structure, what should my approach be?
You can export an arrow function that returns the module/set of functions with the page variable in scope. Be sure to wrap the returned object in parentheses, or return it explicitly.
module.exports = (page) => ({ // <-- to have page in scope
  async fillFirstInputBox(){ // <-- make this function async
    await page.type(locator, "ABCDEFG");
  }
})
And then pass the page variable in when requiring:
// make page variable
const firstPage = require('../utils/page1methods.js')(page)
That's it. Now all of the functions have access to the page variable. There are other ways, such as extending classes or binding page, but as you can see this is the easiest. You can split it up if you need to.
We are halfway there; that by itself won't solve this problem. The module still won't work due to the async/await and class issues.
Here is a full working example:
const puppeteer = require("puppeteer");
const extras = require("./dummy"); // call it

puppeteer.launch().then(async browser => {
  const page = await browser.newPage();
  await page.goto("https://www.example.com");
  const title = await extras(page).getTitle(); // use it here
  console.log({ title }); // prints { title: 'Example Domain' }
  await browser.close();
});
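For completeness, the ./dummy module required above just needs to follow the same (page) => ({ ... }) pattern. A minimal sketch of what it might contain:
// dummy.js
module.exports = (page) => ({
  // every function here closes over the page argument passed in above
  async getTitle() {
    return page.title();
  }
});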

Postman: how to set up a library of (semi-)complicated reusable scripts for a collection

Update
I've completely rewritten this question based on subsequent investigation. Hopefully this will generate some answers.
I'm new to Postman and trying to figure out how to most efficiently build a collection of tests for a REST application. There are a bunch of utility functions that I'd like to have accessible in each of my test scripts, but cut-and-pasting them into each test script seems like a horrible solution.
In looking at the various "scopes" that Postman allows you to squirrel away data (e.g. globals, environment, collection), it seems that all of these are merely string/number stores. In other words, it properly stores them if you can/do stringify the results. But it doesn't actually allow you to store proper objects or functions. This makes sense, since each script seems to be run as a separate execution, so the idea of sharing pointers to things between different scripts doesn't make sense.
It seems like the accepted way to share utility functions is to toString() the function in the defining script (e.g. the Collection Pre-Req script), and then eval() that stringified version in the test script. For instance:
Collection Pre-Req Script
const utilFunc = () => { console.log("I am a utility function"); };
pm.environment.set("utilFunc",utilFunc.toString() );
Test Script
const utilFunc = eval(pm.environment.get("utilFunc"));
utilFunc();
The test script will successfully print to console "I am a utility function".
I've seen people do more complicated things where, if they have more than one utility function, they put them into an object (e.g. utils.func1 and utils.func2) and have the overall function return the utils object, so the test script still only needs a single line at the top to import the whole thing.
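A bundled version of the earlier snippet might look like this (a sketch with illustrative names):
Collection Pre-Req Script
const makeUtils = () => ({
  func1: () => console.log("I am utility function one"),
  func2: () => console.log("I am utility function two")
});
pm.environment.set("utils", makeUtils.toString());
Test Script
const utils = eval("(" + pm.environment.get("utils") + ")")();
utils.func1();
utils.func2();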
The problem I'm running into is scoping: since the literal text of the function is executed in the Test Script, everything the utility function needs must be contained in that code, or must otherwise exist at eval() time in the Test Script. For instance, if I do:
Collection Pre-Req Script
const baseUtilFunc = (foo) => { console.log(foo); };
const utilFunc1 = (param) => { baseUtilFunc("One: " + param); };
const utilFunc2 = (param) => { baseUtilFunc("Two: " + param); };
pm.environment.set("utilFunc1",utilFunc1.toString() );
pm.environment.set("utilFunc2",utilFunc2.toString() );
Test Script
const utilFunc1 = eval(pm.environment.get("utilFunc1"));
const utilFunc2 = eval(pm.environment.get("utilFunc2"));
utilFunc1("Test");
This fails because, in the Test Script, baseUtilFunc does not exist. Obviously, in this example, it'd be easy to fix. But in a more complicated world where the utility functions I expect to use in my Test Scripts are themselves built on top of underlying helper functions, it gets more difficult.
So what is the right way to handle this issue? Do people just cram all the relevant logic in to one big function that they then call toString() on? Do they embed an extraction-from-environment-and-then-eval in each util function within its definition, so that it works in the Test Script context? Do they export each individual method?
There are different ways to do it. The way I did it recently for one of my projects was to create a Git repository and then use its raw URL to fetch the code. I have a sample at the repo below:
https://github.com/tarunlalwani/postman-utils
To load the file, you will need to add the code below at the collection level:
if (typeof pmutil == "undefined") {
  var url = "https://raw.githubusercontent.com/tarunlalwani/postman-utils/master/pmutils.js";
  if (pm.globals.has("pmutiljs"))
    eval(pm.globals.get("pmutiljs"))
  else {
    console.log("pmutil not found. loading from " + url);
    pm.sendRequest(url, function (err, res) {
      eval(res.text());
      pm.globals.set('pmutiljs', res.text())
    });
  }
}
Then, later in your Tests or Pre-request scripts, run the line below to load it:
eval(pm.globals.get("pmutiljs"))
And then you can use the functions easily in your tests.
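If you would rather keep everything inside Postman instead of fetching it from GitHub, the bundling idea from the question can also solve the scoping problem, as long as any shared helpers are defined inside the factory function that gets stringified, so they are re-created when the test script evals and calls it. A sketch with illustrative names:
Collection Pre-Req Script
const loadUtils = () => {
  // private helper, re-created inside the factory every time it is called
  const baseUtilFunc = (foo) => { console.log(foo); };
  return {
    utilFunc1: (param) => baseUtilFunc("One: " + param),
    utilFunc2: (param) => baseUtilFunc("Two: " + param)
  };
};
pm.environment.set("loadUtils", loadUtils.toString());
Test Script
const utils = eval("(" + pm.environment.get("loadUtils") + ")")();
utils.utilFunc1("Test"); // works: baseUtilFunc lives inside the factory body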

Javascript require returns empty object

I am trying to use a library found on the web, called himalaya, which is an html parser.
https://github.com/andrejewski/himalaya
I followed their guide on importing the library, so I do
var himalaya = require('himalaya');
However when I call one of its member functions, I get an error
TypeError: himalaya.parse is not a function
I tried executing himalaya.parse() directly in the browser console, and it works. If I comment out the require statement in the JS file, the function no longer works in the browser console either.
I guess this implies the require statement works? But for some reason I cannot use it in my JavaScript file, only in the browser console.
Perhaps something with file scopes? Here is part of my code.
var himalaya = require('himalaya');

Template.main.onCreated(function () {
  var http = new HttpGet("www.someurl.com/", "/somedirectories/", function (response) {
    console.log(himalaya.parse(response.content));
  });
  http.sendRequest();
});
I am certain that response.content does contain a valid html string.
When you call himalaya.parse inside the main.onCreated function, it seems the library has not completely loaded at that point. That's why it only runs in your browser console. Check whether the himalaya library has an onReady function to let you know exactly when you can use it. If not, you can:
a) Call the parse function inside main.onRendered, or
b) Keep the parse call inside main.onCreated and set a timeout to call it after half a second, like this:
var himalaya = require('himalaya');

Template.main.onCreated(function () {
  var http = new HttpGet("www.someurl.com/", "/somedirectories/", function (response) {
    setTimeout(function(){ himalaya.parse(response.content) }, 500);
  });
  http.sendRequest();
});
If you have an issue with the setTimeout, check this answer:
Meteor.setTimeout function doesn't work

Running IAsyncOperation from a Windows Runtime Component using JavaScript

I have a solution that has both a Windows Runtime Component (C#) and a Universal App (JS).
One of my classes in the WRC has the following static function:
public static IAsyncOperation<Project> Import()
{
    return System.Threading.Tasks.Task.Run<Project>(async () =>
    {
        try
        {
            FileOpenPicker picker = new FileOpenPicker();
            picker.ViewMode = PickerViewMode.List;
            picker.SuggestedStartLocation = PickerLocationId.DocumentsLibrary;
            picker.FileTypeFilter.Add(".xml");
            StorageFile source = await picker.PickSingleFileAsync();
            if (source != null)
            {
                StorageFile destination = await ApplicationData.Current.RoamingFolder.CreateFileAsync(source.Name, CreationCollisionOption.ReplaceExisting);
                await source.MoveAndReplaceAsync(destination);
                return await Project.Open(source.DisplayName);
            }
            else
            {
                return null;
            }
        }
        catch (Exception)
        {
            return null;
        }
    }).AsAsyncOperation<Project>();
}
I am trying to call this function from JS using:
SignalOne.Data.Project.import().done(function () {
  new Windows.UI.Popups.MessageDialog("Done").showAsync();
});
However, while the "Done" message appears, the file open dialog does not. If I put a message box as the first line inside the try of the C#, it doesn't display, either.
I know I have an upper-case Import in C# and a lower-case import in JS, but that is how it comes up with Intellisense, and if I change it to upper-case in JS it crashes.
I'm sure I'm missing something small/stupid, but I can't put my finger on it.
Thanks.
As you know, if we want to expose an async method from a Windows Runtime Component, we should use the WindowsRuntimeSystemExtensions.AsAsyncAction or AsAsyncOperation extension method to wrap the task in the appropriate interface.
You can use .NET Framework tasks (the Task class and generic Task class) to implement your asynchronous method. You must return a task that represents an ongoing operation, such as a task that is returned from an asynchronous method written in C# or Visual Basic, or a task that is returned from the Task.Run method.
For more info, see Asynchronous operations.
Also, the FileOpenPicker.PickSingleFileAsync method should be run on the UI thread.
In this example, the event is being fired on the UI thread. If you fire the event from a background thread, for example in an async call, you will need to do some extra work in order for JavaScript to handle the event. For more information, see Raising Events in Windows Runtime Components.
So we should use the CoreWindow.GetForCurrentThread method to get the UI thread's dispatcher before the async task runs, since the async task itself does not run on the UI thread.
For example:
var window = Windows.UI.Core.CoreWindow.GetForCurrentThread();
var m_dispatcher = window.Dispatcher;
Then we should be able to call the FileOpenPicker.PickSingleFileAsync method inside the CoreDispatcher.RunAsync callback.
For example:
await m_dispatcher.RunAsync(CoreDispatcherPriority.Normal, new DispatchedHandler(async () =>
{
    var source = await picker.PickSingleFileAsync();
}));
