I'm new to the frontend world and I would like to write some tests using protractor-image-comparison. I followed the installation instructions from https://github.com/wswebcreation/protractor-image-comparison and also configured it according to that page.
When I try to use functions from this lib I get the following error: "TypeError: Cannot read property 'checkFullPageScreen' of undefined". I'm also getting a warning in protractor.conf.js on this line:
const protractorImageComparison = require('protractor-image-comparison');
"Could not find a declaration file for module
'protractor-image-comparison'.
'/home/rafa/repos/example/src/example/node_modules/protractor-image-comparison/index.js'
implicitly has an 'any' type. Try npm install
@types/protractor-image-comparison if it exists or add a new
declaration (.d.ts) file containing declare module
'protractor-image-comparison';"
So I did: I made a simple *.d.ts file with declare module 'protractor-image-comparison'; in it, but it didn't solve the problem; only the warning disappeared. It's probably a config issue that I can't figure out, or maybe I made a wrong declaration. This is my config file:
// Protractor configuration file, see link for more information
// https://github.com/angular/protractor/blob/master/lib/config.ts
const reporter = require("cucumber-html-reporter");
const path = require("path");
const jsonReports = path.join(process.cwd(), "/reports/json");
const htmlReports = path.join(process.cwd(), "/reports/html");
const targetJson = jsonReports + "/cucumber_report.json";
const cucumberReporterOptions = {
  jsonFile: targetJson,
  output: htmlReports + "/cucumber_reporter.html",
  reportSuiteAsScenarios: true,
  theme: "bootstrap",
};
exports.config = {
  allScriptsTimeout: 110000,
  restartBrowserBetweenTests: true,
  //SELENIUM_PROMISE_MANAGER: false,
  specs: [
    './e2e/**/login.feature'
  ],
  capabilities: {
    'browserName': 'chrome'
  },
  directConnect: true,
  baseUrl: 'http://localhost:4200/',
  framework: 'custom',
  frameworkPath: require.resolve('protractor-cucumber-framework'),
  cucumberOpts: {
    format: "json:" + targetJson,
    require: ['./e2e/steps/*.ts', "./e2e/timeout.ts"],
  },
  useAllAngular2AppRoots: true,
  onPrepare: () => {
    browser.ignoreSynchronization = true;
    const protractorImageComparison = require('protractor-image-comparison');
    browser.protractorImageComparison = new protractorImageComparison({
      baselineFolder: "report/screens/baseline",
      screenshotPath: "report/screens/actual"
    });
  },
  beforeLaunch: function() {
    require('ts-node').register({
      project: 'e2e'
    });
  },
  onComplete: () => {
    reporter.generate(cucumberReporterOptions);
  }
};
Ok, I solved it. The reason I was getting this TypeError is that I launched several test scenarios and onPrepare only runs once, at the beginning. I moved the protractor-image-comparison setup into a Cucumber Before hook and everything works fine now.
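For anyone looking for the concrete change, the hook file looks roughly like this (a sketch: the exact import depends on your cucumber-js version, the file name is my choice, and it has to be picked up by cucumberOpts.require):
// e2e/steps/hooks.ts (hypothetical file name/location)
const { Before } = require('cucumber');
const protractorImageComparison = require('protractor-image-comparison');

Before(() => {
  // Runs before every scenario, so the instance is recreated even when
  // restartBrowserBetweenTests spins up a fresh browser.
  browser.protractorImageComparison = new protractorImageComparison({
    baselineFolder: 'report/screens/baseline',
    screenshotPath: 'report/screens/actual'
  });
});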
My problem is that, since I implemented the p-retry lib (it retries an API call as many times as you want), everything works fine on localhost:3000, but when I launch the tests I get the following output:
● Test suite failed to run
Jest encountered an unexpected token
Jest failed to parse a file. This happens e.g. when your code or its dependencies use non-standard JavaScript syntax, or when Jest is not configured to support such syntax.
Out of the box Jest supports Babel, which will be used to transform your files into valid JS based on your Babel configuration.
By default "node_modules" folder is ignored by transformers.
Here's what you can do:
• If you are trying to use ECMAScript Modules, see https://jestjs.io/docs/ecmascript-modules for how to enable it.
• If you are trying to use TypeScript, see https://jestjs.io/docs/getting-started#using-typescript
• To have some of your "node_modules" files transformed, you can specify a custom "transformIgnorePatterns" in your config.
• If you need a custom transformation specify a "transform" option in your config.
• If you simply want to mock your non-JS modules (e.g. binary assets) you can stub them out with the "moduleNameMapper" config option.
You'll find more details and examples of these config options in the docs:
https://jestjs.io/docs/configuration
For information about custom transformations, see:
https://jestjs.io/docs/code-transformation
Details:
/project/node_modules/p-retry/index.js:1
({"Object.<anonymous>":function(module,exports,require,__dirname,__filename,jest){import retry from 'retry';
^^^^^^
SyntaxError: Cannot use import statement outside a module
1 | import fetch from 'node-fetch';
> 2 | import pRetry, { AbortError } from 'p-retry';
| ^
3 |
4 | import HttpsProxyAgent from 'https-proxy-agent';
5 | const proxyAgent = process.env.HTTPS_PROXY
at Runtime.createScriptFromCode (node_modules/jest-runtime/build/index.js:1728:14)
at Object.<anonymous> (services/medVir/http.ts:2:1)
So I guess it's probably a config error; this is my jest.config.js:
const nextJest = require('next/jest');
const createJestConfig = nextJest({
  // Provide the path to your Next.js app to load next.config.js and .env.local files in your test environment
  dir: './',
});
// Add any custom config to be passed to Jest
const customJestConfig = {
  clearMocks: true,
  collectCoverage: true,
  coverageDirectory: 'coverage',
  coveragePathIgnorePatterns: [
    '/node_modules/',
    '__tests__/utils/',
    '/public/',
  ],
  moduleNameMapper: {
    '\\.(css|less)$': 'identity-obj-proxy',
    '\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$':
      'identity-obj-proxy',
    '^-!svg-react-loader.*$': '<rootDir>/config/jest/svgImportMock.js',
  },
  testEnvironment: 'jsdom',
  testMatch: [
    // "**/__tests__/**/*.[jt]s?(x)",
    '**/?(*.)+(spec|test).[tj]s?(x)',
  ],
  testPathIgnorePatterns: ['/node_modules/', '__tests__/utils/'],
  // transformIgnorePatterns: ['node_modules/(?!(p-retry)/)'],
  verbose: true,
  transform: {
    // Use babel-jest to transpile tests with the next/babel preset
    // https://jestjs.io/docs/configuration#transform-objectstring-pathtotransformer--pathtotransformer-object
    '^.+\\.(js|jsx|ts|tsx)$': [
      'babel-jest',
      {
        presets: [
          [
            '@babel/preset-env',
            {
              targets: {
                node: 'current',
              },
            },
          ],
          '@babel/preset-typescript',
          '@babel/preset-react',
        ],
      },
    ],
  },
  setupFiles: ['<rootDir>/.jest/setEnvVars.js'],
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
};
// createJestConfig is exported this way to ensure that next/jest can load the Next.js config which is async
// module.exports = customJestConfig;
module.exports = createJestConfig(customJestConfig);
I tried a lot of different configs and implementations, but nothing works... still the same error, so I'm wondering if the problem could be something else.
One thing is certain: since I switched from axios to node-fetch with p-retry (to handle requests and retries), my tests just stopped working.
I'm back with a solution for people running into the same problem as me. I didn't manage to fix the lib or find a Jest config that handles this odd behaviour, so I made my own function that does roughly the same thing:
it re-calls the same API X times if the timeout is reached.
Code:
const makeAPICall = async ({
  url,
  body = '',
  method = 'GET',
  type = 'TEXT',
}: IApi) => {
  // init path
  const path = new URL(url);
  const timeout = 28_000;
  let myInit = {
    method,
    timeout,
    headers: {
      Accept: 'application/json',
      'Content-Type': 'application/json',
    },
  };
  if (method !== 'GET') myInit = { ...myInit, ...{ body: body } };
  const res = await fetch(path.href, myInit);
  switch (type) {
    case 'TEXT':
      return res.text();
    default:
      return res.json();
  }
};
const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));
const makeApiRetry: any = async (args: IArgs, retries = 3, old_n = 0) => {
  let n = old_n;
  return makeAPICall(args).catch(async () => {
    if (n < retries) {
      n++;
      // console.log('Retrying request', n, `waiting ${1000 * n} `, args.url);
      await sleep(1000 * (n + 1));
      return makeApiRetry(args, retries, n);
    } else {
      return Promise.reject('Too many retries : error timeout');
    }
  });
};
// GET via node-fetch
export const getApiRoute = async (body: string) =>
  await makeApiRetry({
    url: `/apiRoute`,
    body,
    method: 'GET',
    type: 'JSON',
  });
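For completeness, this is roughly how the helper ends up being called (a hypothetical example; the URL is made up and IApi/IArgs are the argument interfaces assumed above):
// Hypothetical usage: retries up to 3 times, waiting a little longer between attempts,
// and rejects with 'Too many retries : error timeout' once every attempt has failed.
const loadItems = async () =>
  makeApiRetry({ url: 'https://example.com/api/items', method: 'GET', type: 'JSON' }, 3);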
I need a basic logging system for a NodeJS application, so I'm doing some tests with log4js, which seems to be the standard for these cases. I need to print the messages both to the console and a file, so I wrote this code:
// Logger
var log4js = require('log4js');
log4js.configure({
  appenders: {
    'console': { type: 'console' },
    'file': { type: 'file', filename: 'logs/mailer.log' }
  },
  categories: {
    default: { appenders: ['file', 'console'], level: 'DEBUG' },
  }
});
var logger = log4js.getLogger("Mailer");
logger.info("Starting application");
try {
  // CODE TO READ APP CONFIG FILE
}
catch(e) {
  logger.error("Couldn't read app configuration file (config.yml)");
  // This is the trouble maker. It kills the app without flushing the logs
  process.exit(1);
}
When I run the application, the logs appear like this:
[2019-07-29T16:07:24.763] [INFO] Mailer - Starting application
The problem is that I can see the messages in the console, but the log file remains empty. If I delete it and run the application again, it's created, so I suppose that the problem is a missing option or something like that.
Any suggestions?
To avoid changing the application logic and all the process.exit() calls, I've replaced the file appender with its synchronous version, fileSync, and now everything works as expected. So, the only thing I changed is the line where I declare the 'file' appender:
'file': { type: 'fileSync', filename: 'logs/mailer.log' }
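If you prefer to keep the asynchronous file appender, another option (just a sketch of an alternative, not what I ended up using) is to flush the appenders explicitly with log4js.shutdown() before exiting:
catch(e) {
  logger.error("Couldn't read app configuration file (config.yml)");
  // Flush all appenders first, then exit once their buffers have been written.
  log4js.shutdown(() => process.exit(1));
}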
Cheers!
You could use winston (exposed here as a global named loggerWinston)... I used it in my Node.js application and it works fine.
You should add it in your server.js (the file you use to start Node):
const winston = require('winston');
const tsFormat = () => (new Date()).toLocaleTimeString();
global.loggerWinston = new (winston.Logger)({
  transports: [
    // colorize the output to the console
    new (winston.transports.Console)({
      timestamp: tsFormat,
      colorize: true,
      level: 'debug'
    }),
    new (winston.transports.File)({
      filename: 'results.log',
      timestamp: tsFormat,
      // level: (process.env.NODE_ENV || 'development') === 'development' ? 'debug' : 'info'
      level: 'info'
    })
  ]
});
The results.log file will be created in your project folder.
This is the reference https://www.npmjs.com/package/winston
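Since the logger is attached to global, you can then log from anywhere in the application, for example:
// Anywhere after server.js has set up the logger:
loggerWinston.info('Starting application');
loggerWinston.error("Couldn't read app configuration file (config.yml)");
// With level: 'info' on the File transport, debug messages go to the console only.
loggerWinston.debug('Only visible in the console');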
I have one config file which runs tests in one browser using capabilities.
Now I have created a second, separate config file which contains multiCapabilities and runs the same tests in multiple browsers.
I want to optimize the configs, so in the second config file I write the multiCapabilities, require the first config, and use
delete firstConfig['capabilities'];
to drop the capabilities from the first config while reusing all of its other params, then run with the multiCapabilities from the second config (see the sketch below).
Expected result:
Params should not be duplicated in both configs; only multiCapabilities changes, the rest of the config stays the same.
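For clarity, the approach described above looks roughly like this (a sketch; the file names firstConf.js/secondConf.js and the browser list are illustrative):
// secondConf.js - reuse the first config, swap capabilities for multiCapabilities
var firstConfig = require('./firstConf.js').config;
delete firstConfig['capabilities'];            // drop the single-browser capabilities
firstConfig.multiCapabilities = [              // browsers here are just examples
  { browserName: 'chrome' },
  { browserName: 'firefox' }
];
exports.config = firstConfig;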
Use a base configuration file
Having a base configuration file and another file that extends from it might be a better approach. For this example, we will look at my base configuration file:
var env = require('./environment');
// This is the configuration for a smoke test for an Angular TypeScript application.
exports.config = {
  seleniumAddress: env.seleniumAddress,
  framework: 'jasmine',
  specs: [
    'ng2/async_spec.js'
  ],
  capabilities: env.capabilities,
  baseUrl: env.baseUrl,
  allScriptsTimeout: 120000,
  getPageTimeout: 120000,
  jasmineNodeOpts: {
    defaultTimeoutInterval: 120000
  }
};
Create a second config from the base config
From there, we did something similar to your question where we removed the capabilities and added multiCapabilities (https://github.com/angular/protractor/blob/master/spec/ciNg2Conf.js). In addition, since we were running on Sauce Labs, we also decided to increase our timeouts.
exports.config = require('./angular2Conf.js').config;
exports.config.sauceUser = process.env.SAUCE_USERNAME;
exports.config.sauceKey = process.env.SAUCE_ACCESS_KEY;
exports.config.seleniumAddress = undefined;
// TODO: add in firefox when issue #2784 is fixed
exports.config.multiCapabilities = [{
  'browserName': 'chrome',
  'tunnel-identifier': process.env.TRAVIS_JOB_NUMBER,
  'build': process.env.TRAVIS_BUILD_NUMBER,
  'name': 'Protractor suite tests',
  'version': '54',
  'selenium-version': '2.53.1',
  'chromedriver-version': '2.26',
  'platform': 'OS X 10.11'
}];
exports.config.capabilities = undefined;
exports.config.allScriptsTimeout = 120000;
exports.config.getPageTimeout = 120000;
exports.config.jasmineNodeOpts.defaultTimeoutInterval = 120000;
I hope that helps.
Update:
Per comments below, setting the config.capabilities to undefined did not work; however, setting config.capabilities to false did work.
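In other words, the working override in the second config ends up being:
// Use false rather than undefined so the inherited single-browser capabilities are ignored.
exports.config.capabilities = false;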
Prepare a capabilities provider that defines the various capabilities and exports a function returning a capabilities array according to the command-line params.
// capabilities.provider.js
var capabilities = {
  chrome: {
    browserName: 'chrome'
  },
  'chrome-headless': {
    browserName: 'chrome',
  },
  firefox: {
    browserName: 'firefox'
  },
  ...
};
exports.evaluate = function() {
  var caps = 'chrome';
  process.argv.slice(3).forEach(function(kvp) {
    if (kvp.includes('--caps=')) {
      caps = kvp.split('=')[1] || caps;
    }
  });
  var _caps = [];
  caps.split(',').forEach(function(cap) {
    if (Object.keys(capabilities).includes(cap)) {
      _caps.push(capabilities[cap]);
    }
  });
  return _caps;
};
protractor config.js
var capsProvider = require('./capabilities.provider');
exports.config = {
  seleniumAddress: '',
  framework: 'jasmine',
  specs: [
    'ng2/async_spec.js'
  ],
  params: {
  },
  multiCapabilities: capsProvider.evaluate(),
  baseUrl: env.baseUrl,
  allScriptsTimeout: 120000,
  getPageTimeout: 120000,
  jasmineNodeOpts: {
    defaultTimeoutInterval: 120000
  }
};
Specify caps from cmd line:
protractor config.js --caps=chrome,firefox,ie,safari
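With that command line, evaluate() returns an array along these lines, which Protractor then uses as multiCapabilities (names not defined in the provider map are simply skipped):
// Result of capsProvider.evaluate() for --caps=chrome,firefox,ie,safari,
// assuming only chrome, chrome-headless and firefox are defined in the provider.
[
  { browserName: 'chrome' },
  { browserName: 'firefox' }
]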
I have created a TypeScript service named hello-world.ts in my Angular 1.X application. It looks like this:
export default class HelloWorld {
  hello() {
    console.log("hello world");
  }
}
I am importing it using the following:
import HelloWorld from './hello-world';
angular.module('exac.helloWorld', []).service('HelloWorld', HelloWorld);
When I pass the imported HelloWorld to console.log() in the above code snippet, I get an empty Object when I should be expecting a function.
Below is my relevant gulpfile config:
// cache, packageCache, fullPaths are necessary for watchify
var browserifyArgs = {
  cache: {},
  packageCache: {},
  // supposedly must be true for watchify, but false seems to work:
  fullPaths: false,
  basedir: 'app/source',
  // generate source maps
  debug: true,
};
var bundler = browserify('./index.js', browserifyArgs);
// TypeScript support
bundler.plugin(tsify, {noImplicitAny: true});
bundler.transform('babelify', {
extensions: ['.js', '.ts'],
});
bundler.transform('browserify-ngannotate');
function bundle() {
  // If we process.cwd('app') here this will generate perfect source maps
  // even with includeContent: false; unfortunately, that affects the cwd for
  // all tasks in the gulpfile.
  return bundler.bundle()
    // log errors if they happen
    .on('error', console.log.bind(gutil, 'Browserify Error'))
    .pipe(source('index.js'))
    .pipe(buffer())
    .pipe(sourcemaps.init({
      loadMaps: true
    }))
    .pipe(sourcemaps.write('./', {
      includeContent: true,
      sourceRoot: '..'
    }))
    .pipe(gulp.dest('app/js/'));
}
gulp.task('js', bundle);
I have also tried the following:
// cache, packageCache, fullPaths are necessary for watchify
var browserifyArgs = {
  cache: {},
  packageCache: {},
  // supposedly must be true for watchify, but false seems to work:
  fullPaths: false,
  extensions: ['.js', '.ts'],
  basedir: 'app/source',
  // generate source maps
  debug: true,
  transform: [
    'babelify',
    ['browserify-ngannotate'],
  ]
};
var bundler = browserify('./index.js', browserifyArgs).plugin(tsify);
function bundle() {
...
Gulp watchify completes with no errors, but my browser generates the error angular.js:13920 Error: [ng:areq] Argument 'fn' is not a function, got Object.
As I wrote before, when I import the HelloWorld class from hello-world.ts, I am not seeing the expected value. Instead, I get an empty object. What is going on?
EDIT:
Here is the source code generated by browserify / tsify. Perhaps this holds a clue as to what may be wrong in my gulp / browserify / tsify environment?
System.register([], function (exports_1, context_1) {
  "use strict";
  var __moduleName = context_1 && context_1.id;
  var HelloWorld;
  return {
    setters: [],
    execute: function execute() {
      HelloWorld = function () {
        function HelloWorld() {}
        HelloWorld.prototype.hello = function () {
          console.log("hello world");
        };
        return HelloWorld;
      }();
      exports_1("default", HelloWorld);
    }
  };
});
Here's the library:
//library.js
var exports = module.exports = {};
exports.login = function(user_login, user_password) {
  var input;
  input = element(by.model('loginInfo.login'));
  input.sendKeys(user_login);
  expect(input.getAttribute('value')).toBe(user_login);
  input = element(by.model('loginInfo.password'));
  input.sendKeys(user_password);
  expect(input.getAttribute('value')).toBe(user_password);
  browser.sleep(1000);
  browser.driver.actions().sendKeys(protractor.Key.ENTER).perform();
  browser.sleep(1000);
};
And this is my config file:
//config.js
var lib = require("./library.js");
exports.config = {
  directConnect: true,
  onPrepare: function() {
    browser.driver.manage().window().maximize();
  },
  // Capabilities to be passed to the webdriver instance.
  capabilities: {
    'browserName': 'chrome'
  },
  // Framework to use. Jasmine is recommended.
  framework: 'jasmine',
  // Spec patterns are relative to the current working directory when
  // protractor is called.
  specs: ['messages.js'],
  // Options to be passed to Jasmine.
  jasmineNodeOpts: {
    defaultTimeoutInterval: 50000
  }
};
And here's how I'm calling the login fn in the messages.js file:
lib.login('xxx', 'yyyyy');
However, this last line above is giving me an error: 'lib is not defined'
It looks like you are trying to run a protractor test from your library.js file.
Instead of doing that, follow the guidelines at http://www.protractortest.org/#/. That is, the config.js file is for configuring the environment and the spec.js file is for testing. As such, try this instead:
/*
 * library-spec.js
 */
var input;
// Credentials used by the spec (the values are taken from the question).
var user_login = 'xxx';
var user_password = 'yyyyy';
describe('Login Test', function() {
  it('should enter login information and send the Enter key to login', function() {
    input = element(by.model('loginInfo.login'));
    input.sendKeys(user_login);
    expect(input.getAttribute('value')).toBe(user_login);
    input = element(by.model('loginInfo.password'));
    input.sendKeys(user_password);
    expect(input.getAttribute('value')).toBe(user_password);
    browser.sleep(1000);
    browser.driver.actions().sendKeys(protractor.Key.ENTER).perform();
    browser.sleep(1000);
  });
});
And the config file will look like:
//config.js
exports.config = {
  directConnect: true,
  onPrepare: function() {
    browser.driver.manage().window().maximize();
  },
  // Capabilities to be passed to the webdriver instance.
  capabilities: {
    'browserName': 'chrome'
  },
  // Framework to use. Jasmine is recommended.
  framework: 'jasmine',
  // Spec patterns are relative to the current working directory when
  // protractor is called.
  specs: ['library-spec.js'],
  // Options to be passed to Jasmine.
  jasmineNodeOpts: {
    defaultTimeoutInterval: 50000
  }
};
However, if you need that library.js code to run before each or before all of your tests, use it from your messages.js file.
Within your describe block in messages.js you would add:
beforeEach(function() {
  lib.login(username, password); // where username and password are string vars
});
or
beforeAll(function() {
  lib.login(username, password); // where username and password are string vars
});
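In both cases, messages.js also needs to require the library at the top of the file; the 'lib is not defined' error from the question is exactly what happens when that require is missing:
// At the top of messages.js
var lib = require('./library.js');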
And, as a final note, if you leave your library.js file as is, here is some cleanup:
//library.js
module.exports = login;
function login(user_login, user_password) {
  var input;
  input = element(by.model('loginInfo.login'));
  input.sendKeys(user_login);
  expect(input.getAttribute('value')).toBe(user_login);
  input = element(by.model('loginInfo.password'));
  input.sendKeys(user_password);
  expect(input.getAttribute('value')).toBe(user_password);
  browser.sleep(1000);
  browser.driver.actions().sendKeys(protractor.Key.ENTER).perform();
  browser.sleep(1000);
}
Note how the module.exports line replaces the line that you had. Also, I've changed exports.login to function login. Then you would...
var login = require('./library');
login('user', 'pass');
where it will be needed.