Load a subset of all custom commands in a nightwatch test suite - javascript

Is it possible to load only a subset of all custom commands in a nightwatch test suite?
E.g:
Test suites/test files:
component1Tests1.js
component1Tests2.js
component2Tests1.js
component2Tests2.js
Custom commands:
component1Commands.js
component2Commands.js
The component1TestsX files/tests should see only component1Commands; the situation for component2TestsX is analogous. This is needed because of potential naming collisions between the commands.
Thank you all in advance!

As far as I know, it is not possible, and this looks more like a naming problem.
How would someone else know what component1Commands.js does?
What if component1Commands.js already does the same thing that component2Commands.js does?
You would be breaking the DRY principle; the main idea of custom commands is to reuse code across multiple tests. You should name the commands properly. If the separation is really needed, it is better to split the tests into two different projects instead.
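For context, Nightwatch loads every file found under custom_commands_path globally, so all test files see all commands and there is no per-suite scoping. A rough nightwatch.conf.js excerpt (paths here are only illustrative):

module.exports = {
  src_folders: ['test'],
  // every command file in this folder is loaded for every test file
  custom_commands_path: ['./commands'],
  test_settings: {
    default: {}
  }
};

Because of this, command names must be unique across the whole project, which is why renaming the commands (or splitting into separate projects) is the usual way out.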

Related

Cypress: How to handle errors

I'm testing a lot of things, but some of them are not that important (like a caption text failing). I want to add an optional parameter so that if one of those checks is wrong, that's okay and testing continues.
I used to work with Katalon Studio, which has failure handling options (stop, fail, continue). Can I do the same with Cypress for my test cases?
As Mikkel already mentioned, Cypress doesn't really support optional testing. There is a way to do it with an if-statement, as explained in this question: In Cypress, is there a way to avoid a failure depending on a daily message?
But doing that for every check you want to make optional can pile up your code quickly. So if you don't care whether it succeeds or fails, just don't test it.
Another way to be more resilient is to cut the tests up further, but make sure the scenarios don't rely on each other, otherwise they will still fail.
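A rough sketch of that if-statement approach (the .caption selector is just a placeholder for whatever non-critical element you check):

// Only assert on the caption when it is actually present, so a missing
// caption does not fail the whole test.
cy.get('body').then(($body) => {
  if ($body.find('.caption').length > 0) {
    cy.get('.caption').should('contain.text', 'Expected caption');
  } else {
    cy.log('Caption not present - skipping the optional check');
  }
});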

Can I get performance testing in node.js like a unit testing

I need to do performance testing/tuning in Node.js, in CI and from the CLI, like a unit test (targeting function calls, not networking).
I use mocha's timeout() now:
var dictionary_handle;

// Fails if initializing the dictionary takes longer than 1 second
it("Dictionary.init_dictionary timeout", function (done) {
  dictionary_handle = Dictionary.init_dictionary(dictionary_data);
  done();
}).timeout(1000);

// Fails if Linad.initialize takes longer than 6 seconds
it("Linad.initialize timeout", function (done) {
  Linad.initialize(function (err) {
    done();
  });
}).timeout(6000);
But it is not enough. I need something that:
can be used in CI,
executes the function multiple times,
outputs performance metric information.
I believe you're looking for some form of microbenchmark module. There are a number of options and your requirements match all of them, so I cannot name a single best candidate; you will need to do your own investigation.
However, given that you have added the performance-testing tag, I can give you a generic piece of advice: when it comes to any form of performance testing, you need to make sure that your load test exactly mimics the real-life usage of your application under test.
If your application under test is a NodeJS-based web application, there are a lot of factors to consider apart from the performance of single functions. If that is the case, I would recommend a protocol-level load testing tool; if you want to stick to JavaScript you can use something like k6, or consider another standalone free/open-source load testing solution that can simulate real users closely enough with minimal effort on your side.
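For illustration, here is a minimal sketch using the benchmark package (just one of the options; targetFunction is a placeholder for whatever you actually want to measure, e.g. Dictionary.init_dictionary):

// npm install --save-dev benchmark
const Benchmark = require('benchmark');

// Placeholder for the function under test
function targetFunction() {
  JSON.parse(JSON.stringify({ a: 1, b: [1, 2, 3] }));
}

new Benchmark.Suite()
  .add('targetFunction', targetFunction)
  .on('cycle', function (event) {
    // Prints ops/sec and the relative margin of error for each run
    console.log(String(event.target));
  })
  .on('complete', function () {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  .run();

Running this from an npm script gives you repeated executions and a metric (ops/sec) you can track in CI.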
Dmitri T is correct, you need to be careful with what and how you test. That being said, https://github.com/anywhichway/benchtest requires almost no work to instrument existing unit tests, so it may be worth using.

Protractor Accessibility reporting

I am trying to use the accessibility plugin that comes with Protractor. From what I can see, it only checks the a11y of the last page I end up on.
Is there a way to have two test scripts executed one after another and produce separate reports, or put everything in one report but kept separate?
Example:
access.js
access1.js
Output file:
resultJsonOutputFile: 'result/result.json'
I tried it this way in conf.js:
specs: ['../test/access.js', '../test/access1.js'],
or
specs: ['../test/access*.js'],
but I still only get results for the last script executed.
I tried also creating suites:
suites: {
  homepage: '../test/homepage/access.js',
  catalogpage: '../test/catalogpage/access1.js'
},
but when I check the JSON file after running both scripts, the first one shows no issues and errors are only reported for the second script. However, if I run the first script alone, Protractor does report errors for it.
I also tried putting everything into one js file as different scenarios, but I still get the same issue.
With the current implementation, the accessibility plugin is set to run exactly once per invocation of the Protractor runner, on the last page. So unfortunately, no modification of the suites or test files will make it run more than once.
You can create separate configuration files for each set of test files you'd like to run, or use shardTestFiles to make sure that each file is run in its own process. See the referenceConf for more details on sharding.
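A minimal conf.js excerpt for the sharding approach (values here are only illustrative):

// conf.js - run each spec file in its own browser process
exports.config = {
  framework: 'jasmine',
  specs: ['../test/access*.js'],
  capabilities: {
    browserName: 'chrome',
    shardTestFiles: true,  // one process per spec file
    maxInstances: 2        // how many processes run in parallel
  }
};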
Alternatively, you could use aXe to do your accessibility testing. In order to use it with e2e tests in protractor and Webdriver, do the following:
npm install --save-dev axe-webdriverjs
Then in your e2e test files, you do:
var AxeBuilder = require('path_to_the/axe-webdriverjs');
to get hold of the AxeBuilder and then wherever you need to run a test, you:
AxeBuilder(browser.driver)
  .analyze(function (results) {
    expect(results.violations.length).toBe(0);
  });
The above example is using Jasmine but you can extrapolate for any other assertion library.
Also: there is a sample project you can clone and run here https://github.com/dylanb/UITestingFramework
Disclaimer: I am associated with the aXe project and therefore not neutral
I ran into that problem too - as another poster says, the plugin isn't really designed to operate that way.
I wrote a derivative of that plugin which does what you're looking for - protractor-axe-report-plugin.
You make a call to runAxeTest (or runAxeTestWithSelector) whenever you have a page open in the browser that you want to test, and it generates reports using the aXe engine.
Continuum can be used for your use case where it seems the accessibility plugin that comes with Protractor cannot. Here's some documentation on a Protractor-based sample project that uses Continuum. It can be downloaded from webaccessibility.com under 'Continuum for Protractor'. If you look at the source code of the sample project, it basically just boils down to this:
const continuum = require('../js/Continuum.js').Continuum;

continuum.setUp(driver, "../js/AccessEngine.community.js");

continuum.runAllTests().then(() => {
  const accessibilityConcerns = continuum.getAccessibilityConcerns();
  // accessibilityConcerns.length will be 0 if no accessibility concerns are found
});
(For more information on the above, you can check out the API documentation.)
You can execute that continuum.runAllTests bit wherever in your tests that you like. That includes multiple times from within the same test too, if desired, which if I understand correctly is ultimately what you're after.
Of course, no automated accessibility testing tool is a replacement for manual accessibility testing. It seems like you're just looking to get a baseline level of compliance right now though, so Continuum seems appropriate for your use case to tackle the low-hanging fruit.

Unit testing Compiled JavaScript?

I am planning to use either CoffeeScript or TypeScript in one of my projects, which transcompiles to JavaScript, and I would like to use the Jasmine/Mocha unit testing framework. But I could not find proper answers to the questions below on Google.
Which is correct, testing the compiled JavaScript or the CoffeeScript/TypeScript source, and why?
Does it make sense to use TypeScript/CoffeeScript to write test cases as well?
You have to compile CoffeeScript/TypeScript in order to run it. That includes testing. So, yes, you test the compiled version. You may want to have different (smaller) compilation units for the unit tests if compiling the whole thing takes too long.
Sure. Then you get all the advantages you chose these languages for.
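As a small illustration of that compile-then-test flow, a package.json scripts excerpt (the dist output path and the *.spec.js naming are assumptions; adjust them to your tsconfig):

"scripts": {
  "pretest": "tsc",
  "test": "mocha dist/test/**/*.spec.js"
}

npm runs pretest automatically before test, so "npm test" always exercises the freshly compiled JavaScript.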
My doubt is which makes more sense: testing the compiled JavaScript or the CoffeeScript/TypeScript, and why?
Either will work fine.
Does it make sense to use TypeScript/CoffeeScript to write test cases as well?
Yes it does. Pick a language and run with it. For TypeScript you might find this quick sample useful: https://github.com/TypeStrong/tsproj
code: https://github.com/TypeStrong/tsproj/tree/master/src/lib
tests: https://github.com/TypeStrong/tsproj/tree/master/src/test
I don't have a better sample that is still simple :)

How to access mocha options from within a test file?

I am running mocha tests using gruntjs and grunt-simple-mocha:
https://github.com/yaymukund/grunt-simple-mocha
How can I access the options defined in my grunt.js file within each mocha test?
What I would like to accomplish is to have some common configuration in my gruntfile and use that in my tests.
The one way I found already is using global values, which is not very good, but it works.
Inside the grunt.js config:
global.hell = 'hey you';
Inside a test:
console.log(global.hell);
I am looking into one more way now; maybe it will be better.
--EDIT
No, it seems this is the one I will stop at, unless I want to end up with some black magic like in mocha-as-promised, and I don't have the skills to write that.
--EDIT
You can also take a look at this: https://github.com/visionmedia/mocha/wiki/Shared-Behaviours
You can share objects between tests, but I'm not sure it will help with grunt.
As far as I'm aware, there is no way to push objects into your mocha suite. The only other interpretation I can think of for your question is that you would like to load a common set of configs among your test files. I don't believe you can, other than loading a common config file at the very top of your test files so it is available to your test methods.
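A sketch of that common-config-file approach (file names and values here are just examples):

// test/config.js - shared settings, required by both the gruntfile and the tests
module.exports = {
  baseUrl: 'http://localhost:3000',
  timeout: 5000
};

// in grunt.js / Gruntfile.js
var config = require('./test/config');

// in a test file
var config = require('./config');

describe('something', function () {
  this.timeout(config.timeout);

  it('uses the shared settings', function () {
    // use config.baseUrl etc. here
  });
});

It is not as convenient as passing options straight from grunt-simple-mocha, but it keeps the configuration in one place without resorting to globals.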
