I am running mocha tests using gruntjs and grunt-simple-mocha:
https://github.com/yaymukund/grunt-simple-mocha
How can I access the options defined in my grunt.js file within each mocha test?
What I would like to accomplish is to have some common configuration in my Gruntfile and use that in my tests.
The one way I have found so far is to use global values, which is not very good, but it works.

Inside the grunt.js config:

global.hell = 'hey you';

Inside the test:

console.log(global.hell);
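For completeness, this is roughly where that sits in the Gruntfile; a minimal sketch (the simplemocha config values are made up):

module.exports = function (grunt) {
  // the shared value my tests read back via global.hell
  global.hell = 'hey you';

  grunt.initConfig({
    simplemocha: {
      all: { src: ['test/**/*.js'] }
    }
  });

  grunt.loadNpmTasks('grunt-simple-mocha');
  grunt.registerTask('test', ['simplemocha']);
};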
I am inspecting one more way now; maybe it will be better.
--EDIT
No, it seems this is the approach I will stop at, unless I want to end up with some black magic like in mocha-as-promised, because I don't have the skills to write that.
--EDIT
Also, you can take a look at this: https://github.com/visionmedia/mocha/wiki/Shared-Behaviours
You can share objects between tests that way, but I am not sure whether it will help with Grunt.
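A minimal sketch of that shared-behaviour pattern (the suite and property names are made up):

function shouldBehaveLikeAUser(context) {
  it('has a name', function () {
    // context.user is set in the including suite's before hook
    if (typeof context.user.name !== 'string') throw new Error('no name');
  });
}

describe('Admin', function () {
  var context = {};
  before(function () { context.user = { name: 'admin' }; });
  shouldBehaveLikeAUser(context);
});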
As far as I'm aware, there is no way to push objects into your Mocha suite. The only other interpretation I can think of for your question is that you would like to load a common set of configs across your test files. I don't believe you can, other than loading a common config file at the very top of each test file so it is available to your test methods.
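A minimal sketch of that approach (file name and values are made up):

// test/config.js
module.exports = {
  baseUrl: 'http://localhost:3000',
  timeout: 5000
};

// test/mytest.js
var config = require('./config');

describe('something', function () {
  this.timeout(config.timeout);

  it('knows where the app lives', function () {
    console.log(config.baseUrl);
  });
});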
Related
I am building a Sails.js application using Sails 1.2.3 and Node 10.15. I want to include a JavaScript module in my api/helpers/* directory without Sails automatically trying to load it as a helper. That is, I have JavaScript objects that use helpers and are used in a helper, but are not helpers themselves; for example, a module 'rules' is imported into the create-rule helper, and the objects exported by that module are used within the helper.
By default, sails tries to load each file in the helpers/* directory as a helper, and throws if the underlying implementation does not match that of a valid helper:
ImplementationError: Failed to load helper `create-rule/rules/foo/index` into a Callable! Sorry, could not interpret "index" because its underlying implementation has a problem:
------------------------------------------------------
• Missing the `fn` property.
------------------------------------------------------
Hoping someone can help out! Let me know if more info is needed. Thanks in advance!
I don't quite understand what you are trying to do. In my humble opinion, I would grab all the object constructors and place them in a single file in api/services. That will make them automatically available in all controllers. I would not let my objects' methods use helpers by themselves (I even think you can't, at least not easily). Then, when you need a helper to use your object, just pass it as a parameter. Anyway, again in my humble opinion, you are structuring your code to fit everything inside /helpers, and that will make it extremely hard to develop. Let's assume you manage to make it all work inside /helpers: only you, without exception, will be able to understand what it does or how it works. That doesn't seem like a good idea.
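A rough sketch of what I mean, with made-up names (by default Sails exposes files in api/services as globals):

// api/services/RuleObjects.js
function Rule(name, helpers) {
  this.name = name;
  this.helpers = helpers; // helpers are passed in, not required directly
}

module.exports = { Rule: Rule };

// api/helpers/create-rule.js
module.exports = {
  inputs: {
    name: { type: 'string', required: true }
  },
  fn: async function (inputs, exits) {
    // RuleObjects is available globally as a service
    var rule = new RuleObjects.Rule(inputs.name, sails.helpers);
    return exits.success(rule);
  }
};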
Is it possible to load only a subset of all custom commands in a nightwatch test suite?
E.g:
Test suites/test files:
component1Tests1.js
component1Tests2.js
component2Tests1.js
component2Tests2.js
Custom commands:
component1Commands.js
component2Commands.js
Component1TestsX files/tests should see only component1Commands; the situation for Component2TestsX is analogous. This is needed to avoid a potential naming collision between the commands.
Thank you all in advance!
As far as I know, it is not possible, and this looks more like a naming problem.
How would someone else know what component1Commands.js does?
What if component1Commands.js already does the same thing that component2Commands.js does?
You would be breaking the DRY principle; the main idea of custom commands is to reuse code across multiple tests. You should name them distinctly. If it is really needed, it is better to split the tests into two different projects instead.
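If the two command sets really must coexist in one project, distinct naming is the way out, since each command file's name becomes the command name on the browser object. A sketch of the config side (paths and file names are made up):

// nightwatch.conf.js (excerpt) - both folders are loaded globally,
// and collisions are avoided purely by prefixing the file names
module.exports = {
  custom_commands_path: [
    './commands/component1', // e.g. component1Login.js -> browser.component1Login()
    './commands/component2'  // e.g. component2Login.js -> browser.component2Login()
  ]
};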
My goal is to debug one of my tests. I'm using Mocha as a base, and SinonJS for spies, stubs and mocks. For some unknown reason, my stub of the ajax method has stopped working: it worked a week ago, but now the requests are actually sent and the stub does not track the calls.
I have these lines inside the outermost describe:
let sandbox = sinon.sandbox.create();
let ajaxStub = undefined;
and then this:
beforeEach(function () {
ajaxStub = sandbox.stub($, 'ajax');
});
afterEach(function () {
sandbox.restore();
});
Anyway, my question is not what's wrong with this; I'm probably doing something extremely stupid elsewhere, and some debugging could probably solve it. My problem is with the debugging itself.

mocha --debug-brk --inspect ./test/mytest.js

This is what I run on the command line to get the debugging session going.
My problem is that to run the tests I'm currently using Gulp, which loads all my framework dependencies and all my globals; the libraries added this way also include jQuery and Sinon.
And of course, if I debug my tests using that command line, Node.js does not load the required files into the environment, and at the first reference to sinon I get an exception.
I could create an html page in which I load required files and tests and run the test - then debug it manually with the browser inspector - but that's something that I would like to avoid. Is there anything more automated?
I'm not a NodeJS expert, I just roughly understand what it is and how it works, so I'm pretty confident there could be something I missed that can be of help :)
What I'm thinking about right now is a batch script to find the required files, but that's all I have.
Just an additional note: code base is really old and big, and I do not really have permission to refactor existing code into es6 modules.
I found a solution: I'm going to create a testDebugLoader.js script in which I will write which test I want to debug, plus an array of paths to the scripts I need to load.
Then I loop through the array, load each needed file, and call eval on the retrieved text.
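For reference, a minimal sketch of that loader (file paths are made up; eval is invoked indirectly so the scripts attach to the global scope, the way they would in the Gulp-built page):

// testDebugLoader.js
var fs = require('fs');
var path = require('path');

var scripts = [
  './vendor/jquery.js',
  './vendor/sinon.js',
  './src/legacy-code.js',
  './test/mytest.js' // the test I want to debug goes last
];

scripts.forEach(function (file) {
  var source = fs.readFileSync(path.resolve(__dirname, file), 'utf8');
  (0, eval)(source); // indirect eval -> evaluated in the global scope
});

Then I point the debugger at the loader instead of the test file: mocha --debug-brk --inspect ./testDebugLoader.js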
I am trying to use the accessibility plugin that comes with Protractor. From what I can see, it only checks the a11y of the last page I land on.
Is there a way to have two test scripts executed one after another and get different reports, or put everything in one report but separated?
Example:
access.js
access1.js
Output file:
resultJsonOutputFile: 'result/result.json'
I tried it this way in conf.js:
specs: ['../test/access.js', '../test/access1.js'],
or
specs: ['../test/access*.js'],
but I still get results only for the last script executed.
I tried also creating suites:
suites: {
homepage: '../test/homepage/access.js',
catalogpage: '../test/catalogpage/access1.js'
},
but when I check the JSON file, if both scripts are executed, the first one shows as OK with no issues and errors are reported only for the second script. However, if I run the first script alone, Protractor does report its errors.
I also tried putting everything in one JS file as different scenarios, but I hit the same issue.
With the current implementation, the accessibility plugin is set to run exactly once per invocation of the Protractor runner, on the last page. So unfortunately, no modification of the suites or test files will make it run more than once.
You can create separate configuration files for each set of test files you'd like to run, or use shardTestFiles to make sure that each file is run in its own process. See the referenceConf for more details on sharding.
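For example, a conf.js excerpt using the specs from your question (the capability values are just an example):

// conf.js (excerpt)
exports.config = {
  specs: ['../test/access*.js'],
  capabilities: {
    browserName: 'chrome',
    shardTestFiles: true, // run each spec file in its own process
    maxInstances: 2       // how many processes may run in parallel
  }
};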
Alternatively, you could use aXe to do your accessibility testing. To use it with e2e tests in Protractor and WebDriver, do the following:
npm install --save-dev axe-webdriverjs
Then in your e2e test files, you do:
var AxeBuilder = require('path_to_the/axe-webdriverjs');
to get hold of AxeBuilder; then, wherever you need to run a test:
AxeBuilder(browser.driver)
.analyze(function (results) {
expect(results.violations.length).toBe(0);
});
The above example is using Jasmine but you can extrapolate for any other assertion library.
Also: there is a sample project you can clone and run here https://github.com/dylanb/UITestingFramework
Disclaimer: I am associated with the aXe project and therefore not neutral
I ran into that problem too - as another poster says, the plugin isn't really designed to operate that way.
I wrote a derivative of that plugin which does what you're looking for - protractor-axe-report-plugin.
You make a call to runAxeTest (or runAxeTestWithSelector) whenever you have a page open in the browser that you want to test, and it generates reports using the aXe engine.
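A sketch of what a spec using it looks like (the exact call signature here is from memory, so treat it as an assumption and check the plugin's README):

describe('homepage', function () {
  it('has no aXe violations', function () {
    browser.get('/');
    runAxeTest('homepage'); // label used in the generated report
  });
});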
Continuum can be used for your use case, where it seems the accessibility plugin that comes with Protractor cannot. There is documentation for a Protractor-based sample project that uses Continuum; it can be downloaded from webaccessibility.com under 'Continuum for Protractor'. If you look at the source code of the sample project, it basically boils down to this:
const continuum = require('../js/Continuum.js').Continuum;
continuum.setUp(driver, "../js/AccessEngine.community.js");
continuum.runAllTests().then(() => {
const accessibilityConcerns = continuum.getAccessibilityConcerns();
// accessibilityConcerns.length will be 0 if no accessibility concerns are found
});
(For more information on the above, you can check out the API documentation.)
You can execute that continuum.runAllTests bit wherever in your tests you like, including multiple times within the same test if desired, which, if I understand correctly, is ultimately what you're after.
Of course, no automated accessibility testing tool is a replacement for manual accessibility testing. It seems like you're just looking to get a baseline level of compliance right now though, so Continuum seems appropriate for your use case to tackle the low-hanging fruit.
I tried something like:
var path = '../right/here';
var module = require(path);
but it can't find the module anymore this way, while:
var module = require('../right/here');
works like a charm. I would like to load modules using a generated list of strings, but I can't wrap my head around this problem at the moment. Any ideas?
You can use a template literal to build the file path dynamically:
var myModule = 'Module1';
var Modules = require(`../path/${myModule}`)
This is due to how Browserify does its bundling: it can only do static string analysis for requirement rebinding. So, if you want to do Browserify bundling, you'll need to hardcode your requirements.
For code that has to go into production deployment (as opposed to quick prototypes, which you rarely bother to add bundling for), it's always advisable to stick with static requirements, partly because of the bundling, but also because using dynamic strings for your requirements means you're writing code that isn't predictable and can thus be full of bugs that you rarely run into and that are extremely hard to debug.
If you need different requirements for different runs (say, dev vs. stage testing vs. production), then it's usually a good idea to use process.env or a config object, so that when it comes time to decide which library to require for a specific purpose, you can use something like
var knox = config.offline ? require("./util/mocks3") : require("knox");
That way your code also stays immediately traversable for others who need to track down where something's going wrong, in case a bug does get found.
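The process.env variant of the same idea looks like this (the variable name is made up); note that both require() calls still use static string literals, so Browserify can analyze them:

var useMocks = process.env.MOCK_S3 === 'true';
var knox = useMocks ? require('./util/mocks3') : require('knox');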
require('#/path/'.concat(fileName))
You can use Browserify's .require() to add the files you want to access, computing their paths at bundle time instead of having them statically analyzed; this way those modules will be included in the bundle and will be found when require() is called later.
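A sketch of that bundling step (file names are made up):

// build.js
var browserify = require('browserify');
var fs = require('fs');

var b = browserify('./main.js');

// include modules that main.js will only require() dynamically at runtime,
// exposing each one under the id it will be required by
['./right/here.js', './right/there.js'].forEach(function (file) {
  b.require(file, { expose: file.replace(/\.js$/, '') });
});

b.bundle().pipe(fs.createWriteStream('./bundle.js'));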