How can I execute async Mocha tests (NodeJS) in order? - javascript

This question relates to the Mocha testing framework for NodeJS.
The default behaviour seems to be to start all the tests, then process the async callbacks as they come in.
When running async tests, I would like to run each test after the async part of the one before has been called.
How can I do this?

The point is not so much that "structured code runs in the order you've structured it" (amaze!) - but rather, as #chrisdew suggests, that the return order of async tests cannot be guaranteed. To restate the problem - tests that are further down the (synchronous execution) chain cannot guarantee that required conditions, set by async tests, will be ready by the time they run.
So if you require certain conditions to be set in the first tests (like a login token or similar), you have to use hooks like before() to check that those conditions are set before proceeding.
Wrap the dependent tests in a block and run an async before hook on them (notice the 'done' in the before block):
var someCondition = false
// ... your Async tests setting conditions go up here...
describe('is dependent on someCondition', function(){
    // Polls `someCondition` every 1s
    var check = function(done) {
        if (someCondition) done();
        else setTimeout( function(){ check(done) }, 1000 );
    }
    before(function( done ){
        check( done );
    });
    it('should get here ONLY once someCondition is true', function(){
        // Only gets here once `someCondition` is satisfied
    });
})

Use mocha-steps.
It keeps tests sequential regardless of whether they are async or not (i.e. your done callbacks still work exactly as they did). It's a drop-in replacement for it: instead of it you use step.
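For illustration, here is a minimal sketch of how mocha-steps might be used, assuming mocha is run with --require mocha-steps as its docs describe; the login and fetchProfile helpers are hypothetical and only stand in for your own async code:

// Run mocha with `--require mocha-steps` so the global step() is available.
describe('sequential async flow', function () {
    var token;

    step('logs in first', function (done) {
        // `login` is a hypothetical async helper, used only for illustration
        login('user', 'secret', function (err, result) {
            if (err) return done(err);
            token = result;
            done();
        });
    });

    // If the step above fails, mocha-steps aborts the remaining steps in this
    // suite instead of running them against missing state.
    step('uses the login token', function (done) {
        fetchProfile(token, done); // another hypothetical helper
    });
});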

I'm surprised by what you wrote, as I use mocha with bdd style tests (describe/it) too, and I just added some console.logs to my tests to see if your claims hold in my case, but seemingly they don't.
Here is the code fragment that I've used to check the order of "end1" and "start2". They were properly ordered.
describe('Characters start a work', function(){
    before(function(){
        sinon.stub(statusapp, 'create_message');
    });

    after(function(){
        statusapp.create_message.restore();
    });

    it('creates the events and sends out a message', function(done){
        draftwork.start_job(function(err, work){
            statusapp.create_message.callCount.should.equal(1);
            draftwork.get('events').length.should.equal(
                statusapp.module('jobs').Jobs.get(draftwork.get('job_id')).get('nbr_events')
            );
            console.log('end1');
            done();
        });
    });

    it('triggers work:start event', function(done){
        console.log('start2');
        statusapp.app.bind('work:start', function(work){
            work.id.should.equal(draftwork.id);
            statusapp.app.off('work:start');
            done();
        });
    });
});
Of course, this could have happened by accident too, but I have plenty of tests, and if they ran in parallel I would definitely have race conditions, which I don't have.
Please, refer to this issue too from the mocha issue tracker. According to it, tests are run synchronously.

I wanted to solve this same issue in our application, but the accepted answer didn't work well for us, especially in the case where someCondition would never become true.
We use promises in our application and these made it very easy to structure the tests accordingly. The key however is still to delay execution through the before hook:
var assert = require( "assert" );

describe( "Application", function() {
    var application = require( __dirname + "/../app.js" );
    var bootPromise = application.boot();

    describe( "#boot()", function() {
        it( "should start without errors", function() {
            return bootPromise;
        } );
    } );

    describe( "#shutdown()", function() {
        before( function() {
            return bootPromise;
        } );

        it( "should be able to shut down cleanly", function() {
            return application.shutdown();
        } );
    } );
} );

Related

Is there a way to run particular Protractor test depending on the result of the other test?

This kind of question has been asked before, but most of those questions have a pretty complicated background.
The scenario is simple. Let's say we are testing our favorite TODO app.
The test cases are as follows:
TC00 - 'User should be able to add a TODO item to the TODO list'
TC01 - 'User should be able to rename TODO item'
TC02 - 'User should be able to remove TODO item'
I don't want to run the TC01 and TC02 if TC00 fails (the TODO item is not added so I have nothing to remove or rename)
So I've been researching this question for the past 3 days, and the most common answers to it are:
• Your tests should not depend on each other
• Protractor/Jasmine does not have feature to dynamically turn on/off tests ('it' blocks)
The reason why I'm asking this question here is that it looks like a very widespread case, yet there is still no clear suggestion for handling it (at least I could not find any).
My javascript skills are poor, but I understand that I need to play around with, let's say, passing 'done' or adding an if with the test inside...
it('should add a todo', () => {
    todoInput.sendKeys('test');
    addButton.click();
    let item = element(by.cssContainingText('.list-item', 'test'));
    expect(item.isPresent()).toBe(true);
})
In my case there are like 15 tests ('it' blocks) after adding the item to the list. And I want to skip SOME OF THE tests if the 'parent' test failed.
PLEASE NOTE:
There is a solution out there which allows you to skip ALL remaining tests if one fails. This does not suit my needs.
Man, I spent a good couple of weeks researching this, and yes, there were NO clear answers until I realized how protractor works in detail. If you understand this too, you'll figure out the best option for you.
SOLUTION IS BELOW AFTER SHORT THEORY
1) If you try to pass an async function to describe, you'll see that it fails, because describe only accepts a synchronous function.
What this means for you is that whatever condition you want to pass to an it block, it can't be Promise based (a Promise resolves at some point, but not immediately). Yet what you're trying to do essentially IS a Promise (open a page, do something and wait to see if the condition satisfies your criteria).
if (conditionIsTrue) { // can't be Promise
    it('name', () => {
    })
}
That's the first thing to consider...
2) When you run protractor, it picks up the spec files specified in the config and builds the queue of describe/it AND beforeAll/afterAll blocks. THE IMPORTANT DETAIL HERE IS THAT THIS HAPPENS BEFORE THE BROWSER HAS EVEN STARTED.
Look at this example
let conditionIsTrue; // undefined

it('name', () => {
    conditionIsTrue = true;
})

if (conditionIsTrue) { // still undefined
    it('name', () => {
    })
}
By the time Protractor reaches the if() statement, the value of conditionIsTrue is still undefined. It may be overwritten inside the it block later on, when the browser starts, but not when the queue is built. So Protractor skips the conditional it.
In other words, protractor knows which describe blocks it'll run before it even opens the browser, and this queue can NOT be modified during execution
POSSIBLE SOLUTION
1.1 Define a global variable outside of describe
let conditionIsTrue; // undefined

describe("describe", () => {
    it('name1', async () => {
        conditionIsTrue = await element.isPresent(); // NOW IT'S TRUE if element is present
    })

    it('name2', async () => {
        if (conditionIsTrue) {
            // do whatever you want if the element is present
        } else {
            console.log("Skipping 'name2' test")
        }
    })
})
So you won't skip the it block itself; however, you can skip anything inside of it.
1.2 The same approach can be used for skipping it blocks across different specs, using an environment variable. Example:
spec_1.js
describe(`Suite: 1`, () => {
    it("element is present", async () => {
        if (await element.isPresent()) {
            process.env.FLAG = true; // stored as the string "true"
        } else {
            process.env.FLAG = false; // stored as the string "false"
        }
    });
});
spec_2.js
describe(`Suite: 2`, () => {
    it("element is present", async () => {
        // compare against the string: env vars are strings, so "false" would otherwise be truthy
        if (process.env.FLAG === 'true') {
            // do element specific actions
        }
    });
});
Another possibility I found, but never had a chance to check, is to use the Grunt task runner, which may help you implement the following scenario:
• Run protractor to execute one spec
• Check a desired condition
• Export this condition to an environment variable
• Exit protractor
• In your Grunt task, implement conditional logic for executing the rest of the conditional specs by starting protractor again
But honestly, I don't see why you'd want to go down this time-consuming route, which requires a lot of code... but just as an FYI.
There is one way provided by Protractor which might achieve what you want.
In the protractor config file you can have an onPrepare function. It is a callback function called once protractor is ready and available, and before the specs are executed. If multiple capabilities are being run, it will run once per capability.
Now, as I understand it, you need to run a test (or, say, execute a parent function) and then, based on its output, run some tests and not run others.
The onPrepare function in the protractor config file will look like this:
onPrepare: async () => {
    await browser.manage().window().maximize();
    await browser.driver.get('url');
    // Continue your parent test steps for adding an item here. At the end of the
    // function, assign a global variable, say global.itemAdded = true/false, based
    // on the result of the steps above. Note that you need to use 'global.' here to
    // make it a global variable, which will then be available in all specs.
}
Now in your spec files you can run tests (it()) based on the global.itemAdded variable value:
if (global.itemAdded === true) {
    it('This test should be running', () => {
    })
}

if (global.itemAdded === false) {
    it('This test should not be running', () => {
    })
}

How to clean up after failed Intern functional tests?

I have some functional tests that run using Intern (3), and the last step in each test is to do some cleanup, including clearing localStorage on the remote browser and switching back to the original window (I store the window handle right away and switch back to it before the test ends, so subsequent tests don't fail by trying to run against a closed window if the previous test ended on a different one). However, if some chaijs assertions fail in a .then(), the cleanup code at the end gets skipped. Is there a better way to do cleanup for functional tests that will still run even when some assertions fail?
this.leadfoot.remote
    .get(require.toUrl(url))
    .execute(function() { return document.querySelector('#welcome').textContent; })
    .then(welcome => {
        assert.equal(welcome, 'hello world', 'properly greeted');
    })
    .execute(function() {
        localStorage.clear();
    });
If the assertion fails, it'll never clear localStorage at the end, and if the next test expects localStorage to be empty, it will fail too. Is there a better way to clean up after a functional test?
Use an afterEach method.
afterEach() {
    return this.remote
        .execute(function () {
            localStorage.clear();
        });
},

'my test'() {
    return this.remote
        .get(...)
        .execute(...)
        .then(welcome => {
            assert.equal(welcome, ...);
        });
}
In our project we do it like this:
afterEach: function() {
    var command = this.remote;
    if (intern.variables.currentCase && !intern.variables.currentCase.hasPassed) {
        command = command.execute(function () {
            localStorage.clear();
        });
    }
    return command;
},
and localStorage.clear() will be executed only if the test has failed.

before() not executing before subsequent describes()?

Given the following code:
var api = {};
var models = {};

describe(vars.project.name, function() {
    before(function(done) {
        // Loading models
        models_module.getDbInstance('0.1', function(res) {
            db = res;
            var config = {
                server: server,
                db: db,
                v: '0.1'
            };
            // Importing all the tests
            api.base = require('./functions/api.base.js')(config);
            models.test_table = require('./functions/models.test_table.js')(config);
            done();
        });
    });

    // Tests general status
    describe('checking the status of the API without a version', function() {
        it('should succeed with 200 OK', api.base.status.without_route);
    });
});
This loads the database with my models for version 0.1 and then requires my test definitions, passing them the database and some other config info. That's the theory, anyway.
Instead, I get an error saying Cannot read property 'status' of undefined. This means that it tries to execute my tests, or at least initialize them, before completing the before function.
I also tried with async.series (loading the models, then loading the tests) but it doesn't do anything, only displays 0 passing (0ms).
What's wrong with this?
I saw the tree and missed the forest...
This cannot work:
describe('checking the status of the API without a version', function() {
    it('should succeed with 200 OK', api.base.status.without_route);
});
The problem is that you are trying to evaluate api.base while Mocha is constructing the test suite. In brief, you should keep in mind that the callbacks passed to describe are evaluated before the tests start. The callbacks passed to the it calls are not evaluated until the individual tests are executed. (See another answer of mine for all the gory details of what happens when.) So at the stage where api.base.status.without_route is evaluated, api.base has not been set. Deferring the lookup into the it callback fixes it:
describe('checking the status of the API without a version', function() {
    it('should succeed with 200 OK', function () {
        api.base.status.without_route();
    });
});
Given the code you had in your question, I've assumed that api.base.status.without_route is a function which would fail if your conditions are not met. That it be a function looks peculiar to me but I don't know the larger context of your application. I'm basing my assumption on the fact that the 2nd argument passed to it should be a callback. If it is not a function, then you'll need to write whatever specific test you need (if (api.base.status.without_route === ...) etc.)
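To make the timing concrete, here is a small illustration (not from the original answer) of when each callback runs:

describe('evaluation order', function() {
    // This line runs while Mocha is constructing the suite,
    // before any test has executed.
    console.log('suite construction');

    it('runs later', function() {
        // This line runs only when this individual test executes.
        console.log('test execution');
    });
});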

Reuse scenarios by using mocha

Recently I've started to use JS and mocha.
I've written some tests already, but now I've got to the point where I need to reuse my already written tests.
I've tried to look for "it" / "describe" reuse, but didn't find anything useful...
Does anyone have a good example?
Thanks
Considering that if you only do unit testing you won't catch errors due to integration problems between your components, you have to test your components together at some point. It would be a shame to dump mocha to run these tests. So you may want to run with mocha a bunch of tests that follow the same general pattern but differ in some small respects.
The way I've found around this problem is to create my test functions dynamically. It looks like this:
describe("foo", function () {
function makeTest(paramA, paramB, ...) {
return function () {
// perform the test on the basis of paramA, paramB, ...
};
}
it("test that foo does bar", makeTest("foo_bar.txt", "foo_bar_expected.txt", ...));
it("test what when baz, then toto", makeTest("when_baz_toto.txt", "totoplex.txt", ...));
[...]
});
You can see a real example here.
Note that there is nothing that forces you to have your makeTest function be in the describe scope. If you have a kind of test you think is general enough to be of use to others, you could put it in a module and require it.
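As a rough sketch of that last suggestion (the file name, factory name, and transform parameter here are illustrative assumptions, not from the original answer):

// shared-tests.js
var assert = require("assert");

// Returns a test body that checks `transform(input)` against `expected`.
module.exports = function makeTransformTest(transform, input, expected) {
    return function () {
        assert.deepEqual(transform(input), expected);
    };
};

A spec file could then require the factory and build its it blocks from it:

// some.spec.js
var makeTransformTest = require("./shared-tests");

describe("String.prototype.split", function () {
    it("splits on commas", makeTransformTest(function (s) { return s.split(","); }, "a,b", ["a", "b"]));
});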
Considering each test is only designed to test a single feature/unit, generally you want to avoid reusing your tests. It's best to keep each test self-contained and minimize the dependencies of the test.
That said, if you have something you repeat often in your tests, you can use a beforeEach to keep things more concise:
describe("Something", function() {
// declare your reusable var
var something;
// this gets called before each test
beforeEach(function() {
something = new Something();
});
// use the reusable var in each test
it("should say hello", function() {
var msg = something.hello();
assert.equal(msg, "hello");
});
// use it again here...
it("should say bye", function() {
var msg = something.bye();
assert.equal(msg, "bye");
});
});
You can even use an async beforeEach:
beforeEach(function(done) {
    something = new Something();
    // function that takes a while
    something.init(123, done);
});
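Mocha hooks can also return a promise instead of taking a done callback, so the same idea works with promise-based setup; a minimal sketch, where initAsync stands in for a hypothetical promise-returning variant of init:

beforeEach(function() {
    something = new Something();
    // Mocha waits for the returned promise to settle before running each test.
    return something.initAsync(123); // `initAsync` is a hypothetical promise-returning variant
});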

How do I make QUnit block until a module is complete?

I'm trying to use QUnit to test a bunch of javascript. My code looks something like this:
module("A");
doExpensiveSetupForModuleA();
asyncTest("A.1", testA1);
asyncTest("A.2", testA3);
asyncTest("A.3", testA3);
module("B");
doExpensiveSetupForModuleB();
asyncTest("B.1", testB1);
asyncTest("B.2", testB3);
asyncTest("B.3", testB3);
If I run this as-is, then doExpensiveSetupForModuleB() runs while the async tests are running, causing failures.
If doExpensiveSetupForModuleB() is run before testA*, then those tests will either fail or undo the expensive setup work so that testB* fails.
Is there a way to have QUnit block on the next module? Or to have it block starting a new test until the previous asynchronous test has completed? Or is there a better framework for JS testing that I should be using?
Note: I understand that my unit tests are not perfectly atomic. I do have cleanup code that helps make sure I don't get any dirty state, but doExpensiveSetupFor*() is prohibitively expensive, such that it wouldn't be realistic to run it before each test.
Could you use the module lifecycle?
function runOnlyOnce(fn) {
    return function () {
        try {
            if (!fn.executed) {
                fn.apply(this, arguments);
            }
        } finally {
            fn.executed = true;
        }
    }
}

// http://api.qunitjs.com/module/
module("B", {
    setup: runOnlyOnce(doExpensiveSetupForModuleB)
});
This is an example, adapted from your original code, that executes the setup method for each test method:
function doExpensiveSetupForModuleA() {
    console.log("setup A");
}

function testA1() {
    console.log("testA1");
    start();
}

function testA2() {
    console.log("testA2");
    start();
}

function testA3() {
    console.log("testA3");
    start();
}

function doExpensiveSetupForModuleB() {
    console.log("setup B");
}

function testB1() {
    console.log("testB1");
    start();
}

function testB2() {
    console.log("testB2");
    start();
}

function testB3() {
    console.log("testB3");
    start();
}

QUnit.module("A", { setup: doExpensiveSetupForModuleA });
asyncTest("A.1", testA1);
asyncTest("A.2", testA2);
asyncTest("A.3", testA3);

QUnit.module("B", { setup: doExpensiveSetupForModuleB });
asyncTest("B.1", testB1);
asyncTest("B.2", testB2);
asyncTest("B.3", testB3);
This will work independently of the order in which the tests are executed, and also independently of the time each method takes to finish.
The calls to start() ensure that the test results are collected only at that point in the method.
More detailed examples can be found in the QUnit Cookbook:
http://qunitjs.com/cookbook/#asynchronous-callbacks
Updated:
If you don't want your expensive methods to be executed before each test method, but only once per module, just add control variables to your code to check whether the module was already set up:
var moduleAsetUp = false;
var moduleBsetUp = false;

function doExpensiveSetupForModuleA() {
    if (!moduleAsetUp) {
        console.log("setting up module A");
        moduleAsetUp = true;
    }
}
...
function doExpensiveSetupForModuleB() {
    if (!moduleBsetUp) {
        console.log("setting up module B");
        moduleBsetUp = true;
    }
}
...
In this sample, the output would be:
setting up module A
testA1
testA2
testA3
setting up module B
testB1
testB2
testB3
This way you are using your expensive methods as module setup instead of test method setup.
Unit tests are supposed to be atomic, independent, isolated, and thus the order in which they run shouldn't be relevant.
QUnit doesn't always run tests in the same order anyway. If you want your tests to run in a specific order, you can tell QUnit not to reorder them:
QUnit.config.reorder = false;
This way you can ensure that testA will run before testB.
I think you have a misunderstanding of how the test declarations work.
QUnit can run any test independently. Just because you declare a test with test() or asyncTest() does NOT mean QUnit will call the function passed in. The "Rerun" links next to each test reload the page and skip every test but the specific one.
So if you want to rerun a B module test, your code will still set up the A module, even though it does not need to.
The module setup solution posted by others is likely the way to go here.
