Given the following code:
var api = {};
var models = {};
var db;

describe(vars.project.name, function() {
  before(function(done) {
    // Loading models
    models_module.getDbInstance('0.1', function(res) {
      db = res;
      var config = {
        server: server,
        db: db,
        v: '0.1'
      };
      // Importing all the tests
      api.base = require('./functions/api.base.js')(config);
      models.test_table = require('./functions/models.test_table.js')(config);
      done();
    });
  });
  // Tests general status
  describe('checking the status of the API without a version', function() {
    it('should succeed with 200 OK', api.base.status.without_route);
  });
});
This loads the database with my models for version 0.1 and then requires my test definitions, passing them the database and some other config info. That's the theory, anyway.
Instead, I get an error saying Cannot read property 'status' of undefined. This means that Mocha tries to execute my tests, or at least initialize them, before the before hook has completed.
I also tried with async.series (loading the models, then loading the tests) but it doesn't do anything, only displays 0 passing (0ms).
What's wrong with this?
I saw the tree and missed the forest...
This cannot work:
describe('checking the status of the API without a version', function() {
  it('should succeed with 200 OK', api.base.status.without_route);
});
The problem is that you are trying to evaluate api.base while Mocha is constructing the test suite. In brief, you should keep in mind that the callbacks passed to describe are evaluated before the tests start. The callbacks passed to the it calls are not evaluated until the individual tests are executed. (See another answer of mine for all the gory details of what happens when.) So at the stage where api.base.status.without_route is evaluated, api.base has not been set.
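Here is a tiny sketch (not from your code) that illustrates the two phases:

// illustration only: describe callbacks run while the suite is built,
// it callbacks run only when the test executes
describe('suite', function() {
  console.log('runs while Mocha is building the suite');
  it('test', function() {
    console.log('runs only when this test actually executes');
  });
});

With that in mind, the fix is to defer the property access until the test actually runs: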
describe('checking the status of the API without a version', function() {
  it('should succeed with 200 OK', function () {
    api.base.status.without_route();
  });
});
Given the code you had in your question, I've assumed that api.base.status.without_route is a function which would fail if your conditions are not met. That it would be a function looks peculiar to me, but I don't know the larger context of your application. I'm basing my assumption on the fact that the 2nd argument passed to it should be a callback. If it is not a function, then you'll need to write whatever specific test you need (if (api.base.status.without_route === ...) etc.)
We are developing a data-driven Protractor framework (Jasmine), and I need help handling a certain failure scenario.
I will be iterating over the same test with different data sets; my Page module handles all the verification.
If any it block fails, I want to run a certain function to clear cookies, capture session details, and restart the browser (I have all of those functions),
but
I am not sure how to detect the it block failure and trigger that specific function, while also making sure the next loop iteration is triggered.
browser.restart() never worked in the data-driven setup, whether in beforeAll or afterAll.
If I am running this data-driven suite in parallel (we can run the same test in parallel browsers, but we can't distribute each data set to a different browser), is there any way to distribute?
var dData = requireFile('testData/data.json');
using(dData, async function(data, description) {
  describe('scenario ' + description, function() {
    it('Load URL', async function() { /* ... */ });
    it('validate Page1', async function() { /* ... */ });
    it('validate Page2', async function() { /* ... */ });
    it('validate Page3', async function() { /* ... */ });
  });
});
If I understood everything right, you have something like 3 questions. I'll answer only the first, general one: how to handle the results of each it block.
It sounds like, for what you are trying to implement, you should take advantage of a custom reporter in Jasmine.
More precisely, what you want to do is to:
create a module with custom reporter
register it in your config. This would be a good place to think ahead of time if there are any parameters that you want to pass to the reporter
there are different hooks: jasmine-started, suite-started (describe), spec-started (it), suite-done, jasmine-done. I'm not sure you need all of them, but you'll need one in particular: spec-done. This should be a function that will be called after each it block, taking a spec object as a parameter. You can explore it on your own, but what you'll need from it is the status property (spec.status). Its value can be 'passed', 'failed' and I believe others. So your logic will be like
if (spec.status === 'passed') {
  // ...
} else if (spec.status === 'failed') {
  // ...
} else {
  // ...
}
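For illustration, a minimal sketch of such a reporter (the file name and the cleanup call are assumptions, not from your setup):

// customReporter.js (hypothetical file name)
module.exports = {
  specDone: function(spec) {
    if (spec.status === 'failed') {
      console.log('Spec failed: ' + spec.fullName);
      // trigger your own cleanup here: clear cookies, capture session details, etc.
    }
  }
};

// protractor.conf.js: register the reporter before the specs run
// onPrepare: function() {
//   jasmine.getEnv().addReporter(require('./customReporter.js'));
// }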
This kind of question has been asked before, but most of those questions have a pretty complicated background.
The scenario is simple. Let's say we are testing our favorite TODO app.
Test cases are next:
TC00 - 'User should be able to add a TODO item to the TODO list'
TC01 - 'User should be able to rename TODO item'
TC02 - 'User should be able to remove TODO item'
I don't want to run the TC01 and TC02 if TC00 fails (the TODO item is not added so I have nothing to remove or rename)
So I've been researching this question for the past 3 days, and the most common answers for this question are:
• Your tests should not depend on each other
• Protractor/Jasmine does not have feature to dynamically turn on/off tests ('it' blocks)
The reason why I'm asking this question here is that it looks like a very widespread case, and there is still no clear suggestion for handling it (I mean, I could not find any).
My JavaScript skills are poor, but I understand that I need to play around with, let's say, passing done, or adding the if with the test inside...
it('should add a todo', () => {
  todoInput.sendKeys('test');
  addButton.click();
  let item = element(by.cssContainingText('.list-item', 'test'));
  expect(item.isPresent()).toBe(true);
});
In my case there are like 15 tests ('it' blocks) after adding the item to the list. And I want to skip SOME OF THE tests if the 'parent' test failed.
PLEASE NOTE:
There is a solution out there which allows skipping ALL remaining tests if one fails. This does not suit my needs.
Man, I spent a good couple of weeks researching this, and yes, there were NO clear answers, until I realized how Protractor works in detail. If you understand this too, you'll figure out the best option for you.
SOLUTION IS BELOW AFTER SHORT THEORY
1) If you try to pass an async function to describe, you'll see it fail, because describe only accepts a synchronous function.
What this means for you is that whatever condition you want to wrap an it block in, it can't be Promise-based (a Promise resolves at some point, but not immediately). What you're trying to do essentially IS a Promise (open a page, do something, and wait to see if the condition satisfies your criteria):
if (conditionIsTrue) { // can't be Promise
  it('name', () => {
  })
}
That's the first thing to consider...
2) When you run Protractor, it picks up the spec files specified in the config and builds the queue of describe/it AND beforeAll/afterAll blocks. THE IMPORTANT DETAIL HERE IS THAT THIS HAPPENS BEFORE THE BROWSER HAS EVEN STARTED.
Look at this example
let conditionIsTrue; // undefined
it('name', () => {
  conditionIsTrue = true;
})
if (conditionIsTrue) { // still undefined
  it('name', () => {
  })
}
By the time Protractor reaches the if() statement, the value of conditionIsTrue is still undefined. It may be overwritten inside the it block later on, when the browser starts, but not when Protractor builds the queue. So Protractor skips the it block.
In other words, protractor knows which describe blocks it'll run before it even opens the browser, and this queue can NOT be modified during execution
POSSIBLE SOLUTION
1.1 Define a global variable outside of describe
let conditionIsTrue; // undefined
describe("describe", () => {
  it('name1', async () => {
    conditionIsTrue = await element.isPresent(); // NOW IT'S TRUE if element is present
  })
  it('name2', async () => {
    if (conditionIsTrue) {
      // do whatever you want if the element is present
    } else {
      console.log("Skipping 'name2' test")
    }
  })
})
So you won't skip the it block itself; however, you can skip anything inside of it.
1.2 The same approach can be used for skipping it blocks across different specs, using an environment variable. Example:
spec_1.js
describe(`Suite: 1`, () => {
  it("element is present", async () => {
    if (await element.isPresent()) {
      process.env.FLAG = true;  // stored as the string 'true'
    } else {
      process.env.FLAG = false; // stored as the string 'false'
    }
  });
});
spec_2.js
describe(`Suite: 2`, () => {
  it("element is present", async () => {
    if (process.env.FLAG === 'true') { // compare against the string: 'false' would be truthy
      // do element specific actions
    }
  });
});
Another possibility I found out, but never had a chance to check, is to use the Grunt task runner, which may help you implement the following scenario:
Run Protractor to execute one spec
Check a desired condition
Export this condition to an environment variable
Exit Protractor
In your Grunt task, implement conditional logic for executing the rest of the conditional specs by starting Protractor again
But honestly, I don't see why you'd want to go this time-consuming route, which requires a lot of code... But just as an FYI.
There is one way provided by Protractor which might achieve what you want.
In the Protractor config file you can have an onPrepare function. It is a callback function called once Protractor is ready and available, and before the specs are executed. If multiple capabilities are being run, this will run once per capability.
Now, as I understand it, you need to run a test (or, say, execute a parent function) and then, based on its output, run some tests and not run others.
The onPrepare function in the Protractor config file will look like this:
onPrepare: async () => {
  await browser.manage().window().maximize();
  await browser.driver.get('url');
  // Continue your parent test steps for adding an item here. At the end of the
  // function, assign a global variable, say global.itemAdded = true/false, based
  // on the result of the test steps above. Note that you need the 'global.' prefix
  // to make it a global variable that will then be available in all specs.
}
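As a rough sketch of what that could look like (addTodoItem is a hypothetical placeholder for your own page-object steps, not a real helper):

onPrepare: async () => {
  await browser.manage().window().maximize();
  await browser.driver.get('url');
  try {
    await addTodoItem('test'); // hypothetical helper wrapping the parent test steps
    global.itemAdded = true;   // available in every spec afterwards
  } catch (e) {
    global.itemAdded = false;
  }
}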
Now in your spec files you can run tests (it()) based on the global.itemAdded variable value:
if (global.itemAdded === true) {
  it('This test should be running', () => {
  })
}
if (global.itemAdded === false) {
  it('This test should not be running', () => {
  })
}
I am using Istanbul for code coverage, but I am getting a very low coverage percentage, particularly in my model files.
Consider the following is the model file:
ModelA.js
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
var app = require('../server');
var db = require('../db/dbConnection');
var config = require('../configs/config');

const Schema1 = new Schema({ 'configurations': [] });

exports.save = function (aa, data, callback) {
  var logMeta = {
    file: 'models/modelA',
    function: 'save',
    data: {},
    error: {}
  };
  if (!aa) {
    return callback('aa is required');
  }
  global.logs[aa].log('info', 'AA: ' + aa, logMeta);
  db.connectDatabase(aa, function(error, mongoDB){
    if (error) {
      logMeta.data['error'] = error;
      global.logs[aa].log('error', 'error', logMeta);
      return callback(error);
    }
    const ModelA = mongoDB.model('bbb', Schema1); // assuming 'cccc' was meant to be Schema1
    ModelA.findOneAndUpdate({}, data, {upsert: true, new: true, runValidators: true}, function(error, result){
      if (error) {
        logMeta.data['error'] = error;
        global.logs[aa].log('error', 'error', logMeta);
      } else {
        logMeta.data = {};
        logMeta.data['result'] = JSON.parse(JSON.stringify(result));
        global.logs[aa].log('info', 'result', logMeta);
      }
      callback(error, result);
    });
  });
};
TestA.js:
var should = require('should'),
    sinon = require('sinon'),
    ModelA = require("../models/ModelA");

describe('Model test', function () {
  it('Should save Model', function (done) {
    var todoMock = sinon.mock(new ModelA({'configurations': []}));
    var todo = todoMock.object;
    todoMock
      .expects('save')
      .yields(null, 'SAVED');
    todo.save(function(err, result) {
      todoMock.verify();
      todoMock.restore();
      should.equal('SAVED', result, "Test fails due to unexpected result");
      done();
    });
  });
});
But I am getting a code coverage percentage of 20. So how can I increase the percentage?
Also:
1. Do I have to mock db.connectDatabase? If yes, how can I achieve that?
2. Do I have to use a test DB to run all my unit tests? Or do I have to assert?
3. Will code coverage work for unit tests or integration tests?
Please share your ideas. Thanks.
I have been using Istanbul to 100% code cover most of my client/server projects so I might have the answers you are looking for.
How does it work
Whenever you require some local file, it gets wrapped all over the place so Istanbul can tell whether every one of its parts is reached by your code.
Not only is the required file tainted; your running test is too.
However, while it's easy to code cover the running test file, mocked classes and their code might never be executed.
todoMock.expects('save')
According to the Sinon documentation:
Overrides todo.save with a mock function and returns it.
If Istanbul tainted the real save method, anything within that scope won't ever be reached, so you are actually testing that the mock works, not that your real code does.
This should answer your question: will code coverage work for unit tests or integration tests?
The answer is that it covers the code, which is the only thing you're interested in from a code coverage perspective. Covering Sinon JS is nobody's goal.
No need to assert ... but
Once you've understood how Istanbul works, it follows naturally that it doesn't matter whether you assert or not; all that matters is that you reach the code for real and execute it.
Asserting is just your guard against failures, not a mechanism interesting per se in any Istanbul test. When your assertion fails, your test does too, so it's good for you to know that things didn't work and there's no need to keep testing the rest of the code (early failure, faster fixes).
Whether you have to mock the db.connectDatabase
Yes, at least for the code you posted. You can assign db as a generic object mock on the global context and expect methods to be called, but you can also simplify your life by writing this:
function createDB(err1, err2) {
  return {
    connectDatabase(aa, callback) {
      callback(err1, {
        model(name, value) {
          return {
            findOneAndUpdate($0, $1, $2, fn) {
              fn(err2, {any: 'object'});
            }
          };
        }
      });
    }
  };
}
global.db = createDB(null, null);
This code in your test file can be used to create a global db that behaves differently according to the errors you pass along, giving you the ability to run the same test file various times with different expectations.
How to run the same test more than once
Once your test is completed, delete require.cache[require.resolve('../test/file')] and then require('../test/file') again.
Do this as many times as you need.
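As a minimal sketch (the path is just an example):

// re-run an already-required test file by evicting it from Node's module cache
var testPath = require.resolve('../test/file'); // example path
delete require.cache[testPath]; // drop the cached copy
require(testPath);              // requiring again executes the file from scratch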
When there is conditional feature detection
I usually run the test various times, deleting global constructors in case these are patched with a fallback. I also usually store them so I can put them back later on.
When the code is obvious but shouldn't be reached
In case you have if (err) process.exit(1);, you rarely want to reach that part of the code. There are various comments understood by Istanbul that will help you skip parts of the test, like /* istanbul ignore if */ or /* istanbul ignore else */, or even the generic /* istanbul ignore next */.
Please consider thinking twice if it's just you being lazy, or that part can really, safely, be skipped ... I got bitten a couple of times with a badly handled error, which is a disaster since when it happens is when you need the most your code to keep running and/or giving you all the info you need.
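For example, an error exit that is deliberately excluded from the report looks like this:

/* istanbul ignore if */
if (err) {
  // unreachable in tests by design; excluded from coverage
  process.exit(1);
}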
What is being covered?
Maybe you know this already, but the coverage/lcov-report/index.html file, which you can open right away in any browser, will show you all the parts that aren't covered by your tests.
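If it helps, this is the classic way to generate that report with Mocha (paths may differ in your setup):

istanbul cover node_modules/mocha/bin/_mocha -- --reporter spec
# then open coverage/lcov-report/index.html in a browser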
I have a Redis client that is created thus using the node_redis library (https://github.com/NodeRedis/node_redis):
var client = require('redis').createClient(6379, 'localhost');
I have a method I want to test whose purpose is to set and publish a value to Redis, so I want to test to ensure the set and publish methods are called or not called according to my expectations. The tricky thing is I want this test to work without needing to fire up an instance of a Redis server, so I can't just create the client because it will throw errors if it cannot detect Redis. Therefore, I need to stub the createClient() method.
Example method:
// require('redis').createClient(port, ip) is called once and the 'client' object
// is used globally in my module.
module.exports.updateRedis = function (key, oldVal, newVal) {
  if (oldVal != newVal) {
    client.set(key, newVal);
    client.publish(key + "/notify", newVal);
  }
};
I've tried several ways of testing whether set and publish are called with the expected key and value, but have been unsuccessful. If I try to spy on the methods, I can tell my methods are getting called by running the debugger, but calledOnce is not getting flagged as true for me. If I stub the createClient method to return a fake client, such as:
{
  set: function () { return 'OK'; },
  publish: function () { return 1; }
}
The method under test doesn't appear to be using the fake client.
Right now, my test looks like this:
var key, newVal, oldVal, client, redis;

before(function () {
  key = 'key';
  newVal = 'value';
  oldVal = 'different-value';
  client = {
    set: function () { return 'OK'; },
    publish: function () { return 1; }
  };
  redis = require('redis');
  sinon.stub(redis, 'createClient').returns(client);
  sinon.spy(client, 'set');
  sinon.spy(client, 'publish');
});

after(function () {
  redis.createClient.restore();
});

it('sets and publishes the new value in Redis', function (done) {
  myModule.updateRedis(key, oldVal, newVal);
  expect(client.set.calledOnce).to.equal(true);
  expect(client.publish.calledOnce).to.equal(true);
  done();
});
The above code gives me an Assertion error (I'm using Chai)
AssertionError: expected false to equal true
I also get this error in the console logs, which indicates the client isn't getting stubbed out when the method actually runs.
Error connecting to redis [Error: Ready check failed: Redis connection gone from end event.]
UPDATE
I've since tried stubbing out the createClient method (using the before function so that it runs before my tests) in the outermost describe block of my test suite, with the same result: it appears it doesn't return the fake client when the test actually runs my function.
I've also tried putting my spies in the before of the top-level describe to no avail.
I noticed that when I kill my Redis server, I get connection error messages from Redis, even though this is the only test (at the moment) that touches any code using the Redis client. I am aware that this is because I create the client when the NodeJS server starts, and Mocha creates an instance of the server app when it executes the tests. My current supposition is that the stubbing fails because it's more than just a require: createClient() is called at app startup, not when I call the function under test. I feel there still ought to be a way to stub this dependency, even though it's global and the function being stubbed gets called before my test function.
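To make the ordering problem concrete, here is a sketch of why the stub arrives too late (module and path names are illustrative):

// myModule.js (illustrative): this line runs at require time
var client = require('redis').createClient(6379, 'localhost');

// test.js: by the time this stub is installed...
// sinon.stub(require('redis'), 'createClient').returns(fakeClient);
// ...myModule has already been required and captured the real client,
// so the stub is never consulted.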
Other potentially helpful information: I'm using the Gulp task runner - but I don't see how this should affect how the tests run.
I ended up using fakeredis (https://github.com/hdachev/fakeredis) to stub out the Redis client BEFORE creating the app in my test suite, like so:
var redis = require('fakeredis'),
    konfig = require('konfig'),
    redisClient = redis.createClient(konfig.redis.port, konfig.redis.host);

sinon.stub(require('redis'), 'createClient').returns(redisClient);

var app = require('../../app.js'),
//... and so on
And then I was able to use sinon.spy in the normal way:
describe('some case I want to test', function () {
  before(function () {
    //...
    sinon.spy(redisClient, 'set');
  });
  after(function () {
    redisClient.set.restore();
  });
  it('should behave some way', function () {
    expect(redisClient.set.called).to.equal(true);
  });
});
It's also possible to mock and stub things on the client, which I found better than using the redisErrorClient they provide for testing Redis error handling in the callbacks.
It's quite apparent that I had to resort to a mocking library for Redis to do this, because Sinon couldn't stub out the createClient() method as long as it was being called in an outer scope relative to the function under test. It makes sense, but it's an annoying restriction.
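For instance, a hedged sketch of forcing an error path by stubbing a method on the fakeredis client (assuming the code under test passes a callback to set; the error message is made up):

// make client.set invoke its callback with an error to exercise error handling
sinon.stub(redisClient, 'set').yields(new Error('connection lost'));
// ... run the code under test and assert on the error handling ...
redisClient.set.restore();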
This question relates to the Mocha testing framework for NodeJS.
The default behaviour seems to be to start all the tests, then process the async callbacks as they come in.
When running async tests, I would like to run each test after the async part of the one before has been called.
How can I do this?
The point is not so much that "structured code runs in the order you've structured it" (amaze!), but rather, as @chrisdew suggests, that the return order of async tests cannot be guaranteed. To restate the problem: tests further down the (synchronous execution) chain cannot guarantee that required conditions, set by async tests, will be ready by the time they run.
So if you are requiring certain conditions to be set in the first tests (like a login token or similar), you have to use hooks like before() that test those conditions are set before proceeding.
Wrap the dependent tests in a block and run an async before hook on them (notice the 'done' in the before block):
var someCondition = false
// ... your async tests setting conditions go up here ...
describe('is dependent on someCondition', function(){
  // Polls `someCondition` every 1s
  var check = function(done) {
    if (someCondition) done();
    else setTimeout(function(){ check(done) }, 1000);
  }
  before(function( done ){
    check( done );
  });
  it('should get here ONLY once someCondition is true', function(){
    // Only gets here once `someCondition` is satisfied
  });
})
Use mocha-steps.
It keeps tests sequential regardless of whether they are async or not (i.e. your done functions still work exactly as they did). It's a direct replacement for it; instead, you use step.
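A minimal sketch of what that looks like (run Mocha with --require mocha-steps so step is available as a global):

// step() is a drop-in replacement for it(); when a step fails,
// the remaining steps in the suite are skipped rather than run
describe('sequential flow', function() {
  step('log in', function(done) {
    // ... async work ...
    done();
  });
  step('load dashboard', function(done) { // skipped if 'log in' failed
    done();
  });
});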
I'm surprised by what you wrote, as my experience differs. I use Mocha with BDD-style tests (describe/it) and just added some console.logs to my tests to see if your claims hold in my case, but seemingly they don't.
Here is the code fragment that I've used to check the order of "end1" and "start2". They were properly ordered.
describe('Characters start a work', function(){
  before(function(){
    sinon.stub(statusapp, 'create_message');
  });
  after(function(){
    statusapp.create_message.restore();
  });
  it('creates the events and sends out a message', function(done){
    draftwork.start_job(function(err, work){
      statusapp.create_message.callCount.should.equal(1);
      draftwork.get('events').length.should.equal(
        statusapp.module('jobs').Jobs.get(draftwork.get('job_id')).get('nbr_events')
      );
      console.log('end1');
      done();
    });
  });
  it('triggers work:start event', function(done){
    console.log('start2');
    statusapp.app.bind('work:start', function(work){
      work.id.should.equal(draftwork.id);
      statusapp.app.off('work:start');
      done();
    });
  });
});
Of course, this could have happened by accident too, but I have plenty of tests, and if they ran in parallel, I would definitely have race conditions, which I don't have.
Please, refer to this issue too from the mocha issue tracker. According to it, tests are run synchronously.
I wanted to solve this same issue in our application, but the accepted answer didn't work well for us, especially in cases where someCondition would never become true.
We use promises in our application, and these made it very easy to structure the tests accordingly. The key, however, is still to delay execution through the before hook:
var assert = require( "assert" );

describe( "Application", function() {
  var application = require( __dirname + "/../app.js" );
  var bootPromise = application.boot();

  describe( "#boot()", function() {
    it( "should start without errors", function() {
      return bootPromise;
    } );
  } );

  describe( "#shutdown()", function() {
    before( function() {
      return bootPromise;
    } );
    it( "should be able to shut down cleanly", function() {
      return application.shutdown();
    } );
  } );
} );