I run Mocha with mocha --slow 0 ./test/test.js to show the time it takes for calls to complete. I'm writing a library that does very async things with another service, and response time matters. However, some of the calls in my library cause the other service to do slow operations (like start, stop, remove) that take more time to complete than exists between two consecutive tests. The result is that the printed time for the remove test is shown to be longer than the actual time the call takes to complete.
describe(`stopContainer ${test_name} t:0`, () => {
it(`should stop the container named ${test_name}`, () => {
return engine.stopContainer(test_name, {t:0}).should.be.fulfilled
})
})
// this shows to be much longer than the actual call takes
describe(`removeContainer ${test_name} v:1 `, () => {
it(`should remove the container named ${test_name} & volumes`, function(done) {
// needs to wait a second because of engine latency
this.timeout(5000)
setTimeout(() => {
engine.removeContainer(test_name, {v:1}).should.be.fulfilled.and.notify(done);
}, 1000)
})
})
This prints the following:
stopContainer dap_test_container t:0
✓ should stop the container named dap_test_container (285ms)
removeContainer dap_test_container v:1
✓ should remove the container named dap_test_container & volumes (1405ms)
Obviously the last test took 1000ms less than is reported. But I have to do this all over the place with hundreds of tests, so the reported values become meaningless because I cannot keep track of which ones are delayed and which are not.
Note: I don't mean to use this as a method of profiling my code; this is just to make my test results more meaningful.
I'd like to reduce the printed time. Is there a way with Mocha to adjust the reported time manually, or does Mocha provide a better construct for doing this?
It's very simple: do the delay in a before or after hook.
I made a global called wait:
global.wait = function wait(ms) {
  return new Promise(resolve => {
    setTimeout(resolve, ms)
  })
}
And use it in the tests like so
describe(`removeContainer ${test_name} v:1 `, () => {
before('wait for latency', () => wait(1000))
it(`should remove the container named ${test_name} & volumes`, () => {
return engine.removeContainer(test_name, {v:1}).should.be.fulfilled;
})
})
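If the latency belongs to the previous operation, the same wait can go in that block's after hook instead, which keeps the delay out of the test that gets timed:
describe(`stopContainer ${test_name} t:0`, () => {
  it(`should stop the container named ${test_name}`, () => {
    return engine.stopContainer(test_name, {t:0}).should.be.fulfilled
  })
  // absorb the engine latency here so the next suite's reported time stays honest
  after('wait for engine latency', () => wait(1000))
})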
Related
We use Cypress for test automation, and sometimes tests get stuck in Jenkins due to some issue and the whole execution hangs. How can I make Cypress skip a test if its execution takes a really long time, while the rest of the suite continues to run?
I couldn't find anything for this issue. I've seen some ideas about cancelling the whole suite if one test fails, but that's not what I need.
There is a timeout option on the Mocha context, which you can use in your Cypress tests because Cypress is built on top of Mocha.
Before the problem tests, add this command
beforeEach(function() {
this.timeout(60_000) // timeout when stuck
})
Or for every test, add it in the /cypress/support/e2e.js file.
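A sketch of that support-file variant (anything placed in /cypress/support/e2e.js runs before every spec):
// /cypress/support/e2e.js
beforeEach(function() {
  this.timeout(60_000) // applies to every test in every spec file
})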
Reference: How to set timeout for test case in cypress
Also, Mocha timeouts can be used at suite level, test level and hook level.
describe('a suite of tests', function () {
this.timeout(500);
it('should take less than 500ms', function (done) {
setTimeout(done, 300);
});
it('should take less than 500ms as well', function (done) {
setTimeout(done, 250);
});
})
Alternatively, you can try the done() method which signals the end of a test has been reached.
it('gets stuck and needs a kick in the ...', (done) => {
// my sticky test code
cy.then(() => done()) // fails the test if never called
})
You could use the standard setTimeout() function in the beginning of each test. All it does is execute a function after a period of time. If the function it executes throws an exception cypress will catch that, fail the test, and move on.
So you can setTimeout() before the test, then clearTimeout() after the test, then you effectively have a test timeout feature.
const testTimeout = () => { throw new Error('Test timed out') };
describe('template spec', () => {
let timeout;
beforeEach(() => {
// Every test gets 1 second to run
timeout = setTimeout(testTimeout, 1000);
});
afterEach(() => {
// Clear the timeout so it can be reset in the next test
clearTimeout(timeout)
});
it('passes', () => {
cy.wait(500);
});
it('fails', () => {
cy.wait(1500);
});
});
If you wanted to handle this at an individual test level:
const testTimeout = () => { throw new Error('Test timed out') };
it('fails', () => {
// Start timeout at beginning of the test
const timeout = setTimeout(testTimeout, 1000);
// Chain clearTimeout() off the last cypress command in the test
cy.wait(1500).then(() => clearTimeout(timeout));
// Do not put clearTimeout() outside of a cypress .then() callback
// clearTimeout(timeout)
});
You need to call clearTimeout() in a chained .then() off the last cypress command. Otherwise the timeout will actually be cleared immediately because cypress commands are asynchronous.
With all that said, I'll also leave you with Mocha's docs on the this.timeout() feature: https://mochajs.org/#test-level. In all honesty, I couldn't get it to work the way I expected, so I came up with the setTimeout() method. Hopefully one of these helps.
Cypress has a timeout configuration option. If a command takes longer, it will automatically fail.
it('test', () => {
cy.wait(1000);
// ...
})
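For reference, a sketch of where that timeout is configured: defaultCommandTimeout applies globally, and individual commands also accept a timeout option (the selector below is illustrative).
// cypress.config.js
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  defaultCommandTimeout: 10000, // fail any command that takes longer than 10s
})

// or per command, inside a test:
cy.get('.slow-element', { timeout: 10000 })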
If you want to skip a test that failed with a timeout, you can use try/catch:
try {
  cy.wait(1000);
  // ...
} catch (e) {
  // ...
}
So I have this code below. It deletes the DB and adds two users for the test case.
When I verify it manually in the Mongo database everything shows up correctly, but in the Mocha test case I get the timeout error even after defining the done argument and calling it.
Please help me with this.
const users = [{
_id: new ObjectID(),
email: 'First'
}, {
_id: new ObjectID(),
email: 'Second'
}];
beforeEach((done) => {
User.remove({}).then(() => {
return User.insertMany(users);
}).then(() => done());
})
In mocha, tests will time out after 2000 ms by default. Even if you were handling the asynchrony 100% correctly (which you are not), if you do an async operation that takes longer than 2 seconds, mocha will assume a failure. This is true even if the async operation is in a beforeEach or some other hook.
To change this, you need to invoke the timeout method on the test instance, giving it a sufficiently-high value. To access the test instance, you need to define your test functions using the function keyword rather than arrow syntax, and it will be available as this in your test functions:
beforeEach(function(done) {
this.timeout(6000); // For 6 seconds.
User.remove({}).then(() => {
return User.insertMany(users);
}).then(() => done());
});
In what way could you handle the asynchrony here better, though? As Henrik pointed out in comments, you'll never call done if either of your database calls fail. To be honest, though, since you're already dealing with promises, you shouldn't even use the done callback. Instead, just use Mocha's built-in promise support by returning the chained promise.
beforeEach(function() {
this.timeout(6000); // For 6 seconds.
return User.remove({})
.then(() => User.insertMany(users));
});
This way, if either of those promises rejects, Mocha will know and will show the rejection instead of just sitting around waiting for your test to time out.
You can even use async/await instead if you prefer:
beforeEach(async function() {
this.timeout(6000); // For 6 seconds.
await User.remove({});
await User.insertMany(users);
});
I am writing a mini-framework for executing unit tests for a product I work on. I want test data to be published and managed as seamlessly as possible. With Mocha, it is easy to schedule test data cleanup using the After() hook.
You could wrap an individual test in a describe() block and use that block's before/after hooks, but I'd rather avoid that if possible.
You could pass a cleanup function to afterEach which specifically targets data populated inside a test, though that would only be necessary for the one cleanup and it seems clunky.
Is it possible to generate test data within one test, just for the sake of that test, and also schedule a cleanup for it with Mocha?
Sure, just run your generation and cleanup in the test itself. If it's asynchronous, you can use the done callback to make it wait until it's called.
mocha.setup('bdd');
describe('suite', function() {
function getData() {
// Simulate asynchronous data generation
console.log('grabbing data');
return new Promise((resolve, reject) => {
setTimeout(() => resolve(100), 500);
});
}
function cleanup() {
// Simulate asynchronous cleanup
console.log('cleaning up...');
return new Promise((resolve, reject) => {
setTimeout(resolve, 500);
});
}
it('should do generation and clean up', function(done) {
// Generate some data
getData()
.then(data => {
// Test the data
if (data !== 100) {
throw new Error('How?!');
}
console.log('test passed');
// Cleanup
return cleanup();
})
.then(_ => {
// Use done() after all asynchronous work completes
console.log('done cleaning');
done();
})
.catch(err => {
  // Make sure it cleans up no matter what, then report the failure
  cleanup().then(_ => done(err));
});
});
});
mocha.run();
<script src="https://cdn.rawgit.com/mochajs/mocha/2.2.5/mocha.js"></script>
<div id="mocha"></div>
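A more compact variant of the same idea, as a sketch relying on Mocha's built-in promise support (no done callback): generate the data, and put the cleanup in a finally block so it runs whether the assertions pass or fail.
it('should do generation and clean up (async/await)', async function() {
  const data = await getData();
  try {
    // Test the data
    if (data !== 100) {
      throw new Error('How?!');
    }
  } finally {
    // Runs on success and on failure; Mocha waits for the returned promise
    await cleanup();
  }
});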
I think cleanup after a test is generally problematic because the consistency guarantees aren't very strong: is the cleanup function guaranteed to run? Probably not. If cleanup isn't assured, it could leave the next tests in an inconsistent state. I think it's good to make the attempt, but you can guard against failure by:
cleaning up/establishing DB state BEFORE each test
nuking the world so each test has a consistent state (this can be accomplished by executing your test in the context of a transaction and rolling back the transaction after each test, and at least never committing the transaction)
having each test create unique data. By leveraging unique data you can also run your tests in parallel, since it allows multiple different tests to have an isolated view of the DB. If each test writes its own data, you only have to worry about provisioning the whole DB at the beginning of each test run.
Of the above, if you're able to wrap your test in a transaction it is lightning fast (web frameworks like Django and Rails do this, and it makes tests/DB state very easy to reason about). A rough sketch of that approach follows.
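Here db, beginTransaction and rollback are hypothetical placeholders for whatever client or ORM you use; this only illustrates the shape of the hooks.
let tx;

beforeEach(async function() {
  // hypothetical API: open a transaction that the test's queries will run in
  tx = await db.beginTransaction();
});

afterEach(async function() {
  // never commit; rolling back restores a consistent state even if the test failed
  await tx.rollback();
});

it('creates a user inside the transaction', async function() {
  const user = await tx.insert('users', { email: 'unique-per-test@example.com' });
  expect(user.email).to.contain('@');
});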
I'm trying to get the results from a mock backend in Angular 2 for unit testing. Currently, we are using fakeAsync with a timeout to simulate the passing of time.
current working unit test
it('timeout (fakeAsync/tick)', fakeAsync(() => {
counter.getTimeout();
tick(3000); //manually specify the waiting time
}));
But this means that we are limited to a manually defined timeout rather than waiting until the async task is completed. What I'm trying to do is get tick() to wait until the task is completed before continuing with the test.
This does not seem to work as intended.
Reading up on fakeAsync and tick, the answer here explains that:
tick() simulates the asynchronous passage of time.
I set up a plnkr example simulating this scenario.
Here, we call the getTimeout() method which calls an internal async task that has a timeout. In the test, we try wrapping it and calling tick() after calling the getTimeout() method.
counter.ts
getTimeout() {
setTimeout(() => {
console.log('timeout')
},3000)
}
counter.specs.ts
it('timeout (fakeAsync/tick)', fakeAsync(() => {
counter.getTimeout();
tick();
}));
But, the unit test fails with the error "Error: 1 timer(s) still in the queue."
Does the issue here in the angular repo have anything to do with this?
Is it possible to use tick() this way to wait for a timeout function? Or is there another approach that I can use?
The purpose of fakeAsync is to control time within your spec. tick will not wait for any time, as it is a synchronous function used to simulate the passage of time. If you want to wait until the asynchronous function is complete, you would need to use async and whenStable; however, in your example the spec would then take 3 seconds to pass, so I wouldn't advise this.
The reason the counter.spec.ts is failing is that you have only simulated the passage of 0 seconds (typically used to represent the next tick of the event loop). So when the spec completes, there are still mocked timers active, and that fails the whole spec. It is actually working properly by informing you that a timeout has been mocked and is unhandled.
Basically, I think you are attempting to use fakeAsync and tick in ways they were not intended to be used. If you need to test a timeout the way you have proposed, the simplest way would be to mock the setTimeout function yourself so that, regardless of the time used, you can just verify the call.
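For example, a minimal sketch of mocking setTimeout yourself with a Jasmine spy (counter is the instance from the question; the spec name is illustrative):
it('schedules the timeout without waiting for it', () => {
  // Replace window.setTimeout with a spy so no real timer is created
  const timeoutSpy = spyOn(window, 'setTimeout');

  counter.getTimeout();

  // Assert on the requested delay instead of waiting 3 seconds
  expect(timeoutSpy).toHaveBeenCalled();
  expect(timeoutSpy.calls.mostRecent().args[1]).toBe(3000);
});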
EDITED
I ran into a related issue where I wanted to clear the timers, and since it was not the part under test, I didn't care how long it took. I tried:
tick(Infinity);
Which worked, but was super hacky. I ended up going with
discardPeriodicTasks();
And all of my timers were cleared.
Try to add one or a combination of the following function calls to the end of your test:
flush();
flushMicrotasks();
discardPeriodicTasks();
or try to "kill" the pending tasks like this:
Zone.current.get('FakeAsyncTestZoneSpec').pendingTimers = [];
Zone.current.get('FakeAsyncTestZoneSpec').pendingPeriodicTimers = [];
flush (with an optional maxTurns parameter) also flushes macrotasks. (This function is not mentioned in the Angular testing tutorial.)
flushMicrotasks flushes the microtask queue.
discardPeriodicTasks cancels "periodic timer(s) still in the queue".
Clearing the pending (periodic) timers array in the current fake async zone is a way to avoid the error if nothing else helps.
Timers in the queue do not necessarily mean that there's a problem with your code. For example, components that observe the current time may introduce such timers. If you use such components from a foreign library, you might also consider stubbing them instead of "chasing timers".
For further understanding you may look at the javascript code of the fakeAsync function in zone-testing.js.
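For the spec in the question, a minimal sketch using flush (imported from @angular/core/testing alongside fakeAsync):
it('timeout (fakeAsync/tick)', fakeAsync(() => {
  counter.getTimeout();
  // Drain the pending setTimeout macrotask so the spec does not end
  // with "Error: 1 timer(s) still in the queue"
  flush();
}));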
At the end of each test add:
fixture.destroy();
flush();
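As a sketch, assuming a fakeAsync spec with a component fixture created via TestBed.createComponent:
it('runs its timers and tears down cleanly', fakeAsync(() => {
  fixture.detectChanges();
  tick(3000);        // advance past the component's setTimeout
  fixture.destroy(); // let the component cancel its own timers
  flush();           // drain anything still scheduled
}));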
Try this:
// I had to do this:
it('timeout (fakeAsync/tick)', (done) => {
fixture.whenStable().then(() => {
counter.getTimeout();
tick();
done();
});
});
Source
Async
test.service.ts
export class TestService {
getTimeout() {
setTimeout(() => { console.log("test") }, 3000);
}
}
test.service.spec.ts
import { TestBed, async } from '@angular/core/testing';
import { TestService } from './test.service';

describe("TestService", () => {
  let service: TestService;

  beforeEach(() => {
    TestBed.configureTestingModule({
      providers: [TestService],
    });
    service = TestBed.get(TestService);
  });

  it("timeout test", async(() => {
    service.getTimeout();
  }));
});
Fake Async
test.service.ts
export class TestService {
readonly WAIT_TIME = 3000;
getTimeout() {
setTimeout(() => { console.log("test") }, this.WAIT_TIME);
}
}
test.service.spec.ts
import { TestBed, fakeAsync, tick } from '@angular/core/testing';
import { TestService } from './test.service';

describe("TestService", () => {
  let service: TestService;

  beforeEach(() => {
    TestBed.configureTestingModule({
      providers: [TestService],
    });
    service = TestBed.get(TestService);
  });

  it("timeout test", fakeAsync(() => {
    service.getTimeout();
    tick(service.WAIT_TIME + 10);
  }));
});
I normally use the flushMicrotasks method in my unit tests for use with my services. I had read that tick() is very similar to flushMicrotasks but also calls the jasmine tick() method.
For me none of the above helped, but a double call of tick(<async_time>) in my test code did.
My explanation so far: every async call needs its own tick() call.
I have a .pipe(debounceTime(500)) and a timer(500).subscribe(..) afterwards, and this helped:
tick(500);
tick(500);
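In context, a rough sketch (the component, its input$ subject and value property are hypothetical stand-ins for the debounced stream described above):
it('handles the debounce and the follow-up timer', fakeAsync(() => {
  component.input$.next('x'); // hypothetical trigger for the .pipe(debounceTime(500)) stream
  tick(500);                  // first tick: flush the debounceTime(500)
  tick(500);                  // second tick: flush the subsequent timer(500)
  expect(component.value).toBe('x'); // hypothetical assertion
}));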
Say I have a Component that renders a simple div containing an integer that starts at 0 and ticks up by 1 every second. So after 5 seconds the component should render "5", after 30 seconds it should render "30", and so on. If I wanted to test this component and make sure it's rendering what it should after 5 seconds, I might write something like this:
it('should render <5> after 5 seconds', () => {
const time = mount(<Timer/>)
setTimeout(() => {
expect(time.text()).toEqual('5')
}, 5000)
})
However, this doesn't work, as the test never actually runs the expect and returns a pass regardless. And even if it did work, using a timeout like this would be extremely inefficient, as the test would have to wait 5 real seconds. And what if I wanted to simulate an even larger amount of time? After doing some searching, I found Jest actually has a timer mock, but I can't seem to figure out how to implement it for this case. Any help would be greatly appreciated. Thanks!
In order for Jest to know that your test is async, you need to either: 1) return a Promise, or 2) declare an argument in the test callback, like so:
it('should render <5> after 5 seconds', done => {
// test stuff
});
source: https://facebook.github.io/jest/docs/en/asynchronous.html
To get your test working:
it('should render <5> after 5 seconds', done => {
jest.useFakeTimers(); // this must be called before any async things happen
const time = mount(<Timer/>);
setTimeout(() => {
// Note the placement of this try/catch is important.
// You'd think it could be placed most anywhere, but nope...
try {
// If this assertion fails, an err is thrown.
// If we do not catch()...done.fail(e) it,
// then this test will take jasmine.DEFAULT_TIMEOUT_INTERVAL (5s unless you override)
// and then fail with an unhelpful message.
expect(time.text()).toEqual('5');
done();
} catch(e) {
done.fail(e);
}
}, 5000);
jest.runTimersToTime(5000); // This basically fast forwards 5s (newer Jest versions call this jest.advanceTimersByTime).
});
You must call jest.useFakeTimers(); before requiring your component in order to mock setTimeout.
Here we enable fake timers by calling jest.useFakeTimers();. This mocks out setTimeout and other timer functions with mock functions.
And to test async workflows Jest documentation says you have to return a Promise for each 'it' function callback.
To test an asynchronous function, just return a promise from it. When running tests, Jest will wait for the promise to resolve before letting the test complete. You can also return a promise from beforeEach, afterEach, beforeAll or afterAll functions. For example, let's say fetchBeverageList() returns a promise that is supposed to resolve to a list that has lemon in it. You can test this with:
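For reference, a sketch of the promise-returning style the quoted docs describe (fetchBeverageList is the docs' own hypothetical helper):
test('the beverage list has lemon in it', () => {
  // Returning the promise makes Jest wait for it to resolve before ending the test
  return fetchBeverageList().then(list => {
    expect(list).toContain('lemon');
  });
});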
So you can write something like:
it('should render <5> after 5 seconds', () => {
jest.useFakeTimers();
//only after that, require your componenet
let Timer = require('./timer');
const time = mount(<Timer/>);
jest.runAllTimers();
expect(setTimeout.mock.calls.length).toBe(1);
expect(setTimeout.mock.calls[0][1]).toBe(5000);
expect(time.text()).toEqual('5');
})