I'm using Karma with Mocha, Chai and Sinon to test code in a project using this boilerplate. The Subject Under Test uses the Speech Synthesis API.
I start by defining window.speechSynthesis.getVoices in a beforeEach hook:
beforeEach(() => {
  global.window.speechSynthesis = {
    getVoices: () => (null),
  };
});
Then I have two test cases; in each one I want to test what happens when a different set of voices is returned. To accomplish this I'm using Sinon stubs.
First test case:
it('supports speech and locale', () => {
  const getVoicesStub = sinon.stub(
    global.window.speechSynthesis,
    'getVoices');
  getVoicesStub.callsFake(() => (
    [{lang: 'en_US'}]
  ));
  // test case code ..
});
Second test case:
it('will choose best matching locale', () => {
  const getVoicesStub = sinon.stub(
    global.window.speechSynthesis,
    'getVoices');
  getVoicesStub.callsFake(() => (
    [{lang: 'es_MX'}, {lang: 'es_US'}]
  ));
  // test case code ..
});
The problem is, when the SUT calls window.speechSynthesis.getVoices during the second test case, it's getting the results from the first stub. It's as if the second stub is doing nothing...
If I comment out the first test case, the second test case succeeds, but if I leave them both in, the second one fails because the wrong set of voices is returned.
Any idea how to get the second stub to work as expected?
Your stub is not destroyed between tests. You need to restore the original function after each test and create your stub only once, in before():
describe("Test suite", () => {
let getVoicesStub;
before(() => {
// executes before suite starts
global.window.speechSynthesis = {
getVoices: () => (null),
};
getVoicesStub = sinon.stub(global.window.speechSynthesis, 'getVoices');
});
afterEach(() => {
// executes after each test
getVoicesStub.restore();
});
it('supports speech and locale', () => {
getVoicesStub.callsFake(() => ([{lang: 'en_US'}]));
});
it('will choose best matching locale', () => {
getVoicesStub.callsFake(() => ([{lang: 'es_MX'}, {lang: 'es_US'}]));
});
});
First off, big-ups to @Troopers. Just adding this answer to share the final solution and details I noticed along the way.
The real trick was adding a test-suite-level variable, let getVoicesStub, then defining an afterEach hook to restore the original function:
afterEach(() => {
  getVoicesStub.restore();
});
A subtle caveat to @Troopers' suggestion about defining the stub in a before hook:
If the stub is defined outside of the test cases, I have to use beforeEach; if the stub is defined within the test cases, I have to use before (a sketch of that variant follows the code below).
In both cases, the afterEach is critical! I've settled on the beforeEach solution, since the stub is then defined in only one place and there's slightly less code.
describe('Browser Speech', () => {
  let getVoicesStub;
  beforeEach(() => {
    global.window.speechSynthesis = {
      getVoices: () => (null),
    };
    getVoicesStub = sinon.stub(
      global.window.speechSynthesis,
      'getVoices');
  });
  afterEach(() => {
    getVoicesStub.restore();
  });
  it('supports speech and locale', () => {
    getVoicesStub.callsFake(() => (
      [{lang: 'en_US'}]
    ));
    // test case code ..
  });
  it('will choose best matching locale', () => {
    getVoicesStub.callsFake(() => (
      [{lang: 'es_MX'}, {lang: 'es_US'}]
    ));
    // test case code ..
  });
});
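For reference, here is a sketch of the alternative variant mentioned above (a rearrangement of the code in this answer, not something from the original post): the fake speechSynthesis object is set up once in before(), the stub is created inside each test case, and afterEach still restores it.
describe('Browser Speech', () => {
  let getVoicesStub;
  before(() => {
    // set up the fake speechSynthesis object once for the whole suite
    global.window.speechSynthesis = {
      getVoices: () => (null),
    };
  });
  afterEach(() => {
    // still critical: remove the stub so the next test can create its own
    getVoicesStub.restore();
  });
  it('supports speech and locale', () => {
    getVoicesStub = sinon.stub(global.window.speechSynthesis, 'getVoices');
    getVoicesStub.callsFake(() => ([{lang: 'en_US'}]));
    // test case code ..
  });
  it('will choose best matching locale', () => {
    getVoicesStub = sinon.stub(global.window.speechSynthesis, 'getVoices');
    getVoicesStub.callsFake(() => ([{lang: 'es_MX'}, {lang: 'es_US'}]));
    // test case code ..
  });
});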
I'm testing two modules: helper, which makes use of render. It's possible for render to throw, so I handle that in helper, and I want tests to ensure that handling works as expected.
When I originally wrote the tests, I wrote what was needed for each test in the test itself, including mocks, using jest.doMock. Once all the tests passed, I wanted to refactor to share mocks where possible.
So this code works great:
test('throws', async () => {
  jest.doMock('./render', () => jest.fn(async () => { throw new Error('mock error'); }));
  const helper = require('./helper');
  await expect(helper()).rejects.toThrow('mock error');
  expect(log_bug).toHaveBeenCalled();
});
test('succeeds', async () => {
  jest.doMock('./render', () => jest.fn(async () => 'rendered result'));
  const helper = require('./helper');
  expect(await helper()).toEqual(true); // helper uses the rendered result but doesn't return it
  expect(log_bug).not.toHaveBeenCalled();
});
However, these are not the only two tests, and by far most of the other tests that mock render want it to return its success state. I tried to refactor that success use case out to a file in __mocks__/render.js like so:
// __mocks__/render.js
module.exports = jest.fn(async () => 'rendered result');
And then refactored my tests like this, to be more DRY:
//intention: shared reusable "success" mock for render module
jest.mock('./render');
beforeEach(() => {
  jest.resetModules();
  jest.resetAllMocks();
});
test('throws', async () => {
  //intention: overwrite the "good" render mock with one that throws
  jest.doMock('./render', () => jest.fn(async () => { throw new Error('mock error'); }));
  const helper = require('./helper');
  await expect(helper()).rejects.toThrow('mock error');
  expect(log_bug).toHaveBeenCalled();
});
test('succeeds', async () => {
  //intention: go back to using the "good" render mock
  const helper = require('./helper');
  expect(await helper()).toEqual(true); // helper uses the rendered result but doesn't return it
  expect(log_bug).not.toHaveBeenCalled();
});
With this updated test code, the error-logging test still works as expected -- the mock is overwritten to cause it to throw -- but then for the next test, the error is thrown again.
If I reverse the order of these tests so that the mock overwriting is last, then the failure doesn't happen, but that is clearly not the correct answer.
What am I doing wrong? Why can't I get my mock to properly reset after overriding it with doMock? The doMock docs do kind of illustrate what I'm trying to do, but they don't show mixing it with normal manual mocks.
Aha! I kept digging around and found this somewhat similar Q+A, which led me to try this approach instead of using jest.doMock to override inside of a test:
//for this one test, overwrite the default mock to throw instead of succeed
const render = require('./render');
render.mockImplementation(async () => {
  throw new Error('mock error');
});
And with this, the tests pass no matter what order they run!
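For completeness, here is a sketch of how the rewritten 'throws' test might look as a whole (same './render' and './helper' modules as above; the exact surrounding setup is assumed rather than taken from the original post):
test('throws', async () => {
  // pull in the shared manual mock from __mocks__/render.js and override it for this test only
  const render = require('./render');
  render.mockImplementation(async () => {
    throw new Error('mock error');
  });
  const helper = require('./helper');
  await expect(helper()).rejects.toThrow('mock error');
  expect(log_bug).toHaveBeenCalled();
});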
I have a service that creates an observable that emits values, and it's relatively easy to test that an emitted value is as expected.
For example:
describe('service', () => {
  beforeEach(() => {
    TestBed.configureTestingModule({providers: [MyService]});
  });
  it('should emit true', async(() => {
    const service = TestBed.get(MyService);
    service.values$.subscribe((value) => expect(value).toBeTruthy());
  }));
});
The above works to test the expected value, but only if the service actually emits a value. In the edge case where the service never emits, the test still passes and Jasmine logs the warning "SPEC HAS NO EXPECTATIONS" followed by the spec name.
I searched Google for a while trying to figure out how to catch this case as an error and came up with this approach.
it('should emit true', async(() => {
  const service = TestBed.get(MyService);
  let value;
  service.values$.subscribe((v) => value = v);
  expect(value).toBeTruthy();
}));
The above works only for synchronous observables, and it feels like a code smell to me. Another developer will see this and think it's a poor-quality test.
So after thinking about this for a few days, I thought of using takeUntil() to force the observable to complete and then test the expected result.
For example:
describe('service', () => {
  let finished: Subject<void>;
  beforeEach(() => {
    TestBed.configureTestingModule({providers: [MyService]});
    finished = new Subject();
  });
  afterEach(() => {
    finished.next();
    finished.complete();
  });
  it('should emit true', async(() => {
    const service = TestBed.get(MyService);
    let value;
    service.changes$
      .pipe(
        takeUntil(finished),
        finalize(() => expect(value).toBeTruthy())
      )
      .subscribe((v) => value = v);
  }));
});
In the above example the value is being stored in a local variable and then the expected result is checked when the observable completes. I force the completion by using afterEach() with takeUntil().
Question:
Are there any side effects to my approach, and if so, what would be the more Angular/Jasmine way of performing these kinds of tests? I am worried that you are not supposed to perform expect assertions during the afterEach() lifecycle call.
This seems like overkill to me.
Jasmine offers a done callback in its tests; you could simply use it:
it('should X', doneCallback => {
  myObs.subscribe(res => {
    expect(x).toBe(y);
    doneCallback();
  });
});
If the callback isn't called, the test fails with a timeout error instead of passing silently.
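Applied to the original service example, that might look like this (a sketch assuming the same MyService and values$ observable from the question):
it('should emit true', (done) => {
  const service = TestBed.get(MyService);
  service.values$.subscribe((value) => {
    expect(value).toBeTruthy();
    done(); // if values$ never emits, the spec times out and fails instead of passing silently
  });
});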
I have the following test in my React Native application, but the test should fail (because the action returned is not equal to the action I put in expectedActions). My guess is that it is passing because the expect runs after the test has already completed.
How can I force the test to wait until the promise has completed and the expect assertion runs? Is there another way of doing this?
describe('authorize actions', () => {
  beforeEach(() => {
    store = mockStore({});
  });
  it('should create an action to signify successful auth', () => {
    auth.authorize.mockImplementation(() => Promise.resolve({"something": "value"}));
    const expectedActions = [{"type":"AUTHORIZE_RESPONSE","payload":{"something":"sdklfjsdf"}}];
    authorizeUser(store.dispatch, store.state).then(() => {
      expect(store.getActions()).toEqual(expectedActions);
    });
  })
});
OK, looks like I just missed some of the Jest docs: if you return the promise from the test, i.e. return authorizeUser(store.dispatch, store.state).then(...), then Jest will wait until it has completed before continuing.
There are various ways to test async code. Check the docs for examples: https://facebook.github.io/jest/docs/en/asynchronous.html
One is returning the promise:
describe('authorize actions', () => {
  beforeEach(() => {
    store = mockStore({});
  });
  it('should create an action to signify successful auth', () => {
    auth.authorize.mockImplementation(() => Promise.resolve({"something": "value"}));
    const expectedActions = [{"type":"AUTHORIZE_RESPONSE","payload":{"something":"sdklfjsdf"}}];
    return authorizeUser(store.dispatch, store.state).then(() => {
      expect(store.getActions()).toEqual(expectedActions);
    });
  })
});
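Another option (not shown in the answer above, but standard Jest async support) is async/await, which reads a bit flatter:
it('should create an action to signify successful auth', async () => {
  auth.authorize.mockImplementation(() => Promise.resolve({"something": "value"}));
  const expectedActions = [{"type":"AUTHORIZE_RESPONSE","payload":{"something":"sdklfjsdf"}}];
  // awaiting the promise ensures the assertion below runs after the dispatches happen
  await authorizeUser(store.dispatch, store.state);
  expect(store.getActions()).toEqual(expectedActions);
});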
I have a manual mock of crypto that looks like this:
// __mocks__/crypto.js
const crypto = jest.genMockFromModule('crypto')
const toString: Function = jest.fn(() => {
  return {}.toString()
})
const mockStringable = {toString}
const update: Function = jest.fn(() => mockStringable)
const decipher = {update}
crypto.createDecipheriv = jest.fn(() => decipher)
export default crypto
Which is basically tested like this:
const crypto = require('crypto')
jest.mock('crypto')
describe('cookie-parser', () => {
  afterEach(() => {
    jest.resetAllMocks()
  })
  describe('decryptCookieValue', () => {
    it('should call the crypto library correctly', () => {
      const result = decryptCookieValue('test-encryption-key', 'test-encrypted-value')
      expect(crypto.pbkdf2Sync).toHaveBeenCalledTimes(2)
      expect(crypto.createDecipheriv).toHaveBeenCalled()
      // more tests, etc, etc, etc
      expect(crypto.createDecipheriv('', '', '').update).toHaveBeenCalled()
      expect(result).toEqual({}.toString())
    })
  })
...
This works; however, if in that same test file I test another method that internally invokes decryptCookieValue, crypto.createDecipheriv no longer returns my mock decipher. Instead it returns undefined. For instance:
describe('cookie-parser', () => {
  afterEach(() => {
    jest.resetAllMocks()
  })
  describe('decryptCookieValue', () => {
    it('should call the crypto library correctly', () => {
      const result = decryptCookieValue('test-encryption-key', 'test-encrypted-value')
      expect(crypto.pbkdf2Sync).toHaveBeenCalledTimes(2)
      expect(crypto.createDecipheriv).toHaveBeenCalled()
      expect(crypto.createDecipheriv('', '', '').update).toHaveBeenCalled()
      expect(result).toEqual({}.toString())
    })
  })
  ...
  ...
  describe('parseAuthenticationCookie', () => {
    it('should create the correct object', () => {
      // parseAuthenticationCookie calls decryptCookieValue internally
      const result = parseAuthenticationCookie('', '') // Fails because the internal call to crypto.createDecipheriv stops returning the mock decipher.
      expect(result).toEqual({accessToken: null})
    })
  })
})
I think this is an issue with resetting the manual mock because if I take that later test and move it into a file all by itself with the same surrounding test harness it works just fine.
// new test file
import crypto from 'crypto'
import { parseAuthenticationCookie } from './index'
jest.mock('crypto')
describe('cookie-parser', () => {
  afterEach(() => {
    jest.resetAllMocks()
  })
  describe('parseAuthenticationCookie', () => {
    it('should create the correct object', () => {
      // Works just fine now
      const result = parseAuthenticationCookie('', '')
      expect(result).toEqual({accessToken: null})
    })
  })
})
Is my assessment here correct and, if so, how do I reset the state of the manual mock after each test?
From Jest docs:
Does everything that mockFn.mockClear() does, and also removes any mocked return values or implementations.
ref: https://jestjs.io/docs/en/mock-function-api#mockfnmockreset
In your example you are assuming that calling resetAllMocks will restore your manual mock's implementation, but it does not.
The reason your test works in a separate file is that Jest runs each file in isolation, which is nice since you can only break the specs living in the same file.
In your particular case, something that might work is calling jest.clearAllMocks() instead, since this keeps implementations and return values.
A clearMocks option is also available in the Jest config object (false by default); if you want to clear all your mocks before every test, this can be handy.
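For example, a minimal jest.config.js sketch:
// jest.config.js
module.exports = {
  clearMocks: true, // clear mock calls and instances before every test (like jest.clearAllMocks())
};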
Hope this helps you or anyone else having a similar issue.
Bonus tip (not quite related): if you are mocking a module that is used internally by another module, and in some specific test you want to mock that module again with a different mock, make sure to re-require the consuming module inside that specific test; otherwise it will still reference the mock you specified next to the import statements.
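A small sketch of that bonus tip, reusing the hypothetical './render' and './helper' modules from earlier in this section (not code from the original answer):
test('uses a different render mock', () => {
  jest.resetModules(); // drop the cached module registry
  jest.doMock('./render', () => jest.fn(async () => { throw new Error('boom'); }));
  // re-require the consumer so it picks up the new './render' mock
  // instead of the one it captured next to the import statements
  const helper = require('./helper');
  // ...assert against helper() here
});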
Looks like the better way to test this is something along the lines of:
jest.mock('crypto')
describe('decrypt()', () => {
  afterEach(() => {
    jest.resetAllMocks()
  })
  it('returns value', () => {
    const crypto = require('crypto')
    const encryptedValue = 'encrypted-value'
    const update = jest.fn()
    const pbkdf2SyncResult = 'test result'
    crypto.pbkdf2Sync = jest.fn().mockImplementation(() => {
      return pbkdf2SyncResult
    })
    crypto.createDecipheriv = jest.fn().mockImplementation((format, key, iv) => {
      expect(format).toEqual('aes-256-cbc')
      expect(key).toEqual(pbkdf2SyncResult)
      expect(iv).toEqual(pbkdf2SyncResult)
      return {update}
    })
    decrypt(encryptedValue)
    const inputBuffer = Buffer.from(encryptedValue, 'base64')
    expect(update).toHaveBeenCalledWith(inputBuffer)
  })
})
This way I don't even need the manual mock, and I can use mockImplementationOnce if I need the mock to reset itself.
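To illustrate that last point (this snippet is an assumption, not code from the original answer): mockImplementationOnce applies only to the next call, so the override effectively cleans itself up.
const crypto = require('crypto')
const update = jest.fn()
crypto.createDecipheriv = jest.fn()
  .mockImplementationOnce(() => ({update})) // only the next call gets this implementation
decrypt('encrypted-value')                  // consumes the one-time implementation
// any later call to crypto.createDecipheriv falls back to the default mock (undefined here)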
I'd like to change the implementation of a mocked dependency on a per single test basis by extending the default mock's behaviour and reverting it back to the original implementation when the next test executes.
More briefly, this is what I'm trying to achieve:
Mock dependency
Change/extend mock implementation in a single test
Revert back to original mock when next test executes
I'm currently using Jest v21. Here is what a typical test would look like:
// __mocks__/myModule.js
const myMockedModule = jest.genMockFromModule('../myModule');
myMockedModule.a = jest.fn(() => true);
myMockedModule.b = jest.fn(() => true);
export default myMockedModule;
// __tests__/myTest.js
import myMockedModule from '../myModule';
// Mock myModule
jest.mock('../myModule');
beforeEach(() => {
  jest.clearAllMocks();
});
describe('MyTest', () => {
  it('should test with default mock', () => {
    myMockedModule.a(); // === true
    myMockedModule.b(); // === true
  });
  it('should override myMockedModule.b mock result (and leave the other methods untouched)', () => {
    // Extend/change mock
    myMockedModule.a(); // === true
    myMockedModule.b(); // === 'overridden'
    // Restore mock to original implementation with no side effects
  });
  it('should revert back to default myMockedModule mock', () => {
    myMockedModule.a(); // === true
    myMockedModule.b(); // === true
  });
});
Here is what I've tried so far:
mockFn.mockImplementationOnce(fn)
it('should override myMockedModule.b mock result (and leave the other methods untouched)', () => {
  myMockedModule.b.mockImplementationOnce(() => 'overridden');
  myMockedModule.a(); // === true
  myMockedModule.b(); // === 'overridden'
});
Pros
Reverts back to original implementation after first call
Cons
It breaks if the test calls b multiple times
If b is never called in the test, the one-time implementation isn't consumed and leaks into the next test
jest.doMock(moduleName, factory, options)
it('should override myModule.b mock result (and leave the other methods untouched)', () => {
  jest.doMock('../myModule', () => {
    return {
      a: jest.fn(() => true),
      b: jest.fn(() => 'overridden'),
    };
  });
  myModule.a(); // === true
  myModule.b(); // === 'overridden'
});
Pros
Explicitly re-mocks on every test
Cons
Cannot define default mock implementation for all tests
Cannot extend the default implementation, forcing you to re-declare each mocked method
Manual mocking with setter methods (as explained here)
// __mocks__/myModule.js
const myMockedModule = jest.genMockFromModule('../myModule');
let a = true;
let b = true;
myMockedModule.a = jest.fn(() => a);
myMockedModule.b = jest.fn(() => b);
myMockedModule.__setA = (value) => { a = value };
myMockedModule.__setB = (value) => { b = value };
myMockedModule.__reset = () => {
  a = true;
  b = true;
};
export default myMockedModule;
// __tests__/myTest.js
it('should override myModule.b mock result (and leave the other methods untouched)', () => {
  myModule.__setB('overridden');
  myModule.a(); // === true
  myModule.b(); // === 'overridden'
  myModule.__reset();
});
Pros
Full control over mocked results
Cons
Lot of boilerplate code
Hard to maintain in the long term
jest.spyOn(object, methodName)
beforeEach(() => {
  jest.clearAllMocks();
  jest.restoreAllMocks();
});
// Mock myModule
jest.mock('../myModule');
it('should override myModule.b mock result (and leave the other methods untouched)', () => {
  const spy = jest.spyOn(myMockedModule, 'b').mockImplementation(() => 'overridden');
  myMockedModule.a(); // === true
  myMockedModule.b(); // === 'overridden'
  // How to get back to original mocked value?
});
Cons
I can't revert mockImplementation back to the original mocked return value, which affects the next tests
Use mockFn.mockImplementation(fn).
import { funcToMock } from './somewhere';
jest.mock('./somewhere');
beforeEach(() => {
  funcToMock.mockImplementation(() => { /* default implementation */ });
  // (funcToMock as jest.Mock)... in TS
});
test('case that needs a different implementation of funcToMock', () => {
  funcToMock.mockImplementation(() => { /* implementation specific to this test */ });
  // (funcToMock as jest.Mock)... in TS
  // ...
});
A nice pattern for writing tests is to create a setup factory function that returns the data you need for testing the current module.
Below is some sample code following your second example, although it allows default and override values to be provided in a reusable way.
const spyReturns = returnValue => jest.fn(() => returnValue);
describe("scenario", () => {
  beforeEach(() => {
    jest.resetModules();
  });
  const setup = (mockOverrides) => {
    const mockedFunctions = {
      a: spyReturns(true),
      b: spyReturns(true),
      ...mockOverrides
    }
    jest.doMock('../myModule', () => mockedFunctions)
    return {
      mockedModule: require('../myModule')
    }
  }
  it("should return true for module a", () => {
    const { mockedModule } = setup();
    expect(mockedModule.a()).toEqual(true)
  });
  it("should return override for module a", () => {
    const EXPECTED_VALUE = "override"
    const { mockedModule } = setup({ a: spyReturns(EXPECTED_VALUE)});
    expect(mockedModule.a()).toEqual(EXPECTED_VALUE)
  });
});
It's important to note that you must reset modules that have been cached, using jest.resetModules(). This can be done in beforeEach or a similar setup hook.
See jest object documentation for more info: https://jestjs.io/docs/jest-object.
A little late to the party, but in case someone else is having issues with this:
We use TypeScript, ES6 and Babel for react-native development.
We usually mock external NPM modules in the root __mocks__ directory.
I wanted to override a specific function of the Auth class from aws-amplify for a specific test.
import { Auth } from 'aws-amplify';
import GetJwtToken from './GetJwtToken';
...
it('When idToken should return "123"', async () => {
  const spy = jest.spyOn(Auth, 'currentSession').mockImplementation(() => ({
    getIdToken: () => ({
      getJwtToken: () => '123',
    }),
  }));
  const result = await GetJwtToken();
  expect(result).toBe('123');
  spy.mockRestore();
});
Gist:
https://gist.github.com/thomashagstrom/e5bffe6c3e3acec592201b6892226af2
Tutorial:
https://medium.com/p/b4ac52a005d#19c5
When mocking a single method (when it's required to leave the rest of a class/module implementation intact), I found the following approach helpful for resetting any implementation tweaks made in individual tests.
I found this approach to be the most concise one, with no need to jest.mock something at the beginning of the file, etc. You need only the code you see below to mock MyClass.methodName. Another advantage is that by default spyOn keeps the original method implementation while also recording all the stats (number of calls, arguments, results, etc.) to test against, and keeping the default implementation is a must in some cases. So you have the flexibility to keep the default implementation or to change it with a simple addition of .mockImplementation, as shown in the code below.
The code is in TypeScript, with comments highlighting the difference for plain JS (the difference is in one line, to be precise). Tested with Jest 26.6.
describe('test set', () => {
  let mockedFn: jest.SpyInstance<void>; // void is the return value of the mocked function, change as necessary
  // For plain JS use just: let mockedFn;
  beforeEach(() => {
    mockedFn = jest.spyOn(MyClass.prototype, 'methodName');
    // Use the following instead if you need not just to spy but also to replace the default method implementation:
    // mockedFn = jest.spyOn(MyClass.prototype, 'methodName').mockImplementation(() => {/* custom implementation */});
  });
  afterEach(() => {
    // Reset to the original method implementation (non-mocked) and clear all the mock data
    mockedFn.mockRestore();
  });
  it('does first thing', () => {
    /* Test with the default mock implementation */
  });
  it('does second thing', () => {
    mockedFn.mockImplementation(() => {/* custom implementation just for this test */});
    /* Test utilising this custom mock implementation. It is reset after the test. */
  });
  it('does third thing', () => {
    /* Another test with the default mock implementation */
  });
});
I did not manage to define the mock inside the test itself, so I discovered that I could mock several results for the same service mock like this:
jest.mock("#/services/ApiService", () => {
return {
apiService: {
get: jest.fn()
.mockResolvedValueOnce({response: {value:"Value", label:"Test"}})
.mockResolvedValueOnce(null),
}
};
});
I hope it'll help someone :)
Here's a very cool approach I discovered on this blog: https://mikeborozdin.com/post/changing-jest-mocks-between-tests/
import { sayHello } from './say-hello';
import * as config from './config';
jest.mock('./config', () => ({
  __esModule: true,
  CAPITALIZE: null
}));
describe('say-hello', () => {
  test('capitalizes name if config requires that', () => {
    config.CAPITALIZE = true;
    expect(sayHello('john')).toBe('Hi, John');
  });
  test('does not capitalize name if config does not require that', () => {
    config.CAPITALIZE = false;
    expect(sayHello('john')).toBe('Hi, john');
  });
});