Jest spyOn method without mockImplementation invocation is mocking implementation - javascript

According to the Jest documentation, multiple posts on the web, and my own belief, using jest.spyOn(foo, "bar") wraps the specified function with methods that let us perform assertions on foo.bar without changing the actual implementation. To change the implementation we would need to call mockImplementation etc.
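For reference, a minimal sketch of that documented call-through behaviour (the foo object here is hypothetical):

test("spyOn records calls but keeps the implementation", () => {
  const foo = { bar: () => 42 };
  const spy = jest.spyOn(foo, "bar");

  expect(foo.bar()).toBe(42);     // the real implementation still runs
  expect(spy).toHaveBeenCalled(); // and the call was recorded
});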
However, I am experiencing an issue where using jest.spyOn(foo, "bar") is clearly behaving like a jest.fn() mock.
/**
 * Test causing the issue
 */
test("when invoked with custom options does not override include locations setting", async () => {
  const defaultQueryConfig = { include: { locations: true } }
  const findFirstSpy = jest.spyOn(prismaConnection.event, "findFirst")
  await findOneEventByIdWithLocation(666, { include: { locations: false } })
  expect(findFirstSpy).toHaveBeenCalledTimes(1)
  expect(findFirstSpy).toHaveBeenCalledWith(expect.objectContaining(defaultQueryConfig))
})
/**
 * Another test that runs later in the suite fails because,
 * instead of the object it expects, the result is `undefined`.
 */
test("when event locations are requested it returns events with related locations", async () => {
  const events = await findManyEventsWithLocations()
  const locations = events.data.find((e) => e.id === 4)?.locations[0]
  expect(locations).toEqual({
    ...eventLocation,
    id: 1,
    eventId: 4,
    createdAt: expect.any(Date),
    updatedAt: expect.any(Date)
  })
})
However, if I add a findFirstSpy.mockRestore() statement after the expectations of the first test, the second test passes fine and dandy... But this only works if the first test passes, since the test never gets a chance to call mockRestore if an earlier assertion fails.
I could always wrap this first test in a "describe" with a "beforeEach"/"afterEach", but I don't think that should be the solution, considering jest.spyOn should not change the implementation unless explicitly requested.
Does anybody know why this happens?
[Screenshot: test results without findFirstSpy.mockRestore()]
[Screenshot: test results with findFirstSpy.mockRestore()]
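One plausible cause, consistent with these symptoms, is resetMocks: true in the Jest config (CRA's default, discussed further down this page): resetting a spy strips its call-through implementation, leaving a bare mock that returns undefined. Whatever the cause, registering the restore in an afterEach guarantees cleanup even when an assertion throws before an inline mockRestore() is reached. A minimal sketch:

afterEach(() => {
  // Runs even when a test body throws, so a spy from a failing
  // test can no longer leak into the next one.
  jest.restoreAllMocks();
});

// Or restore automatically for the whole project via the Jest config:
// module.exports = { restoreMocks: true };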

Related

It seems that the beforeEach hook is executing for another test class (Mocha testing framework)

I am not sure what is happening.
I am using the Mocha testing framework to learn writing tests for Solidity contracts.
I ran into strange behaviour that I have never seen before, even though I have some previous experience with Mocha (via Cypress).
What is the problem:
I have two testing classes.
The first class:
//01. Call library modules.
const assert = require('assert'); // Declare an assertion library module.
const { beforeEach } = require('mocha');
//02. Create an example class with some logic inside.
class exampleClass {
  func1() {
    return 'random value 1';
  }
  func2() {
    return 'random_value_2';
  }
}
//03. Declare global variables.
let exam;
//04. Declare the "beforeEach" function. This function will be executed before every "it" test.
beforeEach(() => {
  exam = new exampleClass(); // Instantiate exampleClass.
});
//05. Create a test suite using the "describe" function.
describe('name of the describe', () => {
  //06. Add tests using the "it" function.
  it('This example shows how the assert will pass', () => {
    assert.equal(exam.func1(), 'random value 1'); // Make an assertion to verify that the two values are equal.
  });
  xit('This example shows how the assert would fail, but the test is skipped because the "x" prefix is added to "it".', () => {
    assert.equal(exam.func1(), 'random value'); // Make an assertion to verify that the two values are equal.
  });
  it('This example shows how the assert will pass', () => {
    assert.equal(exam.func2(), 'random_value_2'); // Make an assertion to verify that the two values are equal.
  });
});
The first class is a basic example of using the Mocha testing framework.
The second class:
//01. Call library modules.
const ganache = require('ganache'); // Declare a ganache library module.
const Web3 = require('web3'); // Declare a web3 library module.
//02. Create instances.
const web3 = new Web3(ganache.provider()); // Declare a Web3 instance backed by the ganache provider.
//03. Declare the "beforeEach" function. This function will be executed before every "it" test.
beforeEach(() => {
  // Get a list of all accounts. The promise is returned so Mocha waits for it;
  // we use then() because "getAccounts()" returns a promise.
  return web3.eth.getAccounts()
    .then(fetchedAccounts => {
      console.log(fetchedAccounts);
    });
});
//04. Create a test suite using the "describe" function.
describe('Generate fake eth accounts example 1', () => {
  //05. Add tests using the "it" function.
  it('A0.1. example 1', () => {
  });
});
The second class is an example of the generation of fake data (fake eth accounts).
If I execute the classes separately - everything is working as expected.
But if I execute all tests (both classes) - it seems that the "beforeEach" hook from class 2 is applying to class 1 for some reason.
As a result, the console printed the fake generated accounts three times: three because three tests ran. I am not sure why this is happening.
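For what it's worth, this is documented Mocha behaviour rather than a bug: hooks declared outside of any describe are root-level hooks, and root-level hooks run before every test in the process, across all spec files loaded into the same Mocha run. A minimal sketch of the fix, scoping the hook to the suite it belongs to:

//03. Declare "beforeEach" inside the suite so it only applies to these tests.
describe('Generate fake eth accounts example 1', () => {
  let accounts;
  beforeEach(async () => {
    accounts = await web3.eth.getAccounts(); // Awaited so Mocha waits for the hook.
  });
  //04. Add tests using the "it" function.
  it('A0.1. example 1', () => {
    console.log(accounts);
  });
});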

Create React App changes behaviour of jest.fn() when mocking async function

I am confused about the below behaviour of jest.fn() when run from a clean CRA project created using npx create-react-app jest-fn-behaviour.
Example:
describe("jest.fn behaviour", () => {
const getFunc = async () => {
return new Promise((res) => {
setTimeout(() => {
res("some-response");
}, 500)
});;
}
const getFuncOuterMock = jest.fn(getFunc);
test("works fine", async () => {
const getFuncInnerMock = jest.fn(getFunc);
const result = await getFuncInnerMock();
expect(result).toBe("some-response"); // passes
})
test("does not work", async () => {
const result = await getFuncOuterMock();
expect(result).toBe("some-response"); // fails - Received: undefined
})
});
The above test will work as expected in a clean JavaScript project but not in a CRA project.
Can someone please explain why the second test fails? It appears to me that when mocking an async function jest.fn() will not work as expected when called within a non-async function (e.g. describe above). It will work only when called within an async function (test above). But why would CRA alter the behaviour in such a way?
The reason for this is, as I mentioned in another answer, that CRA's default Jest setup includes the following line:
resetMocks: true,
Per the Jest docs, that means (emphasis mine):
Automatically reset mock state before every test. Equivalent to
calling jest.resetAllMocks() before each test. This will lead to
any mocks having their fake implementations removed but does not
restore their initial implementation.
As I pointed out in the comments, your mock is created at test discovery time, when Jest is locating all of the specs and calling the describe (but not it/test) callbacks, not at execution time, when it calls the spec callbacks. Therefore its mock implementation is pointless, as it's cleared before any test gets to run.
Instead, you have three options:
As you've seen, creating the mock inside the test itself works. Reconfiguring an existing mock inside the test would also work, e.g. getFuncOuterMock.mockImplementation(getFunc) (or just getFuncOuterMock.mockResolvedValue("some-response")).
You could move the mock creation and/or configuration into a beforeEach callback; these are executed after all the mocks get reset:
describe("jest.fn behaviour", () => {
let getFuncOuterMock;
// or `const getFuncOuterMock = jest.fn();`
beforeEach(() => {
getFuncOuterMock = jest.fn(getFunc);
// or `getFuncOuterMock.mockImplementation(getFunc);`
});
...
});
resetMocks is one of CRA's supported keys for overriding Jest configuration, so you could add:
"jest": {
"resetMocks": false
},
into your package.json.
However, note that this can lead to false positive tests where you expect(someMock).toHaveBeenCalledWith(some, args) and it passes due to an interaction with the mock in a different test. If you're going to disable the automatic resetting, you should also change the implementation to create the mock in beforeEach (i.e. the let getFuncOuterMock; example in option 2) to avoid state leaking between tests.
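A hedged illustration of that false positive, with a hypothetical mock created once in the describe body: because nothing resets it, the call recorded in the first test is still visible in the second:

const mock = jest.fn();

test("first test interacts with the mock", () => {
  mock("some", "args");
});

test("second test passes despite never touching the mock", () => {
  // Leaked state: this call was actually made by the previous test.
  expect(mock).toHaveBeenCalledWith("some", "args");
});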
Note that this is nothing to do with sync vs. async, or anything other than mock lifecycle; you'd see the same behaviour with the following example in a CRA project (or a vanilla JS project with the resetMocks: true Jest configuration):
describe("the problem", () => {
const mock = jest.fn(() => "foo");
it("got reset before I was executed", () => {
expect(mock()).toEqual("foo");
});
});
● the problem › got reset before I was executed
expect(received).toEqual(expected) // deep equality
Expected: "foo"
Received: undefined

Jasmine Async test generation

Let's imagine we have a promise that performs a large number of operations and returns helper functions.
A trivial example:
const testPromise = testFn => () => {
  const helper = Promise.resolve({ testHelper: () => 'an helper function' }); // I/O promise that returns a helper for testing
  return helper.then(testFn).finally(() => console.log('tear down'));
};

// This describe would work as expected
describe('Async test approach', () => {
  it('A test', testPromise(async ({ testHelper }) => {
    expect(testHelper()).toBe('an helper function');
  }));
});

// This part doesn't work
describe('Async describe approach', testPromise(async ({ testHelper }) => {
  it('Test 1', () => {
    expect(testHelper()).toBe('an helper function');
  });
  it('Test 2', () => {
    expect(testHelper()).not.toBe('A chair');
  });
}));
What I would like to achieve is something like the second example where I can use async code within describe without re-evaluating testPromise.
describe doesn't await async callbacks, so I am not even able to loop over I/O results and create dynamic tests properly.
I have read many comments saying that describe should only be a simple way to group tests, but then how can someone generate tests asynchronously, based on an I/O result?
Thanks
= ADDITIONAL CONSIDERATION =
Regarding all the comments you kindly added, I should have included a few additional details...
I am well aware that tests must be defined synchronously :), and that is exactly where the problem starts. I totally disagree with that constraint, and I am trying to find an alternative that avoids before/after hooks and does not rely on an external variable. Among the Jest issues there was an open one to address this; it seems they agreed that an async describe would be useful, but they won't do it, the reason being that Jest uses Jasmine's implementation of describe and this "fix" would have to happen there.
I wanted to avoid beforeAll and afterAll as much as I could. My purpose was to create an easy (and neat) way to define integration tests tailored to my needs, without making users worry about initialisation and tear-down. I will continue to use the Example 1 style above, which seems the best solution to me, even if it is clearly more verbose.
Take a look at Defining Tests. The doc says:
Tests must be defined synchronously for Jest to be able to collect your tests.
This is the principle for defining test cases: the it functions must be registered synchronously. That's why your second example doesn't work.
Some I/O operations should be done in the beforeAll, afterAll, beforeEach, afterEach hooks to prepare your test doubles and fixtures. The test should be isolated from the external environment as much as possible.
If you must do this, maybe you can write the dynamically obtained testHelper function to a static js file, and then test it in a synchronous way.
As it was noted, describe serves to group tests.
This can be achieved with beforeAll. Since beforeAll should be called anyway, it can be moved into testPromise:
const prepareHelpers = (testFn) => {
  beforeAll(() => {
    ...
    return helper.then(testFn);
  });
};

describe('Async describe approach', () => {
  let testHelper;
  prepareHelpers(helpers => { testHelper = helpers.testHelper });
  ...
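For completeness, a minimal runnable sketch of that pattern, assuming the same testHelper shape as in the question:

const prepareHelpers = (testFn) => {
  beforeAll(() => {
    // Hypothetical I/O promise resolving to the helpers.
    const helper = Promise.resolve({ testHelper: () => 'an helper function' });
    // Returning the promise makes the runner wait before executing the specs.
    return helper.then(testFn);
  });
};

describe('Async describe approach', () => {
  let testHelper;
  prepareHelpers(helpers => { testHelper = helpers.testHelper; });

  it('Test 1', () => {
    expect(testHelper()).toBe('an helper function');
  });
});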

Mocking a Node Module which uses chained function calls with Jest in Node

Allow me to note that a similar question to this one can be found here, but the accepted answer's solution did not work for me. There was another question along the same lines, whose answer suggested directly manipulating the function's prototypes, but that was equally unfruitful.
I am attempting to use Jest to mock this NPM Module, called "sharp". It takes an image buffer and performs image processing/manipulation operations upon it.
The actual implementation of the module in my codebase is as follows:
const sharp = require('sharp');

module.exports = class ImageProcessingAdapter {
  async processImageWithDefaultConfiguration(buffer, size, options) {
    return await sharp(buffer)
      .resize(size)
      .jpeg(options)
      .toBuffer();
  }
};
You can see that the module uses a chained function API, meaning the mock has to have each function return this.
The Unit Test itself can be found here:
jest.mock('sharp');
const sharp = require('sharp');
const ImageProcessingAdapter = require('./../../adapters/sharp/ImageProcessingAdapter');

test('Should call module functions with correct arguments', async () => {
  // Mock values
  const buffer = Buffer.from('a buffer');
  const size = { width: 10, height: 10 };
  const options = 'options';

  // SUT
  await new ImageProcessingAdapter().processImageWithDefaultConfiguration(buffer, size, options);

  // Assertions
  expect(sharp).toHaveBeenCalledWith(buffer);
  expect(sharp().resize).toHaveBeenCalledWith(size);
  expect(sharp().jpeg).toHaveBeenCalledWith(options);
});
Below are my attempts at mocking:
Attempt One
// __mocks__/sharp.js
module.exports = jest.genMockFromModule('sharp');
Result
Error: Maximum Call Stack Size Exceeded
Attempt Two
// __mocks__/sharp.js
module.exports = jest.fn().mockImplementation(() => ({
  resize: jest.fn().mockReturnThis(),
  jpeg: jest.fn().mockReturnThis(),
  toBuffer: jest.fn().mockReturnThis()
}));
Result
Expected mock function to have been called with:
[{"height": 10, "width": 10}]
But it was not called.
Question
I would appreciate any aid in figuring out how to properly mock this third-party module such that I can make assertions about the way in which the mock is called.
I have tried using sinon and proxyquire, and they don't seem to get the job done either.
Reproduction
An isolated reproduction of this issue can be found here.
Thanks.
Your second attempt is really close.
The only issue with it is that every time sharp gets called a new mocked object is returned with new resize, jpeg, and toBuffer mock functions...
...which means that when you test resize like this:
expect(sharp().resize).toHaveBeenCalledWith(size);
...you are actually testing a brand new resize mock function which hasn't been called.
To fix it, just make sure sharp always returns the same mocked object:
__mocks__/sharp.js
const result = {
  resize: jest.fn().mockReturnThis(),
  jpeg: jest.fn().mockReturnThis(),
  toBuffer: jest.fn().mockReturnThis()
};

module.exports = jest.fn(() => result);
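One caveat if the code under test actually consumes the returned value: in the real sharp API, toBuffer() resolves to a Buffer rather than returning the chainable instance. A sketch closer to that contract, with a hypothetical placeholder buffer:

const result = {
  resize: jest.fn().mockReturnThis(),
  jpeg: jest.fn().mockReturnThis(),
  // Resolve with a placeholder instead of returning the chainable object.
  toBuffer: jest.fn().mockResolvedValue(Buffer.from('processed'))
};

module.exports = jest.fn(() => result);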

How to increase the code coverage using istanbul in node.js

I am using Istanbul for code coverage, but I'm getting a very low coverage percentage, particularly in the model files.
Consider the following model file:
ModelA.js
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
var app = require('../server');
var db = require('../db/dbConnection');
var config = require('../configs/config');

const Schema1 = new Schema({ 'configurations': [] });

exports.save = function (aa, data, callback) {
  var logMeta = {
    file: 'models/modelA',
    function: 'save',
    data: {},
    error: {}
  };
  if (!aa) {
    return callback('aa is required');
  }
  global.logs[aa].log('info', 'AA: ' + aa, logMeta);
  db.connectDatabase(aa, function (error, mongoDB) {
    if (error) {
      logMeta.data['error'] = error;
      global.logs[aa].log('error', 'error', logMeta);
      return callback(error);
    }
    const ModelA = mongoDB.model('bbb', Schema1);
    ModelA.findOneAndUpdate({}, data, { upsert: true, new: true, runValidators: true }, function (error, result) {
      if (error) {
        logMeta.data['error'] = error;
        global.logs[aa].log('error', 'error', logMeta);
      } else {
        logMeta.data = {};
        logMeta.data['result'] = JSON.parse(JSON.stringify(result));
        global.logs[aa].log('info', 'result', logMeta);
      }
      callback(error, result);
    });
  });
};
TestA.js:
var should = require('should'),
    sinon = require('sinon'),
    ModelA = require("../models/ModelA");

describe('Model test', function () {
  it('Should save Model', function (done) {
    var todoMock = sinon.mock(new ModelA({ 'configurations': [] }));
    var todo = todoMock.object;
    todoMock
      .expects('save')
      .yields(null, 'SAVED');
    todo.save(function (err, result) {
      todoMock.verify();
      todoMock.restore();
      should.equal('SAVED', result, "Test fails due to unexpected result");
      done();
    });
  });
});
But I am getting a code coverage percentage of 20, so how can I increase it?
Also:
1. Do I have to mock db.connectDatabase? If yes, how can I achieve that?
2. Do I have to use a test DB to run all my unit tests, or do I have to assert?
3. Will code coverage work for unit tests or integration tests?
Please share your ideas. Thanks.
I have been using Istanbul to 100% code cover most of my client/server projects so I might have the answers you are looking for.
How does it work
Whenever you require a local file, it gets instrumented so that Istanbul can tell which of its parts are reached by your code.
Not only is the required file instrumented, your running test file is too.
However, while it's easy to cover the running test file, mocked classes and their code might never be executed.
todoMock.expects('save')
According to the Sinon documentation:
Overrides todo.save with a mock function and returns it.
If Istanbul instrumented the real save method, anything within its scope won't ever be reached, so you are actually testing that the mock works, not that your real code does.
This should answer your question "Will code coverage work for unit tests or integration tests?"
The answer is that it covers the code, which is the only thing you're interested in from a code coverage perspective. Covering Sinon itself is nobody's goal.
No need to assert ... but
Once you've understood how Istanbul works, it follows naturally that it doesn't matter whether you assert or not; all that matters is that you reach the code for real and execute it.
Asserting is just your guard against failures, not a mechanism Istanbul itself cares about. When an assertion fails, your test does too, so it's good to know that things didn't work, and there's no need to keep testing the rest of the code (early failure, faster fixes).
Whether you have to mock the db.connectDatabase
Yes, at least for the code you posted. You can assign db as a generic object mock on the global context and assert that its methods are called, but you can also simplify your life by writing this:
function createDB(err1, err2) {
  return {
    connectDatabase(aa, callback) {
      // Simulate the connection outcome via err1.
      callback(err1, {
        model(name, schema) {
          return {
            // Simulate the query outcome via err2.
            findOneAndUpdate($0, $1, $2, fn) {
              fn(err2, { any: 'object' });
            }
          };
        }
      });
    }
  };
}
global.db = createDB(null, null);
This code in your test file creates a global db that behaves differently according to the errors you pass along, giving you the ability to run the same test file several times with different expectations.
How to run the same test more than once
Once your test is completed, delete require.cache[require.resolve('../test/file')] and then require('../test/file') again.
Do this as many times as you need.
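A hedged sketch of that re-run trick, combined with the createDB helper above (the '../test/file' path is a placeholder):

// First run: happy path, no errors.
global.db = createDB(null, null);
require('../test/file');

// Second run: the connection fails, so the error branch is exercised.
delete require.cache[require.resolve('../test/file')];
global.db = createDB(new Error('connect failed'), null);
require('../test/file');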
When there is conditional feature detection
I usually run the tests several times, deleting global constructors in case these are patched with a fallback. I also usually store them so I can put them back later on.
When the code is obvious but shouldn't be reached
In case you have if (err) process.exit(1);, you rarely want to reach that part of the code. There are various comments understood by Istanbul that help you skip parts of the test, such as /* istanbul ignore if */ or /* istanbul ignore else */, or even the generic /* istanbul ignore next */.
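For example, a minimal sketch of skipping exactly that branch:

/* istanbul ignore if */
if (err) {
  // Deliberately excluded from coverage: this exit path is hard to trigger in a test.
  process.exit(1);
}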
Please think twice about whether it's just you being lazy or that part can really, safely, be skipped... I got bitten a couple of times by a badly handled error, which is a disaster, because when it happens is exactly when you most need your code to keep running and/or giving you all the info you need.
What is being covered?
Maybe you know this already, but the coverage/lcov-report/index.html file, which you can open right away in any browser, will show you all the parts that aren't covered by your tests.
