Set a time goal for a Jest test - javascript

I'm looking for a way to set a total time limit for a test in Jest, i.e. something like expect(somefunction()).toTake(1000). I'm aware that the second parameter of test is a timeout for async functions, but I'm specifically looking to test the performance of the entire function (both async and non-async parts) and have the test pass or fail based on the time it took to run the function.

Here is a comment in a related issue in the Jest repository that may help you: https://github.com/facebook/jest/issues/2694#issuecomment-411499373
Here is the code from that comment:
it('Should create 1000 objects pretty fast', async () => {
  var start = new Date()
  // Do expensive thing 1000 times
  var after_save_all = new Date()
  expect(after_save_all.getTime() - start.getTime()).toBeLessThanOrEqual(3000);
})
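If you want to reuse that pattern, one option is to wrap it in a small helper that awaits the function and asserts on the elapsed time. This is only a sketch: expectToTakeLessThan is a made-up helper name (not a Jest matcher), and somefunction() stands in for whatever you want to measure.

async function expectToTakeLessThan(fn, maxMs) {
  const start = Date.now();
  await fn(); // awaiting covers both the sync and async parts of the function
  expect(Date.now() - start).toBeLessThanOrEqual(maxMs);
}

it('runs somefunction() within 1000 ms', async () => {
  await expectToTakeLessThan(() => somefunction(), 1000);
});

Note that wall-clock assertions like this can be flaky on slow CI machines, so leave yourself some headroom in the limit.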

Related

Jest can't find module

I am just starting out with Jest, and I am trying to test this code that will change the textContent of an element after 1000ms:
const subtext = document.querySelector('.subtext');

function delayChangeText() {
  setTimeout(() => {
    subtext.textContent = "Dev";
  }, 1000);
}

subtext.addEventListener('load', delayChangeText);
This is what Jest returns:
FAIL js/app.test.js
● Test suite failed to run

  Cannot find module './delayChangeText' from 'js/app.test.js'

  > 1 | const delayChangeText = require('./delayChangeText');
      | ^
    2 |
    3 | test('Change the text after 1000 seconds', () => {
    4 | expect(delayChangeText().toBe(subtext.textContent = "Dev"));

    at Resolver.resolveModule (node_modules/jest-resolve/build/resolver.js:311:11)
    at Object.<anonymous> (js/app.test.js:1:1)

Test Suites: 1 failed, 1 total
Tests: 0 total
Snapshots: 0 total
Time: 0.846 s
I am still pretty new to testing, so I'm confident I made a pretty simple goof. Any help is much appreciated. Best regards.
The zeroth rule of testing is:
Code must be written such that it is testable
Not all code can be tested. Sometimes you have to change how your real code is written so that a testing framework can get its hands on the code.
I can see two critical problems.
First: I assume you didn't include the full content of your application, but it does not look like your app code exports the delayChangeText function, which means that other modules (such as your test suite) can't import it.
You may need to do something like module.exports = delayChangeText, or export default delayChangeText in your app code.
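With CommonJS, for instance, that could be as small as adding an export at the bottom of the file (a sketch; keep the rest of your code as it is):

// delayChangeText.js -- at the end of the file
module.exports = delayChangeText;

// or, if the project uses ES modules:
// export default delayChangeText;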
Second: your function is not a pure function. That is, it depends on stuff that's not passed to it explicitly, namely it expects that subtext is defined within its execution context.
It's not strictly required that all your functions be pure functions, and indeed sometimes it's not possible. But pure functions are usually much easier to test (as well as being easier to design and implement). Here's a pure version of your function:
function delayChangeText(element) {
  setTimeout(() => {
    element.textContent = "Dev";
  }, 1000);
}
You don't have to convert this to a pure function, but your code will break in the test unless your test suite takes steps to ensure that subtext is defined -- if it's undefined, accessing subtext.textContent will throw.
This is important for another reason: if this module's default export is the delayChangeText function, then it's probably not appropriate for the preceding subtext assignment to even be in the file. Which means that fixing the first problem ("it's not being exported") will naturally result in converting the function to a pure function. If you really want to avoid that, you can: you'll probably have to change how the function is imported in the test suite, to this:
const { delayChangeText } = require('./delayChangeText');
Finally (and you didn't ask about this -- yet): you probably don't want this test to have to actually wait 1000 ms to test this function. Jest has some tools for manipulating the global timer functions, which will allow you to validate the function without having to wait.
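For example, here is a rough sketch using Jest's fake timers against the pure delayChangeText(element) version shown above (it assumes the function is exported from ./delayChangeText as discussed earlier):

const delayChangeText = require('./delayChangeText');

test('changes the text after 1000 ms', () => {
  jest.useFakeTimers();
  const element = { textContent: 'original' }; // a stand-in for the real DOM element

  delayChangeText(element);
  expect(element.textContent).toBe('original'); // nothing happens before the timer fires

  jest.advanceTimersByTime(1000); // fast-forward past the 1000 ms delay
  expect(element.textContent).toBe('Dev');
});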

Different outcomes for a single stub

I'm using Sinon to stub some data retrieval methods during unit testing. Most of these data methods are async, so the resolves syntax has been handy so far. What I'm trying to achieve is to dynamically generate different test data based on Math.random() to cover different branches of my code automatically, without having to provide hardcoded sample input data for each case. However, I've realized that the stub function is actually called only once, upon initialization, so its return value stays fixed/constant for the duration of the (Mocha-based) test run. Is there any way to provide different outcomes for a single stub? I've checked the onCall syntax, but it also provides fixed output, just selectable by call index, rather than truly dynamic output, which could ideally even be args/params based.
All ideas are welcome!
Current stubbing using Sinon:
sinon.stub(dynamodb, 'get').resolves(stubGet())
The stub itself:
function stubGet () {
  // Choose random repo
  const i = Math.round(Math.random() * sampleData.length)
  const repo = sampleData[i]
  // Should it have "new code/push date"?
  const isNew = Math.round(Math.random()) === 1
  if (isNew) {
    repo.pushed_at = { S: '1970-01-01T00:00:00Z' }
  }
  console.log('repo', repo)
  const item = { Item: repo }
  console.log(item)
  return item
}
The goal would be to get a different random repo and/or isNew value on each call to the stub.
Randomness is unpredictable. Test code should be predictable, including its test data; otherwise, your tests could fail someday on some random data.
We should write multiple test cases, each using fixed, as-simple-as-possible test data to exercise each branch or scenario of the code, and assert whether the returned value meets your expectations.
In short, make the test code and test data predictable. For more info, see Unpredictable Test Data.
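As a rough sketch of what that looks like with your Sinon stub (sampleRepo is a made-up fixture name; the point is that each test pins down its own data rather than relying on Math.random()):

const sinon = require('sinon');

describe('repo processing', () => {
  afterEach(() => sinon.restore());

  it('handles a repo with a new push date', async () => {
    const repo = { ...sampleRepo, pushed_at: { S: '1970-01-01T00:00:00Z' } };
    sinon.stub(dynamodb, 'get').resolves({ Item: repo });
    // ...call the code under test and assert on the "new code" branch
  });

  it('handles a repo without a new push date', async () => {
    sinon.stub(dynamodb, 'get').resolves({ Item: sampleRepo });
    // ...call the code under test and assert on the other branch
  });
});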

Difference between fake, spy, stub and mock of sinon library ( sinon fake vs spy vs stub vs mock )

I tried to understand the difference between the Sinon library's fake, spy, stub, and mock, but I'm not able to understand it clearly.
Can anybody help me understand them?
Just for understanding purposes, let's call:
FuncInfoCollector = a function that records arguments, return value, the value of this (context), and exceptions thrown (if any) for all of its calls.
(This FuncInfoCollector is a dummy name given by me; it is not present in the Sinon lib.)
Fake = FuncInfoCollector + can only create a fake function. To replace a function that already exists in the system under test, you call sinon.replace(target, fieldname, fake). You can wrap an existing function like this:
const org = foo.someMethod;
sinon.fake((...args) => org(...args));
A fake is immutable: once created, the behavior can't be changed.
var fakeFunc = sinon.fake.returns('foo');
fakeFunc();
// call count of fakeFunc (it will be 1 here)
fakeFunc.callCount;
Spy = FuncInfoCollector + can create a new function + can wrap a function that already exists in the system under test.
A spy is a good choice whenever the goal of a test is to verify that something happened.
// Can be passed as a callback to an async function to verify that the callback gets called
const spyFunc = sinon.spy();
// Creates a spy for the ajax method of the jQuery lib
sinon.spy(jQuery, "ajax");
// Tells whether the jQuery.ajax method was called exactly once
jQuery.ajax.calledOnce
Stub = Spy + it stubs the original function (can be used to change the behaviour of the original function).
var err = new Error('Ajax Error');
// Whenever jQuery.ajax is called in the code under test, it now throws this Error
sinon.stub(jQuery, "ajax").throws(err)
// After the code under test has called jQuery.ajax, assert that it threw the Error
sinon.assert.threw(jQuery.ajax, err);
Mock = Stub + pre-programmed expectations.
var mk = sinon.mock(jQuery)
// Should be called at least 2 times and at most 5 times
mk.expects("ajax").atLeast(2).atMost(5);
// Throws the given Error when called (the separate assert used above is not needed now)
mk.expects("ajax").throws(new Error('Ajax Error'))
// Checks whether all the above expectations were met, hence assertions aren't needed
mk.verify();
Please also have a look at this: sinon.replace vs sinon.stub just to replace return value?
Just to add some more info to the otherwise good answer: we added the Fake API to Sinon because of shortcomings of the other original APIs (Stub and Spy). The fact that those APIs are chainable led to constant design issues and recurring user problems, and they were bloated to cater to quite unimportant use cases, which is why we opted to create a new immutable API that would be simpler to use, less ambiguous, and cheaper to maintain. It was built on top of the Spy and Stub APIs to keep Fakes somewhat recognizable, with an explicit method for replacing props on objects (sinon.replace(obj, 'prop', fake)).
Fakes can essentially be used anywhere a stub or spy can be used, so I have not used the old APIs myself in 3-4 years; code using the more limited fakes is simpler for other people to understand.
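As a small sketch of the fake + sinon.replace combination mentioned above (jQuery.ajax is reused from the earlier examples purely for illustration):

const fake = sinon.fake.resolves({ ok: true });
sinon.replace(jQuery, 'ajax', fake);

jQuery.ajax('/some/url');

fake.callCount;      // 1
fake.firstCall.args; // ['/some/url']

sinon.restore();     // puts the original jQuery.ajax back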

A more elegant way to test time passing in mocha

I recently had to fix a bug with an old React component at work. Generally, when fixing a bug, we write a test along with it. The problem is, the React component does an asynchronous fetch from the server (I could refactor the code so the async action is moved into Redux, but we don't do refactoring as part of bug fixes here), so when testing what the component is supposed to render, I have to wait 500 ms to allow the promise to resolve.
I know I wouldn't have to use a setTimeout if I created an instance of the component and called the method directly (I could just use a .then), but we like to test the input/output of the component without calling internal methods.
Is there a more elegant solution than having to set a timeout? Here's the current code:
it('autofills and locks all the fields where user data is present', function(done) {
  const emailInput = $wrapper.find('#email');
  emailInput.value = 'email#example.com';
  emailInput.simulate('blur');

  // - autofill does an async request
  // - we need to wait for the promise to resolve before
  //   checking if the inputs are disabled or not
  setTimeout(() => {
    const identifierInput = $wrapper.find('#id');
    const lastNameInput = $wrapper.find('#last-name');
    const phoneNumberInput = $wrapper.find('#phone-number');
    const firstNameInput = $wrapper.find('#first-name');

    expect(firstNameInput.html().includes('disabled')).to.be.true;
    expect(lastNameInput.html().includes('disabled')).to.be.true;
    expect(phoneNumberInput.html().includes('disabled')).to.be.true;
    expect(identifierInput.html().includes('disabled')).to.be.true;

    done();
  }, 500);
});
" we don't do refactoring as part of bug fixes here"... Wow.
Anyway, if you NEED to write a test to complete your job, you will need to mock your API call. That 500 ms timeout will fail any time the network is a bit slow, so it's a very, very fragile test.
Since you didn't specify much about how the request is made, I can't tell you anything more specific, but you will find mocking libraries for all HTTP request flavors: fetch-mock for fetch, nock... there are tons.
The idea is that you should set up a fake answer upfront and return it immediately (with a Promise.resolve() if needed).
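For example, here is a rough sketch with fetch-mock, assuming the component autofills by calling fetch on a /api/user-info endpoint (the URL and response shape are hypothetical; adjust them to whatever your component actually requests, and keep mounting $wrapper as in your original test):

const fetchMock = require('fetch-mock');

beforeEach(() => {
  fetchMock.get('/api/user-info', {
    firstName: 'Jane',
    lastName: 'Doe',
    phoneNumber: '555-0100',
    id: '42'
  });
});

afterEach(() => fetchMock.restore());

it('autofills and locks all the fields where user data is present', async function () {
  const emailInput = $wrapper.find('#email');
  emailInput.value = 'email#example.com';
  emailInput.simulate('blur');

  // the mocked response resolves immediately, so flushing pending promises
  // replaces the 500 ms setTimeout
  await new Promise(resolve => setImmediate(resolve));
  $wrapper.update();

  expect($wrapper.find('#first-name').html().includes('disabled')).to.be.true;
});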

Global beforeEach and afterEach in protractor

In each spec I have beforeEach and afterEach statements. Is it possible to add them globally somehow, to avoid code duplication between specs?
The purpose of the beforeEach() and afterEach() functions is to hold a block of repetitive code that needs to run every time you start or finish executing each spec (it). There are other ways to add generalised code and avoid repetition; here are a few:
If you have a piece of code that needs to run only once before a test suite (describe) starts, then you can use the beforeAll() and afterAll() functions that Jasmine provides.
If you want to run a piece of code only once when the execution starts, before any of the test scripts run, then add it to your onPrepare() and onComplete() functions.
If you want to add a piece of code that should run even before Protractor has started instantiating itself, or after it has shut itself down, then use beforeLaunch and afterLaunch.
So it all depends on the scenario you want to use them in; the sketch below shows where each of these hooks lives. Hope it helps.
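A rough sketch of a protractor.conf.js with those hooks (the bodies are just placeholders):

exports.config = {
  // Runs in a separate process, before Protractor starts setting itself up
  beforeLaunch: function () { /* e.g. start a mock backend */ },

  // Runs once in the test process, before any spec file executes
  onPrepare: function () { /* global setup shared by all specs */ },

  // Runs once after all the specs have finished
  onComplete: function () { /* e.g. collect or publish reports */ },

  // Runs after the browser/driver has shut down
  afterLaunch: function () { /* final cleanup */ },
};

beforeAll() and afterAll() themselves go inside the outer describe() block of each spec file, as the next answer shows.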
My team has the same desire, to run bits of boilerplate code at the start of every test file. From the discussion here, it doesn't sound like there are hooks to globally add to the beforeEach(), afterEach(), etc.
However, we do use the onPrepare() function to abbreviate the amount of before/after boilerplate code that gets repeated in each spec file. Below is a beforeAll() example, but the pattern could be used for beforeEach()/afterEach(). In this case, we're setting up test users in the database with a DataSeeder class, which we do in the outer-most describe() block in every spec file. (I'm also leaving in my catchProtractorErrorInLocation pattern, because it's super useful for us.)
In protractor.conf.ts, add the boilerplate code to the browser.params object:
onPrepare: function () {
  ...
  const browser = require('protractor').browser;

  // Define the ConsoleHelper & DataSeeder instances, which will be used by all tests.
  const DataSeeder = require('./e2e/support/data-seeder.js');
  browser.params.dataSeeder = new DataSeeder();

  browser.catchProtractorErrorInLocation = (error, location) => {
    throw new Error(`Error in ${location}\n ${error}`);
  };

  browser.catchProtractorErrorInBeforeAll = (error) =>
    browser.catchProtractorErrorInLocation(error, 'beforeAll()');

  // Return a promise that resolves when the DataSeeder is connected to the service and ready to go
  return browser.params.dataSeeder.waitForConnect();
},
With that in place, we can easily do beforeAll() setup code in an abbreviated set of lines.
beforeAll(() => {
  return browser.params.dataSeeder.createTestUsers()
    .catch(browser.catchProtractorErrorInBeforeAll);
});
You obviously need to do different things in your setup, but you can see how the pattern can apply.
