BeforeAll vs. BeforeEach. When to use them? - javascript

I was recently looking over a co-worker's code and I realized that he sets up a Jest mock in a beforeAll at the top of the describe block, and then creates a data object in a beforeEach. This made me wonder: what exactly are the differences between beforeAll and beforeEach?
It was time... I went to Google!! I did find some articles that helped shed light on the functional differences between the two.
Findings 1: http://breazeal.com/blog/jasmineBefore.html
Findings 2: Difference between #Before, #BeforeClass, #BeforeEach and #BeforeAll
From those articles I found that beforeAll is called once and only once, while beforeEach is called before each individual test. Which was great! I now had a better idea of when each was being called!
I also found out that beforeAll is best used for initialization code. Which makes perfect sense! Initialize it once. Boom, you're done.
The confusion I'm having is: when does something count as initialization and when does it not? I have found that beforeEach is used in our code more often than not. What I am curious about is what kind of code is considered to be "initialization" code, versus whatever code should go in the beforeEach.
An example from our code below:
beforeAll((done) => {
  // Mocking a method from within the Box file
  transferBoxPlanSpy = jest.spyOn(Box, 'transferPlanFromBox').mockImplementation(() => Promise.resolve());
  // Pulling data from MongoDB
  User.findOne({ user_name: 'testsurgeon1' }, (err, user) => {
    user.addMGSPermission();
    user.save(done);
  });
});
beforeEach(() => {
  planData2 = {
    user_name: 'hello1',
    laterality: 'right',
    plan_id: 'testplan42',
    order_number: '856-hd-02-l',
    file_id: '123456sbyuidbefui',
  };
});
I hope my question isn't too vague. Thank you for your time!
Edit 1
I would like to point out that this code was not written by me, but by one of the members of our software team. He puts the object inside the beforeEach, and the mocks inside the beforeAll.
My confusion is that it seems like almost all code could be put just into beforeAll, with a few exceptions.

Both are used to set up whatever conditions are needed for one or more tests.
If you're certain that the tests don't make any changes to those conditions, you can use beforeAll (which will run once).
If the tests do make changes to those conditions, then you would need to use beforeEach, which will run before every test, so it can reset the conditions for the next one.
Unless the initialization is slow or computationally expensive, it may be safest to default to using beforeEach as it reduces the opportunity for human error, i.e. not realizing that one test is changing the setup for the next one.
The sample you showed is a good example of using both in combination -- the slow database call is put in beforeAll, so it only has to happen once; and the data object (which is presumably modified by the tests) is reset each time in beforeEach.
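To make the distinction concrete, here is a contrived sketch (not from the original code) of why mutable test data belongs in beforeEach: if the object were built once in beforeAll, the first test's mutation would leak into the second.

let planData2;
beforeEach(() => {
  // Rebuilt before every test, so one test's changes can't leak into the next
  planData2 = { user_name: 'hello1', laterality: 'right' };
});

test('switches the plan to the left side', () => {
  planData2.laterality = 'left'; // mutates the shared object
  expect(planData2.laterality).toBe('left');
});

test('starts from a clean plan', () => {
  // Passes only because beforeEach rebuilt the object; with beforeAll it would still be 'left'
  expect(planData2.laterality).toBe('right');
});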

I know this is an old post, but in case others come looking I would like to add some important information that I was surprised not to see mentioned here:
beforeAll is specifically useful for ASYNCHRONOUS setup that needs to complete before the tests are run.
Jest will wait for the beforeAll callback if it returns a promise or, as in the original post, accepts a done argument; if the setup is purely synchronous, beforeAll is redundant and you could simply place the function body immediately before your first describe or test.
See the jest docs: https://jestjs.io/docs/setup-teardown
In some cases, you only need to do setup once, at the beginning of a file. This can be especially bothersome when the setup is asynchronous, so you can't do it inline. Jest provides beforeAll and afterAll to handle this situation.
E.g. the following returns a promise, which will resolve before the tests proceed to run:
beforeAll(() => {
  return initializeCityDatabase();
});
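Equivalently, an async callback works too, since Jest waits on whatever promise the callback produces (this is the same example rewritten, assuming initializeCityDatabase returns a promise):

beforeAll(async () => {
  await initializeCityDatabase();
});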

Related

How to share an object between multiple test suites in Jest?

I need to share an object (db connection pool) between multiple test suites in Jest.
I have read about globalSetup and it seems to be the right place to initialize this object, but how can I pass that object from the globalSetup context to each test suite's context?
I'm dealing with a very similar issue, although in my case I was trying to share a single DB container (I'm using dockerode) that I spin up once before all tests and remove afterwards, to avoid the overhead of spinning up a container for each suite.
Unfortunately for us, after going over a lot of documentation and GitHub issues, my conclusion is that this is by design: Jest's philosophy is to run tests sandboxed, and under that restriction I totally get why they chose not to support this.
Specifically, for my use case I ended up spinning up a container in a globalSetup script, tagging it with a unique identifier for the current test run (say, a timestamp) and removing it at the end with a globalTeardown (that works since the global setup and teardown steps can share state between them).
This looks roughly like:
const options: ContainerCreateOptions = {
  Image: POSTGRES_IMAGE,
  Tty: false,
  HostConfig: {
    AutoRemove: true,
    PortBindings: { '5432/tcp': [{ HostPort: `${randomPort}/tcp` }] },
  },
  Env: [
    `POSTGRES_DB=${this.env['DB_NAME']}`,
    `POSTGRES_USER=${this.env['DB_USERNAME']}`,
    `POSTGRES_PASSWORD=${this.env['DB_PASSWORD']}`,
  ],
};
options.Labels = { testContainer: `${CURRENT_TEST_TIMESTAMP}` };

let container = await this.docker.createContainer(options);
await container.start();
where this.env in my case is a .env file I preload based on the test.
To get the container port/IP (or whatever else I'm interested in) for my tests, I use a custom environment that my DB-requiring tests use to expose the relevant information on a global variable (reminder: you can only put primitives and plain objects on global, not instances).
In your case, the requirement to pass a connection pool between suites is probably just not suited to Jest, since it will never let you pass an instance between different suites (at least that's my understanding based on all of the links I shared). You can, however, try either:
Put all tests that need the same connection pool into a single suite/file (not great, but it would answer your needs)
Do something similar to what I suggested (set up the DB once in globalSetup, construct the connection pool in a custom environment, and then at least each suite gets its own pool); that way you get a lot of code reuse for the setup, which is nice -- a rough sketch of this wiring is shown below. Something similar is discussed here.
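For illustration, a rough sketch of that wiring, assuming dockerode; the file names, image tag, and the TEST_DB_PORT variable are placeholders of mine, not from the original setup:

// jest.config.js (excerpt):
// module.exports = { globalSetup: './global-setup.js', globalTeardown: './global-teardown.js' };

// global-setup.js -- runs once, in the main Jest process
const Docker = require('dockerode');

module.exports = async () => {
  const docker = new Docker();
  const container = await docker.createContainer({
    Image: 'postgres:13', // placeholder image
    HostConfig: {
      AutoRemove: true,
      PortBindings: { '5432/tcp': [{ HostPort: '54321' }] },
    },
  });
  await container.start();

  // Test workers are separate processes: they can never receive this container
  // instance, but environment variables set here are visible to them.
  process.env.TEST_DB_PORT = '54321';

  // globalSetup and globalTeardown run in the same process, so an instance
  // CAN be shared between these two scripts via a global.
  global.__TEST_DB_CONTAINER__ = container;
};

// global-teardown.js -- runs once, after all suites
module.exports = async () => {
  // With AutoRemove set, stopping the container also removes it
  await global.__TEST_DB_CONTAINER__.stop();
};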
Miscellaneous: I stumbled across testdeck, which might answer your needs as well (set up an abstract test class with your connection pool and inherit from it in all required tests). I didn't try it, so I don't know how it behaves with Jest, but when I was writing Java that's how we used to achieve similar functionality with TestNG.
In each test file you can define a function to load the necessary data or to mock the data. Use the beforeEach() function for this. Documentation example here!

Is the Chai BDD style 'should' async?

I've been having lots of weird issues with my unit tests (see for example here or here) and I wanted to rule this out as a possibility. So, here's my potentially silly question:
Does the should style from Chai block, or is it async? Is it safe to have a done() call after some sort of should chain, or is the ideal solution callbacks of some sort?
I'm pretty sure this isn't the problem. But it seems like I discover a new problem caused by Node's non-blocking IO (or rather, my lack of experience with it) every day and I wanted to check that I wasn't making a mistake here.
I've had weird experiences with .should because it needs to attach itself to the object you are should-ing; I had a better experience with expect(). And sync vs. async depends on the test runner -- Mocha runs the test body synchronously. Every assertion with expect() is run sequentially and is atomic, so there is no async operation there. The same goes for should.
I prefer expect over should because something.should will throw an error if something is undefined. No other reason for my preference.
Neither should nor expect makes the test async. done is what makes the test async, and done should be called in both the promise resolution and rejection handlers (not just one). You may want to tweak the Mocha (I assume Mocha) timeout period before done fails. Hope this helps.
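A quick sketch of that advice in Mocha with Chai (fetchValue is a hypothetical promise-returning function under test): the assertion itself throws synchronously, done is what marks the test as async, and passing done to .catch routes both rejections and failed assertions to Mocha instead of letting the test time out.

const chai = require('chai');
chai.should();

it('resolves with the expected value', function (done) {
  fetchValue() // hypothetical async function under test
    .then(function (value) {
      value.should.equal(42); // throws synchronously on a mismatch
      done(); // success path
    })
    .catch(done); // failure path: rejections AND assertion errors land here
});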

Why do you need to call both replayAll and verifyAll when using Google Closure mocks?

I'm reading code that uses Closure's mocks and am a bit confused by the syntax. Many tests look like this:
mockChart();
// Test
this.mockControl.$replayAll();
this.mainMethod(testData);
// Verify
this.mockControl.$verifyAll();
// lots of asserts
I don't understand why one would call both replay and then, later, verify. It sounds like replay is actually doing the work of recording, which I would have expected to have started already.
The flow is a bit different from Mockito's, the only other framework I'm familiar with, and I haven't found good documentation at this level (just class-level JSDoc).
You can think of there being two stages when mocking something with Closure.
During the first, the test informs the mock framework of what calls are expected and how it should respond to them. Calling mockFoo.doTheThing() during this stage will add an expected call to doTheThing on the Foo mock.
During the second, the mock framework records calls while the test is running. Calling mockFoo.doTheThing() during this stage will record the fact that doTheThing was called, and possibly run some test code that was added in the first stage.
The first stage starts when the MockControl object is created, and ends when $replayAll is called. The second stage starts when $replayAll is called and ends when $verifyAll is called, at which point the mock framework checks that all of the expected method calls were made.
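A minimal sketch of the two stages (Foo and codeUnderTest are hypothetical names; the $-prefixed calls come from goog.testing.MockControl):

const mockControl = new goog.testing.MockControl();
const mockFoo = mockControl.createStrictMock(Foo);

// Stage one (record): declare the expected call and its canned response.
mockFoo.doTheThing().$returns('result');

// End stage one, begin stage two (replay): calls are now checked and answered.
mockControl.$replayAll();

codeUnderTest(mockFoo); // expected to call doTheThing() exactly once

// End stage two: fail the test if any expected call was never made.
mockControl.$verifyAll();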

Should I write unit-tests for 'wrapper' methods

I have a function in the controller of my directive:
$scope.getPossibleWin = function() {
  return betslipService.getPossibleWin();
};
betslipService is injected into the controller.
Now I'm thinking about how to test $scope.getPossibleWin:
Test that $scope.getPossibleWin calls betslipService.getPossibleWin
Test that $scope.getPossibleWin returns the correct value (but this is already tested in betslipService!)
Test that $scope.getPossibleWin simply exists
What are the best practices for testing wrappers?
Option 2 is the best; I'm not very convinced about option 1. I don't have experience with JavaScript, so I'm not sure why you would have to verify that a function exists (option 3).
You can find more information on it here, but the reason you should indeed add a test for this method is to prevent yourself from breaking anything in the future. If you only rely on that one method five layers deep in your application, it could be that one day you add code in a higher layer which changes the result, but that code is not being tested. Or at some level some code has a side effect which disturbs the result that came from the bowels of your codebase.
Therefore I would suggest you make a test for each (relevant) level. What exactly you should test is probably a little preference-oriented, but I would argue that the very least you should do is test whether it returns the correct value, as laid out above.
Should you test that it calls that specific inner method? You could, but that isn't really something you do in unit testing, because then you would be testing the unit's internal workings. You don't care how it works inside; you just care that the function gives you the response you expected. By coupling the two in your unit test, you'll end up with a broken test for non-broken code when you decide to refactor the internals.
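As a sketch of option 2 in Jasmine with ngMock (the module and controller names are hypothetical, and Jasmine 2.x spy syntax is assumed): stub the service at the boundary and assert only on what the wrapper returns.

describe('$scope.getPossibleWin', function () {
  var $scope;

  beforeEach(module('betslipApp')); // hypothetical module name

  beforeEach(inject(function ($rootScope, $controller, betslipService) {
    $scope = $rootScope.$new();
    // Control the boundary's output without asserting on the internals
    spyOn(betslipService, 'getPossibleWin').and.returnValue(150);
    $controller('BetslipCtrl', { $scope: $scope, betslipService: betslipService }); // hypothetical controller name
  }));

  it('returns whatever the service reports', function () {
    expect($scope.getPossibleWin()).toBe(150);
  });
});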

Why is using jasmine's runs and waitFor function necessary?

I'm testing an asynchronous piece of code with something that looks like this:
randomService.doSomething().then(function() {
  console.log('I completed the operation!');
});
Surprisingly (to me) I've found that it only succeeds (i.e. the console.log output is shown) when wrapped inside Jasmine's runs function, like so:
var isDone = false;
runs(function() {
  randomService.doSomething().then(function(data) {
    console.log('I completed the operation!');
    isDone = true;
  });
});
waitsFor(function() {
  return isDone;
}, 'Operation should be completed', 1000);
As I understood it, waitsFor was only there to delay the code; in other words, I would use it if I had more code that had to wait until the asynchronous call completed. So I would have thought there'd be no reason for me to use runs and waitsFor, since nothing comes after this bit of code, right? That's the impression I got from reading this question: What do jasmine runs and waitsFor actually do? But obviously I've gotten myself mixed up at some point.
Does anyone have any thoughts on this?
EDIT:
Here is a Plunker with far more detail of the problem:
http://plnkr.co/edit/3qnuj5N9Thb2UdgoxYaD?p=preview
Note how the first test always passes, and the second test fails as it should.
Also, I'm sure I should have mentioned this before, but this is using AngularJS and Jasmine 1.3.
I think I found the issue. Here's the article: http://blogs.lessthandot.com/index.php/webdev/uidevelopment/javascript/testing-asynchronous-javascript-w-jasmine/
Essentially it's necessary because Jasmine doesn't wait for asynchronous calls to finish before it completes a test. According to the article, if a call takes long enough and there are more tests later, an expect statement in an asynchronous callback from a previous test could end up executing in a different test entirely, after the original test has completed.
Using runs and waitsFor solves the problem because they force Jasmine to wait for the waitsFor latch to pass before proceeding to the next test. This is a moot point now, however, because Jasmine 2.0 addresses asynchronous testing in a better way than 1.3, making runs and waitsFor obsolete.
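For reference, a sketch of how the same test reads in Jasmine 2.x, where the spec function simply accepts a done callback and no runs/waitsFor scaffolding is needed:

it('completes the operation', function (done) {
  randomService.doSomething().then(function (data) {
    console.log('I completed the operation!');
    done(); // Jasmine waits for this call before finishing the spec
  });
});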
That's just how Jasmine works. The question you linked has an answer with a decent explanation:
Essentially, the runs() and waitsFor() functions stuff an array with their provided functions. The array is then processed by Jasmine, wherein the functions are invoked sequentially. Those functions registered by runs() are expected to perform actual work, while those registered by waitsFor() are expected to be 'latch' functions and will be polled (invoked) every 10ms until they return true or the optional registered timeout period expires. If the timeout period expires, an error is reported using the optional registered error message; otherwise, the process continues with the next function in the array.
To sum it up, each waitsFor call must have a corresponding runs call. They work together. Calling waitsFor without a runs call somewhere before it does not make sense.
My revised plunker (see comments on this answer): http://plnkr.co/edit/9eL9d9uERre4Q17lWQmw
As you can see, I added $rootScope.$apply(); to the timeout function you were testing with. This makes the console.log inside the promise callback run. HOWEVER, it only runs if you ignore the other test with an xit, AND the expect after the console.log does not seem to be recognized as a Jasmine test (though it certainly must run, because the console.log does).
Very weird -- I don't really understand why this is happening, but I think it has something to do with how Jasmine works behind the scenes, how it registers tests and whatnot. My understanding at this point is that if you have an expect inside an async callback, Jasmine won't "recognize" it as part of the test suite unless the initial async call was made inside a runs block.
As for why this is, I don't know. I don't think it's worth trying to understand - I would just use runs and waitsFor and not worry about it, but that's just me. You can always dig through the source if you're feeling masochistic. Sorry I couldn't be of more help.
