I've been having lots of weird issues with my unit tests (see for example here or here) and I wanted to rule out this as a possibility. So, here's my potentially silly question:
Does the should style from Chai block, or is it async? Is it safe to have a done() call after some sort of should chain, or is the ideal solution callbacks of some sort?
I'm pretty sure this isn't the problem. But it seems like I discover a new problem caused by Node's non-blocking IO (or rather, my lack of experience with it) every day and I wanted to check that I wasn't making a mistake here.
I've had a weird experience with .should because it needs to attach itself to the object you are should-ing. I've had a better experience with expect(). And sync vs. async depends on the test runner; Mocha runs them synchronously. Every assertion made with expect() runs sequentially and is atomic, so there is no async operation there. The same goes for should.
I prefer expect over should because something.should will throw an error if something is undefined. No other reason for my preference.
Neither should nor expect makes the test async. done is what makes the test async, and done should be called in both the promise resolution and rejection paths (not just one). You may want to tweak the Mocha (I assume Mocha) timeout period before done fails. Hope this helps.
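For example, a minimal Mocha/Chai sketch (fetchUser() here is a hypothetical promise-returning function, not from the question):

const { expect } = require('chai');

it('loads the user', function (done) {
  fetchUser('alice')
    .then((user) => {
      expect(user.name).to.equal('alice'); // assertions run synchronously
      done(); // signal success
    })
    .catch(done); // forwards rejections and failed assertions; otherwise the test times out
});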
I was recently looking over a co-worker's code and realized that he sets up a Jest mock in a beforeAll at the top of the describe call, and then creates a data object in a beforeEach. This made me wonder: what exactly are the differences between beforeAll and beforeEach?
It was time... I went to Google!! I did find some articles that helped shed some light on the functional differences between the two.
Findings 1: http://breazeal.com/blog/jasmineBefore.html
Findings 2: Difference between #Before, #BeforeClass, #BeforeEach and #BeforeAll
From those articles I found that beforeAll is called once and only once, while beforeEach is called before each individual test. Which was great! I now had a better idea of when each was being called!
I also found that beforeAll is best used for initialization code. Which makes perfect sense! Initialize it once. Boom, you're done.
My confusion is over when something needs to be initialized once and when it doesn't. I have found that beforeEach is used in our code far more often than beforeAll. What I am curious about is what kind of code counts as "initializing" code, versus the code that should go in beforeEach.
An example from our code below:
beforeAll((done) => {
  // Mocking method from within Box file
  transferBoxPlanSpy = jest.spyOn(Box, 'transferPlanFromBox').mockImplementation(() => Promise.resolve());
  // Pulling data from MongoDB
  User.findOne({ user_name: 'testsurgeon1' }, (err, user) => {
    user.addMGSPermission();
    user.save(done);
  });
});
beforeEach(() => {
  planData2 = {
    user_name: 'hello1',
    laterality: 'right',
    plan_id: 'testplan42',
    order_number: '856-hd-02-l',
    file_id: '123456sbyuidbefui',
  };
});
I hope my question isn't too vague. Thank you for your time!
Edit 1
I would like to point out that this code was not written by me, but by one of the members of our software team. He puts the object inside the beforeEach, and the mocks inside the beforeAll.
My confusion is that it seems like almost all of the code could just be put into beforeAll, with a few exceptions.
Both are used to set up whatever conditions are needed for one or more tests.
If you're certain that the tests don't make any changes to those conditions, you can use beforeAll (which will run once).
If the tests do make changes to those conditions, then you would need to use beforeEach, which will run before every test, so it can reset the conditions for the next one.
Unless the initialization is slow or computationally expensive, it may be safest to default to using beforeEach as it reduces the opportunity for human error, i.e. not realizing that one test is changing the setup for the next one.
The sample you showed is a good example of using both in combination -- the slow database setup is put in beforeAll, so it only has to happen once; and the data object (which is presumably modified by the tests) is reset each time in beforeEach.
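For instance, a small sketch (reusing the planData2 shape from the question) of the kind of leak beforeEach guards against:

let planData2;

beforeEach(() => {
  // Rebuilt before every test, so mutations cannot leak between tests.
  planData2 = { user_name: 'hello1', laterality: 'right' };
});

test('switches laterality', () => {
  planData2.laterality = 'left'; // mutates the fixture
  expect(planData2.laterality).toBe('left');
});

test('starts from a clean fixture', () => {
  // Passes only because beforeEach rebuilt the object; with beforeAll it would still be 'left'.
  expect(planData2.laterality).toBe('right');
});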
I know this is an old post, but in case others come looking I would like to add some important information that I am surprised not to see mentioned here:
beforeAll is specifically useful for ASYNCHRONOUS setup that needs to complete before the tests are run.
Jest waits for beforeAll when it returns a promise (or, as in the original post, accepts the done callback); for purely synchronous setup the wrapper is redundant, and you could simply place the function body immediately before your first describe or test.
See the jest docs: https://jestjs.io/docs/setup-teardown
In some cases, you only need to do setup once, at the beginning of a file. This can be especially bothersome when the setup is asynchronous, so you can't do it inline. Jest provides beforeAll and afterAll to handle this situation.
E.g. the following returns a promise, and Jest will wait for it to resolve before the tests proceed to run.
beforeAll(() => {
  return initializeCityDatabase();
});
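Equivalently, since an async function returns a promise, Jest waits the same way:

beforeAll(async () => {
  await initializeCityDatabase();
});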
I would like to reach 100% coverage on my JavaScript project using Jest. Some methods, in my opinion, are not suited to unit testing and should be skipped when analyzing code coverage. I would like to be able to do something like this:
function toto() {
  // Some function performing stuff we don't want to test, e.g. a
  // promise called inside a complex authorization process that is
  // really hard to mock and would need an integration test.
}
expect(toto()).toBeUntestable();
The Jest API doesn't appear to offer such a possibility. It could be really useful, since we could use it to aggregate bad or hard-to-test code.
I'm really interested in your input regarding this situation since when it comes to tests, I find it really hard to have the right angle of approach.
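For what it's worth, the matcher itself could be registered as a no-op via Jest's expect.extend, but it would only document intent; it would not change the coverage report:

// Rough sketch: a no-op custom matcher. It flags the call site
// as untestable for human readers, nothing more.
expect.extend({
  toBeUntestable() {
    return {
      pass: true,
      message: () => 'expected value to be marked untestable',
    };
  },
});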
I'm new to Angular2 and have been charged with developing "robust error handling". So far I've followed the simplistic examples (console.logging) for adding custom error handling. But sometimes, if the page completely stops loading due to the error, we will want to redirect the user.
However, sometimes, as below, although there are errors, the page otherwise loads completely. Are there only certain types of errors that stop the page from completely loading? One of the following 6 types, perhaps?
Any error can stop your page from rendering, depending on where it occurs in your process. Any error can fail to be caught if it is in a callback or other asynchronous action.
Be careful with terms like "robust error handling" - I've seen huge commercial projects that claim just that, but actually just silently truck over loads of issues, kind of like on error resume next.
I find the golden rules are:
If your app can continue through an exception (such as getting corrupt JSON data from a non-essential service) then that specific case should always be explicitly handled.
Otherwise an unexpected exception should always break something visible.
That second rule is counterintuitive, but it really is best practice. Users will complain about errors that they see, and visible exceptions and crashes will upset them and reduce their confidence in your application.
However, exceptions that they don't see still happened, and because you've silently trucked through them whatever caused them is still there. Silent exceptions cause data to be lost or corrupted. They cause the kind of bugs that you only find out about after 6 months in production. They cause the kind of bugs you can get sued over.
Users will forgive you for obvious errors that you fix quick, they will leave and never come back if you lose data and don't immediately know about it.
Ok, so that all said, the errors you seem to be highlighting are asynchronous, and related to a problem sometimes described as callback hell.
In your screenshot the error is from an HTTP GET request - this will typically be a method where you make an AJAX request, have a callback to fire when it succeeds, but don't have a callback to handle the exception.
Angular2 uses promises, which is the next error line of your screenshot. Promises wrap those callbacks and allow you to chain them - they really help with callback hell, but they're not a magic bullet: you have to make sure that every .then() has an error handler or a following .catch().
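For example (http.get, render, and showErrorBanner are hypothetical stand-ins):

// Every chain needs a rejection path, or failures vanish silently.
http.get('/api/data')
  .then((response) => render(response))
  .catch((err) => showErrorBanner(err)); // without this, a failed GET surfaces nowhere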
However, there is an even better way: with Angular2 you can use TypeScript, and that means you can use async and await. These are syntactic sugar for promises, but they also work with try-catch to make error handling of asynchronous exceptions much easier.
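Roughly, using the same hypothetical http.get as above:

async function loadData() {
  try {
    const response = await http.get('/api/data');
    render(response);
  } catch (err) {
    // Asynchronous failures now land in an ordinary try-catch.
    showErrorBanner(err);
  }
}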
I've blogged about that in a lot more detail than I can fit here.
TL;DR: in Angular2 use async/await (with TS transpilation if you need it) to make sure that your Promise and callback exceptions make it back up, and then handle what you expect/can work around and visibly crash for what you can't.
I'm using Jasmine and Karma for the unit tests of my app. With approx. 1000 tests at the moment, it takes around 10 seconds until they're finished.
It's not a problem right now, but in a couple of months the number of tests might become much bigger, and I'd like to know if there's anything I can do to make them run faster, at least locally.
I found out that using:
jasmine.any(Object)
is much faster than comparing big objects.
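For example (api.send here is a hypothetical spy created with jasmine.createSpy):

const bigConfig = { /* many nested fields */ };

// Deep-compares every property of the object:
expect(api.send).toHaveBeenCalledWith(bigConfig);

// Only checks that some object was passed, which is much cheaper:
expect(api.send).toHaveBeenCalledWith(jasmine.any(Object));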
Changing:
expect(some.method).toHaveBeenCalledWith("xyz");
into:
expect(some.method.calls.argsFor(0)[0]).toBe("xyz");
also seems to be a little bit faster.
Karma is lovely, but it doesn't seem to have anything that improves performance yet; it's really useful for debugging though (reportSlowerThan).
Any other ideas how can I improve the performance of the tests?
What kind of performance improvements are you seeing in switching from toHaveBeenCalledWith?
I appreciate what you're trying to achieve – you have a test suite that runs in 10 seconds and you're right to try to improve that situation – but if the savings are in the < 500 ms range, I would be careful, as the readability and clarity of your tests are put at risk.
toHaveBeenCalledWith communicates your intentions to others much better than the argsFor approach does, as does the message that would be displayed if that test were to fail;
Expected function to have been called with "xyz"
vs
Expected undefined to be "xyz"
With that said, some ideas...
1
Look for areas where you can safely replace beforeEach calls;
The beforeEach function is called once before each spec in the describe in which it is called
With beforeAll calls;
The beforeAll function is called only once before all the specs in describe are run.
But be careful not to introduce shared state between tests which could skew your results (using Jasmine's option to run the tests in a random order might help here but I'm not sure how beforeAll is handled by this, it could be that those specs are still run together).
2
Continue using reportSlowerThan as you have been and pick off any that are really slow. If changes like the one you suggested are unavoidable, put them behind helper functions with well-chosen names so that what you're trying to achieve is still clear to other developers. Or better still, create Custom Matchers for them because that will also result in clear messages if the tests fail (add-matchers can help make this easier).
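For instance, a small hypothetical helper that keeps the fast path readable:

// Hypothetical helper: fast positional check behind an intention-revealing name.
function expectFirstCallArg(spy, expected) {
  expect(spy.calls.argsFor(0)[0]).toBe(expected);
}

expectFirstCallArg(some.method, 'xyz');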
3
Consider switching from Jasmine to Jest. Tests are written in pretty much the same way, but the test runner is much faster.
It seems that the QUnit functions stop() and start() allow you to wait for asynchronous tests, but during that waiting period the whole test suite hangs. Is there a way to run asynchronous tests in a non-blocking fashion using QUnit?
Looking at the docs for asyncTest and stop, there are two reasons I can see that it's set up like that.
So that you aren't accidentally running two tests at a time which might conflict with each other (i.e., modifying the DOM and so changing each other's test results).
So that QUnit knows when the tests have finished. If it comes to the end of all the synchronous tests, then it'll write up the results, which you don't really want it to do if there are still async tests happening in the background.
So these are a good thing, and you probably don't actually want the async tests to be non-blocking as they run. You could probably achieve it by calling start immediately after the start of your async tests, but remember that JavaScript is actually single-threaded (even though it sometimes gives the appearance of multi-threading), so this might cause unexpected results: you can't guarantee when your async test will continue running... it might not (and probably won't) be until after the other tests have finished and the results have been published.
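For reference, a sketch of the classic QUnit 1.x pattern: asyncTest implies stop(), and start() hands control back to the runner:

// QUnit 1.x style: the suite waits until start() is called.
asyncTest('value arrives eventually', function () {
  expect(1); // declare how many assertions should run
  setTimeout(function () {
    ok(true, 'async work completed');
    start(); // resume the suite; calling it too early risks interleaved tests
  }, 100);
});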