I'm using Jasmine and Karma for my app's unit tests. With roughly 1,000 tests at the moment, a full run takes around 10 seconds.
That's not a problem right now, but in a couple of months the number of tests may grow considerably, and I'd like to know whether there's anything I can do to make them run faster, at least locally.
I found out that using:
jasmine.any(Object)
is much faster than comparing big objects.
Changing:
expect(some.method).toHaveBeenCalledWith("xyz");
into:
expect(some.method.calls.argsFor(0)[0]).toBe("xyz");
also seems to be a little bit faster.
Karma is lovely, but it doesn't seem to offer anything that improves performance yet; it's really useful for finding slow tests, though (reportSlowerThan).
Any other ideas for how I can improve the performance of the tests?
What kind of performance improvements are you seeing in switching from toHaveBeenCalledWith?
I appreciate what you're trying to achieve – you have a test suite that takes 10 seconds to run and you're right to try to improve that – but if the savings are in the < 500 ms range, I'd be careful: the readability and clarity of your tests are being put at risk.
toHaveBeenCalledWith communicates your intentions to others much better than the argsFor approach does, as does the message that would be displayed if that test were to fail:
Expected function to have been called with "xyz"
vs
Expected undefined to be "xyz"
With that said, some ideas...
1
Look for areas where you can safely replace beforeEach calls;
The beforeEach function is called once before each spec in the describe in which it is called
With beforeAll calls;
The beforeAll function is called only once before all the specs in describe are run.
But be careful not to introduce shared state between tests, which could skew your results. (Using Jasmine's option to run specs in a random order might help catch this, though I'm not sure how beforeAll is handled by it; it may be that those specs are still run together.)
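To make the trade-off concrete, here is a rough sketch in plain JS (so it runs outside Karma) of how much setup work the two hooks imply. `createHeavyFixture` is an invented stand-in for whatever costly setup your suite does:

```javascript
// buildCount tracks how many times the fixture is constructed.
var buildCount = 0;

function createHeavyFixture() {
  buildCount += 1;
  var data = [];
  for (var i = 0; i < 1000; i++) {
    data.push({ id: i });
  }
  return data;
}

// beforeEach semantics: the fixture is rebuilt before every spec.
function runSuiteWithBeforeEach(specCount) {
  for (var s = 0; s < specCount; s++) {
    var fixture = createHeavyFixture();
    // ...spec body reads fixture...
  }
}

// beforeAll semantics: the fixture is built once for the whole describe
// block — only safe if no spec mutates it.
function runSuiteWithBeforeAll(specCount) {
  var fixture = createHeavyFixture();
  for (var s = 0; s < specCount; s++) {
    // ...spec body reads the shared fixture...
  }
}

buildCount = 0;
runSuiteWithBeforeEach(100);
console.log(buildCount); // 100 builds

buildCount = 0;
runSuiteWithBeforeAll(100);
console.log(buildCount); // 1 build
```

For a 1,000-spec suite the difference is 1,000 setups versus one per describe block, which is where the real savings tend to be.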
2
Continue using reportSlowerThan as you have been and pick off any that are really slow. If changes like the one you suggested are unavoidable, put them behind helper functions with well-chosen names so that what you're trying to achieve is still clear to other developers. Or better still, create Custom Matchers for them because that will also result in clear messages if the tests fail (add-matchers can help make this easier).
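As a hedged sketch of that custom-matcher idea: the matcher below hides the faster argsFor-style check behind a readable name and still produces a clear failure message. `toHaveFirstArg` is an invented name, not a built-in matcher:

```javascript
// A Jasmine-style custom matcher factory. Register it in a beforeEach with
// jasmine.addMatchers(customMatchers) before using it in specs.
var customMatchers = {
  toHaveFirstArg: function () {
    return {
      compare: function (spy, expected) {
        // Same fast check as spy.calls.argsFor(0)[0], but named clearly.
        var actual = spy.calls.argsFor(0)[0];
        var pass = actual === expected;
        return {
          pass: pass,
          message: pass
            ? 'Expected spy not to have first argument "' + expected + '"'
            : 'Expected spy to have first argument "' + expected +
              '" but it was "' + actual + '"'
        };
      }
    };
  }
};

// In a spec:
//   beforeEach(function () { jasmine.addMatchers(customMatchers); });
//   expect(some.method).toHaveFirstArg("xyz");
```

That way the spec reads almost as well as toHaveBeenCalledWith while keeping the cheaper comparison.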
3
Consider switching from Jasmine to Jest; tests are written in pretty much the same way, but the test runner is much faster.
Related
Regrettably there is more than one way to do things in Chai.
Is there a benefit either way, to using to.be.an('undefined') over to.equal(undefined)?
My intuition is that there would be a cost to reusing/recreating undefined. Our test runner gives times for individual tests, and it seems that what matters more is which one runs first (in watch mode the second one is faster, but in two separate runs they both take ~2 seconds with full setup).
I don't think it really matters. The closest I managed to find to an answer was this article:
In that sense, code the expectation in a human-like language, declarative BDD style using expect or should and not using custom code.
The author doesn't seem to draw a distinction between the two, and even your own testing says they're more or less equal.
I say go with whatever makes sense.
Edit:
Based on this Stack Overflow answer:
The more deeply a property is nested, the more time will be required to perform the property lookup.
That would imply that to.be.an('undefined') would actually be slower than to.equal(undefined) due to the additional lookup, but IMO the prototype pollution that comes with it could give false positives.
Same conclusion as before really: go with what makes sense.
So this is more of a theoretical question than a programming question.
In my basic understanding of the Angular 4 framework, a component can have inputs, outputs, and services, which in turn interact with the store to store data.
Now all of this, in the case of a web app, is displayed in a template. I have recently been looking at testing such an app.
I plan to use Jasmine as the framework and Karma as the test runner on Chrome (for now).
With that said, what should I be testing for?
According to my understanding, the basic tests should include handing a mock object to the component and checking that it is rendered properly in the template, and mocking a service and checking what is rendered when we receive the correct response.
So what do I do after this? After I have tested this?
How is this behavior-driven development? What more can I do?
What is a "good angular test"?
Since you have mentioned Jasmine with Karma, my perspective in answering this question will mostly be about unit testing. Let us take the questions one by one.
What is a "good angular test"?
1) Unit tests should be written by the developer, not by anyone else (e.g. a tester).
2) They must cover both positive and negative scenarios.
3) All network calls should be mocked or stubbed.
4) They should pass irrespective of environment.
5) They should be easily readable and reliable, and should not test the integration.
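To illustrate point 3, here is a minimal sketch of stubbing a network call so the test passes in any environment. `fetchScores` and `averageScore` are invented names standing in for a real HTTP service and its consumer:

```javascript
// The unit under test takes the fetcher as a dependency, so a test can
// replace the real HTTP call with a stub.
function averageScore(fetchScores) {
  return fetchScores().then(function (scores) {
    return scores.reduce(function (a, b) { return a + b; }, 0) / scores.length;
  });
}

// In a spec, a stub stands in for the network:
var stubFetch = function () {
  return Promise.resolve([80, 90, 100]);
};

averageScore(stubFetch).then(function (avg) {
  console.log(avg); // 90
});
```

Because the stub is deterministic, the spec never depends on a server being up, which is what "pass irrespective of environment" requires.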
How is this behavior-driven development? What more can I do?
In a nutshell, BDD:
1) Write your unit test cases prior to development.
2) Let all of your unit test cases fail.
3) Start development unit by unit, making the assertions pass.
4) BDD is an evolution of TDD: instead of testing the implementation, you test the behavior.
5) BDD also increases code quality, since the test cases are created based on behavior.
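The steps above can be sketched as a tiny red-green cycle in plain JS. The Cart API here is invented for illustration:

```javascript
// Step 1: describe the behavior first, as an executable expectation.
function expectCartTotalsItems(Cart) {
  var cart = new Cart();
  cart.add({ price: 5 });
  cart.add({ price: 7 });
  if (cart.total() !== 12) {
    throw new Error('Expected a total of 12');
  }
}

// Step 2: running it before any implementation exists fails (red).

// Step 3: implement unit by unit until the behavior passes (green).
function Cart() {
  this.items = [];
}
Cart.prototype.add = function (item) {
  this.items.push(item);
};
Cart.prototype.total = function () {
  return this.items.reduce(function (sum, item) {
    return sum + item.price;
  }, 0);
};

expectCartTotalsItems(Cart); // now passes silently
```

Note that the expectation only talks about what a cart does, never about how totals are stored internally, which is the TDD-versus-BDD distinction in point 4.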
So what do I do after this? After I have tested this?
As explained above, the current trend is to write the test cases first and then start development. Write the assertions so that even a non-technical person can understand the behavior.
You should also check that your breadcrumbs are in place (breadcrumbs are used for internal routing between pages), and make sure you use the right path for your routing component in your app module.
I've been having lots of weird issues with my unit tests (see for example here or here) and I wanted to rule out this as a possibility. So, here's my potentially silly question:
Does the should style from Chai block, or is it async? Is it safe to have a done() call after some sort of should chain, or is the ideal solution callbacks of some sort?
I'm pretty sure this isn't the problem. But it seems like I discover a new problem caused by Node's non-blocking IO (or rather, my lack of experience with it) every day and I wanted to check that I wasn't making a mistake here.
I've had a weird experience with .should because it needs to attach itself to the object you are should-ing; I had a better experience with expect(). Whether a test is sync or async depends on the test runner, but the assertions themselves are synchronous: every assertion with expect() runs sequentially and is atomic, so there is no async operation there. The same goes for should.
I prefer expect over should because something.should will throw an error if something is undefined. No other reason for my preference.
Neither should nor expect makes the test async; done is what makes the test async, and done should be called in both the promise resolution and rejection blocks (not just one). You may also want to tweak the Mocha (I assume Mocha) timeout period before done fails. Hope this helps.
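A minimal sketch of that done-in-both-branches advice, with an invented `fetchUser` standing in for real async I/O (in Mocha the wrapper would be `it('loads the user', function (done) { ... })`):

```javascript
function fetchUser(id) {
  return Promise.resolve({ id: id, name: 'Ada' });
}

function spec(done) {
  fetchUser(1).then(function (user) {
    if (user.name !== 'Ada') {
      return done(new Error('wrong user')); // failed expectation
    }
    done(); // resolution path
  }, function (err) {
    done(err); // rejection path — forgetting this one leads to timeouts
  });
}

spec(function (err) {
  console.log(err ? 'failed' : 'passed'); // passed
});
```

If the rejection handler is missing and the promise rejects, done is never called and the test only fails after the timeout period, which is exactly the confusing behavior described above.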
I have a function in the controller of my directive:
$scope.getPossibleWin = function() {
    return betslipService.getPossibleWin();
};
betslipService is injected into the controller.
Now I'm thinking about how to test $scope.getPossibleWin:
Test that $scope.getPossibleWin calls betslipService.getPossibleWin
Test that $scope.getPossibleWin returns the correct value (but this is already tested in betslipService!)
Test that $scope.getPossibleWin simply exists
What are the best practices for testing wrappers?
Option 2 is the best; I'm not very convinced about option 1. I don't have much experience with JavaScript, so I'm not sure why you would have to verify that a function exists (option 3).
You can find more information on it here, but the reason that you should indeed add a test for this method is to prevent yourself from breaking anything in the future. If you rely on that one method five layers deep in your application, it could be that one day you add code in a higher layer which changes the result without being tested; or at some level some code has a side effect which disturbs the result that came from the bowels of your codebase.
Therefore I would suggest you write a test for each (relevant) level. The question of what exactly to test is probably a little preference-oriented, but I would argue that the very least you should do is test whether it returns the correct value, as laid out above.
Should you test that it calls that specific inner method? You could, but that isn't exactly something you do in unit-testing because then you would be testing against the unit's internal workings. You don't care how it works inside, you just care that the function gives you the response that you expected. By coupling these two in your unit-test, you'll end up with a broken test for non-broken code when you decide to refactor the internal workings.
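A minimal sketch of option 2: stub the service and assert on the wrapper's return value, not its internals. The controller wiring below is simplified, and the names mirror the question:

```javascript
// Simplified stand-in for the directive's controller setup.
function wireController($scope, betslipService) {
  $scope.getPossibleWin = function () {
    return betslipService.getPossibleWin();
  };
}

// In a Jasmine spec this stub could be a jasmine.createSpyObj with a
// configured return value.
var betslipServiceStub = {
  getPossibleWin: function () { return 42; }
};

var $scope = {};
wireController($scope, betslipServiceStub);

console.log($scope.getPossibleWin()); // 42
```

Because the test only asserts on the returned value, refactoring the wrapper's internals later won't break it.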
I've used QUnit to write a series of tests for JavaScript code I have. Right now, for some reason, the first test in my list runs, and then the LAST test in the list runs, followed by the 2nd to last, 3rd to last, 4th to last, etc. It's crucial for my tests that things run in the order I have them in. I tried turning off the option where QUnit runs previously failed tests first, but it's still happening. Is there any way to fix this?
First, figure out why your tests MUST run in a specific order. The whole point of unit testing is that the tests are atomic and it should be possible to run them in any order - if your test suite isn't capable of this, you need to figure out why as it may represent a larger problem.
If you can't figure it out, then you may need to break your test suite up into smaller groups of tests until you find the one(s) causing the issue.
edit: Found this reference at http://www.educatedguesswork.org/2011/06/curse_you_qunit_1.html. Apparently, adding this to your test suite will help:
QUnit.config.reorder = false;
Maybe you could consider placing the code that does each major computation in a function that first checks whether the computation has already been done. If not, do the computation and save the result somewhere; if it has already been done, just return the saved result. That way you have a single computation shared by all the tests while remaining agnostic to the order in which the tests run.
I can relate to the problems of time-consuming computations in unit testing, but it is imperative for the test group to be able to take any unit test and execute it as an independent, autonomous test. This is especially true when a critical problem comes up and has to be addressed specifically.
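The compute-once idea above can be sketched like this: every test asks for the shared result, and the expensive work runs only the first time, whatever order QUnit picks. `expensiveComputation` is an invented stand-in for your real computation:

```javascript
var cachedResult = null;
var computationRuns = 0;

function expensiveComputation() {
  computationRuns += 1;
  var total = 0;
  for (var i = 0; i < 1000000; i++) {
    total += i;
  }
  return total;
}

// Each test calls this instead of redoing the work itself.
function getSharedResult() {
  if (cachedResult === null) {
    cachedResult = expensiveComputation();
  }
  return cachedResult;
}

getSharedResult(); // first caller pays the cost
getSharedResult(); // later callers get the cached result
console.log(computationRuns); // 1
```

Each test stays independently runnable (any of them can be the one that triggers the computation), which preserves the autonomy argued for above.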