Getting QUnit to run tests in order - javascript

I've used QUnit to write a series of tests for JavaScript code I have. Right now, for some reason, the first test in my list runs, and then the LAST test in the list runs, followed by the 2nd to last, 3rd to last, 4th to last, and so on. It's crucial for my tests that things run in the order I have them in. I tried turning off the option where QUnit runs tests that failed last time first, but it's still doing this. Is there any way to fix this?

First, figure out why your tests MUST run in a specific order. The whole point of unit testing is that the tests are atomic and it should be possible to run them in any order. If your test suite isn't capable of this, you need to figure out why, as it may represent a larger problem.
If you can't figure it out, then you may need to break your test suite up into smaller groups of tests until you find the one(s) causing the issue.
edit: Found this reference at http://www.educatedguesswork.org/2011/06/curse_you_qunit_1.html. Apparently, adding this to your test suite will help:
QUnit.config.reorder = false;
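A minimal sketch of where that flag goes (test names are made up, modern QUnit style); set it before any tests are registered:

// Disable QUnit's "failed tests run first" reordering so tests execute in source order.
QUnit.config.reorder = false;

QUnit.test("step one", function (assert) {
  assert.ok(true, "runs first");
});

QUnit.test("step two", function (assert) {
  assert.ok(true, "runs second");
});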

Maybe you could consider placing the code that does each major computation in a function that first checks whether the computation has already been done. If it hasn't, do the computation and save the result somewhere; if it has, just return the saved result. That way you get a single computation shared by all the tests while staying agnostic to the order in which the tests run.
I can relate to the problem of time-consuming computations in unit testing, but it is imperative that the test group be able to take any unit test and execute it as an independent, autonomous test. This is especially true when a critical problem comes up and has to be addressed specifically.
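A minimal sketch of that lazy-computation pattern (runExpensiveComputation is hypothetical):

let cachedResult = null;

// Whichever test runs first pays the cost; the rest reuse the result.
function getExpensiveResult() {
  if (cachedResult === null) {
    cachedResult = runExpensiveComputation();
  }
  return cachedResult;
}

QUnit.test("any test, in any order", function (assert) {
  assert.ok(getExpensiveResult(), "result is available");
});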

Related

Why a web element throws CypressError with TimedOut reason, only randomly?

In my script, I am trying to locate and click one of the many document links, with this syntax:
cy.wait(3000);
cy.get('a[href^="/articleDetail/"]').first().click();
I got this error:
CypressError: Timed out retrying: Expected to find element:
'a[href^="/articleDetail/"] but never found it'
The issue is that this happens only a few times, not every time. Maybe 3 out of 5 runs. How should I solve this issue?
Testing it via the Selector Playground (as N. suggested) is a good step. You can also investigate the screenshots Cypress takes on failure; they show the exact state of the application when the failure happened, which usually gives a good hint at the problem.
Besides that, you can also try setting the wait to an absurd value like 10000. If Cypress can find the element in that case, the application is slow and Cypress is simply not waiting long enough.
For different reasons (internet speed, CPU, memory, errors) your page could take longer to load, or not load at all. As a good practice, your page should have a loading indicator that is shown until the page is completely rendered. That way you can write something like cy.get('your-loading-element').should('not.be.visible'), which holds the next command while the loading indicator is in place.
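A minimal sketch of that idea (the loading-element selector is hypothetical):

// Hold until the loading indicator is gone, then interact.
cy.get('[data-test="loading-spinner"]').should('not.be.visible');
cy.get('a[href^="/articleDetail/"]').first().click();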
Waiting a fixed amount of time is not the right approach: you never know exactly how long it will take, and raising the time only delays your tests.
It is very important to think of your tests the same way a test analyst would execute them, because one of their steps would be to wait for the page to be rendered.
Here are some good testing practices: UI test automation good practices

How to/ best way to run a test for each element found? Protractor/Jasmine

I am using Protractor and Jasmine.
I'm automating a website that is heavily data-driven, with a lot of dynamic elements (the elements displayed depend on the data available). As a result, I never know exactly how many elements there are, but I need to test each one, since data-driven means that just because one works doesn't mean the rest will.
I'm not sure the best way to go about this - I've done tons of research, but none of the ideas work, or only partially work.
What I've tried:
1. Throwing an it block into a loop that dynamically grabs the element count.
This doesn't work because Jasmine appears to decide which (and how many) tests run when the suite is first evaluated. Since I need to get to the page before I can grab the count, the count is always 0, so it runs the test 0 times. It only works with static data, but again, my data is dynamic; at least, I couldn't find a way to make it work.
2. Throwing an it block into a loop that loops using a variable, then reassigning the variable in a beforeAll.
Same issue as the previous attempt: reassigning doesn't work because Jasmine uses the value that was available when the suite was evaluated.
3. Looping through the elements inside an it and doing an expect for each element.
This works for the most part, but only the first error gets reported. I'd ideally like to see every element that has an issue. Jasmine loops through all the elements even when one expect fails, so I'm not sure why it doesn't report them all, or how to report them all.
I'd prefer to use solution #3 if I can see all expect failures (see the sketch below), but I'm open to any suggestions. If there's a better way, a best practice, or a different way you handle this instead of what I have listed, I'd like to hear it as well.
Let me know if any more information is needed.
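For reference, a sketch of approach #3 in Protractor (the selector and assertion are hypothetical). Jasmine normally records every failed expect in a spec rather than stopping at the first, so if only the first failure shows up, check that the stopSpecOnExpectationFailure option isn't enabled in your Jasmine/Protractor config:

it('validates every dynamic element', async function () {
  const items = element.all(by.css('.dynamic-item')); // selector is assumed
  const count = await items.count(); // resolved at run time, not at suite definition
  for (let i = 0; i < count; i++) {
    // Each failing expect is recorded; the loop keeps going.
    expect(await items.get(i).getText()).not.toBe('');
  }
});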

Is the Chai BDD style 'should' async?

I've been having lots of weird issues with my unit tests (see for example here or here) and I wanted to rule out this as a possibility. So, here's my potentially silly question:
Does the should style from Chai block, or is it async? Is it safe to have a done() call after some sort of should chain, or is the ideal solution callbacks of some sort?
I'm pretty sure this isn't the problem. But it seems like I discover a new problem caused by Node's non-blocking IO (or rather, my lack of experience with it) every day and I wanted to check that I wasn't making a mistake here.
I've had a weird experience with .should because it needs to attach itself to the object you are should-ing; I had a better experience with expect(). Whether a test is sync or async depends on the test runner, and Mocha runs assertions synchronously. Every assertion with expect() runs sequentially and is atomic, so there is no async operation there. The same goes for should.
I prefer expect over should because something.should will throw an error if something is undefined. No other reason for my preference.
Neither should nor expect makes the test async. done is what makes the test async, and done should be called in both the promise resolution and rejection paths (not just one). You may also want to tweak the Mocha (I assume Mocha) timeout period before done fails. Hope this helps.
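A minimal sketch of that advice, assuming Mocha with Chai's expect (fetchValue is a hypothetical promise-returning function):

it('resolves with the expected value', function (done) {
  fetchValue()
    .then(function (value) {
      expect(value).to.equal(42);
      done(); // success path
    })
    .catch(function (err) {
      done(err); // failure path: report the error instead of timing out
    });
});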

jasmine tests, how can I improve the performance?

I'm using Jasmine and Karma for the unit tests of my app. With approx. 1000 tests at the moment, it takes around 10 seconds until they're finished.
It's not a problem right now, but in a couple of months the number of tests might become much bigger and I'd like to know if there's anything I can do to make them run faster, at least locally.
I found out that using:
jasmine.any(Object)
is much faster than comparing big objects.
Changing:
expect(some.method).toHaveBeenCalledWith("xyz");
into:
expect(some.method.calls.argsFor(0)[0]).toBe("xyz");
also seems to be a little bit faster.
Karma is lovely, but it doesn't seem to offer anything that improves performance yet. It's really useful for debugging, though (reportSlowerThan).
Any other ideas how can I improve the performance of the tests?
What kind of performance improvement are you seeing from switching away from toHaveBeenCalledWith?
I appreciate what you're trying to achieve: you have a test suite that takes 10 seconds and you're right to try to improve that situation. But if the savings are in the < 500ms range, I would be careful, as the readability and clarity of your tests are put at risk.
toHaveBeenCalledWith communicates your intentions to others much better than the argsFor approach does, as does the message that would be displayed if that test were to fail:
Expected function to have been called with "xyz"
vs
Expected undefined to be "xyz"
With that said, some ideas...
1
Look for areas where you can safely replace beforeEach calls:
The beforeEach function is called once before each spec in the describe in which it is called
with beforeAll calls:
The beforeAll function is called only once before all the specs in a describe are run.
But be careful not to introduce shared state between tests, which could skew your results. (Using Jasmine's option to run tests in a random order might help catch this, but I'm not sure how beforeAll is handled by it; it could be that those specs are still run together.)
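A sketch of that swap (buildLargeFixture is hypothetical); note the specs must treat the shared value as read-only:

describe('report rendering', function () {
  let fixture;

  beforeAll(function () {
    fixture = buildLargeFixture(); // expensive setup, now run once per describe
  });

  it('renders a title', function () {
    expect(fixture.title).toBeDefined();
  });

  it('renders rows', function () {
    expect(fixture.rows.length).toBeGreaterThan(0);
  });
});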
2
Continue using reportSlowerThan as you have been, and pick off any tests that are really slow. If changes like the one you suggested are unavoidable, put them behind helper functions with well-chosen names so that what you're trying to achieve is still clear to other developers. Better still, create Custom Matchers for them, because that will also result in clear messages when the tests fail (add-matchers can help make this easier).
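For instance, a sketch of such a custom matcher (the name and message are made up) wrapping the faster argsFor check:

beforeEach(function () {
  jasmine.addMatchers({
    // Fast argsFor-based check with a readable failure message.
    toHaveFirstCallArg: function () {
      return {
        compare: function (spy, expected) {
          var actual = spy.calls.argsFor(0)[0];
          var pass = actual === expected;
          return {
            pass: pass,
            message: 'Expected first call argument ' + jasmine.pp(actual) +
              (pass ? ' not' : '') + ' to be ' + jasmine.pp(expected)
          };
        }
      };
    }
  });
});

// Usage: expect(some.method).toHaveFirstCallArg('xyz');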
3
Consider switching from Jasmine to Jest: tests are written in pretty much the same way, but the test runner is much faster.

Should I write unit-tests for 'wrapper' methods

I have a function in the controller of my directive:
$scope.getPossibleWin = function() {
  return betslipService.getPossibleWin();
}
betslipService is injected into the controller.
Now I'm thinking about how to test $scope.getPossibleWin:
1. Test that $scope.getPossibleWin calls betslipService.getPossibleWin
2. Test that $scope.getPossibleWin returns the correct value (but this is already tested in betslipService!)
3. Test that $scope.getPossibleWin simply exists
What are the best practices for testing wrappers?
Option 2 is the best; I am not very convinced about option 1. I don't have experience with JavaScript, so I'm not sure why you would have to verify that a function exists (option 3).
You can find more information on it here, but the reason you should indeed add a test for this method is to prevent yourself from breaking anything in the future. If you only rely on that one method five layers deep in your application, it could be that one day you add code in a higher layer which changes the result but is not being tested. Or at some level some code has a side effect which disturbs the result that came from the bowels of your codebase.
Therefore I would suggest you write a test for each (relevant) level. The question of what exactly you should test is probably somewhat a matter of preference, but I would argue that the very least you should do is test whether it returns the correct value, as laid out above.
Should you test that it calls that specific inner method? You could, but that isn't really something you do in unit testing, because then you would be testing the unit's internal workings. You don't care how it works inside; you just care that the function gives you the response you expected. By coupling the two in your unit test, you'll end up with a broken test for non-broken code when you decide to refactor the internals.
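A minimal sketch of option 2 with a Jasmine spy (the stubbed value is arbitrary, and the controller/$scope setup is assumed to exist):

it('returns the possible win from the service', function () {
  // Stub the service so the test only exercises the wrapper.
  spyOn(betslipService, 'getPossibleWin').and.returnValue(150);

  expect($scope.getPossibleWin()).toBe(150);
});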
