Why is using jasmine's runs and waitsFor functions necessary? - javascript

I'm testing an asynchronous piece of code with something that looks like this:
randomService.doSomething().then(function() {
  console.log('I completed the operation!');
});
Surprisingly (to me) I've found that it only succeeds (ie console.log output is shown) when wrapped inside jasmine's runs function, like so:
var isDone = false;
runs(function() {
  randomService.doSomething().then(function(data) {
    console.log('I completed the operation!');
    isDone = true;
  });
});
waitsFor(function() {
  return isDone;
}, 'Operation should be completed', 1000);
As I understood it, waitsFor is only there to delay subsequent code - I would use it if I had more code that needed to wait until after the asynchronous call completed. Since nothing comes after this bit of code, I would have thought there'd be no reason for me to use runs and waitsFor at all. That's the impression I got from reading this question: What do jasmine runs and waitsFor actually do? But obviously I've gotten myself mixed up at some point.
Does anyone have any thoughts on this?
EDIT:
Here is a Plunker with far more detail of the problem:
http://plnkr.co/edit/3qnuj5N9Thb2UdgoxYaD?p=preview
Note how the first test always passes, and the second test fails as it should.
Also, I'm sure I should have mentioned this before, but this is using AngularJS and Jasmine 1.3.

I think I found the issue. Here's the article: http://blogs.lessthandot.com/index.php/webdev/uidevelopment/javascript/testing-asynchronous-javascript-w-jasmine/
Essentially it's necessary because Jasmine doesn't wait for asynchronous calls to finish before it completes a test. According to the article, if a call takes long enough and there are more tests later, an expect statement in an asynchronous callback from a previous test could finally execute in a different test entirely, after the original test has completed.
Using runs and waitsFor solves the problem because they force Jasmine to wait for the waitsFor latch to pass before proceeding to the next test. This is a moot point nowadays, however, because Jasmine 2.0 addresses asynchronous testing in a better way than 1.3, making runs and waitsFor obsolete.
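For comparison, here is the Jasmine 2.x style this refers to, alongside a runnable model of the mechanism (the service and payload names here are made up for illustration):

```javascript
// Jasmine 2.x replaces runs/waitsFor with a done callback on the spec:
//
//   it('completes the operation', function(done) {
//     randomService.doSomething().then(function(data) {
//       console.log('I completed the operation!');
//       done();
//     });
//   });
//
// The mechanism is just a completion callback the runner waits on.
// A minimal stand-in, runnable without Jasmine (names are hypothetical):
function doSomething() {
  return Promise.resolve('payload');
}

function spec(done) {
  doSomething().then(function (data) {
    console.log('I completed the operation!');
    done(data);
  });
}

// "Runner": waits for done before moving on
spec(function (data) {
  console.log('spec finished with ' + data);
});
```

The runner holds the spec open until done() fires (or a timeout elapses), which is exactly the waiting behavior runs/waitsFor had to simulate in 1.3.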

That's just how Jasmine works. The question you linked has an answer with a decent explanation:
Essentially, the runs() and waitsFor() functions stuff an array with their provided functions. The array is then processed by Jasmine, wherein the functions are invoked sequentially. Those functions registered by runs() are expected to perform actual work, while those registered by waitsFor() are expected to be 'latch' functions and will be polled (invoked) every 10ms until they return true or the optional registered timeout period expires. If the timeout period expires, an error is reported using the optional registered error message; otherwise, the process continues with the next function in the array.
To sum it up, each waitsFor call must have a corresponding runs call. They work together. Calling waitsFor without a runs call somewhere before it does not make sense.
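The queue-and-latch mechanism described above can be modeled in a few lines. This is a simplified sketch of the idea, not Jasmine's actual implementation:

```javascript
// Simplified model of Jasmine 1.3's runs()/waitsFor() queue.
function runQueue(queue, onError) {
  function next(i) {
    if (i >= queue.length) return;
    var step = queue[i];
    if (step.type === 'runs') {
      step.fn();        // do real work, then move on immediately
      next(i + 1);
    } else {
      // 'waitsFor': poll the latch every 10ms until true or timeout
      var waited = 0;
      (function poll() {
        if (step.fn()) return next(i + 1);
        waited += 10;
        if (waited >= step.timeout) return onError(step.message);
        setTimeout(poll, 10);
      })();
    }
  }
  next(0);
}

// Usage mirroring the snippet in the question:
var isDone = false;
runQueue([
  { type: 'runs', fn: function () { setTimeout(function () { isDone = true; }, 30); } },
  { type: 'waitsFor', fn: function () { return isDone; }, timeout: 1000, message: 'Operation should be completed' },
  { type: 'runs', fn: function () { console.log('ran after the latch opened'); } }
], function (msg) { console.log('timed out: ' + msg); });
```

This also makes the pairing obvious: a waitsFor latch only matters because it gates whatever was queued after it.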
My revised plunker (see comments on this answer): http://plnkr.co/edit/9eL9d9uERre4Q17lWQmw
As you can see, I added $rootScope.$apply(); to the timeout function you were testing with. This makes the console.log inside the promise callback run. HOWEVER, it only runs if you ignore the other test with an xit, AND the expect after the console.log does not seem to be recognized by Jasmine as part of the test (though it certainly must run, because the console.log does).
Very weird - I don't really understand why this is happening, but I think it has something to do with how Jasmine works behind the scenes, how it registers tests and whatnot. My understanding at this point is if you have an expect inside an async callback, Jasmine won't "recognize" it as a part of the test suite unless the initial async call was made inside a runs.
As for why this is, I don't know. I don't think it's worth trying to understand - I would just use runs and waitsFor and not worry about it, but that's just me. You can always dig through the source if you're feeling masochistic. Sorry I couldn't be of more help.

Related

BeforeAll vs. BeforeEach. When to use them?

I was recently looking over a co-worker's code and I realized that he sets up a Jest mock in a beforeAll at the top of the describe block, and then creates a data object in a beforeEach. This made me wonder: what exactly are the differences between beforeAll and beforeEach?
It was time... I went to Google!! I did find some articles that helped shed some light on some of the functionality differences between the two.
Findings 1: http://breazeal.com/blog/jasmineBefore.html
Findings 2: Difference between #Before, #BeforeClass, #BeforeEach and #BeforeAll
From those articles I found that beforeAll is called once and only once, while beforeEach is called before each individual test. Which was great! I now had a better idea of when each was being called!
I also found out that beforeAll is best used for initializing code. Which makes perfect sense! Initialize it once. Boom, you're done.
My confusion is over when something should be initialized once and when it shouldn't. I have found that beforeEach is used in our code more often than not. What I am curious about is what kind of code is considered "initializing" code, versus code that should go in the beforeEach.
An example from our code below:
beforeAll((done) => {
  // Mocking method from within Box file
  transferBoxPlanSpy = jest.spyOn(Box, 'transferPlanFromBox').mockImplementation(() => Promise.resolve());
  // Pulling data from MongoDB
  User.findOne({ user_name: 'testsurgeon1' }, (err, user) => {
    user.addMGSPermission();
    user.save(done);
  });
});
beforeEach(() => {
  planData2 = {
    user_name: 'hello1',
    laterality: 'right',
    plan_id: 'testplan42',
    order_number: '856-hd-02-l',
    file_id: '123456sbyuidbefui',
  };
});
I hope my question isn't too vague. Thank you for your time!
Edit 1
I would like to point out that this code was not written by me, but by one of the members of our software team. He puts the object inside the beforeEach, and the mocks inside the beforeAll.
My confusion is that it seems like almost all of this code could just go in beforeAll, with a few exceptions.
Both are used to set up whatever conditions are needed for one or more tests.
If you're certain that the tests don't make any changes to those conditions, you can use beforeAll (which will run once).
If the tests do make changes to those conditions, then you would need to use beforeEach, which will run before every test, so it can reset the conditions for the next one.
Unless the initialization is slow or computationally expensive, it may be safest to default to using beforeEach as it reduces the opportunity for human error, i.e. not realizing that one test is changing the setup for the next one.
The sample you showed is a good example of using both in combination -- the slow network call is put in beforeAll, so it only has to happen once; and the data object (which is presumably modified by the tests) is reset each time in beforeEach.
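That distinction can be shown in a runnable sketch (plain JavaScript, not Jest itself; the field names echo the sample above):

```javascript
// A test that mutates its setup data poisons the next test when the data
// is shared beforeAll-style, but not when it is rebuilt beforeEach-style.
function runTwoTests(getData) {
  var results = [];
  var d1 = getData();
  d1.laterality = 'left';      // test 1 mutates the setup object
  results.push(d1.laterality);
  var d2 = getData();          // test 2 expects the pristine value
  results.push(d2.laterality);
  return results;
}

var shared = { laterality: 'right' };
console.log(runTwoTests(function () { return shared; }));
// beforeAll-like (one shared object): [ 'left', 'left' ]
console.log(runTwoTests(function () { return { laterality: 'right' }; }));
// beforeEach-like (fresh object per test): [ 'left', 'right' ]
```

The second test only sees 'right' when the object is rebuilt each time, which is exactly what putting it in beforeEach buys you.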
I know this is an old post, but in case others come looking I would like to add some important information that I was surprised not to find mentioned here:
beforeAll is specifically useful for ASYNCHRONOUS setup that needs to complete before the tests are run.
Note that the beforeAll in the original post doesn't return a promise, but it does take a done callback; either mechanism makes Jest wait for the setup to finish before running the tests. Purely synchronous setup could simply be placed inline before your first describe or test.
See the jest docs: https://jestjs.io/docs/setup-teardown
In some cases, you only need to do setup once, at the beginning of a file. This can be especially bothersome when the setup is asynchronous, so you can't do it inline. Jest provides beforeAll and afterAll to handle this situation.
E.g. the following returns a promise which will resolve before the tests proceed to run.
beforeAll(() => {
  return initializeCityDatabase();
});

When creating a function to run only one time, should the redefinition happen before or after the function?

I want a function to only run one time. I found this question here on SO and the highest voted examples redefine the function and then execute the code that is supposed to run once. I'm sure there is a good reason for doing that instead of running the code and then redefining the function but I can't figure out what that would be. My first instinct would be to only redefine the function after the code is run since that seems "safer" because the code would have to have run before being redefined.
Can someone explain why convention seems to be to redefine first?
Basically, if you want to ensure the function's body never runs twice, disabling the function immediately, before the body executes, is the safest thing to do. When nothing can happen between the call and the disabler, nothing unexpected can happen either. If you were to run the run-once code first, various things might happen:
the code calls the function recursively, or through a callback - you should consider reentrancy
the code throws an exception
the code returns early
In any of these cases, and possibly more, the disabling code is never reached and a second call could succeed.
Notice also that replacing the function is not enough. A caller could easily have created an alias variable that still holds the original function, unaffected by setting some other variable to a noop function. You always need to combine this solution with the "static boolean variable" approach from the accepted answer, unless you control the calling code and know what it does.
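A minimal once() wrapper combining both pieces of advice - flip the flag before running the body, and rely on the flag rather than reassignment alone - might look like this (a sketch, not code from the linked answer):

```javascript
function once(fn) {
  var called = false;
  return function () {
    if (called) return;    // the flag guards even aliased references to the wrapper
    called = true;         // flipped BEFORE fn runs, so re-entrant calls are no-ops
    return fn.apply(this, arguments);
  };
}

var count = 0;
var init = once(function () {
  count += 1;
  init(); // re-entrant call from inside the body -- safely ignored
});
init();
init(); // second external call -- also ignored
console.log(count); // 1
```

Because the flag lives in the closure, even a caller holding an alias to the wrapped function cannot trigger a second run.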
Looks like the main reason is to close the window during which a second call could sneak in. If a function is supposed to run only once and it gets called a second time before the first call completes (re-entrantly, via a callback, or in multi-threaded languages from another thread), the body would run more than once. Redefining it first means the function can't run twice.

Is this recursion or not

function x() {
  window.setTimeout(function() {
    foo();
    if (notDone()) {
      x();
    }
  }, 1000);
}
My concern being unbounded stack growth. I think this is not recursion since the x() call in the timer results in a brand new set of stack frames based on a new dispatch in the JS engine.
But reading the code as an old-fashioned non JS guy it makes me feel uneasy
One extra side question: what happens if I schedule something (based on math rather than a literal) that results in no delay? Would it execute in place, would it be immediately executed but still async, or is that implementation-defined?
It's not - I call it "pseudo-recursion".
The rationale is that it kind of looks like recursion, except that the function always correctly terminates immediately, hence unwinding the stack. It's then the JS event loop that triggers the next invocation.
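The difference is easy to demonstrate directly (a sketch; the exact stack limit varies by engine):

```javascript
// Direct recursion grows the stack; the setTimeout version does not,
// because each invocation starts from an empty stack via the event loop.
function syncLoop(n) {
  if (n === 0) return 'done';
  return syncLoop(n - 1); // each call adds a stack frame
}

function asyncLoop(n, cb) {
  if (n === 0) return cb('done');
  setTimeout(function () { asyncLoop(n - 1, cb); }, 0); // stack unwinds between steps
}

try {
  syncLoop(1e6); // deep enough to overflow on typical engines
} catch (e) {
  console.log('sync version blew the stack: ' + e.constructor.name);
}
asyncLoop(50, function (msg) {
  console.log('async version finished: ' + msg);
});
```

The async version can loop indefinitely without growing the stack, which is why the pattern in the question is safe.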
It is recursive in the sense that it is a function that calls itself, but I believe you are right about the stack trace being gone. Under normal execution the stack will just show that it was invoked by setTimeout. The Chrome debugger, for example, will let you keep stack traces across async execution; I am not sure how they do it, but the engine can keep track of the stack somehow.
No matter how the literal is calculated the execution will still be async.
setTimeout(function() { console.log('timeout'); }, 0);
console.log('executing');
will output (the undefined is just the browser console echoing the return value of the last statement):
executing
undefined
timeout
One extra side question, what happens if I scheduled something (based on math rather than a literal) that resulted in no delay. Would that execute in place or would it be in immediately executed async, or is that implementation defined
Still asynchronous. It's just that the timer will be processed immediately once the function returns and the JavaScript engine can process events on the event loop.
Recursion has many different definitions, but if we define it as the willful (as opposed to bug-induced) use of a function that calls itself repeatedly in order to solve a programming problem (which seems to be a common definition in a JavaScript context), it absolutely is.
The real question is whether or not this could crash the browser. The answer is no, in my experience... at least not on Firefox or Chrome. Whether good practice or not, what you've got there is a pretty common JavaScript pattern, used in a lot of semi-real-time web applications. Notably, Twitter used to do something very similar to provide users with semi-real-time feed updates (I doubt they still do it now that they're using a Node server).
Also, out of curiosity I ran your script with the schedule reset to run every 50ms, and experienced no slowdowns.

Why do you need to call both replayAll and verifyAll when using Google Closure mocks?

I'm reading code that uses Closure's mocks and am a bit confused by the syntax. Many tests look like this:
mockChart();
// Test
this.mockControl.$replayAll();
this.mainMethod(testData);
// Verify
this.mockControl.$verifyAll();
// lots of asserts
I don't understand why one would call both replay and then later verify. It sounds like replay is actually doing the work of record, which I would have expected to have started already.
The flow is a bit different from Mockito, the only other framework I'm familiar with, and I haven't found good documentation at this level (just class-level jsdoc).
You can think of there being two stages when mocking something with Closure.
During the first, the test informs the mock framework of what calls are expected and how it should respond to them. Calling mockFoo.doTheThing() during this stage will add an expected call to doTheThing on the Foo mock.
During the second, the mock framework records calls while the test is running. Calling mockFoo.doTheThing() during this stage will record the fact that doTheThing was called, and possibly run some test code that was added in the first stage.
The first stage starts when the MockControl object is created, and ends when $replayAll is called. The second stage starts when $replayAll is called and ends when $verifyAll is called, at which point the mock framework checks that all of the expected method calls were made.
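A stripped-down model of those two stages may make this concrete. This is an illustration of the record/replay/verify idea only, not Closure's actual MockControl API:

```javascript
// Minimal record/replay/verify mock control (illustrative sketch).
function MockControl() {
  this.expected = [];
  this.actual = [];
  this.replaying = false;
}
MockControl.prototype.call = function (name) {
  if (!this.replaying) this.expected.push(name); // stage 1: record an expectation
  else this.actual.push(name);                   // stage 2: record the real call
};
MockControl.prototype.replayAll = function () { this.replaying = true; };
MockControl.prototype.verifyAll = function () {
  if (this.expected.join() !== this.actual.join()) {
    throw new Error('expected [' + this.expected + '] but got [' + this.actual + ']');
  }
};

var ctl = new MockControl();
ctl.call('doTheThing');  // before replayAll: registers an expectation
ctl.replayAll();         // switch stages
ctl.call('doTheThing');  // after replayAll: records the actual call
ctl.verifyAll();         // passes: actual calls match expectations
console.log('verified');
```

In other words, $replayAll is the pivot between "describe what should happen" and "let it happen", and $verifyAll is where the comparison finally occurs.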

Non-blocking asynchronous tests using QUnit

It seems that the QUnit functions stop() and start() allow to wait for asynchronous tests, but during that waiting period the whole test suite hangs. Is there a way to run asynchronous tests in a non-blocking fashion using QUnit?
Looking at the docs for asyncTest and stop, there are two reasons I can see for it being set up like that.
So that you aren't accidentally running two tests at a time which might conflict with each other (i.e., modifying the DOM and so changing each other's test results).
So that QUnit knows when the tests have finished. If it comes to the end of all the synchronous tests, then it'll write up the results, which you don't really want it to do if there are still async tests happening in the background.
So these are good things, and you probably don't actually want the async tests to run without blocking. You could probably do it by calling start immediately after the start of your async tests, but remember that JavaScript is actually single-threaded (even though it sometimes gives the appearance of multi-threading), so this might cause unexpected results: you can't guarantee when your async test will continue running... it might not (and probably won't) be until after the other tests have finished and the results have been published.
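A tiny model (plain JavaScript, not QUnit itself) shows why the runner blocks: if the report is written as soon as the synchronous queue drains, a late async assertion never makes it into the results.

```javascript
// One "async test" whose assertion lands 10ms after the sync queue drains.
function runSuite(blockOnAsync, report) {
  var results = [];
  var pending = 1; // one async test in flight
  setTimeout(function () {
    results.push('async assertion');
    pending--;
    if (blockOnAsync && pending === 0) report(results); // stop()/start() style: wait
  }, 10);
  if (!blockOnAsync) report(results); // non-blocking: report now, assertion lost
}

runSuite(false, function (r) { console.log('non-blocking report:', r); }); // []
runSuite(true, function (r) { console.log('blocking report:', r); });      // [ 'async assertion' ]
```

The stop()/start() pairing plays the role of the pending counter here: the runner refuses to finalize results until every started async test has checked back in.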
