Unit testing window.onerror with jasmine - javascript

I am fairly new to JavaScript and am trying to use Jasmine to unit test some error-handling code.
In particular, I'm trying to write some tests that verify that our custom code (called windowHandleError), which replaces window.onerror, gets called and does what we want it to.
I've tried something along the lines of:
it("testing window.onerror", function() {
spyOn(globalerror, 'windowHandleError');
globalerror.install();
var someFunction = function() {
undefinedFunction();
};
expect(function() {someFunction();}).toThrow();
expect(globalerror.windowHandleError).toHaveBeenCalled();
});
But it doesn't trigger the onerror. There are some related questions I've looked at, but they seem to ask about specific browsers, or how/where to use onerror instead of how to test it.
window.onerror not firing in Firefox
Capturing JavaScript error in Selenium
window.onerror does not work
How to trigger script.onerror in Internet Explorer?
Based on what some of those said, I thought running the spec tests in a debugger would force the onerror to trigger, but no dice. Anyone know a better approach to this?

I recently developed a small JavaScript error handler with unit tests based on Buster.JS, which is similar to Jasmine.
The test that exercises the window.onerror looks like this:
"error within the system": function (done) {
setTimeout(function() {
// throw some real-life exception
not_defined.not_defined();
}, 10);
setTimeout(function() {
assert.isTrue($.post.called);
done();
}, 100);
}
It throws a real error inside a setTimeout callback, so it does not stop the test execution; a second setTimeout then checks after 100ms that a spy was called and calls done(), which is how you test async functionality with Buster.JS. The same approach is available in Jasmine by using done() in async tests.
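The same pattern translates to Jasmine roughly like this (a sketch only; it assumes globalerror.install() assigns the now spied-on windowHandleError to window.onerror, and that the test runner itself does not intercept the uncaught error first):
it("calls windowHandleError for an uncaught error", function (done) {
    spyOn(globalerror, 'windowHandleError');
    globalerror.install();

    setTimeout(function () {
        // throw a real error outside the spec's own call stack
        not_defined.not_defined();
    }, 10);

    setTimeout(function () {
        expect(globalerror.windowHandleError).toHaveBeenCalled();
        done();
    }, 100);
});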

Without specific knowledge of Jasmine: all unit tests run inside a try/catch block so that if one test dies, the next test can still run (true for QUnit, at least). And since window.onerror doesn't fire for exceptions that are already caught by a try/catch, it will not run when you trigger the error inside a unit test.
Try calling the onerror function manually based on the exception.
try {
    // Code that should fail here.
    someUndefinedFunction();
} catch (e) {
    window.onerror.call(window, e.toString(), document.location.toString(), 2);
}
expect(globalerror.windowHandleError).toHaveBeenCalled();
This is far from perfect, as document.location is not the same as the url argument, and you have to set the line number manually. A better way would be to parse e.stack for the correct file and line number.
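For instance, a rough sketch of pulling the file and line number out of e.stack (stack formats differ between browsers, so the frame index and regex here are assumptions):
try {
    someUndefinedFunction();
} catch (e) {
    // Assumes a V8-style frame such as "    at fn (http://host/file.js:12:34)"
    var frame = (e.stack || '').split('\n')[1] || '';
    var match = /([^\s()]+):(\d+):\d+\)?$/.exec(frame);
    window.onerror.call(
        window,
        e.toString(),
        match ? match[1] : document.location.toString(),
        match ? parseInt(match[2], 10) : 0
    );
}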
When calling the function manually like this inside a unit test, it might be a better idea to simply test that your handler is set and that it behaves properly when called with completely faked arguments.

Related

Expect jasmine Spy to be called "eventually", before timeout

I've written a lot of asynchronous unit tests lately, using a combination of Angular's fakeAsync, returning Promises from async test body functions, the Jasmine done callback, etc. Generally I've been able to make everything work in a totally deterministic way.
A few parts of my code interact in deeply-tangled ways with 3rd party libraries that are very complex and difficult to mock out. I can't figure out a way to hook an event or generate a Promise that's guaranteed to resolve after this library is finished doing background work, so at the moment my test is stuck using setTimeout:
class MyService {
    public async init() {
        // Assume library interaction is a lot more complicated to replace with a mock than this would be
        this.libraryObject.onError.addEventListener(err => {
            this.bannerService.open("Load failed!" + err);
        });
        // Makes some network calls, etc, that I have no control over
        this.libraryObject.loadData();
    }
}
it("shows a banner on network error", async done => {
setupLibraryForFailure();
await instance.init();
setTimeout(() => {
expect(banner.open).toHaveBeenCalled();
done();
}, 500); // 500ms is generally enough... on my machine, probably
});
This makes me nervous, especially the magic number in the setTimeout. It also scales poorly, as I'm sure 500ms is far longer than any of my other tests take to complete.
What I think I'd like to do, is be able to tell Jasmine to poll the banner.open spy until it's called, or until the test timeout elapses and the test fails. Then, the test should notice as soon as the error handler is triggered and complete. Is there a better approach, or is this a good idea? Is it a built-in pattern somewhere that I'm not seeing?
I think you can take advantage of callFake, which basically calls another function once the spied-on function is called.
Something like this:
it("shows a banner on network error", async done => {
setupLibraryForFailure();
// maybe you have already spied on banner open so you have to assign the previous
// spy to a variable and use that variable for the callFake
spyOn(banner, 'open').and.callFake((arg: string) => {
expect(banner.open).toHaveBeenCalled(); // maybe not needed because we are already doing callFake
done(); // call done to let Jasmine know you're done
});
await instance.init();
});
We set a spy on banner.open, and when it is called, the callFake callback runs; inside that callback we call done, letting Jasmine know we are done with our assertions.
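If banner.open has already been spied on in a beforeEach (as the comment above hints), a sketch of the same idea that reuses the existing spy rather than calling spyOn again:
it("shows a banner on network error", async done => {
    setupLibraryForFailure();
    // banner.open is assumed to already be a Jasmine spy at this point
    banner.open.and.callFake(() => done());
    await instance.init();
});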

Why does the debugger fail and test pass unless done() callback invoked?

1) Can anyone explain why, when debugging this Jasmine test for hapi, the debugger never hits any breakpoint inside the injected section (see comment) unless done is called later on? How can the absence of a line of code that is not yet reached affect the debugger earlier on?
I am aware that it is important to call the done method (which I have commented out on purpose). I am however surprised by the consequences.
2) Another unfortunate side-effect of forgetting to call the done method is that the test always passes. Instead of passing I would rather see it fail if I make an error. Any suggestions?
const server = require("../lib/server");

describe("Server hello", function () {
    it("returns status code 200", function (done) {
        server.inject({ method: 'GET', url: '/' }, (res) => {
            // Never reached unless done below is uncommented - even by a debugger breakpoint - why?
            console.log("GOT " + res.payload);
            expect(res.statusCode).toBe(200);
            // done(); // Test always passes while this stays commented out - is there any way to force an error instead?
        });
    });
});
Read the source, Luke! Jasmine docs for asynchronous testing note:
This spec will not start until the done function is called in the call to beforeEach above. And this spec will not complete until its done is called.
So if you don't call done, your spec is not run to completion, rather than running and then timing out!
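A minimal sketch of the spec with done wired up (assuming Jasmine 2.1+ where done.fail is available), so a failing expectation in the async callback is reported instead of the spec silently passing or hanging:
it("returns status code 200", function (done) {
    server.inject({ method: 'GET', url: '/' }, (res) => {
        try {
            expect(res.statusCode).toBe(200);
            done();              // spec only completes when this runs
        } catch (err) {
            done.fail(err);      // report the failure explicitly
        }
    });
});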

Q.delay not working with Q promise library and Jasmine clock?

I have a very basic Jasmine test using Q, and it doesn't seem to be working. I'm simply using the Jasmine mock clock and trying to use Q.delay, which I think uses setTimeout under the hood.
I had some more complex tests that involved calling setTimeout() from a Q promise's then handler, and that didn't seem to work either, but I thought this would make a simpler test case to post on Stack Overflow.
Here's my very simple test case:
it('clock test', function() {
    jasmine.clock().install();

    var foo = null;
    Q.delay('hi', 10000).then(function(arg) {
        console.log('foo');
        foo = arg;
    });

    jasmine.clock().tick(10010);
    expect(foo).toEqual('hi');

    jasmine.clock().uninstall();
});
(This test was based on the test case found in a similar SO question: Jasmine clock tick & Firefox: failing to trigger a Q.delay method)
When I run the test, it fails saying Expected null to equal 'hi'. The console.log never even executes.
To see if the problem was with Q or something else, I tried adding a simple setTimeout call inside the spec:
setTimeout(function() {
    console.log("bar");
}, 10000);
This worked - bar was printed to the console after the call to jasmine.clock().tick.
After the Jasmine clock is uninstalled, the normal clock kicks in, and after waiting 10 seconds, 'foo' finally gets printed out.
Anyone have any idea what's going on?
You are asking it to do async work in a sync way. Don't. Try using Jasmine's async support instead and drop the clock.
it('clock test', function(done) {
    var foo = null;
    Q.delay('hi', 10000).then(function(arg) {
        console.log('foo');
        foo = arg;
        expect(foo).toEqual('hi');
        done();
    });
}, 15000);
Anyone have any idea what's going on?
Probably Q is serious about the asynchrony guarantee given in the Promises/A+ spec. So even when setTimeout executes earlier than expected, that is no reason to suddenly call then callbacks synchronously; it still needs to wait a tick after the promise is fulfilled.
An alternative explanation would be that Q took its private copy of setTimeout during its module initialisation, to prevent exactly this kind of messing around with builtins. Instead of calling the global function, it would use its internal reference to the old function and not be affected by jasmine.clock() at all.
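A hypothetical illustration of that second explanation: if a library captures the global at load time, replacing window.setTimeout afterwards (which is what jasmine.clock().install() does) has no effect on the captured reference.
// Hypothetical module that grabbed its own reference at load time
var realSetTimeout = window.setTimeout;

jasmine.clock().install();        // replaces window.setTimeout with the mock clock

realSetTimeout(function () {      // still scheduled on the real timer, so
    console.log('unaffected');    // jasmine.clock().tick(...) will never fire it
}, 10000);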
You can spyOn Q.delay and avoid having your test wait for the given delay:
describe('Q.delay', function() {
    it('should call Q.delay', function(done) {
        // callFake keeps the promise chain intact by returning the promise itself
        spyOn(Q.makePromise.prototype, 'delay').and.callFake(function() {
            return this;
        });

        Q('A resolve value').delay(3000).then(done);

        expect(Q.makePromise.prototype.delay).toHaveBeenCalledWith(3000);
    });
});

How does Mocha know about test failure in an asynchronous test?

I am trying to understand how the asynchronous code for Mocha (at http://mochajs.org/#getting-started) works.
describe('User', function() {
    describe('#save()', function() {
        it('should save without error', function(done) {
            var user = new User('Luna');
            user.save(function(err) {
                if (err) throw err;
                done();
            });
        });
    });
});
I want to know how Mocha decides whether a test has succeeded or failed behind the scenes.
I can understand from the above code that user.save() being asynchronous would return immediately. So Mocha would not decide if the test has succeeded or failed after it executes it(). When user.save() ends up calling done() successfully, that's when Mocha would consider it to be a successful test.
I cannot understand how Mocha would ever come to know about a test failure in the above case. Say user.save() calls its callback with the err argument set; the callback then throws an error. None of Mocha's functions is called in this case. Then how would Mocha know that an error occurred in the callback?
Mocha is able to detect failures that prevent calling the callback or returning a promise because it uses process.on('uncaughtException', ...); to detect exceptions which are not caught. Since it runs all tests serially, it always knows to which test an uncaught exception belongs. (Sometimes people are confused by this: telling Mocha a test is asynchronous does not mean Mocha will run it in parallel with other tests. It just tells Mocha it should wait for a callback or a promise.)
Unless there is something that intervenes to swallow exceptions, Mocha will know that the test failed and will report the error as soon as it detects it. Here is an illustration. The first test fails due to a generic exception being thrown. The second one fails due to an expect check that fails, which also surfaces as an uncaught exception.
var chai = require("chai");
var expect = chai.expect;
it("failing test", function (done) {
setTimeout(function () {
throw new Error("pow!");
done();
}, 1000);
});
it("failing expect", function (done) {
setTimeout(function () {
expect(1).to.equal(2);
done();
}, 1000);
});
This is the output on my console:
1) failing test
2) failing expect
0 passing (2s)
2 failing
1) failing test:
Uncaught Error: pow!
at null._onTimeout (test.js:6:15)
2) failing expect:
Uncaught AssertionError: expected 1 to equal 2
+ expected - actual
-1
+2
at null._onTimeout (test.js:13:22)
The stack traces point to the correct code lines. If the exceptions happened deeper, the stack would be fuller.
When Mocha cannot report what went wrong exactly, that's usually because intervening code swallows the exception that was raised. Or, when you use promises, the problem may be that someone forgot to call the method that marks the promise chain as fully processed so that unhandled exceptions are rethrown. (How you do this depends on the promise implementation you use.)
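As a hypothetical illustration of the swallowed-exception case: a catch block that neither rethrows nor fails the test leaves Mocha with nothing to report, so the spec simply times out.
it("times out instead of reporting the assertion", function (done) {
    setTimeout(function () {
        try {
            expect(1).to.equal(2);   // throws an AssertionError...
            done();
        } catch (e) {
            // ...but it is swallowed here, so Mocha reports only
            // "Error: timeout of 2000ms exceeded", not the real cause
        }
    }, 1000);
});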
It won't, and it's a shame. It has no way of knowing that your callback is executing. The done callback is just an easier way to do asynchronous testing, where you tell the framework when you are finished; the downside, as you have noticed, would be that errors in asynchronous callbacks go undetected. Never mind: Mocha hooks process.on('uncaughtException', ...), as mentioned by Louis. Note that if you use done instead of waitsFor and runs in Jasmine, you will have this problem.
Other frameworks like js-test-driver force you to wrap callbacks so that the testing framework can put a try/catch around them (and you don't need to call done). Your test would look like the following:
var AsynchronousTest = AsyncTestCase('User');

AsynchronousTest.prototype.testSave = function(queue) {
    queue.call('Saving user', function(callbacks) {
        var user = new User('Luna');
        user.save(callbacks.add(function(err) {
            if (err) throw err;
            // Run some asserts
        }));
    });
};

Event callback's code coverage

I use Karma (currently v0.10.10) and Jasmine for my unit tests, and Istanbul (via karma-coverage) for code coverage reports. I've noticed a strange behaviour of the code coverage reporter in a particular case.
The code I'm trying to test is roughly this:
/**
 * @param {HTMLInputElement} element
 */
var foo = function(element) {
    var callback = function() {
        // some code
    };
    element.addEventListener("input", callback);
};
In my test, I dispatch a custom input event on the tested element and the callback function executes. The test checks the effects of the callback, and the test passes. In fact, even when I put a hairy console.log("foo") in the callback, I can clearly see it being printed out. However, Istanbul's report erroneously indicates that the callback was not executed at all.
Modifying the tested code to use an anonymous function in the event listener's callback fixes the misbehaviour:
element.addEventListener("input", function() {
callback();
});
However, I utterly despise "solutions" that modify the application's code to compensate for a code quality control tool's deficiency.
Is there a way I can make the code coverage get picked up correctly without wrapping the callback in an anonymous function?
The callback is being passed into your method. Istanbul is completely unaware of where that callback came from, other than from your function definition. Istanbul knows that callback() came from the parameter callback, but doesn't know the insides of that callback (e.g. the function before it was passed in as a callback).
// edit example
var callback = function(args) {
    // do stuff
};

var foo = function (callback) {
    // do stuff
    callback(arguments);
};
Now create a test for foo's functionality, and a separate unit test for callback's functionality. A unit test should only test one thing; each function should have its own unit test. Test that foo does what it's supposed to (regardless of the callback) and that callback does what it's supposed to (passing in your own mock data for the test). In general, named functions are always the way to go.
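For example, a rough sketch in Jasmine of testing the two functions independently (the assertions for callback are placeholders, since they depend on what callback actually does):
describe("foo", function () {
    it("invokes the callback it is given", function () {
        var spy = jasmine.createSpy("callback");
        foo(spy);
        expect(spy).toHaveBeenCalled();
    });
});

describe("callback", function () {
    it("does its own work when called directly", function () {
        callback("some mock data");
        // assert on whatever observable effect callback has
    });
});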
I had this exact problem with callbacks in parts of my nodejs code not being marked as covered even though I had tests that definitely did cover them. I had this issue both with mocha/istanbul and with mocha/blanket.js.
I eventually noticed that many of my tests were not running with code coverage, which led me to the issue.
I solved it by adding the --recursive option to mocha, as some tests were in sub-directories.
