Configuring Intern to setup/teardown my server mock - javascript

I am writing a test suite for a JavaScript widget using Intern.
I have written some pure-JavaScript tests and some in-page DOM tests, but I'm a little stuck on how to write functional tests for the Ajax functionality, which should talk to my simple Node.js mock server (which works a treat for manual tests).
Specifically, what I would like to do:
Start the Node.js mock server as part of the test suite's setup phase
Teardown the mock server when the test is over
(Bonus points) Be able to interrogate the mock server from my Intern tests, for example, checking on the contents of a POST request to the mock
I am stuck on all three - I can't find any documentation or example code from Intern on how to handle setup or teardown of a separate process (like a Node.js mock server) in the test suite.
I am using Intern with Sauce Labs (hosted Selenium) - I'm not sure if my problem needs to be solved on just the Intern side, or on the Sauce Labs side as well. Hopefully somebody has got this working and can advise.

If you want a server to start and stop for each suite, the setup and teardown methods would be the place to do this, something like:
var server;

registerSuite({
    name: 'myTests',

    setup: function () {
        server = startServer();
    },

    teardown: function () {
        server.close();
    },

    // ...
});
startServer would be whatever function you use to start your test server. Presumably it would return an object that would be used to interact with the server. Any tests within the suite would then have access to the server object.
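As a very rough sketch of that idea (none of this is Intern API; the port, the requests array, and the response body are made up for illustration), startServer could wrap a plain Node.js http server that also records what it receives, which gives you a hook for the "bonus points" part - interrogating the mock from tests running in the same Node.js process as the test runner:
// Hypothetical helper: start an HTTP server that records incoming requests
// so the suite (or a test) can ask the mock what it received.
var http = require('http');

function startServer(port) {
    var requests = [];
    var server = http.createServer(function (req, res) {
        var body = '';
        req.on('data', function (chunk) { body += chunk; });
        req.on('end', function () {
            requests.push({ method: req.method, url: req.url, body: body });
            res.writeHead(200, { 'Content-Type': 'application/json' });
            res.end(JSON.stringify({ ok: true }));
        });
    });
    // 9020 is an arbitrary example port
    server.listen(port || 9020);
    // expose the recorded requests so tests can assert against them
    server.requests = requests;
    return server;
}
The suite's teardown can then call server.close(), and a test can inspect server.requests, for example to check the body of the POST the widget was expected to send.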

Related

jest testing pass variable to another test

I'm trying to assign a variable from one test to be accessed within another test. For example:
let user;

test('create new user', async () => {
    const response = await createUser();
    user = response;
});

test('delete user', async () => {
    const response = await deleteUser(user.id);
});
I understand that Jest has a --runInBand option; however, user is still undefined in "delete user". Any ideas how to accomplish this with Jest? Thank you
Each test runs independently, and for good reason: tests should be confirming isolated conditions, methods, and logic. All --runInBand does is run the tests serially, but they still won't necessarily be able to share data objects the way you seem to be expecting.
Also, assuming these methods defer to a backend service of some kind, you're not going to easily be able to fully test the behavior of that system. It sounds like you want an end-to-end or integration testing framework, as opposed to a unit testing framework like Jest.
Keeping with Jest, you're likely going to need to mock whatever backend service is being called in createUser and deleteUser. Jest mocks can help replace external functions with new ones that create the types of conditions you want to test.
Alternatively or in addition, you might be able to stub your user object using beforeAll or beforeEach, creating sample data that allows you to test how deleteUser behaves when it's passed a particular object (likely bypassing whatever backend persistence with an aforementioned mock).
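A minimal sketch of that combination, assuming createUser and deleteUser are exported from a hypothetical ./api module (the module path, return shapes, and sample data are placeholders, not from the question):
// Hypothetical module path; point jest.mock at wherever createUser/deleteUser live.
jest.mock('./api');

const { createUser, deleteUser } = require('./api');

let user;

beforeEach(() => {
    // stub the backend calls so each test controls its own conditions
    createUser.mockResolvedValue({ id: 1, name: 'Test User' });
    deleteUser.mockResolvedValue({ deleted: true });
    // sample data for tests that need an existing user
    user = { id: 1, name: 'Test User' };
});

test('create new user', async () => {
    const response = await createUser();
    expect(response).toEqual({ id: 1, name: 'Test User' });
});

test('delete user', async () => {
    const response = await deleteUser(user.id);
    expect(deleteUser).toHaveBeenCalledWith(1);
    expect(response).toEqual({ deleted: true });
});
This keeps each test independent: "delete user" no longer cares whether "create new user" ran first, because the data it needs is set up fresh in beforeEach.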

Determining when karma-pact mock server has started

We're using the karma-pact plugin to run our pact JS client tests, based on the example from https://github.com/pact-foundation/pact-js/blob/master/karma/mocha/client-spec.js .
In the example there's a timeout in the before(), I believe to ensure the mock service has started before running the tests (see comment "required for slower Travis CI builds").
I'm reluctant to set a fixed timeout in our tests as it'll either be too short or too long in different environments (e.g. CI vs local) and so I was looking for a way to check if the server has started.
I'd tried using the pact API https://github.com/pact-foundation/pact-node#check-if-a-mock-server-is-running , however this appears to start a new mock server which conflicts with the one started by the karma-pact plugin (an Error: kill ESRCH error is reported when trying to run pact.createServer().running from within a test).
Is there a way to determine if the mock server has started up e.g. by waiting for a URL to become available? Possibly there's a way to get a reference the mock server started by the karma-pact plugin in order to use the pact-node API?
Actually the simplest way is to wait for the port to be in use.
Karma Pact by default will start the Mock on port 1234 (and you can specify your own). Once the port is up, the service is running and you can proceed.
For example, you could use something like wait-for-port to detect the running mock service:
var waitForPort = require('wait-for-port');

waitForPort('localhost', 1234, function (err) {
    if (err) throw new Error(err);
    // ... Mock Service is up - now we can run the tests
});
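If you prefer not to add a dependency, roughly the same check can be done with Node's built-in net module by retrying until the port accepts a connection; the retry interval and count below are arbitrary choices for this sketch:
var net = require('net');

function waitForMockPort(host, port, retriesLeft, done) {
    var socket = net.connect(port, host);
    socket.once('connect', function () {
        socket.end();
        done(); // port is accepting connections - the mock service is up
    });
    socket.once('error', function (err) {
        socket.destroy();
        if (retriesLeft <= 0) {
            return done(err);
        }
        // try again shortly
        setTimeout(function () {
            waitForMockPort(host, port, retriesLeft - 1, done);
        }, 200);
    });
}

waitForMockPort('localhost', 1234, 50, function (err) {
    if (err) throw err;
    // ... Mock Service is up - now we can run the tests
});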

Making real requests to HTTP server in AngularJS unit/integration tests

Making a request that wasn't mocked with $httpBackend.when in an Angular 1.x unit/integration test results in an error:
Error: Unexpected request: GET /real-request
Is it possible to make real HTTP requests with ngMock and Karma+Jasmine test rig? What is a good practice to do that?
AngularJS is an opinionated framework, and its opinion on HTTP requests in unit tests is that all of them should be mocked.
It is not advisable to make real HTTP requests in unit tests, for two reasons: unit tests are supposed to be isolated, and they are supposed to be fast. Making a real request makes a test asynchronous, which slows down test runs significantly, and it breaks isolation, because whether a test passes then depends on both the tested unit and the backend.
This was taken into consideration when the AngularJS ngMock module was designed (it is loaded automatically in unit tests by angular-mocks.js). The developer will hardly ever need to write asynchronous Jasmine unit tests with Angular.
Integration tests differ. They may not be as broad as E2E tests (which are often run with Protractor); they test how several units work together, and this may include a backend (HTTP server). So Karma and Jasmine are still used in the end, but the tests may be slower, asynchronous, and make real HTTP requests.
This is where the ngMockE2E module (usually used in E2E tests) kicks in. It is included in angular-mocks.js alongside ngMock but isn't loaded by default.
The ngMockE2E is an AngularJS module which contains mocks suitable for end-to-end testing. Currently there is only one mock present in this module - the e2e $httpBackend mock.
ngMockE2E contains a different $httpBackend implementation that can be used for this purpose. Its API differs: there are no flush or expect* methods, and a when handler can delegate to the real backend via passThrough(). $rootScope.$digest() may still be needed if there are $q promise chains that should be executed.
ngMockE2E won't work properly out of the box because of the adjustments ngMock makes to Angular services when its module and inject helpers are used. A helper module for integration tests can be used instead:
angular.module('ngMockI9n', []).config(function ($provide) {
    // hack to restore original implementations which were overridden by ngMock
    angular.injector(['ng', function ($httpBackendProvider, $browserProvider) {
        $provide.provider('$httpBackend', $httpBackendProvider);
        $provide.provider('$browserI9n', $browserProvider);
    }]);

    // make ngMockE2E $httpBackend use original $browser
    var httpBackendI9nDecorator = angular.mock.e2e.$httpBackendDecorator
        .map(function (dep) {
            return (dep === '$browser') ? '$browserI9n' : dep;
        });

    $provide.decorator('$httpBackend', httpBackendI9nDecorator);
});
Additionally, a recipe for whitelisted real HTTP requests can be used to make the testing easier, although the best practice is to enumerate real and mocked requests explicitly.
beforeEach(module('app'));
beforeEach(module('ngMockI9n'));

beforeEach(inject(function ($httpBackend) {
    $httpBackend.when('GET', '/mocked-request').respond(200, {});

    // all other requests will be automatically whitelisted and treated as real,
    // so make sure that mocked requests are mocked above this point
    angular.forEach(['GET', 'DELETE', 'JSONP', 'HEAD', 'PUT', 'POST', 'PATCH'],
        function (method) {
            $httpBackend.when(method).passThrough();
        });
}));
it('does real async request', function (done) {
    // async tests need the extra `done` param
    inject(function ($http, $rootScope) {
        $http.get('real-request').then(function (response) {
            expect(response.data).toEqual(...);
        })
        .then(done, done.fail);

        $rootScope.$digest();
    });
});

it('does mocked request', function (done) {
    // tests with mocked requests are asynchronous, too
    inject(function ($http, $rootScope) {
        $http.get('mocked-request').then(function (response) {
            expect(response.data).toEqual(...);
        })
        .then(done, done.fail);

        $rootScope.$digest();
    });
});
TL;DR: Use $httpBackend from ngMockE2E in integration tests that make real requests; this requires some extra work to make it compatible with ngMock. Never make real requests in unit tests, because this results in slow and brittle tests.

When using Jasmine's expectGET / $httpBackend, what's the benefit of having .respond()?

I am using the book AngularJS: Up and Running. It gives an example of using expectGET. This is the example:
mockBackend = $httpBackend;
mockBackend.expectGET('/api/note')
    .respond([{id: 1, label: 'Mock'}]);
My question is, isn't the point of unit testing server calls to make the server call and verify that the server call is what we expect it to be?
With the above code, doesn't it just make a server call and force the response to equal [{id:1, label: 'Mock'}]? What's the point of doing it if we aren't able to check what the actual response is?
Because later on in the code, it checks the response like so:
mockBackend.flush();
expect(ctrl.items).toEqual([{id:1, label: 'Mock'}]);
Wouldn't it always equal [{id:1, label: 'Mock'}], because that's what we forced the response to equal? What's the benefit of having .respond() and controlling the response?
If you actually hit an API endpoint on the server in your unit test, it would not be a unit test anymore: it would now involve much more than just your component under test (controller/service/provider, etc.), namely the network, the web server, the backend itself, probably a database. It becomes an integration test or a system/functional test.
Your unit test would not be isolated anymore and would depend on more things than it should. The point of mocking is to make the test isolated, independent and imitate certain desired conditions - in this case an HTTP response - and then check how your component under test would react.
expect(ctrl.items).toEqual([{id:1, label: 'Mock'}]);
This expect call, at the very least, checks that the mock was successfully applied and that the controller's items variable contains the mocked response.
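To make that concrete, here is a rough sketch of the surrounding test (the notesApp module and NoteController names are made up for illustration). The point is that expectGET fails the test if the controller never issues that request, and .respond() merely supplies the canned data whose journey into ctrl.items you are verifying:
describe('NoteController', function () {
    var mockBackend, ctrl;

    beforeEach(module('notesApp')); // hypothetical app module

    beforeEach(inject(function ($httpBackend, $controller) {
        mockBackend = $httpBackend;
        // expectGET fails the test if the controller does NOT make this call;
        // .respond() decides what the controller gets back
        mockBackend.expectGET('/api/note')
            .respond([{id: 1, label: 'Mock'}]);
        ctrl = $controller('NoteController'); // hypothetical controller under test
    }));

    it('loads notes from the server into ctrl.items', function () {
        mockBackend.flush();
        // verifies the controller's own behaviour: it made the right request
        // and copied the (mocked) response into ctrl.items
        expect(ctrl.items).toEqual([{id: 1, label: 'Mock'}]);
    });

    afterEach(function () {
        // fails the test if an expected request was never made
        mockBackend.verifyNoOutstandingExpectation();
    });
});
So the test is not checking the server's data at all; it is checking that the controller asks for the right URL and does the right thing with whatever comes back.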
Please see more at:
Unit tests vs Functional tests
What is Mocking?

How to keep protractor running?

I'm trying to access the DB in Protractor tests using a SQL Server driver for Node.js (Protractor is a Node.js application, so this is no problem).
The idea is to check DB data in our e2e tests:
We can check whether some hidden things are written correctly to the DB that cannot be seen in the UI (e.g. logs).
We can isolate features in our e2e testing: we don't rely on one feature to display data just to check that the feature writing the data works correctly.
The problem I'm having is that whenever Protractor finishes interacting with the browser, it terminates. Therefore, my code to access the DB cannot verify the data retrieved (e.g. expect(dataFromDb).toEqual('foo')), because DB requests are asynchronous in Node.js.
By the time I retrieve the data via the callback, Protractor has already terminated.
It looks to me like Protractor is only aware of web browser promises and terminates when there are no outstanding browser promises.
Is there any solution for keeping Protractor alive so that I can verify my DB data? Thanks.
Two things to keep in mind.
1) expect(dataFromDb).toEqual('foo'): Protractor wraps expect to understand promises. However, it only understands webdriver.promise (i.e. not $q or any other kind of promise). If you want to make assertions against non-webdriver promises, you have to resolve the promise yourself, like:
dataFromDb.then(function (resolvedData) {
    expect(resolvedData).toEqual('foo');
});
2) Protractor does not "terminate". Protractor only helps you kick off your tests using another test framework (e.g. Jasmine, Mocha); once it has done that, it is only a library of tools (locators, waitForAngular, etc.) that you run on top of that test framework. It's that other framework you must prevent from terminating. I don't know which framework you're using, but I'll use Jasmine as an example:
it('call db', function (done) {  // notice the inclusion of `done`
    browser.get('something');    // this is protractor
    element(by.xyz).click();     // this is protractor

    var data = queryDatabase();  // you must tell jasmine to wait for this
    data.then(function (resolvedData) {
        expect(resolvedData).toBe('foo');
        done();                  // tell jasmine you're done
    });
});
Side note: Protractor patches Jasmine to wait for webdriver commands to finish (just like it patches expect) for the user's convenience. However, if you don't use webdriver's promises, you need to tell Jasmine when the test is done via the done callback.
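If your Jasmine version supports async spec functions and the selenium promise manager is disabled (SELENIUM_PROMISE_MANAGER: false in the Protractor config), the same idea can be sketched without the explicit done callback; queryDatabase is still the asker's placeholder for the actual DB query:
it('call db', async function () {
    // protractor interactions, awaited explicitly
    await browser.get('something');
    await element(by.xyz).click();

    // awaiting the DB promise keeps Jasmine from ending the spec
    // before the data has actually arrived
    var resolvedData = await queryDatabase();
    expect(resolvedData).toBe('foo');
});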
