Determining when karma-pact mock server has started - javascript

We're using the karma-pact plugin to run our pact JS client tests, based on the example from https://github.com/pact-foundation/pact-js/blob/master/karma/mocha/client-spec.js .
In the example there's a timeout in the before(), I believe to ensure the mock service has started before running the tests (see comment "required for slower Travis CI builds").
I'm reluctant to set a fixed timeout in our tests as it'll either be too short or too long in different environments (e.g. CI vs local) and so I was looking for a way to check if the server has started.
I tried using the pact API https://github.com/pact-foundation/pact-node#check-if-a-mock-server-is-running , however this appears to start a new mock server which conflicts with the one started by the karma-pact plugin (an Error: kill ESRCH is reported when trying to run pact.createServer().running from within a test).
Is there a way to determine if the mock server has started up, e.g. by waiting for a URL to become available? Possibly there's a way to get a reference to the mock server started by the karma-pact plugin in order to use the pact-node API?

Actually the simplest way is to wait for the port to be in use.
Karma Pact by default will start the Mock on port 1234 (and you can specify your own). Once the port is up, the service is running and you can proceed.
For example, you could use something like wait-for-port to detect the running mock service:
var waitForPort = require('wait-for-port');

waitForPort('localhost', 1234, function(err) {
  if (err) throw new Error(err);
  // ... Mock Service is up - now we can run the tests
});
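For example, wrapped in a mocha-style before() hook (a sketch only; the hook style and the 10-second ceiling are assumptions, and 1234 is karma-pact's default port):

before(function (done) {
  this.timeout(10000); // generous ceiling for slow CI; the hook returns as soon as the port opens
  waitForPort('localhost', 1234, function (err) {
    if (err) return done(err);
    done(); // mock service is reachable - the tests can run
  });
});

Slower CI builds simply wait a little longer, while local runs proceed as soon as the mock is listening.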

Related

Node.js API to spawn off a call to another API

I created a Node.js API.
When this API gets called I return to the caller fairly quickly. Which is good.
But now I also want the API to call or launch a different API, or a function, or something that will go off and run on its own. Kind of like calling a child process with child.unref(). In fact, I would use child.spawn(), but I don't see how to have spawn() call another API. Maybe that alone would be my answer?
For this other process, I don't care whether it crashes or finishes without error.
So it doesn't need to be attached to anything. But if it does stay attached to the Node.js console, that's icing on the cake.
I'm still thinking about how to identify and what to do if the spawned process somehow ends up running for a really long time, but I'm not ready to cross that bridge yet.
Your thoughts on what I might be able to do?
I guess I could child.spawn('node', [somescript])
What do you think?
I would have to explore if my cloud host will permit this too.
You need to specify exactly what the other spawned thing is supposed to do. If it is calling an HTTP API, then with Node.js you should not launch a new process to do that: Node is built to run HTTP requests asynchronously.
The normal pattern, if you really need some work to happen in a different process, is to use something like a message queue, the cluster module, or some other messaging/queue mechanism between processes that a worker monitors; the worker is usually set up to handle a particular task or set of tasks this way. It is pretty unusual to spawn another process after receiving an HTTP request: launching new processes is heavy-weight and can use up all of your server resources if you aren't careful, and thanks to Node's async capabilities it usually isn't necessary, especially for work that is mainly I/O.
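For the "call another HTTP API" case specifically, a minimal fire-and-forget sketch using only the core http module (the router, the target URL and port are hypothetical):

var http = require('http');

router.put('/test', function (req, res) {
  // Kick off the secondary HTTP call without waiting for it to finish.
  http.get('http://localhost:4000/other-api?u=' + encodeURIComponent(req.body.u), function (r) {
    r.resume(); // drain the response; we don't use the body
  }).on('error', function (err) {
    console.error('secondary call failed:', err.message); // we don't care - just log it
  });

  // Respond to the original caller right away.
  res.sendStatus(200);
});

No extra process is involved; the request runs on Node's event loop alongside everything else.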
This is from a test API I built some time ago. Note I'm even passing a value into the script as a parameter.
router.put('/test', function (req, res, next) {
  var u = req.body.u;
  var cp = require('child_process');
  // Note: the option is 'detached' (not 'detach'), and spawn passes arguments directly,
  // so there is no need to wrap the value in extra quotes.
  // Add stdio: 'ignore' as well if you want the child fully detached from this console.
  var c = cp.spawn('node', ['yourtest.js', u], { detached: true });
  c.unref();
  res.sendStatus(200);
});
The yourtest.js script can be just about anything you want it to be. But I found it easier to learn by first treating the script as a Node.js console app: FIRST get your yourtest.js script to run without error by running it manually from your console's command line (node yourtest.js yourparametervalue), THEN integrate it into the child.spawn() call.
var u = process.argv[2];
console.log('f2u', u);

function f1() {
  console.log('f1-hello');
}

function f2() {
  console.log('f2-hello');
}

// Wait 3 seconds before executing f2(). I do this just for troubleshooting:
// you can watch node.exe open and then close in Task Manager if it runs long enough.
setTimeout(f2, 3000);
f1();

How to keep protractor running?

I'm trying to access the DB in protractor tests using a SQL Server driver for Node.js (protractor is a Node.js application, so this is no problem).
The idea is to check DB data in our e2e tests:
We can check whether hidden things that cannot be seen on the UI (e.g. logs) are written correctly to the DB.
We can isolate features in our e2e testing: we don't rely on another feature displaying the data in order to check that the feature writing the data works correctly.
The problem I'm having is that whenever protractor finishes interacting with the browser, it terminates. Therefore, my code that accesses the DB cannot verify the retrieved data (e.g. expect(dataFromDb).toEqual('foo')), because DB requests are asynchronous in Node.js.
By the time I retrieve the data via the callback, protractor has already terminated.
It looks to me that protractor is only aware of web browser promises and terminates when there are no outstanding browser promises.
Is there any solution to keeping protractor alive so that I can verify my Db data? Thanks.
Two things to keep in mind.
1) expect(dataFromDb).toEqual('foo'): protractor wrapped expect to understand promises. However, it only understands webdriver.promise (i.e. no $q or any other promise). If you want to make assertions against non-webdriver promises, you have to resolve the promise yourself, like:
dataFromDb.then(function(resolvedData) {
  expect(resolvedData).toEqual('foo');
});
2) Protractor does not "terminate". Protractor only helps you kick off your tests using another test framework (e.g. jasmine or mocha); once it does that, it is only a library of tools (locators, waitForAngular, etc.) that you run on top of that framework. It's that other framework you must prevent from terminating. I don't know which framework you're using, but I'll use jasmine as an example:
it('call db', function(done) { // notice the inclusion of `done`
  browser.get('something');   // this is protractor
  element(by.xyz).click();    // this is protractor
  var data = queryDatabase(); // you must tell jasmine to wait for this
  data.then(function(resolvedData) {
    expect(resolvedData).toBe('foo');
    done(); // tell jasmine you're done
  });
});
Side note: protractor patched jasmine to wait for webdriver commands to finish (just like it patched expect) for the user's convenience. However, if you don't use webdriver's promises, you need to tell jasmine when the test is done via the done callback.

Node.js http-proxy drops websocket requests

Okay, I've spent over a week trying to figure this out to no avail, so if anyone has a clue, you are a hero. This isn't going to be an easy question to answer, unless I am being a dunce.
I am using node-http-proxy to proxy sticky sessions to 16 node.js workers running on different ports.
I use Socket.IO's Web Sockets to handle a bunch of different types of requests, and use traditional requests as well.
When I switched my server over to proxying via node-http-proxy, a new problem crept up: sometimes my Socket.IO session cannot establish a connection.
I literally can't reproduce it reliably; the only way to trigger it is to throw a lot of traffic at the server from multiple clients.
If I reload the user's browser, it can then sometimes re-connect, and sometimes not.
Sticky Sessions
I have to proxy sticky sessions as my app authenticates on a per-worker basis, and so it routes a request based on its Connect.SID cookie (I am using connect/express).
Okay, some code
This is my proxy.js file that runs in node and routes to each of the workers:
var http = require('http');
var httpProxy = require('http-proxy');
// What ports the proxy is routing to.
var data = {
  proxyPort: 8888,
  currentPort: 8850,
  portStart: 8850,
  portEnd: 8865
};
// Just gives the next port number.
var nextPort = function() {
  var next = data.currentPort++;
  next = (next > data.portEnd) ? data.portStart : next;
  data.currentPort = next;
  return data.currentPort;
};

// A hash of Connect.SIDs for sticky sessions.
data.routes = {};
var svr = httpProxy.createServer(function (req, res, proxy) {
  var port = false;

  // parseCookies is just a little function
  // that... parses cookies.
  var cookies = parseCookies(req);

  // If there is an SID passed from the browser.
  if (cookies['connect.sid'] !== undefined) {
    var ip = req.connection.remoteAddress;
    if (data.routes[cookies['connect.sid']] !== undefined) {
      // If there is already a route assigned to this SID,
      // make that route's port the assigned port.
      port = data.routes[cookies['connect.sid']].port;
    } else {
      // If there isn't a route for this SID,
      // create the route object and log its
      // assigned port.
      port = data.currentPort;
      data.routes[cookies['connect.sid']] = {
        port: port
      };
      nextPort();
    }
  } else {
    // Otherwise assign a random port; it will
    // pick up a connect SID on the next go.
    // This doesn't really happen.
    port = nextPort();
  }

  // Now that we have the chosen port,
  // proxy the request.
  proxy.proxyRequest(req, res, {
    host: '127.0.0.1',
    port: port
  });
}).listen(data.proxyPort);
// Now we handle WebSocket requests.
// Basically, I feed off of the above route
// logic and try to route my WebSocket to the
// same server regular requests are going to.
svr.on('upgrade', function (req, socket, head) {
  var cookies = parseCookies(req);
  var port = false;

  // Make sure there is a Connect.SID,
  if (cookies['connect.sid'] != undefined) {
    // Make sure there is a route...
    if (data.routes[cookies['connect.sid']] !== undefined) {
      // Assign the appropriate port.
      port = data.routes[cookies['connect.sid']].port;
    } else {
      // this has never, ever happened, i've been logging it.
    }
  } else {
    // this has never, ever happened, i've been logging it.
  }

  if (port === false) {
    // this has never happened...
  }

  // So now route the WebSocket to the same port
  // as the regular requests are getting.
  svr.proxy.proxyWebSocketRequest(req, socket, head, {
    host: 'localhost',
    port: port
  });
});
Client Side / The Phenomenon
Socket connects like so:
var socket = io.connect('http://whatever:8888');
About 10 seconds after logging on, I get this error back on this listener, which doesn't help much.
socket.on('error', function (data) {
  // this is what gets triggered:
  // Firefox can't establish a connection to the server at ws://whatever:8888/socket.io/1/websocket/Nnx08nYaZkLY2N479KX0.
});
The Socket.IO GET request that the browser sends never comes back - it just hangs in pending, even after the error comes back, so it looks like a timeout error. The server never responds.
Server Side - A Worker
This is how a worker receives a socket request. Pretty simple. All workers have the same code, so you'd think one of them would get the request and acknowledge it...
app.sio.socketio.sockets.on('connection', function (socket) {
  // works... some of the time! all of my workers run this
  // exact same process.
});
Summary
That's a lot of data, and I doubt anyone is willing to dig through it, but I'm totally stumped and don't know where to check next, what to log next, or what else to try to solve it. I've tried everything I know to see what the problem is, to no avail.
UPDATE
Okay, I am fairly certain that the problem is in this statement on the node-http-proxy github homepage:
node-http-proxy is <= 0.8.x compatible, if you're looking for a >= 0.10 compatible version please check caronte
I am running Node.js v0.10.13, and the phenomenon is exactly as some have described in GitHub issues on this subject: it just drops websocket connections randomly.
I've tried to implement caronte, the 'newer' fork, but it is not at all documented and I have tried my hardest to piece together a workable solution from its docs, but I can't get it forwarding websockets; my Socket.IO downgrades to polling.
Are there any other ideas on how to get this implemented and working? node-http-proxy had 8,200 downloads yesterday! Surely someone is using a Node build from this year and proxying websockets...
What I am looking for exactly
I want a proxy server (preferably Node) that proxies to multiple node.js workers and routes requests via sticky sessions based on a browser cookie. This proxy would need to stably support traditional requests as well as web sockets.
Or...
I don't mind accomplishing the above via clustered node workers, if that works. My only real requirement is maintaining sticky sessions based on a cookie in the request header.
If there is a better way to accomplish the above than what I am trying, I am all for it.
In general I don't think Node is the most common option for a proxy server; I, for one, use nginx as a frontend server for Node and it's a really great combination. Here are some instructions to install and use the nginx sticky sessions module.
It's a lightweight frontend server with JSON-like configuration, solid and very well tested.
nginx is also a lot faster if you want to serve static pages or CSS. It's ideal for configuring your caching headers, redirecting traffic to multiple servers depending on domain, sticky sessions, compressing CSS and JavaScript, etc.
You could also consider a pure load-balancing open source solution like HAProxy. In any case, I don't believe Node is the best tool for this; it's better to use it to implement your backend only and put something like nginx in front of it to handle the usual frontend server tasks.
I agree with hexacyanide. To me it would make the most sense to queue work through a service like Redis or some kind of message queue system. Workers would be fed through Redis pub/sub by the web nodes (which are proxied). Workers would call back on error or completion, or stream data in real time with a 'data' event. Maybe check out the library kue. You could also roll your own similar library. RabbitMQ is another system for a similar purpose.
I get using socket.io if you're already using that technology, but you need to use tools for their intended purpose. Redis or an MQ system would make the most sense, and pairs great with WebSockets (socket.io) to create realtime, insightful applications.
Session affinity (sticky sessions) is supported through Elastic Load Balancer for AWS, and this supports WebSockets. A PaaS provider (Modulus) does exactly this. There's also satellite, which provides sticky sessions for node-http-proxy, although I have no idea if it supports WebSockets.
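As a rough illustration of the pub/sub hand-off (a sketch only, using the classic node_redis callback API; the channel name and payload are made up):

var redis = require('redis');

// In a web node (behind the proxy): publish work instead of doing it in-process.
var pub = redis.createClient();
pub.publish('jobs', JSON.stringify({ userId: 42, task: 'rebuild-report' }));

// In a worker process: a second client is needed because a subscriber
// can't issue other commands while subscribed.
var sub = redis.createClient();
sub.on('message', function (channel, message) {
  var job = JSON.parse(message);
  // ... do the work, then push progress to the browser over socket.io if needed
});
sub.subscribe('jobs');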
I've been looking into something very similar to this myself, with the intent of generating (and destroying) Node.js cluster nodes on the fly.
Disclaimer: I'd still not recommend doing this with Node; nginx is more stable for the sort of architecture you're looking for, or even more so HAProxy (very mature, and it easily supports sticky-session proxying). As @tsturzl indicates, there is satellite, but given the low volume of downloads, I'd tread carefully (at least in a production environment).
That said, since you appear to have everything already set up with Node, rebuilding and re-architecting may be more work than it's worth. Therefore, to install the caronte branch with NPM:
Remove your previous node-http-proxy master installation with npm uninstall node-proxy and/or sudo npm -d uninstall node-proxy
Download the caronte branch .zip and extract it.
Run npm -g install /path/to/node-http-proxy-caronte
In my case, the install linkage was broken, so I had to run sudo npm link http-proxy
I've got it up and running using their basic proxy example -- whether or not this resolves your dropped-sessions issue, only you will know.
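For reference, the caronte rewrite eventually shipped as http-proxy 1.x, whose API looks roughly like this (a sketch only; it proxies everything to one fixed target, so you would plug your sticky-session port lookup in where the target is chosen):

var http = require('http');
var httpProxy = require('http-proxy');

// One proxy instance; ws: true enables WebSocket upgrade handling.
var proxy = httpProxy.createProxyServer({ ws: true });

var server = http.createServer(function (req, res) {
  // Replace the fixed target with your per-cookie port lookup.
  proxy.web(req, res, { target: 'http://127.0.0.1:8850' });
});

// Forward WebSocket upgrades to the same worker.
server.on('upgrade', function (req, socket, head) {
  proxy.ws(req, socket, head, { target: 'http://127.0.0.1:8850' });
});

server.listen(8888);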

nodejs running two apps on one port

To make downtime caused by long-lasting app reboots less painful, I thought of something like this:
Main App #1 on port 80.
Fail-over App #2 on port 80 too, but answering requests only when App #1 isn't working.
Let App #2 serve a 'maintenance' message to active users.
Running two processes on the same port ends in Error: EADDRINUSE, so the simple way doesn't work. I stumbled upon the server.on('error') event and decided to let App #2 wait until App #1 stops, so that the port becomes available:
function tryPitchIn(){
  var server = http.createServer(app);

  server.on('listening', function(){
    console.log('Application #1 crashed/ended');
    console.log('Pitching in...');
  });

  server.on('error', function(){
    console.log('nothing to do');
    setTimeout(tryPitchIn, 250);
  });

  server.listen(80);
}

tryPitchIn();
Although the above works great, I'm struggling with ending App #2 when App #1 initializes, which isn't easy to do consistently across different operating systems.
Is it possible to give a node process (started by npm start) a static ID so it can be terminated from another process, preferably cross-OS? Or are there other ideas for this scenario?
You can serve App #1 on another port and write a micro-app that proxies requests to it, returning something else if the request to App #1 fails. You can use node-http-proxy like here, or roll your own solution like this and just add an "on error" clause.
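A rough sketch of the roll-your-own variant, using only the core http module (the ports and the maintenance text are placeholders):

var http = require('http');

var APP_PORT = 3000;   // where App #1 actually listens
var PUBLIC_PORT = 80;  // what users hit

http.createServer(function (req, res) {
  var upstream = http.request({
    host: '127.0.0.1',
    port: APP_PORT,
    method: req.method,
    path: req.url,
    headers: req.headers
  }, function (upstreamRes) {
    res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(res);
  });

  // App #1 is down or restarting: answer with the maintenance message instead.
  upstream.on('error', function () {
    if (!res.headersSent) {
      res.writeHead(503, { 'Content-Type': 'text/plain' });
    }
    res.end('Down for maintenance, back shortly.');
  });

  req.pipe(upstream);
}).listen(PUBLIC_PORT);

App #1 can then be restarted freely on its private port; users only ever talk to the little proxy, which never goes down.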
I like Amadan's answer. On top of that, I have to mention that doing much other than logging inside .on('error') (and likewise for uncaughtException) is not recommended. The system could be in a volatile state (internal states of modules could be inconsistent, for example), and you don't want to keep running like that for long.
Either do what Amadan says, or use node.js domains, or use several processes behind a load balancer (which is basically what Amadan says).
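For the "several processes behind a load balancer" route, Node's own cluster module already shares a single port between workers, so a crashed or restarting worker doesn't take the port down; a minimal sketch (the worker count and port are placeholders):

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
  // Fork a couple of workers and replace any that die,
  // so port 80 keeps being served during restarts.
  for (var i = 0; i < 2; i++) cluster.fork();
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died, forking a replacement');
    cluster.fork();
  });
} else {
  // Every worker listens on the same port; the master distributes connections.
  http.createServer(function (req, res) {
    res.end('handled by worker ' + process.pid);
  }).listen(80);
}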

Configuring Intern to setup/teardown my server mock

I am writing a test suite for a JavaScript widget using Intern.
I have written some pure-JavaScript tests and some in-page DOM tests, but I'm a little stuck on how to write functional tests for the Ajax functionality, which should talk to my simple Node.js mock server (which works a treat for manual tests).
Specifically, what I would like to do:
Start the Node.js mock server as part of the test suite's setup phase
Tear down the mock server when the test is over
(Bonus points) Be able to interrogate the mock server from my Intern tests, for example, checking on the contents of a POST request to the mock
I am stuck on all three - I can't find any documentation or example code from Intern on how to handle setup or teardown of a separate process (like a Node.js mock server) in the test suite.
I am using Intern with Sauce Labs (hosted Selenium) - I'm not sure if my problem needs to be solved on just the Intern side, or on the Sauce Labs side as well. Hopefully somebody has got this working and can advise.
If you want a server to start and stop for each suite, the setup and teardown methods would be the place to do this, something like:
var server;

registerSuite({
  name: 'myTests',

  setup: function () {
    server = startServer();
  },

  teardown: function () {
    server.close();
  },

  // ...
});
startServer would be whatever function you use to start your test server. Presumably it would return an object that would be used to interact with the server. Any tests within the suite would then have access to the server object.
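For illustration, here is a hypothetical startServer built on the core http module; the port, the canned response and the recording of request bodies are assumptions about what your mock needs to do:

var http = require('http');

function startServer() {
  var requests = []; // record what the widget sends so tests can interrogate it

  var server = http.createServer(function (req, res) {
    var body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
      requests.push({ method: req.method, url: req.url, body: body });
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end('{"ok": true}');
    });
  });

  server.listen(9020);        // the port your widget under test is pointed at
  server.requests = requests; // expose recorded requests for assertions
  return server;
}

A test in the suite could then assert against server.requests (covering the bonus point of inspecting POST contents), and teardown's server.close() stops the mock.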
