Stop a process by port from JavaScript

I'm using Jest to write some tests for a Node.js application. I have a server file server.js and a test file server.test.js. In my server.js I use the line

    var secureServer = https.createServer(options, app).listen(config.node_port, () => {
        logger.info("Service running on " + config.node_port)
    });

to start my server on port 8082. In server.test.js I use
    var posClient = require('./pos-ClientJS.js');
to get access to the functions that I have to test. When I run my test I get the following console output:
    Jest did not exit one second after the test run has completed.
    This usually means that there are asynchronous operations that weren't
    stopped in your tests. Consider running Jest with `--detectOpenHandles`
    to troubleshoot this issue.
So my question is: is there a way to stop the server from JavaScript code, with something like stop(8082)? Or is there a less difficult way to solve this problem without stopping the process?

From the Node.js HTTPS module documentation, you can call secureServer.close() to stop listening on the specified port. If server.js exposes the server instance to Jest, you can use Jest's teardown hooks to stop the server automatically after the tests complete.
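A minimal sketch of that teardown, assuming server.js is changed to export the server instance; the export line, the afterAll hook, and requiring server.js from the test are assumptions, not code from the question:

    // at the bottom of server.js — expose the server to the tests
    module.exports = secureServer;

    // server.test.js
    var secureServer = require('./server.js');

    afterAll(function (done) {
        // close() stops the listener on port 8082 and invokes the
        // callback once the server has shut down, so Jest can exit.
        secureServer.close(done);
    });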

Related

Run Cypress without checking localhost for running web server

In addition to web-based e2e tests, I am using Cypress to assert that some GraphQL responses are correct.
However, if I just try to run those tests on their own, with the web server not running, then Cypress complains with the following message:
    Cypress could not verify that this server is running:
    http://localhost:3000
    This server has been configured as your baseUrl, and tests will likely fail if it is not running.
Is it possible to run Cypress in a mode that doesn't check this server first?
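One hedged sketch of a workaround, assuming the API tests don't need baseUrl at all: Cypress only performs this verification when baseUrl is configured, so running these specs with a config that omits baseUrl and passing absolute URLs to cy.request skips the check. The file name, endpoint, and query below are illustrative:

    // cypress/integration/graphql.spec.js — run with a config that omits baseUrl
    describe('graphql responses', () => {
      it('returns a well-formed response', () => {
        cy.request({
          method: 'POST',
          url: 'http://localhost:3000/graphql', // absolute URL; endpoint is hypothetical
          body: { query: '{ __typename }' },
        }).then((res) => {
          expect(res.status).to.eq(200);
        });
      });
    });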

express runs on two ports even when port is specified

I use app.listen(PORTNO) to run my Express app.
It runs on 127.0.0.1:PORTNO but also on 127.0.0.1:3000.
3000 is the default port on which Express runs out of the box.
Why this unexpected behaviour?
I have tried setting the env variable to production, and also using http.createServer(app).listen(PORTNO);
I am generating my express app files using express-generator.
I am on a Windows machine, if it's relevant.
UPDATE:
I start the server using npm start, which runs bin\www, and that file specifies the port for the server.
But this does not explain the binding to two ports: the one specified in app.js and the one in bin\www, for the same app, with the app accessible from both.
Can you explain why?
Try starting your server directly with node server.js (your file name) and see if that helps; when you start it with npm start it picks up the default configuration. Moreover, the npm command is mostly used to install Node modules rather than to run the server.
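The double binding is most likely the combination of the generator's bin\www and a hand-added listen call; a sketch of the situation (the file contents below are assumptions matching express-generator's usual layout, and 5000 stands in for PORTNO):

    // app.js — if app.listen() was added here by hand…
    var express = require('express');
    var app = express();
    app.listen(5000); // first binding: 127.0.0.1:5000
    module.exports = app;

    // bin/www — generated by express-generator; npm start runs this
    var app = require('../app');
    var http = require('http');
    http.createServer(app).listen(3000); // second binding: 127.0.0.1:3000

Removing the app.listen() call from app.js leaves a single binding, the one bin\www makes.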

PhantomJS doesn't start while websocket connection is open

I'm working on a Slack bot and I've encountered a curious problem. I have a module which scrapes a web page using PhantomJS (via SpookyJS and CasperJS on top of it). I wrote this module and tested it by running it from the command line manually. It works well. But then I added the slackbots npm module, which abstracts the Slack realtime API, and created a module with a bot class. This bot module requires my scraping module (PhantomJS) and calls its function when a message event triggers:
    var getAnswer = require('./getAnswer');

    myBot.prototype._onMessage = function (message) {
        if (this._isChatMessage(message) &&
            this._isChannelConversation(message) &&
            this._isMentioningMe(message)) {
            this._reply(message);
        }
    };
this._reply basically just calls getAnswer(originalMessage.text) and then self.postMessageToChannel(channel.name, reply, {as_user: true});
getAnswer returns a promise, but it never gets fulfilled. I made CasperJS verbose and saw that nothing happens after
    [info] [phantom] Starting...
Everything just hangs...
I have no idea how to fix this. I guess it's because the slackbots module establishes a websocket connection when I call Bot.prototype.run. Any suggestions?
As I said, I use Spooky to spawn a child CasperJS process. I went to the Spooky documentation page and read this:
Specifically, each Spooky instance spawns a child Casper process that
runs a bootstrap script. The bootstrap script sets up a JSON-RPC
server that listens for commands from the parent Spooky instance over
a transport (either HTTP or stdio). The script also sets up a JSON-RPC
client that sends events to the parent Spooky instance via stdout.
I used http as the transport and it didn't work, so I changed it to stdio and that helped. I'd appreciate it if someone could explain why.
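For reference, the transport is selected in the Spooky constructor options; a minimal sketch of the change that worked (option layout per the SpookyJS README; the URL and other options are illustrative):

    var Spooky = require('spooky');

    var spooky = new Spooky({
        child: {
            transport: 'stdio' // was 'http'; stdio avoids the JSON-RPC HTTP server
        },
        casper: {
            logLevel: 'debug',
            verbose: true
        }
    }, function (err) {
        if (err) { throw err; }
        spooky.start('http://example.com'); // placeholder target page
        spooky.run();
    });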

What is the meaning of this??? Vanilla Node.js I/O blockage clears up when I use strongloop controller API?

Earlier I asked a question about what was causing a particular delay in node.js:
150ms delay in performing a HTTPS versus HTTP get request in Node
Unsatisfied with some of the answers I received, I decided to try to figure it out myself. I came across the StrongLoop API server and decided to try it out just by chance. The result was that it fixed the delay! But I have no clue why, or what is going on. I would like to know what could possibly be causing this blockage in vanilla Node, and why running StrongLoop fixes it.
Here is my test code:
    var https = require('https');
    var http = require('http')

    console.time("Stage1");
    console.time("Stage2");
    console.time("Response");
    console.time("End");

    var options = {
        hostname: 'www.google.com',
        method: 'GET'
    }

    function request() {
        console.timeEnd("Stage1");
        var req = https.request(options, function(res) {
            res.on('data', function (chunk) {
                buffer =+ chunk;
            });
            res.on('end', function () {
                console.timeEnd("End");
            });
        }).on('response', function () {
            console.timeEnd("Response");
        });
        console.timeEnd("Stage2");
        req.end();
    }

    request();
request();
This is what it looks like when I run it in vanilla Node.js:

    C:\Users\Jonathan\Desktop>node test
    Stage1: 0ms
    Stage2: 148ms
    Response: 425ms
    End: 537ms

And this is what it looks like running in SLC:

    C:\Users\Jonathan\Desktop>slc run test
    INFO strong-agent not profiling, StrongOps configuration not found.
    Generate configuration with:
        npm install -g strongloop
        slc strongops
    See http://docs.strongloop.com/strong-agent for more information.
    supervisor running without clustering (unsupervised)
    Stage1: 0ms
    Stage2: 10ms
    Response: 274ms
    End: 387ms
What is going on??? Why does vanilla Node take an additional 100+ ms before the https request's 'response' event fires? What is causing this blockage?
PS. I am somewhat confident that it is within the node.js core as Process Monitor shows no file or network reads causing this significant of a delay.
EDIT: Additional Info:
Yes, I am using the latest version of Node, and I have run this code dozens of times with similar results, on both a local machine and an online VPS.
I strongly suspect measurement error. You are one-shotting a call to Google. How long that takes is going to vary a lot. I'd suggest running it in a loop dozens to hundreds of times to get a better sense of the variation.
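A sketch of that kind of loop (the iteration count and the min/median/max summary are illustrative choices, not from the answer):

    var https = require('https');

    function timeOne(done) {
        var start = Date.now();
        https.get('https://www.google.com', function (res) {
            res.resume(); // drain the body so 'end' fires
            res.on('end', function () {
                done(Date.now() - start);
            });
        });
    }

    var times = [];
    (function next() {
        if (times.length === 50) {
            times.sort(function (a, b) { return a - b; });
            console.log('min/median/max:', times[0], times[25], times[49]);
            return;
        }
        timeOne(function (ms) {
            times.push(ms);
            next(); // run requests sequentially to avoid overlap
        });
    })();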
My run of your code (on Linux):

    sam@samtu:/tmp % node _.js
    Stage1: 0ms
    Stage2: 29ms
    Response: 290ms
    End: 293ms

    sam@samtu:/tmp % slc run _.js
    INFO strong-agent not profiling, StrongOps configuration not found.
    supervisor running without clustering (unsupervised)
    Stage1: 0ms
    Stage2: 11ms
    Response: 299ms
    End: 301ms
Btw, I think you mean "buffer += chunk;" (but you don't define buffer anywhere).
For the record, we (I'm one of the slc authors) don't have a custom build of Node; we just run node. Also, the only slc run code that executes is just enough to see that you don't want clustering and don't have a strongloop.json file, so we aren't going to cluster and aren't going to load our compiled addon... so we do nothing but start your app. This smells like a problem with your system, but you don't describe your Node version, your system, how you installed Node, or what systems you reproduced it on. It's clearly not some kind of universal problem (you can see my run above: node 0.10.32, Ubuntu 14.10).
I suggest the relationship with slc is illusory. If your test file is test.js, try running this in the same directory:

    require('module')._load(
        require('path').resolve('test.js'),
        null, true);

which is effectively what slc run does.

Two files using supertest with mocha causing EADDRINUSE

I'm using supertest to unit test my server configuration and route handlers. The server configuration tests are in test.server.js and the route handling tests are in test.routes.handlers.js.
When I run all the test files using mocha ., I get EADDRINUSE. When I run each file individually, everything works as expected.
Both files require supertest (request = require('supertest')) and the Express server file (app = require('../server.js')). In server.js, the server is started like so:
    http.createServer(app).listen(app.get('port'), config.hostName, function () {
        console.log('Express server listening on port ' + app.get('port'));
    });
Is there something wrong in my implementation? How can I avoid EADDRINUSE error when running my tests?
Mocha has a root Suite:
You may also pick any file and add "root"-level hooks; for example, add beforeEach() outside of describe()s, and the callback will run before any test case, regardless of the file it's in. This is because Mocha has a root Suite with no name.
We use that to start an Express server once (and we use an environment variable so that it runs on a different port than our development server):
    before(function () {
        process.env.NODE_ENV = 'test';
        require('../../app.js');
    });
(We don't need a done() here because require is synchronous.) This way, the server is started exactly once, no matter how many different test files include this root-level before function.
Try requiring supertest from within a root level before function in each of your files.
Answering my own question:
My supertest initialization looks like this:

    var app = require('../server.js');
    var request = require('supertest')(app);
In test.server.js, I had these require statements directly inside a describe. In test.routes.handlers.js, the statements were inside a before inside a describe.
After reading dankohn's answer, I was inspired to simply move the statements to the very top of each file, outside any describe or before, and now all the tests run without problems.
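A sketch of that fixed layout (the file name comes from the question; the route and assertion are placeholders):

    // test.routes.handlers.js — requires live at module top level,
    // outside any describe() or before()
    var app = require('../server.js');
    var request = require('supertest')(app);

    describe('route handlers', function () {
        it('responds to GET /', function (done) {
            request.get('/').expect(200, done); // placeholder route and status
        });
    });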
