Node.js: running two apps on one port

To make downtimes caused by long-lasting app reboots less painful, I thought of something like:
Main App #1 on Port 80.
Fail-over App #2 on Port 80 too, but answering requests only when App #1 isn't working.
Let App #2 serve a 'maintenance' message to active users.
Running two processes on the same port results in Error: EADDRINUSE, so the simple way doesn't work. I stumbled upon the server.on('error') event and decided to let App #2 wait until App #1 stops and the port becomes available:
var http = require('http');

// `app` is App #2's request handler (e.g. an Express app serving the maintenance page).
function tryPitchIn() {
  var server = http.createServer(app);
  server.on('listening', function () {
    console.log('Application #1 crashed/ended');
    console.log('Pitching in...');
  });
  server.on('error', function () {
    console.log('nothing to do');
    setTimeout(tryPitchIn, 250);
  });
  server.listen(80);
}
tryPitchIn();
Although the above works great, I struggle with ending App #2 when App #1 initializes, which isn't easy to do across operating systems.
Is it possible to give a node process (started by npm start) a static ID so it can be terminated from another process, preferably cross-OS? Or are there other ideas for this scenario?

You can serve your App #1 on another port, and write a micro-app that proxies requests to it and returns something else if the request to App #1 fails. You can use node-http-proxy for this, or roll your own solution and just add an "on error" clause.
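A minimal sketch of that idea, assuming node-http-proxy 1.x and App #1 listening on port 3000 (both assumptions; adjust to your setup):

var http = require('http');
var httpProxy = require('http-proxy');

// Assumes App #1 listens on port 3000.
var proxy = httpProxy.createProxyServer({ target: 'http://localhost:3000' });

// If App #1 is down, answer with a maintenance message instead of an error.
proxy.on('error', function (err, req, res) {
  res.writeHead(503, { 'Content-Type': 'text/plain' });
  res.end('Down for maintenance, back shortly.');
});

http.createServer(function (req, res) {
  proxy.web(req, res);
}).listen(80);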

I like Amadan's answer. On top of that, I have to mention that doing much other than logging in .on('error') (and also in uncaughtException) is not recommended. The system could be in a volatile state (the internal state of modules could be inconsistent, for example), and you don't want to keep running like that for long.
Either do what Amadan says, or use Node.js domains, or run several processes behind a load balancer (which is basically what Amadan says).
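For the logging-only case, a minimal last-resort handler might log and then exit, so a process supervisor can restart cleanly (a sketch, not a recommendation to keep running afterwards):

process.on('uncaughtException', function (err) {
  // Log what we can; internal state may already be inconsistent.
  console.error('Uncaught exception:', err.stack || err);
  process.exit(1); // let a supervisor (forever, systemd, etc.) restart the process
});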

Related

WebRTC Node.js WebSocket Server, Block a Request if WSS Parameter is Not 1

I would like the Node WebSocket server (command: node /root/server.js) to refuse the connection if PARAM is not 1. A connection like this would be refused: wss://example.com/socket.io/?PARAM=0&EIO=3&transport=websocket
server.js looks like this (EasyRTC framework):
https://github.com/open-easyrtc/open-easyrtc/blob/master/server_example/server_ssl.js
I finally realized it's actually not that simple (something like: inside server.js, if PARAM != 1 then exit). Can anyone confirm this complexity or offer a simple solution, without recommending complex authentication frameworks? I also can't see that the EasyRTC framework offers something like this.
Can someone lead me in the right direction because I am lost.
Thank you
I crawled through the whole EasyRTC group; there is an authentication system for the node server.js WebSocket, and it's very simple:
var onAuthenticate = function (socket, easyrtcid, appName, username, credential, easyrtcAuthMessage, next) {
  // Do your checks here.
  // Call next(null) if the client is allowed in; call next(err) to refuse,
  // which disconnects this socket only (others are unaffected).
  next(null);
};
easyrtc.events.on("authenticate", onAuthenticate);
Tested and works perfectly. More testing will follow.
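For the original PARAM check, the same hook could carry it, assuming the socket passed in is the underlying Socket.IO socket and that it exposes the connection's query string on socket.handshake.query (both assumptions worth verifying against your EasyRTC version):

var onAuthenticate = function (socket, easyrtcid, appName, username, credential, easyrtcAuthMessage, next) {
  // socket.handshake.query is assumed to hold the URL query parameters.
  if (socket.handshake.query.PARAM === '1') {
    next(null); // allowed in
  } else {
    next(new Error('PARAM must be 1')); // refuses and disconnects this socket only
  }
};
easyrtc.events.on("authenticate", onAuthenticate);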

Node.js: how to handle the server going down?

In a Node.js API there are lots of ifs, and one can easily send a request with some undefined var and crash the whole server until it restarts - something that could take up to 20 seconds.
I know that a variable should be checked for being defined before working with it. But it's very easy to forget something and keep working with an undefined var.
Is there a global setting for the server to avoid such a crash?
The easiest solution I could think of is implementing a cluster, in which only one process goes down, not the whole server. You can also make a process come back up again automatically, as the exit handler below does.
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
    cluster.fork(); // bring a replacement worker up automatically
  });
} else {
  // Workers can share any TCP connection.
  // In this case it is an HTTP server.
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}
In any application there are a lot of "ifs" and assumptions. With JavaScript being weakly typed and dynamic, you can really shoot yourself in the foot.
But the same rules apply here as in any other language: practice defensive programming. That is, cover all the bases in each function and statement block.
You can also try programming Node.js with TypeScript. It adds static type checking and other nice features that help you avoid shooting yourself in the foot. You could also use (I think) Flow to statically type-check things. But these won't make you a better programmer.
One other suggestion is to design your system as an SOA (service-oriented architecture), so that one portion going down doesn't necessarily affect the others. "Microservices" is a subset of that.
First, defensive programming and extensive testing are your friends. Obviously, preventing an issue before it happens is much better than trying to react to it after it happens.
Second, there is no foolproof mechanism for catching all exceptions at some high level and then putting your server back into a known, safe state. You just can't really do that in any complex server because you don't know what you were in the middle of when the exception happened. People will often try to do this, but it's like proceeding with a wounded server that may have some messed up internals. It's not safe or advisable. If a problem was not intercepted (e.g. exception caught or error detected) at the level where it occurred by code that knows how to properly handle that situation, then the only completely safe path forward is to restart your server.
So, if after implementing as much defensive programming as you possibly can and testing the heck out of it, you still want to prevent end-user downtime from a server crash/restart, then the best way to do that is to assume that a given server process will occasionally need to be restarted and plan for that.
The simplest way to prevent end-user downtime when a server process restarts is to use clustering and thus have multiple server processes with some sort of load balancer that both monitors server processes and routes incoming connections among the healthy server processes. When one server process is down, it is temporarily taken out of the rotation and other server processes can handle new, incoming connections. When the failed server process is successfully restarted, it can be added back to the rotation and be used again for new requests. Clustering in this way can be done within a single server (multiple processes on the same server) or across servers (multiple servers, each with server processes on them).
In some cases, this same process can even be used to roll out a new version of server code without any system downtime (doing this requires additional planning).
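For the zero-downtime rollout case, the usual cluster-based approach is to replace workers one at a time, so some workers are always serving traffic. A minimal sketch of the idea (an illustration only; it assumes a cluster master like the one in the earlier answer):

const cluster = require('cluster');

// Replace the given workers one at a time: fork a new worker, wait until it
// is accepting connections, then gracefully retire its predecessor.
function rollingRestart(workers) {
  const worker = workers.shift();
  if (!worker) return; // all workers have been replaced
  const replacement = cluster.fork();
  replacement.on('listening', () => {
    worker.on('exit', () => rollingRestart(workers));
    worker.disconnect(); // stop accepting new connections, finish current ones
  });
}

// For example, trigger a rolling restart with a signal to the master:
// process.on('SIGUSR2', () => rollingRestart(Object.values(cluster.workers)));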

Determining when karma-pact mock server has started

We're using the karma-pact plugin to run our pact JS client tests, based on the example from https://github.com/pact-foundation/pact-js/blob/master/karma/mocha/client-spec.js .
In the example there's a timeout in the before(), which I believe is there to ensure the mock service has started before the tests run (see the comment "required for slower Travis CI builds").
I'm reluctant to set a fixed timeout in our tests, as it'll either be too short or too long in different environments (e.g. CI vs local), so I was looking for a way to check whether the server has started.
I tried using the pact API https://github.com/pact-foundation/pact-node#check-if-a-mock-server-is-running , however this appears to start a new mock server which conflicts with the one started by the karma-pact plugin (an Error: kill ESRCH is reported when trying to run pact.createServer().running from within a test).
Is there a way to determine if the mock server has started, e.g. by waiting for a URL to become available? Possibly there's a way to get a reference to the mock server started by the karma-pact plugin in order to use the pact-node API?
Actually, the simplest way is to wait for the port to be in use.
Karma Pact will by default start the mock on port 1234 (and you can specify your own). Once the port is up, the service is running and you can proceed.
For example, you could use something like wait-for-port to detect the running mock service:
var waitForPort = require('wait-for-port');

waitForPort('localhost', 1234, function (err) {
  if (err) throw new Error(err);
  // ... Mock Service is up - now we can run the tests
});
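If you'd rather not pull in another dependency, an equivalent check can be hand-rolled with Node's built-in net module (a sketch; the 100 ms retry interval is arbitrary):

var net = require('net');

function waitForPort(host, port, cb) {
  var socket = net.connect(port, host, function () {
    socket.end();
    cb(); // the port accepted a connection - the mock service is up
  });
  socket.on('error', function () {
    // Not listening yet - retry shortly.
    setTimeout(function () { waitForPort(host, port, cb); }, 100);
  });
}

waitForPort('localhost', 1234, function () {
  // ... Mock Service is up - now we can run the tests
});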

Node.js http-proxy drops websocket requests

Okay, I've spent over a week trying to figure this out to no avail, so if anyone has a clue, you are a hero. This isn't going to be an easy question to answer, unless I am being a dunce.
I am using node-http-proxy to proxy sticky sessions to 16 node.js workers running on different ports.
I use Socket.IO's Web Sockets to handle a bunch of different types of requests, and use traditional requests as well.
When I switched my server over to proxying via node-http-proxy, a new problem crept up: sometimes my Socket.IO session cannot establish a connection.
I literally can't reproduce it reliably for the life of me; the only way to trigger it is to throw a lot of traffic from multiple clients at the server.
If I reload the user's browser, it can then sometimes re-connect, and sometimes not.
Sticky Sessions
I have to proxy sticky sessions as my app authenticates on a per-worker basis, and so it routes a request based on its Connect.SID cookie (I am using connect/express).
Okay, some code
This is my proxy.js file that runs in node and routes to each of the workers:
var http = require('http');
var httpProxy = require('http-proxy');

// What ports the proxy is routing to.
var data = {
  proxyPort: 8888,
  currentPort: 8850,
  portStart: 8850,
  portEnd: 8865
};

// Just gives the next port number.
var nextPort = function () {
  var next = data.currentPort++;
  next = (next > data.portEnd) ? data.portStart : next;
  data.currentPort = next;
  return data.currentPort;
};

// A hash of Connect.SIDs for sticky sessions.
data.routes = {};

var svr = httpProxy.createServer(function (req, res, proxy) {
  var port = false;
  // parseCookies is just a little function
  // that... parses cookies.
  var cookies = parseCookies(req);
  // If there is an SID passed from the browser.
  if (cookies['connect.sid'] !== undefined) {
    var ip = req.connection.remoteAddress;
    if (data.routes[cookies['connect.sid']] !== undefined) {
      // If there is already a route assigned to this SID,
      // make that route's port the assigned port.
      port = data.routes[cookies['connect.sid']].port;
    } else {
      // If there isn't a route for this SID,
      // create the route object and log its
      // assigned port.
      port = data.currentPort;
      data.routes[cookies['connect.sid']] = {
        port: port
      };
      nextPort();
    }
  } else {
    // Otherwise assign the next port in rotation; it will
    // pick up a connect.sid on the next go.
    // This doesn't really happen.
    port = nextPort();
  }
  // Now that we have the chosen port,
  // proxy the request.
  proxy.proxyRequest(req, res, {
    host: '127.0.0.1',
    port: port
  });
}).listen(data.proxyPort);
// Now we handle WebSocket requests.
// Basically, I feed off of the above route
// logic and try to route my WebSocket to the
// same server regular requests are going to.
svr.on('upgrade', function (req, socket, head) {
  var cookies = parseCookies(req);
  var port = false;
  // Make sure there is a Connect.SID.
  if (cookies['connect.sid'] !== undefined) {
    // Make sure there is a route...
    if (data.routes[cookies['connect.sid']] !== undefined) {
      // Assign the appropriate port.
      port = data.routes[cookies['connect.sid']].port;
    } else {
      // This has never, ever happened; I've been logging it.
    }
  } else {
    // This has never, ever happened; I've been logging it.
  }
  if (port === false) {
    // This has never happened...
  }
  // So now route the WebSocket to the same port
  // as the regular requests are getting.
  svr.proxy.proxyWebSocketRequest(req, socket, head, {
    host: 'localhost',
    port: port
  });
});
Client Side / The Phenomena
Socket connects like so:
var socket = io.connect('http://whatever:8888');
About 10 seconds after logging on, I get this error back on this listener, which doesn't help much.
socket.on('error', function (data) {
  // This is what gets triggered:
  // Firefox can't establish a connection to the server at ws://whatever:8888/socket.io/1/websocket/Nnx08nYaZkLY2N479KX0.
});
The Socket.IO GET request that the browser sends never comes back - it just hangs in pending, even after the error comes back, so it looks like a timeout error. The server never responds.
Server Side - A Worker
This is how a worker receives a socket request. Pretty simple. All workers have the same code, so you'd think one of them would get the request and acknowledge it...
app.sio.socketio.sockets.on('connection', function (socket) {
  // Works... some of the time! All of my workers run this
  // exact same process.
});
Summary
That's a lot of data, and I doubt anyone is willing to wade through it, but I'm totally stumped and don't know where to check next or what to log next to solve it. I've tried everything I know to find the problem, to no avail.
UPDATE
Okay, I am fairly certain that the problem is in this statement on the node-http-proxy GitHub homepage:
node-http-proxy is <= 0.8.x compatible, if you're looking for a >=
0.10 compatible version please check caronte
I am running Node.js v0.10.13, and the phenomenon is exactly as some have commented in GitHub issues on this subject: it just drops websocket connections randomly.
I've tried to implement caronte, the "newer" fork, but it is not at all documented, and although I have tried my hardest to piece together its docs into a workable solution, I can't get it to forward websockets; my Socket.IO downgrades to polling.
Are there any other ideas on how to get this implemented and working? node-http-proxy had 8,200 downloads yesterday! Surely someone is using a Node build from this year and proxying websockets...
What I am looking for, exactly
I want to accomplish a proxy server (preferably Node) that proxies to multiple node.js workers, and which routes the requests via sticky sessions based on a browser cookie. This proxy would need to stably support traditional requests as well as websockets.
Or...
I don't mind accomplishing the above via clustered node workers, if that works. My only real requirement is maintaining sticky sessions based on a cookie in the request header.
If there is a better way to accomplish the above than what I am trying, I am all for it.
In general I don't think Node is the most used option as a proxy server; I, for one, use nginx as a frontend server for Node, and it's a really great combination. Here are some instructions to install and use the nginx sticky-sessions module.
It's a lightweight frontend server with a JSON-like configuration, solid, and very well tested.
nginx is also a lot faster if you want to serve static pages and CSS. It's ideal for configuring your caching headers, redirecting traffic to multiple servers depending on the domain, sticky sessions, compressing CSS and JavaScript, etc.
You could also consider a pure load-balancing open source solution like HAProxy. In any case, I don't believe Node is the best tool for this; it's better to use it to implement your backend only, and to put something like nginx in front of it to handle the usual frontend server tasks.
I agree with hexacyanide. To me it would make the most sense to queue work through a service like Redis or some kind of message queue system. Workers would be queued through Redis Pub/Sub functionality by the web nodes (which are proxied). Workers would call back upon error or completion, or stream data in real time with a 'data' event. Maybe check out the library kue. You could also roll your own similar library. RabbitMQ is another system for a similar purpose.
I get using socket.io if you're already using that technology, but you need to use tools for their intended purpose. Redis or an MQ system would make the most sense, and they pair great with websockets (socket.io) to create realtime, insightful applications.
Session affinity (sticky sessions) is supported through Elastic Load Balancer on AWS, and it supports websockets. A PaaS provider (Modulus) does exactly this. There's also satellite, which provides sticky sessions for node-http-proxy, though I have no idea if it supports websockets.
I've been looking into something very similar to this myself, with the intent of generating (and destroying) Node.js cluster nodes on the fly.
Disclaimer: I'd still not recommend doing this with Node; nginx is more stable for the sort of architecture you're looking for, and even more so HAProxy (very mature, and it easily supports sticky-session proxying). As @tsturzl indicates, there is satellite, but given the low volume of downloads, I'd tread carefully (at least in a production environment).
That said, since you appear to have everything already set up with Node, rebuilding and re-architecting may be more work than it's worth. Therefore, to install the caronte branch with npm:
Remove your previous node-http-proxy master installation with npm uninstall http-proxy and/or sudo npm -d uninstall http-proxy
Download the caronte branch .zip and extract it.
Run npm -g install /path/to/node-http-proxy-caronte
In my case the install linkage was broken, so I had to run sudo npm link http-proxy
I've got it up and running using their basic proxy example - whether this resolves your dropped-sessions issue, only you will know.

In Node.js, is it normal to create several "server" objects, but only bind one to a port?

I'm just about done reading "Node.js in Action", and I'm trying to put together the pieces of Node.js --> Connect --> Express. I have a question about the "servers" that we create in Node.
var node_server = http.createServer();
var connect_app = Connect();
var express_app = Express();
In the code above, is it true that connect_app is basically a "subclass" of node_server? (I know this is JavaScript, so we don't really have subclassing, but I don't know what else to call it; extension?) And likewise, is express_app basically a "subclass" of connect_app? It's my understanding that all of these objects are servers which could be bound to a port and respond to requests, but that in practice we typically bind only ONE of them to a port and use it to proxy requests to the other server objects.
Am I on the right track in learning this?
First of all, shake off the idea that there are three running servers - there's only one.
Express is a framework that relies on Connect, which is another framework/set of middleware. Connect, in turn, relies on Node's HTTP module. It's basically one abstraction on top of another.
An analogy: Express is the car, Connect is the engine, and Node is the engine's parts. You only have one running car (one server, in your case), but multiple components powering it.
@josh3736 has commented with a better explanation of how it works.
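To make the relationship concrete: a Connect or Express "app" is just a request-handler function that you hand to a real HTTP server, and only that one server binds the port. A minimal sketch (assuming Express is installed):

var http = require('http');
var express = require('express');

var app = express(); // just a function (req, res) with extra methods attached

app.get('/', function (req, res) {
  res.send('hello');
});

// Only this http.Server owns the port; the Express app is merely its handler.
http.createServer(app).listen(3000);

// app.listen(3000) is shorthand for exactly the line above.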
