Is there an event emitted for socket.io ack/responses?

I'm using socket.io and express both with feathersjs. For metrics gathering I'm trying to capture round-trips for requests made both through Express as well as through socket.io.
The express side is easy with express middleware.
I can catch the socket.io inbound request via socket.use:
const app = express(feathers());
// ... (set up feathers services etc.)
app.configure(socketio(function (io) {
  io.use(function (socket, next) {
    socket.use((packet, next) => {
      // ... (extract the verb and pathing info from the packet)
      next();
    });
    next();
  });
}));
However, I can't find any equivalent of socket.use on the outbound/acknowledgement side. There are some events inside engine.io under the covers, but I can't access them.
I'm really trying to find an equivalent set of events emitted for each request/response (the latter being the equivalent of finish in Express).
I can't use connect/disconnect events here; I want to capture each request made over the socket and the responses sent for them, regardless of the feathers service and module.
Feathers.js hooks could be used for this, but it would require passing a bunch of context from the socket.io middleware into feathers, which I was hoping to not do.
Anyone know a way to do this?

In case anyone comes here looking for a way to do this, I'm not sure why I didn't think of it sooner.
The inbound packet (available in socket.use) will include a function as its last parameter if it should be acknowledged.
Wrapping the last function to inject my own logic worked.
Pseudo-code:
socket.use((packet, next) => {
  const id = uuidv4();
  console.log(`start of request: ${id}`);
  const cb = packet[packet.length - 1];
  if (typeof cb === 'function') {
    packet[packet.length - 1] = function (err, data) {
      console.log(`end of request: ${id}`);
      if (err) {
        console.error(err);
      }
      cb(err, data);
    };
  }
  next();
});
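Since the original goal was metrics gathering, the same wrapper can time the round trip directly; a minimal sketch (process.hrtime.bigint() needs Node 10.7+, and the console.log stands in for whatever metrics sink you use):
socket.use((packet, next) => {
  const startedAt = process.hrtime.bigint();
  const cb = packet[packet.length - 1];
  if (typeof cb === 'function') {
    packet[packet.length - 1] = function (...args) {
      // elapsed time between receiving the packet and sending the ack
      const elapsedMs = Number(process.hrtime.bigint() - startedAt) / 1e6;
      console.log(`round-trip: ${elapsedMs.toFixed(1)} ms`);
      cb(...args);
    };
  }
  next();
});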

Related

Modify response body before res.send() executes in ExpressJS

In the application I'm currently developing, I'm using Express. I want to get the response before it's sent and modify it (for the purpose of JWT). The application has a dozen endpoints, and I don't want to create my own function like sendAndSign() and replace res.send() everywhere in the code. I heard there is an option to override/modify the logic of the res.send(...) method.
I found something like this example of modifying the response, but in my case it doesn't work. Is there any other option (maybe using some plugin) to accomplish this?
You can intercept the response body in Express by temporarily overriding res.send:
function convertData(originalData) {
  // ...
  // return something new
}

function responseInterceptor(req, res, next) {
  var originalSend = res.send;
  res.send = function () {
    arguments[0] = convertData(arguments[0]);
    originalSend.apply(res, arguments);
  };
  next();
}
app.use(responseInterceptor);
I tested in Node.js v10.15.3 and it works well.
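For the JWT purpose mentioned in the question, convertData could sign the outgoing body; a minimal sketch, assuming the jsonwebtoken package and a secret taken from the environment:
const jwt = require('jsonwebtoken'); // assumed dependency
const SECRET = process.env.JWT_SECRET; // hypothetical secret

function convertData(originalData) {
  // wrap the original body in a signed token before it goes out
  return jwt.sign({ data: originalData }, SECRET);
}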
I have created an NPM package called express-response-hooks that provides response hooks.
You can register a hook in a middleware before all your other routes, which will enable you to change the response body when send() is called.
For example:
const responseHooks = require('express-response-hooks');

// response hooks initialization
app.use(responseHooks());

// register a middleware that modifies the response body before it is sent to the client
app.use(function (req, res, next) {
  // hook on the "send()" function
  res.hooks.on('send', (args) => {
    args[0] = 'new-body'; // args[0] is the body passed to send()
  });
  next();
});

When tunnelling a TLS connection, how to pass additional information?

I have created a bare-bones HTTP proxy that performs HTTP tunnelling using the HTTP CONNECT method.
const http = require('http');
const https = require('https');
const net = require('net');
const pem = require('pem');

const createHttpsServer = (callback) => {
  pem.createCertificate({
    days: 365,
    selfSigned: true
  }, (error, {serviceKey, certificate, csr}) => {
    const httpsOptions = {
      ca: csr,
      cert: certificate,
      key: serviceKey
    };
    const server = https.createServer(httpsOptions, (req, res) => {
      // How do I know here what the target server port is?
      res.writeHead(200);
      res.end('OK');
    });
    server.listen((error) => {
      if (error) {
        console.error(error);
      } else {
        callback(null, server.address().port);
      }
    });
  });
};

const createProxy = (httpsServerPort) => {
  const proxy = http.createServer();
  proxy.on('connect', (request, requestSocket, head) => {
    // Here I know what the target server port is.
    const targetServerPort = Number(request.url.split(':')[1]);
    console.log('target server port', targetServerPort);
    const serverSocket = net.connect(httpsServerPort, 'localhost', () => {
      requestSocket.write(
        'HTTP/1.1 200 Connection established\r\n\r\n'
      );
      serverSocket.write(head);
      serverSocket.pipe(requestSocket);
      requestSocket.pipe(serverSocket);
    });
  });
  proxy.listen(9000);
};

const main = () => {
  createHttpsServer((error, httpsServerPort) => {
    if (error) {
      console.error(error);
    } else {
      createProxy(httpsServerPort);
    }
  });
};

main();
The server accepts an HTTPS connection and responds with an "OK" message without forwarding the request further.
As you can see in the code (see // Here I know what the target server port is.), I can obtain the target server's port within the HTTP CONNECT event handler. However, I cannot figure out how to pass this information to the createHttpsServer request handler (see // How do I know here what the target server port is?).
When tunnelling a TLS connection, how to pass additional information?
The above code can be tested by running:
$ node proxy.js &
$ curl --proxy http://localhost:9000 https://localhost:59194/foo.html -k
The objective is to respond with "OK localhost:59194".
You can't add anything to a TLS stream (thankfully), short of tunnelling it inside another protocol, which is what the CONNECT method already does. But since you have the HTTP proxy and the HTTPS server in the same codebase, you don't need to fling the TLS stream over the network another time. Instead, you want to parse the TLS stream, and then you can pass any variables to the code that handles it.
However, after parsing TLS you'll still have a raw HTTP stream, and you'll need an HTTP server to turn it into requests and to handle responses.
The quick and rather dirty way to go about it is to use Node's HTTPS server to both decode TLS and parse HTTP. But the server's API doesn't provide for dealing with sockets that are already connected, and the server's code isn't cleanly separated from the connection code. So you need to hijack the server's internal connection-handling logic; this is of course susceptible to breakage in case of future changes:
const http = require('http');
const https = require('https');
const pem = require('pem');

const createProxy = (httpsOptions) => {
  const proxy = http.createServer();
  proxy.on('connect', (request, requestSocket, head) => {
    const server = https.createServer(httpsOptions, (req, res) => {
      res.writeHead(200);
      res.end('OK');
    });
    server.emit('connection', requestSocket);
    requestSocket.write('HTTP/1.1 200 Connection established\r\n\r\n');
  });
  proxy.listen(9000);
};

const main = () => {
  pem.createCertificate({
    days: 365,
    selfSigned: true
  }, (error, {serviceKey, certificate, csr}) => {
    createProxy({
      ca: csr,
      cert: certificate,
      key: serviceKey
    });
  });
};

main();
To avoid creating an HTTPS server instance on every request, you can move the instance out and tack your data onto the socket object instead:
const server = https.createServer(httpsOptions, (req, res) => {
  res.writeHead(200);
  // here we reach for the net.Socket instance saved on the tls.TLSSocket object,
  // for extra dirtiness
  res.end('OK ' + req.socket._parent.marker + '\n');
});

proxy.on('connect', (request, requestSocket, head) => {
  requestSocket.marker = Math.random();
  server.emit('connection', requestSocket);
  requestSocket.write('HTTP/1.1 200 Connection established\r\n\r\n');
});
With the above code, if you do several successive requests:
curl --proxy http://localhost:9000 https://localhost:59194/foo.html \
https://localhost:59194/foo.html https://localhost:59194/foo.html \
https://localhost:59194/foo.html https://localhost:59194/foo.html -k
then you'll also notice that they're processed on a single connection, which is nice:
OK 0.6113572936982015
OK 0.6113572936982015
OK 0.6113572936982015
OK 0.6113572936982015
OK 0.6113572936982015
I can't quite vouch that nothing will be broken by handing the socket to the HTTPS server while the proxy server already manages it. [The server has the presence of mind to not overwrite another instance on the socket object](https://github.com/nodejs/node/blob/v10.9.0/lib/_http_server.js#L331), but otherwise seems to be rather closely involved with the socket. You'll want to test it with longer-running connections.
As for the `head` argument, [which can indeed contain initial data](https://www.rfc-editor.org/rfc/rfc2817#section-5.2):
- You might be able to put it back on the stream with requestSocket.unshift(head), but I'm not sure that it won't be immediately consumed by the proxy server.
- You might be able to chuck it over to the HTTPS server with requestSocket.emit('data', head), since the HTTP server seems to use the stream events. However, the TLS socket source calls read() for whatever reason, and that's mutually exclusive with the events, so I'm not sure how they even work with each other.
- One solution would be to make your own wrapper for stream.Duplex that forwards all calls and events, except for read() in the case when this initial buffer exists, and then use this wrapper in place of requestSocket. But you'll then need to replicate the 'data' event as well, in accordance with the logic of Node's readable streams.
- Finally, you can try creating a new duplex stream, writing head to it and piping the socket into it, like you did initially, and using the stream in place of the socket for the HTTPS server. I'm not sure it will be compatible with the HTTP server's rather overbearing management of the socket.
A cleaner approach is to decode the TLS stream yourself and use a standalone parser for the resulting HTTP stream. Thankfully, Node has a tls module that is nicely isolated and turns TLS sockets into regular sockets:
proxy.on('connect', (request, requestSocket, head) => {
  const httpSocket = new tls.TLSSocket(requestSocket, {
    isServer: true,
    // this can be reused for all requests,
    // as it's normally saved on an HTTPS server instance
    secureContext: tls.createSecureContext(httpsOptions)
  });
  // ...
});
See caveats on tls.createSecureContext regarding replicating the behavior of the HTTPS server.
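For completeness, one way to get request/response handling back after decoding TLS this way is to hand the decrypted socket to a plain http.Server with the same emit('connection') trick as above; a sketch under the same internal-API caveats (httpsOptions as defined earlier):
const http = require('http');
const tls = require('tls');

// plain HTTP server that sees the decrypted stream
const inner = http.createServer((req, res) => {
  res.writeHead(200);
  res.end('OK ' + req.socket.marker + '\n');
});

proxy.on('connect', (request, requestSocket, head) => {
  const httpSocket = new tls.TLSSocket(requestSocket, {
    isServer: true,
    secureContext: tls.createSecureContext(httpsOptions)
  });
  // no _parent digging needed: the data rides on the decrypted socket itself
  httpSocket.marker = request.url; // e.g. 'localhost:59194'
  inner.emit('connection', httpSocket);
  requestSocket.write('HTTP/1.1 200 Connection established\r\n\r\n');
});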
Alas, Node's HTTP parser isn't so usable: it's a C library, which necessitates quite a bit of legwork between the socket and the parser calls. And its API can (and does) change between versions without warning, with a larger surface for incompatibilities compared to the HTTP server internals used above.
There are NPM modules for parsing HTTP (e.g. one, two), but none of them seem particularly mature or maintained.
I also have doubts about the feasibility of a custom HTTP server, because network sockets tend to require plenty of nurture over time due to edge cases, with hard-to-debug timeout issues and such, all of which should already be accounted for in Node's HTTP server.
P.S. One possible area of investigation is how the Cluster module handles connections: AFAIK the parent process in a cluster hands connection sockets over to the children, but it doesn't fork on every request, which suggests that the child processes somehow deal with connected sockets in code outside of an HTTP server instance. However, since the Cluster module is in core, it may exploit non-public APIs.

How do I use client side JavaScript to find and return data from MongoDB and a Node.js server with no framework?

I have read all the questions on SO that I could find. They all use Express or Mongoose, or they leave something out. I understand that Node.js is the server. I understand that the MongoDB require is the driver the Node.js server uses to open a connection to MongoDB. Then, on the server, I can do this (from the documentation):
var MongoClient = require('mongodb').MongoClient;
var assert = require('assert');
var ObjectId = require('mongodb').ObjectID;
var url = 'mongodb://localhost:27017/test';

var findRestaurants = function(db, callback) {
  var cursor = db.collection('restaurants').find();
  cursor.each(function(err, doc) {
    assert.equal(err, null);
    if (doc != null) {
      console.dir(doc);
    } else {
      callback();
    }
  });
};

// Connect to the db
MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  findRestaurants(db, function() { // I don't want to do this as soon as the server starts
    db.close();
  });
});

// if I put findRestaurant here,
function findRestaurant(data) {
}
How do I call it from the client?
I do not want to find data as soon as I start the server. I realize those are examples, but what I cannot find is a way where the client requests some data and where the Node.js server returns it.
I have seen close examples using jQuery or Angular on the client, and then Express, Mongoose, Meteor, etc.
All I want to understand is how I make this request from the client's browser. I can do that with XMLHttpRequest(), so I can put that part together, I believe. But any example is appreciated.
But what is waiting on the Node.js side of things (how do I set up my function to be called once the server is listening)?
How do I create a function on the server side, maybe "GetRestaurants" and have that return the data it gets using find()?
I cannot find this information, this simple, anywhere. Is it too complicated to do the example without a framework?
I do not wish to copy and paste from something using Express, etc. without understanding what's going on. Most explanations never say, this goes on the Node.js side. This is client. I know I am expected to do my own research, but I am not putting it together, too used to RDBMSes, IIS, Apache, PHP, and so on.
I believe I have a fundamental misunderstanding of what's going on in the paradigm.
Please. No REST API creation, no frameworks of any kind on Node.js other than using the MongoDB library (unless there is an absolute requirement), not even jQuery, Angular, Jade, or anything else for the client side, straight up JavaScript on all sides.
I have seen questions like this,
How to display data from MongoDB to the frontend via Node.js without using a framework
But they do not show what I am asking. They do it all at once, as soon as the database connects. What if I want to do a delete or insert or find? There are many SO questions like this, but I have not hit the one that shows what I am looking for.
This should give you the guidance you need. Once you go to a browser and type http://localhost:5155, the callback function (request, response) will be called and the request to the db will be made. Make sure you get a response, and then start working on the client-side code:
const http = require('http');
const MongoClient = require('mongodb').MongoClient;
const assert = require('assert');
const url = 'mongodb://localhost:27017/test';

const server = http.createServer(function (request, response) {
  getData(function (data) {
    response.end(data);
  });
});

function getData(callback) {
  const findRestaurants = function (db, callback) {
    const cursor = db.collection('restaurants').find();
    const data = [];
    cursor.each(function (err, doc) {
      assert.equal(err, null);
      if (doc !== null) {
        data.push(doc);
      } else {
        // the cursor is exhausted; serialize the results for the HTTP response
        callback(JSON.stringify(data));
      }
    });
  };

  // Connect to the db
  MongoClient.connect(url, function (err, db) {
    assert.equal(null, err);
    findRestaurants(db, function (data) {
      db.close();
      callback(data);
    });
  });
}

server.listen(5155);
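On the browser side, the framework-free counterpart can be a bare XMLHttpRequest against that server; a minimal sketch, assuming the page is served from the same origin (otherwise the server must also send CORS headers):
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://localhost:5155');
xhr.onload = function () {
  // the server responds with the JSON-encoded array of restaurants
  var restaurants = JSON.parse(xhr.responseText);
  console.log(restaurants);
};
xhr.onerror = function () {
  console.error('request failed');
};
xhr.send();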

Dangling callbacks: return response before every callback has returned

Question: Would you consider dangling callbacks bad node.js style, or even dangerous? If so, under which premise?
Case: as described below, imagine you need to make calls to a DB in an express server that update some data, yet the client doesn't need to be informed about the result. In this case you can return a response immediately, without waiting for the asynchronous call to complete. Call this a dangling callback, for lack of a better name.
Why is this interesting? Because tutorials and documentation mostly show the waiting case, in the worst cases teaching callback hell. Recall your first experiences with, say, express, mongodb and passport.
Example:
'use strict'

const assert = require('assert')
const express = require('express')
const app = express()

function longOperation (value, cb) {
  // might fail and: return cb(err) ...here
  setTimeout(() => {
    // after some time invokes the callback
    return cb(null, value)
  }, 4000)
}

app.get('/ping', function (req, res) {
  // do some declarations here
  //
  // do some request processing here
  //
  // call a long op, such as a DB call here.
  // however the client does not need to be
  // informed about the result of the operation
  longOperation(1, (err, val) => {
    assert(!err)
    assert(val === 1)
    console.log('...fired callback here though')
    return
  })
  console.log('sending response here...')
  return res.send('Hello!')
})

let server = app.listen(3000, function () {
  console.log('Starting test:')
})
Yeah, this is basically what's called a "fire and forget" service in other contexts, and it could also be the first step in a good design implementing command-query responsibility segregation (CQRS).
I don't consider it a "dangling callback"; the response in this case acknowledges that the request was received. Your best bet here is to make sure your response includes some kind of hypermedia that lets clients get the status of their request later, and if the request failed with an error they can fix, have the content at the status resource URL tell them how.
Think of it in the case of a user registration workflow where the user has to be approved by an admin, or has to confirm their email before getting access.
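A minimal sketch of that acknowledge-now, process-later shape, reusing longOperation from the example above (the /register and /status routes and the in-memory jobs map are illustrative, not part of the original code):
const crypto = require('crypto')
const jobs = new Map() // illustrative in-memory job store

app.post('/register', function (req, res) {
  const id = crypto.randomBytes(8).toString('hex')
  jobs.set(id, { status: 'pending' })
  // acknowledge immediately and point the client at a status resource
  res.status(202).location('/status/' + id).send({ statusUrl: '/status/' + id })
  longOperation(1, (err) => {
    jobs.set(id, { status: err ? 'failed' : 'done' })
  })
})

app.get('/status/:id', function (req, res) {
  const job = jobs.get(req.params.id)
  if (!job) return res.sendStatus(404)
  res.send(job)
})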

Mocking mongoskin in node.js application

As I try to follow the TDD way of development, I still struggle with how to mock certain things in JavaScript. I'm used to mocking in Java with Mockito and Spring (e.g. injecting a mongo mock instead of a real mongo instance), but how do I approach this in JavaScript?
Let me give a simple node.js example with node-restify:
var connect = require('connect'); // needed for connect.logger() below
var mongoskin = require('mongoskin');
var restify = require('restify');

// ###############################
// ## Global Configuration
// ###############################
var mongoURL = process.env.MONGOHQ_URL || "mongodb://localhost/test";
var serverPort = process.env.PORT || 5000;

// ###############################
// ## Basic Setup
// ###############################
var server = restify.createServer({
  name: 'test'
});
server.use(connect.logger());
server.use(restify.acceptParser(server.acceptable));
server.use(restify.bodyParser());

var db = mongoskin.db(mongoURL);

// ###############################
// ## API
// ###############################
server.get('/api/v1/projects', function (req, res, next) {
  db.collection('projects').find().toArray(function (error, projects) {
    if (error) {
      return next(new restify.InternalError());
    }
    res.json(200, projects);
    return next();
  });
});

server.get('/api/v1/projects/:projectId', function (req, res, next) {
  if (req.params.projectId === null) {
    return next(new restify.InvalidArgumentError('ProjectId must not be null or empty.'));
  }
  db.collection('projects').findById(req.params.projectId, function (error, project) {
    if (error) {
      return next(new restify.InternalError());
    }
    res.json(200, project);
    return next();
  });
});

// ###############################
// ## Main Server Initialization
// ###############################
server.listen(serverPort, function () {
  console.log('%s listening at %s', server.name, server.url);
});
I would now like to have a test JavaScript file where I can test those two 'get' handlers. Furthermore, I would like to mock the mongoskin instance ('db') so that I can, for example, use JSMockito to spy on and fake some behaviour.
What is the best approach to this? Can someone post a small example file? And how do I inject the mocked db instance?
Thanks for your help!
Thierry
There's plenty of precedent out there for easily mocking a REST API in general:
https://github.com/flatiron/nock
The problem with mocking a database is that it usually has an extremely complex and hairy API. There are two easy (and thus less correct in a strict unit-testing sense) ways to do this.
One is to have 'models' that wrap your entity access instead of going directly to the database driver. Then you can easily mock your model APIs (see the sketch after this answer). This is fine, but a little annoying if you're just doing some basic database operations and have no need for a big model abstraction.
The second is to just spin up a database with some test data and connect to it during the test. This is more of a 'functional test', but in my experience a lot more practical.
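To illustrate the first approach: pull data access into a small model module and let the route depend on it, so a test can hand over a stub instead (the file layout and stub shape here are illustrative):
// models/projects.js -- a thin wrapper around the driver
var db = require('mongoskin').db(process.env.MONGOHQ_URL || 'mongodb://localhost/test');

module.exports = {
  findAll: function (callback) {
    db.collection('projects').find().toArray(callback);
  }
};

// routes.js -- handlers depend on a model passed in, not on mongoskin directly
var restify = require('restify');

module.exports = function (server, projects) {
  server.get('/api/v1/projects', function (req, res, next) {
    projects.findAll(function (error, items) {
      if (error) {
        return next(new restify.InternalError());
      }
      res.json(200, items);
      return next();
    });
  });
};

// in a test, no real database is needed -- hand the routes a stub model
var projectsStub = {
  findAll: function (callback) {
    callback(null, [{name: 'fake project'}]);
  }
};
// routes(server, projectsStub); then assert on the HTTP responses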
