Pooled connections - monitoring concurrent requests in Node.js Request library? - javascript

I'm sending out a lot of requests to another server, and want to limit them so as to not overload the server. My impression is that this can be done with the pool parameter in options, but I'm not sure if I'm doing so properly.
I'd like to be able to keep track of when the requests are sent out, as I'm trying to establish a duplex connection, and need to make sure the corresponding GET and POST requests are sent out at the same time.
Here's a simplified example of what I'm trying:
var request = require('request');
var options = {
  'url': 'http://www.google.com',
  'pool': {
    'maxSockets': 3
  }
};
for (var i = 0; i < 100; i++) {
  request.get(options, (function(j) {
    return function(err, res, body) {
      console.log(j);
    };
  })(i));
}
Is there an event emitted when the requests are actually sent out? Is there any way for me to track when, and in what order each request is being sent out?

I found this in the Node.js documentation:
Event: 'socket'
function (socket) { }
Emitted after a socket is assigned to this request.
You can use it to monitor connections as they are assigned sockets like this:
http.get(options, function(res) {
  // Do stuff
}).on("socket", function (socket) {
  // note: the socket event is 'connect'; 'connection' is a server-side event
  socket.on("connect", function () {
    // record connection
  });
});
The same event is emitted with the request library, since it's just a wrapper around the built-in http module.
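A minimal sketch of tracking dispatch order, using the built-in http module directly (which request wraps, so the event names are guaranteed); the host and loop count are just placeholders:
var http = require('http');
// mirror the pool limit from the question
http.globalAgent.maxSockets = 3;
for (var i = 0; i < 10; i++) {
  (function (j) {
    var req = http.get({ host: 'www.google.com' }, function (res) {
      res.resume(); // drain the response so the socket can be reused
    });
    req.on('socket', function (socket) {
      console.log('request ' + j + ' assigned a socket at ' + Date.now());
      // note: 'connect' only fires for newly created sockets;
      // reused keep-alive sockets are already connected
      socket.on('connect', function () {
        console.log('request ' + j + ' connected at ' + Date.now());
      });
    });
  })(i);
}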

Related

multiple tcp connections despite http keep-alive

I'm not sure whether I understand HTTP keep-alive correctly. In my opinion, it should reuse the TCP connection rather than build a new one. However, I found something really strange: it seems hard to anticipate the behavior of HTTP keep-alive.
Server: NodeJS & Express ^4.16.3
I have used Wireshark to analyze the results.
Situation 1:
Server-side
for (let i = 1; i < 11; i++) {
  app.use('/' + i, (req, res) => {
    res.header('cache-control', 'no-store');
    res.send('i');
  });
}
server.keepAliveTimeout = 50000;
Client side
setTimeout(() => {
  for (let i = 1; i < 11; i++) {
    fetch('' + i).then(data => console.log(data));
  }
}, 10000);
Result: the TCP connection is reused (only one TCP connection); all fetch requests reuse the TCP connection established by index.html.
Situation 2:
The client-side code is the same; only the server-side code changes:
for (let i = 1; i < 11; i++) {
  app.use('/' + i, (req, res) => {
    res.header('cache-control', 'no-store');
    // here I have added a timeout!
    setTimeout(() => {
      res.send('i');
    }, 2000);
  });
}
Result: 5 more TCP connections are established (the picture shows only 4 because the screenshot is incomplete), even though I have set server.keepAliveTimeout = 50000.
So my question is: what does HTTP keep-alive really mean? Why does it behave like this?
If it will not reuse the same TCP connection in situation 2, what is the point of keep-alive?
I'd appreciate any thoughts!
Yes, HTTP keep-alive should reuse your TCP connection to the server. The server appends a Connection: keep-alive header to the response, and that is what tells the client to keep the connection alive; the client doesn't know the connection can be reused until your server responds.
So in your first scenario, the server replies with the header as soon as the request is received, so the second request reuses the TCP connection (actually it only may reuse it; you got lucky that the server responded to the first request before the second one was sent).
But in the second scenario, the server waits 2 seconds before sending the response, so the client doesn't learn that the connection is keep-alive until those 2 seconds have passed. All the other requests need to be sent before that, so by default the client creates a new connection for each HTTP request.
This is efficient if you call an HTTP interface sequentially, like req -> res -> req -> res, but it can be inefficient if you want to fetch independent pieces of data from the server in parallel.
Try this on client side if you have any doubts,
setTimeout(() => {
  fetch('1').then(data => console.log(data)); // first request, so the server can announce keep-alive
  setTimeout(function () {
    for (let i = 2; i < 11; i++) {
      fetch('' + i).then(data => console.log(data));
    }
  }, 5000);
}, 10000);
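If the client were Node.js rather than a browser, the reuse could be made explicit with a keep-alive agent. A minimal sketch, assuming a hypothetical local server on port 3000 serving the same /1../10 routes:
const http = require('http');
// One keep-alive agent limited to a single socket, so all ten requests
// are serialized over the same TCP connection regardless of server delay.
const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });
for (let i = 1; i < 11; i++) {
  http.get({ host: 'localhost', port: 3000, path: '/' + i, agent: agent }, res => {
    res.resume(); // drain the body so the socket is released for reuse
    res.on('end', () => console.log('finished request ' + i));
  });
}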

Error handling over websockets: a design decision

I'm currently building a webapp that has two clear use cases:
1. Traditional: the client requests data from the server.
2. The client requests a stream from the server, after which the server starts pushing data to the client.
Currently I'm implementing both 1 and 2 using JSON message passing over a websocket. However, this has proven hard, since I need to hand-code lots of error handling: the client is not waiting for the response, it just sends the message hoping it will get a reply sometime.
I'm using JS and React on the frontend and Clojure on the backend.
I have two questions regarding this:
1. Given the current design, what alternatives are there for error handling over a websocket?
2. Would it be smarter to split the two use cases, using REST for UC1 and websockets for UC2? Then I could use something like fetch on the frontend for the REST calls.
Update:
The current problem is not knowing how to build an async send function over websockets that can match sent messages with their response messages.
Here's a scheme for doing request/response over socket.io. You could do this over a plain webSocket, but you'd have to build a little more of the infrastructure yourself. The same code can be used on both client and server:
function initRequestResponseSocket(socket, requestHandler) {
  var cntr = 0;
  var openResponses = {};
  // send a request
  socket.sendRequestResponse = function(data, fn) {
    // put this data in a wrapper object that contains the request id
    // save the callback function for this id
    var id = cntr++;
    openResponses[id] = fn;
    socket.emit('requestMsg', {id: id, data: data});
  };
  // process a response message that comes back from a request
  socket.on('responseMsg', function(wrapper) {
    var id = wrapper.id, fn;
    if (typeof id === "number" && typeof openResponses[id] === "function") {
      fn = openResponses[id];
      delete openResponses[id];
      fn(null, wrapper.data); // (err, response) to match the usage below
    }
  });
  // process a requestMsg
  socket.on('requestMsg', function(wrapper) {
    if (requestHandler && wrapper.id) {
      requestHandler(wrapper.data, function(responseToSend) {
        socket.emit('responseMsg', {id: wrapper.id, data: responseToSend});
      });
    }
  });
}
This works by wrapping every message sent in a wrapper object that contains a unique id value. Then, when the other end sends its response, it includes that same id value. That id value can then be matched up with a particular callback response handler for that specific message. It works both ways, from client to server or server to client.
You use this by calling initRequestResponseSocket(socket, requestHandler) once on a socket.io socket connection on each end. If you wish to receive requests, then you pass a requestHandler function which gets called each time there is a request. If you are only sending requests and receiving responses, then you don't have to pass in a requestHandler on that end of the connection.
To send a message and wait for a response, you do this:
socket.sendRequestResponse(data, function(err, response) {
  if (!err) {
    // response is here
  }
});
If you're receiving requests and sending back responses, then you do this:
initRequestResponseSocket(socket, function(data, respondCallback) {
  // process the data here
  // send response
  respondCallback(null, yourResponseData);
});
As for error handling, you can monitor for a loss of connection and you could build a timeout into this code so that if a response doesn't arrive in a certain amount of time, then you'd get an error back.
Here's an expanded version of the above code that implements a timeout for a response that does not come within some time period:
function initRequestResponseSocket(socket, requestHandler, timeout) {
  var cntr = 0;
  var openResponses = {};
  // send a request
  socket.sendRequestResponse = function(data, fn) {
    // put this data in a wrapper object that contains the request id
    // save the callback function for this id
    var id = cntr++;
    openResponses[id] = {fn: fn};
    socket.emit('requestMsg', {id: id, data: data});
    if (timeout) {
      openResponses[id].timer = setTimeout(function() {
        delete openResponses[id];
        if (fn) {
          fn("timeout");
        }
      }, timeout);
    }
  };
  // process a response message that comes back from a request
  socket.on('responseMsg', function(wrapper) {
    var id = wrapper.id, requestInfo;
    if (typeof id === "number" && typeof openResponses[id] === "object") {
      requestInfo = openResponses[id];
      delete openResponses[id];
      if (requestInfo) {
        if (requestInfo.timer) {
          clearTimeout(requestInfo.timer);
        }
        if (requestInfo.fn) {
          requestInfo.fn(null, wrapper.data);
        }
      }
    }
  });
  // process a requestMsg
  socket.on('requestMsg', function(wrapper) {
    if (requestHandler && wrapper.id) {
      requestHandler(wrapper.data, function(responseToSend) {
        socket.emit('responseMsg', {id: wrapper.id, data: responseToSend});
      });
    }
  });
}
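With the timeout variant, the first callback argument distinguishes a timeout from a normal response. For example, given a connected socket.io socket:
// 5000ms response timeout; pass null if this end doesn't handle requests
initRequestResponseSocket(socket, null, 5000);
socket.sendRequestResponse(data, function(err, response) {
  if (err === "timeout") {
    // no response arrived within 5 seconds
  } else {
    // response is here
  }
});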
There are a couple of interesting things in your question and your design; I prefer to ignore the implementation details and look at the high-level architecture.
You state that you are looking at a client that requests data and a server that responds with some stream of data. Two things to note here:
HTTP/1.1 has an option for sending streaming responses (chunked transfer encoding). If your use case only involves sending streaming responses, this might be a better fit for you. This does not hold when you e.g. want to push messages to the client that are not responses to some request (sometimes referred to as server-sent events).
WebSockets, contrary to HTTP, do not natively implement any request-response cycle. You can use the protocol that way by implementing your own mechanism, something that e.g. the subprotocol WAMP does.
As you have found out, implementing your own mechanism comes with its pitfalls, and that is where HTTP has the clear advantage. Given the requirements stated in your question, I would opt for HTTP streaming instead of implementing your own request/response mechanism.
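For illustration, a minimal sketch of such a streaming response using Node's built-in http module; Node applies chunked transfer encoding automatically when no Content-Length is set:
const http = require('http');
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  let n = 0;
  // Each write() is flushed to the client as its own chunk,
  // so the client can consume data as it arrives.
  const timer = setInterval(() => {
    res.write('tick ' + n++ + '\n');
    if (n === 5) {
      clearInterval(timer);
      res.end('done\n');
    }
  }, 1000);
}).listen(8080);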

NodeJS HTTP server stalled on V8 execution

EDITED
I have a nodeJS http server that is meant for receiving uploads from multiple clients and processing them separately.
My problem is that I've verified that the first request blocks the reception of any other request until the previous request is served.
This is the code I've tested:
var http = require('http');
http.globalAgent.maxSockets = 200;
var url = require('url');
var instance = require('./build/Release/ret');
http.createServer(function(req, res) {
  var path = url.parse(req.url).pathname;
  console.log("<req>" + path + "</req>");
  switch (path) {
    case ('/test'):
      var body = [];
      req.on('data', function (chunk) {
        body.push(chunk);
      });
      req.on('end', function () {
        body = Buffer.concat(body);
        console.log("---req received---");
        console.log(Date.now());
        console.log("------------------");
        instance.get(function(result) {
          postHTTP(result, res);
        });
      });
      break;
  }
}).listen(9999);
This is the native side (omitting obvious stuff) where getInfo is the exported method:
std::string ret2() {
  sleep(1);
  return string("{\"image\":\"1.JPG\"}");
}

Handle<Value> getInfo(const Arguments &args) {
  HandleScope scope;
  if (args.Length() == 0 || !args[0]->IsFunction())
    return ThrowException(Exception::Error(String::New("Error")));
  Persistent<Function> fn = Persistent<Function>::New(Handle<Function>::Cast(args[0]));
  Local<Value> objRet[1] = {
    String::New(ret2().c_str())
  };
  Handle<Value> ret = fn->Call(Context::GetCurrent()->Global(), 1, objRet);
  return scope.Close(Undefined());
}
I'm testing this with 3 parallel curl requests:
for i in {1..3}; do time curl --request POST --data-binary "@/home/user/Pictures/129762.jpg" http://192.160.0.1:9999/test & done
This is the output from the server:
<req>/test</req>
---req received---
1397569891165
------------------
<req>/test</req>
---req received---
1397569892175
------------------
<req>/test</req>
---req received---
1397569893181
------------------
These are the responses and timings from the client:
"1.JPG"
real 0m1.024s
user 0m0.004s
sys 0m0.009s
"1.JPG"
real 0m2.033s
user 0m0.000s
sys 0m0.012s
"1.JPG"
real 0m3.036s
user 0m0.013s
sys 0m0.001s
Apparently each request is received only after the previous one has been served. The sleep(1) simulates a synchronous operation that requires about 1s to complete and can't be changed.
The client receives the responses with an incremental delay of ~1s.
I would like to achieve a kind of parallelism, although I'm aware I'm in a single-threaded environment such as nodeJS. What I would like to achieve is receiving all 3 answers in ~1s.
Thanks in advance for your help.
This:
for (var i = 0; i < 1000000000; i++) var a = a + i;
is a pretty severe blocking operation: as soon as it starts executing, your whole server hangs until the for loop is done. I'm interested in why you are trying to do this. Perhaps you are trying to simulate a delayed response?
setTimeout(function () {
  send404(res);
}, 3000);
Right now you are switching a non-flowing stream into flowing mode by attaching a data event handler, and subsequently loading the whole stream into memory. You probably don't want to do this.
You can use the stream in non-flowing mode as illustrated below; this is useful if you want to send the data to some place that is only accessible after some other event.
However, using the stream in flowing mode is the fastest. If you want to write your own body parser I suppose you might want to use flowing mode; it depends on your use case.
var body = [];
req.on('readable', function () {
  var chunk;
  // read() pulls chunks on demand instead of having them pushed
  while (null !== (chunk = req.read())) {
    body.push(chunk);
  }
});
Flowing and non-flowing mode are also known as v1 and v2 streams respectively, as the older streams used in node only supported flowing mode.
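For example, if the destination is available up front, piping keeps the data flowing without buffering the whole body in memory. A minimal sketch, assuming a hypothetical file path for the upload:
const fs = require('fs');
const http = require('http');
http.createServer((req, res) => {
  // Stream the upload straight to disk instead of accumulating
  // chunks in memory; pipe() handles backpressure for us.
  req.pipe(fs.createWriteStream('/tmp/upload.jpg'))
    .on('finish', () => res.end('stored'));
}).listen(9999);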

Can't close server (nodeJS)

Why can't I close the server by requesting localhost:13777/close in the browser (it continues to accept new requests), while it closes gracefully on the 15000 ms timeout? The Node version is 0.10.18. I ran into this problem while trying to use the code example from the docs on exception handling with domains (it gave me a 'Not running' error every time I requested the error page a second time), and eventually reduced it to this code.
var server;
server = require("http").createServer(function(req, res) {
  if (req.url == "/close") {
    console.log("Closing server (no timeout)");
    setTimeout(function() {
      console.log("I'm the timeout");
    }, 5000);
    server.close(function() {
      console.log("Server closed (no timeout)");
    });
    res.end('closed');
  } else {
    res.end('ok');
  }
});
server.listen(13777, function() { console.log("Server listening"); });
setTimeout(function() {
  console.log("Closing server (timeout 15000)");
  server.close(function() { console.log("Server closed (timeout 15000)"); });
}, 15000);
The server is still waiting on requests from the client. The client is utilizing HTTP keep-alive.
I think you will find that while the existing client can make new requests (as the connection is already established), other clients won't be able to.
Node.js doesn't implement a complex service layer on top of http.Server. By calling server.close() you are instructing the server to no longer accept any "new" connections. When an HTTP Connection: keep-alive is issued, the server will keep the socket open until the client terminates it or the timeout is reached; additional clients will not be able to issue requests.
The timeout can be changed using server.setTimeout(): https://nodejs.org/api/http.html#http_server_settimeout_msecs_callback
Remember that if a client created a connection before close() was called, that connection can continue to be used.
It seems that a lot of people do not like this current functionality but this issue has been open for quite a while:
https://github.com/nodejs/node/issues/2642
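One pragmatic workaround (a sketch, not a complete solution) is to shrink the idle timeout when you want to shut down, so lingering keep-alive sockets are destroyed quickly and close() can finish; this reuses the server variable from the question:
// Idle sockets are destroyed on timeout by default (no 'timeout'
// listener attached), so no keep-alive connection lingers past ~1s.
server.setTimeout(1000);
server.close(function() {
  console.log("Server closed");
});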
As the other answers point out, connections may persist indefinitely and the call to server.close() will not truly terminate the server if any such connections exist.
We can write a simple wrapper function which attaches a destroy method to a given server that terminates all connections, and closes the server (thereby ensuring that the server ends nearly immediately!)
Given code like this:
let server = http.createServer((req, res) => {
  // ...
});
later(() => server.close()); // Fails to reliably close the server!
We can define destroyableServer and use the following:
let destroyableServer = server => {
  // Track all connections so that we can end them if we want to destroy `server`
  let sockets = new Set();
  server.on('connection', socket => {
    sockets.add(socket);
    socket.once('close', () => sockets.delete(socket)); // Stop tracking closed sockets
  });
  server.destroy = () => {
    for (let socket of sockets) socket.destroy();
    sockets.clear();
    return new Promise((rsv, rjc) => server.close(err => err ? rjc(err) : rsv()));
  };
  return server;
};
let server = destroyableServer(http.createServer((req, res) => {
  // ...
}));
later(() => server.destroy()); // Reliably closes the server almost immediately!
Note the overhead of entering every unique socket object into a Set.

Node.js WebSocket Broadcast

I'm using the ws library for WebSockets in Node.js and
I'm trying this example from the library examples:
var sys = require("sys"),
    ws = require("./ws");
ws.createServer(function (websocket) {
  websocket.addListener("connect", function (resource) {
    // emitted after handshake
    sys.debug("connect: " + resource);
    // server closes connection after 10s, will also get "close" event
    setTimeout(websocket.end, 10 * 1000);
  }).addListener("data", function (data) {
    // handle incoming data
    sys.debug(data);
    // send data to client
    websocket.write("Thanks!");
  }).addListener("close", function () {
    // emitted when server or client closes connection
    sys.debug("close");
  });
}).listen(8080);
All OK: it works. But if I run 3 clients, for instance, and send "Hello!" from one, the server replies "Thanks!" only to the client that sent the message, not to all of them.
How can I broadcast "Thanks!" to all connected clients when someone sends "Hello!"?
Thanks!
If you want to send out to all clients, you have to keep track of them. Here is a sample:
var sys = require("sys"),
    ws = require("./ws");
// # Keep track of all our clients
var clients = [];
ws.createServer(function (websocket) {
  websocket.addListener("connect", function (resource) {
    // emitted after handshake
    sys.debug("connect: " + resource);
    // # Add to our list of clients
    clients.push(websocket);
    // server closes connection after 10s, will also get "close" event
    // setTimeout(websocket.end, 10 * 1000);
  }).addListener("data", function (data) {
    // handle incoming data
    sys.debug(data);
    // send data to client
    // # Write out to all our clients
    for (var i = 0; i < clients.length; i++) {
      clients[i].write("Thanks!");
    }
  }).addListener("close", function () {
    // emitted when server or client closes connection
    sys.debug("close");
    for (var i = 0; i < clients.length; i++) {
      // # Remove from our connections list so we don't send
      // # to a dead socket
      if (clients[i] == websocket) {
        // splice(i, 1) removes just this client; splice(i) would drop all that follow
        clients.splice(i, 1);
        break;
      }
    }
  });
}).listen(8080);
I was able to get it to broadcast to all clients, but it's not heavily tested for all cases. The general concept should get you started though.
EDIT: By the way I'm not sure what the 10 second close is for so I've commented it out. It's rather useless if you're trying to broadcast to all clients since they'll just keep getting disconnected.
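For reference, recent versions of the ws package track clients for you, so a broadcast can iterate wss.clients. A minimal sketch, assuming ws 1.x or later:
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });
wss.on('connection', ws => {
  ws.on('message', () => {
    // reply to every currently connected client
    wss.clients.forEach(client => {
      if (client.readyState === WebSocket.OPEN) {
        client.send('Thanks!');
      }
    });
  });
});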
I would recommend you use socket.io. It has example web-chat functionality out of the box and also provides an abstraction layer over the socket technology on the client (WebSockets are supported by Safari, Chrome, Opera, and Firefox, but currently disabled in Firefox and Opera due to security vulnerabilities in the ws protocol).
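For instance, broadcasting with socket.io is a one-liner. A minimal sketch, assuming a recent socket.io running as a standalone server:
var io = require('socket.io')(8080);
io.on('connection', function (socket) {
  socket.on('message', function (data) {
    io.emit('message', 'Thanks!'); // broadcast to all connected clients
  });
});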
