Reusing one open TCP connection between TCP client and TCP server - javascript

There is a third-party service that exposes a TCP server, to which my Node server (acting as a TCP client) should establish a TCP connection using the Node tls module.
Besides being a TCP client, the Node server is also an HTTP server at the same time, acting as a kind of proxy between customers coming from a web browser and the third-party TCP server. So the common use case is that the browser sends an HTTP request to the Node server, the Node server communicates with the TCP server via TCP sockets to gather/construct the response, and then sends it back to the browser.
The current solution I had is that for each customer/each HTTP request coming from the web browser, a new, separate TCP connection is established with the TCP server. This solution proved to be bad: it wastes time doing the SSL handshake every time, and the TCP server does not allow more than 50 concurrent connections from a single client. So with this solution it is not possible to have more than 50 customers communicating with the Node server at once.
What would be the standard approach to doing this with the tls module in Node?
What I have in mind is a single TCP connection that is always active, established when the Node app starts, and, most importantly, reused for many HTTP requests coming from the web browser.
My first concern is how to construct different HTTP responses based on the data coming from the TCP server over the raw TCP socket. The good thing is that I can send a unique token in the headers to the TCP server when describing which action should be taken on the TCP server side:
socket.write(JSON.stringify({
  // token quoted as a string so the leading zero is preserved
  header: { uniqueToken: '032424242', type: 'create something bla bla' },
  data: { /* ... */ }
}));
Having a unique token, the TCP server side guarantees that the JSON, once the different chunks coming over the TCP socket are combined and parsed, will contain this uniqueToken, which means I am able to map that JSON back to the HTTP request and return the HTTP response.
My question is: does the TCP protocol in general guarantee that, in this case, successive chunks will belong to the same response that needs to be created when those chunks are combined and parsed (when '\n\n' occurs)?
In other words, is there any guarantee that chunks will not be mixed?
(I'm aware that a chunk containing '\n\n' can belong to two different responses, but I will be able to handle that.)
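To illustrate the mechanism described above, here is a minimal sketch (mine, not from the question) of how data arriving on one shared TLS socket could be buffered, split on the '\n\n' delimiter, and routed back to the waiting HTTP response by uniqueToken. The names tlsSocket, TCP_SERVER_HOST, TCP_SERVER_PORT and the pending map are assumptions for illustration only.

const tls = require('tls');

// Hypothetical connection details for the third-party TCP server.
const tlsSocket = tls.connect({ host: TCP_SERVER_HOST, port: TCP_SERVER_PORT });

const pending = new Map(); // uniqueToken -> http.ServerResponse, set when each HTTP request arrives
let buffered = '';

tlsSocket.on('data', (chunk) => {
  buffered += chunk.toString('utf8');

  // A chunk may hold the tail of one message and the head of the next,
  // so keep everything after the last '\n\n' in the buffer.
  const parts = buffered.split('\n\n');
  buffered = parts.pop();

  for (const part of parts) {
    if (!part) continue; // skip empty segments
    const message = JSON.parse(part);
    const response = pending.get(message.header.uniqueToken);
    if (response) {
      pending.delete(message.header.uniqueToken);
      response.setHeader('Content-Type', 'application/json');
      response.end(JSON.stringify(message.data));
    }
  }
});

Each HTTP handler would then do pending.set(uniqueToken, response) before writing its command to tlsSocket.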
If that is not possible, then I don't see a way in which the first solution (one connection per response that needs to be created) can be improved. The only way would be to introduce some connection pooling concept, which, as far as I know, the tls module does not provide out of the box in any way.
EDIT: based on comments below, a short version of the question:
Let's say the TCP server needs 2 seconds to send all chunks when it receives the command create something bla bla.
If the TCP client sends the command create something bla bla and 1 millisecond later sends a second create something bla bla, is there any chance that the TCP server will write a chunk related to the second command before it has written all chunks related to the first command?

... is there any chance that the TCP server will write a chunk related to the second command before it has written all chunks related to the first command?
If I understand your question correctly, you are asking whether a write("AB") followed by a write("CD") on the same socket at the server side could result in the client reading ACDB from the server.
This is not the case if both writes are successful and have actually written all the data to the underlying socket buffer. But, since TCP is a stream protocol with no implicit message boundaries, the read on the client side could be something like ABCD, or AB followed by CD, or A followed by BC followed by D, etc. Thus, to distinguish between the messages from the server, you have to add some application-level message detection, like an end-of-message marker, a size prefix or similar.
Also, I restricted the previous statement to both writes being successful and having actually written all the data to the underlying socket buffer. This is not necessarily the case. For example, you might use functions which do a buffered write, like fwrite (in C) instead of write. In this case you usually don't control which parts of the buffer are written at which time, so it might be that fwrite("AB") results in "A" being written to the socket while "B" is kept in the buffer. If you then have another buffered writer which uses the same underlying file descriptor (i.e. socket) but not the same buffer, you could actually end up with something like ACDB sent to the underlying socket and thus to the client.
This can even happen if an unbuffered write was not fully successful, i.e. if a write("AB") has only written "A" and signals through the return value that "B" needs to be written later. If you then have a multi-threaded application with insufficient synchronization between threads, you could end up with the first thread sending "A" to the socket in an incomplete attempt to write "AB", followed by another thread sending "CD" successfully, and then the first thread completing the send by writing "B". In this case you also end up with "ACDB" on the socket.
In summary: the TCP layer guarantees that the send order is the same as the receive order, but user space (i.e. the application) needs to make sure it really sends the data in the right order to the socket. Also, TCP has no message boundaries, so the distinction between messages inside the TCP stream needs to be implemented inside the application using message delimiters, a length prefix, a fixed message size, or similar.
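As an illustration of the length-prefix alternative mentioned above (a sketch, not part of the original answer; the function names are made up), each message could be sent as a 4-byte big-endian length followed by the JSON payload, and decoded on the receiving side by buffering until a full frame has arrived:

// Encode: 4-byte big-endian length prefix followed by the JSON payload.
function encodeFrame(obj) {
  const payload = Buffer.from(JSON.stringify(obj), 'utf8');
  const frame = Buffer.alloc(4 + payload.length);
  frame.writeUInt32BE(payload.length, 0);
  payload.copy(frame, 4);
  return frame;
}

// Decode: keep an internal buffer and emit only complete messages.
function makeFrameDecoder(onMessage) {
  let buffered = Buffer.alloc(0);
  return (chunk) => {
    buffered = Buffer.concat([buffered, chunk]);
    while (buffered.length >= 4) {
      const length = buffered.readUInt32BE(0);
      if (buffered.length < 4 + length) break; // wait for the rest of the frame
      const payload = buffered.subarray(4, 4 + length);
      buffered = buffered.subarray(4 + length);
      onMessage(JSON.parse(payload.toString('utf8')));
    }
  };
}

The client would then attach the decoder with something like socket.on('data', makeFrameDecoder((msg) => { /* route by msg.header.uniqueToken */ })), and the sender would write encodeFrame(message) for each message.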

Related

Socket io Client in ReactJS is getting multiple emits from the server

I am building a chat room with some extra features, and I have a socket io server, as well as a socket io client in ReactJS.
I have it so if someone pins a message or changes the settings of the chat room, it emits the changes to the server, the server saves those changes, and then emits them back out to everyone so everyone is synced.
The settings and pinned messages transfer and are communicated successfully. I have console.logs at almost every step of the transfer: the client logs a single request, the server logs that it emits a single time, but the client logs that it received an emit multiple times, sometimes a couple of times like 2-6 requests, sometimes around 60. I'm trying to really control the efficiency, but I have no idea what is causing this.
I'm not sure it matters, but another thing of note is that the client also connects to a native WebSocket server to get messages from another source.
What the client generally looks like is:
useEffect(() => {
  socket.emit('changeSetting', setting);
}, [setting]);

socket.on('recieveSetting', (arg) => {
  if (arg != setting) {
    setSetting(arg);
  }
});
the server then looks like this:
socket.on('changeSetting', (arg) => {
  storedSetting = arg;
  socket.emit('recieveSetting', storedSetting);
});
That's the general structure, so I don't think it's an issue with the code itself; more like whether ReactJS or connecting to the other WebSocket causes it to get multiple emits.
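One common cause of this kind of duplication in React (an assumption about this code, not a confirmed diagnosis) is that socket.on runs on every render, so the same event accumulates several listeners over time. A hedged sketch of registering the listener inside an effect and removing it on cleanup:

import { useEffect } from 'react';

// inside the component:
useEffect(() => {
  const handleSetting = (arg) => {
    if (arg !== setting) {
      setSetting(arg);
    }
  };

  // Attach exactly one listener per mount/update and remove it on cleanup,
  // so re-renders do not leave duplicate handlers behind.
  socket.on('recieveSetting', handleSetting);
  return () => {
    socket.off('recieveSetting', handleSetting);
  };
}, [setting]);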

Is it okay to create a net.Socket() 'data' event handler once for each Node.js http request/response?

I've got a Node.js web server communicating with a locally running Python TCP socket server (communicating via their respective socket modules net.Socket, socket).
Clients make HTTP POST requests from the browser, which get handled by a Node http.createServer function; some of them are sent to the Python server for heavy computations, the results of which are then sent back to Node and on to the browser for rendering.
The Python server is necessary instead of a Node child process as there are some large (immutable) objects required for the Python computations that take a while to initialise and are then shared across threads. It would be infeasible to create and destroy these objects for every browser request.
So my question is, using the Node callback paradigm, how do I capture the response object for each POST request in the net.Socket data event handler/s?
Note 0: Each request has a unique id that is sent to the Python server and returned.
This currently works* inside my http.createServer callback:
http.createServer((request, response) => {
  // route and parse incoming requests etc.

  // send POST data to Python
  python_socket.write(parsed_request_post_data);

  // Python works away diligently then emits a data event handled below
  python_socket.once('data', (data_from_python) => {
    // error and exception handling
    response.setHeader('Content-Type', 'application/json');
    response.end(data_from_python);
  });
}).listen(HTTPport);
*However, if I bomb the server with multiple requests, sometimes I get the same data returned in each response (even though Python handles each request's data separately). I worry that I am assigning multiple once('data') callbacks in the same Node event loop tick and only one of them is persisting, and that is the one repeatedly sending the Python data back to the browser? Though if this were the case, the response object would also be repeated and I would get an error for trying to end an already closed response, right? But each response seems to end fine.
Apologies for the rather long and vague question. I'm still learning and would really appreciate any advice or references I can study to help me understand what is going on. Also very open to trying different approaches (except changing web server - see note 2 below).
Note 1: I tried declaring a global data handler (note the on instead of once) for the net.Socket server as follows, but couldn't figure out how to forward the returned data to each HTTP response:
python_socket.on('data', (data_from_python) => {
  // error and exception handling
  // how do I get data_from_python out to each http response
  // then close it in a non-blocking way?
});
Note 2: I'm not allowed to use a Python web server as the business wants to reuse this design in future to plug and play other services (R, Julia, C++, ...) into Node web servers.
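Building on Note 0 (each request carries a unique id) and the global handler from Note 1, one possible pattern (a sketch under those assumptions; request_id, parseMessages and message.payload are hypothetical names, and the framing of messages from Python is up to you) is to keep a map of pending responses keyed by request id and register a single persistent 'data' handler:

const pendingResponses = new Map(); // request id -> http.ServerResponse

http.createServer((request, response) => {
  // route/parse the request and obtain its unique id (see Note 0)
  pendingResponses.set(request_id, response);
  python_socket.write(parsed_request_post_data);
}).listen(HTTPport);

// One handler for the lifetime of the process, registered once at startup.
python_socket.on('data', (data_from_python) => {
  // parseMessages is a hypothetical helper that reassembles complete,
  // id-tagged messages from the raw stream (e.g. newline-delimited JSON).
  for (const message of parseMessages(data_from_python)) {
    const response = pendingResponses.get(message.id);
    if (response) {
      pendingResponses.delete(message.id);
      response.setHeader('Content-Type', 'application/json');
      response.end(JSON.stringify(message.payload));
    }
  }
});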

How to capture SSE Eventsource acknowledgement

I'm using this code to send an SSE message to the browser client.
https://nodejs.org/api/http.html#http_response_write_chunk_encoding_callback
Node server
response.writeHead(200, { 'Content-Type': 'text/event-stream', 'Access-Control-Allow-Origin': '*' });
response.write(message);
response.end();
And for the client I'm using this javascript:
var source = new EventSource('myurl');
source.addEventListener('add', addHandler, false);
source.addEventListener('remove', removeHandler, false);
Everything is working fine, but how does the server know for sure that the client actually received it? I guess SSE is using TCP; is there any way to receive the acknowledgement?
SSEs are a one-to-many push protocol, so there is no acknowledgement. You could send an AJAX request back on receipt, but there is nothing in the pattern that provides this functionality.
SSE is a one-way communication protocol for pushing data from server to client.
There is no way for a client to ack event reception.
If acknowledgement is a must-have, you probably need two-way communication like WebSockets.
I know this is many years old, but none of the answers is correct. 1) TCP does indeed ACK the push stream - it's standard HTTP! (though whether your code is at a low enough level to detect it is a different story)
2) It's not too difficult to develop your own ACK system (I've done it! - to free up resources when the last client disappears), and yes, it tends to go against the "spirit" of the protocol and duplicates to a degree the WebSocket paradigm... but it is wrong to say it's impossible. Send a unique per-client "token" in the first message, which the browser saves; the browser then starts a JS "ping" timer which AJAXes a "still alive" message. In your server code, handle the AJAX and restart the client-still-alive timer. If that expires, the client has gone: clean up / free resources etc.
Yes, it's a bit "lumpy", but it works and it's not rocket-science difficulty.
Just my (very late) 2c worth.
The attached image was from me diagnosing a case where the ACK was missing, but on every other one apart from the indicated one you can see the ACK.
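A minimal sketch of the per-client token / keep-alive idea described in that answer (mine, not the original author's code; the token scheme, routes, interval and timeout values are assumptions):

// Server side (Node): track connected SSE clients and expire the ones
// whose "still alive" pings stop arriving.
const clients = new Map(); // clientId -> { response, lastSeen }

function handleSseConnect(request, response) {
  const clientId = Math.random().toString(36).slice(2); // hypothetical token scheme
  response.writeHead(200, { 'Content-Type': 'text/event-stream', 'Access-Control-Allow-Origin': '*' });
  // The first message carries the token the browser should echo back in its pings.
  response.write('event: hello\ndata: ' + clientId + '\n\n');
  clients.set(clientId, { response: response, lastSeen: Date.now() });
}

// Called from whatever route receives the browser's periodic AJAX ping.
function handlePing(clientId) {
  const client = clients.get(clientId);
  if (client) client.lastSeen = Date.now();
}

// Periodically drop clients that have stopped pinging.
setInterval(() => {
  const cutoff = Date.now() - 30000; // 30 s without a ping = client gone (arbitrary)
  for (const [clientId, client] of clients) {
    if (client.lastSeen < cutoff) {
      client.response.end();
      clients.delete(clientId);
    }
  }
}, 10000);

On the browser side, the EventSource listener for the 'hello' event would save clientId and start a timer that periodically sends it back to the server via AJAX.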

Flooding WebSocket

I am new to WebSocket and I implemented WebSocket on a web application whose server side is written in Java and client side in JavaScript. The server sends notifications to the client via WebSocket.
I wonder what would happen if the client isn't fast enough to handle incoming messages as fast as the server is sending them.
For example, it is possible that the server will be sending about 200 text messages per second while the client is slow and handles only 100 messages per second.
I believe the browser queues incoming messages before they are processed, but I'm not sure. I also want to know how to check this buffer size and its limit, and what would happen if the buffer limit is reached.
Any idea how I can simulate this kind of situation? I tried:
webSocket.onmessage = function (message) {
  var bool = true;
  var datenexexec = Date.now() + 1000;
  while (bool) {
    if (Date.now() > datenexexec) {
      bool = false;
    }
  }
};
but this only causes the browser to hang and later crash.
Thanks for the help.
If data is sent more rapidly than the client can read it, here's what will eventually happen:
1. The client receive buffer will fill up.
2. TCP flow control will kick in and the server will be told to stop sending more packets on this socket.
3. The server will then buffer outgoing packets until the flow control restrictions are removed.
4. Eventually the server-side buffer limit will be hit and the underlying TCP will reject the socket write.
5. This will return an error from the TCP send.
Depending upon what server-side library you are using for webSocket, you should get an error from a send operation at some point.
TCP is a reliable protocol so it will just buffer and transmit later until the buffer is full. It shouldn't lose packets by itself (unless the connection drops), but when buffers are full, it will give you an error that it can't send any more because the buffer is full.
As for the client-side code you tried, you can't busy-wait in JavaScript for very long. That kills the event loop and eventually brings down the script engine.
The only way for you to simulate this is to try to actually send more packets than the client can process. You can code a "slow" client that takes maybe 250ms to process each packet in a short busy/wait loop and a "fast" server that sends a flood of packets and you should be able to simulate it.
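A minimal sketch of such a "slow" client (an illustration, not from the original answer) that defers processing with a queue and a timer instead of blocking the event loop the way the busy-wait above does:

// Browser side: simulate a client that can only handle about 4 messages per second.
const queue = [];
let draining = false;

webSocket.onmessage = function (message) {
  queue.push(message.data);
  if (!draining) {
    draining = true;
    drainOne();
  }
};

function drainOne() {
  if (queue.length === 0) {
    draining = false;
    return;
  }
  const data = queue.shift();
  console.log('processed', data); // stand-in for the real (slow) handling
  setTimeout(drainOne, 250);      // ~250 ms per message, as suggested above
}

If the server floods faster than the queue drains, queue.length keeps growing, which makes the backlog easy to observe.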

Private messaging through node.js

I'm making a multiplayer (2 player) browser game in JavaScript. Every move a player makes will be sent to a server and validated before being transmitted to the opponent. Since WebSockets aren't ready for prime time yet, I'm looking at long polling as a method of transmitting the data, and node.js looks quite interesting! I've gone through some example code (chat examples, standard long polling examples and suchlike) but all the examples I've seen seem to broadcast everything to every client, something I'm hoping to avoid. For general server messages this is fine, but I want two players to be able to square off in a lobby and go into "private messaging" mode.
So I'm wondering if there's a way to implement private messaging between two clients using nodejs as a validating bridge? Something like this:
ClientA->nodejs: REQUEST
nodejs: VALIDATE REQUEST
nodejs->ClientA: VALID
nodejs->ClientB: VALID REQUEST FROM ClientA
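For illustration, a minimal sketch of that validate-and-relay flow on the Node side (a sketch only; validate() and the message shapes are hypothetical, and the answer below shows one way to track which two sockets belong to the same lobby):

// clientA and clientB are the two sockets of one lobby; validate() is a
// hypothetical game-rule check standing in for the real validation logic.
function handleRequest(clientA, clientB, request) {
  if (!validate(request)) {
    clientA.write(JSON.stringify({ type: 'invalid', request: request }));
    return;
  }
  // Tell the sender the move was accepted...
  clientA.write(JSON.stringify({ type: 'valid', request: request }));
  // ...and forward it only to the opponent, not to every connected client.
  clientB.write(JSON.stringify({ type: 'opponent-move', from: 'ClientA', request: request }));
}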
You need some way to keep track of which clients are in a lobby together. You can do this with a simple global array like so process.lobby[1] = Array(ClientASocket, ClientBSocket) or something similar (possibly with some additional data, like nicknames and such), where the ClientXSocket is the socket object of each client that connects.
Now you can hook the lobby id (1 in this case) onto each client's socket object. A sort of session variable (without the hassle of session ids) if you will.
// i just made a hashtable to put all the data in,
// so that we don't clutter up the socket object too much.
socket.sessionData['lobby'] = 1;
What this allows you to do also, is add an event hook in the socket object, so that when the client disconnects, the socket can remove itself from the lobby array immediately, and message the remaining clients that this client has disconnected.
// see link in paragraph above for removeByValue
socket.on('close', function (err) {
  process.lobby[socket.sessionData['lobby']].removeByValue(socket);
  // then notify lobby that this client has disconnected.
});
I've used socket in place of the net.Stream or request.connection or whatever the thing is.
Remember that in HTTP, if you don't have keep-alive connections, the TCP connection will close, which of course makes the client unable to remain in a lobby. If you're using a plain TCP connection without HTTP on top (say within a Flash application or WebSockets), then you should be able to keep it open without having to worry about keep-alive. There are other ways to solve this problem than what I've shown here, but I hope I got you started at least. The key is keeping a persistent object for each client.
Disclaimer: I'm not a Node.js expert (I haven't even gotten around to installing it yet) but I have been reading up on it and I'm very familiar with browser js, so I'm hoping this is helpful somehow.
