Can't close server (nodeJS) - javascript

Why can't I close the server by requesting localhost:13777/close in the browser (it continues to accept new requests), yet it closes gracefully on the 15000 ms timeout? The Node version is 0.10.18. I ran into this problem while trying to use the code example from the docs on exception handling with domains (it gave me a 'Not running' error every time I requested the error page a second time) and eventually arrived at this code.
var server
server = require("http").createServer(function(req,res){
  if(req.url == "/close")
  {
    console.log("Closing server (no timeout)")
    setTimeout(function(){
      console.log("I'm the timeout")
    }, 5000);
    server.close(function(){
      console.log("Server closed (no timeout)")
    })
    res.end('closed');
  }
  else
  {
    res.end('ok');
  }
});
server.listen(13777,function(){console.log("Server listening")});
setTimeout(function(){
  console.log("Closing server (timeout 15000)")
  server.close(function(){console.log("Server closed (timeout 15000)")})
}, 15000);

The server is still waiting on requests from the client. The client is utilizing HTTP keep-alive.
I think you will find that while the existing client can make new requests (as the connection is already established), other clients won't be able to.
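As a hedged illustration of one workaround (not part of the original answers): if the /close handler also tells the requesting client not to reuse the connection, that keep-alive socket is released once the response finishes, so close() can complete.
// Sketch only: send "Connection: close" so this client's keep-alive socket
// is torn down after the response, letting server.close() finish.
if (req.url == "/close") {
  res.setHeader('Connection', 'close');
  server.close(function () {
    console.log("Server closed");
  });
  res.end('closed');
}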

Node.js doesn't implement a complex service layer on top of http.Server. By calling server.close() you are instructing the server to stop accepting any new connections. When an HTTP Connection: keep-alive is issued, the server keeps the socket open until the client terminates it or the timeout is reached, so additional clients will not be able to issue requests.
The timeout can be changed using server.setTimeout(): https://nodejs.org/api/http.html#http_server_settimeout_msecs_callback
Remember that if a client created a connection before the close call, that connection can continue to be used.
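As a rough sketch (not in the original answer, value chosen arbitrarily), lowering the server's idle timeout shortens how long those keep-alive sockets can linger after close():
// Sketch only: drop idle keep-alive sockets after 5 seconds instead of the default
server.setTimeout(5000, function (socket) {
  socket.destroy();
});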
It seems that a lot of people do not like this current functionality but this issue has been open for quite a while:
https://github.com/nodejs/node/issues/2642

As the other answers point out, connections may persist indefinitely and the call to server.close() will not truly terminate the server if any such connections exist.
We can write a simple wrapper function which attaches a destroy method to a given server that terminates all connections, and closes the server (thereby ensuring that the server ends nearly immediately!)
Given code like this:
let server = http.createServer((req, res) => {
  // ...
});
later(() => server.close()); // Fails to reliably close the server!
We can define destroyableServer and use the following:
let destroyableServer = server => {
  // Track all connections so that we can end them if we want to destroy `server`
  let sockets = new Set();
  server.on('connection', socket => {
    sockets.add(socket);
    socket.once('close', () => sockets.delete(socket)); // Stop tracking closed sockets
  });
  server.destroy = () => {
    for (let socket of sockets) socket.destroy();
    sockets.clear();
    return new Promise((rsv, rjc) => server.close(err => err ? rjc(err) : rsv()));
  };
  return server;
};
let server = destroyableServer(http.createServer((req, res) => {
  // ...
}));
later(() => server.destroy()); // Reliably closes the server almost immediately!
Note the overhead of entering every unique socket object into a Set.

Related

WebSocket needs browser refresh to update list

My project works as intended except that I have to refresh the browser every time my keyword list sends something to the browser for it to display. I assume it's my inexperience with Express.js and that I'm not creating the route correctly for my WebSocket? Any help would be appreciated.
Browser
let socket = new WebSocket("ws://localhost:3000");

socket.addEventListener('open', function (event) {
  console.log('Connected to WS server')
  socket.send('Hello Server!');
});

socket.addEventListener('message', function (e) {
  const keywordsList = JSON.parse(e.data);
  console.log("Received: '" + e.data + "'");
  document.getElementById("keywordsList").innerHTML = e.data;
});

socket.onclose = function(code, reason) {
  console.log(code, reason, 'disconnected');
}

socket.onerror = error => {
  console.error('failed to connect', error);
};
Server
const ws = require('ws');
const express = require('express');
const keywordsList = require('./app');
const app = express();
const port = 3000;

const wsServer = new ws.Server({ noServer: true });
wsServer.on('connection', function connection(socket) {
  socket.send(JSON.stringify(keywordsList));
  socket.on('message', message => console.log(message));
});

// `server` is a vanilla Node.js HTTP server, so use
// the same ws upgrade process described here:
// https://www.npmjs.com/package/ws#multiple-servers-sharing-a-single-https-server
const server = app.listen(3000);
server.on('upgrade', (request, socket, head) => {
  wsServer.handleUpgrade(request, socket, head, socket => {
    wsServer.emit('connection', socket, request);
  });
});
In answer to "How to send and/or stream array data that is being continually updated to a client", as arrived at in the comments.
A possible solution using WebSockets may be to:
Create an interface on the server for array updates (if you haven't already) that isolates the array object from arbitrary outside modification and supports a callback when updates are made.
Determine the latency allowed for multiple updates to occur without being pushed. The latency should allow reasonable time for previous network traffic to complete without overloading bandwidth unnecessarily.
When an array update occurs, start a timer for the latency period if one is not already running.
On timer expiry, JSON.stringify the array (to take a snapshot), clear the timer running status, and message the client with the JSON text.
A slightly more complicated method to avoid delaying all push operations would be to immediately push single updates unless they occur within a guard period after the most recent push operation. A timer could then push modifications made during the guard period at the end of the guard period. A sketch of the simpler timer approach is given below.
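A minimal sketch of the simpler timer approach, assuming the keywordsList array and the wsServer instance from the question; the latency value is an arbitrary placeholder:
const PUSH_LATENCY_MS = 250; // assumed latency window

let pushTimer = null;

// Call this from the server-side update interface whenever the array is modified
function onKeywordsUpdated() {
  if (pushTimer) return; // a push is already scheduled
  pushTimer = setTimeout(() => {
    pushTimer = null;
    const snapshot = JSON.stringify(keywordsList); // snapshot of the current state
    wsServer.clients.forEach(client => {
      if (client.readyState === 1 /* OPEN */) client.send(snapshot);
    });
  }, PUSH_LATENCY_MS);
}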
Broadcasting
The WebSockets API does not directly support broadcasting the same data to multiple clients. Refer to Server Broadcast in ws documentation for an example of sending data to all connected clients using a forEach loop.
Client side listener
In the client-side message listener
document.getElementById("keywordsList").innerHTML = e.data;
would be better as
document.getElementById("keywordsList").textContent = keywordList;
to both present keywords after decoding from JSON and prevent them ever being treated as HTML.
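Putting that together, a possible corrected client-side listener (assuming the server sends a JSON array of keyword strings) might look like this:
socket.addEventListener('message', function (e) {
  const keywordsList = JSON.parse(e.data);
  // textContent prevents the keywords from being interpreted as HTML
  document.getElementById("keywordsList").textContent = keywordsList.join(', ');
});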
So I finally figured out what I wanted to accomplish. It sounds straightforward now that I've learned enough and thought about how to structure the back end of my project.
If you have two WebSockets running and one needs information from the other, you cannot run them side by side; you need to have one encapsulate the other and call that WebSocket INSIDE the other one. This can easily cause problems down the road for other projects, since one WebSocket now won't fire until the other has run, but for my project it makes perfect sense: it runs locally and needs all the parts working 100 percent in order to be effective. It took me a long time to understand how to structure the code that way.
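A hedged sketch of the nesting described above, using the ws package; the URLs and names are placeholders, not from the original post. The dependent socket is only opened and used inside the other socket's handlers, so it never fires before the first connection exists.
const WebSocket = require('ws');
const upstream = new WebSocket('ws://localhost:4000');   // source of the data

upstream.on('open', () => {
  // Only open the dependent socket once the first one is ready
  const downstream = new WebSocket('ws://localhost:3000');
  downstream.on('open', () => {
    // Forward upstream messages; downstream only runs inside upstream's handlers
    upstream.on('message', data => downstream.send(data));
  });
});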

Weird socket.io behavior when Node server is down and then restarted

I implemented a simple chat for my website where users can talk to each other, using Express.js and Socket.io. I added simple protection against a DoS attack that could be caused by one person spamming the chat window, like this:
if (RedisClient.get(user).lastMessageDate > currentTime - 1 second) {
  return error("Only one message per second is allowed")
} else {
  io.emit('message', ...)
  RedisClient.set(user).lastMessageDate = new Date()
}
I am testing this with this code:
setInterval(function() {
  $('input').val('message ' + Math.random());
  $('form').submit();
}, 1);
It works correctly when Node server is always up.
However, things get extremely weird if I turn off the Node server, run the code above, and then start the Node server again a few seconds later. Suddenly hundreds of messages are inserted into the window and the browser crashes. I assume this is because while the Node server is down, socket.io saves all the client emits, and once it detects the server is online again it pushes all of those messages at once, asynchronously.
How can I protect against this? And what is exactly happening here?
edit: If I use Node in-memory storage instead of Redis, this doesn't happen. I am guessing it's because the server gets flooded with READs and many READs happen before RedisClient.set(user).lastMessageDate = new Date() finishes. I guess what I need is an atomic READ/SET? I am using this module for connecting to Redis from Node: https://github.com/NodeRedis/node_redis
You are correct that this happens due to the queueing up of messages on the client and the resulting flood on the server.
When the server comes back, it receives all of those messages at once, and their handling is not synchronized: each socket.on("message", ...) handler is executed separately, with no relation to the others.
Even though your Redis server only has a latency of a few ms, the messages are all received practically at once, so every GET completes before the first SET finishes and everything always goes into the else branch.
You have the following few options.
Use a rate-limiter library; these are easy to configure and have multiple configuration options.
If you want to do everything yourself, use a queue on the server. This will take up memory on your server, but you'll achieve what you want: instead of writing every message to the server immediately, it is put into a queue. A new queue is created for every new client, and the queue is deleted after its last item has been processed.
(update) Use MULTI + WATCH to create a lock so that all other commands except the current one will fail.
The pseudo-code will look something like this:
let queue = {};

let queueHandler = user => {
  while (queue[user].length > 0) {
    let messageObject = queue[user].shift();
    // your redis push logic here
  }
  delete queue[user];
};

let pushToQueue = messageObject => {
  let user = messageObject.user;
  if (!queue[user]) {
    queue[user] = [messageObject];
  } else {
    queue[user].push(messageObject);
  }
  queueHandler(user);
};

socket.on("message", message => pushToQueue(message));
UPDATE
Redis supports locking with WATCH, which is used with MULTI. Using this, you can lock a key, and any other commands that try to access that key in that time will fail.
From the redis client README:
Using multi you can make sure your modifications run as a transaction, but you can't be sure you got there first. What if another client modified a key while you were working with its data?
To solve this, Redis supports the WATCH command, which is meant to be used with MULTI:
var redis = require("redis"),
client = redis.createClient({ ... });
client.watch("foo", function( err ){
if(err) throw err;
client.get("foo", function(err, result) {
if(err) throw err;
// Process result
// Heavy and time consuming operation here
client.multi()
.set("foo", "some heavy computation")
.exec(function(err, results) {
/**
* If err is null, it means Redis successfully attempted
* the operation.
*/
if(err) throw err;
/**
* If results === null, it means that a concurrent client
* changed the key while we were processing it and thus
* the execution of the MULTI command was not performed.
*
* NOTICE: Failing an execution of MULTI is not considered
* an error. So you will have err === null and results === null
*/
});
}); });
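As an alternative sketch (not part of the original answer), the one-message-per-second check itself can be made atomic with a single SET using the NX and PX options, so there is no separate GET/SET race. The key name and the error event name are assumptions; user, message, io and socket are taken from the question's pseudo-code.
client.set('ratelimit:' + user, '1', 'NX', 'PX', 1000, function (err, reply) {
  if (err) throw err;
  if (reply === null) {
    // The key already exists: a message was accepted less than a second ago
    return socket.emit('chat error', 'Only one message per second is allowed');
  }
  io.emit('message', message); // allowed through
});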
Perhaps you could extend your client-side code to prevent data from being sent if the socket is disconnected? That way, you prevent the library from queuing messages while the socket is disconnected (i.e. the server is offline).
This could be achieved by checking that socket.connected is true:
// Only allow data to be sent to server when socket is connected
function sendToServer(socket, message, data) {
  if (socket.connected) {
    socket.send(message, data)
  }
}
More information on this can be found in the docs: https://socket.io/docs/client-api/#socket-connected
This approach will prevent the built-in queuing behaviour in all scenarios where a socket is disconnected, which may not be desirable; however, it should protect against the problem you are noting in your question.
Update
Alternatively, you could use a custom middleware on the server to achieve throttling behaviour via socket.io's server API:
/*
  Server side code
*/
io.on("connection", function (socket) {
  // Add custom throttle middleware to the socket when connected
  socket.use(function (packet, next) {
    var currentTime = Date.now();
    // If socket has previous timestamp, check that enough time has
    // lapsed since last message processed
    if (socket.lastMessageTimestamp) {
      var deltaTime = currentTime - socket.lastMessageTimestamp;
      // If not enough time has lapsed, throw an error back to the
      // client
      if (deltaTime < 1000) {
        next(new Error("Only one message per second is allowed"))
        return
      }
    }
    // Update the timestamp on the socket, and allow this message to
    // be processed
    socket.lastMessageTimestamp = currentTime
    next()
  });
});
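For completeness, a brief client-side sketch; this assumes socket.io 2.x behaviour, where an error passed to next() in a socket middleware is delivered to the client as an 'error' event:
/*
  Client side sketch (assumed socket.io 2.x)
*/
socket.on("error", function (err) {
  // err is the message passed to next(new Error(...)) on the server
  console.warn("Message rejected by server:", err);
});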

How to catch and deal with "WebSocket is already in CLOSING or CLOSED state" in Node

I've been searching for a solution to the issue "WebSocket is already in CLOSING or CLOSED state" and found this:
Meteor WebSocket is already in CLOSING or CLOSED state error
WebSocket is already in CLOSING or CLOSED state.
Answer #1 is strictly related to Meteor and #2 has no answers... I have a Node server app with a socket:
const WebSocket = require('ws');
const wss = new WebSocket.Server({ server });

wss.on('connection', function connection(socket) {
  socket.on('message', function incoming(data) {
    console.log('Incoming data ', data);
  });
});
And clients connect like this:
const socket = new WebSocket('ws://localhost:3090'); // Create WebSocket connection

// Connection opened
socket.addEventListener('open', function(event) {
  console.log("Connected to server");
});

// Listen to messages
socket.addEventListener('message', function(event) {
  console.log('Message from server ', event);
});
However, after a few minutes, clients randomly disconnect and the call
socket.send(JSON.stringify(data));
will then throw "WebSocket is already in CLOSING or CLOSED state.".
I am looking for a way to detect and deal with these disconnections and immediately attempt to connect again.
What is the most correct and efficient way to do this?
The easiest way is to check if the socket is open or not before sending.
For example - write a simple function:
function isOpen(ws) { return ws.readyState === ws.OPEN }
Then - before any socket.send make sure it is open:
if (!isOpen(socket)) return;
socket.send(JSON.stringify(data));
You can also rewrite the send function like in this answer, but my way lets you log these situations.
And, for your second request
immediately attempt to connect again
There is no way you can do it from the server.
The client code should monitor the WebSocket state and apply a reconnect method based on your needs.
For example, check this VueJS library that does it nicely. Look at the "Enable ws reconnect automatically" section.
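A minimal client-side reconnect sketch (plain browser WebSocket API; the retry delay is an arbitrary assumption):
function connect() {
  const socket = new WebSocket('ws://localhost:3090');
  socket.addEventListener('message', function (event) {
    console.log('Message from server ', event.data);
  });
  socket.addEventListener('close', function () {
    console.log('Disconnected, retrying in 2 seconds');
    setTimeout(connect, 2000); // attempt to reconnect
  });
  return socket;
}

let socket = connect();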
Well, my answer is simple: send a message to the WebSocket at a regular interval so the connection is seen as still in use; that is better than having to open another connection. Start your project, inspect the WebSocket in the browser's developer tools, and note how long the connection stays "pending" before it closes. Then choose an interval shorter than that and send a keep-alive message with setInterval, for example:
const conn = new WebSocket("wss://YourLocationWebSocket.com");
setInterval(function () {
  var object = { "message": "ARandonMessage" };
  object = JSON.stringify(object);
  conn.send(object);
}, 40000); /* the interval; I suggest 40 seconds */
Might be late to the party, but I recently encountered this problem and figured out that the reason is that the readyState property of the WebSocket connection was 3 (CLOSED) at the time the message was sent (https://developer.mozilla.org/en-US/docs/Web/API/WebSocket/readyState).
I resolved this by checking the readyState property: if it equals 3, close and reinitialize the WebSocket connection, then loop with a delay until readyState equals 1, to ensure the new connection is open before sending.
if (this.ws.readyState === 3) {
  this.ws.close();
  this.ws = new WebSocket(`wss://...`);
  // wait until new connection is open
  while (this.ws.readyState !== 1) {
    await new Promise(r => setTimeout(r, 250));
  }
}
this.ws.send(...)

How to terminate a WebSocket connection?

Is it possible to terminate a websocket connection from server without closing the entire server? If it is then, how can I achieve it?
Note: I'm using NodeJS as back-end and 'ws' websocket module.
Because of an omission in the documentation regarding ws.close() and ws.terminate(), I think the solutions in the other answers won't close the sockets gracefully in some cases, leaving them hanging in the event loop.
Compare the next two methods of ws package:
ws.close():
Initiates the close handshake, sending a close frame to the peer and waiting to receive a close frame back, after which it sends a FIN packet in an attempt to perform a clean socket close. When the answer is received, the socket is destroyed. However, there is a closeTimeout that destroys the socket only as a worst-case scenario, and it could potentially keep the socket around for an additional 30 seconds, preventing a graceful exit with your own custom timeout:
// ws/lib/WebSocket.js:21
const closeTimeout = 30 * 1000; // Allow 30 seconds to terminate the connection cleanly.
ws.terminate():
Forcibly destroys the socket without exchanging close frames or FIN packets, and does so instantly, without any timeout.
Hard shutdown
Considering all of the above, the "hard landing" scenario would be as follows:
wss.clients.forEach((socket) => {
  // Soft close
  socket.close();

  process.nextTick(() => {
    if ([socket.OPEN, socket.CLOSING].includes(socket.readyState)) {
      // Socket still hangs, hard close
      socket.terminate();
    }
  });
});
Soft shutdown
You can give your clients some time to respond, if you could allow yourself to wait for a while (but not 30 seconds):
// First sweep, soft close
wss.clients.forEach((socket) => {
  socket.close();
});

setTimeout(() => {
  // Second sweep, hard close
  // for everyone who's left
  wss.clients.forEach((socket) => {
    if ([socket.OPEN, socket.CLOSING].includes(socket.readyState)) {
      socket.terminate();
    }
  });
}, 10000);
Important: a proper execution of the close() method will produce close code 1000 for the close event, while terminate() signals an abnormal close with 1006 (MDN WebSocket close event).
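As a small illustration of the difference (browser WebSocket API; a sketch, not from the original answer), a client can inspect the close code it received:
socket.addEventListener('close', function (event) {
  if (event.code === 1000) {
    console.log('Server closed the connection cleanly (close handshake)');
  } else if (event.code === 1006) {
    console.log('Connection was terminated abnormally (no close frame)');
  }
});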
If you want to kick ALL clients without closing the server you can do this:
for (const client of wss.clients) {
  client.close();
}
You can also filter wss.clients if you want to look for one client in particular. If you want to kick a client as part of the connection logic (i.e. it sends bad data etc.), you can do this:
let WebSocketServer = require("ws").Server;
let wss = new WebSocketServer({ port: 8080 });

wss.on('connection', function connection(ws) {
  ws.send('something');
  ws.close(); // <- this closes the connection from the server
});
and with a basic client
"use strict";
const WebSocket = require("ws");
let ws = new WebSocket("ws://localhost:8080");
ws.onopen = () => {
console.log("opened");
};
ws.onmessage = (m) => {
console.log(m.data);
};
ws.onclose = () => {
console.log("closed");
};
you'll get:
d:/example/node client
opened
something
closed
According to the ws documentation, you need to call websocket.close() to terminate a connection.
let server = new WebSocketServer(options);

server.on('connection', ws => {
  ws.close(); // terminate this connection
});
Just use ws.close() in this way.
var socketServer = new WebSocketServer();

socketServer.on('connection', function (ws) {
  ws.close(); // Close connection for connected client ws
});
If you use var client = net.createConnection() to create the socket you can use client.destroy() to destroy it.
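A minimal sketch of that net-module case (the port is a placeholder assumption):
const net = require('net');

const client = net.createConnection({ port: 3090 }, () => {
  console.log('connected');
  client.destroy(); // immediately tear the raw socket down
});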
With ws it should be:
var server = new WebSocketServer();

server.on('connection', function (socket) {
  // Do something and then
  socket.close(); // quit this connection
});

Socket.io with Cluster: iterating over all open connections

I'm running Socket.io multi-threaded with the native cluster functionality provided by Node.js v0.6.0 and later (with RedisStore).
For every new change in state, the server iterates over each connection and sends a message if appropriate. Note: this isn't "broadcasting" to all connections, it's comparing server data with data the client sent on connection to decide whether to send the server data to that particular client. Consider this code sample:
io.sockets.clients().forEach(function (socket) {
  socket.get('subscription', function (err, message) {
    if (message.someProperty === someServerData) {
      socket.emit('position', someServerData);
    }
  });
});
This worked fine when there was only one process, but now the client receives a message for each Node process (i.e. if there are 8 Node processes running, all clients receive the messages 8 times).
I understand why the issue arises, but I'm not sure of a fix. How can I assign a 1-to-1 relation from one process to only one client? Perhaps something using NODE_WORKER_ID of Cluster?
This previous SO question seems somewhat related, although I'm not sure it's helpful.
This seems like a pretty common request. Surely, I must be missing something?
So if I get this straight you need to emit custom events from the server. You can do that by creating your own custom EventEmitter and triggering events on that emitter, for example:
var io = require('socket.io').listen(80),
    events = require('events'),
    customEventEmitter = new events.EventEmitter();

io.sockets.on('connection', function (socket) {
  // here you handle what happens on the 'positionUpdate' event
  // which will be triggered by the server later on
  customEventEmitter.on('positionUpdate', function (data) {
    // here you have a function that checks if a condition between
    // the socket connected and your data set as a param is met
    if (condition(data, socket)) {
      // send a message to each connected socket
      // if the condition is met
      socket.emit('the new position is...');
    }
  });
});

// sometime in the future the server will emit one or more positionUpdate events
customEventEmitter.emit('positionUpdate', data);
Another solution would be to have those users join the 'AWE150' room, so only they will receive updates for 'AWE150', like so:
var io = require('socket.io').listen(80);

io.sockets.on('connection', function (socket) {
  if (client_is_interested_in_AWE) { socket.join('AWE150'); }
  io.sockets.in('AWE150').emit('new position here');
});
Resources:
http://spiritconsulting.com.ar/fedex/2010/11/events-with-jquery-nodejs-and-socket-io/
