I'm trying to write a simple Node.js server with Server-Sent Events (SSE) abilities, without socket.io. My code runs well enough, but I ran into a huge problem when I was testing it.
If I connect to the node.js server from my browser, it will receive the server events fine. But when I refresh the connection, the same browser will start to receive the event twice. If I refresh n times, the browser will receive data n+1 times.
The most I tried this out with was 16 times (16 refreshes) before I stopped trying to count it.
See the screenshot below of the console from my browser. After the refresh (it tries to make an AJAX call), it will output "SSE persistent connection established!" and afterwards it will start receiving events.
Note the time when the events arrive. I get the event twice at 19:34:07 (it logs to the console twice -- upon receipt of the event and upon writing of the data to the screen; so you can see four logs there).
I also get the event twice at 19:34:12.
Here's what it looks like on the server side after the client has closed the connection (with source.close()):
As you can see, it is still trying to send messages to the client! Also, it's trying to send the messages twice (so I know this is a server side problem)!
I know it tries to send twice because it sent two heartbeats when it's only supposed to send one every 30 seconds.
This problem magnifies when I open n tabs: across n open tabs, n*n events are received in total. Basically, how I interpret it is this:
Opening the first tab, I subscribe to the server once. Opening the second tab, I subscribe both open tabs to the server once again -- so that's 2 subscriptions per tab. Opening a third tab, I subscribe all three open tabs to the events once more, so 3 subscriptions per tab, a total of 9. And so on...
I can't verify this, but my guess is that if I can create one subscription, I should be able to unsubscribe when certain conditions are met (i.e., if the heartbeat fails, I must disconnect). And I think this is happening for one of the following reasons:
1. I started setInterval once, and an instance of it will run forever unless I stop it, or
2. The server is still trying to send data to the client, trying to keep the connection open?
As for 1, I've already tried to kill the setInterval with clearInterval, but it doesn't work. As for 2, while it's probably impossible, I'm leaning towards believing that's what is happening...
Here are server-side code snippets of just the relevant parts (edited after the suggestions from the answers):
server = http.createServer(function(req, res){
    heartbeatPulse(res);
}).listen(8888, '127.0.0.1', 511);

server.on("request", function(req, res){
    console.log("Request Received!");
    var route_origin = url.parse(req.url);
    if(route_origin.query !== null){
        serveAJAX(res);
    } else {
        serveSSE(res);
        //hbTimerID = timerID.hbID;
        //msgTimerID = timerID.msgID;
    }
    req.on("close", function(){
        console.log("close the connection!");
        res.end();
        clearInterval(res['hbID']);
        clearInterval(res['msgID']);
        console.log(res['hbID']);
        console.log(res['msgID']);
    });
});
var attachListeners = function(res){
    /*
        Adding listeners to the server
        These events are triggered only when the server communicates by SSE
    */
    // this listener listens for file changes -- needs more modification to accommodate which file changed, what to write
    server.addListener("file_changed", function(res){
        // replace this with a file reader function
        writeUpdate = res.write("data: {\"date\": \"" + Date() + "\", \"test\": \"nowsssss\", \"i\": " + i++ + "}\n\n");
        if(writeUpdate){
            console.log("Sent SSE!");
        } else {
            console.log("SSE failed to send!");
        }
    });

    // this listener enables the server to send heartbeats
    server.addListener("hb", function(res){
        if(res.write("event: hb\ndata:\nretry: 5000\n\n")){
            console.log("Sent HB!");
        } else {
            // this fails. We can't just close the response, we need to close the connection
            // how to close a connection upon the client closing the browser??
            console.log("HB failed! Closing connection to client...");
            res.end();
            //console.log(http.IncomingMessage.connection);
            //http.IncomingMessage.complete = true;
            clearInterval(res['hbID']);
            clearInterval(res['msgID']);
            console.log(res['hbID']);
            console.log(res['msgID']);
            console.log("\tConnection Closed.");
        }
    });
}
var heartbeatPulse = function(res){
    res['hbID'] = "";
    res['msgID'] = "";
    res['hbID'] = setInterval(function(){
        server.emit("hb", res);
    }, HB_period);
    res['msgID'] = setInterval(function(){
        server.emit("file_changed", res);
    }, 5000);
    console.log(res['hbID']);
    console.log(res['msgID']);
    /*return {
        hbID: hbID,
        msgID: msgID
    };*/
}
var serveSSE = function(res){
    res.writeHead(200, {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        "Access-Control-Allow-Origin": "*",
        "Connection": "keep-alive"
    });
    console.log("Establishing Persistent Connection...");
    if(res.write("event: connector\ndata:\nretry: 5000\n\n")){
        // Only upon receiving the first message will the headers be sent
        console.log("Established Persistent Connection!");
    }
    attachListeners(res);
    console.log("\tRequested via SSE!");
}
This is largely a self project for learning, so any comments are definitely welcome.
One issue is that you are storing request-specific variables outside the scope of the http server. So what you could do instead is to just call your setInterval()s once right after you start the http server and not start individual timers per-request.
An alternative to adding event handlers for every request might be to instead add the response object to an array that is looped through inside each setInterval() callback, writing to the response. When the connection closes, remove the response object from the array.
The second issue about detecting a dead connection can be fixed by listening for the close event on the req object. When that is emitted, you remove the server event (e.g. file_changed and hb) listeners you added for that connection and do whatever other necessary cleanup.
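Here's a minimal sketch of that pattern, assuming a single pair of server-wide timers and an array of live SSE responses (the names sseClients and HB_period, and the 5-second file-change interval, are illustrative placeholders rather than part of the original code):

var http = require("http");
var url = require("url");

var HB_period = 30000;   // illustrative heartbeat period
var sseClients = [];     // all currently-open SSE responses

var server = http.createServer(function(req, res){
    if(url.parse(req.url).query !== null){
        // serve the AJAX request as before
        res.writeHead(200, {"Content-Type": "text/plain"});
        res.end("ajax response");
        return;
    }
    // SSE request: register the response and clean up when the client goes away
    res.writeHead(200, {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        "Access-Control-Allow-Origin": "*",
        "Connection": "keep-alive"
    });
    res.write("event: connector\ndata:\nretry: 5000\n\n");
    sseClients.push(res);
    req.on("close", function(){
        var idx = sseClients.indexOf(res);
        if(idx !== -1){
            sseClients.splice(idx, 1);
        }
    });
}).listen(8888, '127.0.0.1');

// one heartbeat timer for the whole server, started once
setInterval(function(){
    sseClients.forEach(function(res){
        res.write("event: hb\ndata:\nretry: 5000\n\n");
    });
}, HB_period);

// one "file changed" timer for the whole server
setInterval(function(){
    sseClients.forEach(function(res){
        res.write("data: {\"date\": \"" + Date() + "\"}\n\n");
    });
}, 5000);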
Here's how I got it to work:
Made a global heart object that contains the server plus all the methods to modify the server
var http = require("http");
var url = require("url");
var i = 0; // for dev only

var heart = {
    server: {},
    create: function(object){
        if(!object){
            return false;
        }
        this.server = http.createServer().listen(8888, '127.0.0.1');
        if(!this.server){
            return false;
        }
        for(each in object){
            if(!this.listen("hb", object[each])){
                return false;
            }
        }
        return true;
    },
    listen: function(event, callback){
        return this.server.addListener(event, callback);
    },
    ignore: function(event, callback){
        if(!callback){
            return this.server.removeAllListeners(event);
        } else {
            return this.server.removeListener(event, callback);
        }
    },
    emit: function(event){
        return this.server.emit(event);
    },
    on: function(event, callback){
        return this.server.on(event, callback);
    },
    beating: 0,
    beatPeriod: 1000,
    lastBeat: false,
    beat: function(){
        if(this.beating === 0){
            this.beating = setInterval(function(){
                heart.lastBeat = heart.emit("hb");
            }, this.beatPeriod);
            return true;
        } else {
            return false;
        }
    },
    stop: function(){ // not applicable if I always want the heart to beat
        if(this.beating !== 0){
            this.ignore("hb");
            clearInterval(this.beating);
            this.beating = 0;
            return true;
        } else {
            return false;
        }
    },
    methods: {},
    append: function(name, method){
        if(this.methods[name] = method){
            return true;
        }
        return false;
    }
};

/*
    Starting the heart
*/
if(heart.create(object) && heart.beat()){
    console.log("Heart is beating!");
} else {
    console.log("Failed to start the heart!");
}
I chained the req.on("close", callback) listener on the (essentially) server.on("request", callback) listener, then removed the callback when the close event is triggered
I chained a server.on("heartbeat", callback) listener on the server.on("request", callback) listener and made a res.write() when a heartbeat is triggered
The end result is each response object is being dictated by the server's single heartbeat timer, and will remove its own listeners when the request is closed.
heart.on("request", function(req, res){
    console.log("Someone Requested!");
    var origin = url.parse(req.url).query;
    if(origin === "ajax"){
        res.writeHead(200, {
            "Content-Type": "text/plain",
            "Access-Control-Allow-Origin": "*",
            "Connection": "close"
        });
        res.write("{\"i\":\"" + i + "\",\"now\":\"" + Date() + "\"}"); // this needs to be a file reading function
        res.end();
    } else {
        var hbcallback = function(){
            console.log("Heartbeat detected!");
            if(!res.write("event: hb\ndata:\n\n")){
                console.log("Failed to send heartbeat!");
            } else {
                console.log("Succeeded in sending heartbeat!");
            }
        };
        res.writeHead(200, {
            "Content-Type": "text/event-stream",
            "Cache-Control": "no-cache",
            "Access-Control-Allow-Origin": "*",
            "Connection": "keep-alive"
        });
        heart.on("hb", hbcallback);
        req.on("close", function(){
            console.log("The client disconnected!");
            heart.ignore("hb", hbcallback);
            res.end();
        });
    }
});
Related
I want to create a live order page where clients can see the status of their order.
For that reason I want to run a function every 10 seconds that checks the SQL database to see whether the order is ready.
function checkOrder(socket, userid, checkinterval) {
    pool.getConnection(function(err, connection) {
        // Use the connection
        connection.query('SELECT * FROM orders WHERE user = ' + userid + ' ORDER BY timestamp DESC', function(err, rows) {
            var alldone = false;
            for (var i = 0; i < rows.length; i++) {
                if (rows[i]['status'] == 'completed') {
                    alldone = true;
                } else {
                    alldone = false;
                    break;
                }
            }
            socket.emit('order-update', rows);
            connection.release();
            if (alldone) {
                console.log('all done');
                socket.emit('execute', '$("#orderstatus").html(\'Done\');');
                clearInterval(checkinterval);
            }
        });
    });
}
var express = require('express');
var fs = require('fs'); // needed for readFileSync below
var app = express();

var options = {
    key: fs.readFileSync('privkey.pem'),
    cert: fs.readFileSync('cert.pem'),
    ca: fs.readFileSync("chain.pem")
};
var server = require('https').createServer(options, app);
var io = require('socket.io')(server);
var port = 443;

server.listen(port, function() {
    console.log('Server listening at port %d', port);
});

io.on('connection', function(socket) {
    socket.on('trackorder', function(userid) {
        var checkinterval = setInterval(function() {
            checkOrder(socket, userid, checkinterval);
        }, 10000);
    });
    socket.on('disconnect', function() {
        clearInterval(checkinterval);
    });
});
Now I'm having issues stopping the function when either the job is completed or the client disconnects.
How could I achieve that? I suppose the clearInterval() would work inside the function, since the interval ID is passed in, but there is an issue with the disconnect event handler: either checkinterval is undefined there, or, if I define it globally, it stops the wrong interval.
How can this be done properly?
Your checkInterval variable is out of scope when the disconnect event comes. You need to move its definition up a level.
io.on('connection', function(socket) {
    // checkInterval variable is declared at this scope so all event handlers can access it
    var checkInterval;
    socket.on('trackorder', function(userid) {
        // make sure we never overwrite a checkInterval that is running
        clearInterval(checkInterval);
        checkInterval = setInterval(function() {
            checkOrder(socket, userid, checkInterval);
        }, 10000);
    });
    socket.on('disconnect', function() {
        clearInterval(checkInterval);
    });
});
In addition:
I added a guard against overwriting the checkInterval variable if you ever get the trackorder event more than once for the same client.
You misspelled checkinterval in one place.
As others have said, polling your database on behalf of every single client is a BAD design and will not scale. You need to either use database triggers (so it will tell you when something interesting changed) or have your own code that makes relevant changes to the database trigger a change. Do not poll on behalf of every single client.
You have no error handling in either pool.getConnection() or connection.query().
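As a rough illustration of the error-handling point, here is one shape checkOrder() could take with both callbacks guarded; the early returns, log messages, and the switch to a parameterized query are illustrative choices, not part of the original code:

function checkOrder(socket, userid, checkinterval) {
    pool.getConnection(function(err, connection) {
        if (err) {
            console.error('Could not get a DB connection:', err);
            return; // nothing to release yet
        }
        // a parameterized query also avoids building SQL by string concatenation
        connection.query('SELECT * FROM orders WHERE user = ? ORDER BY timestamp DESC', [userid], function(err, rows) {
            connection.release(); // release in every code path
            if (err) {
                console.error('Order query failed:', err);
                return;
            }
            var alldone = rows.length > 0 && rows.every(function(row) {
                return row.status === 'completed';
            });
            socket.emit('order-update', rows);
            if (alldone) {
                socket.emit('execute', '$("#orderstatus").html(\'Done\');');
                clearInterval(checkinterval);
            }
        });
    });
}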
Instead of that complicated setInterval stuff, just add a small IIFE that calls itself if the result isn't there yet. Some pseudocode:
function checkOrder(socket, userid){
    // a variable pointing to the running timer
    var timer;
    // on disconnect, stop retrying
    socket.on("disconnect", () => clearTimeout(timer));
    // a small IIFE
    (function retry(){
        pool.getConnection(function(err, connection) {
            // parse rows & notify the socket, setting alldone as before
            if (!alldone) // retry
                timer = setTimeout(retry, 1000);
        });
    })();
}
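A slightly fuller sketch of that retry idea, reusing the query from the question (the 1-second retry delay, the stopped flag, and the parameterized query are illustrative):

function checkOrder(socket, userid) {
    var timer;
    var stopped = false;

    // stop retrying once the client goes away
    socket.on('disconnect', function() {
        stopped = true;
        clearTimeout(timer);
    });

    (function retry() {
        pool.getConnection(function(err, connection) {
            if (err || stopped) {
                return;
            }
            connection.query('SELECT * FROM orders WHERE user = ? ORDER BY timestamp DESC', [userid], function(err, rows) {
                connection.release();
                if (err || stopped) {
                    return;
                }
                socket.emit('order-update', rows);
                var alldone = rows.length > 0 && rows.every(function(row) {
                    return row.status === 'completed';
                });
                if (alldone) {
                    socket.emit('execute', '$("#orderstatus").html(\'Done\');');
                } else {
                    timer = setTimeout(retry, 1000); // not done yet, check again shortly
                }
            });
        });
    })();
}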
I would say you're using a bad approach. You should go for push rather than pull.
What I mean is, emit the event when the status of an order changes. Don't put the burden on your database by hitting it frequently for no reason.
On a successful status change, emit the event order_status_update with the order id and the new status:
socket.emit('order_status_update', {order_id: 57, status: 'In Process'});
This way you don't need any kind of loop or setInterval etc. No need to worry about whether the client is connected or not; it's socket.io's business to take care of that. You just raise the event.
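For example, the code path that actually changes an order's status could emit right there. The updateOrderStatus() helper, the pool.query call, and the per-user room below are hypothetical, just to show the shape of the push approach:

// hypothetical helper called wherever the order status actually changes
function updateOrderStatus(orderId, userId, newStatus) {
    pool.query('UPDATE orders SET status = ? WHERE id = ?', [newStatus, orderId], function(err) {
        if (err) {
            console.error('Failed to update order:', err);
            return;
        }
        // push the change only to that user's sockets
        // (assumes each socket does socket.join('user-' + userid) in the connection handler)
        io.to('user-' + userId).emit('order_status_update', { order_id: orderId, status: newStatus });
    });
}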
After struggling with socket.io connection authentication (here and here) and thanks to @sgress454, I realized how to get this to work, and I am sending the authentication/authorization token as part of the query in the connection (see below).
Upon authentication failure (invalid/expired token or inactive user), I return the callback with a false parameter to indicate the connection is rejected.
On the client side, though, I am not sure how I should handle it, and it seems the socket is trying to reconnect even after explicitly disconnecting - I keep seeing that it is trying to reconnect.
The client code is something like this:
var _onConnectError = function(err) {
    if (err.description == 400) {
        console.log("Connection rejected");
        _disconnectAndCleanupSocket();
    } else {
        console.log("##SOCKET - connect_error", err.description, err);
    }
}

var _disconnectAndCleanupSocket = function() {
    if (io.socket) {
        if (io.socket.isConnected()) {
            io.socket.disconnect();
        }
        io.socket.removeAllListeners();
        delete io.socket;
    }
};

io.socket = io.sails.connect({ query: "token=" + token});
io.socket.on('connect', _onConnect);
io.socket.on('connect_error', _onConnectError);
io.socket.on('reconnect_error', _onConnectError);
On the server (config/sockets.js) I have:
beforeConnect: function(handshake, cb) {
    var token = handshake._query ? handshake._query.token : null;
    CipherService.verifyToken(token, function verifyTokenResults(err, decoded, info) {
        if (err || !decoded) {
            if (err.name === "TokenExpiredError") {
                // token expired - user can't connect...
                return cb(null, false);
            } else {
                // some other error...
                return cb(err, false);
            }
        }
        AuthUser.findOne(decoded.user.id).exec(function(err, user) {
            if (err || !user || !user.is_active) {
                return cb(null, false);
            }
            return cb(null, true);
        });
    });
    // (`false` would reject the connection)
},
I have tried to find documentation and explored the response object (in developer tools), but the only thing I saw there was the description field, which returns 400 on rejection and 0 in case there is no response (e.g. the server is down).
Is there some example/documentation for this? Overall, I didn't find a detailed description of using the SailsSocket in non-standard cases (other than using io.sails.connect()).
What is the proper way to handle such a rejection (and shouldn't it be handled as part of the Sails socket.io client)?
As an aside, I cannot instantiate a SailsSocket myself and can only do this with the io.sails.connect() function. Is that on purpose? Isn't there a chance of "missing" an event when I create the socket with the connect method and only then assign event handlers?
The short answer to your question is that you can set the reconnection flag to turn automatic reconnection on or off:
// Disable auto-reconnect for all new socket connections
io.sails.reconnection = false;
// Disable auto-reconnect for a single new socket connection
var mySocket = io.sails.connect({reconnection: false});
As far as SailsSocket creation, you are correct in that io.sails.connect() is the only way to create a SailsSocket. Since the connection is asynchronous, any event handlers you bind immediately after calling connect will be added before the actual connection takes place, so you won't miss any notifications.
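Putting those two pieces together with the question's own handlers, a client-side sketch could look like the following (this assumes the query and reconnection options can be passed in the same io.sails.connect() call, and reuses the _onConnect and _disconnectAndCleanupSocket functions from above):

// create the socket with auto-reconnect disabled, so a rejected
// handshake is not retried forever
io.socket = io.sails.connect({
    query: "token=" + token,
    reconnection: false
});

io.socket.on('connect', _onConnect);

io.socket.on('connect_error', function(err) {
    if (err.description == 400) {
        // beforeConnect returned false -- treat it as a hard rejection
        console.log("Connection rejected by the server");
        _disconnectAndCleanupSocket();
    } else {
        console.log("##SOCKET - connect_error", err.description, err);
    }
});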
To provide context, here's the problem I'm attempting to solve:
I've made a giphy bot for a casual groupchat with friends of mine. By typing /giphy [terms] in a message, it will automatically post the top result for [terms]. My friends, being the rambunctious assholes that they are, quickly started abusing it to spam the groupchat. What I would like to do to prevent this is only allow my postMessage function to be called once per minute.
What I've tried:
Using setTimeout(), which doesn't do exactly what I'd like, since it will only call the function after the amount of time specified in the argument has passed. As far as I can tell, this will cause a delay in messages from the time the bot is called, but it won't actually prevent the bot from accepting new postMessage() calls in that time.
Using setInterval(), which just causes the function to be called forever at a certain interval.
What I think might work:
Right now, I'm working with two .js files.
Index.js
var http, director, cool, bot, router, server, port;

http = require('http');
director = require('director');
bot = require('./bot.js');

router = new director.http.Router({
    '/' : {
        post: bot.respond,
        get: ping
    }
});

server = http.createServer(function (req, res) {
    req.chunks = [];
    req.on('data', function (chunk) {
        req.chunks.push(chunk.toString());
    });
    router.dispatch(req, res, function(err) {
        res.writeHead(err.status, {"Content-Type": "text/plain"});
        res.end(err.message);
    });
});

port = Number(process.env.PORT || 5000);
server.listen(port);

function ping() {
    this.res.writeHead(200);
    this.res.end("This is my giphy side project!");
}
Bot.js
var HTTPS = require('https');
var botID = process.env.BOT_ID;
var giphy = require('giphy-api')();

function respond() {
    var request = JSON.parse(this.req.chunks[0]);
    var giphyRegex = /^\/giphy (.*)$/;
    var botMessage = giphyRegex.exec(request.text);
    var offset = Math.floor(Math.random() * 10);

    if(request.text && giphyRegex.test(request.text) && botMessage != null) {
        this.res.writeHead(200);
        giphy.search({
            q: botMessage[1],
            rating: 'pg-13'
        }, function (err, res) {
            try {
                postMessage(res.data[offset].images.downsized.url);
            } catch (err) {
                postMessage("There is no gif of that.");
            }
        });
        this.res.end();
    } else {
        this.res.writeHead(200);
        this.res.end();
    }

    function postMessage(phrase) {
        var botResponse, options, body, botReq;
        botResponse = phrase;
        options = {
            hostname: 'api.groupme.com',
            path: '/v3/bots/post',
            method: 'POST'
        };
        body = {
            "bot_id" : botID,
            "text" : botResponse
        };
        botReq = HTTPS.request(options, function(res) {
            if(res.statusCode == 202) {
            } else {
                console.log('Rejecting bad status code: ' + res.statusCode);
            }
        });
        botReq.on('error', function(err) {
            console.log('Error posting message: ' + JSON.stringify(err));
        });
        botReq.on('timeout', function(err) {
            console.log('Timeout posting message: ' + JSON.stringify(err));
        });
        botReq.end(JSON.stringify(body));
    }
} // end of respond()

exports.respond = respond;
Basically, I'm wondering where the ideal place to implement the timer I'm envisioning would be. It seems like I would want it to only listen for /giphy [terms] again after one minute has passed, rather than waiting one minute to post.
My Question(s):
Would the best way to go about this be to set a timer on the respond() function, since then it will only actually parse the incoming information once per minute? Is there a more elegant place to put this?
How should the timer work on that function? I don't think I can just run respond() once every minute, since that seems to mean it'll only parse incoming JSON from the GroupMe API once per minute, so it could potentially miss incoming messages that I would want it to capture.
Store the time when a request is made and then use it to decide whether subsequent requests should be ignored because they arrived too fast.
var waitTime = 10*1000; // 10 s in millis
var lastRequestTime = null;

function respond() {
    if(lastRequestTime){
        var now = new Date();
        if(now.getTime() - lastRequestTime.getTime() <= waitTime){
            this.res.writeHead(200);
            this.res.end("You have to wait "+waitTime/1000+" seconds.");
            return;
        }
    }
    lastRequestTime = new Date();
    postMessage();
}
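If you would rather keep parsing every incoming message and only throttle the bot's replies (the worry raised in the question above), the same timestamp check can sit inside postMessage() instead; the 60-second window below is the once-per-minute limit from the question, and the variable names are illustrative:

var postWaitTime = 60 * 1000; // one minute between bot posts
var lastPostTime = null;

function postMessage(phrase) {
    var now = Date.now();
    if (lastPostTime !== null && now - lastPostTime < postWaitTime) {
        console.log('Rate limit: dropping "' + phrase + '"');
        return; // silently ignore posts inside the window
    }
    lastPostTime = now;
    // ... existing HTTPS.request code from Bot.js goes here unchanged ...
}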
I need to handle a websocket timeout when the socket is trying to connect to a server that is available but busy.
More in depth, I have a web application that can connect to several single-threaded remote socket servers in charge of post-processing data passed as input.
The goal is to collect data from the web GUI and then submit it to the first socket available and listening.
In case a server socket daemon is already processing data, it cannot serve the user request, which instead has to be addressed to the second socket of the list.
Basically, and with a practical example, I have sockets like these ready to accept a call coming from a web browser:
ws://10.20.30.40:8080/
ws://10.20.30.40:8081/
ws://10.20.30.40:8082/
ws://10.20.30.40:8083/
ws://192.192.192.192:9001/
When the first socket receives the call and performs the handshake, it starts to process the data (and this process may take some minutes).
When another client makes the same request on a different set of data, I would like that request to be served by the 2nd socket available, and so on.
So the question is... how can I contact the socket (the first of the list), wait 250 milliseconds, and (in case of timeout) skip to the next one?
I've started from the following approach:
$.interval = function(func, wait, times) {
    var interv = function(w, t) {
        return function() {
            if (typeof t === "undefined" || t-- > 0) {
                setTimeout(interv, w);
                try {
                    func.call(null);
                } catch(e) {
                    t = 0;
                    throw e.toString();
                }
            } else {
                alert('stop');
            }
        };
    } (wait, times);
    setTimeout(interv, wait);
}

handler = new WebSocket(url);
loop = 1;
$.interval(function(){
    rs = handler.readyState;
    console.log("(" + loop++ + ") readyState: " + rs);
}, 25, 10); //loop 10 times every 25 milliseconds
But how can I stop it once socket.readyState becomes 1 (OPEN) or the retry limit is reached?
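One way to express the "try the next endpoint after 250 ms" idea without polling readyState is to race each socket's open event against a timer and fall through to the next URL. This is only a sketch of the approach described in the question, with the endpoint list copied from it and the helper name connectFirstAvailable made up for illustration:

var endpoints = [
    "ws://10.20.30.40:8080/",
    "ws://10.20.30.40:8081/",
    "ws://10.20.30.40:8082/",
    "ws://10.20.30.40:8083/",
    "ws://192.192.192.192:9001/"
];

function connectFirstAvailable(urls, timeoutMs, onConnected, onFailed) {
    if (urls.length === 0) {
        onFailed(new Error("No socket available"));
        return;
    }
    var ws = new WebSocket(urls[0]);
    var settled = false;

    function tryNext() {
        if (settled) return;
        settled = true;
        ws.close();
        connectFirstAvailable(urls.slice(1), timeoutMs, onConnected, onFailed);
    }

    // busy or unreachable: give up on this endpoint after the timeout
    var timer = setTimeout(tryNext, timeoutMs);

    ws.onopen = function() {
        if (settled) return;
        settled = true;
        clearTimeout(timer);
        onConnected(ws);
    };
    ws.onerror = tryNext;
}

connectFirstAvailable(endpoints, 250, function(ws) {
    console.log("Connected to " + ws.url);
}, function(err) {
    console.log(err.message);
});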
Note: I'm using Autobahn.js for the client-side WAMP implementation, and when.js for promises.
I'm trying to create re-usable code so that only one websocket 'session', or connection, exists, and whenever a dev wants to subscribe to a topic using autobahn, they can just use the current connection object to do so if it already exists; otherwise a new one is created.
My issue is that, if the connection already exists, I have to use a setTimeout() to wait for a second to make sure it's actually connected, and then duplicate all the subscription code - I don't like this at all.
Here's my current code:
(function() {
    var connection = null;

    subscribeTo('subject', __userId, __token, function(onconnect) {
        console.log('Yay, connected');
    });

    function subscribeTo(subject, userId, token, onConnect, onDisconnect) {
        if (connection === null)
        {
            connection = new ab.Session('ws://localhost:8080', function(onopen) {
                connection.subscribe(JSON.stringify({subject: subject, userId: userId, token: token}), function(subscription, data) {
                    data = $.parseJSON(data);
                    // Do something with the data ...
                });
                if (typeof onConnect === 'function') {
                    onConnect();
                }
            }, function(onclose) {
                if (typeof onDisconnect === 'function') {
                    onDisconnect();
                }
            }, { 'skipSubprotocolCheck': true });
        }
    }
})();
Great. Now the issue is, what if I call subscribeTo() again straight after the previous one? The connection won't be null any more, but it also won't be connected yet. So the following is what I have to do:
// subscribeTo() multiple times at the top ...
subscribeTo('subject', __userId, __token, function(onconnect) {
    console.log('Yay, connected');
});
subscribeTo('anothersubject', __userId, __token, function(onconnect) {
    console.log('Yay, connected');
});

// The first one works, the second one requires a setTimeout() for the connection
// if connection is NOT null...
} else {
    setTimeout(function() {
        connection.subscribe(topic... etc...) // Really!?
    }, 1000);
}
Remove the setTimeout() and you'll get an error saying that "Autobahn is not connected".
Is there a better way to have a single, re-usable connection without code duplication, or am I doomed to create a new connection for each subscription because of the promises (perhaps I can use promises to my advantage here, although I haven't used them before)?
This is all way too complex, unneeded and wrong. You want to do your subscribes in response to a session being created:
var session = null;

function start() {
    // turn on WAMP debug output
    //ab.debug(true, false, false);

    // use jQuery deferreds instead of bundle whenjs
    //ab.Deferred = $.Deferred;

    // Connect to WAMP server ..
    //
    ab.launch(
        // WAMP app configuration
        {
            // WAMP URL
            wsuri: "ws://localhost:9000/ws",
            // authentication info
            appkey: null, // authenticate as anonymous
            appsecret: null,
            appextra: null,
            // additional session configuration
            sessionConfig: {maxRetries: 10, sessionIdent: "My App"}
        },
        // session open handler
        function (newSession) {
            session = newSession;
            main();
        },
        // session close handler
        function (code, reason, detail) {
            session = null;
        }
    );
}

function main() {
    session.subscribe("http://myapp.com/mytopic1", function(topic, event) {});
    session.subscribe("http://myapp.com/mytopic2", function(topic, event) {});
    session.subscribe("http://myapp.com/mytopic3", function(topic, event) {});
}

start();
The ab.launch helper will manage automatic reconnects for you (and also do WAMP-CRA authentication if required). main() is then automatically called again when a reconnect happens. Using the raw Session object is not recommended (unless you know what you are doing).
Also: topics must be URIs from the http or https scheme. Using serialized objects (JSON) is not allowed.