After struggling with socket.io connection authentication (here and here), and thanks to @sgress454, I figured out how to get this to work: I send the authentication/authorization token as part of the query in the connection (see below).
Upon authentication failure (invalid/expired token or inactive user), I return the callback with a false parameter to indicate that the connection is rejected.
On the client side, though, I am not sure how I should handle it, and the socket seems to keep trying to reconnect even after explicitly disconnecting - I keep seeing reconnection attempts.
The client code is something like this:
var _onConnectError = function(err) {
    if (err.description == 400) {
        console.log("Connection rejected");
        _disconnectAndCleanupSocket();
    } else {
        console.log("##SOCKET - connect_error", err.description, err);
    }
};

var _disconnectAndCleanupSocket = function() {
    if (io.socket) {
        if (io.socket.isConnected()) {
            io.socket.disconnect();
        }
        io.socket.removeAllListeners();
        delete io.socket;
    }
};

io.socket = io.sails.connect({ query: "token=" + token });
io.socket.on('connect', _onConnect);
io.socket.on('connect_error', _onConnectError);
io.socket.on('reconnect_error', _onConnectError);
On the server (config/sockets.js) I have:
beforeConnect: function(handshake, cb) {
    var token = handshake._query ? handshake._query.token : null;
    CipherService.verifyToken(token, function verifyTokenResults(err, decoded, info) {
        if (err || !decoded) {
            // guard on err: decoded can be falsy without an error object
            if (err && err.name === "TokenExpiredError") {
                // token expired - user can't connect...
                return cb(null, false);
            } else {
                // some other error...
                return cb(err, false);
            }
        }
        AuthUser.findOne(decoded.user.id).exec(function(err, user) {
            if (err || !user || !user.is_active) {
                return cb(null, false);
            }
            return cb(null, true);
        });
    });
    // (`false` would reject the connection)
},
I have tried to find documentation and explored the response object (in developer tools), but the only thing I found there was the description field, which returns 400 on rejection and 0 when there is no response at all (e.g. the server is down).
Is there some example/documentation for this? Overall, I didn't find a detailed description of using the SailsSocket in non-standard cases (other than via io.sails.connect()).
What is the proper way to handle such a rejection (and shouldn't it be handled as part of the sails socket.io client)?
As an aside, I cannot instantiate a SailsSocket myself and can only do so with the io.sails.connect() function. Is that on purpose? Is there no chance of "missing" an event when I create the socket with the connect method and only assign event handlers afterwards?
The short answer to your question is that you can set the reconnection flag to turn automatic reconnection on or off:
// Disable auto-reconnect for all new socket connections
io.sails.reconnection = false;
// Disable auto-reconnect for a single new socket connection
var mySocket = io.sails.connect({reconnection: false});
As for SailsSocket creation, you are correct that io.sails.connect() is the only way to create a SailsSocket. Since the connection is asynchronous, any event handlers you bind immediately after calling connect will be added before the actual connection takes place, so you won't miss any notifications.
I'm trying to write a simple node.js server with Server Sent Event abilities without socket.io. My code runs well enough, but I ran into a huge problem when I was testing it.
If I connect to the node.js server from my browser, it will receive the server events fine. But when I refresh the connection, the same browser will start to receive the event twice. If I refresh n times, the browser will receive data n+1 times.
The most I tried this out with was 16 times (16 refreshes) before I stopped trying to count it.
See the screenshot below of the console from my browser. After the refresh (it tries to make an AJAX call), it will output "SSE persistent connection established!" and afterwards it will start receiving events.
Note the time when the events arrive. I get the event twice at 19:34:07 (it logs to the console twice -- upon receipt of the event and upon writing of the data to the screen; so you can see four logs there).
I also get the event twice at 19:34:12.
Here's what it looks like on the server side after the client has closed the connection (with source.close()):
As you can see, it is still trying to send messages to the client! Also, it's trying to send the messages twice (so I know this is a server side problem)!
I know it tries to send twice because it sent two heartbeats when it's only supposed to send once per 30 seconds.
This problem is magnified when I open n tabs. What happens is each open tab will receive n*n events. Basically, this is how I interpret it:
Opening the first tab, I subscribe to the server once. Opening the second tab, I subscribe both open tabs to the server once again -- so that's 2 subscriptions per tab. Opening a third tab, I subscribe all three open tabs to the events once more, so 3 subscriptions per tab, total of 9. And so on...
I can't verify this, but my guess is that if I can create one subscription, I should be able to unsubscribe when certain conditions are met (i.e., if the heartbeat fails, I must disconnect). And the reason this is happening comes down to one of the following:
I started setInterval once, and an instance of it will run forever unless I stop it, or
The server is still trying to send data to the client to keep the connection open?
As for 1, I've already tried to kill the setInterval with clearInterval; it doesn't work. As for 2, while it's probably impossible, I'm leaning towards believing that...
Here's server side code snippets of just the relevant parts (editing the code after the suggestions from answers):
server = http.createServer(function(req, res){
    heartbeatPulse(res);
}).listen(8888, '127.0.0.1', 511);

server.on("request", function(req, res){
    console.log("Request Received!");
    var route_origin = url.parse(req.url);
    if(route_origin.query !== null){
        serveAJAX(res);
    } else {
        serveSSE(res);
        //hbTimerID = timerID.hbID;
        //msgTimerID = timerID.msgID;
    }
    req.on("close", function(){
        console.log("close the connection!");
        res.end();
        clearInterval(res['hbID']);
        clearInterval(res['msgID']);
        console.log(res['hbID']);
        console.log(res['msgID']);
    });
});

var attachListeners = function(res){
    /*
    Adding listeners to the server
    These events are triggered only when the server communicates by SSE
    */
    // this listener listens for file changes -- needs more modification to accommodate which file changed, what to write
    server.addListener("file_changed", function(res){
        // replace this with a file reader function
        writeUpdate = res.write("data: {\"date\": \"" + Date() + "\", \"test\": \"nowsssss\", \"i\": " + i++ + "}\n\n");
        if(writeUpdate){
            console.log("Sent SSE!");
        } else {
            console.log("SSE failed to send!");
        }
    });

    // this listener enables the server to send heartbeats
    server.addListener("hb", function(res){
        if(res.write("event: hb\ndata:\nretry: 5000\n\n")){
            console.log("Sent HB!");
        } else {
            // this fails. We can't just close the response, we need to close the connection
            // how to close a connection upon the client closing the browser??
            console.log("HB failed! Closing connection to client...");
            res.end();
            //console.log(http.IncomingMessage.connection);
            //http.IncomingMessage.complete = true;
            clearInterval(res['hbID']);
            clearInterval(res['msgID']);
            console.log(res['hbID']);
            console.log(res['msgID']);
            console.log("\tConnection Closed.");
        }
    });
}

var heartbeatPulse = function(res){
    res['hbID'] = "";
    res['msgID'] = "";
    res['hbID'] = setInterval(function(){
        server.emit("hb", res);
    }, HB_period);
    res['msgID'] = setInterval(function(){
        server.emit("file_changed", res);
    }, 5000);
    console.log(res['hbID']);
    console.log(res['msgID']);
    /*return {
        hbID: hbID,
        msgID: msgID
    };*/
}

var serveSSE = function(res){
    res.writeHead(200, {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        "Access-Control-Allow-Origin": "*",
        "Connection": "keep-alive"
    });
    console.log("Establishing Persistent Connection...");
    if(res.write("event: connector\ndata:\nretry: 5000\n\n")){
        // Only upon receiving the first message will the headers be sent
        console.log("Established Persistent Connection!");
    }
    attachListeners(res);
    console.log("\tRequested via SSE!");
}
This is largely a self project for learning, so any comments are definitely welcome.
One issue is that you are storing request-specific variables outside the scope of the http server. So what you could do instead is to just call your setInterval()s once right after you start the http server and not start individual timers per-request.
An alternative to adding event handlers for every request might be to instead add the response object to an array that is looped through inside each setInterval() callback, writing to the response. When the connection closes, remove the response object from the array.
The second issue about detecting a dead connection can be fixed by listening for the close event on the req object. When that is emitted, you remove the server event (e.g. file_changed and hb) listeners you added for that connection and do whatever other necessary cleanup.
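The registry approach suggested above (one shared timer, an array of open responses, removal on close) can be sketched like this. The response objects here are hypothetical stand-ins that just record what was written; a real server would push actual http.ServerResponse streams:

```javascript
// One shared registry instead of per-request timers and listeners.
const clients = [];

function addClient(res) { clients.push(res); }

// called from req.on('close', ...) in a real server
function removeClient(res) {
  const i = clients.indexOf(res);
  if (i !== -1) clients.splice(i, 1);
}

// invoked by a *single* global setInterval, not one timer per request
function broadcastHeartbeat() {
  clients.forEach((res) => res.write('event: hb\ndata:\n\n'));
}

// demo with stand-in responses that record what was written to them
const makeFakeRes = () => ({ sent: [], write(chunk) { this.sent.push(chunk); } });
const a = makeFakeRes();
const b = makeFakeRes();
addClient(a);
addClient(b);
broadcastHeartbeat(); // both connected clients get one heartbeat each
removeClient(a);      // client a disconnects
broadcastHeartbeat(); // only b receives this one
console.log(a.sent.length, b.sent.length); // 1 2
```

Because there is exactly one timer and each connection is listed at most once, refreshing n times leaves each open tab with exactly one heartbeat per tick instead of n.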
Here's how I got it to work:
Made a global heart object that contains the server plus all the methods to modify the server
var http = require("http");
var url = require("url");
var i = 0; // for dev only

var heart = {
    server: {},
    create: function(object){
        if(!object){
            return false;
        }
        this.server = http.createServer().listen(8888, '127.0.0.1');
        if(!this.server){
            return false;
        }
        for(each in object){
            if(!this.listen("hb", object[each])){
                return false;
            }
        }
        return true;
    },
    listen: function(event, callback){
        return this.server.addListener(event, callback);
    },
    ignore: function(event, callback){
        if(!callback){
            return this.server.removeAllListeners(event);
        } else {
            return this.server.removeListener(event, callback);
        }
    },
    emit: function(event){
        return this.server.emit(event);
    },
    on: function(event, callback){
        return this.server.on(event, callback);
    },
    beating: 0,
    beatPeriod: 1000,
    lastBeat: false,
    beat: function(){
        if(this.beating === 0){
            this.beating = setInterval(function(){
                heart.lastBeat = heart.emit("hb");
            }, this.beatPeriod);
            return true;
        } else {
            return false;
        }
    },
    stop: function(){ // not applicable if I always want the heart to beat
        if(this.beating !== 0){
            this.ignore("hb");
            clearInterval(this.beating);
            this.beating = 0;
            return true;
        } else {
            return false;
        }
    },
    methods: {},
    append: function(name, method){
        if(this.methods[name] = method){
            return true;
        }
        return false;
    }
};

/*
Starting the heart
*/
if(heart.create(object) && heart.beat()){
    console.log("Heart is beating!");
} else {
    console.log("Failed to start the heart!");
}
I chained the req.on("close", callback) listener onto the (essentially) server.on("request", callback) listener, then removed the callback when the close event is triggered
I chained a server.on("heartbeat", callback) listener onto the server.on("request", callback) listener and performed a res.write() when a heartbeat is triggered
The end result is each response object is being dictated by the server's single heartbeat timer, and will remove its own listeners when the request is closed.
heart.on("request", function(req, res){
    console.log("Someone Requested!");
    var origin = url.parse(req.url).query;
    if(origin === "ajax"){
        res.writeHead(200, {
            "Content-Type": "text/plain",
            "Access-Control-Allow-Origin": "*",
            "Connection": "close"
        });
        res.write("{\"i\":\"" + i + "\",\"now\":\"" + Date() + "\"}"); // this needs to be a file reading function
        res.end();
    } else {
        var hbcallback = function(){
            console.log("Heartbeat detected!");
            if(!res.write("event: hb\ndata:\n\n")){
                console.log("Failed to send heartbeat!");
            } else {
                console.log("Succeeded in sending heartbeat!");
            }
        };
        res.writeHead(200, {
            "Content-Type": "text/event-stream",
            "Cache-Control": "no-cache",
            "Access-Control-Allow-Origin": "*",
            "Connection": "keep-alive"
        });
        heart.on("hb", hbcallback);
        req.on("close", function(){
            console.log("The client disconnected!");
            heart.ignore("hb", hbcallback);
            res.end();
        });
    }
});
I spent a while trying to diagnose this error.
First, I created a subclass of EventEmitter.
File Client.js
// requires inferred from usage in this snippet:
var util = require('util');
var request = require('request');
var EventEmitter = require('events').EventEmitter;

var bindToProcess = function(fct) {
    if (fct && process.domain) {
        return process.domain.bind(fct);
    }
    return fct;
};

function Client() {
    EventEmitter.call(this);
}
util.inherits(Client, EventEmitter);

Client.prototype.success = function(fct) {
    this.on('success', bindToProcess(fct));
    return this;
};

// matching error() helper, used by user.js below
Client.prototype.error = function(fct) {
    this.on('error', bindToProcess(fct));
    return this;
};

Client.prototype.login = function(username, password) {
    var body = {
        username: username,
        password: password
    };
    var self = this;
    request.post(url, { json: true, body: body }, function(error, response, body) {
        if (error || response.statusCode != HTTPStatus.OK) {
            return self.emit('error', error);
        }
        return self.emit('success', body);
    });
    return this;
};

module.exports = Client;
Then in another file in my Express App
File user.js
var Client = require('../utils/client');
var client = new Client();

// GET '/login'
exports.login = function(req, res) {
    client.login(username, password).success(function(user) {
        res.redirect('/');
    }).error(function(error) {
        res.redirect('login');
    });
};
The thing is, though, that on the second request the server crashes with the error:
Error: Can't set headers after they are sent.
In the interim, I've solved the problem by creating the Client inside the middleware rather than having it as a global variable. I'm just curious why this is happening?
Thanks,
(hopefully there is enough information)
What happens here is that the event handlers from the first request are called again during the second request, because the client variable is shared between requests.
First, the client is created in the global scope. Then two handlers are attached to its events, the request is actually performed, and the corresponding handler is called.
On the second request, two more handlers are attached to the same object, and then on either success or failure both sets of handlers (from the previous call and from the current one) are notified.
So you need to move the client creation into the action method, or change how the code responds to the events. I can suggest a promise-like approach: pass two callbacks as parameters to one method; or just the standard callback approach: pass the error as the first argument of the callback.
I'm new to mongoose/mongodb and I am trying to do some sort of error handling with my document save.
I am trying to create a stub id to store into the db for easier data retrieval later on (and also to put into the url bar so people can send links to my website to that particular page more easily -- like jsfiddle or codepen).
Basically, I want to search for a document with a given page_id, and if it exists, regenerate the page_id and search again until I find one that's unused, like this:
while (!done) {
    Model.findOne({'page_id': some_hex}, function (err, doc) {
        if (doc) {
            some_hex = generate_hex();
        } else {
            done = true;
        }
    });
}
model.page_id = some_hex;
model.save();
However, since mongoose is asynchronous, the while loop will pretty much run indefinitely while the find works in the background until it finds something. This will kill the resources on the server.
I'm looking for an efficient way to retry save() when it fails (with a change to page_id). Or to try and find an unused page_id. I have page_id marked as unique:true in my schema.
Retrying should be performed asynchronously:
var tryToSave = function(doc, callback) {
    var instance = new Model(doc);
    instance.page_id = generate_hex();
    instance.save(function(err) {
        if (err) {
            if (err.code === 11000) { // 'duplicate key error'
                // retry with a newly generated page_id
                return tryToSave(doc, callback);
            } else {
                // another error
                return callback(err);
            }
        }
        // it worked!
        callback(null, instance);
    });
};

// And somewhere else:
tryToSave(doc, function(err, instance) {
    if (err) ...; // handle errors
    ...
});
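The retry flow can be exercised without a database by stubbing out save(). The names below (fakeSave, generateHex, the fixed list of ids) are stand-ins chosen so the first two attempts collide with "existing" documents:

```javascript
// Deterministic stand-ins: the first two generated ids already exist.
let attempts = 0;
const taken = new Set(['aaaa', 'bbbb']);
const hexes = ['aaaa', 'bbbb', 'cccc'];
const generateHex = () => hexes[attempts];

// Stand-in for instance.save(): reports Mongo's duplicate-key code 11000
// when the unique page_id is already taken.
function fakeSave(pageId, cb) {
  attempts++;
  if (taken.has(pageId)) return cb({ code: 11000 });
  cb(null);
}

function tryToSave(callback) {
  const pageId = generateHex();
  fakeSave(pageId, (err) => {
    if (err) {
      if (err.code === 11000) return tryToSave(callback); // collision: retry
      return callback(err);                               // some other error
    }
    callback(null, pageId);                               // unique id found
  });
}

let result = null;
tryToSave((err, pageId) => { result = pageId; });
console.log(result, attempts); // cccc 3
```

The key point is that each retry happens inside the save callback, so there is no busy-waiting loop: the unique index does the collision detection, and the recursion only continues after the previous attempt has actually failed.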
Note: I'm using Autobahn.js for the client-side WAMP implementation, and when.js for promises.
I'm trying to create re-usable code so that only one websocket 'session', or connection exists, and whenever a dev wants to subscribe to a topic using autobahn, they can just use the current connection object to do so if it already exists; else a new one is created.
My issue is that, if the connection already exists, I have to use a setTimeout() to wait for a second to make sure it's actually connected, and then duplicate all the subscription code - I don't like this at all.
Here's my current code:
(function() {
    var connection = null;

    subscribeTo('subject', __userId, __token, function(onconnect) {
        console.log('Yay, connected');
    });

    function subscribeTo(subject, userId, token, onConnect, onDisconnect) {
        if (connection === null) {
            connection = new ab.Session('ws://localhost:8080', function(onopen) {
                connection.subscribe(JSON.stringify({subject: subject, userId: userId, token: token}), function(subscription, data) {
                    data = $.parseJSON(data);
                    // Do something with the data ...
                });
                if (typeof onConnect === 'function') {
                    onConnect();
                }
            }, function(onclose) {
                if (typeof onDisconnect === 'function') {
                    onDisconnect();
                }
            }, { 'skipSubprotocolCheck': true });
        }
    }
})();
Great. Now the issue is: what if I call subscribeTo() again straight after the previous one? The connection won't be null any more, but it also won't be connected yet. So the following is what I have to do:
// subscribeTo() multiple times at the top ...
subscribeTo('subject', __userId, __token, function(onconnect) {
    console.log('Yay, connected');
});
subscribeTo('anothersubject', __userId, __token, function(onconnect) {
    console.log('Yay, connected');
});

// The first one works, the second one requires a setTimeout() for the connection
// if connection is NOT null...
} else {
    setTimeout(function() {
        connection.subscribe(topic... etc...) // Really!?
    }, 1000);
}
Remove the setTimeout() and you'll get an error saying that "Autobahn is not connected".
Is there a better way to have a single, re-usable connection, without code-duplication, or am I doomed to create a new connection for each subscription because of the promises (perhaps I can use promises to my advantage here, although I haven't used them before this)?
This is all way too complex, unneeded and wrong. You want to do your subscribes in response to a session being created:
var session = null;

function start() {
    // turn on WAMP debug output
    //ab.debug(true, false, false);

    // use jQuery deferreds instead of bundled whenjs
    //ab.Deferred = $.Deferred;

    // Connect to WAMP server ..
    //
    ab.launch(
        // WAMP app configuration
        {
            // WAMP URL
            wsuri: "ws://localhost:9000/ws",
            // authentication info
            appkey: null, // authenticate as anonymous
            appsecret: null,
            appextra: null,
            // additional session configuration
            sessionConfig: {maxRetries: 10, sessionIdent: "My App"}
        },
        // session open handler
        function (newSession) {
            session = newSession;
            main();
        },
        // session close handler
        function (code, reason, detail) {
            session = null;
        }
    );
}

function main() {
    session.subscribe("http://myapp.com/mytopic1", function(topic, event) {});
    session.subscribe("http://myapp.com/mytopic2", function(topic, event) {});
    session.subscribe("http://myapp.com/mytopic3", function(topic, event) {});
}

start();
The ab.launch helper will manage automatic reconnects for you (and also do WAMP-CRA authentication if required). The session open handler (which calls main() here) is then automatically invoked again when a reconnect happens. Using the raw Session object is not recommended (unless you know what you are doing).
Also: topics must be URIs from the http or https scheme. Using serialized objects (JSON) is not allowed.
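If you still want a subscribeTo() helper that callers can invoke at any time, one way to get it without setTimeout() is to queue subscriptions until the session-open handler fires and flush them there. This is an illustrative sketch with stand-in names (subscribeTo, onSessionOpen, the fake session object), not Autobahn API:

```javascript
let session = null;
const pending = [];

function subscribeTo(topic, handler) {
  if (session) {
    session.subscribe(topic, handler); // already connected: subscribe now
  } else {
    pending.push([topic, handler]);    // not connected yet: queue it
  }
}

// this is what the session-open callback would do
function onSessionOpen(newSession) {
  session = newSession;
  // flush everything queued before the connection completed
  pending.splice(0).forEach(([topic, handler]) => session.subscribe(topic, handler));
}

// demo with a stand-in session that records subscriptions
const fake = { topics: [], subscribe(t, h) { this.topics.push(t); } };
subscribeTo('http://myapp.com/mytopic1', () => {}); // queued
subscribeTo('http://myapp.com/mytopic2', () => {}); // queued
onSessionOpen(fake);                                // connection completes: both flush
subscribeTo('http://myapp.com/mytopic3', () => {}); // subscribes immediately
console.log(fake.topics.length); // 3
```

The queue removes the race between "connection object exists" and "connection is actually open", which is exactly the gap the setTimeout() was papering over.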
I am writing a small node.js server to help maintain build machines. It's basically for testers to be able to drop a db or restart a server remotely. I have some issues with pg connections. Does anybody have an idea why the connection is not being closed after the first request?
var client = new pg.Client(conString);

var server = http.createServer(function (req, res) {
    var url = parse(req.url);
    if (url.pathname == '/') {
        (...)
    } else {
        var slash_index = url.pathname.indexOf('/', 1);
        var command = url.pathname.slice(1, slash_index);
        if (command == 'restart') {
            res.write('restarting server please wait');
        } else if (command == 'drop-db') {
            console.log('drop-db');
            client.connect();
            console.log('connect');
            var query = client.query("select datname from pg_database;", function(err, result) {
                if (err) throw err;
                console.log('callback');
            });
            query.on('end', function() {
                console.log('close');
                client.end();
            });
        } else {
            res.write('unknown command : ' + command);
        }
        res.write('\n');
        res.end();
    }
}).listen(5337);
So what I get on screen after the first request is:
drop-db
connect
callback
close
Great, but after the next request I get only:
drop-db
connect
and after the next one I already get a pg error.
What am I doing wrong?
Edit: No errors after the second request. Error after the third:
events.js:48
throw arguments[1]; // Unhandled 'error' event
^
error: invalid frontend message type 0
at [object Object].<anonymous> (/home/wonglik/workspace/server.js/node_modules/pg/lib/connection.js:412:11)
at [object Object].parseMessage (/home/wonglik/workspace/server.js/node_modules/pg/lib/connection.js:287:17)
at Socket.<anonymous> (/home/wonglik/workspace/server.js/node_modules/pg/lib/connection.js:45:22)
at Socket.emit (events.js:88:20)
at TCP.onread (net.js:347:14)
I think it is related to opening a new connection while the old one is still open.
Edit 2 :
I've checked postgres logs :
after second request :
2012-03-13 09:23:22 EET LOG: invalid length of startup packet
after third request :
2012-03-13 09:24:48 EET FATAL: invalid frontend message type 0
It looks like client (pg.Client) is declared outside the scope of a request; this is probably your issue. It's hard to tell from the code snippet, but it looks like you might have issues with scoping and how async callback control flow works in general, e.g. calling res.end() while callbacks are still in the IO queue. This is totally legal in node; I'm just not sure that is your intent.
It is preferred to use pg.connect which returns a client. see https://github.com/brianc/node-postgres/wiki/pg
var pg = require('pg');

var server = http.createServer(function (req, res) {
    var url = parse(req.url);
    if (url.pathname == '/') {
        (...)
    } else {
        var slash_index = url.pathname.indexOf('/', 1);
        var command = url.pathname.slice(1, slash_index);
        if (command == 'restart') {
            res.write('restarting server please wait');
        } else if (command == 'drop-db') {
            console.log('drop-db');
            pg.connect(conString, function(err, client) {
                console.log('connect');
                var query = client.query("select datname from pg_database;", function(err, result) {
                    if (err) throw err;
                    console.log('callback');
                });
                query.on('end', function() {
                    console.log('close');
                    // client.end(); -- not needed, client will return to the pool on drain
                });
            });
        } else {
            res.write('unknown command : ' + command);
        }
        // these shouldn't be here either if you plan to write to res from within the pg
        // callback
        res.write('\n');
        res.end();
    }
}).listen(5337);
I was getting this error, similar to you, and it was that the connection wasn't closed. When you attempt to (re)connect via an already open connection, things go boom. I would suggest that you use the direct connection stuff, since you don't seem to need the pooling code - might make it easier to trap the problem. (Though, given that I've resurrected an older post, I suspect that you probably already fixed this.)
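The failure mode, calling connect() on a client that is already connected, can be mimicked with a stand-in client (this is not the pg internals, just an illustration of why the third request blows up after the second one left the connection open):

```javascript
// Stand-in client: like the real postgres wire protocol, it cannot
// tolerate a second connect() on an already-open connection.
function makeClient() {
  return {
    connected: false,
    connect() {
      if (this.connected) {
        // roughly what surfaces as "invalid frontend message type"
        throw new Error('already connected');
      }
      this.connected = true;
    },
    end() { this.connected = false; },
  };
}

const shared = makeClient();
shared.connect();    // request 1: fine
shared.end();        // query 'end' fired, connection closed cleanly
shared.connect();    // request 2: fine again
let failed = false;
try {
  shared.connect();  // request 3 arrives before end() ran: boom
} catch (e) {
  failed = true;
}
console.log(failed); // true
```

A pool (pg.connect in the answer above) sidesteps this entirely, because each request checks out a client that is already connected and simply returns it when done.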