Will a MongoDB connection close or expire automatically? - javascript

I have written a queue-triggered Azure Function App (Node.js) where, on each queue trigger, data is inserted into MongoDB. I am creating the MongoClient above function level and reusing the same MongoClient for all the triggers:
if (mongoClient && mongoClient.topology.isConnected()) {
    // reuse the same connection
} else {
    // create a new client (uri assumed to be defined elsewhere; it was omitted in the original)
    mongoClient = await mongoDB.MongoClient.connect(uri);
}
Sometimes on my MongoDB cluster I am getting the error "connections to your cluster(s) have exceeded". I don't understand: is it because I am keeping the connection open for too long? Will the connection automatically expire after some time? Is it good to keep the client connection above function level and reuse it? Can someone suggest, please?
If I open and close the connection at function level instead, then I get another error in the function: "Cannot use a session that has ended".

If you've deployed the Function App on the Consumption plan, then the number of outbound connections is limited (~600/instance), and you'll get the connections-exceeded error when you exceed that limit.
I would suggest enabling Application Insights on the Function App to track request times, response times and other metrics that help with troubleshooting.
Is it good to keep Client Connection above function level and reuse it?
Yes, you can keep client connections above function level and reuse them instead of creating a new connection on every invocation, whatever the client is: an HTTP client, a Document client or a database client.
Do not create a new client with every function invocation.
Do create a single, static client that every function invocation can use.
Consider creating a single, static client in a shared helper class if different functions use the same service.
Refer to the Microsoft docs on Azure Functions client connections for the best practices when managing client connections in function instances.
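As a rough illustration of that guidance, a queue-triggered function sharing one client might look like this. This is a sketch only; the MONGO_URI app setting and the database/collection names are placeholders, not from the original post:
const { MongoClient } = require('mongodb');

let client; // lives for the lifetime of the function host instance

async function getClient() {
    if (!client) {
        client = await MongoClient.connect(process.env.MONGO_URI); // MONGO_URI is an assumed app setting
    }
    return client;
}

module.exports = async function (context, queueItem) {
    const db = (await getClient()).db('mydb'); // 'mydb' is a placeholder
    await db.collection('items').insertOne(queueItem); // 'items' is a placeholder
    // note: no client.close() here; the client is intentionally reused
};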

Related

Does Cloud Spanner not manage sessions properly?

I have looked into this issue but could not find any sufficient information about it.
The Google Cloud Spanner client libraries handle sessions automatically, and the limit is 10,000 sessions per node; no problem up to here.
I have a microserviced application which also has Google Cloud Functions. I am doing some specific database jobs in Cloud Functions and I'm also calling those functions continuously. After a little while, Cloud Spanner starts to throw an error:
Too many active sessions in database, limit is 10000. Increase the node count to allow more sessions.
I know about the limits, but there is no operation in my app that should cause it to exceed them.
After I noticed this, I had two questions for which I could not find any answer:
1- Does Cloud Functions create a new session for every call? (I am using an HTTP trigger.)
Here is what I did so far.
1- Here is an example Cloud Functions declaration of mine:
exports.myFunction = function myFunction(req, res) {}
I was declaring my database instance outside this scope before I realized this issue:
const db = Spanner({projectId: '[my-project]'}).instance('[my-cs-instance]').database('[my-database]');
exports.myFunction = function myFunction(req, res) {}
After this issue, I put it inside the scope like this, and closed the database session after I'm done:
exports.myFunction = function myFunction(req, res) {
    const db = Spanner({projectId: '[my-project]'})
        .instance('[my-cs-instance]')
        .database('[my-database]');
    // codes
    db.close();
};
That didn't change anything; it still exceeds the session limit after a while.
Do you have any experience of what causes this? Is it related to Cloud Functions or to Cloud Spanner itself?
2- If every transaction object uses one connection at a time, what happens in this scenario?
I have a REST endpoint other than these Cloud Functions. It creates a database instance when it starts listening on HTTP endpoints, and I am not creating any other instance during its lifecycle. At that endpoint I perform CRUD operations and use transactions, and they all use the same instance created at the start of the process. My experience is:
Sometimes transactions or other CRUD operations run with a slight delay, which does not happen all the time.
My question is:
Is it that when a transaction starts to work, it locks the connection and all other operations have to wait until it ends? If so, should I create independent database instances for transactions on that endpoint?
Thanks in advance
This has now been fixed; see the issue opened at #89, the fix at #91, and the report logged as #71987137 on the Google Issue Tracker.
If any issue persists, please report it on the Google Issue Tracker and it will be re-opened for examination.
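For reference, with the fixed library, the usual pattern is to create the client once in global scope and reuse it across invocations. A minimal sketch with placeholder names, assuming the current @google-cloud/spanner package:
const { Spanner } = require('@google-cloud/spanner');

// created once per function instance, reused across invocations
const spanner = new Spanner({ projectId: 'my-project' });
const db = spanner.instance('my-cs-instance').database('my-database');

exports.myFunction = (req, res) => {
    // the shared session pool is reused; no db.close() per request
    db.run('SELECT 1')
        .then(([rows]) => res.status(200).json(rows))
        .catch((err) => res.status(500).send(err.message));
};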

Correct way to handle Websocket

I have a client-to-server WebSocket connection which should stay open for 40 seconds or so; ideally it should stay open forever.
The client continually sends data to the server and vice versa.
Right now I'm using this sequence:
var socket;
function senddata(data) {
    if (!socket) {
        socket = new WebSocket(url);
        socket.onopen = function (evt) {
            socket.send(data);
            socket.onmessage = function (evt) {
                var obj = JSON.parse(evt.data);
                port.postMessage(obj);
            };
            socket.onerror = function (evt) { // fixed: original had "oneerror", which never fires
                socket.close();
                socket = null;
            };
            socket.onclose = function (evt) {
                socket = null;
            };
        };
    } else {
        socket.send(data);
    }
}
Clearly, as per the current logic, in case of an error the current request's data may not be sent at all.
To be frank, it sometimes gives an error that the WebSocket is still in the CONNECTING state. The connection often breaks due to network issues. In short, it does not work perfectly.
I've read about a better design, How to wait for a WebSocket's readyState to change, but it does not cover all the cases I need to handle.
I've also Googled this but could not find the correct procedure.
So what is the right way to send regular data through WebSockets in a way that handles issues like connection breaks?
An event you don't seem to take full advantage of is onclose, which should work really well, since it's called whenever the connection terminates. It is more reliable than onerror, because not all connection disruptions result in an error.
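For example, reconnecting from onclose with a simple backoff covers both error and non-error disconnects. A sketch, where url and handleMessage are assumed from your surrounding code:
var retryDelay = 1000; // ms, grows on repeated failures

function connect() {
    var socket = new WebSocket(url);
    socket.onopen = function () {
        retryDelay = 1000; // reset the backoff once connected
    };
    socket.onmessage = handleMessage;
    socket.onclose = function () {
        // fires for errors and clean closes alike, so reconnect here
        setTimeout(connect, retryDelay);
        retryDelay = Math.min(retryDelay * 2, 30000);
    };
    return socket;
}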
I personally use Socket.IO; it enables real-time, bidirectional, event-based communication between client and server.
It is event-driven. Events such as
on connection: socket.on('connection', callback);
and
on disconnect: socket.on('disconnect', callback);
are built into Socket.IO, so it can help with your connection concerns. It's very easy to use; check out their site if you are interested.
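A minimal server-side sketch of those events, assuming Socket.IO is installed and listening on an arbitrary port:
var io = require('socket.io')(3000); // port is arbitrary here

io.on('connection', function (socket) {
    console.log('client connected');
    socket.on('disconnect', function (reason) {
        console.log('client disconnected:', reason);
    });
});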
I use a two-layer scheme on the client: an abstract-wrapper plus a websocket-client.
The responsibilities of the websocket-client are interacting with the server, recovering the connection, and providing interfaces (an event emitter and some methods) to the abstract-wrapper.
The abstract-wrapper is a high-level layer which interacts with the websocket-client, subscribes to its events, and aggregates data while the connection has temporarily failed. The abstract-wrapper can expose any interface to the application layer, such as a Promise, an EventEmitter, and so on.
At the application layer, I just work with the abstract-wrapper and don't worry about the connection or data loss. It is, of course, a good idea to expose the connection status and data-sending confirmations here, because they're useful.
If necessary, I can provide some example code.
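To give a rough idea of the wrapper, here is a sketch that queues outgoing data while the underlying client is down and flushes it on recovery. wsClient, its open event and its connected flag are assumed interfaces, not a real library:
function Wrapper(wsClient) {
    this.queue = [];
    this.client = wsClient;
    var self = this;
    wsClient.on('open', function () {
        // connection recovered: flush everything aggregated meanwhile
        while (self.queue.length) {
            wsClient.send(self.queue.shift());
        }
    });
}

Wrapper.prototype.send = function (data) {
    if (this.client.connected) {
        this.client.send(data);
    } else {
        this.queue.push(data); // aggregate while the connection is down
    }
};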
This apparently is a server issue, not a problem in the client.
I don't know what the server looks like here, but this was a huge problem for me in the past when I was working on a WebSocket-based project: the connection would continuously break.
So I created a WebSocket server in Java, and that resolved my problem.
WebSockets depend on lots of settings. If you're using servlets, the servlet container's settings matter; if you're using PHP, the Apache and PHP settings matter. For example, if you create a WebSocket server in PHP and PHP has a default timeout of 30 seconds, the connection will break after 30 seconds. If keep-alive is not set, the connection won't stay alive, and so on.
What you can do as a quick solution is:
keep sending pings to the server at a certain interval (every 2 or 3 seconds), so that if the WebSocket is disconnected the client finds out and can invoke onclose or ondisconnect; note that there is no way to detect a broken connection other than failing to send something (see the sketch below)
check the server's keep-alive header
if you have access to the server, check its timeouts etc.
I think that would help.
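A client-side ping sketch for the first point; the {type: 'ping'} message shape is made up, so use whatever your server expects:
var HEARTBEAT_INTERVAL = 3000; // every 3 seconds

setInterval(function () {
    if (socket && socket.readyState === WebSocket.OPEN) {
        socket.send(JSON.stringify({ type: 'ping' }));
    } else {
        // closed or still connecting: a failed heartbeat is the signal
        // to run your onclose/reconnect logic
    }
}, HEARTBEAT_INTERVAL);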

What is the right way to manage connections to mongoDB, using node?

I'm using node.js and MongoDB. Right now, for my test app, the connection to the db is in the main node file, but I guess this is bad practice.
What I want/need: a secure way (i.e. not storing the password in files users can access) to connect to the db just when needed.
For example: I want several admin pages (users, groups, etc.). Each page should connect to the db, find some data, and display it. It also has a form for adding a document to the db and a delete option.
I thought maybe to create some kind of connection function: send it what you want to do (add, update, find, delete), the target (collection name), and whatever else it needs. But I can't just include this function, because then it would reveal the password to the db. So what can I do?
Thanks!
I'm going to answer your question bit by bit.
Right now, for my test app, the connection to the db is in the main node file
This is fine, though you might want to put it in a separate file for easier reuse. Node.js is a continuously running process, so in theory you could serve all of your HTTP responses using the same connection to the database. In practice you'd want a connection pool, but the MongoDB driver for Node.js already creates one automatically.
Each page should connect to the db, find some data, and display it.
When you issue a query on the MongoDB driver, it will automatically use a connection from its internal connection pool, as long as you gave it the credentials when your application was starting up.
What I want/need: a secure way (i.e. not storing password on files users can access) to connect to the db just when needed.
I would advise keeping your application configuration (any variables that depend on the environment in which the app is running) in a separate file which you don't commit to your VCS. A module like node-config can help a great deal with that.
The code you will end up with, using node-config, is something like:
config/default.json:
{
    "mongo": null
}
This is the default configuration file which you commit.
config/local.json:
{
    "mongo": "mongodb://user:pass@host:port/db"
}
The local.json should be ignored by your VCS. It contains secret sauce.
connection.js:
var config = require('config');
var MongoClient = require('mongodb').MongoClient;

var cache;

module.exports = function (callback) {
    if (cache) {
        return callback(cache);
    }
    MongoClient.connect(config.get('mongo'), function (err, db) {
        if (err) {
            console.error(err.stack);
            process.exit(1);
        }
        cache = db;
        callback(db);
    });
};
This is an incomplete example of how you might handle reusing the database connection. Note how the configuration is retrieved using config.get(). An actual implementation should have more robust error handling and prevent multiple simultaneous connection attempts. Using Promises would make all of that a lot easier; a variant follows the next snippet.
index.js:
var connect = require('./connection');

connect(function (db) {
    // find() lives on a collection, not the db object ('users' is a placeholder name)
    db.collection('users').find({whatever: true});
});
Now you can just require your database file anywhere you want and reuse the same database connection, which handles pooling for you, and you don't have your passwords hard-coded anywhere.
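As a follow-up to the Promises remark above, a Promise-based connection.js might look like this. A sketch, assuming a driver version whose connect() returns a Promise when no callback is given; caching the promise rather than the db also prevents duplicate connections:
var config = require('config');
var MongoClient = require('mongodb').MongoClient;

var cache;

module.exports = function () {
    if (!cache) {
        // the promise is created once and shared by every caller
        cache = MongoClient.connect(config.get('mongo'));
    }
    return cache;
};

// usage:
// require('./connection')().then(function (db) { /* queries */ });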

Cassandra connections best practice

I'm using Node.js with Cassandra and I wonder what the best way to interact is. I have multiple modules that interact with Cassandra, and I want to know if it's better to:
keep a single connection for all the modules;
set up a connection for each module; or
connect to Cassandra each time I have a request.
This web application uses Cassandra for most of the requests.
I would recommend using the DataStax Node.js driver for Cassandra; it features connection pooling and transparent failover. You only need to execute your queries and it will handle the rest for you.
var assert = require('assert'); // needed for assert.ifError below
var cassandra = require('cassandra-driver');

var client = new cassandra.Client({
    contactPoints: ['host1', 'host2'],
    keyspace: 'ks1'
});

var query = 'SELECT email, last_name FROM user_profiles WHERE key=?';

// the driver will handle connection pooling and failover
client.execute(query, ['guy'], function (err, result) {
    assert.ifError(err);
    console.log('User profile email ' + result.rows[0].email);
});
Disclaimer: I'm an active developer of the project
I'd pool connections and recycle them rather than going with one of the options you listed; that way you don't need to destroy already-created connections. The only thing I'd be wary of is having too large a pool, so make sure you set a sensible threshold.
Something like this (a sketch follows the list):
no connections are available in pool: create a connection (and add it back once finished using it)
connections are available in pool: fetch a connection from the pool
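A minimal sketch of that pooling logic; a hypothetical helper for illustration only, since the DataStax driver already does this internally:
function Pool(createConnection, maxSize) {
    this.create = createConnection;
    this.max = maxSize; // the sensible threshold mentioned above
    this.idle = [];
    this.size = 0;
}

Pool.prototype.acquire = function () {
    if (this.idle.length > 0) {
        return this.idle.pop(); // connections are available: fetch one
    }
    if (this.size >= this.max) {
        throw new Error('pool exhausted'); // or queue the request instead
    }
    this.size++; // none available: create a new connection
    return this.create();
};

Pool.prototype.release = function (conn) {
    this.idle.push(conn); // add it back once finished using it
};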
Reasons for choosing a pool rather than a hardcoded number:
keep a single connection for all the modules - this will be a bottleneck, unless you are running a single-threaded app, and you aren't
set up a connection for each module - you'd need to provide more context; this might be a good approach depending on how threaded each module is
connect to Cassandra each time I have a request - building connections isn't cheap (code below), so don't discard them!
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

Cluster cluster = Cluster.builder().addContactPoints("localhost").build();
long start = System.currentTimeMillis();
Session session = cluster.connect(); // this is the expensive part
System.out.println(String.format("Took %s ms", System.currentTimeMillis() - start));
Output: 490 ms.

Private messaging through node.js

I'm making a multiplayer (2-player) browser game in JavaScript. Every move a player makes will be sent to a server and validated before being transmitted to the opponent. Since WebSockets aren't ready for prime time yet, I'm looking at long polling as a method of transmitting the data, and node.js looks quite interesting! I've gone through some example code (chat examples, standard long-polling examples and suchlike), but all the examples I've seen seem to broadcast everything to every client, something I'm hoping to avoid. For general server messages that is fine, but I want two players to be able to square off in a lobby and go into "private messaging" mode.
So I'm wondering if there's a way to implement private messaging between two clients using node.js as a validating bridge? Something like this:
ClientA->nodejs: REQUEST
nodejs: VALIDATE REQUEST
nodejs->ClientA: VALID
nodejs->ClientB: VALID REQUEST FROM ClientA
You need some way to keep track of which clients are in a lobby together. You can do this with a simple global array, like process.lobby[1] = Array(ClientASocket, ClientBSocket) or something similar (possibly with some additional data, like nicknames and such), where ClientXSocket is the socket object of each client that connects.
Now you can hook the lobby id (1 in this case) onto each client's socket object; a sort of session variable (without the hassle of session ids), if you will.
// I just made a hashtable to put all the data in,
// so that we don't clutter up the socket object too much.
socket.sessionData = socket.sessionData || {}; // make sure the hashtable exists
socket.sessionData['lobby'] = 1;
This also lets you add an event hook to the socket object, so that when the client disconnects, the socket can remove itself from the lobby array immediately and message the remaining clients that this client has disconnected.
// see link in paragraph above for removeByValue
socket.on('close', function (err) {
    process.lobby[socket.sessionData['lobby']].removeByValue(socket);
    // then notify the remaining lobby members that this client has disconnected
});
I've used socket in place of net.Stream or request.connection or whatever the object is.
Remember that with HTTP, if you don't have keep-alive connections, the TCP connection will close, which of course makes the client unable to remain in a lobby. If you're using a plain TCP connection without HTTP on top (say from a Flash application or WebSockets), then you should be able to keep it open without worrying about keep-alive. There are other ways to solve this than what I've shown here, but I hope I got you started at least. The key is keeping a persistent object for each client; a sketch of the validate-and-forward flow follows.
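Putting it together, the flow from your diagram could look roughly like this. validate is a hypothetical check, and socket.write assumes a raw net.Stream:
function onRequest(socket, request) {
    if (!validate(request)) {
        return; // nodejs: VALIDATE REQUEST failed, nothing is forwarded
    }
    socket.write(JSON.stringify({ ok: true })); // nodejs -> ClientA: VALID
    var lobby = process.lobby[socket.sessionData['lobby']];
    lobby.forEach(function (other) {
        if (other !== socket) {
            // nodejs -> ClientB: VALID REQUEST FROM ClientA
            other.write(JSON.stringify({ from: 'opponent', data: request }));
        }
    });
}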
Disclaimer: I'm not a Node.js expert (I haven't even gotten around to installing it yet) but I have been reading up on it and I'm very familiar with browser js, so I'm hoping this is helpful somehow.
