Passport.js - use same token for different server

I'm using Passport.js and MongoDB for user login and POST API authentication. However, whenever I deploy my Node server to another AWS instance, I have to go through the signup process again, log in, and get a new token.
I know I can see the saved users and their JWT tokens in MongoDB. Is there any way I can copy the token and, when initializing the new database, save the same username/JWT pair by default, so I can use the same token string (rather than the password, though that would be easier) to pass the Passport authentication check?
Thanks!

It sounds like your deployment process involves tearing everything (application and MongoDB) down and rebuilding from zero, possibly with some seed data but without any of the "live" data in the AWS instance. Here are a couple of ideas:
Copy all the data from the old MongoDB instance to the new one as part of your deployment process. This will ensure that the users are present on the new instance and (should) mean they don't have to go through the signup process again. I'm not too familiar with MongoDB, but the mongodump and mongorestore utilities appear to be the standard way to dump a database and load it into another instance.
Set up your environment with two servers: a MongoDB server and an application server. This way you can tear down your application and create a new AWS instance just for the application without touching your MongoDB server. Just update the MongoDB connection configuration in your new application instance to point to the same MongoDB server you've been using.
The first option is more suitable if you have a very small application without too much data. If your database gets too large, you're going to experience long periods of downtime during deployment as you take the application down, copy the data out of the old Mongo instance, copy the data into the new Mongo instance, and bring the application back up.
The second option is probably the better one, although it does require some knowledge of networking and securing MongoDB so that only your application has access to your data.
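For the second option, here is a minimal sketch of what the connection setup might look like in a Mongoose-based Node app (the environment variable name and URI are illustrative, not from the original question):

// db.js - read the MongoDB location from the environment so each deployed
// instance can point at the same long-lived database server,
// e.g. MONGODB_URI=mongodb://db.internal.example.com/myapp
const mongoose = require('mongoose');
mongoose.connect(process.env.MONGODB_URI);
module.exports = mongoose;

Because the URI comes from the environment rather than being baked into the instance, a freshly deployed application connects to the existing database, and all users and their tokens survive the redeploy.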

Related

Use separate server for centralized users database

I am using Meteor 1.10 + MongoDB.
I have multiple mobile chat & information applications.
These mobile applications are natively developed using Meteor DDP libraries.
But I have the same user base for all the apps.
Now I want to create a separate Meteor instance on a separate individual server to keep the user base centralized.
I need suggestions on how I can achieve this architecture with Meteor, keeping reactivity and performance in mind.
For a centralized user base with full reactive functionality you need an Authorization Server, which will be used by your apps (= Resource Servers) to allow authenticated/authorized requests. This is basically the OAuth2 three-tier workflow.
See:
https://www.rfc-editor.org/rfc/rfc6749
https://www.oauth.com/
Login Service
You will also have to write your own login handler (Meteor.loginWithMyCustomAuthServer) in order to avoid DDP.connect, because with a remote connection you would have to manage two user bases (one for the app itself and one for the Authorization Server), and that gets really messy.
This login handler then retrieves the user account data once the OAuth2 authorization request has succeeded, making the Authorization Server's user base the single point of truth for any of your registered apps (read up on the OAuth2 workflow regarding clientId and secret).
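As a rough server-side sketch of such a login handler (all names are illustrative, and exchangeCodeWithAuthServer is a hypothetical helper that calls your Authorization Server):

import { Accounts } from 'meteor/accounts-base';

// Runs on the server for every login attempt; return undefined to let other handlers try.
Accounts.registerLoginHandler('myCustomAuthServer', function (options) {
  if (!options.myCustomAuthServer) return undefined;
  // Hypothetical helper: exchange the OAuth2 authorization code for account data.
  const userData = exchangeCodeWithAuthServer(options.myCustomAuthServer.code);
  // Create or update the local copy of the user; the Authorization Server
  // remains the single point of truth.
  return Accounts.updateOrCreateUserFromExternalService('myCustomAuthServer', {
    id: userData.id,
    profile: userData.profile
  });
});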
Subscribing to users
The Auth Server is the single point of truth where you create, update or delete your users, and on a successful login your local app will always get the latest user data synced from this accounts Auth Server (this is how Meteor does it with loginWith<Service>, too).
You then subscribe to your users on the app itself without any DDP remote connection. This of course works only if the user data you want is actually for online users.
If you want to subscribe for any user (where the data might not have been synced yet), you still need a remote subscription to a publication on the Authorization Server.
Note that in order to authenticate users with this remote subscription, you need an authenticated DDP request (which is also backed by the packages below).
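A rough client-side sketch of such an authenticated remote subscription (the URL and publication name are assumptions, and Accounts._storedLoginToken is an internal accounts-base helper):

import { DDP } from 'meteor/ddp-client';
import { Accounts } from 'meteor/accounts-base';

// Open a remote DDP connection to the Authorization Server (URL illustrative).
const remote = DDP.connect('https://accounts.example.com');

// Authenticate the remote connection by resuming with the stored login token,
// then subscribe to a publication assumed to exist on the Auth Server.
remote.call('login', { resume: Accounts._storedLoginToken() }, (err) => {
  if (err) return console.error('remote login failed', err);
  remote.subscribe('usersByIds', ['someUserId']);
});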
Implementation
Warning: the following is my own implementation. I faced the same issue myself and found no other implementation before mine.
There is a fully working Accounts server (though it is constantly a work in progress):
https://github.com/leaonline/leaonline-accounts
It uses an OAuth2 Node.js implementation, which has been wrapped inside a Meteor package:
https://github.com/leaonline/oauth2-server
and the respective login handler has also been created:
https://github.com/leaonline/meteor-accounts-lea
So finally I found a workaround. It might not be the perfect way to handle this, but to my knowledge it has worked well for me. But yes, I am still open to suggestions.
Currently I have 4 connecting applications which depend on the same user base.
So I decided to build an SSO (a centralized server for managing the users database).
All 4 connecting applications ping the SSO for user authentication and for getting user-related data.
Now those 4 connecting applications are developed using Meteor.
The main challenge here was to make things reactive/realtime, e.g. chat/messaging, group creation, showing the users list, and listeners for newly registered users.
In this scenario the users database was on another remote server (the SSO), so on a connecting application I couldn't just do:
Meteor.publish("getUsers")
So on the connecting applications I decided to create a temporary collection called:
UserReactiveCollection
with the following structure:
{
  _id: 1,
  userId: '2',
  createdAt: new Date()
}
And I published a subscription:
Meteor.publish("subscribeNewUserSso", function () {
return UserReactiveCollection.find({});
});
For updating UserReactiveCollection I exposed REST APIs on each connecting application.
Those APIs receive data from the SSO and update UserReactiveCollection, as sketched below.
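A rough sketch of what such an endpoint could look like on a connecting app (the route, payload shape, and use of body-parser are assumptions, not the actual code):

import { Meteor } from 'meteor/meteor';
import { WebApp } from 'meteor/webapp';
import bodyParser from 'body-parser';

// Parse JSON bodies for incoming pings from the SSO server.
WebApp.connectHandlers.use(bodyParser.json());

// Endpoint the SSO pings when a new user registers (path illustrative);
// Meteor.bindEnvironment allows collection calls from a raw connect handler.
WebApp.connectHandlers.use('/api/sso/user-registered', Meteor.bindEnvironment((req, res) => {
  if (req.method === 'POST' && req.body && req.body.userId) {
    UserReactiveCollection.insert({ userId: req.body.userId, createdAt: new Date() });
    res.writeHead(200);
    res.end('ok');
  } else {
    res.writeHead(400);
    res.end('bad request');
  }
}));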
On the SSO side, whenever a new user is registered, I ping those APIs (on the connecting applications) and send the inserted userId in the payload.
The connecting applications then receive the onDataChanged ping from the subscription and get the userId.
Using that userId, the connecting applications ping back to the SSO, get the user details for that specific userId, and prepend them to the users list.
That's how I got it all working, so for now I am marking my answer as accepted. But as I mentioned above: "It might not be the perfect way to handle this, but to my knowledge it has worked well for me. But yes, I am still open to suggestions."
And special thanks to @Jankapunkt for helping me out.

How to implement a client-to-server connection that is secure and syncs

I'm struggling to understand how the PouchDB interactions should be implemented. Say I want an offline-first app with syncing and auth. Would I need to implement a middleman such as a Node server to ensure my credentials to my main server are protected? With a PouchDB on the client created as new PouchDB('name', 'https://username:password@server/dbname'), my creds to my main database are exposed. Would it be better to connect to a Node server and have that decide whether or not to allow access?
How would this be done? Can I handle a direct connection to the server with auth and have it be secure, or is a middleman needed to ensure security?
If a middleman is needed, would you need to implement a sort of API, i.e.
// client
const db = new PouchDB('days')
db.sync(remote)

// server
app.get('/db/days', (req, res) => { /* do some pouch stuff for each db */ })
https://github.com/pouchdb-community/pouchdb-authentication
Somewhat simplified, if your application is backed by an application 'master' database and it runs using a single set of credentials, you need a middle layer: you will then need to multiplex all users' data into a single database.
Applications backed by CouchDB/Cloudant often instead use the 'database-per-user' pattern, meaning that each application user has their own database and their own credentials. A lot of things then become conceptually simpler, and a middle layer might not be required.
Note that the 'database-per-user' pattern needs some thought to scale well if you intend to cater for millions of users.
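As a minimal client-side sketch of the database-per-user pattern (server URL, database name, and credentials are all illustrative):

const PouchDB = require('pouchdb');

// Each user syncs a local db with their own remote db using their own
// credentials, supplied at login time - no shared admin credentials on the client.
const local = new PouchDB('days');
const remote = new PouchDB('https://couch.example.com/userdb-alice', {
  auth: { username: 'alice', password: 'entered-at-login' }
});
local.sync(remote, { live: true, retry: true });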
On Cloudant you can also use API keys to define access.
If you want the simplicity of the db-per-user pattern without (some of) the drawbacks, you may be able to draw some inspiration from Cloudant Envoy (https://github.com/cloudant-labs/envoy) -- a thin proxy that multiplexes users' data into a single db, whilst still presenting the db-per-user API surface outwards. Disclaimer: I'm one of the authors of Envoy.
Another approach that I use depends on crypto-pouch (https://github.com/calvinmetcalf/crypto-pouch) to encrypt all of your databases on the client. The first time the site is visited, username/password is required to access a cloud couch instance and get things installed on the client.
During this process, a pouchdb database is created on the client for each possible user (retrieved from the cloud couch instance), with each database encrypted with the user's password, and in each database is placed a single document that contains a master password. In addition to these user databases, the 'main' database that stores real data is created and encrypted with the master password.
Subsequent visits to the site, whether online or offline, will require the user to enter their username/password, which will attempt to unlock the appropriate user database and retrieve the master password, which is then used to unlock the main database. Only with the master password can the data be accessed and a sync performed to the cloud instance.
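A rough sketch of that unlock flow (database names and the document id are assumptions; db.crypto(password) is the call crypto-pouch provides):

const PouchDB = require('pouchdb');
PouchDB.plugin(require('crypto-pouch'));

async function unlockMainDb(username, password) {
  // The per-user db holds a single doc with the master password,
  // encrypted with the user's own password.
  const userDb = new PouchDB('user-' + username);
  await userDb.crypto(password);
  const { masterPassword } = await userDb.get('master'); // hypothetical doc id
  // The 'main' db holding the real data is encrypted with the master password.
  const mainDb = new PouchDB('main');
  await mainDb.crypto(masterPassword);
  return mainDb; // only now can data be read and synced
}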

Creating database model with Mongoose

I have a node.js app that is essentially a sketchpad, and currently I'm working on a feature to let users save all of the sketches they've drawn during a "session" to a database so they can pick back up later where they left off. I'm using a MongoDB database that I'm connecting to via the Mongoose ORM.
The server is started in main.js, which is currently where I'm opening the connection to the DB; however, the code for saving sketch data (currently just saved to a JSON file on the server) is in a separate file. Based on this tutorial, it seems that the code for creating the models for a document should go inside a callback function that is run once the connection is open. But given that the logic for saving sketches lives in a different file from where the connection is opened, and since it says here that model instances aren't created/removed until the connection is open, it seems there would either have to be a way to open separate connections to create the models, or a way to initiate the creation of the sketch model from the connection callback code in main.js.
I'm very new to MongoDB and Mongoose, so I'm not sure if this is the correct way to think about creating models. But given the needs of the feature, what would be the correct approach to opening the connection to the database and saving the sketches to the database once the save-sketch function is called?
You may be overthinking this.
Just open your Mongoose connection (a shared connection pool) via a mongoose.connect call during app startup, and then create and save your Mongoose models whenever you need to. Your models will use the shared connection pool as needed and will wait until the connection is established if necessary.
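A minimal sketch of this (the URI and schema fields are illustrative):

const mongoose = require('mongoose');

// Open the shared connection pool once at app startup (main.js in your setup).
mongoose.connect('mongodb://localhost/sketchpad');

// The model can live in a separate file that simply requires mongoose.
const Sketch = mongoose.model('Sketch', new mongoose.Schema({
  name: String,
  strokes: [Object],
  createdAt: { type: Date, default: Date.now }
}));

// Saving works even if the connection isn't open yet:
// Mongoose buffers the operation until the connection is established.
new Sketch({ name: 'demo', strokes: [] }).save()
  .then(() => console.log('sketch saved'))
  .catch(console.error);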

Most efficient way of authenticating and storing user login info in node.js

I know of two ways of storing and authenticating the user login info:
Storing the user id in a server-side session and then, when someone calls the server, checking if they have a user session (using node client-sessions).
When the user logs in, store an authentication token in the users table and store the token locally on the user's client as well. Then when the user calls the server, they send the authentication token as a header, and the server checks whether the token is in the users table.
While both of these ways are viable and applicable, I have problems/questions with both of them:
I've been told storing the info in a session goes against the REST API idea of automatic scalability. Is this true, and is there a way around it?
When storing the authentication key, won't you only be able to store one key/instance per user? What would you do if you wanted to have the same account logged in on two computers or clients? (I know I can just create an authentication table, but what if a client loses a token and the authentication token stays forever in the authentication table?)
If there are better ways of doing this, please bring them up, but I am very confused about which direction to move in. I am gravitating toward the second way, but I still like the first way.
Edit: I have narrowed it down to JWT and my second idea. I don't know which would be better with Node.
How about JSON Web Tokens? They're a variant of the second method you mention and are a recognised industry standard, so you can easily find an implementation for your stack.
You can store the tokens in a key-value store like Redis instead of a relational database, which will be much faster. Redis also supports timing out a key after a while, so expired tokens will disappear automatically. You can also set it up so that each token is invalidated once used, and any request to the API returns a new token for use in the next request, allowing users to continually refresh their token.
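A minimal sketch of that setup, assuming the jsonwebtoken package and the redis v3 callback API (key naming and expiry values are illustrative):

const jwt = require('jsonwebtoken');
const redis = require('redis');

const SECRET = process.env.JWT_SECRET; // keep out of source control
const client = redis.createClient();

function issueToken(userId) {
  const token = jwt.sign({ sub: userId }, SECRET, { expiresIn: '1h' });
  // Mirror the token in Redis with a matching TTL so it can be revoked early.
  client.set('token:' + token, String(userId), 'EX', 3600);
  return token;
}

function verifyToken(token, callback) {
  jwt.verify(token, SECRET, (err, payload) => {
    if (err) return callback(err); // bad signature or expired
    // Reject tokens that were deleted from Redis (logged out / rotated).
    client.get('token:' + token, (redisErr, userId) => {
      if (redisErr) return callback(redisErr);
      if (!userId) return callback(new Error('token revoked'));
      callback(null, payload);
    });
  });
}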
Assuming you are using Express, you can use express-session for managing your sessions.
Then you need to add a suitable session store instead of the default MemoryStore, which is for debug use only and will not scale to more than one process instance (for the reasons you mentioned in your question).
For example, if you are using a PostgreSQL database, you could consider using connect-pg-simple. This stores your sessions in your DB, so session management does not prevent you from scaling your Node.js server. In addition, you can store multiple sessions per user, and they will expire (and get automatically erased) based on the maxAge you configure, solving the second problem you mentioned.
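A minimal sketch of that setup (connection string, secret, and maxAge are illustrative):

const express = require('express');
const session = require('express-session');
const pgSession = require('connect-pg-simple')(session);

const app = express();
app.use(session({
  // Sessions live in PostgreSQL, so any number of Node processes can share them.
  store: new pgSession({ conString: process.env.DATABASE_URL }),
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  // Expired sessions are pruned from the DB automatically.
  cookie: { maxAge: 30 * 24 * 60 * 60 * 1000 } // 30 days
}));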

How does Sails.js manage data without saving to the database

I am very new to MVC and Sails.js. I just started learning it yesterday and tried to learn it by doing something, but I am having some confusion regarding Sails models.
After creating the application, I configured the database in config/connections.js. Then I created a blueprint API named user. The next thing I did was start the server and load the following URL:
http://localhost:1337/user/create?user=Mr.X
I didn't configure anything in api/models/user.js, so it is not supposed to save any data in the database. When I browse my database, as expected I can't see any record. But when I load the following URL:
http://localhost:1337/user/
I can see the record of Mr.X there, and even if I restart the server the record of Mr.X is still there. What I can't understand is how this is happening. How is Sails saving this data without touching the configured database? Is this normal in all MVC frameworks, or is it just Sails that does this?
I'm guessing that you set up a connection in config/connections.js, but didn't specify a default connection in config/models.js, so your app is still using the default localDiskDb connection. You can see this database by opening the .tmp/localDiskDb.db file in your project. It's a pretty handy development tool.
See the docs for config/models.js for more info on global model settings. You can also set a connection on a per-model basis using the connection property in the model's class file (e.g. api/models/User.js).
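For example, a minimal config/models.js might look like this (the connection name must match an entry you defined in config/connections.js):

// config/models.js
module.exports.models = {
  connection: 'someMongodbServer', // illustrative name from config/connections.js
  migrate: 'safe'
};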
All your user data gets stored in localDiskDb.db by default.
You can use the Postman app for retrieving and inserting records.
It's better for you to first go through these videos:
https://www.youtube.com/watch?v=60PaCpTP5L4&list=PLLxyAuVpwujMQjlsF9l_qojC31m83NOCG&index=2
