I'm trying to implement Google sign-in and API access for a web app with a Node.js back end. Google's docs provide two options using a combo of platform.js client-side and google-auth-library server-side:
Google Sign-In with back-end auth, via which users can log into my app using their Google account. (auth2.signIn() on the client and verifyIdToken() on the server.)
Google Sign-in for server-side apps, via which I can authorize the server to connect to Google directly on behalf of my users. (auth2.grantOfflineAccess() on the client, which returns a code I can pass to getToken() on the server.)
I need both: I want to authenticate users via Google sign-in; and, I want to set up server auth so it can also work on behalf of the user.
I can't figure out how to do this with a single authentication flow. The closest I can get is to do the two in sequence: authenticate the user first with signIn(), and then (as needed), do a second pass via grantOfflineAccess(). This is problematic:
The user now has to go through two authentications back to back, which is awkward and makes it look like there's something broken with my app.
In order to avoid running afoul of popup blockers, I can't give them those two flows on top of each other; I have to do the first authentication, then supply a button to start the second authentication. This is super-awkward because now I have to explain why the first one wasn't enough.
Ideally there's some variant of signIn() that adds the offline access into the initial authentication flow and returns the code along with the usual tokens, but I'm not seeing anything. Help?
(Edit: Some advice I received elsewhere is to implement only flow #2, then use a secure cookie to store some sort of user identifier that I check against the user account with each request. I can see that this would work functionally, but it basically means I'm rolling my own login system, which would seem to increase the chance I introduce bugs in a critical system.)
To add an API to an existing Google Sign-In integration, the best option is to implement incremental authorization. For this, you need to use both google-auth-library and googleapis, so that users can have this workflow:
Authenticate with Google Sign-In.
Authorize your application to use their information to integrate it with a Google API. For instance, Google Calendar.
For this, your client-side JavaScript for authentication might require some changes to request
offline access:
$('#signinButton').click(function() {
auth2.grantOfflineAccess().then(signInCallback);
});
In the response, you will have a JSON object with an authorization code:
{"code":"4/yU4cQZTMnnMtetyFcIWNItG32eKxxxgXXX-Z4yyJJJo.4qHskT-UtugceFc0ZRONyF4z7U4UmAI"}
After this, you can exchange the one-time code for an access token and a refresh token.
Here are some workflow details:
The code is your one-time code that your server can exchange for its own access token and refresh token. You can only obtain a refresh token after the user has been presented an authorization dialog requesting offline access. If you've specified the select-account prompt in the OfflineAccessOptions [...], you must store the refresh token that you retrieve for later use because subsequent exchanges will return null for the refresh token
Therefore, you should use google-auth-library to complete this workflow in the back-end. For this,
you'll use the authentication code to get a refresh token. However, as this is an offline workflow,
you also need to verify the integrity of the provided code as the documentation explains:
If you use Google Sign-In with an app or site that communicates with a backend server, you might need to identify the currently signed-in user on the server. To do so securely, after a user successfully signs in, send the user's ID token to your server using HTTPS. Then, on the server, verify the integrity of the ID token and use the user information contained in the token
The final function to get the refresh token that you should persist in your database might look like
this:
const { OAuth2Client } = require('google-auth-library');
/**
 * Create a new OAuth2Client and go through the OAuth2 consent
 * workflow. Return the refresh token.
 */
async function getRefreshToken(code) {
  // Create an OAuth client to authorize the API call. Secrets should be
  // downloaded from the Google Developers Console.
  const oAuth2Client = new OAuth2Client(
    YOUR_CLIENT_ID,
    YOUR_CLIENT_SECRET,
    YOUR_REDIRECT_URL
  );
  // Exchange the one-time authorization code for tokens.
  const { tokens } = await oAuth2Client.getToken(code);
  // Verify the integrity of the ID token and use the user
  // information contained in the token.
  const ticket = await oAuth2Client.verifyIdToken({
    idToken: tokens.id_token,
    audience: YOUR_CLIENT_ID,
  });
  const idInfo = ticket.getPayload();
  console.log('Verified user:', idInfo.sub);
  // Persist this refresh token; subsequent exchanges will return null for it.
  return tokens.refresh_token;
}
At this point, you've refactored the authentication workflow to support Google APIs. However, you haven't asked the user to authorize it yet. Since you also need to grant offline access, you should request additional permissions through your client-side application. Keep in mind that you already need an active session.
const auth2 = gapi.auth2.getAuthInstance();
const newScope = 'https://www.googleapis.com/auth/calendar';
const googleUser = auth2.currentUser.get();
googleUser.grantOfflineAccess({ scope: newScope }).then(
  function (success) {
    console.log(JSON.stringify({ message: 'success', value: success }));
  },
  function (fail) {
    alert(JSON.stringify({ message: 'fail', value: fail }));
  }
);
You're done with the front-end changes and you're only missing one step: to create a Google API client in the back-end with the googleapis library, you need to use the refresh token from the previous step.
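As a rough sketch (the googleapis package is assumed, and the credential placeholders match the ones above), the back-end client built from the persisted refresh token might look like this:
const { google } = require('googleapis');

// Hypothetical helper: build an authorized Calendar client from the
// refresh token persisted during sign-in.
function getCalendarClient(refreshToken) {
  const oAuth2Client = new google.auth.OAuth2(
    YOUR_CLIENT_ID,
    YOUR_CLIENT_SECRET,
    YOUR_REDIRECT_URL
  );
  // The client library exchanges the refresh token for access tokens as needed.
  oAuth2Client.setCredentials({ refresh_token: refreshToken });
  return google.calendar({ version: 'v3', auth: oAuth2Client });
}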
For a complete workflow with a Node.js back-end, you might find my gist helpful.
During authentication (sign-in), you need to request the "offline" access type (the default is online), so you will get a refresh token which you can later use to get access tokens without further user consent/authentication. You don't need to grant offline access later; you only need to add the offline access_type during sign-in. I don't know about platform.js, but I have used the "passport" npm module. I have also used the "googleapis" npm module/library, which is official from Google.
https://developers.google.com/identity/protocols/oauth2/web-server
https://github.com/googleapis/google-api-nodejs-client
Check this:
https://github.com/googleapis/google-api-nodejs-client#generating-an-authentication-url
EDIT: You have a server side and you need to work on behalf of the user. You also want to use Google for signing in. You just need #2, Google Sign-in for server-side apps; there is no reason to combine options #1 and #2.
I see #2 as the proper way based on your requirements. If you just want to sign in, use basic scopes such as email and profile (OpenID Connect) to identify the user. If you want user-delegated permission (for example, to automatically create an event in the user's calendar), just add the offline access_type during sign-in. You can use plain sign-in for registered users and offline access for new users.
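For illustration, a minimal sketch of that single flow with the googleapis library might look like the following (client credentials, redirect URL, and scopes are placeholders):
const { google } = require('googleapis');

const oauth2Client = new google.auth.OAuth2(CLIENT_ID, CLIENT_SECRET, REDIRECT_URL);

// Request the sign-in scopes plus any API scopes, with offline access so
// the code exchange also returns a refresh token.
const authUrl = oauth2Client.generateAuthUrl({
  access_type: 'offline',
  scope: ['openid', 'email', 'profile', 'https://www.googleapis.com/auth/calendar'],
});

// After Google redirects back with ?code=..., exchange the code once for
// both the ID token (sign-in) and the refresh token (offline access).
async function handleCallback(code) {
  const { tokens } = await oauth2Client.getToken(code);
  oauth2Client.setCredentials(tokens);
  return tokens; // contains id_token, access_token and refresh_token
}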
Above is a single authentication flow.
For a new node.js project I'm working on, I'm thinking about switching over from a cookie based session approach (by this, I mean, storing an id to a key-value store containing user sessions in a user's browser) to a token-based session approach (no key-value store) using JSON Web Tokens (jwt).
The project is a game that utilizes socket.io - having a token-based session would be useful in such a scenario where there will be multiple communication channels in a single session (web and socket.io)
How would one provide token/session invalidation from the server using the jwt Approach?
I also wanted to understand what common (or uncommon) pitfalls/attacks I should look out for with this sort of paradigm. For example, if this paradigm is vulnerable to the same/different kinds of attacks as the session store/cookie-based approach.
So, say I have the following (adapted from this and this):
Session Store Login:
app.get('/login', function(request, response) {
  var user = { username: request.body.username, password: request.body.password };
  // Validate somehow
  validate(user, function(isValid, profile) {
    // Create session token
    var token = createSessionToken();
    // Add to a key-value database
    KeyValueStore.add({ token: { userid: profile.id, expiresInMinutes: 60 } });
    // The client should save this session token in a cookie
    response.json({ sessionToken: token });
  });
});
Token-Based Login:
var jwt = require('jsonwebtoken');

app.get('/login', function(request, response) {
  var user = { username: request.body.username, password: request.body.password };
  // Validate somehow
  validate(user, function(isValid, profile) {
    var token = jwt.sign(profile, 'My Super Secret', { expiresIn: '60m' });
    response.json({ token: token });
  });
});
--
A logout (or invalidate) for the Session Store approach would require an update to the KeyValueStore
database with the specified token.
It seems like such a mechanism would not exist in the token-based approach since the token itself would contain the info that would normally exist in the key-value store.
I too have been researching this question, and while none of the ideas below are complete solutions, they might help others rule out ideas, or provide further ones.
1) Simply remove the token from the client
Obviously this does nothing for server-side security, but it does stop an attacker by removing the token from existence (i.e. they would have had to steal the token prior to logout).
2) Create a token blocklist
You could store the invalid tokens until their initial expiry date, and compare them against incoming requests. This seems to negate the reason for going fully token based in the first place though, as you would need to touch the database for every request. The storage size would likely be lower though, as you would only need to store tokens that were between logout & expiry time (this is a gut feeling, and is definitely dependent on context).
3) Just keep token expiry times short and rotate them often
If you keep the token expiry times at short enough intervals, and have the running client keep track and request updates when necessary, number 1 would effectively work as a complete logout system. The problem with this method, is that it makes it impossible to keep the user logged in between closes of the client code (depending on how long you make the expiry interval).
Contingency Plans
If there ever was an emergency, or a user token was compromised, one thing you could do is allow the user to change an underlying user lookup ID with their login credentials. This would render all associated tokens invalid, as the associated user would no longer be able to be found.
I also wanted to note that it is a good idea to include the last login date with the token, so that you are able to enforce a relogin after some distant period of time.
In terms of similarities/differences with regards to attacks using tokens, this post addresses the question: https://github.com/dentarg/auth0-blog/blob/master/_posts/2014-01-07-angularjs-authentication-with-cookies-vs-token.markdown
The ideas posted above are good, but a very simple and easy way to invalidate all the existing JWTs is simply to change the secret.
If your server creates the JWT and signs it with a secret (JWS) before sending it to the client, simply changing the secret will invalidate all existing tokens and require all users to obtain a new token to authenticate, as their old token suddenly becomes invalid according to the server.
It doesn't require any modifications to the actual token contents (or lookup ID).
Clearly this only works for an emergency case where you want all existing tokens to expire; for per-token expiry, one of the solutions above is required (such as a short token expiry time or invalidating a stored key inside the token).
This is primarily a long comment supporting and building on the answer by #mattway
Given:
Some of the other proposed solutions on this page advocate hitting the datastore on every request. If you hit the main datastore to validate every authentication request, then I see less reason to use JWT instead of other established token authentication mechanisms. You've essentially made JWT stateful, instead of stateless if you go to the datastore each time.
(If your site receives a high volume of unauthorized requests, then JWT would deny them without hitting the datastore, which is helpful. There are probably other use cases like that.)
Given:
Truly stateless JWT authentication cannot be achieved for a typical, real world web app because stateless JWT does not have a way to provide immediate and secure support for the following important use cases:
User's account is deleted/blocked/suspended.
User's password is changed.
User's roles or permissions are changed.
User is logged out by admin.
Any other application critical data in the JWT token is changed by the site admin.
You cannot wait for token expiration in these cases. The token invalidation must occur immediately. Also, you cannot trust the client not to keep and use a copy of the old token, whether with malicious intent or not.
Therefore:
I think the answer from @matt-way, #2 (token blocklist), would be the most efficient way to add the required state to JWT-based authentication.
You have a blacklist that holds these tokens until their expiration date is hit. The list of tokens will be quite small compared to the total number of users, since it only has to keep blacklisted tokens until their expiration. I'd implement by putting invalidated tokens in redis, memcached or another in-memory datastore that supports setting an expiration time on a key.
You still have to make a call to your in-memory db for every authentication request that passes initial JWT auth, but you don't have to store keys for your entire set of users in there. (Which may or may not be a big deal for a given site.)
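As a minimal sketch of that idea (assuming the node-redis v4 client and jsonwebtoken, with tokens that carry a standard exp claim):
const { createClient } = require('redis');
const jwt = require('jsonwebtoken');

const redisClient = createClient();
redisClient.connect();

// On logout: blacklist the token, but only until its own expiry time.
async function blacklistToken(token) {
  const { exp } = jwt.decode(token);
  const ttl = exp - Math.floor(Date.now() / 1000);
  if (ttl > 0) {
    await redisClient.set(`bl:${token}`, '1', { EX: ttl });
  }
}

// On each authenticated request: reject blacklisted tokens.
async function isBlacklisted(token) {
  return (await redisClient.exists(`bl:${token}`)) === 1;
}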
I would keep a record of the jwt version number on the user model. New jwt tokens would set their version to this.
When you validate the jwt, simply check that it has a version number equal to the user's current jwt version.
Any time you want to invalidate old jwts, just bump the user's jwt version number.
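A rough sketch of that check as Express middleware (jsonwebtoken is assumed; findUserById and the jwtVersion field are placeholders for your own user model):
const jwt = require('jsonwebtoken');

async function requireAuth(req, res, next) {
  try {
    const token = req.headers.authorization.split(' ')[1];
    const payload = jwt.verify(token, process.env.JWT_SECRET);
    const user = await findUserById(payload.sub); // hypothetical lookup
    // Reject tokens minted before the user's jwt version was bumped.
    if (!user || payload.jwtVersion !== user.jwtVersion) {
      return res.status(401).json({ error: 'Token no longer valid' });
    }
    req.user = user;
    next();
  } catch (err) {
    res.status(401).json({ error: 'Invalid token' });
  }
}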
I haven't tried this yet, and it uses ideas from some of the other answers. The complexity here is to avoid a server-side data store call per request for user information. Most of the other solutions require a db lookup per request against a user session store. That is fine in certain scenarios, but this was created in an attempt to avoid such calls and keep whatever server-side state is required very small. You will end up recreating a server-side session, however small, to provide all the forced-invalidation features. But if you want to do it, here is the gist:
Goals:
Mitigate use of a data store (state-less).
Ability to force log out all users.
Ability to force log out any individual at any time.
Ability to require password re-entry after a certain amount of time.
Ability to work with multiple clients.
Ability to force a re-log in when a user clicks logout from a particular client. (To prevent someone "un-deleting" a client token after user walks away - see comments for additional information)
The Solution:
Use short lived (<5m) access tokens paired with a longer lived (few hours) client stored refresh-token.
Every request checks either the auth or refresh token expiration date for validity.
When the access token expires, the client uses the refresh token to refresh the access token.
During the refresh token check, the server checks a small blacklist of user ids - if found reject the refresh request.
When a client doesn't have a valid (not expired) refresh or auth token, the user must log back in, as all other requests will be rejected.
On login request, check user data store for ban.
On logout - add that user to the session blacklist so they have to log back in. You would have to store additional information to not log them out of all devices in a multi device environment but it could be done by adding a device field to the user blacklist.
To force re-entry after x amount of time - maintain last login date in the auth token, and check it per request.
To force log out all users - reset token hash key.
This requires you to maintain a blacklist (state) on the server, assuming the user table contains banned-user information. The invalid-sessions blacklist is a list of user ids. This blacklist is only checked during a refresh token request. Entries are required to live on it as long as the refresh token TTL. Once the refresh token expires, the user would be required to log back in.
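A minimal sketch of the refresh step (Express, jsonwebtoken, and the two secrets are assumptions; refreshBlacklist stands in for whatever small user-id store you choose):
// Hypothetical refresh endpoint: the user-id blacklist is only consulted here,
// never on ordinary requests carrying a short-lived access token.
app.post('/token/refresh', async (req, res) => {
  try {
    const payload = jwt.verify(req.body.refreshToken, REFRESH_SECRET);
    // Small blacklist of user ids (kept in memory, Redis, etc.).
    if (await refreshBlacklist.has(payload.sub)) {
      return res.status(401).json({ error: 'Please log in again' });
    }
    const accessToken = jwt.sign({ sub: payload.sub }, ACCESS_SECRET, {
      expiresIn: '5m',
    });
    res.json({ accessToken });
  } catch (err) {
    res.status(401).json({ error: 'Invalid refresh token' });
  }
});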
Cons:
Still required to do a data store lookup on the refresh token request.
Invalid tokens may continue to operate for access token's TTL.
Pros:
Provides desired functionality.
Refresh token action is hidden from the user under normal operation.
Only required to do a data store lookup on refresh requests instead of every request, i.e. one every 15 minutes instead of one per second.
Minimizes server side state to a very small blacklist.
With this solution an in-memory data store like Redis isn't needed, at least not for user information, as the server is only making a db call every 15 or so minutes. If using Redis, storing a valid/invalid session list there would be a very fast and simpler solution. There would be no need for a refresh token: each auth token would have a session id and device id, which could be stored in a Redis table on creation and invalidated when appropriate. They would then be checked on every request and rejected when invalid.
An approach I've been considering is to always have an iat (issued at) value in the JWT. Then when a user logs out, store that timestamp on the user record. When validating the JWT just compare the iat to the last logged out timestamp. If the iat is older, then it's not valid. Yes, you have to go to the DB, but I'll always be pulling the user record anyway if the JWT is otherwise valid.
The major downside I see to this is that it'd log them out of all their sessions if they're in multiple browsers, or have a mobile client too.
This could also be a nice mechanism for invalidating all JWTs in a system. Part of the check could be against a global timestamp of the last valid iat time.
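A small sketch of that check (findUserById and the lastLogoutAt field are placeholders for your own user model; iat is in seconds):
// Reject tokens issued before the user's last logout (or before the
// optional global cutoff for invalidating all JWTs in the system).
async function isTokenStillValid(payload, globalCutoffSeconds = 0) {
  const user = await findUserById(payload.sub); // record is loaded anyway
  if (!user) return false;
  const lastLogout = user.lastLogoutAt
    ? Math.floor(user.lastLogoutAt.getTime() / 1000)
    : 0;
  return payload.iat > Math.max(lastLogout, globalCutoffSeconds);
}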
A bit late for this answer, but maybe it will help someone.
From the client side, the easiest way is to remove the token from the browser's storage.
But what if you want to destroy the token on the Node server?
The problem with the jsonwebtoken package is that it doesn't provide any method or way to destroy a token.
You may use the different JWT-related methods mentioned above, but here I go with jwt-redis.
So, in order to destroy the token on the server side, you may use the jwt-redis package instead of jsonwebtoken.
This library (jwt-redis) replicates the entire functionality of the jsonwebtoken library, with one important addition: jwt-redis allows you to store a token identifier in Redis to verify validity. The absence of the token identifier in Redis makes the token invalid. To destroy a token in jwt-redis, there is a destroy method.
It works in this way:
Install jwt-redis from npm
To Create:
var redis = require('redis');
var JWTR = require('jwt-redis').default;
var redisClient = redis.createClient();
var jwtr = new JWTR(redisClient);

const secret = 'secret';
const tokenIdentifier = 'test';
const payload = { jti: tokenIdentifier }; // you can put other data in the payload as well

jwtr.sign(payload, secret)
  .then((token) => {
    // your code
  })
  .catch((error) => {
    // error handling
  });
To verify:
jwtr.verify(token, secret);
To Destroy:
// pass the tokenIdentifier if a jti was set during signing, otherwise pass the token itself
jwtr.destroy(tokenIdentifier); // or jwtr.destroy(token)
Note:
1). You can provide expiresIn during signing of the token in the same way as it is provided in jsonwebtoken.
2). If jti is not passed during signing of the token, then a jti is generated randomly by the library.
Maybe this will help you or somebody else. Thanks.
I'm a bit late here, but I think I have a decent solution.
I have a "last_password_change" column in my database that stores the date and time when the password was last changed. I also store the date/time of issue in the JWT. When validating a token, I check if the password has been changed after the token was issued and if it was the token is rejected even though it hasn't expired yet.
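A brief sketch of that comparison (the last_password_change field on the user record is the assumption here; iat is in seconds):
// A token issued before the last password change is rejected, even if unexpired.
function issuedBeforePasswordChange(payload, user) {
  const changedAt = Math.floor(user.last_password_change.getTime() / 1000);
  return payload.iat < changedAt;
}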
Keep an in-memory list like this
user_id revoke_tokens_issued_before
-------------------------------------
123 2018-07-02T15:55:33
567 2018-07-01T12:34:21
If your tokens expire in one week then clean or ignore the records older than that. Also keep only the most recent record of each user.
The size of the list will depend on how long you keep your tokens and how often users revoke their tokens.
Use the db only when the table changes. Load the table into memory when your application starts.
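A minimal sketch of that in-memory table (the persistence step is a placeholder):
// user_id -> revoke_tokens_issued_before (milliseconds since epoch),
// loaded from the db at startup.
const revokedBefore = new Map();

function isRevoked(payload) {
  const cutoff = revokedBefore.get(payload.sub);
  return cutoff !== undefined && payload.iat * 1000 < cutoff;
}

function revokeUserTokens(userId) {
  revokedBefore.set(userId, Date.now());
  // also persist this record to the db so it survives restarts
}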
You can have a "last_key_used" field in your DB on your user's document/record.
When the user logs in with username and password, generate a new random string, store it in the last_key_used field, and add it to the payload when signing the token.
When the user logs in using the token, check that the last_key_used in the DB matches the one in the token.
Then, when the user logs out, for instance, or if you want to invalidate the token, simply change that "last_key_used" field to another random value; any subsequent checks will fail, forcing the user to log in with username and password again.
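A short sketch of this idea (jsonwebtoken is assumed; updateUser and findUserById are hypothetical helpers for your own store):
const crypto = require('crypto');
const jwt = require('jsonwebtoken');

// On login: rotate the key, store it on the user, and embed it in the token.
async function issueToken(user) {
  const key = crypto.randomBytes(16).toString('hex');
  await updateUser(user.id, { last_key_used: key }); // hypothetical helper
  return jwt.sign({ sub: user.id, key }, SECRET, { expiresIn: '7d' });
}

// On each request: the key in the token must match the one in the DB.
async function isKeyValid(payload) {
  const user = await findUserById(payload.sub); // hypothetical helper
  return !!user && user.last_key_used === payload.key;
}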
I did it the following way:
Generate a unique hash, and then store it in Redis and in your JWT. This can be called a session.
We'll also store the number of requests the particular JWT has made: each time a JWT is sent to the server, we increment the requests integer. (This is optional.)
So when a user logs in, a unique hash is created, stored in redis and injected into your JWT.
When a user tries to visit a protected endpoint, you'll grab the unique session hash from your JWT, query redis and see if it's a match!
We can extend from this and make our JWT even more secure, here's how:
Every X requests a particular JWT has made, we generate a new unique session, store it in our JWT, and then blacklist the previous one.
This means that the JWT is constantly changing, which stops stale JWTs from being hacked, stolen, or otherwise misused.
A unique per-user string and a global string, hashed together to serve as the JWT secret, allow both individual and global token invalidation. This gives maximum flexibility at the cost of a db lookup/read during request auth. The strings are also easy to cache, since they seldom change.
Here's an example:
HEADER:ALGORITHM & TOKEN TYPE
{
"alg": "HS256",
"typ": "JWT"
}
PAYLOAD:DATA
{
"sub": "1234567890",
"some": "data",
"iat": 1516239022
}
VERIFY SIGNATURE
HMACSHA256(
base64UrlEncode(header) + "." +
base64UrlEncode(payload),
HMACSHA256('perUserString'+'globalString')
)
where the inner hash is computed with your local crypto library's SHA-256, for example in Node.js:
import sha256 from 'crypto-js/sha256';
const secret = sha256('perUserString' + 'globalString').toString();
for example usage see https://jwt.io (not sure they handle dynamic 256 bit secrets)
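A small sketch of signing and verifying with such a derived secret (crypto-js and jsonwebtoken are assumptions; rotate either string to invalidate one user's tokens or everyone's):
const jwt = require('jsonwebtoken');
const sha256 = require('crypto-js/sha256');

// Derive the signing secret from the per-user string plus the global string.
function deriveSecret(perUserString, globalString) {
  return sha256(perUserString + globalString).toString();
}

function signToken(userId, perUserString, globalString) {
  return jwt.sign({ sub: userId }, deriveSecret(perUserString, globalString), {
    expiresIn: '1h',
  });
}

function verifyToken(token, perUserString, globalString) {
  // Throws if the token was signed with a secret derived from old strings.
  return jwt.verify(token, deriveSecret(perUserString, globalString));
}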
Late to the party; my two cents are given below after some research.
During logout, make sure the following things happen:
Clear the client storage/session.
Update the user table's last login date-time and logout date-time whenever a login or logout happens, respectively. The login date-time should therefore always be greater than the logout date-time (or keep the logout date null if the user is currently logged in and has not yet logged out).
This is far simpler than keeping an additional blacklist table and purging it regularly. Multiple-device support requires an additional table keeping logged-in and logout dates, with some additional details like OS or client details.
Why not just use the jti claim (nonce) and store that in a list as a user record field (db dependent, but at the very least a comma-separated list is fine)? No need for a separate lookup; as others have pointed out, presumably you want to get the user record anyway, and this way you can have multiple valid tokens for different client instances ("logout everywhere" can reset the list to empty).
Give the tokens a 1-day expiry time.
Maintain a daily blacklist.
Put the invalidated / logged-out tokens into the blacklist.
For token validation, check the token expiry time first, and then the blacklist if the token is not expired.
For long session needs, there should be a mechanism for extending the token expiry time.
Kafka message queue and local blacklists
I thought about using a messaging system like Kafka. Let me explain:
You could have one microservice (let's call it userMgmtMs) which is responsible for login and logout and for producing the JWT token. This token then gets passed to the client.
Now the client can use this token to call different microservices (let's call one pricesMs). Within pricesMs there will be NO database check against the users table from which the initial token creation was triggered; that database only has to exist in userMgmtMs. Also, the JWT token should include the permissions/roles, so that pricesMs does not need to look up anything from the DB for Spring Security to work.
Instead of going to the DB in pricesMs, the JwtRequestFilter could provide a UserDetails object created from the data in the JWT token (without the password, obviously).
So, how to log out or invalidate a token? Since we do not want to call the database of userMgmtMs with every request to pricesMs (which would introduce quite a lot of unwanted dependencies), a solution could be to use a token blacklist.
Instead of keeping this blacklist central and having a dependency on one table from all microservices, I propose using a Kafka message queue.
The userMgmtMs is still responsible for the logout, and once this is done, it puts the token into its own blacklist (a table NOT shared among microservices). In addition, it sends a Kafka event with the content of this token to an internal Kafka topic to which all other microservices are subscribed.
Once the other microservices receive the Kafka event, they put it in their internal blacklists as well.
Even if some microservices are down at the time of logout, they will eventually come up again and receive the message at a later point.
Since Kafka is designed so that clients keep their own reference to which messages they have read, it is ensured that no client, down or up, will miss any of these invalidated tokens.
The only issue I can think of is that the Kafka messaging service again introduces a single point of failure. But it is kind of reversed, because if we have one global table where all invalid JWT tokens are saved and that db or microservice is down, nothing works. With the Kafka approach plus client-side deletion of JWT tokens on a normal user logout, a downtime of Kafka would in most cases not even be noticeable, since the blacklists are distributed among all microservices as internal copies.
In the rare case that you need to invalidate a user who was hacked while Kafka is down, that is where the problems start. In that case, changing the secret as a last resort could help. Or just make sure Kafka is up before doing so.
Disclaimer: I have not implemented this solution yet, but somehow I feel that most of the proposed solutions negate the idea of JWT tokens by introducing a central database lookup. So I was thinking about another solution.
Please let me know what you think: does it make sense, or is there an obvious reason why it can't work?
A good approach to invalidating a token still needs database trips, particularly when parts of the user record change, for example roles, password, email, and more. You can add a modified or updated_at field to the user record, which records the time of such a change, and then include it in the claims. When a JWT is authenticated, you compare the time in the claims with the one recorded in the DB; if the claim's time is earlier, the token is invalid. This approach is similar to storing the iat in the DB.
Note: if you're using the modified or updated_at option, you will also have to update it when the user logs in and out.
If you want to be able to revoke user tokens, you can keep track of all issued tokens on your DB and check if they're valid (exist) on a session-like table.
The downside is that you'll hit the DB on every request.
I haven't tried it, but I suggest the following method to allow token revocation while keeping DB hits to a minimum:
To lower the database checks rate, divide all issued JWT tokens into X groups according to some deterministic association (e.g., 10 groups by first digit of the user id).
Each JWT token will hold the group id and a timestamp created upon token creation. e.g., { "group_id": 1, "timestamp": 1551861473716 }
The server will hold all group ids in memory and each group will have a timestamp that indicates when was the last log-out event of a user belonging to that group.
e.g., { "group1": 1551861473714, "group2": 1551861487293, ... }
Requests with a JWT token that has an older group timestamp will be checked for validity (DB hit), and if valid, a new JWT token with a fresh timestamp will be issued for the client's future use.
If the token's group timestamp is newer, we trust the JWT (No DB hit).
So -
We only validate a JWT token against the DB if the token has an old group timestamp, and future requests won't get validated again until someone in the user's group logs out.
We use groups to limit the number of timestamp changes (say there's a user logging in and out like there's no tomorrow; this will only affect a limited number of users instead of everyone).
We limit the number of groups to limit the amount of timestamps held in memory
Invalidating a token is a breeze - just remove it from the session table and generate a new timestamp for the user's group.
If "logout from all devices" option is acceptable (in most cases it is):
Add the token version field to the user record.
Add the value in this field to the claims stored in the JWT.
Increment the version every time the user logs out.
When validating the token compare its version claim to the version stored in the user record and reject if it is not the same.
A db trip to get the user record in most cases is required anyway so this does not add much overhead to the validation process. Unlike maintaining a blacklist, where DB load is significant due to the necessity to use a join or a separate call, clean old records and so on.
USING REFRESHING OF JWT...
An approach that I take as being practical is to store a refresh token (which can be a GUID) and a counterpart refresh token ID (that does not change no matter how many refreshes are done) on the database and add them as claims for the user when the user's JWT is being generated. An alternative to a database can be used, e.g. memory cache. But I'm using database in this answer.
Then, create a JWT refresh Web API endpoint that the client can call before the expiry of the JWT. When the refresh is called, get the refresh token from the claims in the JWT.
On any call to the JWT refresh endpoint, validate the current refresh token and the refresh token ID as a pair on the database. Generate a new refresh token, and use it to replace the old refresh token on the database, using the refresh token ID. Remember they are claims that can be extracted from the JWT
Extract the user's claims from the current JWT. Begin the process of generating a new JWT. Replace the value of the old refresh token claim with the newly generated refresh token that has also been newly saved on the database. With all that, generate the new JWT and send it to the client.
So, after a refresh token has been used, whether by the intended user or an attacker, any other attempt to use a refresh token that is not paired on the database with its refresh token ID would not lead to the generation of a new JWT, preventing any client holding that refresh token ID from being able to use the backend anymore, and leading to a full logout of such clients (including the legitimate client).
That explains the basic information.
The next thing to add is a window for when a JWT can be refreshed, such that anything outside that window counts as suspicious activity. For example, the window could be the 10 minutes before the expiration of a JWT. The date-time a JWT was generated can be saved as a claim in that JWT itself. When such suspicious activity occurs, i.e. when someone tries to reuse a refresh token ID outside the window, or within the window after it has already been used, the refresh token ID should be marked as invalid. Hence, even the valid owner of the refresh token ID would have to log in afresh.
A refresh token that can't be found to be paired, on the database, with a presented refresh token ID implies that the refresh token ID should be invalidated. Because an idle user may try to use a refresh token that an attacker, for example, has already used.
A JWT that was stolen and used by an attacker, before the intended user does, would be marked as invalid too when the user attempts to use the refresh token too, as explained earlier.
The only situation not covered is if a client never attempts to refresh its JWT even after an attacker may have already stolen it. But this is unlikely to happen to a client that's not in custody (or something similar) of an attacker, meaning that the client cannot be predicted by the attacker as regards when the client would stop using the backend.
If the client initiates a usual logout. The logout should be made to delete the refresh token ID and associated records from the database, hence, preventing any client from generating a refresh JWT.
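A condensed sketch of such a rotation endpoint (Express and jsonwebtoken are assumptions; refreshTokens is a hypothetical store keyed by refresh token ID, and the claim names are placeholders):
const crypto = require('crypto');
const jwt = require('jsonwebtoken');

app.post('/token/refresh', async (req, res) => {
  // The refresh token and its ID travel as claims inside the (possibly expired) JWT.
  const old = jwt.verify(req.body.token, SECRET, { ignoreExpiration: true });
  const row = await refreshTokens.find(old.refreshTokenId); // hypothetical store
  if (!row || row.token !== old.refreshToken) {
    // Unknown or reused pair: kill the chain so every holder must log in again.
    if (row) await refreshTokens.remove(old.refreshTokenId);
    return res.status(401).json({ error: 'Please log in again' });
  }
  // Rotate: replace the stored refresh token, keeping the same refresh token ID.
  const newRefreshToken = crypto.randomUUID();
  await refreshTokens.update(old.refreshTokenId, newRefreshToken);
  const token = jwt.sign(
    { sub: old.sub, refreshToken: newRefreshToken, refreshTokenId: old.refreshTokenId },
    SECRET,
    { expiresIn: '15m' }
  );
  res.json({ token });
});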
The following approach could give a best-of-both-worlds solution:
Let "immediate" mean "~1 minute".
Cases:
User attempts a successful login:
A. Add an "issue time" field to the token, and keep the expiry time as needed.
B. Store a hash of the user's password hash, or create a new field, say tokenhash, in the user's table. Store the tokenhash in the generated token.
User accesses a url:
A. If the "issue time" is in the "immediate" range, process the token normally and don't change the "issue time". Depending upon the duration of "immediate", this is the window one is vulnerable in, but a short duration like a minute or two shouldn't be too risky (this is a balance between performance and security). There is no need to hit the db here.
B. If the token is not in the "immediate" range, check the tokenhash against the db. If it's okay, update the "issue time" field. If not, don't process the request (security is finally enforced).
User changes the tokenhash to secure the account. In the "immediate" future the account is secured.
We save the database lookups in the "immediate" range.
This is most beneficial if there are bursts of requests from the client in the "immediate" time duration.
WITHOUT USING REFRESHING OF JWT...
2 scenarios of an attack come to mind. One is about compromised login credentials. And the other is an actual theft of JWT.
For compromised login credentials, when a new login happens, normally send the user an email notification. If the customer doesn't consent to being the one who logged in, they should be advised to reset their credentials, which should save to the database/cache the date-time the password was last set (also set this when the user sets a password during initial registration). Whenever a user action is being authorized, fetch the date-time the user last changed their password from the database/cache and compare it to the date-time a given JWT was generated, and forbid the action for JWTs that were generated before that credentials-reset date-time, essentially rendering such JWTs useless. That means saving the date-time of generation of a JWT as a claim in the JWT itself. In ASP.NET Core, a policy/requirement can be used to do this comparison, and on failure, the client is forbidden. This consequently logs the user out on the backend, globally, whenever a reset of credentials is done.
For actual theft of JWT... A theft of JWT is not easy to detect but a JWT that expires easily solves this. But what can be done to stop the attacker before the JWT expires? It is with an actual global logout. It is similar to what was described above for credentials reset. For this, normally save on database/cache the date-time a user initiated a global logout, and on authorizing a user action, get it and compare it to the date-time of generation of a given JWT too, and forbid the action for JWTs that were generated before the said date-time of global logout, hence essentially rendering such JWTs useless. This can be done using a policy/requirement in ASP.NET Core, as previously described.
Now, how do you detect the theft of JWT? My answer to this for now is to occasionally alert user to globally log out and log in again, as this would definitely log the attacker out.
Simply add the following field to your user schema:
const userSchema = new mongoose.Schema({
  // ...your schema code
  destroyAnyJWTbefore: Date
});
Whenever you receive a POST request on /login, change the date on this document to Date.now().
Finally, in your authentication-checking code, i.e. in your middleware where you check for isAuthenticated or protect or whatever name you're using, simply add a validation that checks that the JWT's iat is greater than userDoc.destroyAnyJWTbefore.
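A brief sketch of that middleware (the User model, JWT_SECRET, and the token extraction are placeholders; the field name follows the schema above):
const jwt = require('jsonwebtoken');

async function protect(req, res, next) {
  try {
    const token = req.headers.authorization.split(' ')[1];
    const payload = jwt.verify(token, process.env.JWT_SECRET);
    const userDoc = await User.findById(payload.id);
    // iat is in seconds; reject tokens issued before the cutoff date.
    if (!userDoc ||
        (userDoc.destroyAnyJWTbefore &&
         payload.iat * 1000 < userDoc.destroyAnyJWTbefore.getTime())) {
      return res.status(401).json({ message: 'Token no longer valid' });
    }
    req.user = userDoc;
    next();
  } catch (err) {
    res.status(401).json({ message: 'Not authenticated' });
  }
}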
This solution is the best when it comes to security, if you want to destroy the JWT on the server side.
On the other hand, this solution is no longer purely client-side; it works against the main goal of using JWTs, which is to avoid storing token state on the server side.
It depends on your project context, but most likely you will want to be able to destroy the JWT from the server.
If you only want to destroy the token on the client side, simply remove the cookie from the browser (if your client is a browser); the same can be done on smartphones or any other client.
If you choose to destroy the token from the server side, I suggest that you use Redis to quickly perform this operation, by implementing the blacklist style mentioned by other users.
The main question now is: are JWTs useless? God knows.
Even if you delete the token from storage, it is still valid, but only for a short period, which reduces the probability of it being used maliciously.
You could create a deny list, and once you delete the token from storage you can add it to this list. If you have a microservice architecture, all other services which consume this token have to add extra logic to check this list. This centralizes your authentication, because each server has to check a centralized data structure.
I ended up with access and refresh tokens, where refresh token UUIDs are stored in the database and access token UUIDs are stored in a cache server as a whitelist of valid access tokens. For example, when there are critical changes in user data, such as access rights, the next thing I do is remove the user's access token from the cache-server whitelist; on the next access to any resource of my API, the auth service is asked about the token's validity, and if it isn't present in the cache-server whitelist, I reject the user's access token and force them to reauthorize with the refresh token. If I want to drop a user's session, or all of their sessions, I simply drop all of their tokens from the whitelist and remove the refresh tokens from the database, so they must re-enter credentials to continue accessing resources.
I know that my authentication is no longer stateless, but to be fair, why did I even want stateless authentication?
IAM solutions like Keycloak (which I have worked with) provide a token revocation endpoint, such as:
Token Revocation Endpoint
/realms/{realm-name}/protocol/openid-connect/revoke
Or, if you simply want to log out a user agent (or user), you can call an endpoint as well (this simply invalidates the tokens). Again, in the case of Keycloak, the relying party just needs to call the endpoint
/realms/{realm-name}/protocol/openid-connect/logout
Link in case you want to learn more
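For illustration, calling the revocation endpoint is a standard form-encoded POST (the realm, host, and client credentials below are placeholders; Node 18+ global fetch is assumed):
// Revoke a refresh token against Keycloak's OAuth 2.0 revocation endpoint.
const params = new URLSearchParams({
  client_id: 'my-client',        // placeholder
  client_secret: 'my-secret',    // placeholder, for confidential clients
  token: refreshToken,
});

await fetch(
  'https://keycloak.example.com/realms/my-realm/protocol/openid-connect/revoke',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: params,
  }
);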
An alternative would be to have a middleware script just for critical API endpoints.
This middleware script would check in the database if the token is invalidated by an admin.
This solution may be useful for cases where it is not necessary to completely block a user's access right away.
In this example, I am assuming the end user also has an account. If this isn't the case, then the rest of the approach is unlikely to work.
When you create the JWT, persist it in the database, associated with the account that is logging in. This does mean that just from the JWT you could pull out additional information about the user, so depending on the environment, this may or may not be OK.
On every request after that, not only do you perform the standard validation that (I hope) comes with whatever framework you use (which validates that the JWT itself is valid), but you also include something like the user ID or another token that needs to match the one in the database.
When you log out, delete the cookie (if using one) and invalidate the JWT (string) in the database. If the cookie can't be deleted from the client side, then at least the logout process will ensure the token is destroyed.
I found this approach, coupled with another unique identifier (so there are two persisted items in the database that are available to the front end) for the session, to be very resilient.
Here's how to do it without having to call the database on every request:
Keep a hashmap of valid tokens in a memory cache (e.g. an LRU with limited size).
When checking a token: if the token is in cache, return the result immediately, no database query is needed (majority case). Otherwise perform a full check (query the database, check for user status & invalidated tokens...). Then update the cache.
When invalidating a token: add it to a blacklist in database, then update the cache, send signal to all servers if needed.
Keep in mind that the cache should have a limited size, like an LRU, otherwise you might run out of memory.
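A bare-bones sketch of such a cache using a Map as an LRU (fullCheckAgainstDatabase is a placeholder for the database query, user-status check, and invalidated-token lookup):
const MAX_ENTRIES = 10000;
const cache = new Map(); // token -> userId

async function checkToken(token) {
  if (cache.has(token)) {
    // Refresh recency: delete + set moves the entry to the end of the Map.
    const userId = cache.get(token);
    cache.delete(token);
    cache.set(token, userId);
    return userId;
  }
  const userId = await fullCheckAgainstDatabase(token); // hypothetical full check
  if (userId) {
    cache.set(token, userId);
    if (cache.size > MAX_ENTRIES) {
      cache.delete(cache.keys().next().value); // evict the oldest entry
    }
  }
  return userId;
}

// When invalidating a token: add it to the blacklist in the database,
// call cache.delete(token) here, and signal the other servers to do the same.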
This seems really difficult to solve without a DB lookup upon every token verification. The alternative I can think of is keeping a blacklist of invalidated tokens server-side; it should be written to a database whenever a change happens, so the changes persist across restarts, and the server should load the current blacklist from the database on startup.
But if you keep it in server memory (a global variable of sorts), then it's not going to be scalable across multiple servers if you are using more than one, so in that case you can keep it in a shared Redis cache, which should be set up to persist the data somewhere (database? filesystem?) in case it has to be restarted, and every time a new server is spun up, it has to subscribe to the Redis cache.
As an alternative to a blacklist, using the same setup, you can do it with a hash saved in Redis per session, as this other answer points out (not sure that would be more efficient with many users logging in, though).
Does it sound awfully complicated? It does to me!
Disclaimer: I have not used Redis.