Is it bad practice to put additional claims in a JWT for authorization? - javascript

I want to implement an authorization layer in my microservices project. I have three microservices: customer-service, workspace-service and cloud-service. When a customer wants to create a cloud instance, he creates it on behalf of a workspace, which means the cloud instance belongs to the workspace rather than to him. He may also invite other customers to the workspace, and they then have access to the cloud instance. The data structure might look like this:
// workspace
{
  "workspaceId": "ws-123",
  "customers": ["customer-123", "customer-456"]
}
// cloud-instance
{
  "cloudId": "cloud-123",
  "workspaceId": "ws-123"
}
I want to implement authorization logic that checks whether a user has access to a particular cloud instance. To do that, I need the workspaceId somewhere in my authentication object. One option I can think of is to put the workspaceId in the JWT claims, so my JWT might look like this:
header.{ ..., workspaceId: ["ws-123"] }.signature
The drawback of this solution is that the workspaceId claim won't be updated until the token is refreshed.
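For illustration, minting such a token with the jsonwebtoken package might look like this (a sketch only; the secret and the customer shape are assumptions):

const jwt = require('jsonwebtoken');

function issueToken(customer) {
  return jwt.sign(
    {
      sub: customer.id,
      workspaceId: customer.workspaceIds, // custom claim, e.g. ["ws-123"]
    },
    process.env.JWT_SECRET, // assumed symmetric secret
    { algorithm: 'HS256', expiresIn: '30s' } // matches the 30-second refresh cycle
  );
}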
Another solution is to implement a service that queries data from workspace-service and uses it for validation:
const hasAccess = async (customerId, workspaceIdOnCloudInstance) => {
  // Ask workspace-service for the workspace this customer belongs to
  const actualWorkspaceId = await workspaceServiceExposedApi.findWorkspaceByCustomerId(customerId);
  return actualWorkspaceId === workspaceIdOnCloudInstance;
};
But this approach relies heavily on workspace-service: if workspace-service is down, the other services cannot handle requests at all, since they have no access to the workspaceId.
So IMHO I would rather go with the first option, since I use OAuth 2.0 and the token is refreshed every 30 seconds. But is it bad practice to do that? I wonder if there's a better solution.

With three microservices you cannot implement functionality under the assumption that one service is down. I suspect the access token lifespan was also chosen with this restriction in mind, to keep the data in the token up to date. If I understand correctly, in the worst case there is also a ~30-second delay before a workspaceId update shows up in the token payload.
The token claim will change only when you invite someone to a workspace or remove them from it, so this service must work anyway. I would use the second solution, with a longer token lifespan so you don't generate tokens so often.
Another solution is to generate a new token every time the workspace changes: you can treat adding/removing members as business logic that invalidates the token, although in this case communication with workspace-service is probably required as well.
If you are afraid that a microservice will be down, or that you will have communication problems, maybe you should focus more on app infrastructure, or choose a solution based on a monolithic application instead.
And back to the question in the title: adding custom claims to the token payload is a standard approach.

Related

How to use one JWT token to sign a second JWT token?

The scenario: a web-app user wants to create an authorised view of a private asset. The user has authenticated and has a JWT. The app wants to make a fresh secondary JWT, which can be verified as having been created with the original token.
FYI: my use case is signing a URL, i.e. adding the second JWT to the URL to allow controlled public viewing of the private asset.
How should the app do that?
E.g. is there a recommended way to set the secret and alg for this second token?
In theory, to use one JWT to sign another, you'd use the HS256 algorithm with the first JWT as the secret. In practice, this approach leads to a couple of issues, outlined after the sketch below.
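For concreteness, the naive version with the jsonwebtoken package would be something like this (a sketch; firstToken is the user's existing JWT):

const jwt = require('jsonwebtoken');

// Naive approach: the first token's compact string is used as the HMAC key.
const secondToken = jwt.sign({ sub: 'asset-view' }, firstToken, {
  algorithm: 'HS256',
  expiresIn: '15m',
});

// Verification later requires that same original token to be available:
const claims = jwt.verify(secondToken, firstToken);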
Firstly, only the server and the original token-holder will be able to verify the authenticity of this token, and in order for the server to perform verification, you'll need to persist the original token somewhere. This is outside the scope of your question, but it does begin to complicate the implementation, since now both tokens must share a lifespan, and the original token needs to be available wherever the second token might be used. That might not be an issue for your use case, but it does somewhat limit portability, as well as future-proofing if, for example, another party needed to verify the token (such a use case can be achieved without too much overhead by using RS256 and asymmetric keys instead of the HS256/symmetric-key method).
Secondly, JWTs are commonly transient values with a short lifespan. This is usually due to the nature of their use: since they are shared between a client and a server, they are not strictly speaking "secret" values, and the longer they live, the greater the chance that they have been compromised. In using them as secret material for other tokens, you now require a longer lifespan for those tokens, and you potentially introduce a security vulnerability wherein the "secondary" tokens could be spoofed by an attacker who gets hold of one of these "primary" tokens. To mitigate this specific threat, secret material should be something that is not transmitted over the network.
Perhaps you might consider using the same token-generation procedure (same algorithm and secret) for both tokens, and simply including an identifier for the "issuer" (a unique identifier for the user who holds the original token) as part of the second token. Using this method, you don't need to worry about which verification process to use for a given token (since it's now the same for both), nor do you have to worry about token lifespan or key spoofing through a stolen token.
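A minimal sketch of that suggestion, assuming the jsonwebtoken package and the same SECRET already used for the primary tokens:

const jwt = require('jsonwebtoken');

function createSecondaryToken(primaryToken, SECRET) {
  // Verify the caller's token first; throws if invalid or expired.
  const claims = jwt.verify(primaryToken, SECRET);
  return jwt.sign(
    {
      issuer: claims.sub, // identifies the user who holds the original token
      scp: 'private_resource',
    },
    SECRET, // same secret, so both tokens share one verification path
    { algorithm: 'HS256', expiresIn: '1h' }
  );
}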
I think the best answer is: you shouldn't, at least not on the client side. If your back-end is Node or something similar, you could do something like this:
Have the client make an authenticated request: "give me access to resource x".
The server can at that point take any information from the original token to create a new JWT with the data below.
Sign the JWT server-side with whatever method you prefer (I would always use RS256, with a certificate).
Respond to the client with: you can access the resource at protected/resource_x?key=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJodHRwczovL3lvdXIud2ViLnNpdGUvcHJvdGVjdGV4L3Jlc291cmNlX3giLCJpc3MiOiJodHRwczovL3lvdXIud2ViLnNpdGUiLCJleHAiOjE2MTU4MTIyOTQ5ODMsInNjcCI6InByaXZhdGVfcmVzb3VyY2UiLCJzdWIiOiJvcmlnaW5hbC11c2VyLWlkLWZyb20tY2xpZW50LXRva2VuIn0.cga9CQ1IqUwzBRgYM3vlUN0g37yJWZREQQEExV29UWs
Your JWT can contain the following information:
{
  "aud": "https://your.web.site/protectex/resource_x",
  "iss": "https://your.web.site",
  "exp": 1615812294983,
  "scp": "private_resource",
  "sub": "original-user-id-from-client-token"
}
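Putting those steps together, a rough Express sketch (the route name, key file, and requireAuth middleware are assumptions for illustration, not part of the answer above):

const fs = require('fs');
const jwt = require('jsonwebtoken');

const privateKey = fs.readFileSync('private-key.pem'); // RS256 signing key

// Hypothetical route: the client makes an authenticated request for access.
app.post('/access/resource_x', requireAuth, function(req, res) {
  const token = jwt.sign(
    {
      aud: 'https://your.web.site/protectex/resource_x',
      iss: 'https://your.web.site',
      scp: 'private_resource',
      sub: req.user.id, // carried over from the verified client token
    },
    privateKey,
    { algorithm: 'RS256', expiresIn: '1h' }
  );
  res.json({ url: 'protected/resource_x?key=' + token });
});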

What headers do I need for "PUT" request - Kinvey

Hi everyone. I'm making a simple SPA application with JS and Kinvey. I have advertisements, and every advertisement must have a view count, i.e. how many times it has been seen (when a GET request fetches an advert, another PUT request is made for that advert with an increased view count). The problem is that I can't figure out which headers to use: both Basic authorization with username:pass and "Kinvey " + authToken return 401 Unauthorized. How can I modify a collection element that was not created by the currently logged-in user?
You will want to use the JavaScript SDK, which means you don't have to do the quite complicated login/token-generation process yourself. It's not a Basic Auth system; the SDKs will handle everything for you.
You cannot, by default, modify elements that were not created by the logged-in user, which is of course a good idea for security reasons. But in the Collection Settings you can change the collection permissions from "Shared" to "Public" to allow anybody write access to any element.
If you want finer-grained control, you can use Business Logic to inspect ACLs at runtime: http://devcenter.kinvey.com/tutorials/using-acls
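If you do stay with raw REST calls instead of the SDK, the request would be shaped roughly like the sketch below; the host, path, and authtoken source follow Kinvey's legacy REST docs as I recall them, so double-check them against the current documentation:

// APP_KEY, advert, and authToken (taken from the login response's
// _kmd.authtoken field) are assumptions for this sketch.
fetch('https://baas.kinvey.com/appdata/' + APP_KEY + '/adverts/' + advert._id, {
  method: 'PUT',
  headers: {
    'Authorization': 'Kinvey ' + authToken,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(Object.assign({}, advert, { views: advert.views + 1 }))
});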

Torii provider name from adapter?

I have a Torii adapter that posts my e.g. Facebook and Twitter authorization tokens back to my API to establish sessions. In the open() method of my adapter, I'd like to know the name of the provider so I can write logic for how to handle the different types of providers. For example:
// app/torii-adapters/application.js
export default Ember.Object.extend({
  open(authorization) {
    if (this.provider.name === 'facebook-connect') {
      var provider = 'facebook';
      // Facebook-specific logic
      var data = { ... };
    } else if (this.provider.name === 'twitter-oauth2') {
      var provider = 'twitter';
      // Twitter-specific logic
      var data = { ... };
    } else {
      throw new Error(`Unable to handle unknown provider: ${this.provider.name}`);
    }
    return POST(`/api/auth/${provider}`, data);
  }
});
But, of course, this.provider.name is not correct. Is there a way to get the name of the provider used from inside an adapter method? Thanks in advance.
UPDATE: I think there are a couple of ways to do it. The first is to set the provider name in localStorage (or sessionStorage) before calling open(), and then use that value in the logic above. For example:
localStorage.setItem('providerName', 'facebook-connect');
this.get('session').open('facebook-connect');
// later ...
const providerName = localStorage.getItem('providerName');
if (providerName === 'facebook-connect') {
// ...
}
Another way is to create separate adapters for the different providers. There is code in Torii that looks for e.g. app-name/torii-adapters/facebook-connect.js before falling back on app-name/torii-adapters/application.js. I'll put my provider-specific logic in separate files and that will do the trick. However, I have common logic for storing, fetching, and closing the session, so I'm not sure where to put that now.
UPDATE 2: Torii has trouble finding the different adapters under torii-adapters (e.g. facebook-connect.js, twitter-oauth2.js). I was attempting to create a parent class for all my adapters that would contain the common functionality. Back to the drawing board...
UPDATE 3: As #Brou points out, and as I learned talking to the Torii team, fetching and closing the session can be done, regardless of the provider, in a common application adapter (app-name/torii-adapters/application.js). If you need provider-specific session-opening logic, you can have multiple additional adapters (e.g. app-name/torii-adapters/facebook-oauth2.js) that may subclass the application adapter (or not). A sketch of this layout follows.
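For instance, something like this (a sketch only; the POST helper is hypothetical, as in the original question, and the provider-specific payload shape is an assumption):

// app/torii-adapters/application.js - shared session logic
import Ember from 'ember';

export default Ember.Object.extend({
  fetch() { /* common session-fetching logic */ },
  close() { /* common session-closing logic */ }
});

// app/torii-adapters/facebook-connect.js - provider-specific open()
import ApplicationAdapter from './application';

export default ApplicationAdapter.extend({
  open(authorization) {
    // Facebook-specific handling, then the common POST to the API
    return POST('/api/auth/facebook', { token: authorization.accessToken });
  }
});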
Regarding the session lifecycle in Torii: https://github.com/Vestorly/torii/issues/219
Regarding the multiple adapters pattern: https://github.com/Vestorly/torii/issues/221
Regarding the new authenticatedRoute() DSL and auto-session-fetching in Torii 0.6.0: https://github.com/Vestorly/torii/issues/222
UPDATE 4: I've written up my findings and solution on my personal web site. It encapsulates some of the ideas from my original post, from #brou, and other sources. Please let me know in the comments if you have any questions. Thank you.
I'm not an expert, but I've studied simple-auth and Torii twice in recent weeks. At first, I realized that I needed to level up on too many things at the same time, and I ended up delaying my login feature. Today, I'm back on this work for a week.
My question is: What is your specific logic about?
I am also implementing provider-agnostic processing AND later common processing.
This is the process I've started implementing:
User authentication.
Basically, calling torii default providers to get that OAuth2 token.
User info retrieval.
Getting canonical information from the FB/GG/LI APIs, in order to create as few sessions as possible for a single user across different providers. This is thus API-agnostic.
➜ I'd then do: custom sub-providers calling this._super(), then doing this retrieval.
User session fetching or session updates via my API.
Using the previous canonical user info. This should then be the same for any provider.
➜ I'd then do: a single (application.js) torii adapter.
User session persistence against page refresh.
Theoretically, using simple-auth's session implementation is enough.
Maybe the only difference between our works is that I don't need an authorizer for the moment, as my back-end is not yet secured (I still run locally).
We can keep in touch about our respective progress: this is my week's task, so don't hesitate!
I'm working with Ember 1.13.
Hope it helped,
Enjoy coding! 8-)

How to customize the OData server using JayData?

I'm quite new to JayData, so this may sound like a stupid question.
I've read the OData server tutorial here: http://jaydata.org/blog/install-your-own-odata-server-with-nodejs-and-mongodb. It is very impressive that one can set up an OData provider just like that. However, the tutorial does not go into detail about how to customize the provider.
I'd be interested in seeing how I can set it up with a custom database, and how I can add a layer of authentication/authorization to the OData server. What I mean is: not every user may have permission to see every entity, and not every user has permission to add new entities.
How would I handle such use cases with JayData?
Thanks in advance for your answers!
UPDATE:
Here are two posts that will get you started:
How to use the odata-server npm module
How to set up authentication/authorization
The $data.createODataServer method frequently used in the posts is a convenience method that hides the connect/express pipeline from you. To interact with the pipeline, examine the body of the $data.createODataServer function found in the node_modules/odata-server folder.
Disregard the text below.
Authentication must be solved with the connect pipeline; there is plenty of middleware for that.
For authorization, the EntityContext constructor accepts an authorization function that must be promise-aware.
The allow-all authorizer looks like this:
function checkPerm(access, user, entitysets, callback) {
  var pHandler = new $data.PromiseHandler();
  var clbWrapper = pHandler.createCallback(callback);
  var pHandlerResult = pHandler.getPromise();
  clbWrapper.success(true); // this grants a joker rw permission to everyone
  // Consult user, entitysets and access to decide on success/error.
  // Since you return a promise you can call async stuff (it will not be fast though).
  return pHandlerResult;
}
I have to consult one of the team members on the syntax that lets you pass this into the build-up process, but I can confirm that this is doable and supported. I'll get back with the answer ASAP.
Having authenticated the user, you can also use EntityContext-level events to intercept read/update/create/delete operations:
$data.EntityContext.extend({
  MySet: {
    type: $data.EntitySet,
    elementType: Foobar,
    beforeDelete: function(items) {
      // if the delete was in a batch you'll get multiple items
      // check items here; access this.request.user
      return false; // deny access
    }
  }
});
And there is a declarative way: you can annotate role names with permissions on entity sets. This requires that your user object actually has a roles field with an array of role names.
I too have been researching OData recently, and as we develop our platform in both Node and C#, I naturally looked at JayStorm. From my understanding of the technical details of JayStorm, the whole capability of Connect and Express is available to make this possible. We use Restify to provide the private API of our platform, and there we have written numerous middleware modules for exactly this case.
We are using JayData for our OData service layer also, and I have implemented a very simple basic authentication with it.
Since JayData uses Express, we can leverage Express features. For Basic Auth, the simplest way is:
app.use(c.session({ secret: 'session key' })); // 'c' here is the connect module
// Authenticator
app.use(c.basicAuth('admin', 'admin'));
app.use("/odata.svc", $data.JayService.OData.Utils.simpleBodyReader());
You can also refer to this article for more detail on authentication with Express: http://blog.modulus.io/nodejs-and-express-basic-authentication
Thanks.
I wrote that blog post; I work for JayData.
What do you mean by custom database?
We have written a middleware for authentication and authorization, but it is not open source. We might release it later.
We have a service called JayStorm; it has a free version, which may be good for you.
We will probably release an appliance version of it as well.

Authentication for a SPA and a node.js server

I have a very simple node.js app built on Express which has been handling authentication using a session memory store. Basically, a user logs in via:
app.post('/sessions', function(req, res) {
  // check username/password and if valid set authenticated to true
  if (authenticated) {
    req.session.user = req.body.username;
  } ...
});
Then, in each call from the browser, a requiresLogin middleware function is called, which checks whether the user property on the session has been set.
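For reference, that middleware presumably looks something like this sketch:

function requiresLogin(req, res, next) {
  // allow the request through only if the session has a user set
  if (req.session && req.session.user) return next();
  res.status(401).end();
}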
I'm now transitioning the app to basically just provide a service that may or may not be consumed in the browser. So instead of using cookies/sessions, I'm considering changing the system so that one would POST to /getToken (instead of /sessions), which would return a temporary random token associated with the user's account that could then be used for a period of time to access the service. Using the service would then require a valid token in each call, along the lines of the sketch below. (I assume this would be better than passing the username/password each time, so that the password would not have to be stored in memory on the client's computer after the call to get the token?)
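Roughly this (the in-memory store, header name, and expiry are arbitrary choices for the sketch):

const crypto = require('crypto');

// In-memory token store, analogous to the session memory store
const tokens = new Map(); // token -> { user, expires }

app.post('/getToken', function(req, res) {
  // check username/password and if valid set authenticated to true (as before)
  if (!authenticated) return res.status(401).end();
  const token = crypto.randomBytes(32).toString('hex');
  tokens.set(token, { user: req.body.username, expires: Date.now() + 60 * 60 * 1000 });
  res.json({ token: token });
});

// Replacement for requiresLogin: validate the token on every call
function requiresToken(req, res, next) {
  const entry = tokens.get(req.get('X-Auth-Token'));
  if (!entry || entry.expires < Date.now()) return res.status(401).end();
  req.user = entry.user;
  next();
}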
Would such a system be basically as secure as the current system above, or is there a much more standard/safe way to handle this? What's the standard way to handle something like this?
Thanks in advance for your help!
What you are looking for is called an HMAC, and there is a great article here to get ideas on how to implement one for your service.
Whether session-based security is more secure than public/private key pairs is widely debated, and it really depends on the implementation/application. In your case, since you want per-request authentication on a public-facing API, the HMAC is the way to go.
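As a rough illustration of HMAC request signing (the header names and payload layout here are arbitrary choices, not a standard):

const crypto = require('crypto');

// Client side: sign each request with the shared secret.
function signRequest(method, path, body, apiSecret) {
  const timestamp = Date.now().toString();
  const payload = [method, path, timestamp, JSON.stringify(body)].join('\n');
  const signature = crypto.createHmac('sha256', apiSecret).update(payload).digest('hex');
  return { timestamp, signature }; // sent as headers alongside the API key
}

// Server side: recompute and compare using the secret stored for that API key.
function verifyRequest(req, apiSecret) {
  const payload = [req.method, req.path, req.get('X-Timestamp'), JSON.stringify(req.body)].join('\n');
  const expected = crypto.createHmac('sha256', apiSecret).update(payload).digest('hex');
  const given = req.get('X-Signature') || '';
  return given.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(given), Buffer.from(expected));
}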

Categories

Resources