I'm building a small static website hosted on S3. I used Cognito to get some basic user verification up and running (login, logout). I want to restrict certain parts of the website to logged-in users.
I worked through module 2 of this workshop https://github.com/aws-samples/aws-serverless-workshops/tree/master/WebApplication. In this workshop, the page /rides.html is restricted to logged in users. If you are not logged in and try to access /rides.html, the page will start to load, and then quickly redirect you to /signin.html. The trouble with this is that unauthorized users can still see the rides page for a split second before redirection occurs.
Here is their code that handles redirecting a user who hasn't logged in. It runs as JavaScript when a user tries to access /rides.html:
WildRydes.authToken.then(function setAuthToken(token) {
    if (token) {
        authToken = token;
    } else {
        window.location.href = '/signin.html';
    }
}).catch(function handleTokenError(error) {
    alert(error);
    window.location.href = '/signin.html';
});
I am having a lot of trouble determining the best way to ensure only users who have signed in can access parts of my website. Very new to anything webdev/AWS related, and I'm having some trouble finding this information online.
Edit: To clear up what I want to achieve - I want the entire rides.html page to be inaccessible to anyone who hasn't logged in.
Solution: We ended up putting a CloudFront distribution with restricted access in front of the S3 bucket. Then, we had a Lambda function triggered whenever someone tried to access the distribution. Here is a tutorial: https://douglasduhaime.com/posts/s3-lambda-auth.html
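For reference, the general shape of that setup is a Lambda@Edge function attached to the distribution's viewer-request event that rejects requests without valid credentials. The following is only an illustrative sketch of that idea (not the tutorial's exact code); isValidToken is a hypothetical placeholder for whatever check you use (verifying a Cognito JWT, HTTP basic auth, etc.):

'use strict';

// Illustrative Lambda@Edge viewer-request handler: inspect the Authorization
// header and return 401 unless the request is allowed.
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    // CloudFront lower-cases header names and wraps values in arrays.
    const authHeader = headers.authorization && headers.authorization[0].value;

    // isValidToken is a placeholder for your own check (e.g. verifying a Cognito JWT).
    if (!authHeader || !isValidToken(authHeader)) {
        return callback(null, {
            status: '401',
            statusDescription: 'Unauthorized',
            headers: {
                'www-authenticate': [{ key: 'WWW-Authenticate', value: 'Basic' }],
            },
        });
    }

    // Authorized: forward the request to the S3 origin.
    return callback(null, request);
};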
I did not work through the workshop you mention, but from reading the README of module 2 I understand that they are implementing User Authentication and Registration with Amazon Cognito User Pools.
Redirecting away from a page that should be inaccessible is fine; you do not need to ensure that the page itself is never loaded. Let me explain why:
The "sensitive" information which is displayed on the site is not static. It is loaded from a REST backend in module 4. Since the authentication is static by means of JWT, the data is never loaded from the REST backend if the user is not authenticated.
So what should the page /rides.html do?
if the user is authenticated (i.e. has obtained a JWT which is still valid), the REST backend should be called to obtain the data
if the user is not authenticated (i.e. no JWT is present) or the JWT is present but no longer valid, the user should be redirected to the sign-in page; note that no sensitive data was obtained from the REST backend before the redirect (see the sketch below)
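A minimal sketch of that logic, assuming the workshop's WildRydes.authToken promise and a hypothetical /api/rides endpoint and renderRides helper, could look like this:

WildRydes.authToken.then(function (token) {
    if (!token) {
        // No JWT: redirect before any data is requested.
        window.location.href = '/signin.html';
        return;
    }
    // Only an authenticated user ever calls the backend; the page markup
    // itself contains nothing sensitive.
    return fetch('/api/rides', {                 // hypothetical endpoint
        headers: { Authorization: token }
    })
        .then(function (response) { return response.json(); })
        .then(function (rides) { renderRides(rides); });   // renderRides is a placeholder
}).catch(function () {
    window.location.href = '/signin.html';
});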
EDIT:
In order to restrict access to one single object in S3, you could add a bucket policy like the following to the S3 bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::<your-bucket-name>/*"
        },
        {
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": "<your-user-arn>"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::<your-bucket-name>/rides.html"
        }
    ]
}
This will make all objects public except the rides.html file. If you want to access that file, you will have to use a signed URL. [1]
Please note that you must not use a bucket or object ACL that grants public access to everyone in conjunction with this approach, since such an ACL might prevent the object from staying private.
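As one way of producing such a signed URL, here is a minimal sketch using the AWS SDK for JavaScript (v2) to create a time-limited S3 presigned URL for the restricted object (the bucket name is a placeholder; the call must run with credentials that the Deny statement does not block):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Presigned GET URL for the restricted object, valid for 60 seconds.
const url = s3.getSignedUrl('getObject', {
    Bucket: '<your-bucket-name>',
    Key: 'rides.html',
    Expires: 60
});
console.log(url);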
Another approach (for using a federated user instead of a regular IAM user)
I do not know if the following works because of limitations in the docs [2], but you could give it a try.
It might be possible to use a web identity federation provider in the NotPrincipal attribute: "Federated": "cognito-identity.amazonaws.com".
You could then narrow down which federated user has access to the rides.html object via condition keys (e.g. cognito-identity.amazonaws.com:sub). [3]
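Purely as an untested sketch of that idea (whether "Federated" is accepted in NotPrincipal of an S3 bucket policy is exactly the open question from [2]; the identity ID is a placeholder), the Deny statement might look like:

{
    "Effect": "Deny",
    "NotPrincipal": {
        "Federated": "cognito-identity.amazonaws.com"
    },
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::<your-bucket-name>/rides.html",
    "Condition": {
        "StringNotEquals": {
            "cognito-identity.amazonaws.com:sub": "<allowed-identity-id>"
        }
    }
}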
[1] https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html
[2] https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_notprincipal.html
[3] https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html
Related
I have endpoints written in Node.js... I use the following code to keep them safe...
const corsOption = {
    origin: ['https://www.mywebsite.com'],
};
app.use(cors(corsOption));

if (host !== "myendpoint.com") {
    return res.status(403).json({ message: "forbidden access" });
}
Will these keep my endpoints safe, or do I have to do anything more to keep them safe? I don't want bots or anyone else to use them. I know that they are public, but I want to restrict access. Any help or suggestions?
Thank you
To be sure you can control who can access your endpoint, you can set up token authentication.
When you send a request to your endpoint, the header should include:
Authorization: Token {your token}
And in your endpoint, you can check whether the token is authorized or not (by storing authorized tokens in a database). If the token is not recognized, you can send back a 403 error.
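A minimal Express sketch of that check, where isAuthorizedToken is a placeholder for your own database lookup:

const express = require('express');
const app = express();

// Placeholder: look the token up in your database of authorized tokens.
async function isAuthorizedToken(token) {
    return false; // replace with a real lookup
}

app.use(async (req, res, next) => {
    // Expect a header of the form: Authorization: Token {your token}
    const header = req.headers.authorization || '';
    const token = header.startsWith('Token ') ? header.slice(6) : null;

    if (!token || !(await isAuthorizedToken(token))) {
        return res.status(403).json({ message: 'forbidden access' });
    }
    next();
});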
If your website accesses your endpoints, this means that any browser that can display your website must also be able to access your endpoints. Requests are not made by your website, they are made by browsers visiting your website.
You must first ask how much you want to restrict access:
Restrict to individual known users to whom you send a password via mail, which they must then type into your website ("log on") before they can make any requests to your endpoints.
Restrict to users who have self-registered. Can anyone in the world then self-register, or do you demand confirmation via an email address?
Restrict to users who can log on with their Google (or Facebook, or ...) account.
Zain_Ul_Din's answer shows details of a possible implementation for the "self-registration" case. See also What's the best way to add social login (Sign in with Google) to existing email/password app and database?
You can implement user authentication and authorization in your Node.js app to restrict access.
For this you can use the jsonwebtoken npm package.
Look up John Smilga's Node and Express projects on Google for a 10-hour video including 4 projects. One of the projects introduces JSON Web Tokens and how to use them. I highly recommend that.
You can also use the express-rate-limit package. With this you should be able to limit how many requests a user can make to your API endpoints within a set amount of time. If the requests exceed that limit, this middleware steps in and stops further access (I haven't tested it in production myself, but it looks good).
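A rough sketch combining the two packages mentioned above (the secret, route, and limits are placeholder values, not from the original post):

const express = require('express');
const jwt = require('jsonwebtoken');
const rateLimit = require('express-rate-limit');

const app = express();
const JWT_SECRET = process.env.JWT_SECRET; // placeholder secret

// Throttle: at most 100 requests per 15 minutes per IP.
app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));

// Verify a bearer JWT before letting the request reach protected routes.
function requireAuth(req, res, next) {
    const header = req.headers.authorization || '';
    const token = header.startsWith('Bearer ') ? header.slice(7) : null;
    try {
        req.user = jwt.verify(token, JWT_SECRET); // throws if missing/invalid
        next();
    } catch (err) {
        res.status(403).json({ message: 'forbidden access' });
    }
}

app.get('/protected', requireAuth, (req, res) => {
    res.json({ user: req.user }); // the payload is whatever you signed
});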
We recently added Auth0 for integrating SSO from different OAuth2 providers (e.g. contoso1.auth.com and contoso2.auth.com).
https://auth0.com/docs/quickstart/spa/angular/01-login
I followed the above link, and our front-end app successfully integrated this in the code and is able to sign in and get the token.
{
    "iss": "https://TENANT_NAME.auth0.com/",
    "sub": "auth0|SOME_HASH",
    "aud": [
        "https://API_IDENTIFIER",
        "https://TENANT_NAME.auth0.com/userinfo"
    ],
    "iat": 1563699940,
    "exp": 1563786340,
    "azp": "SOME_OTHER_HASH",
    "scope": "openid profile email"
}
In our Angular app we want to render the UI (show or hide links) based on which authentication (contoso1/contoso2) the user has gone through. But the Auth0 access token doesn't give any details about which issuer (e.g. contoso1.auth.com or contoso2.auth.com) the user came from; the "iss" claim above only contains the Auth0 tenant.
We cannot rely on the email to tell which SSO a user belongs to, as in our case contoso1 and contoso2 can have users from each other's systems with their own email IDs.
After spending some time on the Auth0 page, I realized we have a field "connection" in the data context of the Auth0 object, and it stores the name. While we can use this as a temporary workaround, we can't rely on it to determine which SSO flow the user signed in with.
{
    tenant: "identity-dev",
    clientID: "fdsfsdf-dfsdfsd8989",
    clientName: "Angualr Portal",
    clientMetadata: "{}",
    connection: "contoso1-backchannel",
    connectionStrategy: "oidc",
    ....more
}
Please let me know how we can fetch the iss or issuer URL details in the token.
Is it a requirement to get this info using the frontend only?
As per this Auth0 article, it is a bit easier if you have a backend in place:
If your code runs in the backend, then we can assume that your server is trusted to safely store secrets (as you will see, we use a secret in the backend scenario).
With the backend you will be able to retrieve and parse the identities array user.identities[i].provider, which clearly identifies the original issuer under provider and connection keys.
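As a rough backend sketch (assuming the auth0 Node.js Management API client with placeholder credentials; method names can differ between SDK versions), reading the identities array could look like this:

const { ManagementClient } = require('auth0');

// Placeholders: use your tenant domain and a machine-to-machine client
// that is authorized for the Management API.
const management = new ManagementClient({
    domain: 'TENANT_NAME.auth0.com',
    clientId: 'YOUR_CLIENT_ID',
    clientSecret: 'YOUR_CLIENT_SECRET',
});

async function getOriginalIssuer(userId) {
    // Fetch the full user profile, which includes the identities array.
    const user = await management.getUser({ id: userId });
    // Each identity names the upstream provider and connection,
    // e.g. provider: "oidc", connection: "contoso1-backchannel".
    return user.identities.map(i => ({
        provider: i.provider,
        connection: i.connection,
    }));
}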
If using only a frontend, it is more work and you need to build a proxy:
When working with a frontend app, the process for calling IdP APIs differs from the backend process because frontend apps are public applications that cannot hold credentials securely. Because SPA code can be viewed and altered, and native/mobile apps can be decompiled and inspected, they cannot be trusted to hold sensitive information like secret keys or passwords.
The quoted article contains links in the "Show me how" box that might be of further interest in this regard.
From your post it seems that only a frontend is used, but I included info about the backend in case it is worth your while to implement a small backend, if purely to just make retrieving the identity provider a bit easier.
I'm trying to implement Google sign-in and API access for a web app with a Node.js back end. Google's docs provide two options using a combo of platform.js client-side and google-auth-library server-side:
Google Sign-In with back-end auth, via which users can log into my app using their Google account. (auth2.signIn() on the client and verifyIdToken() on the server.)
Google Sign-in for server-side apps, via which I can authorize the server to connect to Google directly on behalf of my users. (auth2.grantOfflineAccess() on the client, which returns a code I can pass to getToken() on the server.)
I need both: I want to authenticate users via Google sign-in; and, I want to set up server auth so it can also work on behalf of the user.
I can't figure out how to do this with a single authentication flow. The closest I can get is to do the two in sequence: authenticate the user first with signIn(), and then (as needed), do a second pass via grantOfflineAccess(). This is problematic:
The user now has to go through two authentications back to back, which is awkward and makes it look like there's something broken with my app.
In order to avoid running afoul of popup blockers, I can't give them those two flows on top of each other; I have to do the first authentication, then supply a button to start the second authentication. This is super-awkward because now I have to explain why the first one wasn't enough.
Ideally there's some variant of signIn() that adds the offline access into the initial authentication flow and returns the code along with the usual tokens, but I'm not seeing anything. Help?
(Edit: Some advice I received elsewhere is to implement only flow #2, then use a secure cookie to store some sort of user identifier that I check against the user account with each request. I can see that this would work functionally, but it basically means I'm rolling my own login system, which would seem to increase the chance I introduce bugs in a critical system.)
To add an API to an existing Google Sign-In integration the best option is to implement incremental authorization. For this, you need to use both google-auth-library and googleapis, so that users can have this workflow:
Authenticate with Google Sign-In.
Authorize your application to use their information to integrate it with a Google API. For instance, Google Calendar.
For this, your client-side JavaScript for authentication might require some changes to request offline access:
$('#signinButton').click(function() {
    auth2.grantOfflineAccess().then(signInCallback);
});
In the response, you will have a JSON object with an authorization code:
{"code":"4/yU4cQZTMnnMtetyFcIWNItG32eKxxxgXXX-Z4yyJJJo.4qHskT-UtugceFc0ZRONyF4z7U4UmAI"}
After this, you can use the one-time code to exchange it for an access token and refresh token.
Here are some workflow details:
The code is your one-time code that your server can exchange for its own access token and refresh token. You can only obtain a refresh token after the user has been presented an authorization dialog requesting offline access. If you've specified the select-account prompt in the OfflineAccessOptions [...], you must store the refresh token that you retrieve for later use because subsequent exchanges will return null for the refresh token
Therefore, you should use google-auth-library to complete this workflow in the back-end. For this, you'll use the authorization code to get a refresh token. However, as this is an offline workflow, you also need to verify the integrity of the provided code, as the documentation explains:
If you use Google Sign-In with an app or site that communicates with a backend server, you might need to identify the currently signed-in user on the server. To do so securely, after a user successfully signs in, send the user's ID token to your server using HTTPS. Then, on the server, verify the integrity of the ID token and use the user information contained in the token
The final function to get the refresh token that you should persist in your database might look like this:
const { OAuth2Client } = require('google-auth-library');

/**
 * Create a new OAuth2Client, and go through the OAuth2 code
 * workflow. Return the refresh token.
 */
async function getRefreshToken(code, scope) {
    // Create an OAuth client to authorize the API call. Secrets should be
    // downloaded from the Google Developers Console.
    const oAuth2Client = new OAuth2Client(
        YOUR_CLIENT_ID,
        YOUR_CLIENT_SECRET,
        YOUR_REDIRECT_URL
    );

    // Generate the URL that will be used for the consent dialog
    // (the front-end sends the user here to grant offline access).
    const authorizeUrl = oAuth2Client.generateAuthUrl({
        access_type: 'offline',
        scope,
    });

    // Exchange the one-time code for tokens, then verify the integrity of
    // the ID token and use the user information contained in it.
    const { tokens } = await oAuth2Client.getToken(code);
    const ticket = await oAuth2Client.verifyIdToken({
        idToken: tokens.id_token,
        audience: YOUR_CLIENT_ID,
    });
    const idInfo = ticket.getPayload();

    return tokens.refresh_token;
}
At this point, we've refactored the authentication workflow to support Google APIs. However, you haven't asked the user to authorize it yet. Since you also need to grant offline access, you should request additional permissions through your client-side application. Keep in mind that you already need an active session.
const googleAuth = gapi.auth2.getAuthInstance();
const newScope = "https://www.googleapis.com/auth/calendar";
const googleUser = googleAuth.currentUser.get();
googleUser.grantOfflineAccess({ scope: newScope }).then(
    function (success) {
        console.log(JSON.stringify({ message: "success", value: success }));
    },
    function (fail) {
        alert(JSON.stringify({ message: "fail", value: fail }));
    });
You're done with the front-end changes and you're only missing one step. To create a Google API's client in the back-end with the googleapis library, you need to use the refresh token from the previous step.
For a complete workflow with a Node.js back-end, you might find my gist helpful.
During authentication (sign-in), you need to add the "offline" access type (by default it is online), so you will get a refresh token which you can use to get access tokens later without further user consent/authentication. You don't need to grant offline access later; you only need to add the offline access_type during sign-in. I don't know about platform.js, but I used the "passport" npm module. I have also used the "googleapis" npm module/library, which is official from Google.
https://developers.google.com/identity/protocols/oauth2/web-server
https://github.com/googleapis/google-api-nodejs-client
Check this:
https://github.com/googleapis/google-api-nodejs-client#generating-an-authentication-url
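As a small sketch of what that linked section describes (client ID, secret, and redirect URL are placeholders), generating an offline-access consent URL with googleapis looks roughly like this:

const { google } = require('googleapis');

const oauth2Client = new google.auth.OAuth2(
    YOUR_CLIENT_ID,
    YOUR_CLIENT_SECRET,
    YOUR_REDIRECT_URL
);

// access_type: 'offline' during the initial sign-in is what makes Google
// return a refresh token alongside the access token.
const authUrl = oauth2Client.generateAuthUrl({
    access_type: 'offline',
    scope: ['openid', 'email', 'profile'],
});
console.log(authUrl);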
EDIT: You have a server side and you need to work on behalf of the user. You also want to use Google for signing in. You just need #2, Google Sign-in for server-side apps; why are you considering both the #1 and #2 options?
I see #2 as the proper way based on your requirements. If you just want to sign in, use basic scopes such as email and profile (OpenID Connect) to identify the user. And if you want user-delegated permissions (such as automatically creating an event in the user's calendar), just add the offline access_type during sign-in. You can use plain sign-in for registered users and offline access only for new users.
Above is a single authentication flow.
I have a multi-tenant service principal that exposes a custom API. Using MSAL.js' UserAgentApplication I'm able to ask for consent for the resources I need on first-time use with loginPopup. However, I'm confused as to which resources to specify in the request. For instance, let's say I use the following popup (note the lack of scope):
await this.userAgentApplication.loginPopup({
    prompt: 'consent',
    authority: "https://login.microsoftonline.com/organizations"
})
The application will simply request the user's profile. Fair enough.
However, let's say I configure the popup as follows:
await this.userAgentApplication.loginPopup({
    scopes: ["api://xyz/Some.Scope"],
    prompt: "consent",
    authority: "https://login.microsoftonline.com/organizations"
})
This causes an exception:
The user or administrator has not consented to use the application with ID 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx' named 'XYZ'. Send an interactive authorization request for this user and resource.
Why do I get this error even when I'm logging in using a Global Administrator account?
Lastly, in addition to our own API data we needed to be able to read Graph Groups in application context, so I requested these using the .default scope (the permissions are specified in the Service Principal registration). I did this using the following popup:
await this.userAgentApplication.loginPopup({
    scopes: ["https://graph.microsoft.com/.default"],
    prompt: 'consent',
    authority: "https://login.microsoftonline.com/organizations"
})
The result of the last attempt was ... all the permissions I was hoping for!
Sign in and read user profile
Read and write all groups
My Custom API scope (Application name)
But why does our Graph consent request automatically include requests for other custom scopes?
It is the expected behaviour if you use admin consent for the resources.
When you set ***/.default as the scope, it is equivalent to executing "Grant admin consent for {your tenant}" in Azure portal.
So it will ask for admin consent for all the required permissions, no matter which resource they belong to.
But if you set https://graph.microsoft.com/user.read, it will ask you to consent only to the user.read permission.
So in this case, once you use the last one to do the admin consent, api://xyz/Some.Scope will also take effect.
I also tested this with my custom API using api://***/.default and api://***/user.write, and both work as expected.
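For illustration only (MSAL.js v1 with the question's UserAgentApplication): once the admin consent has been granted, the custom API scope can typically be acquired silently, roughly like this:

// After consent, request an access token for the custom API scope.
const tokenResponse = await this.userAgentApplication.acquireTokenSilent({
    scopes: ["api://xyz/Some.Scope"],
    authority: "https://login.microsoftonline.com/organizations"
});
console.log(tokenResponse.accessToken);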
You can also try to use the following request to do the admin consent:
https://login.microsoftonline.com/jmaster.onmicrosoft.com/oauth2/v2.0/authorize?
client_id={client id}
&response_type=code
&redirect_uri={redirect url}
&response_mode=query
&scope=api://xyz/Some.Scope
&state=12345
&prompt=consent
Please retry with api://***/.default. Don't worry if it doesn't work, because api://xyz/Some.Scope will also take effect using the last code snippet.
Following on from JavaScript OAuth2 flow for Azure AD v2 login does not give an access_token, I'm trying to figure out the best endpoint to use, to get the logged in users details (eg, display name, email, etc.).
However, I noticed there are 2 potential endpoints I can use:
https://outlook.office.com/api/v2.0/me
https://graph.microsoft.com/v1.0/me
1 is used in bell for hapijs and is documented in Use the Outlook REST API. However, in bell, I can't seem to figure out the scope I need to get it working for OAuth 2.0. I've tried openid, email, profile, Mail.Read (only trying this because I've seen it in some docs), and User.Read, but the first 3 scopes don't give back an access_token as per JavaScript OAuth2 flow for Azure AD v2 login does not give an access_token, and the last 2 (Mail.Read and User.Read) give me an access_token, but they give me authentication issues when calling https://outlook.office.com/api/v2.0/me with Authorization: 'Bearer [access_token]'.
I found the endpoint for 2 at Microsoft Graph: Get user and it seems to work with the User.Read scope. I get the following response using the access_token returned:
{
    '@odata.context': 'https://graph.microsoft.com/v1.0/$metadata#users/$entity',
    id: '60...',
    userPrincipalName: 'some@email.com',
    businessPhones: [],
    displayName: null,
    jobTitle: null,
    mail: null,
    mobilePhone: null,
    officeLocation: null,
    preferredLanguage: null
}
The problem with the response here is that there isn't an explicit email field, but I guess I can just use userPrincipalName (the userPrincipalName is also used for the bell Azure AD provider)
So my question is which endpoint am I supposed to use? Or is there another one somewhere else?
You should absolutely use Microsoft Graph for this and the /v1.0/me endpoint is the correct URI for retrieving the user's profile information.
As for finding their email address, there are a few potential properties you could pull:
mail: This is the default SMTP address for the user. If it is showing up as null, this suggests the value wasn't populated. Normally this is populated automatically by Exchange but depending on the tenant it may need to be manually populated.
proxyAddresses: This is an array of addresses associated with the user. Typically you only use this property when you need to surface a user's alternative email aliases (i.e. name@comp.com & firstname.lastname@comp.com).
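For illustration, a minimal Microsoft Graph request for these properties (the access token variable is a placeholder) might look like this:

// Ask Graph for just the fields we care about via $select.
const response = await fetch(
    'https://graph.microsoft.com/v1.0/me?$select=displayName,mail,userPrincipalName,proxyAddresses',
    { headers: { Authorization: `Bearer ${accessToken}` } }  // accessToken is a placeholder
);
const profile = await response.json();
console.log(profile.mail || profile.userPrincipalName);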
If you are only looking for very basic information (name and email), you may be able to use OpenID Connect and skip the Microsoft Graph call entirely. OpenID Connect supports returning the user's profile as part of the sign-in.
To use OpenID Connect you need to make a couple of changes to your Authorization request (i.e. the initial call to https://login.microsoftonline.com/common/oauth2/v2.0/authorize):
The response_type must include id_token. (eg. &response_type=id_token+code)
The scope must include openid, profile, and email (eg. &scope=openid profile email user.read).
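Assembled, the authorization request might look roughly like the following (values in braces are placeholders; note that when id_token is requested, the v2.0 endpoint also expects a nonce and a response_mode of fragment or form_post rather than query):

https://login.microsoftonline.com/common/oauth2/v2.0/authorize?
client_id={client id}
&response_type=id_token+code
&redirect_uri={redirect uri}
&response_mode=form_post
&scope=openid profile email user.read
&nonce={random nonce}
&state=12345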
When enabled, you will receive an additional property in your Access Token response named id_token. This property holds a JSON Web Token (JWT) that you can decode to obtain the user's profile information:
As an illustration, I used the settings above to request a token from my test Azure AD instance. I took that token and decoded it (I used http://jwt.ms/ but any JWT decoder would work) to get the OpenID Connect profile:
{
    "typ": "JWT",
    "alg": "RS256",
    "kid": "{masked}"
}.{
    "aud": "{masked}",
    "iss": "https://login.microsoftonline.com/{masked}/v2.0",
    "iat": 1521825998,
    "nbf": 1521825998,
    "exp": 1521829898,
    "name": "Marc LaFleur",
    "nonce": "a3f6250a-713f-4098-98c4-8586b0ec084d",
    "oid": "f3cf77fe-17b6-4bb6-8055-6aa084df7d66",
    "preferred_username": "marc@officedev.ninja",
    "sub": "{masked}",
    "tid": "{masked}",
    "uti": "{masked}",
    "ver": "2.0"
}.[Signature]
The ID Token and Access Token can return attributes like display name, email, etc.
Sample ID Token.
See "Select Application claims" here: Azure Active Directory B2C: Built-in policies
Select Application claims. Choose claims you want returned in the authorization tokens sent back to your application after a successful sign-up or sign-in experience. For example, select Display Name, Identity Provider, Postal Code, User is new and User's Object ID.