If a web app uploads data to an Amazon S3 bucket, is it possible to restrict access to the data to the specific IP address that the web app initiated the upload from? And/or can the web app obtain some sort of token prior to uploading that it then uses to access the data?
For example, if Hans in Holland uploads 235325.json and Tina in Germany uploads 3453453.json, the web application client that Hans is running cannot see Tina's 3453453.json file and vice versa. Each uploaded file is accessible only to the user who uploaded it and completely cut off from the rest of the world.
There is no way to achieve this natively (i.e., something you simply configure). You are better off implementing permission control at the application level.
However, you can strengthen your security by generating pre-signed URLs that include a policy with an IP-address condition.
In this case:
Hans gets an S3 URL for an object that he uploaded and can legitimately download.
Hans sends the S3 URL to Tina via email.
Tina tries to open the link.
The link fails for Tina because her IP address doesn't match the allowed IP address embedded in the signed URL's policy.
For more information, take a look at Creating a Signed URL Using a Custom Policy. (Scroll to "Creating a Policy Statement for a Signed URL That Uses a Custom Policy".)
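The custom policy described above can be sketched in code. This is an illustrative example only: the CloudFront domain, object key, and IP address are made up, and a real signed URL would also require signing this policy with your CloudFront key pair.

```javascript
// Sketch: building a CloudFront custom policy that restricts access to a
// single client IP until a given expiry time. Values below are examples.
function buildCustomPolicy(resourceUrl, clientIp, expiresEpochSeconds) {
  const policy = {
    Statement: [
      {
        Resource: resourceUrl,
        Condition: {
          // Only this IP (a /32 range) may use the signed URL.
          IpAddress: { "AWS:SourceIp": `${clientIp}/32` },
          // The URL stops working at this epoch time.
          DateLessThan: { "AWS:EpochTime": expiresEpochSeconds },
        },
      },
    ],
  };
  return JSON.stringify(policy);
}

// Example: a policy for Hans's upload, valid for one hour from now.
const policy = buildCustomPolicy(
  "https://d111111abcdef8.cloudfront.net/235325.json", // example distribution
  "203.0.113.10",                                      // example client IP
  Math.floor(Date.now() / 1000) + 3600
);
```

Tina's request from a different IP would fail the `IpAddress` condition even if she has the full URL.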
Related
I would like to host files on a private AWS S3 bucket which can only be accessed by users who are authenticated to my web application. The links to these file downloads must be static.
Simple proxy method:
I know this could be done using a proxy service. In this case the static links would point to the service; the service would validate the requesting user's session and, if valid, respond with the file contents from S3.
Presigned URL proxy method:
However, rather than implementing a proxy to gate access to the files, I was wondering if I could use presigned URLs somehow instead?
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
In this case, the role of the proxy is just to return a presigned URL to the user rather than the actual payload of the file from S3. The end user could then use this presigned URL to download the file directly from S3. What I'm not clear on is how this flow is managed in the browser; I am assuming I would need to write JavaScript to do the following:
Request presigned URL from proxy service
Wait for response
Use the presigned URL provided in the response to download the actual file
Am I on the right track here?
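The three steps above can be sketched as browser-side JavaScript. The `fetchFn` parameter is injected so the logic can be exercised without a network (in the browser you would pass the global `fetch`), and `/presign` is a hypothetical proxy endpoint:

```javascript
// Sketch of the presign-then-download flow. `/presign?key=...` is an assumed
// route on the proxy service, which validates the session and returns
// { url: "<presigned S3 URL>" }.
async function downloadViaPresign(fetchFn, key) {
  // 1. Request a presigned URL from the proxy service.
  const presignRes = await fetchFn(`/presign?key=${encodeURIComponent(key)}`);
  if (!presignRes.ok) throw new Error("not authorized");

  // 2. Wait for the response and extract the presigned URL.
  const { url } = await presignRes.json();

  // 3. Use the presigned URL to download the file directly from S3.
  const fileRes = await fetchFn(url);
  if (!fileRes.ok) throw new Error("download failed");
  return fileRes; // caller can use .blob(), .text(), etc.
}
```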
Simply return a 307 redirect from your server to the presigned URL. E.g. the client requests:
GET /the/file HTTP/1.1
And the server generates a presigned URL and responds with:
HTTP/1.1 307 Temporary Redirect
Location: https://s3.aws..../the/file?...
That's a valid approach.
Beware of expiring credentials. A signed URL is good for the lesser of two durations: the time until the access credentials used to sign it expire, and the expiry time you set on the URL itself (which you control, within limits). If you're already using temporary credentials (which is very good!), you might want to call AssumeRole explicitly to control the expiry time (you can assume a role from a role to obtain new temporary credentials with a fresh time limit).
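The "lesser of" rule is easy to get wrong, so here it is as a one-line helper: the moment a signed URL actually stops working is the earlier of the signing credentials' expiry and the URL's own expiry.

```javascript
// Effective deadline of a signed URL: whichever expiry comes first.
function effectiveExpiry(credentialExpiryMs, urlExpiryMs) {
  return Math.min(credentialExpiryMs, urlExpiryMs);
}
```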
There's another option too: Amazon Cognito. It can bridge the gap from your user accounts by issuing per-user, short-term credentials directly to your users' browser environments, which can then make API calls to S3 with their own credentials. This has some benefits (you can express user permissions in their profile, rather than checking them yourself before generating URLs) and some complexity (can I DoS your account with my user credentials, or do you control which APIs I can call? Least Privilege really matters when IAM is your only auth tier). On the other hand, IAM calls are free and you don't pay for servers to host them, so this is also cost-effective if you are using federated identity; with user pools, not so much.
I've read through quite a few pages of documentation and other StackOverflow questions/answers but can't seem to come across anything that can help me with my scenario.
I'm hosting a public, static site in an S3 bucket. This site makes some calls to an API that I have hosted on an EC2 instance in a VPC. Because of this, my API can only be called by other instances and services in the same VPC.
But I'm not sure how to allow the S3 Bucket site access to my API.
I've tried creating VPC endpoints and going down that route, but all that did was restrict access to my S3 site to only the instances within my VPC (which is not what I want).
I would appreciate any help with this, thank you so much.
Hopefully my question is clear.
No. S3 static websites are 100% client-side code: just HTML + CSS + JavaScript, delivered as-is from S3. If you want dynamic content in your website, you need to call an API that is accessible from your user's browser, i.e. from the internet.
AWS API Gateway with Private Integrations could be used to accept the incoming REST call and send it on to your EC2 Server in your VPC.
My preferred solution for adding dynamic data to S3 static websites is API Gateway with AWS Lambda, creating a serverless website. This minimises running costs and maintenance, and allows for quick deployments. See the Serverless Framework for getting up and running with this solution.
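The Lambda side of that setup can be sketched as follows. This assumes a Node.js Lambda behind an API Gateway proxy integration; the CORS origin is an example value and should be replaced with your static site's actual origin:

```javascript
// Minimal Lambda handler returning dynamic JSON to a static site.
const handler = async (event) => ({
  statusCode: 200,
  headers: {
    "Content-Type": "application/json",
    // Allow the S3-hosted static site (example origin) to call this API.
    "Access-Control-Allow-Origin": "http://my-site.s3-website-us-east-1.amazonaws.com",
  },
  body: JSON.stringify({ message: "dynamic data", path: event.path }),
});

exports.handler = handler;
```

The browser code on the static site then calls the API Gateway URL with `fetch`, exactly as it would any public API.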
A static site doesn't run on a server. It runs entirely in the web browser of each site visitor. The computer it is running on would be the laptop of your end-user. None of your code runs in the S3 bucket. The S3 bucket simply stores the files and serves them to the end-user's browser which then runs the code. The route you are going down to attempt to give the S3 bucket access to the VPC resource is not going to work.
You will need to make your API publicly accessible in order for code running in your static site (running in web browsers, on end-user's laptops/tablets/phones/etc.) to be able to access it. You should look into something like API keys or JWT tokens to provide security for your public API.
Per my review of how to set up secure access to Amazon S3 buckets, it looks like we first create an IAM user and then attach a security policy allowing S3 access to that user. After that we can generate API keys for that user, which can authenticate requests for bucket access. That's my understanding at this point; please correct me if I missed something.
I assume the API keys should be server-side only (specifically the Secret Access Key). In other words, it's not safe to place these directly inside the web app? Hence we would first have to send the data to our server, and only then send it on to the bucket using the API key?
Is there any way to secure access directly from a web app to an amazon s3 bucket?
Approach Summary
Per the discussion with @CaesarKabalan, it sounds like the approach that would allow this is:
1) Create an IAM user that can create identities that can be authenticated via Amazon Cognito. Let's call the credentials assigned in this step Cognito Credentials.
2) The user signs in to the web app with, for example, Google.
3) The web app makes a request to the web app's server (could be a Lambda function) to sign up the user with Amazon Cognito.
4) The web app then obtains credentials for the user directly from Amazon Cognito and uses these to send the data to the S3 bucket.
I think that's where we are conceptually. Now it's time to test!
From your question I'm not sure which portions of your application are in AWS, nor what your security policies are, but you basically have three options:
(Bad) Store your keys on the client. Depending on the scope of your deployment this might be OK. For example, if each client has its own dedicated user and bucket there probably isn't much risk, especially in a private organization where you control all aspects of access. This is the easiest but least secure option. You should not use this if your app is multi-tenant. Probably move along...
(Great) Use an API endpoint to move this data into your bucket. This would involve some infrastructure that receives the file securely from the client and then moves it into S3, with the security keys stored server-side. This is similar to a traditional web app doing I/O against a database: all data into S3 goes through this tier of your app. The downsides are that you have to write that service, host it, and pay for bandwidth.
(Best) Use Amazon Cognito to assign each app/user their own access key. I haven't done this personally, but my understanding is that you can provision each entity its own short-lived access credentials that can be renewed, and give them access to write data straight to S3. The hard part will be structuring your S3 buckets and designing the IAM credentials so your app users can ONLY do exactly what you want. The upside is that users write directly to the S3 bucket, you're using all native AWS services, and you're writing very little custom code. I would consider this the best, most secure, enterprise-class solution. Here is an example: Amazon S3: Allows Amazon Cognito Users to Access Objects in Their Bucket
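The policy from that example looks roughly like the following (the bucket name here is a placeholder). The key idea is the `${cognito-identity.amazonaws.com:sub}` policy variable, which scopes each user to their own prefix in the bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::examplebucket/${cognito-identity.amazonaws.com:sub}/*"
      ]
    }
  ]
}
```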
Happy to answer any more questions or clarify.
I'm just starting down my dev journey and need some advice on how to approach a simple app I'm working on. I do not have a good understanding of modern web development.
What I am looking to achieve is to upload a video or image via a browser form / html form to Amazon S3.
Ideally, I want to leverage the AWS Node.js SDK but keep my front end as basic as possible (i.e. Bootstrap page + HTML changes). I acknowledge that I could do a straight HTTP operation, but would still like to leverage the SDK for now.
I have my HTML, form, and CSS created (using Bootstrap), but do not understand how to connect the form to a Node.js script that does the authorization/PUT.
Can I even go from the form and pass the file to the script to be uploaded?
Thanks for any advice!!! :D
Check out the AWS documentation. They even have an example for your use case: Uploading Photos to Amazon S3 from a Browser
You can use AWS CloudFront signed URLs to perform the upload. The flow is as follows.
From the browser, you request a URL, with an expiration time, that allows uploading a file to a bucket. You can implement the signed-URL creation in a Node.js backend with the AWS SDK, as given in this example, after authenticating the user.
Using the signed URL and the AWS JavaScript SDK for S3, the browser can directly upload the file to S3.
For more information about Signed URLs, check How Signed URLs Work.
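The browser side of the signed-URL upload flow above can be sketched as follows. `fetchFn` is injected so the logic is testable without a network (in the browser you would pass the global `fetch`), and `/sign-upload` is a hypothetical backend route that authenticates the user and returns a signed upload URL:

```javascript
// Sketch: upload a file from the browser via a signed URL obtained from the
// backend. `file` would be a File object taken from the HTML form's
// <input type="file"> element.
async function uploadFile(fetchFn, file, filename) {
  // 1. Ask the authenticated backend for a signed upload URL.
  const res = await fetchFn(`/sign-upload?name=${encodeURIComponent(filename)}`);
  const { url } = await res.json();

  // 2. PUT the file bytes directly to S3 using the signed URL.
  const putRes = await fetchFn(url, { method: "PUT", body: file });
  return putRes.ok;
}
```

Wiring this to the form is then just an event handler: on submit, read the file from the input element and call `uploadFile(fetch, file, file.name)`.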
I have a Pyramid/Python application with a page at www.domain.com that creates HTML pages at s3.amazonaws.com/testbucket/object_name. Right now this test bucket also holds JavaScript files that each object (HTML page) utilizes. I want users to be able to go to subdomain.domain.com/object_name and see the pages with the JavaScript enabled. I have CNAMEd subdomain.domain.com (the name of my bucket) to s3.amazonaws.com. (with that last period at the end). Right now I have two problems (I am far more concerned with the second one):
1) When I try to access the URL via https://subdomain.domain.com/object_name I get a security error (I assume this is because it is redirecting to an Amazon S3 bucket). How can I get an SSL certificate for my bucket?
2) When I try to access the URL via http://subdomain.domain.com/object_name, there is no security error (not HTTPS), but the JavaScript isn't enabled. How can I make sure that those JavaScript files on the S3 bucket still work?
Edit: upon looking at the developer tools, I see the error: Failed to load resource: the server responded with a status of 403 (Forbidden), referring to the JavaScript file. Why would this file be forbidden when I have made it public in the bucket?
S3 does not allow you to configure your own SSL certificates for buckets - this is an inherent "problem" with the way S3 is designed and distributed across servers. Amazon provides its own certificate for use with S3, no configuration required.
However, and this is very important - you cannot use SSL over a CNAME, period. If you want to use your pretty domain name with SSL on S3, you're out of luck. It's just an S3 quirk we have to live with. (https://forums.aws.amazon.com/thread.jspa?threadID=60502)
In summary, if you want SSL, you must use the full S3 bucket path.