Manage access to an s3 folder in the browser - best practice? - javascript

I have a bucket configured as a website, and the bucket policy allows public access to every folder in the bucket except '/_admin/'. Only a specific IAM user is allowed to make requests against '/_admin/'. This folder is for backend management of the website, so I am serving HTML, JS, CSS, etc. to that user. Right now I am using the AWS JavaScript SDK to sign every URL for each js/css/img src/href link and then update (or create) that attribute. (This means hardly anything gets cached, because the signature changes on every access.) I've proven the concept and I can access the files by signing every single URL in my web pages, but it seems an awkward way to build a website.
Is there a way I can just add some kind of access header to each page that will be included in every request? If so, will it also apply to ajax-type requests?
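For reference, the per-URL signing I am doing now is roughly this sketch (the bucket, key, and data-admin hook are placeholders):
// AWS SDK for JavaScript v2, with the IAM user's credentials already configured.
var s3 = new AWS.S3();
var signedUrl = s3.getSignedUrl('getObject', {
  Bucket: 'test-domain-structure',
  Key: '_admin/js/app.js', // illustrative key
  Expires: 60              // seconds; a fresh signature is generated on every page load
});
document.querySelector('script[data-admin]').src = signedUrl; // swap the signed URL into the DOM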

Here's what I came up with to solve this problem. I modified my bucket policy to deny all access to '/_admin/' unless the request comes from my root account or a specific IAM user, or unless the referrer URL matches a specific token. After the user authenticates from a publicly accessible page on my site, I generate a new token and then use the SDK to modify the bucket policy with the value of that new token. (I'll probably also add another condition with an expiry date.)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test-domain-structure/*"
    },
    {
      "Sid": "AllowAdminWithToken",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::AWS-USER-ID:user/IAM-USER",
          "arn:aws:iam::AWS-USER-ID:root"
        ]
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test-domain-structure/_admin/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": "*?t=a77Pn"
        }
      }
    }
  ]
}
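The token-rotation step described above could look roughly like the following with the AWS SDK for JavaScript (v2); buildBucketPolicy is a hypothetical helper that returns the policy document shown above with the fresh token in the Referer condition:
var s3 = new AWS.S3(); // credentials for the IAM user allowed to edit the bucket policy

function rotateAdminToken(newToken) {
  // Hypothetical helper: returns the policy object above with '*?t=' + newToken
  // in the StringNotLike condition.
  var policy = buildBucketPolicy(newToken);
  return s3.putBucketPolicy({
    Bucket: 'test-domain-structure',
    Policy: JSON.stringify(policy)
  }).promise();
}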
In each page I use JavaScript to append the new query string (?t=a77Pn, or whatever token was just generated) to each link/href, as sketched below.
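A rough sketch of that rewrite, assuming the token is available to the page and only simple href/src attributes need it:
// Append the current token to every anchor, stylesheet, script, and image reference on the page.
function appendToken(token) {
  document.querySelectorAll('a[href], link[href]').forEach(function (el) {
    el.href = el.href + '?t=' + token;
  });
  document.querySelectorAll('script[src], img[src]').forEach(function (el) {
    el.src = el.src + '?t=' + token;
  });
}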
Edit: That was really a pain. Links kept breaking, so ultimately I went with the solution below, plus an added condition of an expiration date. Works much better.
Another option is to modify the bucket policy to only allow access from a certain IP address. This would eliminate having to modify all links/hrefs and keep the URLs clean. Still open to a better idea, but this works for now.
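A sketch of that alternative, assuming a Deny statement keyed on aws:SourceIp (the CIDR range is a placeholder):
{
  "Sid": "DenyAdminExceptFromMyIp",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::test-domain-structure/_admin/*",
  "Condition": {
    "NotIpAddress": {
      "aws:SourceIp": "203.0.113.0/24"
    }
  }
}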

Related

AWS MediaConvert bitrate and quality problem

When I try to start a MediaConvert job in CBR, VBR, or QVBR mode with Bitrate or MaxBitrate higher than 250,000, I get the error below:
Unable to write to output file [s3:///videos//***/original.mp4]: [Failed to write data: Access Denied]
But with Bitrate/MaxBitrate lower than 250,000 the transcoding job works fine; the quality is just too low. What is causing this? Do I need to upgrade the MediaConvert service, or do I need to add some additional policies somewhere? All I need is to convert videos (avi, etc.) to mp4 with the same quality on output as on input.
I was receiving the same error and found that it was related to an encryption requirement defined in my bucket policy. I build my buckets with CloudFormation and had the following set in the policy:
{
  "Sid": "Deny unencrypted upload (require --sse)",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::BUCKET_NAME/*",
  "Condition": {
    "StringNotEquals": {
      "s3:x-amz-server-side-encryption": "AES256"
    }
  }
}
I have found that having this in the policy causes issues with some AWS services that write encrypted objects into S3. So I removed it from the policy and instead enabled default encryption in the bucket properties.
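If you manage the bucket with CloudFormation as mentioned above, default encryption can be declared there instead; a minimal sketch (the logical resource name is a placeholder):
"MyBucket": {
  "Type": "AWS::S3::Bucket",
  "Properties": {
    "BucketEncryption": {
      "ServerSideEncryptionConfiguration": [
        {
          "ServerSideEncryptionByDefault": {
            "SSEAlgorithm": "AES256"
          }
        }
      ]
    }
  }
}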
Then I added kms:Decrypt and kms:GenerateDataKey to my role as described here, although I'm not 100% sure I needed to do that. But once I did all that, my
Failed to write data: Access Denied
error was resolved.
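For reference, the role addition it mentions would be a statement along these lines (the key ARN is a placeholder, and it only matters if the bucket uses a KMS key rather than AES256):
{
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "arn:aws:kms:REGION:ACCOUNT_ID:key/KEY_ID"
}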

Enable one-click S3 file download without content-disposition header or access to metadata

I am looking to enable users to download files from S3 with one click, directly from my site, without going to the destination URL and then right-clicking to finally download the original.
From what I understand, to do this in HTML I need to write a Content-Disposition header as metadata on the file before it's stored on S3, but I do not have access to S3 at the time of file generation.
Is there any way to force the browser to download a file in one click anyway, without sending users to the S3 link to download it manually?
Although it seems browser security prevents a one-click download by any means other than the content-disposition header, as long as you have access to S3 there is a way to request the file with an updated header using a signed URL.
In my case, for Rails, all it takes is the S3 gem and an IAM user that has Get and Put permissions on your bucket.
Your IAM policy would look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::YOUR_BUCKET/*"
    }
  ]
}
Then, just:
require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(
  credentials: Aws::Credentials.new(KEY, SECRET),
  region: REGION_CODE
)
obj = s3.bucket(BUCKET).object(AWS_KEY)
obj.presigned_url(:get, response_content_disposition: 'attachment')
That will return a URL that already has the correct header, and the file will auto-download in the browser.
Your syntax will differ by language, but using an S3 signed URL should be broadly similar in any SDK.
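For instance, a rough JavaScript equivalent with the AWS SDK for JavaScript (v2) might look like this (bucket, key, and region are placeholders):
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'us-east-1' });

// Presign a GET that overrides the Content-Disposition of the response,
// so the browser downloads the file instead of navigating to it.
var url = s3.getSignedUrl('getObject', {
  Bucket: 'YOUR_BUCKET',
  Key: 'path/to/file.pdf',
  ResponseContentDisposition: 'attachment'
});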

Getting array from remote server location using Javascript

I have a URL that, when I hit it with a GET request, should return an array to my JavaScript.
I have created a doGet() on the remote server which returns JSON.stringify(array);
I tried the code below. Can anyone tell me how I can get that array?
fetch(myUrl, {
  method: 'get',
  headers: { 'content-type': 'application/x-www-form-urlencoded' },
  mode: 'no-cors'
}).then(function (response) { console.log(response); });
You need CORS permission to read data from a different origin, so do not set mode: 'no-cors'.
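Assuming the remote server does send an appropriate Access-Control-Allow-Origin header for your page's origin, a corrected call would be roughly:
fetch(myUrl)
  .then(function (response) {
    return response.json(); // parses the JSON.stringify(array) payload from doGet()
  })
  .then(function (array) {
    console.log(array); // the array returned by the remote server
  });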
If you are writing an extension page (not a content script), such as a background page, popup, or options page, then you can request cross-origin permissions:
By adding hosts or host match patterns (or both) to the permissions section of the manifest file, the extension can request access to remote servers outside of its origin.
{
  "name": "My extension",
  ...
  "permissions": [
    "https://www.google.com/"
  ],
  ...
}
Aside: you are making a GET request, so there is no request body, and you should not describe the type of a body that does not exist; the content-type: application/x-www-form-urlencoded header serves no purpose here.

IAM user can only access S3 bucket when bucket is set to public

I've been trying to test my first Lambda function; however, I'm only able to test it successfully when the bucket policy is set to public. I would like to grant access to my IAM user only, but whenever I try to do so, I receive an Access Denied error. I can confirm that the IAM user has Administrator Access.
Below is the relevant snippet from the bucket policy where I set the Principal to my IAM user's ARN, which results in the "Access Denied" error:
"Principal": {
"AWS": "arn:aws:iam::12_DIGIT_USER_ID:user/adminuser"
}
Setting the Principal to public, like below, allows the lambda to run successfully:
"Principal": {
"AWS": "*"
}
Clearly I want to avoid having a public bucket, and the solution according to every blog post and Stack Overflow question seems to be a bucket policy like the first snippet above, but I just cannot figure out why it's not working for me. Any help would be greatly appreciated.
The problem is that you are confusing user permissions with resource permissions.
Create a policy that grants the required S3 permissions on the bucket resource, attach it to a role that has a trust relationship with lambda.amazonaws.com, and then assign that role to the Lambda function.
Hope it helps.
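For illustration, the trust relationship on that role would look something like this (the permissions policy attached to the role then grants the S3 actions the function needs):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}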
When granting permission to access resources in Amazon S3:
If you wish to grant public access, use a Bucket Policy on the bucket itself
If you wish to grant permissions to specific users or groups of users, then add the permissions directly to the User/Group within IAM (no principal required)
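For example, an identity-based policy attached to the user or group needs no Principal element at all (the bucket name is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR_BUCKET/*"
    }
  ]
}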

Static website private content Amazon S3 and Cloudfront - css, js and images not showing

I have set the ACL for the origin access identity on all objects to read, and I have set up the bucket policy for the OAI. The only way I can get the CSS, or anything else apart from the HTML, to work is if I reference it in index.html with the full signed URL, i.e. domain name/css/main.css?parameters-of-signed-url.
I have ensured that all files have the correct content type.
Is this standard practice? Do I have to reference every image, CSS, and JS file this way, with a signed URL?
I have been searching for days on this, so any help would be greatly appreciated. Thanks in advance.
bucket policy:
{
  "Version": "2012-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "Grant a CloudFront Origin Access Identity access to support private content",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity OAI_ID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
My workaround is this: I figure that I don't need to protect my CSS, image, and JS files. I created a new bucket, placed them all in there, made them public, and then referenced them from my private site. This works, and it will probably suit me, as I will be creating more buckets that can reference the same files.
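The policy on that separate assets bucket would be a plain public-read statement, something like this (the bucket name is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForSharedAssets",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-shared-assets-bucket/*"
    }
  ]
}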
