AWS MediaConvert bitrate and quality problem - javascript

When I start a MediaConvert job in CBR, VBR, or QVBR mode with a Bitrate or MaxBitrate higher than 250,000, I get the error below:
Unable to write to output file [s3:///videos//***/original.mp4]: [Failed to write data: Access Denied]
But with the Bitrate/MaxBitrate option lower than 250,000 the transcoding job works fine, although the quality is too low. What is causing this? Do I need to upgrade the MediaConvert service, or do I need to add some additional policies somewhere? All I need is to convert videos (AVI, etc.) to MP4 with the same quality on output as on input.
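For context, the rate-control settings being toggled here live under the H.264 codec settings of the job; a fragment roughly like this (field names follow the MediaConvert CreateJob API; the bitrate and quality values are only examples):
"VideoDescription": {
  "CodecSettings": {
    "Codec": "H_264",
    "H264Settings": {
      "RateControlMode": "QVBR",
      "MaxBitrate": 5000000,
      "QvbrSettings": { "QvbrQualityLevel": 9 }
    }
  }
}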

I was receiving the same error and found that it was related to an encryption requirement on my bucket defined in the bucket policy. I build my buckets with CloudFormation and had the following set in the policy:
{
  "Sid": "Deny unencrypted upload (require --sse)",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::BUCKET_NAME/*",
  "Condition": {
    "StringNotEquals": {
      "s3:x-amz-server-side-encryption": "AES256"
    }
  }
}
I have found that having this in the policy causes issues with some AWS services that write encrypted objects into S3, so I removed it from the policy and instead enabled default encryption in the bucket properties.
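Since the buckets are built with CloudFormation, the equivalent default-encryption setting can live on the bucket resource itself; a sketch (the resource name MyBucket is hypothetical):
"MyBucket": {
  "Type": "AWS::S3::Bucket",
  "Properties": {
    "BucketEncryption": {
      "ServerSideEncryptionConfiguration": [
        {
          "ServerSideEncryptionByDefault": { "SSEAlgorithm": "AES256" }
        }
      ]
    }
  }
}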
Then I added kms:Decrypt and kms:GenerateDataKey to my Role as described here, although I'm not 100% sure I needed to do that. But once I did all that, my
Failed to write data: Access Denied
error was resolved.
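For reference, those extra KMS permissions would be a statement on the role roughly like this (a sketch; the key ARN is a placeholder and should be scoped to your own key):
{
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "arn:aws:kms:REGION:ACCOUNT_ID:key/KEY_ID"
}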

Related

Enable one-click S3 file download without content-disposition header or access to metadata

I am looking to enable users to download files from S3 with one-click, from my site directly, without going to the destination URL and then right-clicking to finally download the original.
From what I understand, to do this in HTML I need to write Content-Disposition header metadata to the file before it's stored on S3, but I do not have access to S3 at the time of file generation.
Is there any way to force the browser to download a file in one-click anyway, without sending users to the s3 link to download it manually?
Although it seems browser security prevents one-click download in any way other than with the content header, as long as you have access to S3 there is a way to request a file with an updated header using a signed URL.
In my case, for Rails, all it takes is an S3 gem and an IAM user that has Get and Put permissions on your bucket.
Your IAM policy would look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::YOUR_BUCKET/*"
    }
  ]
}
Then, just
s3 = Aws::S3::Resource.new(
  credentials: Aws::Credentials.new(KEY, SECRET),
  region: REGION_CODE)
obj = s3.bucket(BUCKET).object(AWS_KEY)
obj.presigned_url(:get, response_content_disposition: 'attachment')
That will return a URL that already has the correct header, and the file will auto-download in the browser.
Your syntax will differ based on language, but using an S3 signed URL should be similar in any language's API.
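For example, the equivalent with the AWS SDK for JavaScript (v2) looks roughly like this (bucket, key, and region are placeholders):
const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'us-east-1' });

// Ask S3 to override Content-Disposition on this GET so the browser
// downloads the object instead of navigating to it.
const url = s3.getSignedUrl('getObject', {
  Bucket: 'YOUR_BUCKET',
  Key: 'path/to/original.mp4',
  ResponseContentDisposition: 'attachment',
  Expires: 300 // URL validity in seconds
});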

IAM user can only access S3 bucket when bucket is set to public

I've been trying to test my first Lambda function; however, I'm only able to test it successfully when the bucket policies are set to public. I would like to provide access to my IAM user only, but whenever I try to do so, I receive an Access Denied error. I can confirm that the IAM user does have Administrator Access.
Below is the relevant snippet from the bucket policy where I set the Principal to my IAM user's ARN, which results in the "Access Denied" error:
"Principal": {
"AWS": "arn:aws:iam::12_DIGIT_USER_ID:user/adminuser"
}
Setting the Principal to public, like below, allows the lambda to run successfully:
"Principal": {
"AWS": "*"
}
Clearly I want to avoid having a public bucket, and the solution according to every blog post and Stack Overflow question seems to be to set the bucket policy like the first snippet above, but I just cannot figure out why it's not working for me. Any help would be greatly appreciated.
The problem is that you are confusing user permissions with resource permissions.
You just need to put the S3 permissions in a policy, attach it to a role that has a trust relationship with lambda.amazonaws.com, and then assign that role to the Lambda function; the function runs with the role's permissions, not your IAM user's (as sketched below).
Hope it helps.
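Assuming the bucket is named YOUR_BUCKET, the role's trust policy lets Lambda assume it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
and the permissions policy attached to that role grants the S3 access (no Principal needed, since it is an identity-based policy):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR_BUCKET/*"
    }
  ]
}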
When granting permission to access resources in Amazon S3:
If you wish to grant public access, use a Bucket Policy on the bucket itself
If you wish to grant permissions to specific users or groups of users, then add the permissions directly to the User/Group within IAM (no principal required)

Google Drive API V3: Creating file permissions for a domain

I'm trying to add permissions to a file via Google Drive's API V3 and ran into the error below. I want to allow requests from mta.io, my site, to be able to read the file. The error seems to depend on which domain I pass in the body of the request; for example, example.com works fine and permissions are granted to it. Do I need to whitelist my domain in order to give it permissions to the file?
Works:
{
  "role": "reader",
  "type": "domain",
  "domain": "example.com"
}
Doesn't work:
{
  "role": "reader",
  "type": "domain",
  "domain": "mta.io"
}
Error:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalid",
        "message": "The specified domain is invalid or not applicable for the given permission type.",
        "locationType": "other",
        "location": "permission.domain"
      }
    ],
    "code": 400,
    "message": "The specified domain is invalid or not applicable for the given permission type."
  }
}
I'm using the try it feature found on the API's site.
Figured it out: you can only use G Suite domains.
It is a bummer, but in order to share a file permission exclusively with a domain you need to have a G Suite account and verify that you own that domain; the domain needs to be linked with your G Suite account.
https://developers.google.com/drive/v3/web/about-permissions#types_and_values
For example, a permission with a type of domain may have a domain of thecompany.com, indicating that the permission grants the given role to all users in the G Suite domain thecompany.com
Based on this related thread, this can only be done between users in the same domain, and service accounts don't belong to any domain.
Your best option may be to create a Team Drive that the service account has access to, and perform a two-stage process (see the sketch after these steps):
Use the service account to move the file into the team drive. Files.update with the addParents parameter set to the Team Drive ID.
Use the domain user to move the file out of the team drive. Files.update with the addParents parameter set to root (or some target folder's ID) and the removeParents parameter set to the Team Drive ID.
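A rough sketch of those two calls with the Node.js googleapis client (FILE_ID, TEAM_DRIVE_ID, and the two auth clients are placeholders; the parameters mirror the Files.update query parameters named above, and the calls run inside an async function):
const { google } = require('googleapis');

// Each step uses a client authorized as the respective identity
// (serviceAccountAuth and domainUserAuth are placeholders).
const asServiceAccount = google.drive({ version: 'v3', auth: serviceAccountAuth });
const asDomainUser = google.drive({ version: 'v3', auth: domainUserAuth });

// Step 1: the service account moves the file into the Team Drive.
await asServiceAccount.files.update({
  fileId: FILE_ID,
  addParents: TEAM_DRIVE_ID,
  supportsTeamDrives: true
});

// Step 2: the domain user moves it out again, into their own Drive.
await asDomainUser.files.update({
  fileId: FILE_ID,
  addParents: 'root',
  removeParents: TEAM_DRIVE_ID,
  supportsTeamDrives: true
});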
Here's another SO post which might also help: Google Drive SDK - change item sharing permissions

Manage access to an s3 folder in the browser - best practice?

I have a bucket configured as a website, and the bucket policy allows public access to all folders in the bucket except the '/_admin/' folder. Only a specific IAM user making requests is allowed access to '/_admin/'. This is for backend management of the website, so I am serving up HTML, JS, CSS, etc. to the user. Right now I am using the AWS JavaScript SDK to sign every URL for a js/css/img src/href link and then update that attribute, or create it. (This means hardly anything is getting cached, because the signature changes each time you access it.) I've proven this concept and I can access the files by signing every single URL in my web pages, but it seems an awkward way to build a website.
Is there a way I can just add some kind of access header to each page that will be included in every request? If so, will this also apply to all ajax type requests as well?
Here's what I came up with to solve this problem. I modified my bucket policy to deny all access to '/_admin/' unless the request comes from my root account or a specific IAM user, or unless the referrer URL matches a specific token. After the user authenticates from a publicly accessible page on my site, I generate a new token, then use the SDK to update the bucket policy with the value of that new token (see the sketch after the policy below). (I'll probably also add another condition with an expiration date.)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test-domain-structure/*"
    },
    {
      "Sid": "AllowAdminWithToken",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::AWS-USER-ID:user/IAM-USER",
          "arn:aws:iam::AWS-USER-ID:root"
        ]
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test-domain-structure/_admin/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": "*?t=a77Pn"
        }
      }
    }
  ]
}
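A rough sketch of that token rotation with the AWS SDK for JavaScript (policy is the document above parsed as an object; newToken is whatever the site generates after the user logs in):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Point the Referer condition at the freshly generated token,
// then push the updated policy back to the bucket.
policy.Statement[1].Condition.StringNotLike['aws:Referer'] = '*?t=' + newToken;

s3.putBucketPolicy({
  Bucket: 'test-domain-structure',
  Policy: JSON.stringify(policy)
}, (err) => {
  if (err) console.error('Failed to update bucket policy', err);
});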
In each page I use JavaScript to append the new query string (?t=a77Pn, or whatever the newly generated token is) to each link/href.
Edit: That was really a pain. Links kept breaking, so ultimately I went with the solution below, plus an added condition of an expiration date. Works much better.
Another option is to modify the bucket policy to only allow access from a certain IP address. This would eliminate having to modify all links/hrefs and keep the URL clean. Still open to a better idea, but this works for now.
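For that variant, the Deny statement's condition can key off the caller's IP instead of the referer; a sketch (the CIDR range is only an example):
"Condition": {
  "NotIpAddress": {
    "aws:SourceIp": "203.0.113.0/24"
  }
}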

Static website private content Amazon S3 and Cloudfront - css, js and images not showing

I have set the ACL for the origin access identity on all objects to read, and I have set up the bucket policy for the OAI. The only way I can get the CSS, or anything else apart from the HTML, to work is if I reference it with the full signed URL, i.e. domain name/css/main.css?parameters of signed url, in the index.html.
I have ensured that all files have the correct content type.
Is this standard practice? Do I have to reference every image, css, js file this way with the signed url?
I have been searching for days on this, so any help would be greatly appreciated. Thanks in advance.
bucket policy:
{
  "Version": "2012-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "Grant a CloudFront Origin Identity access to support private content",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity identity canoncal"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
My workaround is this: I figure that I don't need to protect my CSS, image, and JS files. I created a new bucket, placed them all in there, made them public, and then referenced those from my private site (a sketch of that bucket's policy is below). This works. This will probably suit me, as I will be creating more buckets that can reference the same files.
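A sketch of the public-read policy on that separate assets bucket (the bucket name is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadAssets",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::ASSETS_BUCKET/*"
    }
  ]
}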
