Google Drive API V3: Creating file permissions for a domain - javascript

I'm trying to add permissions to a file via Google Drive's API v3 and I ran into the error below. I want to allow requests from mta.io, my site, to be able to read the file. The error seems to depend on which domain I pass in the body of the request; for example, example.com works fine and permissions are granted to it. Do I need to whitelist my domain in order to give it permissions to the file?
Works:
{
  "role": "reader",
  "type": "domain",
  "domain": "example.com"
}
Doesn't work:
{
  "role": "reader",
  "type": "domain",
  "domain": "mta.io"
}
Error:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalid",
        "message": "The specified domain is invalid or not applicable for the given permission type.",
        "locationType": "other",
        "location": "permission.domain"
      }
    ],
    "code": 400,
    "message": "The specified domain is invalid or not applicable for the given permission type."
  }
}
I'm using the try it feature found on the API's site.

Figured it out: you can only use G Suite domains.
It's a bummer, but in order to share file permissions exclusively with a domain you need to have a G Suite account and verify that you own that domain; the domain needs to be linked with your G Suite account.
https://developers.google.com/drive/v3/web/about-permissions#types_and_values
For example, a permission with a type of domain may have a domain of thecompany.com, indicating that the permission grants the given role to all users in the G Suite domain thecompany.com.
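For what it's worth, here is a minimal sketch of creating that same permission with the Node.js googleapis client, assuming a domain that is managed in G Suite / Google Workspace; the fileId and auth client are placeholders, not values from the question:

const { google } = require("googleapis");

// Sketch: grant read access to everyone in a Workspace-managed domain.
async function shareWithDomain(auth, fileId) {
  const drive = google.drive({ version: "v3", auth });
  const res = await drive.permissions.create({
    fileId,
    requestBody: {
      role: "reader",
      type: "domain",           // only works for domains managed in G Suite / Google Workspace
      domain: "thecompany.com", // must be a domain linked to your Workspace account
    },
  });
  console.log(res.data); // the created permission resource
}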

Based on this related thread, this can only be done between users in the same domain, and service accounts don't belong to any domain.
Your best option may be to create a Team Drive that the service account has access to, and perform a two-stage process (see the sketch after these steps):
Use the service account to move the file into the Team Drive: Files.update with the addParents parameter set to the Team Drive ID.
Use the domain user to move the file out of the Team Drive: Files.update with the addParents parameter set to root (or some target folder's ID) and the removeParents parameter set to the Team Drive ID.
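A minimal sketch of those two moves with the Node.js googleapis client; the file ID, Team Drive ID, and target folder ID are placeholders, and separate authenticated clients for the service account and the domain user are assumed:

const { google } = require("googleapis");

// Step 1: as the service account, move the file into the Team Drive.
async function moveIntoTeamDrive(serviceAccountAuth, fileId, teamDriveId) {
  const drive = google.drive({ version: "v3", auth: serviceAccountAuth });
  await drive.files.update({
    fileId,
    addParents: teamDriveId,
    supportsAllDrives: true, // required when touching Team Drive / shared drive items
  });
}

// Step 2: as the domain user, move the file out into their own Drive.
async function moveOutOfTeamDrive(domainUserAuth, fileId, teamDriveId, targetFolderId) {
  const drive = google.drive({ version: "v3", auth: domainUserAuth });
  await drive.files.update({
    fileId,
    addParents: targetFolderId, // e.g. "root"
    removeParents: teamDriveId,
    supportsAllDrives: true,
  });
}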
Here's another SO post which might also help: Google Drive SDK - change item sharing permissions

Related

Insert row to Google spreadsheet using JavaScript POST request

I am trying to insert a new row into my Google spreadsheet using JS from the browser console.
Using Postman, I made a POST request to this URL:
https://sheets.googleapis.com/v4/spreadsheets/1KjrAqsNzzNIyEDb4hjx846j2AVffSeLUqi0BMR_pKPM/values/Sheet1:append?key=AIzaSyAc5XWsjlFeR3omZiYnrzaEAbZAYIWUXqI
injecting this JSON:
{
  "asin": "aaaa",
  "price": "15.99",
  "title": "ratings",
  "bsr": "555523",
  "image_url": "hhhhhhh"
}
and I got this error:
{
  "error": {
    "code": 401,
    "message": "API keys are not supported by this API. Expected OAuth2 access token or other authentication credentials that assert a principal. See https://cloud.google.com/docs/authentication",
    "status": "UNAUTHENTICATED",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.ErrorInfo",
        "reason": "CREDENTIALS_MISSING",
        "domain": "googleapis.com",
        "metadata": {
          "service": "sheets.googleapis.com",
          "method": "google.apps.sheets.v4.SpreadsheetsService.AppendValues"
        }
      }
    ]
  }
}
My spreadsheet has the following columns:
asin title bsr price image_url
You would need to authorize your Postman application first using Google OAuth2, as referenced in this article.
You would need to create a Google Cloud project on console.cloud.google.com and, under APIs & Services > Enabled APIs & Services, look for the Google Sheets API.
Once you have enabled the API, click Manage and proceed to the Credentials tab to create your credentials.
Make sure that under Authorized redirect URIs you put "https://oauth.pstmn.io/v1/browser-callback".
Also, the application type should be set to Web application.
Download the JSON file, and make sure to save your client ID and client secret.
From the downloaded JSON file, save the "auth_uri" and "token_uri" (you'll need these on your Postman console as the auth URL and access token URL).
Set the client ID and client secret and use the scope "https://www.googleapis.com/auth/drive.file".
From there, once you have generated your access token, make your POST request following the sample in this article: https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/append. Replace YOUR_API_KEY with the access token that was generated earlier on your Postman console.
If you experience other issues with authorizing your account for OAuth2, go to the OAuth consent screen in your Google Cloud console, create an app, define the scope above under the non-sensitive scopes, and add your Google account under Test users.
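Once you have a token, a minimal sketch of the append request from the browser console might look like the following; the access token is a placeholder, and the row is sent in the column order from the question (note that values:append expects rows of cell values, not a keyed object):

const spreadsheetId = "1KjrAqsNzzNIyEDb4hjx846j2AVffSeLUqi0BMR_pKPM";
const accessToken = "ACCESS_TOKEN"; // placeholder: paste the OAuth2 token generated in Postman

fetch(
  "https://sheets.googleapis.com/v4/spreadsheets/" + spreadsheetId +
  "/values/Sheet1:append?valueInputOption=USER_ENTERED",
  {
    method: "POST",
    headers: {
      "Authorization": "Bearer " + accessToken,
      "Content-Type": "application/json",
    },
    // one row, in column order: asin, title, bsr, price, image_url
    body: JSON.stringify({ values: [["aaaa", "ratings", "555523", "15.99", "hhhhhhh"]] }),
  }
)
  .then((res) => res.json())
  .then(console.log)
  .catch(console.error);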

AWS MediaConvert bitrate and quality problem

When I try to start a MediaConvert job in CBR, VBR, or QVBR mode with Bitrate or MaxBitrate higher than 250,000, I get the error below:
Unable to write to output file [s3:///videos//***/original.mp4]: [Failed to write data: Access Denied]
But with the Bitrate/MaxBitrate option lower than 250,000, the transcoding job works fine, although the quality is too low. What is causing this? Do I need to upgrade the MediaConvert service, or do I need to add some additional policies somewhere? All I need is to get videos (avi, etc.) in mp4 format with the same quality on output as on input.
I was receiving the same error and found that it was related to having encryption enforced on my bucket via the bucket policy. I build my buckets with CloudFormation and had the following set in the policy:
{
  "Sid": "Deny unencrypted upload (require --sse)",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::BUCKET_NAME/*",
  "Condition": {
    "StringNotEquals": {
      "s3:x-amz-server-side-encryption": "AES256"
    }
  }
}
I have found that having this in the policy causes issues with some AWS services that write encrypted objects into S3, so I removed it from the policy and instead set default encryption in the bucket properties.
Then I added kms:Decrypt and kms:GenerateDataKey to my role, as described here, although I'm not 100% sure I needed to do that. But once I did all that, my
Failed to write data: Access Denied
error was resolved.
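Since the buckets are built with CloudFormation, here is a minimal sketch of setting that default encryption on the bucket itself instead of denying unencrypted uploads in the policy; the logical resource name is a placeholder:

"MediaBucket": {
  "Type": "AWS::S3::Bucket",
  "Properties": {
    "BucketEncryption": {
      "ServerSideEncryptionConfiguration": [
        {
          "ServerSideEncryptionByDefault": {
            "SSEAlgorithm": "AES256"
          }
        }
      ]
    }
  }
}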

Unable to Publish a Post to Facebook Page Using Their API

I was trying to publish something to a Facebook page.
From this article, it seemed really easy to publish using the Facebook API.
So I created a Facebook page, made sure that I have all the required permissions for the given auth token, opened Postman, and created a POST request to the following URL (attaching my access token as a Bearer token):
https://graph.facebook.com/2984640844138/feed?message=HeyWhatever
This gives me the following error:
{
  "error": {
    "message": "(#200) If posting to a group, requires app being installed in the group, and \\\n either publish_to_groups permission with user token, or both manage_pages \\\n and publish_pages permission with page token; If posting to a page, \\\n requires both manage_pages and publish_pages as an admin with \\\n sufficient administrative permission",
    "type": "OAuthException",
    "code": 200,
    "fbtrace_id": "D1z1soQbTE2"
  }
}
I am not sure what I am doing wrong; perhaps my request is incorrect, or I am not using Postman correctly.
This is what I am doing in Postman (screenshot below); can someone point out what I am doing wrong? Suggestions are also welcome.
I would recommend you use the Facebook Graph API Explorer, which provides you with the tools to generate an access token with the appropriate permissions and to craft your HTTP requests with ease. Generate a token there, then head to the Access Token Debugger to double-check the scopes of the token.
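For reference, once you have a Page access token (not a user token) that carries the permissions the error message lists, a minimal sketch of the same post as a plain HTTP call might look like this; the token is a placeholder and the page ID is taken from the question:

const pageId = "2984640844138";
const pageAccessToken = "PAGE_ACCESS_TOKEN"; // placeholder: a Page token with the publishing permissions

fetch(
  "https://graph.facebook.com/" + pageId + "/feed" +
  "?message=" + encodeURIComponent("HeyWhatever") +
  "&access_token=" + pageAccessToken,
  { method: "POST" }
)
  .then((res) => res.json())
  .then(console.log); // on success the response contains the new post's id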

IAM user can only access S3 bucket when bucket is set to public

I've been trying to test my first Lambda function; however, I'm only able to test it successfully when the bucket policies are set to public. I would like to provide access to my IAM user only, but whenever I try to do so, I receive an Access Denied error. I can confirm that the IAM user does have Administrator Access.
Below is the relevant snippet from the bucket policy where I set the Principal to my IAM user's ARN, which results in the "Access Denied" error:
"Principal": {
"AWS": "arn:aws:iam::12_DIGIT_USER_ID:user/adminuser"
}
Setting the Principal to public, like below, allows the lambda to run successfully:
"Principal": {
"AWS": "*"
}
Clearly I want to avoid having a public bucket, and the solution according to every blog post and StackOverflow question seems to be to set the bucket policy similar to the first code snippet above, but I just absolutely cannot figure out why it's not working for me. Any help would be greatly appreciated.
The problem is that you are confusing user permissions with resource permissions.
You just need to put the S3 permissions into an IAM policy, attach it to a role that has a trust relationship with lambda.amazonaws.com, and then assign that role to the Lambda function (a sketch is below).
Hope it helps.
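A minimal sketch of such a policy attached to the Lambda function's execution role; the bucket name is a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    }
  ]
}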
When granting permission to access resources in Amazon S3:
If you wish to grant public access, use a Bucket Policy on the bucket itself
If you wish to grant permissions to specific users or groups of users, then add the permissions directly to the User/Group within IAM (no principal required)

Manage access to an s3 folder in the browser - best practice?

I have a bucket configured as a website, and the bucket policy allows public access to all folders in the bucket except the '/_admin/' folder. Only a specific IAM user making requests is allowed access to '/_admin/'. This is for backend management of the website, so I am serving up HTML, JS, CSS, etc. to the user. Right now I am using the AWS JavaScript SDK to sign every URL for a js/css/img src/href link and then update that attribute, or create it. (This means hardly anything is getting cached, because the signature changes each time you access it.) I've proven this concept and I can access the files by signing every single URL in my web pages, but it seems an awkward way to build a website.
Is there a way I can just add some kind of access header to each page that will be included in every request? If so, will this also apply to all AJAX-type requests as well?
Here's what I came up with to solve this problem. I modified my bucket policy to deny all anonymous access to '/_admin/' unless it is from my main account ID, a specific IAM user, or unless the referrer URL matches a specific token. After the user authenticates from a publicly accessible page on my site, I generate a new token, then use the SDK to modify the bucket policy with the value of that new token. (I'll probably also add another condition with an expiration date.)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test-domain-structure/*"
    },
    {
      "Sid": "AllowAdminWithToken",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::AWS-USER-ID:user/IAM-USER",
          "arn:aws:iam::AWS-USER-ID:root"
        ]
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test-domain-structure/_admin/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": "*?t=a77Pn"
        }
      }
    }
  ]
}
In each page I use JavaScript to prepend each link/href with the new query string (?t=a77Pn, or whatever the newly generated token is).
Edit: That was really a pain; links kept breaking, so ultimately I went with the solution below, plus an added condition of an expiration date. It works much better.
Another option is to modify the bucket policy to only allow access from a certain IP address (a sketch of that condition is below). This eliminates having to modify all links/hrefs and keeps the URL clean. Still open to a better idea, but this works for now.
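A minimal sketch of that variant, reusing the Deny statement from the policy above but swapping the referer check for an IP condition; the CIDR range is a placeholder:

{
  "Sid": "AllowAdminFromKnownIp",
  "Effect": "Deny",
  "NotPrincipal": {
    "AWS": [
      "arn:aws:iam::AWS-USER-ID:user/IAM-USER",
      "arn:aws:iam::AWS-USER-ID:root"
    ]
  },
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::test-domain-structure/_admin/*",
  "Condition": {
    "NotIpAddress": {
      "aws:SourceIp": "203.0.113.10/32"
    }
  }
}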
