Allowing users to upload content to S3 - JavaScript

I have an S3 bucket named BUCKET in region BUCKET_REGION. I'm trying to allow users of my web and mobile apps to upload image files to this bucket, provided that they meet certain restrictions based on Content-Type and Content-Length (namely, I want to allow only JPEGs under 3 MB to be uploaded). Once uploaded, the files should be publicly accessible.
Based on fairly extensive digging through AWS docs, I assume that the process should look something like this on my frontend apps:
const a = await axios.post('my-api.com/get_s3_id');
const b = await axios.put(`https://${BUCKET}.amazonaws.com/${a.id}`, {
// ??
headersForAuth: a.headersFromAuth,
file: myFileFromSomewhere // i.e. HTML5 File() object
});
// now can do things like <img src={`https://${BUCKET}.amazonaws.com/${a.id}`} />
// UNLESS the file is over 3 MB or not an image/jpeg, in which case I want errors to be thrown
where on my backend API I'd be doing something like
import aws from 'aws-sdk';
import uuid from 'uuid';
app.post('/get_s3_id', (req, res, next) => {
// do some validation of request (i.e. checking user Ids)
const s3 = new aws.S3({region: BUCKET_REGION});
const id = uuid.v4();
// TODO: do something with s3 so that anyone can upload pictures under 3 MB with the S3 key === id
res.json({id, additionalAWSHeaders});
});
What I'm not sure about is what exact S3 methods I should be looking at.
Here are some things that don't work:
I've seen a lot of mentions of (a very old) API accessible with s3.getSignedUrl('putObject', ...). However, this doesn't seem to reliably support setting a Content-Length -- at least not anymore. (See https://stackoverflow.com/a/28699269/251162.)
I've also seen a closer-to-working example using an HTTP POST with form-data API that is also very old. I guess this might get it done if there are no alternatives, but I am concerned that it is no longer the "right" way to do things -- additionally, it seems to do a lot of manual signing etc. and doesn't use the official Node SDK. (See https://stackoverflow.com/a/28638155/251162.)

I think what might be better for this case is POSTing directly to S3, skipping your backend server.
What you can do is define a policy that explicitly specifies what can be uploaded and where. This policy is then signed using an AWS secret access key (with AWS Signature Version 4).
An example of using the policy and signature is viewable in the AWS docs.
For your use case you can specify conditions like:
conditions: [
  ['content-length-range', 0, 3000000],
  ['starts-with', '$Content-Type', 'image/']
]
This will limit uploads to 3 MB, and Content-Type to values that begin with image/.
Additionally, you only have to generate the signature for your policy once (or whenever it changes), which means you don't need a request to your server to get a valid policy -- you can just hardcode it in your JS. When/if you need to update it, regenerate the policy and signature and then update the JS file.
edit: There isn't a method for this in the SDK, as it's meant as a way of POSTing directly from a form on a webpage, i.e. it can work with no JavaScript.
edit 2: Full example of how to sign a policy using standard NodeJS packages:
import crypto from 'crypto';
const AWS_ACCESS_KEY_ID = process.env.AWS_ACCESS_KEY_ID;
const AWS_SECRET_ACCESS_KEY = process.env.AWS_SECRET_ACCESS_KEY;
const ISO_DATE = '20190728T000000Z';
const DATE = '20190728'; // must match the date portion of ISO_DATE
const REGION = process.env.AWS_DEFAULT_REGION || 'eu-west-1';
const SERVICE = 's3';
const BUCKET = 'your_bucket';
if (!AWS_ACCESS_KEY_ID || !AWS_SECRET_ACCESS_KEY) {
  throw new Error('AWS credentials are missing');
}
const hmac = (key, string, encoding) => {
  return crypto.createHmac("sha256", key).update(string, "utf8").digest(encoding);
};
const policy = {
  expiration: '2022-01-01T00:00:00Z',
  conditions: [
    { bucket: BUCKET },
    ['starts-with', '$key', 'logs'],
    ['content-length-range', '0', '10485760'],
    { 'x-amz-date': ISO_DATE },
    { 'x-amz-algorithm': 'AWS4-HMAC-SHA256' },
    { 'x-amz-credential': `${AWS_ACCESS_KEY_ID}/${DATE}/${REGION}/${SERVICE}/aws4_request` },
    { 'acl': 'private' }
  ]
};
function aws4_sign(secret, date, region, service, string_to_sign) {
  // Derive the signing key via the SigV4 key-derivation chain, then sign.
  const date_key = hmac("AWS4" + secret, date);
  const region_key = hmac(date_key, region);
  const service_key = hmac(region_key, service);
  const signing_key = hmac(service_key, "aws4_request");
  const signature = hmac(signing_key, string_to_sign, "hex");
  return signature;
}
const b64 = Buffer.from(JSON.stringify(policy)).toString('base64');
console.log(`b64 policy: \n${b64}`);
const signature = aws4_sign(AWS_SECRET_ACCESS_KEY, DATE, REGION, SERVICE, b64);
console.log(`signature: \n${signature}\n`);
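For completeness, here is a rough browser-side sketch that consumes the policy and signature generated above. Field names follow the S3 POST policy documentation; the key, the fileInput element, and the endpoint are illustrative assumptions, and every field value must satisfy the policy's conditions:

// Browser-side sketch: POST the file with the signed policy (illustrative values).
const form = new FormData();
form.append('key', 'logs/my-upload.jpg'); // must satisfy the starts-with $key condition
form.append('acl', 'private');
form.append('x-amz-algorithm', 'AWS4-HMAC-SHA256');
form.append('x-amz-credential', `${AWS_ACCESS_KEY_ID}/${DATE}/${REGION}/${SERVICE}/aws4_request`);
form.append('x-amz-date', ISO_DATE);
form.append('policy', b64); // the base64-encoded policy from above
form.append('x-amz-signature', signature);
form.append('file', fileInput.files[0]); // the file must be the last field
await fetch(`https://${BUCKET}.s3.${REGION}.amazonaws.com/`, { method: 'POST', body: form });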

You need to get familiar with Amazon Cognito, and especially with identity pools.
Using Amazon Cognito Sync, you can retrieve the data across client platforms, devices, and operating systems, so that if a user starts using your app on a phone and later switches to a tablet, the persisted app information is still available for that user.
Read more here: Cognito identity pools
Once you create a new identity pool, you can reference it while using the S3 JavaScript SDK, which will allow you to upload content without exposing any credentials to the client.
Example here: Uploading to S3
Please read through all of it, especially the section "Configuring the SDK".
The second part of your puzzle - validations.
I would implement client-side validation (if possible) to avoid a round trip before surfacing an error. If you instead validate on S3 or in AWS Lambda, you wait until the file reaches AWS -- network latency -- before getting an error back.
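As a minimal sketch of that client-side check (assuming an HTML5 File object, as in the question; S3-side enforcement is still needed, since file.type is only advisory):

// Minimal pre-upload check on an HTML5 File object (illustrative).
function validateImage(file) {
  const MAX_BYTES = 3 * 1024 * 1024; // 3 MB, per the question's restriction
  if (file.type !== 'image/jpeg') {
    throw new Error('Only JPEG images are allowed');
  }
  if (file.size > MAX_BYTES) {
    throw new Error('File must be smaller than 3 MB');
  }
}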

This is something I know we have in our project, so I'll show you part of the code:
You first need to POST to your own server to get the credentials for the upload; from that you return the params the client uses to upload to S3.
These are the params you send to the AWS S3 service; you will need the bucket, the upload path, and the file itself:
let params = {
  Bucket: s3_bucket,
  Key: upload_path,
  Body: file_itself
};
This is the code I have for the actual upload to S3:
config.credentials = new AWS.Credentials(credentials.accessKeyId,
  credentials.secretAccessKey, credentials.sessionToken);
let s3 = new AWS.S3(config);
return s3.upload(params, options).on("httpUploadProgress", handleProgress);
All of those credential items come from your backend, of course.
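handleProgress isn't shown above; a minimal hypothetical version could look like this (you can also call .promise() on the returned managed upload to await completion):

// Hypothetical progress handler: logs percent complete as the SDK reports progress.
function handleProgress(evt) {
  const percent = Math.round((evt.loaded / evt.total) * 100);
  console.log(`Upload progress: ${percent}%`);
}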

On the backend you need to generate a timed, presigned URL and send that URL to the client for accessing the S3 object. Depending on your backend implementation technology you can use the AWS CLI or the SDKs (e.g. for Java, .NET, Ruby, or Go).
Please refer to the CLI docs and the SDK docs.
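For example, with the v2 aws-sdk for Node (a sketch; bucket, key, and region are placeholders), generating a timed upload URL looks like this:

// Generate a presigned PUT URL valid for 15 minutes (aws-sdk v2).
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' }); // placeholder region
const url = s3.getSignedUrl('putObject', {
  Bucket: 'your-bucket',  // placeholder
  Key: 'some-object-key', // placeholder
  Expires: 900            // seconds until the URL expires
});
// Send `url` to the client, which PUTs the file body to it.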
A content-size restriction is not directly supported in link generation; the link just relays the access rights of the AWS user who signed it.
To restrict file size on upload via a policy, you have to create a CORS policy on the bucket and use an HTTP POST upload. Please see this link.
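For illustration, applying such a CORS policy from Node might look like this (a sketch using the v2 aws-sdk; the allow-all values should be tightened for production):

// Sketch: set a bucket CORS policy that permits browser POST uploads (aws-sdk v2).
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' }); // placeholder region
s3.putBucketCors({
  Bucket: 'your-bucket', // placeholder
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ['*'], // restrict to your app's origin in production
      AllowedMethods: ['POST', 'GET'],
      AllowedHeaders: ['*'],
      MaxAgeSeconds: 3000
    }]
  }
}, (err) => { if (err) console.error(err); });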

Your server acts like a proxy, and is also responsible for authorization, validation, etc.
Some code snippet:
upload(config, file, cb) {
  const fileType = ''; // e.g. pass an extension with the request, derive it from the file, or leave empty
  const key = `${uuid.v4()}${fileType}`; // generate a unique file name
  const s3 = new AWS.S3();
  const s3Params = {
    Bucket: config.s3_bucket,
    Key: key,
    Body: file.buffer
  };
  s3.putObject(s3Params, cb);
}
and then you can send the key to the client and provide further access.
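A rough sketch of wiring that helper into an Express route (multer, the route path, and the 3 MB limit are illustrative assumptions, not part of the original answer):

// Hypothetical Express route around the upload() helper above.
const express = require('express');
const multer = require('multer'); // with no storage option, files are kept in memory

const app = express();
const parse = multer({ limits: { fileSize: 3 * 1024 * 1024 } }); // reject files over 3 MB

app.post('/upload', parse.single('file'), (req, res) => {
  // req.file.buffer matches the file.buffer the helper reads
  upload({ s3_bucket: 'your-bucket' }, req.file, (err) => {
    if (err) return res.status(500).json({ error: err.message });
    res.json({ ok: true });
  });
});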

Related

Connect and update Azure Blob with Blob-specific SAS Token

What I am trying to do: I have a client-side (browser, not Node.js) JS app that uploads files to Blob Storage with the @azure/storage-blob package. To do so, it fetches a SAS token from an Azure Function for a specific blob. The blob is created by the Function on each request, and a SAS token is returned to the client. Blob and SAS generation works, and I can download the blob when its URL with the SAS is opened in a browser.
Now what does not work is when I try to connect with the BLOB SAS (not Storage Account SAS or Connection String) to the Storage Account. The code below works when used with SAS from the whole Storage Account, but I do not want to give that much of permissions. I do not understand why a SAS token can be created for a specific Blob, if it is not possible to connect to it via Blob Service Client.
It is possible to create a Read-only SAS token for the whole storage account to get the connection up and running. But where would the Blob SAS go afterwards, so that the Blob can be accessed?
There is something fundamental that I seem to miss, so how can this be accomplished?
const url = `https://${storageName}.blob.core.windows.net/${sasToken}`; // sas for blob from the azure function
// const url = `https://${storageName}.blob.core.windows.net${containerSas}`; // sas from container
// const url = `https://${storageName}.blob.core.windows.net/${containerName}/${fileName}${sasToken}`; // does not work either
const blobService = new BlobServiceClient(url);
await this.setCors(blobService);
// get Container
const containerClient: ContainerClient = blobService.getContainerClient(containerName);
// get client
const blobClient = containerClient.getBlockBlobClient(fileName);
const exists = await blobClient.exists();
console.log('Exists', exists);
// set mimetype as determined from browser with file upload control
const options: BlockBlobParallelUploadOptions = { blobHTTPHeaders: { blobContentType: file.type } };
// upload file
await blobClient.uploadBrowserData(file, options);
EDIT:
The SAS token for the Blob:
?sv=2018-03-28&sr=b&sig=somesecret&se=2021-07-04T15%3A14%3A28Z&sp=racwl
The CORS method, though I can confirm that it works when I use the global storage account SAS:
private async setCors(blobService: BlobServiceClient): Promise<void> {
  var props = await blobService.getProperties();
  props.cors = [{
    allowedOrigins: '*',
    allowedMethods: '*',
    allowedHeaders: '*',
    exposedHeaders: '*',
    maxAgeInSeconds: 3600
  }];
}
Errors:
When using the Blob SAS, at the setCors/getProperties method: 403 (Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.)
When using the following service url, at the setCors/getProperties method: https://${storageName}.blob.core.windows.net/${containerName}/${fileName}${sasToken} => RestError: The resource doesn't support specified Http Verb.
When using a Storage Account SAS with only READ permissions, when accessing the blob (blob.exists()): 403 (This request is not authorized to perform this operation using this resource type.) (makes sense, but then I would like to use the Blob-specific SAS)
The reason you're running into this error is because you're trying to set CORS using a SAS token created for a blob (which is a Service SAS). CORS operation is a service level operation and for that you need to use a SAS token obtained at the account level. In other words, you will need to use an Account SAS. Any SAS token created for either blob container or blob is a Service SAS token.
Having said this, you really don't need to set CORS properties on the storage account in each request. This is something you can do at the time of account creation.
Once you remove the call to the setCors method from your code, it should work fine.
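If you do want to set CORS programmatically as a one-time setup, a sketch with account-level credentials might look like this (account name and key are placeholders):

// Sketch: one-time CORS setup with account-level credentials (placeholder names).
const { BlobServiceClient, StorageSharedKeyCredential } = require('@azure/storage-blob');
const cred = new StorageSharedKeyCredential('mystorageaccount', process.env.STORAGE_ACCOUNT_KEY);
const service = new BlobServiceClient('https://mystorageaccount.blob.core.windows.net', cred);
service.setProperties({
  cors: [{
    allowedOrigins: '*',   // restrict to your app's origin in production
    allowedMethods: 'GET,PUT,POST',
    allowedHeaders: '*',
    exposedHeaders: '*',
    maxAgeInSeconds: 3600
  }]
}).then(() => console.log('CORS rules saved'));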
Considering you're working with just a Blob SAS, you can simplify your code considerably by creating an instance of BlockBlobClient directly using the SAS URL. For example, your code could be as simple as:
const url = `https://${storageName}.blob.core.windows.net/${containerName}/${fileName}${sasToken}`;
const blobClient = new BlockBlobClient(url);
//...do operations on blob using this blob client

Why do browsers ignore content-disposition headers for file names sometimes?

The Next.js/React application I'm working on utilizes Firebase's Cloud Storage to store .doc/.docx/.pdf files. I want to be able to dynamically change the suggested file name in the browser document viewer on download; however, I can only get it to work sometimes. Since I want to keep the original file name the same, I cannot permanently change the metadata in Cloud Storage either.
I have found that requesting a signed url from cloud storage and adding a responseDisposition property only works if the original file name doesn't include a '.pdf' or '.docx' in the title.
Here is my server handler code that requests the signed url and sends it back to the client:
const {firebaseInit} = require('../../firebase-admin-init');
const fetchResumeLink = async (req,res) => {
const {documentPath, dynamicName} = req.body;
const bucket = firebaseInit.storage().bucket();
const file = bucket.file(documentPath);
const today = new Date();
const tomorrow = new Date();
tomorrow.setDate(today.getDate()+1);
const config = {
action: 'read',
responseDisposition: `attachment; filename=Resume_for_${dynamicName}.pdf`,
expires: tomorrow
}
file.getSignedUrl(config, (err, url) => {
if (err) {
console.error(err);
res.status(500).send(err);
} else {
res.setHeader('Content-Type', 'application/pdf')
res.status(200).send(url);
}
});
}
This method only works in Chrome if the original file is stored at a path like /bucket/folder/obj, but if it is at /bucket/folder/obj.pdf it doesn't seem to work anymore. In Firefox I ran across an instance where the tab displayed the correct file name, but when prompted to download the file, the suggested name was the original one.
Does anyone know why this happens? Is there any way to get the browser document readers not to ignore the Content-Disposition headers?
Also open to any other methods to dynamically generate a file's saved name.
If a ContentDisposition header is set on the object, it overrides the response-content-disposition query parameter.
So my guess is you're using a library or tool that sets the ContentDisposition on the metadata when you upload objects with certain extensions (like PDF) known to need non-inline display.
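One way to confirm that is to inspect the stored object's metadata with the Admin SDK from the question (a sketch, run inside an async function; the path is a placeholder, and the clearing step is optional since you mentioned not wanting to change stored metadata permanently):

// Inspect the stored object's metadata; a fixed contentDisposition there
// takes precedence over the signed URL's responseDisposition.
const file = firebaseInit.storage().bucket().file('folder/obj.pdf'); // placeholder path
const [metadata] = await file.getMetadata();
console.log(metadata.contentDisposition);
// Clearing it lets responseDisposition on the signed URL take effect again:
await file.setMetadata({ contentDisposition: null });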

How to get the refresh token?

I need to use the Google Play Android API. I followed a lot of instructions to get connected with the API, but I'm blocked at one step (Authorization documentation).
Exactly at step 4, where they say:
Send a POST with these parameters:
grant_type=authorization_code
code=<the code from the previous step>
client_id=<the client ID token created in the APIs Console>
client_secret=<the client secret corresponding to the client ID>
redirect_uri=<the URI registered with the client ID>
To be specific, I use Serverless and Node. How can I get my refresh token from https://accounts.google.com/o/oauth2/token, please?
Thanks a lot, and sorry for my English ^^.
Sorry for the oversight -- my Serverless setup is just this:
# serverless.yml
service: scrapper-app
provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-3
functions:
  app:
    handler: index.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
and my JS is just this, too:
//index.js
const serverless = require('serverless-http');
const express = require('express')
const app = express()
//API
const { google } = require('googleapis');
const oauth2Client = new google.auth.OAuth2(
  IDCLient,
  Secret,
  'https://accounts.google.com/o/oauth2/auth',
);
const scopes = 'https://www.googleapis.com/auth/androidpublisher';
const url = oauth2Client.generateAuthUrl({
  access_type: 'offline',
  scope: scopes
});
// GET
app.get('/', function (req, res) {
  res.send('Scrapper Rs!');
})
module.exports.handler = serverless(app);
I really don't know how to do an HTTP POST using Node and Serverless. I succeeded with a database (with curl), but not with POSTing to a URL.
I haven't used Google authentication, but I think you need to use access_type = offline:
access_type: Recommended. Indicates whether your application can refresh access tokens when the user is not present at the browser. Valid parameter values are online, which is the default value, and offline.
Set the value to offline if your application needs to refresh access tokens when the user is not present at the browser. This is the method of refreshing access tokens described later in this document. This value instructs the Google authorization server to return a refresh token and an access token the first time that your application exchanges an authorization code for tokens.
To set this value in PHP, call the setAccessType function:
$client->setAccessType('offline');
Source: https://developers.google.com/identity/protocols/OAuth2WebServer
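Since the question uses Node with googleapis rather than PHP, the equivalent exchange might look like this (a sketch, run inside an async function; it assumes the oauth2Client from the question is constructed with a real redirect URI as its third argument, and codeFromRedirect is the code Google sends back):

// Exchange the authorization code for tokens; with access_type: 'offline',
// the first exchange also returns a refresh_token.
const { tokens } = await oauth2Client.getToken(codeFromRedirect);
console.log(tokens.refresh_token); // store this securely for later refreshes
oauth2Client.setCredentials(tokens);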

Cannot call AWS StepFunctions in a browser with AWS javascript SDK because of CORS

I am trying to use AWS StepFunctions API in a browser (angular, using aws javascript API).
Using this code:
import * as StepFunctions from "aws-sdk/clients/stepfunctions";
let sf = new StepFunctions({apiVersion: '2016-11-23'});
var request: GetExecutionHistoryInput = {
executionArn: executionArn,
maxResults: 1000,
reverseOrder: false
}
sf.getExecutionHistory(request).promise()
I get this error in the browser:
OPTIONS https://states.eu-west-1.amazonaws.com/ 404 (Not Found)
Failed to load https://states.eu-west-1.amazonaws.com/: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:4200' is therefore not allowed access.
Does it mean that AWS StepFunctions is not ready to be used on the browser side ?
If it's true, where in the AWS documentation is it documented ?
Docs are quite difficult to understand when trying to discern between the standard Javascript SDK, and the Javascript SDK for browsers.
As the question surmises, StepFunctions are effectively unavailable via the browser.
Here is an official statement in a github issue comment:
The AWS Step Functions service does not support access via CORS
requests, so it cannot be used from the browser.
https://github.com/aws/aws-sdk-js/issues/1334#issuecomment-276215833
Perhaps if the Browser SDK didn't link directly to the standard API reference there wouldn't be so much confusion.
CORS support was added to AWS Step Functions in May 2021.
So now it's possible to use @aws-sdk/client-sfn; here is some very basic TypeScript code:
import { SFNClient, DescribeStateMachineForExecutionCommand } from '@aws-sdk/client-sfn';
const region = 'us-east-1';
const config = {
  region,
  credentials: {
    accessKeyId: 'xxxx',      // your temporary cred
    secretAccessKey: 'xxxxx', // your temporary cred
  },
};
const client = new SFNClient(config);
async function getDataByExecutionArn(arn: string, onChunkRead: any) {
  const command = new DescribeStateMachineForExecutionCommand({ executionArn: arn });
  return await client.send(command);
}
export default getDataByExecutionArn;
Just be sure you don't put those credentials in JavaScript running in a browser.

Authenticating to S3 from Meteor Mobile Application

I have a Meteor mobile app that accesses a lot of photos stored in my S3 bucket. These photos are user-uploaded and change frequently. I don't want these photos to be accessible to anyone who isn't using my app (i.e., these photos are only viewable from my application, and going to the URL directly in a browser won't load them).
What is the best way to accomplish this? AWS Cognito seems to be the logical choice, but it doesn't seem easy to implement and I'm not exactly sure how to authenticate to AWS from the client once it gets a Cognito identity.
My other thought was putting a read only AWS Key on every url and authenticating that way, but that's almost pointless. It would be really easy to find out the key and secret.
EDIT:
To be specific, the URLs for the images are in a Mongo collection and I pass them into a template. So, the S3 resources are just loaded up with an image tag (<img src="). Something like AWS STS sounds like a great option, but I don't know of a way to pass the tokens in the headers when I'm loading them like this. Doing them as a pre-signed query string seems inefficient.
Another option is to restrict access with the referrer header, like this issue. But like Martijn said, it isn't really a secure way of doing it.
After some research and testing I solved this myself. My ultimate solution was to use the Referer header to limit access to my S3 bucket. I created a more secure and detailed solution (see below), but it came with a performance hit that wouldn't work for my app. My app is based around viewing photos and videos, and not being able to have them load near-instantly wasn't in the cards. Although, I feel like it could be a sufficient solution for most use cases. Because my app isn't highly sensitive, the Referer header is sufficient for me. Here is how to use the HTTP Referer header to limit access to a bucket.
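For illustration, a Referer-based bucket policy might look like this (bucket name and app domain are placeholders; keep in mind the Referer header can be spoofed, so this is not real security):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowGetFromMyApp",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::your-bucket/*",
    "Condition": {
      "StringLike": { "aws:Referer": ["https://yourapp.example.com/*"] }
    }
  }]
}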
Solution using Amazon's STS:
First, you need to have the AWS SDK on both the server and the client. There were no up-to-date packages for Meteor available, so I created my own. (I'll publish it shortly and put a link here once I do.)
On the server, you must use credentials that have the ability to assume a role. The role to be assumed must have a Trust Relationship with the user that is assuming the role. Article on using IAM. - Article on using credentials with SDK
In the server.js file I created a Meteor method that I can call from the client. It first checks if a user is logged in. If that's true, it checks whether the user's current temporary credentials are expiring in the next 5 minutes. If they are, I issue new credentials and either write them to the user document or return them in a callback. If they aren't expiring in the next 5 minutes, I return the current temporary credentials.
You must use Meteor.bindEnvironment for the callback. See the docs.
Meteor.methods({
  'awsKey': function(){
    if (Meteor.userId()){
      var user = Meteor.userId();
      var now = moment(new Date());
      var userDoc = Meteor.users.findOne({_id: user});
      var expire = moment(userDoc.aws.expiration);
      var fiveMinutes = 5 * 60 * 1000;
      var fut = new Future();
      if(moment.duration(expire.diff(now))._milliseconds < fiveMinutes){
        var params = {
          RoleArn: 'arn:aws:iam::556754141176:role/RoleToAssume',
          RoleSessionName: 'SessionName',
          DurationSeconds: 3600 // 1 hour
        };
        var sts = new AWS.STS();
        sts.assumeRole(params, Meteor.bindEnvironment((err, data) => {
          if (err){
            fut.throw(new Error(err));
          }else{
            Meteor.users.update({_id: user}, {$set: {aws: {accessKey: data.Credentials.AccessKeyId, secretKey: data.Credentials.SecretAccessKey, sessionToken: data.Credentials.SessionToken, expiration: data.Credentials.Expiration}}});
            fut.return(data.Credentials);
          }
        }));
        return fut.wait();
      }else{
        return userDoc.aws;
      }
    }
  }
});
Then you can invoke this method manually or in a setInterval on Meteor.startup.
Meteor.setInterval(function(){
  if(Meteor.userId()){
    Meteor.call('awsKey', function(err, data){
      if (err){
        console.log(err);
      }else{
        if(data.accessKey){
          Session.set('accessKey', data.accessKey);
          Session.set('secretKey', data.secretKey);
          Session.set('sessionToken', data.sessionToken);
        }else{
          Session.set('accessKey', data.AccessKeyId);
          Session.set('secretKey', data.SecretAccessKey);
          Session.set('sessionToken', data.SessionToken);
        }
      }
    });
  }
}, 300000); // 5-minute interval
This way just sets the keys in a Session variable from the callback. You could do this by querying the user's document to get them as well.
Then, you can use these temporary credentials to get a signed URL for the object you are trying to access in your bucket.
I put this in a template helper by passing the object name to it in the template:
{{getAwsUrl imageName}}
Template.templateName.helpers({
  'getAwsUrl': function(filename){
    var accessKey = Session.get('accessKey');
    var secretKey = Session.get('secretKey');
    var sessionToken = Session.get('sessionToken');
    var params = {Bucket: 'bucketName', Key: filename, Expires: 6000};
    var s3 = new AWS.S3({accessKeyId: accessKey, secretAccessKey: secretKey, sessionToken: sessionToken, region: 'us-west-2'});
    // Called without a callback, getSignedUrl returns the URL synchronously,
    // so the helper can return it directly (the original callback version
    // relied on an undeclared `result` variable).
    return s3.getSignedUrl('getObject', params);
  }
});
That's all there is to it! I'm sure this can be refined to be better, but this is just what I came up with while testing it really fast. Like I said, it should work in most use cases. My particular one didn't. For some reason, when I toggled visibility: visible|hidden; on an img whose src was one of these signed URLs, it took a lot longer to load than just setting the URL directly. It must be because Amazon has to validate the signed URL on their side before returning the object.
Thanks to Mikkel for the direction.
