Connect and update Azure Blob with Blob-specific SAS Token - javascript

What I am trying to do: I have a client-side (browser, not Node.js) JS app that uploads files to Blob Storage with the @azure/storage-blob package. To do so, it fetches a SAS token from an Azure Function for a specific blob. The blob is created by the Function on each request and a SAS token is returned to the client. Blob and SAS generation work, and I can download the blob when its URL with the SAS is opened in a browser.
What does not work is connecting to the storage account with the blob SAS (not a storage account SAS or connection string). The code below works when used with a SAS for the whole storage account, but I do not want to grant that many permissions. I do not understand why a SAS token can be created for a specific blob if it is not possible to connect with it via BlobServiceClient.
It is possible to create a read-only SAS token for the whole storage account to get the connection up and running. But where would the blob SAS go afterwards, so that the blob can be accessed?
There is something fundamental that I seem to miss, so how can this be accomplished?
import { BlobServiceClient, ContainerClient, BlockBlobParallelUploadOptions } from '@azure/storage-blob';

const url = `https://${storageName}.blob.core.windows.net/${sasToken}`; // sas for blob from the azure function
// const url = `https://${storageName}.blob.core.windows.net${containerSas}`; // sas from container
// const url = `https://${storageName}.blob.core.windows.net/${containerName}/${fileName}${sasToken}`; // does not work either
const blobService = new BlobServiceClient(url);
await this.setCors(blobService);
// get Container
const containerClient: ContainerClient = blobService.getContainerClient(containerName);
// get client
const blobClient = containerClient.getBlockBlobClient(fileName);
const exists = await blobClient.exists();
console.log('Exists', exists);
// set mimetype as determined from browser with file upload control
const options: BlockBlobParallelUploadOptions = { blobHTTPHeaders: { blobContentType: file.type } };
// upload file
await blobClient.uploadBrowserData(file, options);
EDIT:
The SAS token for the Blob:
?sv=2018-03-28&sr=b&sig=somesecret&se=2021-07-04T15%3A14%3A28Z&sp=racwl
The CORS method, though I can confirm that it works when I use the global storage account SAS:
private async setCors(blobService: BlobServiceClient): Promise<void> {
  const props = await blobService.getProperties();
  props.cors = [{
    allowedOrigins: '*',
    allowedMethods: '*',
    allowedHeaders: '*',
    exposedHeaders: '*',
    maxAgeInSeconds: 3600
  }];
}
Errors:
When using the Blob SAS, at the setCors/getProperties method: 403 (Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.)
When using the following service url, at the setCors/getProperties method: https://${storageName}.blob.core.windows.net/${containerName}/${fileName}${sasToken} => RestError: The resource doesn't support specified Http Verb.
When using a Storage Account SAS with only READ permissions, when accessing the blob (blob.exists()): 403 (This request is not authorized to perform this operation using this resource type.) (makes sense, but then I would like to use the Blob-specific SAS)

The reason you're running into this error is that you're trying to set CORS using a SAS token created for a blob, which is a service SAS. Setting CORS is a service-level operation, and for that you need a SAS token obtained at the account level; in other words, you need an account SAS. Any SAS token created for a blob container or a blob is a service SAS.
Having said that, you really don't need to set CORS properties on the storage account on each request. This is something you can configure once, at the time of account creation.
Once you remove the call to the setCors method from your code, it should work fine.
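If you prefer to script that one-time CORS setup instead of doing it in the portal, a minimal sketch (run once from a trusted Node environment with the account key, never from the browser; the account name and key below are placeholders) could look like this:
import { BlobServiceClient, StorageSharedKeyCredential } from '@azure/storage-blob';

async function configureCorsOnce(): Promise<void> {
  // Placeholder credentials; keep the account key server-side only
  const credential = new StorageSharedKeyCredential('<account-name>', '<account-key>');
  const serviceClient = new BlobServiceClient('https://<account-name>.blob.core.windows.net', credential);
  await serviceClient.setProperties({
    cors: [{
      allowedOrigins: '*',
      allowedMethods: 'GET,HEAD,PUT,OPTIONS',
      allowedHeaders: '*',
      exposedHeaders: '*',
      maxAgeInSeconds: 3600
    }]
  });
}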
Considering you're working with just a blob SAS, you can simplify your code considerably by creating an instance of BlockBlobClient directly from the SAS URL. For example, your code could be as simple as:
const url = `https://${storageName}.blob.core.windows.net/${containerName}/${fileName}${sasToken}`;
const blobClient = new BlockBlobClient(url);
//...do operations on blob using this blob client
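The upload from the question would then run against that client directly, e.g. (a sketch reusing the same file and options as above):
const options: BlockBlobParallelUploadOptions = { blobHTTPHeaders: { blobContentType: file.type } };
await blobClient.uploadBrowserData(file, options);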

Related

How do I set/update AWS s3 object metadata during image upload PUT request to signed url?

I'm trying to include the original name of a file that is uploaded to AWS S3 and given a random/unique name, in a NextJS app.
I can set metadata from the backend, but I would like to update it from my PUT request (where the image is actually uploaded) to the signed URL. How would I do this?
To be clear: I would like to set metadata when I do a PUT request to the signed URL. I currently have it set to "none" on the backend to avoid forbidden errors (and this shows up as metadata in S3). Is it possible to update that metadata from my PUT request, or is there another approach I should take? Thanks!
// Backend code to get signed URL
async function handler(req, res) {
  if (req.method === 'GET') {
    const key = `content/${uuidv4()}.jpeg`;
    s3.getSignedUrl(
      'putObject',
      {
        Bucket: 'assets',
        ContentType: 'image/jpeg',
        Key: key,
        Expires: 5 * 60,
        Metadata: {
          'file-name': 'none',
        },
      },
      (err, url) => res.send({ key, url })
    );
  }
}
// Frontend code
const [file, setFile] = useState(null);

const onFileUpload = async (e) => {
  e.preventDefault();
  const uploadConfig = await fetch('/api/upload');
  const uploadURL = await uploadConfig.json();
  await fetch(uploadURL.url, {
    body: file,
    method: 'PUT',
    headers: {
      'Content-Type': file.type,
      'x-amz-meta-file-name': 'updated test name',
    },
  });
};
It isn't possible to do that with presigned URLs. When you create the presigned URL, all properties are pre-filled; you can't change them when uploading the file. The only thing you can control is the object's data. The bucket, the key and the metadata (and all other parameters of put_object) are predefined. This is also the case for generate_presigned_post: all fields are prefilled.
This makes sense, as the backend grants the permissions and needs to decide on these values. The implementation would also be much more complicated, as presigned URLs support all client methods, which have different parameters.
The only way you could do it is to generate URLs on demand: first generate the presigned URL, based on the name selected by the client, and then do the upload. You will need two round trips for every file, one to your server to generate the URL and one to S3 for the upload.
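A sketch of that on-demand flow, reusing the v2 SDK call from the question (the fileName query parameter and the /api/upload?fileName=... route are assumptions):
// Backend: sign a URL per request, with the client-supplied name as metadata
async function handler(req, res) {
  if (req.method === 'GET') {
    const fileName = req.query.fileName || 'none'; // assumed query parameter
    const key = `content/${uuidv4()}.jpeg`;
    s3.getSignedUrl(
      'putObject',
      {
        Bucket: 'assets',
        ContentType: 'image/jpeg',
        Key: key,
        Expires: 5 * 60,
        Metadata: { 'file-name': fileName },
      },
      (err, url) => res.send({ key, url })
    );
  }
}

// Frontend: request a URL for this specific file, then PUT it; the metadata header
// must match the value that was signed into the URL.
const uploadConfig = await fetch(`/api/upload?fileName=${encodeURIComponent(file.name)}`);
const { url } = await uploadConfig.json();
await fetch(url, {
  method: 'PUT',
  body: file,
  headers: { 'Content-Type': file.type, 'x-amz-meta-file-name': file.name },
});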

301 Redirecting to azure storage blob is inadvertently passing App credentials

Over the last few months I have been serving Azure blobs via a 301 redirect from an Azure Web API. Authorised clients can access the Web API via a bearer token.
Client visits myapp.azurewebsites.net/api/file/2
The web app generates a SAS URL for the file in blob storage
The web app returns the SAS URL as a 301 redirect
The client's browser follows this and downloads the file transparently.
However recently an error is now being raised from the Azure Blob Api after the redirect:
InvalidAuthenticationInfo - Authentication information is not given in the correct format. Check the value of Authorization header.
This suggests my bearer token for the Web API is being inadvertently passed on to blob storage; checking the network tab in Chrome confirms that the credentials are indeed being sent to Azure Blob Storage.
Is there a way to make the Storage API ignore the bearer token and just use the SAS key in the URL?
Or is there a way to prevent a 301 redirect from leaking credentials to another domain?
ASP.Net Core:
[HttpGet]
public async Task<ActionResult> GetContent(string id)
{
    // var sassUri = https://myblob.blob.core.windows.net/TEST?sv=2020-04-08&se=2021-04-07T13%3A16%3A15Z&sr=b&sp=r&sig=4WBAkWx
    var sassURI = await _fileService.GetSASSForFileIdAsync(id);
    return new RedirectResult(sassURI, true);
}
Javascript:
const myHeaders = new Headers();
myHeaders.append('Content-Type', 'application/json');
myHeaders.append('Authorization', 'bearer 607cd0a9-6048'); //token for myapp
let resp = await fetch('https://myapp.azurewebsites.net/file/2',{ method: 'GET', headers: myHeaders});
//Network tab shows credentials are being leaked to myblob.blob.core.windows.net
console.log(resp);
Of course, you could request the Storage API with a SAS URL; see the Azure documentation on shared access signatures.
Azure Storage supports three types of shared access signatures:
User delegation SAS
Service SAS
Account SAS
For example, create a service SAS for a blob container using GenerateSasUri method:
private static Uri GetServiceSasUriForContainer(BlobContainerClient containerClient,
                                                string storedPolicyName = null)
{
    // Check whether this BlobContainerClient object has been authorized with Shared Key.
    if (containerClient.CanGenerateSasUri)
    {
        // Create a SAS token that's valid for one hour.
        BlobSasBuilder sasBuilder = new BlobSasBuilder()
        {
            BlobContainerName = containerClient.Name,
            Resource = "c"
        };

        if (storedPolicyName == null)
        {
            sasBuilder.ExpiresOn = DateTimeOffset.UtcNow.AddHours(1);
            sasBuilder.SetPermissions(BlobContainerSasPermissions.Read);
        }
        else
        {
            sasBuilder.Identifier = storedPolicyName;
        }

        Uri sasUri = containerClient.GenerateSasUri(sasBuilder);
        Console.WriteLine("SAS URI for blob container is: {0}", sasUri);
        Console.WriteLine();

        return sasUri;
    }
    else
    {
        Console.WriteLine(@"BlobContainerClient must be authorized with Shared Key
                          credentials to create a service SAS.");
        return null;
    }
}

How to generate user delegation SAS for Azure Storage using Javascript SDK

I'm trying to upload a file to Azure Blob Storage. What I've done so far:
npm i @azure/identity @azure/storage-blob
Generate SAS query parameters with a user delegation key:
async function generateSas() {
  const startDate = new Date();
  const expiryDate = new Date();
  startDate.setTime(startDate.getTime() - 100 * 60 * 1000);
  expiryDate.setTime(expiryDate.getTime() + 100 * 60 * 60 * 1000);

  const credential = new DefaultAzureCredential();
  const blobServiceClient = new BlobServiceClient(STORAGE, credential);
  const key = await blobServiceClient.getUserDelegationKey(startDate, expiryDate);

  return generateBlobSASQueryParameters({
    containerName: CONTAINER,
    startsOn: startDate,
    expiresOn: expiryDate,
    permissions: ContainerSASPermissions.parse('rwl'),
  }, key, ACCOUNT).toString();
}
Use the generated SAS to upload
async function upload(sasToken: string) {
  const blobClient = new BlockBlobClient(
    `https://${ACCOUNT}.blob.core.windows.net/${CONTAINER}/test.json?${sasToken}`);
  const content = 'some content';
  const response = await blobClient.upload(content, content.length);
}
Before I run this, I do az login with my account.
The error:
(node:19584) UnhandledPromiseRejectionWarning: RestError: This request is not authorized to perform
this operation using this permission.
If I copy a SAS from Azure Storage Explorer with the same login, the code works! So I assume that there is some way to retrieve a valid SAS for my account.
I suspect that this is a permission issue.
After looking closely at the "Can't list file system of azure datalake with javascript" and "ManagedIdentityCredential failed when used to list blob containers #5539" issues, I think that the Owner role is not sufficient for uploading blobs to your blob storage account. You'll have to use one of the Storage Blob Data * roles (like Storage Blob Data Owner) before you can upload blobs.
So, try adding the Storage Blob Data Owner role to your intended user and run the code again.

Is it possible to upload a file with metadata using the storage NestJS SDK?

I am using Azure Storage and NestJS. I am using Azure Storage to store some static files, and I can upload files via the NestJS storage SDK successfully. Now I need to upload a file with some custom blob metadata. I have gone through the source code of the NestJS storage SDK, but there seems to be no predefined way to do this. So is it possible to upload blobs with custom metadata, or is there any workaround?
Thanks!
I have also reviewed the source code of azureStorageService, and it doesn't provide a method for this. But the upload operation returns a storage URL with a SAS token, which we can use to make another HTTP request, Set Blob Metadata, to set the blob metadata. This is my test code; name is the metadata key in my test:
@Post('azure/upload')
@UseInterceptors(
  AzureStorageFileInterceptor('file', null),
)
async UploadedFilesUsingInterceptor(
  @UploadedFile()
  file: UploadedFileMetadata,
) {
  file = {
    ...file,
    buffer: Buffer.from('file'),
    originalname: 'somename.txt'
  };
  const storageUrl = await this.azureStorage.upload(file);
  // call the Set Blob Metadata REST API to set the metadata
  this.httpService
    .put(storageUrl + '&comp=metadata', null, { headers: { 'x-ms-meta-name': 'original name here' } })
    .subscribe((response) => {
      console.log(response.status);
    });
  Logger.log(storageUrl);
}
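As an alternative (an assumption on my part, since it bypasses the NestJS wrapper), the SAS URL returned by upload can also be handed to the BlockBlobClient from @azure/storage-blob, which exposes setMetadata directly, provided the SAS carries write permission:
import { BlockBlobClient } from '@azure/storage-blob';

const storageUrl = await this.azureStorage.upload(file);
// The returned URL already contains the SAS token, so no extra credential is needed
const blobClient = new BlockBlobClient(storageUrl);
await blobClient.setMetadata({ name: 'original name here' });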

Allowing users to upload content to s3

I have an S3 bucket named BUCKET in region BUCKET_REGION. I'm trying to allow users of my web and mobile apps to upload image files to this bucket, provided that they meet certain restrictions based on Content-Type and Content-Length (namely, I want to allow only JPEGs smaller than 3 MB). Once uploaded, the files should be publicly accessible.
Based on fairly extensive digging through AWS docs, I assume that the process should look something like this on my frontend apps:
const a = await axios.post('my-api.com/get_s3_id');
const b = await axios.put(`https://{BUCKET}.amazonaws.com/{a.id}`, {
// ??
headersForAuth: a.headersFromAuth,
file: myFileFromSomewhere // i.e. HTML5 File() object
});
// now can do things like <img src={`https://{BUCKET}.amazonaws.com/{a.id}`} />
// UNLESS the file is over 3mb or not an image/jpeg, in which case I want it to be throwing errors
where on my backend API I'd be doing something like
import aws from 'aws-sdk';
import uuid from 'uuid';
app.post('/get_s3_id', (req, res, next) => {
  // do some validation of request (i.e. checking user Ids)
  const s3 = new aws.S3({ region: BUCKET_REGION });
  const id = uuid.v4();
  // TODO do something with s3 to make it possible for anyone to upload pictures under 3mbs that have the s3 key === id
  res.json({ id, additionalAWSHeaders });
});
What I'm not sure about is what exact S3 methods I should be looking at.
Here are some things that don't work:
I've seen a lot of mentions of a (very old) API accessible with s3.getSignedUrl('putObject', ...). However, this doesn't seem to support reliably setting a ContentLength, at least not anymore. (See https://stackoverflow.com/a/28699269/251162.)
I've also seen a closer-to-working example using an HTTP POST with the form-data API, which is also very old. I guess this might get it done if there are no alternatives, but I am concerned that it is no longer the "right" way to do things; additionally, it seems to do a lot of manual encrypting etc. and not use the official Node SDK. (See https://stackoverflow.com/a/28638155/251162.)
I think what might be better for this case is POSTing directly to S3, skipping your backend server.
What you can do is define a policy that explicitly specifies what can be uploaded and where; this policy is then signed using an AWS secret access key (using AWS Signature Version 4).
An example usage of the policy and signature is viewable in the AWS docs.
For your uses you can specify conditions like:
conditions: [
  ['content-length-range', 0, '3000000'],
  ['starts-with', '$Content-Type', 'image/']
]
This will limit uploads to 3 MB and Content-Type to values that begin with image/.
Additionally, you only have to generate the signature for the policy once (or whenever it changes), which means you don't need a request to your server to get a valid policy; you can just hardcode it in your JS. When/if you need to update it, just regenerate the policy and signature and then update the JS file.
Edit: there isn't a method in the SDK to do this, as it's meant as a way of directly POSTing from a form on a webpage, i.e. it can work with no JavaScript.
Edit 2: full example of how to sign a policy using standard NodeJS packages:
import crypto from 'crypto';
const AWS_ACCESS_KEY_ID = process.env.AWS_ACCESS_KEY_ID;
const AWS_SECRET_ACCESS_KEY = process.env.AWS_SECRET_ACCESS_KEY;
const ISO_DATE = '20190728T000000Z';
const DATE = '20161201';
const REGION = process.env.AWS_DEFAULT_REGION || 'eu-west-1';
const SERVICE = 's3';
const BUCKET = 'your_bucket';
if (!AWS_ACCESS_KEY_ID || !AWS_SECRET_ACCESS_KEY) {
  throw new Error('AWS credentials are incorrect');
}

const hmac = (key, string, encoding) => {
  return crypto.createHmac("sha256", key).update(string, "utf8").digest(encoding);
};

const policy = {
  expiration: '2022-01-01T00:00:00Z',
  conditions: [
    { bucket: BUCKET },
    ['starts-with', '$key', 'logs'],
    ['content-length-range', '0', '10485760'],
    { 'x-amz-date': ISO_DATE },
    { 'x-amz-algorithm': 'AWS4-HMAC-SHA256' },
    { 'x-amz-credential': `${AWS_ACCESS_KEY_ID}/${DATE}/${REGION}/${SERVICE}/aws4_request` },
    { acl: 'private' }
  ]
};

function aws4_sign(secret, date, region, service, string_to_sign) {
  const date_key = hmac("AWS4" + secret, date);
  const region_key = hmac(date_key, region);
  const service_key = hmac(region_key, service);
  const signing_key = hmac(service_key, "aws4_request");
  const signature = hmac(signing_key, string_to_sign, "hex");
  return signature;
}

const b64 = Buffer.from(JSON.stringify(policy)).toString('base64');
console.log(`b64 policy: \n${b64}`);
const signature = aws4_sign(AWS_SECRET_ACCESS_KEY, DATE, REGION, SERVICE, b64);
console.log(`signature: \n${signature}\n`);
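For completeness, a browser-side sketch of how the signed policy might be used; the field names mirror the conditions above, and the exact bucket URL format is an assumption:
const form = new FormData();
form.append('key', 'logs/example.txt'); // must satisfy the 'starts-with' condition above
form.append('acl', 'private');
form.append('x-amz-algorithm', 'AWS4-HMAC-SHA256');
form.append('x-amz-credential', `${AWS_ACCESS_KEY_ID}/${DATE}/${REGION}/${SERVICE}/aws4_request`);
form.append('x-amz-date', ISO_DATE);
form.append('policy', b64);
form.append('x-amz-signature', signature);
form.append('file', fileInput.files[0]); // the file field must come last
await fetch(`https://${BUCKET}.s3.${REGION}.amazonaws.com/`, { method: 'POST', body: form });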
You need to get familiar with Amazon Cognito, especially with identity pools.
Using Amazon Cognito Sync, you can retrieve the data across client platforms, devices, and operating systems, so that if a user starts using your app on a phone and later switches to a tablet, the persisted app information is still available for that user.
Read more here: Cognito identity pools
Once you create a new identity pool, you can reference it while using the S3 JavaScript SDK, which will allow you to upload content without exposing any credentials to the client.
Example here: Uploading to S3
Please read through all of it, especially the section "Configuring the SDK".
The second part of your puzzle: validations.
I would implement client-side validation (if possible) to avoid network latency before showing an error. If you choose to implement validation on S3 or with AWS Lambda, you are looking at a wait until the file reaches AWS, i.e. network latency.
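A minimal sketch of that Cognito pattern with the v2 JavaScript SDK, including the client-side validation suggested above (region, identity pool ID and bucket are placeholders):
import AWS from 'aws-sdk';

AWS.config.region = 'us-east-1'; // placeholder region
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000' // placeholder pool id
});

const s3 = new AWS.S3({ params: { Bucket: 'BUCKET' } });

// Reject oversized or non-JPEG files before touching the network
if (file.type === 'image/jpeg' && file.size <= 3 * 1024 * 1024) {
  s3.upload(
    { Key: `uploads/${Date.now()}.jpg`, Body: file, ContentType: file.type },
    (err, data) => console.log(err || data.Location)
  );
}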
This is something I know we have in our project, so I'll show you part of the codes:
You first need to POST to your own server to get the credentials for the upload; from that response you take the params the client needs to upload to S3.
These are the params you send to the AWS S3 service; you will need the bucket, the upload path, and the file:
let params = {
  Bucket: s3_bucket,
  Key: upload_path,
  Body: file_itself
};
this is the code I have for the actual upload to s3
config.credentials = new AWS.Credentials(credentials.accessKeyId,
credentials.secretAccessKey, credentials.sessionToken);
let s3 = new S3(config);
return s3.upload(params, options).on("httpUploadProgress", handleProgress);
All of those credential items you get from your backend, of course.
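The backend half is not shown above; one common way to mint those short-lived credentials (an assumption about this particular project) is STS GetFederationToken with the v2 SDK:
// Backend (Node): issue temporary credentials scoped by an inline policy
const sts = new AWS.STS();
sts.getFederationToken({
  Name: 'browser-upload', // arbitrary federated user name
  DurationSeconds: 900,   // minimum allowed for GetFederationToken
  Policy: JSON.stringify({
    Version: '2012-10-17',
    Statement: [{
      Effect: 'Allow',
      Action: 's3:PutObject',
      Resource: `arn:aws:s3:::${s3_bucket}/uploads/*`
    }]
  })
}, (err, data) => {
  // data.Credentials contains AccessKeyId, SecretAccessKey and SessionToken,
  // exactly what the client snippet above feeds into new AWS.Credentials(...)
  res.json(err ? { error: err.message } : data.Credentials);
});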
On the backend you need to generate a timed, presigned URL and send that URL to the client for accessing the S3 object. Depending on your backend implementation technology, you can use the AWS CLI or one of the SDKs (e.g. for Java, .NET, Ruby or Go).
Please refer to the CLI docs and the SDK docs.
A content-size restriction is not supported in link generation directly; the link just relays the access rights that the AWS user has.
To restrict file size on upload with a policy, you have to create a CORS policy on the bucket and use HTTP POST for the upload (see the S3 documentation on POST policies).
Your server acts as a proxy and is also responsible for authorization, validation, etc.
Some code snippet:
upload(config, file, cb) {
  const fileType = ''; // pass with the request, derive it from the file, or leave it empty
  const key = `${uuid.v4()}${fileType}`; // generate the file name
  const s3 = new AWS.S3();
  const s3Params = {
    Bucket: config.s3_bucket,
    Key: key,
    Body: file.buffer
  };
  s3.putObject(s3Params, cb);
}
and then you can send the key to the client and provide further access.
