The AWS SDK for JavaScript (even if only the S3 component is included) is a lot of bulk for just the occasional file upload in my webapp. Is there a leaner way to upload files to an S3 bucket directly from the client's browser, given that I have the bucket name, accessKeyId and secretAccessKey at my disposal?
To upload files from the browser to S3 you can use a presigned PUT. This way you will not be disclosing the AWS secret to the browser. You can use the minio-js library to generate the presigned PUT URL.
On the server side you can generate the presigned PUT URL like this:
var Minio = require('minio')

// find out your s3 end point here:
// http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
var s3Client = new Minio({
  url: 'https://<your-s3-endpoint>',
  accessKey: 'YOUR-ACCESSKEYID',
  secretKey: 'YOUR-SECRETACCESSKEY'
})

// presignedUrl expires in 1 day
var presignedUrl = s3Client.presignedPutObject('bucket', 'object', 24*60*60)
You can pass this presigned URL to the browser, which can then do a simple HTTP PUT to Amazon S3. The PUT request will succeed because the signature is part of the presigned URL.
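On the browser side, the upload itself needs no SDK at all. A minimal sketch (assuming presignedUrl has been fetched from your own backend):

// Browser-side sketch: PUT the selected file straight to S3 using the
// presigned URL returned by the server.
var fileInput = document.querySelector('input[type="file"]');
var file = fileInput.files[0];

var xhr = new XMLHttpRequest();
xhr.open('PUT', presignedUrl, true);
xhr.onload = function () {
  if (xhr.status === 200) {
    console.log('upload succeeded');
  } else {
    console.error('upload failed', xhr.status, xhr.responseText);
  }
};
xhr.send(file);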
Alternatively, you can use a presigned POST for uploads. A presigned POST gives much more control over the upload: for example, you can limit the size of the uploaded object, its content type, etc.
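A rough server-side sketch of a presigned POST with minio-js (the post-policy helper names below follow the current minio-js documentation and may differ in older releases):

// Sketch: presigned POST policy that restricts content type and size.
var policy = s3Client.newPostPolicy();
policy.setBucket('bucket');
policy.setKey('object');

var expires = new Date();
expires.setSeconds(24 * 60 * 60); // policy valid for 1 day
policy.setExpires(expires);

policy.setContentType('image/png');                   // only allow PNGs
policy.setContentLengthRange(1024, 5 * 1024 * 1024);  // 1 KB - 5 MB

s3Client.presignedPostPolicy(policy, function (err, data) {
  if (err) return console.error(err);
  // data.postURL and data.formData are what the browser form needs
  console.log(data.postURL, data.formData);
});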
S3 supports uploads from the browser using a form POST upload, with no special code needed in the browser. It involves a specific form design and a signed policy document that allows the user to upload only files matching constraints you impose, and it doesn't expose your secret key. It can optionally also redirect the browser back to your site after the upload.
http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-post-example.html
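The browser side can then be a plain HTML form or a FormData POST built in JavaScript. A sketch, assuming your server hands you the signed policy and the related x-amz-* fields (the bucket URL and objectKey are placeholders):

// Sketch: POST a file to S3 with a signed policy document (sigv4 browser upload).
// All x-amz-* values, policyBase64 and objectKey are assumed to come from your server
// and must match what the policy allows.
var form = new FormData();
form.append('key', objectKey);
form.append('x-amz-algorithm', 'AWS4-HMAC-SHA256');
form.append('x-amz-credential', credential);
form.append('x-amz-date', amzDate);
form.append('policy', policyBase64);
form.append('x-amz-signature', signature);
form.append('file', fileInput.files[0]); // the file field must come last

fetch('https://your-bucket.s3.amazonaws.com/', { method: 'POST', body: form })
  .then(function (res) { console.log('upload status', res.status); });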
Try using the Extended S3 Browser (Chrome extension).
I use Sails.js to create my backend, and one of my controller methods supports posting text and uploading a file/image at the same time. Because of the business logic, I need to tell whether a POST request to this controller method contains a file to upload or not.
To upload files, I use the built-in Skipper module in Sails.js and use the file field in the req parameters. For example, if I need to upload images, I put an images field in the request going to the backend/Sails.js:
req.file('images').upload({...});
I tried to use req.file('images') to check whether the incoming request contains files to upload, as follows, but it does not work: a request without uploaded files still makes req.file('images') truthy:
if (!!req.file('images')) { // this turns out to be true whether a file is uploaded or not
  return uploadToS3(req, 'images');
}
sails.log.info('creating entity without uploading file');
return createEntityWithoutImage(req.params.all());
Any idea how to tell if a POST request contains a file to upload?
You can check the number of files that the user uploaded by using:
req.file('images')._files.length
This is an undocumented feature of Skipper:
https://github.com/balderdashy/skipper/blob/master/standalone/Upstream/Upstream.js#L51
So it may be deprecated in a future version. Be sure not to upgrade the module without testing it first.
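Putting it together, a sketch of the check inside the controller action (uploadToS3 and createEntityWithoutImage are the helpers from the question):

// Sketch: branch on whether the request actually carries files, using
// Skipper's undocumented _files array on the upstream.
var upstream = req.file('images');
if (upstream && upstream._files && upstream._files.length > 0) {
  // at least one file is being uploaded
  return uploadToS3(req, 'images');
}
sails.log.info('creating entity without uploading file');
return createEntityWithoutImage(req.params.all());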
Simply put, is it possible to use transfer acceleration (TA) with pre-signed URLs generated using the AWS SDK for JavaScript?
Turning on TA for a specific S3 bucket gives a URL with the format: {bucket}.s3-accelerate.amazonaws.com. However, when specifying the parameters for a request, the only valid options seem to be {Bucket: 'bucket', Key: 'key', Body: 'body', Expires: 60} and doesn't seem to allow me to say I want to use TA. The resulting URL is in the usual format {bucket}.s3-{region}.amazonaws.com, which is wrong for TA.
The documentation does not seem to offer much information with regards to pre-signed URLs.
Yes, but this is still undocumented and nowhere to be found in their docs or anywhere else (up until now :) ). We got it working by searching the source code of the SDK. You need to load S3 like this:
var s3 = new AWS.S3({useAccelerateEndpoint: true});
Then the SDK will use the accelerated endpoint.
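For example, a minimal sketch of generating a presigned PUT URL against the accelerate endpoint (bucket and key are placeholders):

// Sketch: presigned PUT URL via the S3 transfer-acceleration endpoint.
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ useAccelerateEndpoint: true });

s3.getSignedUrl('putObject', { Bucket: 'bucket', Key: 'key', Expires: 60 }, function (err, url) {
  if (err) return console.error(err);
  // url should now point at bucket.s3-accelerate.amazonaws.com
  console.log(url);
});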
As it happens, there is a documented way of enabling the S3 transfer acceleration feature in the AWS SDK for JavaScript. It can be done by specifying the same property mentioned by @Luc Hendriks, but on the AWS.Config class, as follows:
AWS.config.update({
  useAccelerateEndpoint: true
});

var s3 = new AWS.S3();
Documentation reference: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Config.html
For my scenario, I am trying to allow a user to drag and drop files onto a webpage using JavaScript so that they get uploaded to a container, similar to how WordPress media uploading works on the administrative side. The problem I am having is that I found code for creating a SAS URL for the container,
//Set the expiry time and permissions for the container.
//In this case no start time is specified, so the shared access signature becomes valid immediately.
SharedAccessBlobPolicy sasConstraints = new SharedAccessBlobPolicy();
sasConstraints.SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24);
sasConstraints.Permissions = SharedAccessBlobPermissions.Write;
//Generate the shared access signature on the container, setting the constraints directly on the signature.
string sasContainerToken = container.GetSharedAccessSignature(sasConstraints);
//Return the URI string for the container, including the SAS token.
return container.Uri + sasContainerToken;
but all of the examples I found seem to indicate that I have to generate a SAS URL for each blob
Microsoft.WindowsAzure.Storage.Blob.CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
//Get a reference to a container to use for the sample code, and create it if it does not exist.
Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer container = blobClient.GetContainerReference(containerName);
container.CreateIfNotExists();
//Create a new stored access policy and define its constraints.
Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPolicy sharedPolicy = new Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPolicy()
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(10),
    Permissions = Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPermissions.Write
};
//Get the container's existing permissions.
Microsoft.WindowsAzure.Storage.Blob.BlobContainerPermissions permissions = container.GetPermissions();//new Microsoft.WindowsAzure.Storage.Blob.BlobContainerPermissions();
Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
return blob.Uri.AbsoluteUri + blob.GetSharedAccessSignature(sharedPolicy);
instead, as if I am uploading only one file.
An administrator can upload any number of files, so having to generate a blob SAS via a Web API call for each one of these files seems very inefficient. I would prefer to generate a SAS for the container and allow the user to upload to that container for a specified time, say 3 hours. Also, I would like to use chunking to upload each file. Would this be possible, or would I have to generate a blob SAS URL for each file?
An administrator can upload any number of files, so having to generate a blob SAS via a Web API call for each one of these files seems very inefficient. I would prefer to generate a SAS for the container and allow the user to upload to that container for a specified time, say 3 hours.
It is certainly possible. When you create a SAS on a blob container with Write permission (for uploading), the same SAS can be used to upload multiple blobs to that container. You just have to construct the blob URI based on the file being uploaded and append the SAS token. For example, if you created a SAS token for the mycontainer container in the myaccount storage account and are uploading a file myfile.png, your SAS URL would be https://myaccount.blob.core.windows.net/mycontainer/myfile.png?SAS-Token.
I noticed in your code that you're returning the container URI with the SAS token. In this case, you just have to insert the file name after the container name to get the blob upload URI.
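For example, a browser-side sketch (containerUrlWithSas is assumed to be whatever your Web API method above returns):

// Sketch: turn a container-level SAS URL into a blob upload URL and PUT the file.
// containerUrlWithSas looks like: https://myaccount.blob.core.windows.net/mycontainer?sv=...
function uploadToContainer(containerUrlWithSas, file) {
  var parts = containerUrlWithSas.split('?');
  var blobUrl = parts[0] + '/' + encodeURIComponent(file.name) + '?' + parts[1];

  return fetch(blobUrl, {
    method: 'PUT',
    headers: { 'x-ms-blob-type': 'BlockBlob' }, // required for a single-shot Put Blob
    body: file
  });
}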
Also, I would like to use chunking to upload each file.
It is again possible. I wrote a few blog posts about this some time back which you may find useful:
http://gauravmantri.com/2013/02/16/uploading-large-files-in-windows-azure-blob-storage-using-shared-access-signature-html-and-javascript/ (This was written before CORS support in Azure Storage so please ignore my comments about CORS in the post).
http://gauravmantri.com/2013/12/01/windows-azure-storage-and-cors-lets-have-some-fun/
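For reference, a rough sketch of the block-upload flow described in those posts, done with Put Block followed by Put Block List against the same container SAS (block size and block-id naming are assumptions):

// Sketch: upload a file in blocks, then commit the block list.
async function uploadInBlocks(containerUrlWithSas, file, blockSize) {
  blockSize = blockSize || 4 * 1024 * 1024; // 4 MB blocks (assumed)
  var parts = containerUrlWithSas.split('?');
  var blobUrl = parts[0] + '/' + encodeURIComponent(file.name) + '?' + parts[1];

  var blockIds = [];
  for (var offset = 0, i = 0; offset < file.size; offset += blockSize, i++) {
    // Block ids must be base64-encoded and the same length within a blob.
    var blockId = btoa('block-' + ('000000' + i).slice(-6));
    blockIds.push(blockId);
    await fetch(blobUrl + '&comp=block&blockid=' + encodeURIComponent(blockId), {
      method: 'PUT',
      body: file.slice(offset, offset + blockSize)
    });
  }

  // Commit the uploaded blocks in order.
  var blockList =
    '<?xml version="1.0" encoding="utf-8"?><BlockList>' +
    blockIds.map(function (id) { return '<Latest>' + id + '</Latest>'; }).join('') +
    '</BlockList>';
  await fetch(blobUrl + '&comp=blocklist', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/xml' },
    body: blockList
  });
}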
I've done some looking at the Cloudinary API and upload examples for NodeJS, and it looks like the server-side uploads use a file path, while the client-side uploads require a frontend input tag. I already have a frontend for users to select and crop a picture to their liking, and this gives me a data URI. I'd like to save this file to Cloudinary without having to use their built-in frontend option. Is this possible? This would basically mean that I would be able to call some kind of upload function that can take a URI or file blob.
Cloudinary supports uploading files using a data-URI encoded string too.
Please make sure that you send your content as a Data-URI as explained here: http://en.wikipedia.org/wiki/Data_URI_scheme.
For example, in Node.js:
cloudinary.uploader.upload("data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
  function(result) { console.log(result) });
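So on your own backend you can accept the data URI produced by your cropper and pass it straight through. A sketch using Express (the route name and body-size limit are assumptions; Cloudinary credentials are assumed to be configured via CLOUDINARY_URL or cloudinary.config()):

// Sketch: receive a data URI from the browser and upload it to Cloudinary.
var express = require('express');
var bodyParser = require('body-parser');
var cloudinary = require('cloudinary');

var app = express();
app.use(bodyParser.json({ limit: '10mb' })); // data URIs can be large

app.post('/upload', function (req, res) {
  // req.body.dataUri is the "data:image/...;base64,..." string from the frontend
  cloudinary.uploader.upload(req.body.dataUri, function (result) {
    res.json(result);
  });
});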
I'm writing a backup script that will pull a full copy of every file in a specific blob container in our Windows Azure blob storage. These files are not uploaded by me, I'm just writing a script that traverses the blob storage and downloads the files.
To speed up this process and skip unnecessary downloads, I'd like to request MD5s for the files before downloading them, and compare them with the already local files.
My problem: I can't find documentation anywhere detailing how to do this. I'm pretty sure the API supports it, I'm finding docs and answered questions related to other languages everywhere, but not for the Node.js Azure SDK.
My question: Is it possible, and if yes, how, to request an MD5 for the remote file blob through the Azure Node.js SDK, before downloading it? And is it faster than just downloading the file?
It is certainly possible to get a blob's MD5 hash. When you list blobs, you'll get the MD5 in the blob's properties. See the sample code below:
var azure = require('azure');
var blobService = azure.createBlobService("accountname", "accountkey");

blobService.listBlobs("containername", function(error, blobs){
  if(!error){
    for(var index in blobs){
      console.log(blobs[index].name);
      console.log(blobs[index].properties['content-md5']);
    }
  }
});
Obviously the catch is that the blob must have this property set. If this property is not set, an empty string is returned.
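To actually skip unchanged files, you can compare that value with an MD5 of the local copy; the content-md5 Azure stores is the base64-encoded digest. A small sketch (localPathFor is a hypothetical mapping from blob name to local path):

// Sketch: base64 MD5 of a local file, for comparison with the blob's content-md5.
var crypto = require('crypto');
var fs = require('fs');

function localMd5Base64(path) {
  return crypto.createHash('md5').update(fs.readFileSync(path)).digest('base64');
}

// inside the listBlobs callback above:
// if (blobs[index].properties['content-md5'] !== localMd5Base64(localPathFor(blobs[index].name))) {
//   // the blob changed (or the local copy is missing), so download it
// }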