Simply put, is it possible to use transfer acceleration (TA) with pre-signed URLs generated using the AWS SDK for JavaScript?
Turning on TA for a specific S3 bucket gives a URL of the format {bucket}.s3-accelerate.amazonaws.com. However, when specifying the parameters for a pre-signed request, the only valid options seem to be {Bucket: 'bucket', Key: 'key', Body: 'body', Expires: 60}, and there doesn't seem to be a way to say that I want to use TA. The resulting URL is in the usual format {bucket}.s3-{region}.amazonaws.com, which is wrong for TA.
The documentation does not seem to offer much information with regard to pre-signed URLs.
Yes, but this is still undocumented and nowhere to be found in their docs or anywhere else (up until now :) ). We got it working by searching the source code of the SDK. You need to instantiate the S3 client like this:
var s3 = new AWS.S3({useAccelerateEndpoint: true});
Then the SDK will use the accelerated endpoint.
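For the pre-signed URL part of the question, once the client is created with that option, getSignedUrl can be used as usual and the returned URL should point at the accelerated endpoint. A minimal sketch (bucket name and key are placeholders):
var AWS = require('aws-sdk');
var s3 = new AWS.S3({useAccelerateEndpoint: true});

// Generate a pre-signed PUT URL; with the option above the host should be
// {bucket}.s3-accelerate.amazonaws.com instead of the regional endpoint.
var url = s3.getSignedUrl('putObject', {
  Bucket: 'my-bucket', // placeholder
  Key: 'my-key',       // placeholder
  Expires: 60          // validity in seconds
});
console.log(url);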
As it happens, there is a documented way of enabling the S3 transfer acceleration feature in the AWS SDK for JavaScript. It can be done by specifying the same property mentioned by @Luc Hendriks, but in the AWS.Config class, as follows:
AWS.config.update({
  useAccelerateEndpoint: true
});
var s3 = new AWS.S3();
Documentation reference: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Config.html
Related
Using JavaScript and AWS Amplify, I am trying to attach some custom metadata to files that are uploaded to my S3 bucket. Most of the available options are not described in the documentation, but after digging around the source code I found that passing, for example, { metadata: { 'your-custom-key-1': 'foo', 'your-custom-key-2': 'bar' } } as the options parameter of Storage.put() will attach custom metadata to your file, automatically prefixing your custom keys with 'x-amz-meta-'. So in the above case, the metadata actually attached and saved to that specific file is 'x-amz-meta-your-custom-key-1': 'foo' and 'x-amz-meta-your-custom-key-2': 'bar'.
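For context, a minimal sketch of the upload call described above (the key, file and metadata values are placeholders):
import { Storage } from 'aws-amplify';

// 'file' is e.g. a File taken from an <input type="file"> element.
// Amplify prefixes each metadata key with 'x-amz-meta-' on the stored object.
Storage.put('example.txt', file, {
  metadata: {
    'your-custom-key-1': 'foo',
    'your-custom-key-2': 'bar'
  }
}).then(function (result) {
  console.log('Stored as', result.key);
});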
The issue is that, as far as I can tell, there is no way to retrieve this metadata using Amplify. I presume I would have to dig a layer deeper into the core S3 class to retrieve this information. To make matters even more confusing, I've found that with Amplify, if I call Storage.get() with the hidden option { download: true } as my options parameter, I get back a response that actually has a Metadata key, however it is always empty, even when I certainly do have custom metadata attached to my file. I'm guessing this is a feature that has either been changed or is incomplete? Looking into the core S3 class, I found headObject, but I am not clear whether this would give me my custom metadata or just the defaults. My end goal is to list all of the associated metadata for all files in my bucket when I call Storage.list(). Thanks for any advice.
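For what it's worth, headObject on the plain S3 client does return user-defined metadata (with the 'x-amz-meta-' prefix stripped) in its Metadata field, so one possible workaround is to call it per key outside of Amplify. A rough sketch, with bucket and key as placeholders and credentials assumed to be configured:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Fetch only the object's headers; data.Metadata should hold the custom metadata.
s3.headObject({ Bucket: 'my-bucket', Key: 'public/example.txt' }, function (err, data) {
  if (err) return console.error(err);
  console.log(data.Metadata); // e.g. { 'your-custom-key-1': 'foo', 'your-custom-key-2': 'bar' }
});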
I searched a lot and found nothing about how to send files. Even in Google's documentation there is nothing about sending a file using the JavaScript SDK.
See here https://developers.google.com/drive/v3/web/manage-uploads
So right now I'm converting the Node.js script to browser JavaScript. They used fs to get the read stream, and I have no idea how to do that in the browser. The closest I can get is this...
var file = uploadButton.files[0];
var fileName = uploadButton.files[0].name;
var fileMetadata = {
  'name': fileName
};
var media = {
  mimeType: 'image/jpeg',
  body: file
};
gapi.client.drive.files.create({
  resource: fileMetadata,
  media: media.result,
  fields: 'id'
}).execute();
The above code creates an empty file with the given fileName and no content inside it.
In order to upload a file to your Google Drive you need to use a Google request object and 'POST' the file. You can see an example in this answer. Keep in mind that you need to get your API keys in order to initialise your Google Drive client object.
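As a rough sketch of that approach (not the exact code from the linked answer; it assumes the gapi client is already initialised and authorised with a Drive scope), a multipart request posts the JSON metadata and the base64-encoded file content in one body:
var file = uploadButton.files[0]; // the File chosen by the user
var metadata = { name: file.name, mimeType: file.type };

var reader = new FileReader();
reader.onload = function () {
  var boundary = '-------314159265358979323846';
  var delimiter = '\r\n--' + boundary + '\r\n';
  var closeDelimiter = '\r\n--' + boundary + '--';

  // First part: the file metadata as JSON; second part: the file content as base64.
  var multipartRequestBody =
    delimiter +
    'Content-Type: application/json\r\n\r\n' +
    JSON.stringify(metadata) +
    delimiter +
    'Content-Type: ' + file.type + '\r\n' +
    'Content-Transfer-Encoding: base64\r\n\r\n' +
    btoa(reader.result) +
    closeDelimiter;

  gapi.client.request({
    path: '/upload/drive/v3/files',
    method: 'POST',
    params: { uploadType: 'multipart', fields: 'id' },
    headers: { 'Content-Type': 'multipart/related; boundary="' + boundary + '"' },
    body: multipartRequestBody
  }).then(function (response) {
    console.log('Created file id:', response.result.id);
  });
};
reader.readAsBinaryString(file);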
I am trying to load a PDF from another server into the pdf.js viewer on my server. I got this error:
"PDF.js v1.4.20 (build: b15f335)
Message: file origin does not match viewer's"
I already checked many answers; many of them said to pass the PDF URL through a proxy, like: link
After searching a lot I found that they released a new patch in which they lock down cross-domain requests (CDR), correct me if I am wrong: here is the link
But in their user manual they specify that it is possible; here is the link
I tried every method but was not able to enable cross-domain requests on my server; many of the methods didn't work.
Please help me to resolve this issue.
My basic idea is to show a PDF (which is hosted on a 3rd-party server) in my PDF reader (which I made from pdf.js).
I resolved this issue by commenting out these lines in viewer.js
if (fileOrigin !== viewerOrigin) {
  throw new Error('file origin does not match viewer\'s');
}
and using a proxy, like this:
http://192.168.0.101/web/viewer.html?file=https://cors-anywhere.herokuapp.com/pathofpdf.pdf
Add your domain/origin to the HOSTED_VIEWER_ORIGINS array.
I resolved this issue by adding these lines in viewer.js
var LOCAL_AUTO_DETECT_ORIGIN = window.location.origin;
var HOSTED_VIEWER_ORIGINS = ['null', 'http://mozilla.github.io', 'https://mozilla.github.io'];
HOSTED_VIEWER_ORIGINS.push(LOCAL_AUTO_DETECT_ORIGIN);
The problem in my case was that the link wasn't HTTPS while the site is served over HTTPS.
pdf.js respects CORS settings. Do the following:
Go to the viewer.js file and find where HOSTED_VIEWER_ORIGINS is defined. Below that line, add your domain to the HOSTED_VIEWER_ORIGINS array like this:
const LOCAL_AUTO_DETECT_ORIGIN = window.location.origin;
HOSTED_VIEWER_ORIGINS.push(LOCAL_AUTO_DETECT_ORIGIN);
If your file is hosted in an AWS S3 bucket, then set a CORS policy on the bucket to allow your domains to read files from that bucket, for example:
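For reference, a CORS rule of that kind can be set in the S3 console or via the SDK; a minimal sketch using the aws-sdk's putBucketCors (bucket name and origin are placeholders):
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Allow the viewer's origin to GET objects from the bucket that hosts the PDFs.
s3.putBucketCors({
  Bucket: 'your-pdf-bucket',
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ['https://your-viewer-domain.example'],
      AllowedMethods: ['GET', 'HEAD'],
      AllowedHeaders: ['*'],
      MaxAgeSeconds: 3000
    }]
  }
}, function (err) {
  if (err) console.error('Failed to set CORS policy:', err);
});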
That should solve your problem.
The AWS SDK for JavaScript (even if only the S3 component is included) is huge for just some sporadic file uploads in my web app. Is there a leaner way to upload files to an S3 bucket directly from the client's browser, given that I have the bucket name, accessKeyId and secretAccessKey at my disposal?
To upload files from the browser to S3 you can use a presigned PUT. This way you will not be disclosing the AWS secret to the browser. You can use the minio-js library to generate the presigned PUT URL.
On the server side you can generate the presigned PUT URL like this:
var Minio = require('minio')
// find out your s3 end point here:
// http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
var s3Client = new Minio({
  url: 'https://<your-s3-endpoint>',
  accessKey: 'YOUR-ACCESSKEYID',
  secretKey: 'YOUR-SECRETACCESSKEY'
})
var presignedUrl = s3Client.presignedPutObject('bucket', 'object', 24*60*60)
// presignedUrl expires in 1 day
You can pass this presigned URL to the browser, which can then do a simple HTTP PUT to Amazon S3. The PUT request will succeed because the signature is part of the presignedUrl.
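On the browser side, the upload is then just a PUT of the file to that URL; a minimal sketch (presignedUrl and file are assumed to already be available):
// 'file' is e.g. a File taken from an <input type="file"> element.
fetch(presignedUrl, {
  method: 'PUT',
  body: file
}).then(function (response) {
  if (response.ok) {
    console.log('Upload succeeded');
  } else {
    console.error('Upload failed with status', response.status);
  }
});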
Alternatively, you can use a presigned POST to upload. A presigned POST gives much more control over the upload: for example, you can limit the size of the uploaded object, its content type, and so on.
S3 supports uploads from the browser using a form POST upload, with no special code needed in the browser. It involves a specifically designed form and a signed policy document that allows the user to upload only files matching constraints you impose, and it doesn't expose your secret key. It can optionally also redirect the browser back to your site after the upload.
http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-post-example.html
Try using the Extended S3 Browser (Chrome extension).
I'm writing a backup script that will pull a full copy of every file in a specific blob container in our Windows Azure blob storage. These files are not uploaded by me; I'm just writing a script that traverses the blob storage and downloads the files.
To speed up this process and skip unnecessary downloads, I'd like to request MD5s for the files before downloading them and compare them with the files already present locally.
My problem: I can't find documentation anywhere detailing how to do this. I'm pretty sure the API supports it, I'm finding docs and answered questions related to other languages everywhere, but not for the Node.js Azure SDK.
My question: Is it possible, and if yes, how, to request an MD5 for the remote file blob through the Azure Node.js SDK, before downloading it? And is it faster than just downloading the file?
It is certainly possible to get a blob's MD5 hash. When you list blobs, you'll get the MD5 in the blob's properties. See the sample code below:
var azure = require('azure');
var blobService = azure.createBlobService("accountname", "accountkey");
blobService.listBlobs("containername", function(error, blobs){
  if(!error){
    for(var index in blobs){
      console.log(blobs[index].name);
      console.log(blobs[index].properties['content-md5']);
    }
  }
});
Obviously the catch is that the blob must have this property set. If this property is not set, an empty string is returned.
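To use this for skipping downloads, the listed content-md5 value (a base64-encoded MD5) can be compared against an MD5 computed over the local copy; a rough sketch using Node's built-in crypto module (the local path layout is a placeholder):
var crypto = require('crypto');
var fs = require('fs');

// Compute the base64-encoded MD5 of a local file, the same format as content-md5.
function localMd5(filePath) {
  var data = fs.readFileSync(filePath);
  return crypto.createHash('md5').update(data).digest('base64');
}

// Inside the listBlobs callback from the sample above:
var remoteMd5 = blobs[index].properties['content-md5'];
var localPath = './backup/' + blobs[index].name; // placeholder local layout
if (remoteMd5 && fs.existsSync(localPath) && remoteMd5 === localMd5(localPath)) {
  console.log('Unchanged, skipping download:', blobs[index].name);
}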