How do I get a presigned URL from the JavaScript aws-sdk

I am trying to generate a presigned URL, but when I call the following, I get
s3.createPresignedPost() is not a function
I am running aws-sdk#2.3.14, and the docs clearly show that createPresignedPost() is a function. Here is my code:
getPresignedURL(bucket: string, key: string) {
  let s3 = new AWS.S3()
  let params = {
    Bucket: bucket,
    Fields: {
      key: key
    }
  }
  return s3.createPresignedPost(params, (err, data) => {
    if (err) {
      console.log(err)
    } else {
      console.log(data)
    }
  })
}

createPresignedPost was introduced in version 2.19.0, and the current version is 2.222.1. The 2.3.14 release you are running predates it, so upgrading the SDK will make the method available.
aws-sdk-js CHANGELOG:
feature: S3: Added an instance method to S3 clients to create POST form data with presigned upload policies
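For reference, once the SDK is upgraded, a call along these lines should work (a minimal sketch; the bucket and key values are placeholders):
const AWS = require('aws-sdk'); // needs aws-sdk >= 2.19.0

const s3 = new AWS.S3();
const params = {
  Bucket: 'my-bucket',         // placeholder bucket name
  Fields: { key: 'my-key' },   // placeholder object key
  Expires: 3600                // optional: seconds until the upload policy expires
};

s3.createPresignedPost(params, (err, data) => {
  if (err) {
    console.error(err);
  } else {
    // data.url is the POST endpoint; data.fields are the form fields
    // to submit along with the file.
    console.log(data.url, data.fields);
  }
});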

Related

Using AWS S3 SDK for node.js how to return the file body instead of 'Promise { <pending> }'

JavaScript isn't my number one language, but for an application I am trying to get objects from my S3 bucket. Eventually, I want these images to be included in some sort of HTML file, so I am looking for a way to create a base64-encoded string.
Instead of the string, my function returns 'Promise { <pending> }', and despite several attempts I am not getting it to work. This is my function at the moment:
async function getS3file() {
  try {
    var image = await s3.getObject({
      Bucket: 'Bucketname',
      Key: 'ImageKey'
    }).promise();
    return image.Body.toString('base64');
  } catch (e) {
    throw new Error('Could not retrieve file from S3')
  }
}
async function getS3file() {
  return s3.getObject({
    Bucket: 'Bucketname',
    Key: 'ImageKey'
  })
  .promise()
  .then(file => file.Body)
  .then(body => body.toString('base64'))
  // .catch(...)
}
then use
let image = await getS3file();
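Note that the await has to run inside an async function (or an async IIFE). Since the end goal is to embed the image in an HTML file, here is a small sketch of using the returned string as a data URI; the image/png MIME type is an assumption, adjust it to your image format:
(async () => {
  const base64 = await getS3file();
  // "image/png" is assumed; use the MIME type that matches your object.
  const html = `<img src="data:image/png;base64,${base64}">`;
  console.log(html);
})();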

Upload file to google drive after http get request

I have two functions in separate files to split up the workflow.
const download = function(url) {
  const file = fs.createWriteStream("./test.png");
  const request = https.get(url, function(response) {
    response.pipe(file);
  });
}
This function in my fileHelper.js is supposed to take a URL pointing to an image and save it locally as test.png.
function uploadFile(filePath) {
  fs.readFile('credentials.json', (err, content) => {
    if (err) return console.log('Error loading client secret file:', err);
    // Authorize a client with credentials, then call the Google Drive API.
    authorize(JSON.parse(content), function (auth) {
      const drive = google.drive({version: 'v3', auth});
      const fileMetadata = {
        'name': 'testphoto.png'
      };
      const media = {
        mimeType: 'image/png',
        body: fs.createReadStream(filePath)
      };
      drive.files.create({
        resource: fileMetadata,
        media: media,
        fields: 'id'
      }, (err, file) => {
        if (err) {
          // Handle error
          console.error(err);
        } else {
          console.log('File Id: ', file.id);
        }
      });
    });
  });
}
This function in my googleDriveHelper.js is supposed to take the file path and upload that stream to my Google Drive. The two functions work on their own, but https.get runs asynchronously, so if I call googleDriveHelper.uploadFile(filePath) right after the download, the file hasn't finished writing yet and a blank file gets uploaded to my drive.
I want to find a way so that when fileHelper.download(url) is called, the result is automatically uploaded into my drive.
I also don't know if there is a way to create a read stream directly from the download function to the upload function, so I can avoid having to save the file locally before uploading it.
I believe your goal is as follows.
You want to upload a file retrieved from a URL to Google Drive.
When you download the file from the URL, you want to upload it to Google Drive without creating a local file.
You want to achieve this using googleapis with Node.js.
You have already been able to upload a file using the Drive API.
For this, how about the following answer?
Modification points:
In the download function, the retrieved buffer is converted to a stream, and the stream is returned.
In the uploadFile function, the retrieved stream is used for uploading.
When the file ID is retrieved from the response value of the Drive API, please use file.data.id instead of file.id.
With the above modifications, the file downloaded from the URL can be uploaded to Google Drive without creating a local file.
Modified script:
Please modify your script as follows.
download()
const request = require("request"); // the "request" npm package used below
const stream = require("stream");

const download = function (url) {
  return new Promise(function (resolve, reject) {
    request(
      {
        method: "GET",
        url: url,
        encoding: null, // keep the body as a Buffer
      },
      (err, res, body) => {
        if (err || res.statusCode != 200) {
          reject(err || new Error("Request failed with status " + res.statusCode));
          return;
        }
        // Wrap the downloaded buffer in a PassThrough stream so it can be
        // piped into the Drive upload without touching the disk.
        const bs = new stream.PassThrough();
        bs.end(body);
        resolve(bs);
      }
    );
  });
};
uploadFile()
function uploadFile(data) { // <--- Modified
  fs.readFile("drive_credentials.json", (err, content) => {
    if (err) return console.log("Error loading client secret file:", err);
    authorize(JSON.parse(content), function (auth) {
      const drive = google.drive({ version: "v3", auth });
      const fileMetadata = {
        name: "testphoto.png",
      };
      const media = {
        mimeType: "image/png",
        body: data, // <--- Modified
      };
      drive.files.create(
        {
          resource: fileMetadata,
          media: media,
          fields: "id",
        },
        (err, file) => {
          if (err) {
            console.error(err);
          } else {
            console.log("File Id: ", file.data.id); // <--- Modified
          }
        }
      );
    });
  });
}
For testing
For example, to test the scripts above, how about the following?
async function run() {
  const url = "###";
  const data = await fileHelper.download(url);
  googleDriveHelper.uploadFile(data);
}
References:
Class: stream.PassThrough
google-api-nodejs-client
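As the question notes, the file never needs to touch the disk; the HTTP response is itself a readable stream. Below is a minimal sketch of that idea (my own addition, not part of the answer above) using Node's built-in https module instead of the request package; the resolved stream can be passed to uploadFile() just like the PassThrough above.
const https = require("https");

// Resolves with the response stream itself, so nothing is written to disk.
// Assumes the URL is served over HTTPS and does not redirect.
function downloadAsStream(url) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      if (res.statusCode !== 200) {
        res.resume(); // discard the body
        reject(new Error("Request failed with status " + res.statusCode));
        return;
      }
      resolve(res); // res is a readable stream
    }).on("error", reject);
  });
}

// Usage, mirroring the test script above:
// const data = await downloadAsStream(url);
// googleDriveHelper.uploadFile(data);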

How to delete multiple objects with the same key in aws s3 bucket

TL;DR: How can I delete image replicas in subfolders of S3 with the same key as the original image?
I've got a Prisma server and upload images from my app to my S3 bucket through the Prisma backend. Moreover, I run a Lambda function to resize these images on the fly if requested.
Here's the process of the lambda function:
1. A user requests a resized asset from an S3 bucket through its static website hosting endpoint. The bucket has a routing rule configured to redirect to the resize API any request for an object that cannot be found.
2. Because the resized asset does not exist in the bucket, the request is temporarily redirected to the resize API method.
3. The user's browser follows the redirect and requests the resize operation via API Gateway.
4. The API Gateway method is configured to trigger a Lambda function to serve the request.
5. The Lambda function downloads the original image from the S3 bucket, resizes it, and uploads the resized image back into the bucket as the originally requested key.
6. When the Lambda function completes, API Gateway permanently redirects the user to the file stored in S3.
7. The user's browser requests the now-available resized image from the S3 bucket. Subsequent requests from this and other users will be served directly from S3 and bypass the resize operation.
8. If the resized image is deleted in the future, the above process repeats and the resized image is re-created and replaced into the S3 bucket.
https://aws.amazon.com/blogs/compute/resize-images-on-the-fly-with-amazon-s3-aws-lambda-and-amazon-api-gateway/
This brings me to the following issue: whenever I delete an image node with a key in Prisma, I can delete the object with the same key from S3, yet that doesn't touch the resized replicas in the subfolders for the respective resolutions. How can I achieve this? I tried using deleteObjects() by passing in only one key, as shown below, but this only deletes the original image at the root of the bucket.
Here's the lambda function's implementation:
exports.processDelete = async ({ id, key }, ctx, info) => {
  const params = {
    Bucket: 'XY',
    Delete: {
      Objects: [
        {
          Key: key,
        },
      ],
      Quiet: false
    }
  }
  // Delete from S3
  const response = await s3
    .deleteObjects(
      params,
      function(err, data) {
        if (err) console.log(err, err.stack); // an error occurred
        else console.log(data); // successful response
      }
    ).promise()
  // Delete from Prisma
  await ctx.db.mutation.deleteFile({ where: { id } }, info)
  console.log('Successfully deleted file!')
}
Because I'm only allowing the resizing of certain resolutions, I ended up doing the following:
exports.processDelete = async ({ id, key }, ctx, info) => {
  const keys = [
    '200x200/' + key,
    '293x293/' + key,
    '300x300/' + key,
    '400x400/' + key,
    '500x500/' + key,
    '600x600/' + key,
    '700x700/' + key,
    '800x800/' + key,
    '900x900/' + key,
    '1000x1000/' + key,
  ]
  const params = {
    Bucket: 'XY',
    Delete: {
      Objects: [
        { Key: key },
        { Key: keys[0] },
        { Key: keys[1] },
        { Key: keys[2] },
        { Key: keys[3] },
        { Key: keys[4] },
        { Key: keys[5] },
        { Key: keys[6] },
        { Key: keys[7] },
        { Key: keys[8] },
        { Key: keys[9] },
      ],
      Quiet: false
    }
  }
If there is a more elegant way, please let me know. :)
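One slightly more compact way to build the same parameters (a sketch of the identical deleteObjects call, not a different approach):
const resolutions = [
  '200x200', '293x293', '300x300', '400x400', '500x500',
  '600x600', '700x700', '800x800', '900x900', '1000x1000',
];
const params = {
  Bucket: 'XY',
  Delete: {
    // One entry for the original key plus one per resized copy.
    Objects: [key, ...resolutions.map(r => `${r}/${key}`)].map(Key => ({ Key })),
    Quiet: false
  }
};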
I did something similar a while ago. We stored the images like path/to/my/image/11222333.jpg and the renditions in path/to/my/image/11222333/200x200.jpg. So when deleting 11222333.jpg we just need to list all renditions inside the folder and delete them.
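A rough sketch of that list-then-delete approach (my own illustration; the prefix layout follows the description above, and listObjectsV2 returns at most 1000 keys per call):
async function deleteWithRenditions(bucket, key) {
  // e.g. key    = 'path/to/my/image/11222333.jpg'
  //      prefix = 'path/to/my/image/11222333/'
  const prefix = key.replace(/\.[^.]+$/, '') + '/';
  const listed = await s3.listObjectsV2({ Bucket: bucket, Prefix: prefix }).promise();
  const objects = [{ Key: key }, ...listed.Contents.map(obj => ({ Key: obj.Key }))];
  await s3.deleteObjects({ Bucket: bucket, Delete: { Objects: objects } }).promise();
}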

Download file from Amazon S3 using REST API

I have my own REST API to call in order to download a file. (In the end, the file could be stored on different kinds of servers... Amazon S3, locally, etc.)
To get a file from s3, I should use this method:
var url = s3.getSignedUrl('getObject', params);
This will give me a downloadable link to call.
Now, my question is, how can I use my own rest API to download a file when it comes from that link? Is there a way to redirect the call?
I'm using Hapi for my REST server.
{
  method: "GET", path: "/downloadFile",
  config: {auth: false},
  handler: function (request, reply) {
    // TODO
    reply({})
  }
},
Instead of using a redirect to download the desired file, just return an unbuffered stream from S3. An unbuffered stream can be obtained from the HttpResponse within the aws-sdk. This means there is no need to download the file from S3, read it in, and then have the requester download the file.
FYI, I use this getObject() approach with Express and have never used Hapi, so I think the route definition is only roughly right, but hopefully it captures the essence of what I'm trying to achieve.
Hapi.js route
const getObject = require('./getObject');

{
  method: "GET", path: "/downloadFile",
  config: {auth: false},
  handler: function (request, reply) {
    let key = '';    // get key from request
    let bucket = ''; // get bucket from request
    return getObject(bucket, key)
      .then((response) => {
        reply.statusCode(response.statusCode);
        Object.keys(response.headers).forEach((header) => {
          reply.header(header, response.headers[header]);
        });
        return reply(response.readStream);
      })
      .catch((err) => {
        // handle err
        reply.statusCode(500);
        return reply('error');
      });
  }
},
getObject.js
const AWS = require('aws-sdk');
const S3 = new AWS.S3(<your-S3-config>);

module.exports = function getObject(bucket, key) {
  return new Promise((resolve, reject) => {
    // Get the file from the bucket
    S3.getObject({
      Bucket: bucket,
      Key: key
    })
    .on('error', (err) => {
      return reject(err);
    })
    .on('httpHeaders', (statusCode, headers, response) => {
      // If the Key was found inside Bucket, prepare a response object
      if (statusCode === 200) {
        let responseObject = {
          statusCode: statusCode,
          headers: {
            'Content-Disposition': 'attachment; filename=' + key
          }
        };
        if (headers['content-type'])
          responseObject.headers['Content-Type'] = headers['content-type'];
        if (headers['content-length'])
          responseObject.headers['Content-Length'] = headers['content-length'];
        responseObject.readStream = response.httpResponse.createUnbufferedStream();
        return resolve(responseObject);
      }
    })
    .send();
  });
}
Return an HTTP 303 redirect with the Location header set to the blob's public URL in the S3 bucket.
If your bucket is private, then you need to proxy the request instead of performing a redirect, unless your clients also have access to the bucket.
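If the bucket is private and you still prefer the redirect route, the Location can point at a presigned URL from getSignedUrl (mentioned in the question). A rough sketch; the query-parameter name, bucket name, and exact Hapi reply interface are assumptions:
{
  method: "GET", path: "/downloadFile",
  config: {auth: false},
  handler: function (request, reply) {
    var params = {
      Bucket: 'your-bucket',      // placeholder
      Key: request.query.key,     // assumes the key arrives as ?key=...
      Expires: 60                 // seconds the presigned link stays valid
    };
    var url = s3.getSignedUrl('getObject', params);
    // Redirect the client so it downloads straight from S3.
    return reply.redirect(url);
  }
},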

JavaScript aws-sdk S3 deleteObject(s) succeeds but doesn't actually delete anything

In the MEAN.js app I'm building, I upload images to AWS S3. I am trying to use the AWS SDK to delete unwanted images from the site, but after a successful AJAX call the file remains on S3.
I have required the AWS SDK like so; it works both with and without the config variables (as it should):
var aws = require('aws-sdk');
aws.config.update({accessKeyId: process.env.AWS_ACCESS_KEY_ID, secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY});
For my route I have the following code:
router.post('/delete', auth, function(req,res, next){
if(req.body.key) {
var s3 = new aws.S3();
var params = {
Bucket: 'bucket name',
Key: req.body.key
};
s3.deleteObject(params, function (err, data) {
if (err) {
console.log(err, err.stack);
return next(err);
}
console.log(data);
res.end('done');
I get a 200 response and {} is logged to the console but the file is not deleted from storage. I've also tried using the deleteObjects method like so:
var params = {
  Bucket: 'bucket name',
  Delete: {
    Objects: [
      {
        Key: req.body.key
      }
    ]
  }
};
s3.deleteObjects(params, function (err, data) {
  if (err) {
    console.log(err, err.stack);
    return next(err);
  }
  console.log(data);
  res.end('done');
});
When I use deleteObjects I get { Deleted: [ { Key: 'file name' } ], Errors: [] } as a response but the file is still on S3.
Am I doing something wrong? I thought I followed the documentation to a T.
Also, the issue occurs whether or not versioning is enabled on the bucket. With versioning enabled my response is:
{ Deleted:
[ { Key: 'file name',
DeleteMarker: true,
DeleteMarkerVersionId: 'long id' } ],
Errors: [] }
Try this one. You need to use promise() to ensure that the object is deleted before ending the execution. Six hours of waiting just for a simple object deletion is not normal, even considering S3's 99.999999999% durability.
var params = {
  Bucket: bucket,
  Key: video
};
try {
  await s3.deleteObject(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else console.log("Response:", data);
  }).promise();
} catch (e) {}
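Mixing the callback with .promise() works but is redundant; the same call in promise-only form (a small sketch, to be run inside an async function) would be:
const params = { Bucket: bucket, Key: video };
try {
  const data = await s3.deleteObject(params).promise();
  console.log("Response:", data);
} catch (err) {
  console.log(err, err.stack);
}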
Looks like the first comment was right: it takes some time for files to be removed from AWS S3. In this case it was over an hour before it disappeared (could have been six hours, I stepped away for quite a bit).
I really don't think AWS takes that long to delete. I was having the same issue and solved it by changing the file name value; I was initially using the URL instead of the file name I used when uploading the image.
I've noticed AWS indicates the key as deleted even though it does not exist. In my case I sent the file path as the object key, but the key was actually the file path minus the leading /.
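If that is the cause, stripping the leading slash before building the params is enough (a small sketch based on the route above):
// S3 keys never start with "/": a leading slash makes it a different,
// usually non-existent key, which deleteObject still happily "deletes".
var key = req.body.key.replace(/^\/+/, '');
var params = {
  Bucket: 'bucket name',
  Key: key
};
s3.deleteObject(params, function (err, data) {
  if (err) return next(err);
  res.end('done');
});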
