I am working on a Node.js and Express.js application. I want to delete an image on AWS S3 which I uploaded with multer-s3.
I have tried many examples that I found online, but none of them worked. For instance:
aws.config.update({
    secretAccessKey: '*******************',
    accessKeyId: '*****************',
    region: 'eu-west-3',
});
const s3 = new aws.S3();
s3.deleteObject({ Bucket: 'schubox', Key: rayon.img }, (err, data) => {
    if (err) console.error(err);
    else console.log(data);
});
This code does not throw any errors, but nothing is deleted on the S3 side.
Where am I making a mistake?
The thing here is that you need to use await and .promise() to make sure the object is actually deleted before execution ends (i.e., wait until the delete request completes).
Check out the changes below and try them:
const params = { Bucket: 'schubox', Key: rayon.img }
await s3.deleteObject(params).promise()
console.log("file deleted Successfully")
I am uploading files to an S3 bucket using the S3 upload function in Node.js. The frontend is built on Angular. But now the client's requirement is that all uploads should go to the S3 bucket via a presigned URL. Is this because of a security concern? The code that I am currently using to upload files to the S3 bucket is:
async function uploadFile(object) {
    // object param contains two properties 'image_data' and 'path'
    return new Promise(async (resolve, reject) => {
        var obj = object.image_data;
        var imageRemoteName = object.path + '/' + Date.now() + obj.name;
        AWS.config.update({
            accessKeyId: ACCESS_KEY,
            secretAccessKey: SECRET_KEY,
            region: REGION
        })
        var s3 = new AWS.S3()
        s3.upload({
            Bucket: BUCKET,
            Body: obj.data,
            Key: imageRemoteName
        })
        .promise()
        .then(response => {
            console.log(`done! - `, response)
            resolve(response.Location)
        })
        .catch(err => {
            console.log('failed:', err)
        })
    })
}
Any help will be appreciated, thanks!
Security-wise it doesn't make a difference whether you call upload or first create a pre-signed URL, as long as the code you showed does not run within your Angular application, meaning on the client. In that case every client of your application would have access to your AWS access key and secret key, and swapping upload for a pre-signed URL would not solve that problem. However, if this code runs on a server, for example an Express app, you're basically fine.
AWS provides instructions on how to upload objects using a pre-signed URL. The basic steps are:
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const s3Client = new S3Client({
    credentials: {
        accessKeyId: ACCESS_KEY,
        secretAccessKey: SECRET_KEY,
    },
    region: REGION
});
/* ... */
const command = new PutObjectCommand({
    Bucket: BUCKET,
    Key: imageRemoteName
});
// Generate a pre-signed upload URL that expires after `expiresIn` seconds.
// No Body is needed here: the client sends the object body itself
// when it PUTs to `signedUrl`.
const signedUrl = await getSignedUrl(s3Client, command, {
    expiresIn: 3600,
});
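On the Angular side the browser then uploads straight to S3 with a plain HTTP PUT to that URL. A rough browser-side sketch, where getPresignedUrlFromServer is a hypothetical call to your own backend endpoint that runs the code above and returns signedUrl:
// Ask the backend for a pre-signed URL, then PUT the file directly to S3.
async function uploadWithPresignedUrl(file) {
    const signedUrl = await getPresignedUrlFromServer(file.name); // hypothetical backend call
    const response = await fetch(signedUrl, {
        method: 'PUT',
        body: file, // the object body goes from the browser straight to S3
    });
    if (!response.ok) {
        throw new Error(`Upload failed with status ${response.status}`);
    }
    return signedUrl.split('?')[0]; // object URL without the signing query string
}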
I am trying to upload files to my S3 bucket from my Node.js app, so I am following some very simple tutorials like this one.
The code is pretty straightforward :
const AWS = require("aws-sdk"); // fresh install, version : ^2.697.0
AWS.config.update({ // Credentials are OK
    accessKeyId: process.env.s3_accessKeyId,
    secretAccessKey: process.env.s3_secretAccessKey,
    region: 'eu-central-1'
});
const s3 = new AWS.S3();
let params = {
    // (some upload params, file name, bucket name etc)
};
s3.upload(params); // <-- crash with error: "s3.upload is not a function"
I had a look at the official AWS documentation and s3.upload() seems to be a thing. I have no idea why I get an error.
If I console.log(s3.upload) I get undefined.
Node.js v13.11.0.
EDIT
I ended up using s3.putObject() which does pretty much the same thing as s3.upload(), and works, while the latter is still inexplicably undefined...
console.log(`typeof s3.upload = `);
console.log(typeof s3.upload); // undefined?? WHY
console.log(`typeof s3.putObject = `);
console.log(typeof s3.putObject); // function, and works
Use putObject, for example:
s3.putObject({
    Bucket: bucketName,
    Key: 'folder/file.txt',
    Body: data,
    ACL: 'public-read'
}, function (err, data) {
    if (err) console.error(err);
    else console.log('Successfully uploaded file.');
})
Documentation: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
You can also try reinstalling the aws-sdk package.
Refer: https://github.com/aws/aws-sdk-js/issues/916#issuecomment-191012462
You can try this:
s3 = new AWS.S3({ apiVersion: '2006-03-01' });
s3.upload(params, function(err, data) {
    console.log(err, data);
});
I'm trying to download images from AWS S3 using the AWS SDK for Node.js.
The file does get downloaded and the size is also correct. However, the file is corrupted and shows Decompression error in IDAT.
async download(accessKeyId, secretAccessKey, region, bucketName, baseImage) {
    console.log("Entered download");
    const s3 = new AWS.S3({ region: region });
    const params = {
        Bucket: bucketName,
        Key: `base/${baseImage}`
    };
    const outStream = fs.createWriteStream(this.config.baseFolder + baseImage);
    const awsStream = s3.getObject(params, (uerr, data) => {
        if (uerr) throw uerr;
        console.log(`Base file downloaded successfully!`)
    }).createReadStream().pipe(outStream);
    awsStream.on('end', function() {
        console.log("successfully Downloaded");
    }).on('error', function() {
        console.log("Some error occured while downloading");
    });
}
Here's the link I followed - https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/requests-using-stream-objects.html
The file should get downloaded without any error. I tried searching on Stack Overflow and there are some similar questions; however, they use Node.js to deliver the output to the frontend, and those solutions aren't working for me.
There was no need to make a mess of it by mixing the callback and stream APIs.
It can be achieved directly with:
async download(accessKeyId, secretAccessKey, region, bucketName, baseImage) {
    console.log("Starting Download... ")
    const s3 = new AWS.S3({
        accessKeyId: accessKeyId,
        secretAccessKey: secretAccessKey,
        region: region
    });
    const params = {
        Bucket: bucketName,
        Key: `base/${baseImage}`
    };
    s3.getObject(params, (err, data) => {
        if (err) {
            console.error(err);
            return; // don't try to write the file if the download failed
        }
        console.log(this.config.baseFolder + baseImage);
        fs.writeFileSync(this.config.baseFolder + baseImage, data.Body);
        console.log("Image Downloaded.");
    });
}
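If you do want to keep the streaming approach from the question (useful for large files), the main fixes are to not mix the callback and stream APIs and to listen for finish on the write stream rather than end. A minimal sketch under those assumptions, using aws-sdk v2 and letting the SDK pick up credentials from its default chain:
const AWS = require('aws-sdk');
const fs = require('fs');

function downloadToFile(region, bucketName, key, destPath) {
    const s3 = new AWS.S3({ region: region });
    // Call createReadStream() without also passing a callback, so the body is consumed only once
    const readStream = s3.getObject({ Bucket: bucketName, Key: key }).createReadStream();
    const writeStream = fs.createWriteStream(destPath);

    readStream.on('error', (err) => console.error('S3 read error:', err));
    writeStream.on('error', (err) => console.error('File write error:', err));
    // Write streams emit 'finish' once everything has been flushed to disk
    writeStream.on('finish', () => console.log('Successfully downloaded', destPath));

    readStream.pipe(writeStream);
}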
Following on from the great help I received on my original post
Uploading a file to an S3 bucket, triggering a Lambda, which sends an email containing info on the file uploaded to the S3 bucket
I have tested sending the email previously, so I know that works, but when I try to include the data of the upload it throws this error:
Could not fetch object data: { AccessDenied: Access Denied
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:577:35)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)
I have found many questions related to this online regarding policies, roles, etc. So I have added the Lambda to the S3 event and added S3 permissions to the role, e.g.
https://stackoverflow.com/questions/35589641/aws-lambda-function-getting-access-denied-when-getobject-from-s3
Unfortunately none of these have helped. I noticed a comment, however:
Then the best solution is to allow S3FullAccess, see if it works. If it does, then remove one set of access at a time from the policy and find the least privileges required for your Lambda to work. If it does not work even after giving S3FullAccess, then the problem is elsewhere
So how would I go about finding where the problem is?
Thank you.
'use strict';
console.log('Loading function');
var aws = require('aws-sdk');
var ses = new aws.SES({
    region: 'us-west-2'
});
//var fileType = require('file-type');
console.log('Loading function2');
var s3 = new aws.S3({ apiVersion: '2006-03-01', accessKeyId: process.env.ACCESS_KEY, secretAccessKey: process.env.SECRET_KEY, region: process.env.LAMBDA_REGION });
console.log('Loading function3');
//var textt = "";
exports.handler = function(event, context) {
    console.log("Incoming: ", event);
    // textt = event.Records[0].s3.object.key;
    // var output = querystring.parse(event);
    //var testData = null;
    // Get the object from the event and show its content type
    // const bucket = event.Records[0].s3.bucket.name;
    // const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
    const params = {
        Bucket: 'bucket',
        Key: 'key',
    };
    s3.getObject(params, function(err, objectData) {
        if (err) {
            console.log('Could not fetch object data: ', err);
        } else {
            console.log('Data was successfully fetched from object');
            var eParams = {
                Destination: {
                    ToAddresses: ["fake@fake.com"]
                },
                Message: {
                    Body: {
                        Text: {
                            Data: objectData
                            // Data: textt
                        }
                    },
                    Subject: {
                        Data: "Email Subject!!!"
                    }
                },
                Source: "fake@fake.com"
            };
            console.log('===SENDING EMAIL===');
            var email = ses.sendEmail(eParams, function(err, emailResult) {
                if (err) console.log('Error while sending email', err);
                else {
                    console.log("===EMAIL SENT===");
                    //console.log(objectData);
                    console.log("EMAIL CODE END");
                    console.log('EMAIL: ', emailResult);
                    context.succeed(event);
                }
            });
        }
    });
};
UPDATE
I have added comments to the code and checked the logs... it doesn't get past this line:
var s3 = new aws.S3({ apiVersion: '2006-03-01', accessKeyId: process.env.ACCESS_KEY, secretAccessKey: process.env.SECRET_KEY, region: process.env.LAMBDA_REGION });
Is this in any way related to the access denied error?
NOTE: ALL I WANT IS THE FILENAME OF THE UPLOADED FILE
UPDATE 2
I've replaced the line causing the issue with:
var s3 = new aws.S3().getObject({ Bucket: this.awsBucketName, Key: 'keyName' }, function(err, data)
{
    if (!err)
        console.log(data.Body.toString());
});
but this fires TypeError: s3.getObject is not a function.
I also tried var s3 = new aws.S3();
but this goes back to the original error of Could not fetch object data: { AccessDenied: Access Denied
First of all, the region should be the S3 bucket's region, not the Lambda region. Next, you need to verify your credentials and whether they have access to the S3 bucket you have defined. As you stated in one of the comments, try attaching the S3 full access Amazon managed policy to the IAM user associated with the credentials you are using in the Lambda. The next step would be to use the AWS CLI to check whether you can access the bucket, with something like:
aws s3 ls
Having said the above, you should not use credentials at all. Since Lambda and S3 are Amazon services, you should use roles. Just give the Lambda a role that gives it full access to S3 and do not use AWS IAM credentials for this. Then
var s3 = new AWS.S3();
is sufficient.
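Putting that together, a rough sketch of what the handler could look like with a role instead of keys. This assumes the execution role grants s3:GetObject on the bucket and that the function is triggered by the S3 event, so the bucket and key come from event.Records exactly as in your commented-out lines:
'use strict';
const aws = require('aws-sdk');
const s3 = new aws.S3({ apiVersion: '2006-03-01' }); // no credentials: the execution role is used

exports.handler = async (event) => {
    const bucket = event.Records[0].s3.bucket.name;
    const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
    console.log('Uploaded file name:', key); // the filename you said you wanted

    const objectData = await s3.getObject({ Bucket: bucket, Key: key }).promise();
    // ...build eParams with `key` (or objectData.Body.toString()) and call ses.sendEmail as before
    return { key: key, size: objectData.ContentLength };
};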
Update: For future reference, Amazon has now updated the documentation from what was there at the time of asking. As per @Loren Segal's comment below:
We've corrected the docs in the latest preview release to document this parameter properly. Sorry about the mixup!
I'm trying out the developer preview of the AWS SDK for Node.js and want to upload a zipped tarball to S3 using putObject.
According to the documentation, the Body parameter should be...
Body - (Base64 Encoded Data)
...therefore, I'm trying out the following code...
var AWS = require('aws-sdk'),
    fs = require('fs');
// For dev purposes only
AWS.config.update({ accessKeyId: 'key', secretAccessKey: 'secret' });
// Read in the file, convert it to base64, store to S3
fs.readFile('myarchive.tgz', function (err, data) {
    if (err) { throw err; }
    var base64data = new Buffer(data, 'binary').toString('base64');
    var s3 = new AWS.S3();
    s3.client.putObject({
        Bucket: 'mybucketname',
        Key: 'myarchive.tgz',
        Body: base64data
    }).done(function (resp) {
        console.log('Successfully uploaded package.');
    });
});
Whilst I can then see the file in S3, if I download it and attempt to decompress it I get an error that the file is corrupted. Therefore it seems that my method for 'base64 encoded data' is off.
Can someone please help me to upload a binary file using putObject?
You don't need to convert the buffer to a base64 string. Just set Body to data and it will work.
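That is, reusing the question's preview-SDK code as-is and only removing the base64 conversion:
fs.readFile('myarchive.tgz', function (err, data) {
    if (err) { throw err; }
    var s3 = new AWS.S3();
    s3.client.putObject({
        Bucket: 'mybucketname',
        Key: 'myarchive.tgz',
        Body: data // the Buffer from fs.readFile, passed through untouched
    }).done(function (resp) {
        console.log('Successfully uploaded package.');
    });
});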
Here is a way to send a file using streams, which might be necessary for large files and will generally reduce memory overhead:
var AWS = require('aws-sdk'),
    fs = require('fs');
// For dev purposes only
AWS.config.update({ accessKeyId: 'key', secretAccessKey: 'secret' });
// Stream the file straight to S3 instead of reading it into memory first
var fileStream = fs.createReadStream('myarchive.tgz');
fileStream.on('error', function (err) {
    if (err) { throw err; }
});
fileStream.on('open', function () {
    var s3 = new AWS.S3();
    s3.putObject({
        Bucket: 'mybucketname',
        Key: 'myarchive.tgz',
        Body: fileStream
    }, function (err) {
        if (err) { throw err; }
    });
});
I was able to upload my binary file this way.
var fileStream = fs.createReadStream("F:/directory/fileName.ext");
var putParams = {
    Bucket: s3bucket,
    Key: s3key,
    Body: fileStream
};
s3.putObject(putParams, function(putErr, putData) {
    if (putErr) {
        console.error(putErr);
    } else {
        console.log(putData);
    }
});