I am trying to fetch an S3 object using AWS Amplify Storage:
fetchAvatar = async () => {
  try {
    const imageData = await Storage.get("public/public/us-east-2:/1597842961073/family.jpg")
    console.log(imageData)
  } catch (err) {
    console.log('error fetching avatar: ')
    console.log(err)
  }
}
When I click the link that imageData provides, I get a NoSuchKey error, even though the object does exist.
I've made sure that the image is public and accessible by everyone, so there shouldn't be any authentication problems. I've also looked at similar issues and made sure there are no spaces or unusual special characters in my image keys. I am kind of stumped on this...
So I figured out the reason, and it has to do with AWS S3 management. For some reason, every time I upload an image, the folder resets and becomes private. When I manually make the folders and image public again, I am able to render the image properly... So I guess it is more of an AWS issue or bug that they need to fix.
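If the hand-built key is part of the problem, it may also help to let Amplify assemble the prefix instead of hard-coding public/ and the identity pool ID into the key. A minimal sketch, assuming the file was uploaded through Amplify Storage ("family.jpg" is a placeholder key):

import { Storage } from 'aws-amplify';

const fetchAvatar = async () => {
  try {
    // With level: 'public', Amplify prepends the "public/" prefix itself,
    // so the key is just the file name.
    const imageData = await Storage.get('family.jpg', { level: 'public' });
    console.log(imageData);
  } catch (err) {
    console.log('error fetching avatar: ', err);
  }
};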
I suggest using the JavaScript AWS SDK; you can get an object from the bucket like below:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const params = {
  Bucket: "your-bucket-name",
  Key: "yourFileName.jpg"
};
s3.getObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
UPDATE:
You can define your region when you create an S3 instance, like:
const s3 = new S3({
  region: 'eu-central-1',
});
I'm working on setting up a Lambda function in JavaScript. I want this function to take some data when a DynamoDB record is deleted, and use that to find and remove the corresponding S3 object (in a versioned bucket). Here's what I have so far:
import { Context, APIGatewayProxyResult, APIGatewayEvent } from 'aws-lambda';
const AWS = require('aws-sdk');
const s3 = new AWS.S3({
  region: 'eu-west-2'
});

export const handler = async (event: APIGatewayEvent, context: Context): Promise<APIGatewayProxyResult> => {
  event.Records.forEach(async record => {
    if (record.eventName == "REMOVE") {
      _processRecord(record.dynamodb.OldImage).promise();
    }
  });
};
async function _processRecord(oldImage) {
  const parameters = {
    Bucket: process.env.BUCKETNAME,
    Key: oldImage.propertyId.S
  };
  try {
    s3.headObject(parameters).promise();
    console.log('File located in S3');
    try {
      s3.deleteObject(parameters).promise();
      console.log('File deleted from S3');
    } catch (error) {
      console.log("ERROR in file Deleting : " + JSON.stringify(error));
    }
  } catch (error) {
    console.log("File not Found ERROR : " + error.code)
  }
}
Everything seems fine until I get to the S3 section. When I invoke the function I get a 202 response, which all looks fine, but the files are not deleted when I check in S3. I've tried adding a version to the parameters, but that doesn't seem to work either. Any help would be greatly appreciated.
When you delete an object from an S3 bucket that has versioning enabled, the object is not permanently deleted. Instead, the latest version is specially marked as deleted (a delete marker). Any GET request for that object will then behave as if the object has been deleted.
To permanently delete an object in a versioned bucket you must delete all the versions of that object.
You can find more detail in the Deleting object versions docs.
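For illustration, here is a minimal sketch of permanently deleting every version of a single object with the v2 JavaScript SDK (the bucket and key are placeholders, and pagination of the version listing is omitted):

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'eu-west-2' });

async function deleteAllVersions(bucket, key) {
  // List the versions and delete markers recorded for this key.
  const listing = await s3.listObjectVersions({ Bucket: bucket, Prefix: key }).promise();
  const toDelete = [...(listing.Versions || []), ...(listing.DeleteMarkers || [])]
    .filter((v) => v.Key === key)
    .map((v) => ({ Key: v.Key, VersionId: v.VersionId }));
  if (toDelete.length > 0) {
    // Passing an explicit VersionId makes the delete permanent.
    await s3.deleteObjects({ Bucket: bucket, Delete: { Objects: toDelete } }).promise();
  }
}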
I've done some more digging and was able to figure out where things were going wrong. My main issue was actually using forEach in the async function. Replacing that with for (const record of Records) got things running smoothly.
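For reference, a sketch of what the corrected loop looks like, so each record is actually awaited before the handler returns (the .promise() calls inside _processRecord need an await in front of them as well):

export const handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName === "REMOVE") {
      // Awaiting here keeps the Lambda alive until the S3 calls finish.
      await _processRecord(record.dynamodb.OldImage);
    }
  }
};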
I have some problems accessing my S3 images via a GET request from my Express server.
I have a Mongo database where I store text information for the items on my webpage, and I save the image key that I send to my S3 bucket. Now when I try to get all the items and their respective PNG images, this error comes back:
...aws-sdk\lib\request.js:31
throw err;
^
AccessDenied: Access Denied ...
even though my S3 user permissions are fine.
Because I need to fetch all the items for a productPage component, I do this:
//ROUTER FILE
router.get("/cust/test", async (req, res) => {
  try {
    let tests;
    tests = await Test.find();
    tests.map((t) => {
      const png = t.png;
      const readStream = s3DwnPng(png);
      readStream.pipe(res);
      console.log(png);
    });
    res.status(200).json(tests);
    console.log(tests);
  } catch (err) {
    res.status(500).json(err);
  }
});
//S3 FILE
function s3DwnPng(fileKey) {
  const dwnParams = {
    Bucket: process.env.AWS_BUCKET_NAME,
    Key: `png/${fileKey}`,
  };
  return s3.getObject(dwnParams).createReadStream();
}
exports.s3DwnPng = s3DwnPng;
but this does not work for me.
Could someone help me?
Also, is it worth persisting with serving the images through my server? I'm considering switching to a public read policy with restricted CORS access to lighten the load on my server. Is it really secure to do so?
I am using Firebase Cloud Storage.
I know how to get the download URL of a file (tags.txt) using the Firebase SDK in client-side JavaScript:
storage.ref('tags.txt').getDownloadURL().then((url) => {
  console.log(url);
});
I want to get the download URL from Node.js. I do know this function does not exist in the Node.js SDK.
However, in Google Cloud Platform you can see the following key/value pair:
firebaseStorageDownloadTokens : 0ea9a60a-7719-4c6b-9cb5-7fcf69d7c633, the value being the token you want. From this token I can easily build the download URL I need.
The path to get there is:
Cloud Storage / "Bucket name" / Bucket details / tags.txt / EDIT METADATA / Custom metadata.
I tried this code to access that metadata:
async function getBucketMetadata() {
  const bucketName = "gs://tags.appspot.com";
  try {
    // Get bucket metadata
    let metadata = await admin.storage().bucket(bucketName).file('tags.txt').getMetadata();
    console.log(metadata)
  } catch (err) {
    console.log(err.message)
  }
}
I got keys/values (not from a real project, though) such as:
bucket:'tags.appspot.com'
contentType:'text/plain'
crc32c:'Y1Sdxw=='
etag:'CI1EETD18Co9vECEAE='
generation:'162694124484794756'
id:'tags-admin.appspot.com/tags.txt/162694431484794756'
kind:'storage#object'
md5Hash:'P1YSFER4xSf5p0/KWrdQWx1z1Lyg=='
mediaLink:'https://storage.googleapis.com/download/storage/v1/b/tags-admin.appspot.com/o/tags.txt?generation=162694443184794756&alt=media'
metageneration:'1'
name:'tags.txt'
selfLink:'https://www.googleapis.com/storage/v1/b/tags-admin.appspot.com/o/tags.txt'
size:'5211247'
storageClass:'STANDARD'
timeCreated:'2021-07-22T09:01:24.862Z'
timeStorageClassUpdated:'2021-07-22T09:01:24.862Z'
updated:'2021-07-22T09:01:24.862Z'
But nothing regarding the key/value pair I want: firebaseStorageDownloadTokens : 0ea9a60a-7719-4c6b-9cb5-7fcf69d7c633.
If the key/value pair can be seen in Google Cloud Platform, I believe it must also be accessible via code.
Your help is appreciated.
I had mixed up two projects. I re-tried it, and it works pretty nicely. Using this method I can retrieve the file token on the back-end and build the file URL around it (see the sketch after the code below). The front-end function getDownloadURL() is no longer required.
Thanks for your help anyway.
The code becomes:
async function getBucketMetadata() {
  const bucketName = "gs://tags.appspot.com";
  try {
    // Get bucket metadata
    let metadata = await admin.storage().bucket(bucketName).file('tags.txt').getMetadata();
    console.log(metadata[0].metadata.firebaseStorageDownloadTokens)
  } catch (err) {
    console.log(err.message)
  }
}
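For completeness, a sketch of how the download URL could then be assembled on the back-end. The URL format is an assumption based on the URLs that the client-side getDownloadURL() returns, so verify it against your own project:

function buildDownloadURL(bucket, filePath, token) {
  // Mirrors the format of URLs returned by getDownloadURL() on the client.
  return `https://firebasestorage.googleapis.com/v0/b/${bucket}/o/` +
    `${encodeURIComponent(filePath)}?alt=media&token=${token}`;
}

// e.g. buildDownloadURL('tags.appspot.com', 'tags.txt', metadata[0].metadata.firebaseStorageDownloadTokens);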
I'm trying to build an Express server that will send items in an S3 bucket to the client using Node.js and Express.
I found the following code on the AWS documentation.
var s3 = new AWS.S3({apiVersion: '2006-03-01'});
var params = {Bucket: 'myBucket', Key: 'myImageFile.jpg'};
var file = require('fs').createWriteStream('/path/to/file.jpg');
s3.getObject(params).createReadStream().pipe(file);
I have changed it slightly to the following:
app.get("/", (req, res) => {
const params = {
Bucket: env.s3ImageBucket,
Key: "images/profile/abc"
};
s3.getObject(params).createReadStream().pipe(res);
});
I believe this should work fine. The problem I'm running into is what happens when the file doesn't exist or S3 returns some type of error: the application crashes and I get the following error:
NoSuchKey: The specified key does not exist
My question is: how can I catch or handle this error? I have tried a few things, such as wrapping that s3.getObject line in a try/catch block, none of which have worked.
How can I catch an error and handle it my own way?
I suppose you can catch the error by listening for the stream's 'error' event before piping:
s3.getObject(params)
  .createReadStream()
  .on('error', (e) => {
    // NoSuchKey & others surface here
  })
  .pipe(res)
  .on('data', (data) => {
    // data
  })
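Building on that, you can turn the error into a proper HTTP status instead of crashing the process. A sketch under the same setup:

app.get("/", (req, res) => {
  s3.getObject(params)
    .createReadStream()
    .on('error', (err) => {
      // NoSuchKey, AccessDenied, etc. surface here.
      if (!res.headersSent) {
        res.status(err.code === 'NoSuchKey' ? 404 : 500);
      }
      res.end();
    })
    .pipe(res);
});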
I have some very simple code for generating an S3 URL. The URL I get back from the SDK only has the base path for S3. It doesn't contain anything else. Why is this happening?
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
console.log(s3.getSignedUrl('getObject', {
  Bucket: 'test',
  Key: 'test'
}));
// Returns "https://s3.amazonaws.com/"
Node.js v0.12.0, AWS SDK 2.1.15 or 2.1.17, Windows 7 64-bit.
The problem wasn't with the code. It turns out that when you don't have your AWS credentials set up properly in your environment, the AWS SDK doesn't complain. Fixing the credentials in ~/.aws/credentials resolved the issue.
I too had the same problem. I got the correct output by changing
AWS_Access_Key_Id = myaccesskey
to
aws_access_key_id=myaccesskey
and similarly for the secret key. That means you should use lowercase key names and no spaces before or after the =.
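In other words, a correctly formatted ~/.aws/credentials file looks like this (placeholder values):

[default]
aws_access_key_id=AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx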
I had the same problem.
I had inserted correct credentials, but some requests received only the base path while others received normal URLs.
I was able to get the correct URL consistently when I changed getSignedUrl to await getSignedUrlPromise.
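A sketch of that change (getSignedUrlPromise is available in newer releases of the v2 SDK; this must run inside an async function):

const url = await s3.getSignedUrlPromise('getObject', {
  Bucket: 'test',
  Key: 'test'
});
console.log(url); // full presigned URL instead of just the base path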
To trace whether your issue is that the bucket doesn't exist, the permissions are wrong, the credentials in your ~/.aws/credentials file are incorrect, or some other AWS access-related problem, I just used the HeadBucket operation as per the documentation.
Ref: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getSignedUrlPromise-property
To achieve this programmatically:
/* This operation checks to see if a bucket exists. Put into your aws.ts file. */
var params = {
  Bucket: "acl1"
};
s3.headBucket(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
And with the optional ExpectedBucketOwner parameter:
var params = {
  Bucket: 'STRING_VALUE', /* required */
  ExpectedBucketOwner: 'STRING_VALUE' /* the owner's AWS account ID */
};
s3.headBucket(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
This will surface an exception such as:
CredentialsError: Missing credentials in config, if using AWS_CONFIG_FILE...