Where can I see the code for the AWS SDK function? - javascript

For example, when I use this function like this:
// snippet-start:[s3.JavaScript.buckets.createBucket]
// Load the AWS SDK for Node.js
var AWS = require('aws-sdk');
// Set the region
AWS.config.update({region: 'REGION'});
// Create S3 service object
var s3 = new AWS.S3({apiVersion: '2006-03-01'});
// Create the parameters for calling createBucket
var bucketParams = {
  Bucket: process.argv[2]
};
// Call S3 to create the bucket
s3.createBucket(bucketParams, function(err, data) {
  if (err) {
    console.log("Error", err);
  } else {
    console.log("Success", data.Location);
  }
});
I want to see the code that implements the function. When I looked at the AWS SDK code, I couldn't go deeper than this.
Where can I see the code for the AWS SDK function?

.d.ts files are not code; they are only type definitions.
If you look at the s3.js file that sits next to the s3.d.ts file you've already found, you will see that it calls require('../lib/services/s3').
Look in that file to find the implementation.

Related

AWS Lambda - Delete an object in a versioned S3 Bucket

I'm working on setting up a Lambda function in JavaScript. I want this function to take some data when a DynamoDB record is deleted, and use that to find and remove the S3 object it corresponds to (in a versioned bucket). Here's what I have so far:
import { Context, APIGatewayProxyResult, APIGatewayEvent } from 'aws-lambda';
const AWS = require('aws-sdk');
const s3 = new AWS.S3({
  region: 'eu-west-2'
});

export const handler = async (event: APIGatewayEvent, context: Context): Promise<APIGatewayProxyResult> => {
  event.Records.forEach(async record => {
    if (record.eventName == "REMOVE") {
      _processRecord(record.dynamodb.OldImage).promise();
    }
  });
};
async function _processRecord(oldImage) {
  const parameters = {
    Bucket: process.env.BUCKETNAME,
    Key: oldImage.propertyId.S
  };
  try {
    s3.headObject(parameters).promise();
    console.log('File located in S3');
    try {
      s3.deleteObject(parameters).promise();
      console.log('File deleted from S3');
    }
    catch (error) {
      console.log("ERROR in file Deleting : " + JSON.stringify(error));
    }
  } catch (error) {
    console.log("File not Found ERROR : " + error.code)
  }
}
Everything seems fine until I get to the S3 section. When I invoke the function I get a 202 response which all looks fine, but the files are not being deleted when I check in S3. I've tried adding in a version to the parameters but that doesn't seem to work either. Any help would be greatly appreciated.
When you delete an object from an S3 bucket which has versioning enabled, the object is not permanently deleted. Instead, the latest version is specially marked as deleted, and any GetObject request for that object will behave as if the object has been deleted.
To permanently delete an object in a versioned bucket, you must delete all the versions of that object.
You can find more detail in the Deleting object versions docs.
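A sketch of what "delete all the versions" can look like with the v2 SDK: the helper below turns a listObjectVersions-style response into the params for a single deleteObjects call. The bucket name, key, and version IDs are made up for illustration, and the actual S3 call is left commented out since it needs real credentials.

```javascript
// Sketch: permanently deleting a key in a versioned bucket means deleting
// every version AND every delete marker of that key. This helper builds
// the deleteObjects params from a listObjectVersions response shape.
function buildVersionDeleteParams(bucket, key, listing) {
  const all = [...(listing.Versions || []), ...(listing.DeleteMarkers || [])]
    .filter(v => v.Key === key)
    .map(v => ({ Key: v.Key, VersionId: v.VersionId }));
  return { Bucket: bucket, Delete: { Objects: all } };
}

// Example listing shaped like s3.listObjectVersions(...).promise() output:
const listing = {
  Versions: [{ Key: 'family.jpg', VersionId: 'v1' },
             { Key: 'family.jpg', VersionId: 'v2' }],
  DeleteMarkers: [{ Key: 'family.jpg', VersionId: 'v3' }]
};
const params = buildVersionDeleteParams('my-bucket', 'family.jpg', listing);
console.log(params.Delete.Objects.length); // all three versions listed
// await s3.deleteObjects(params).promise(); // would purge them for real
```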
I've done some more digging and was able to figure out where things were going wrong. My main issue was using forEach in the async function; replacing it with for (const record of Records) got things running smoothly.
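The forEach pitfall above can be reproduced without any AWS calls. A minimal sketch, with setTimeout standing in for the S3/DynamoDB requests:

```javascript
// Sketch: forEach fires async callbacks and returns immediately, while
// for...of with await processes records one at a time before returning.
function processRecord(record, sink) {
  return new Promise(resolve => setTimeout(() => {
    sink.push(record);
    resolve();
  }, 10));
}

async function handlerWithForEach(records) {
  const done = [];
  records.forEach(async r => { await processRecord(r, done); }); // promises ignored
  return done.length; // 0 — the handler returns before any record finishes
}

async function handlerWithForOf(records) {
  const done = [];
  for (const r of records) {
    await processRecord(r, done); // each call is awaited in turn
  }
  return done.length; // records.length — all work done before returning
}

handlerWithForEach(['a', 'b']).then(n => console.log('forEach saw', n, 'done'));
handlerWithForOf(['a', 'b']).then(n => console.log('for...of saw', n, 'done'));
```

In a Lambda handler this matters because the runtime may freeze the execution environment as soon as the handler's promise resolves, so the fire-and-forget work may never run.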

Getting Google Cloud Platform custom metadata " firebaseStorageDownloadToken" programmatically

I am using Firebase Cloud Storage.
I know how to get the download URL of a file (tags.txt) using the Firebase SDK in client-side JavaScript:
storage.ref('tags.txt').getDownloadURL().then((url) => {
  console.log(url);
});
I want to get the download URL from Node.js. I know this function does not exist for Node.js.
However, in Google Cloud Platform you can find the following key/value pair:
firebaseStorageDownloadTokens: 0ea9a60a-7719-4c6b-9cb5-7fcf69d7c633, the value being the token you want. From this token I can easily build the download URL I need.
The path to get there is:
Cloud Storage / "Bucket name" / Bucket details / tags.txt / EDIT METADATA / Custom metadata.
I tried this code to access the metadata:
async function getBucketMetadata() {
  const bucketName = "gs://tags.appspot.com";
  try {
    // Get bucket metadata
    let metadata = await admin.storage().bucket(bucketName).file('tags.txt').getMetadata();
    console.log(metadata)
  }
  catch (err) {
    console.log(err.message)
  }
}
I got keys/values (not from a real project, though) such as:
bucket:'tags.appspot.com'
contentType:'text/plain'
crc32c:'Y1Sdxw=='
etag:'CI1EETD18Co9vECEAE='
generation:'162694124484794756'
id:'tags-admin.appspot.com/tags.txt/162694431484794756'
kind:'storage#object'
md5Hash:'P1YSFER4xSf5p0/KWrdQWx1z1Lyg=='
mediaLink:'https://storage.googleapis.com/download/storage/v1/b/tags-admin.appspot.com/o/tags.txt?generation=162694443184794756&alt=media'
metageneration:'1'
name:'tags.txt'
selfLink:'https://www.googleapis.com/storage/v1/b/tags-admin.appspot.com/o/tags.txt'
size:'5211247'
storageClass:'STANDARD'
timeCreated:'2021-07-22T09:01:24.862Z'
timeStorageClassUpdated:'2021-07-22T09:01:24.862Z'
updated:'2021-07-22T09:01:24.862Z'
But there is nothing regarding the key/value pair I want: firebaseStorageDownloadTokens : 0ea9a60a-7719-4c6b-9cb5-7fcf69d7c633
If the key/value pair can be seen in Google Cloud Platform, I believe it must also be accessible via some code.
Your help is appreciated.
I mixed up two projects. I re-tried it, and it works pretty nicely. Using this method I can retrieve the file token and build the file URL around it on the back-end. The front-end function getDownloadURL() is no longer required.
Thanks for your help anyway.
The code becomes:
async function getBucketMetadata() {
  const bucketName = "gs://tags.appspot.com";
  try {
    // Get bucket metadata
    let metadata = await admin.storage().bucket(bucketName).file('tags.txt').getMetadata();
    console.log(metadata[0].metadata.firebaseStorageDownloadTokens)
  }
  catch (err) {
    console.log(err.message)
  }
}
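Once the token is retrieved, the download URL can be assembled by hand. A minimal sketch: the URL shape below is the commonly observed Firebase Storage format rather than a documented contract, and the bucket and token values are the ones from the question.

```javascript
// Sketch: build a Firebase Storage download URL from a bucket, an object
// path, and the firebaseStorageDownloadTokens value. The URL format is
// the commonly observed one, not an official API guarantee.
function buildDownloadUrl(bucket, filePath, token) {
  const encodedPath = encodeURIComponent(filePath); // slashes become %2F
  return `https://firebasestorage.googleapis.com/v0/b/${bucket}` +
         `/o/${encodedPath}?alt=media&token=${token}`;
}

const url = buildDownloadUrl('tags.appspot.com', 'tags.txt',
                             '0ea9a60a-7719-4c6b-9cb5-7fcf69d7c633');
console.log(url);
```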

Amazon S3 Fetching Error: "NoSuchKey" However Key Does Exist

I am trying to fetch an S3 object using AWS Storage:
fetchAvatar = async () => {
  try {
    const imageData = await Storage.get("public/public/us-east-2:/1597842961073/family.jpg")
    console.log(imageData)
  } catch (err) {
    console.log('error fetching avatar: ')
    console.log(err)
  }
}
When I click on the link that imageData provides, I get a NoSuchKey error, even though the object does exist.
I've made sure that the image is public and accessible by everyone, so there shouldn't be any authentication problems. I've also looked at similar issues, and I made sure there are no spaces or special characters in my image keys. I am kind of stumped on this...
So I figured out the reason, and it has to do with AWS S3 management. For some reason, every time I upload an image the folder resets and becomes private. When I remake the folders and make the image public manually, I am able to render the image properly... So I guess it is more of an AWS issue or bug that they need to fix.
I suggest using the JavaScript AWS SDK; you can get an object from the bucket like below:
var params = {
  Bucket: "your-bucket-name",
  Key: "yourFileName.jpg"
};
s3.getObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
UPDATE:
You can define your region when you create an S3 instance, like:
const s3 = new S3({
  region: 'eu-central-1',
});

How does one specify directory, permissions, and create sub-directory for S3 in AWS lambda code?

I am implementing an AWS Lambda function (JavaScript, Node.js environment) which makes a call like this:
const aws = require('aws-sdk');
const s3 = new aws.S3();

function work1(obj, cb_Work1) {
  console.log(">>> Calling work");
  s3.putObject({
    Bucket: bucketName,
    Key: 'test.txt',
    Body: JSON.stringify(obj)
  })
    .promise()
    .then(() => {
      console.log('S3 -> UPLOAD SUCCESS');
      work2(obj, (resp) => {
        cb_Work1(resp);
      });
    })
    .catch(e => {
      console.log('S3 -> UPLOAD ERROR');
      console.log(e);
      cb_Work1({
        error: e
      });
    });
}
So here Key seems to be the file name.
But I haven't yet found detailed docs for s3.putObject, so I don't know:
1) how to specify a directory name (not just the bucket name),
2) how to define permissions on the file I am creating,
3) how to create a sub-directory before putting the file, etc.
How can these things be done via aws-sdk (from JavaScript, Node.js code)?
Many thanks in advance.
As stated in the S3 documentation here:
The Amazon S3 console treats all objects that have a forward slash "/" character as the last (trailing) character in the key name as a folder, for example examplekeyname/. You cannot upload an object with a key name with a trailing "/" character by using the Amazon S3 console. However, objects named with a trailing "/" can be uploaded with the Amazon S3 API by using the AWS CLI, the AWS SDKs, or REST API.
As for permissions, the API provides the putObjectAcl function, which allows setting an ACL on an object you have put.
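Putting the quoted behavior together: "directories" in S3 are just key prefixes, so a slash-separated Key creates the folder structure implicitly, and a canned ACL name on the same request sets the object's permissions at upload time. A minimal sketch, where the bucket name, paths, and body are made up and the actual putObject call is commented out since it needs real credentials:

```javascript
// Sketch: a slash-separated Key acts as the "directory"; no mkdir-style
// call exists or is needed. ACL sets per-object permissions using one of
// the canned ACL names. Bucket and paths here are illustrative only.
function buildPutParams(bucket, folder, fileName, body) {
  return {
    Bucket: bucket,
    Key: `${folder}/${fileName}`,   // e.g. "reports/2020/test.txt"
    Body: JSON.stringify(body),
    ACL: 'private'                  // or 'public-read', etc.
  };
}

const putParams = buildPutParams('my-bucket', 'reports/2020', 'test.txt', { ok: true });
console.log(putParams.Key); // "reports/2020/test.txt"
// s3.putObject(putParams).promise(); // would create the "folders" implicitly
```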

Problems Downloading files using Dropbox JavaScript SDK

I need to figure out where my files are downloading when I use the filesDownload(). I don't see an argument for file destination. Here's my code:
require('isomorphic-fetch');
var Dropbox = require('dropbox').Dropbox;
var dbx = new Dropbox({ accessToken: 'accessToken', fetch });
dbx.filesDownload({ path: 'filepath' })
  .then(function(response) {
    console.log(response);
  })
  .catch(function(error) {
    console.log(error);
  });
I'm getting a successful callback when I run the code but I don't see the file anywhere.
I need to know where my files are downloading to and how to specify the file destination in my function.
Thanks,
Gerald
I've used the function as described in the SDK's documentation (http://dropbox.github.io/dropbox-sdk-js/Dropbox.html#filesDownload__anchor) but I have no idea where my file goes.
Expected result: files are downloaded from Dropbox to the path I have designated.
Actual result: I get a successful callback from Dropbox but I cannot find the downloaded files.
In Node.js, the Dropbox API v2 JavaScript SDK download-style methods return the file data in the fileBinary property of the object they pass to the callback (which is response in your code).
You can find an example of that here:
https://github.com/dropbox/dropbox-sdk-js/blob/master/examples/javascript/node/download.js#L20
So, you should be able to access the data as response.fileBinary. It doesn't automatically save it to the local filesystem for you, but you can then do so if you want.
You need to use the fs module to save the binary data to a file.
const fs = require('fs');

dbx.filesDownload({ path: YourfilePath })
  .then(function(response) {
    console.log(response.media_info);
    fs.writeFile(response.name, response.fileBinary, 'binary', function (err) {
      if (err) { throw err; }
      console.log('File: ' + response.name + ' saved.');
    });
  })
  .catch(function(error) {
    console.error(error);
  });
