Is there a way to get objects from S3 with the URL - JavaScript

I want to get my image from S3, but is it possible to use the URL as a parameter in the params? Currently, I am getting my images using the key:
const downloadParams = {
  Key: fileKey,
  Bucket: BUCKET_NAME
};
// Currently: fetch the object by key and return it as a readable stream
return s3.getObject(downloadParams).createReadStream();
}

If you have the full image URL, you can use request or a similar HTTP library to fetch the file instead of using s3.getObject.
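For example, here's a minimal sketch using Node's built-in https module (the bucket URL and file name below are hypothetical; the object is assumed to be public or the URL presigned):

const https = require('https');
const fs = require('fs');

// Hypothetical public or presigned S3 URL
const imageUrl = 'https://my-bucket.s3.amazonaws.com/images/photo.jpg';

https.get(imageUrl, (res) => {
  if (res.statusCode !== 200) {
    console.error(`Request failed with status ${res.statusCode}`);
    res.resume(); // discard the response body
    return;
  }
  // Stream the response body straight into a local file
  res.pipe(fs.createWriteStream('photo.jpg'));
});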

Related

Stripe Payment Success URL Redirect With Parameter

I'm working with the following Stripe.js file in a Next.js project:
import { loadStripe } from "@stripe/stripe-js";
export async function Stripe({ lineItems }, imageUrls) {
let stripePromise = null;
const getStripe = () => {
if (!stripePromise) {
stripePromise = loadStripe(process.env.NEXT_PUBLIC_API_KEY);
}
return stripePromise;
};
const stripe = await getStripe();
await stripe.redirectToCheckout({
mode: "payment",
lineItems,
successUrl: `http://localhost:3000/success?pdf=${imageUrls}`,
cancelUrl: window.location.origin,
});
}
When I call the Stripe function, I'm passing an imageUrls array which looks like this for example:
['blob:http://localhost:3000/2a47a926-be04-49a9-ad96-3279c540ebb4']
When the Stripe redirectToCheckout happens, I navigate to the success page and pass imageUrls.
My goal is to convert these imageUrls into png images from the success page using code like this inside of an async function:
const fileResponse = await fetch(imageUrl);
const contentType = fileResponse.headers.get("content-type");
const blob = await fileResponse.blob();
const imageFile = new File([blob], `someImage.png`, {
contentType,
});
I end up getting this error though:
GET blob:http://localhost:3000/2a47a926-be04-49a9-ad96-3279c540ebb4 net::ERR_FILE_NOT_FOUND
I'm guessing after the redirect this URL doesn't exist anymore? What is the correct way to make something like this work?
Edit to include success.js code:
import Layout from "../components/Layout/Layout";
import { useEffect } from "react";
function SuccessPage() {
useEffect(() => {
const params = new Proxy(new URLSearchParams(window.location.search), {
get: (searchParams, prop) => searchParams.get(prop),
});
let value = params.pdf;
console.log("this is value");
console.log(value);
async function getFileFromUrl(imageUrl) {
const fileResponse = await fetch(imageUrl);
const contentType = fileResponse.headers.get("content-type");
const blob = await fileResponse.blob();
const ditheredImageFile = new File([blob], `test.png`, {
contentType,
});
return ditheredImageFile;
}
let imageFile = getFileFromUrl(value);
console.log("this is imageFile");
console.log(imageFile);
}, []);
return (
<Layout>
<h3>Thank You For Your Order!</h3>
</Layout>
);
}
export default SuccessPage;
You're using blobs where you shouldn't, and I'm guessing those blobs came from user input.
Blobs in JavaScript live in memory (RAM); they are discarded when your document unloads.
Since you're redirecting the user to another page (Stripe), you're unloading your document and thus losing everything you have in memory: all your blobs are gone. They are only valid while your document is loaded; once you leave it or get redirected, they are cleared from memory.
To solve your problem, upload the documents to a server before unloading the document (i.e. before redirecting the user to Stripe) and pass your server URL instead of your "internal" (blob) URL, and everything should work.
Basically, you need to save your files on a server via AJAX, have the server store the files and return their URLs (or, even better, an ID for your image collection), and use those server URLs on the redirect (or an ID that you'll use to retrieve all the files you need later, simplifying your parameter usage).
More info at: https://javascript.info/blob ("Blob as URL" section)
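As a rough sketch of that idea (the /api/upload endpoint and the returned id field below are hypothetical, not part of the original code):

// Upload each blob to your own server before redirecting to Stripe,
// then put the returned IDs (not the blob: URLs) into the success URL.
async function uploadBlobs(imageUrls) {
  const ids = [];
  for (const blobUrl of imageUrls) {
    const blob = await (await fetch(blobUrl)).blob();
    const formData = new FormData();
    formData.append('file', blob, 'image.png');
    // Hypothetical endpoint that stores the file and responds with { id: '...' }
    const res = await fetch('/api/upload', { method: 'POST', body: formData });
    const { id } = await res.json();
    ids.push(id);
  }
  return ids;
}

// Usage before redirectToCheckout:
// const ids = await uploadBlobs(imageUrls);
// successUrl: `http://localhost:3000/success?images=${ids.join(',')}`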
The localhost url will not work because Stripe has no way of accessing an image on your local machine. Instead, you should upload the image to a database and provide a public URL. I'm not sure what systems (e.g. AWS, Azure) you use, so it's hard to get very specific.
The database can then be queried after landing on the checkout page. One way to do this would be to pass the ID of the item as a URL param. Another way is to store information about the product in local storage.
Either way, you should get a response from your database with a unique link or ID so you know exactly what to query later.
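For instance, a small sketch of the local storage variant (the key name, stored shape, and /api/items endpoint are just examples):

// Before redirecting to Stripe: remember which item was purchased
localStorage.setItem('pendingOrder', JSON.stringify({ itemId: 'abc123' }));

// On the success page: read it back and ask your server for the real image URL
(async () => {
  const pending = JSON.parse(localStorage.getItem('pendingOrder') || '{}');
  if (pending.itemId) {
    const res = await fetch(`/api/items/${pending.itemId}`); // hypothetical endpoint
    const { imageUrl } = await res.json();
    console.log(imageUrl);
  }
})();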
You should pass the image data as a base64-encoded string to the success URL and then decode it on the success page to get the actual image data.
Modify your code:
On the Stripe function:
const imageUrls = lineItems.map((item) => {
return btoa(item.url);
});
// ...
successUrl: `http://localhost:3000/success?pdf=${imageUrls}`,
// ...
On the SuccessPage
async function getFileFromUrl(imageUrl) {
const decodedUrl = atob(imageUrl);
const fileResponse = await fetch(decodedUrl);
const contentType = fileResponse.headers.get("content-type");
const blob = await fileResponse.blob();
const ditheredImageFile = new File([blob], `test.png`, {
contentType,
});
return ditheredImageFile;
}
I think the error is coming from this: a blob URL is only available in the same origin/document context, but your success URL does not share that context with the blob.
Please try to generate data URLs from the blobs.
const dataUrls = await Promise.all(
imageUrls.map(async (url) => {
const response = await fetch(url);
const blob = await response.blob();
return URL.createObjectURL(blob);
})
);
await stripe.redirectToCheckout({
mode: "payment",
lineItems,
successUrl: `http://localhost:3000/success?pdf=${encodeURIComponent(dataUrls.join(","))}`,
cancelUrl: window.location.origin,
});
Then decode the URLs on the success page, and you should be able to resolve your issue.
The correct way to make this work is to use a URL query parameter instead of a URL path parameter. When you pass parameters in the URL path, they are not accessible after the redirect.
So instead, you should pass the imageUrls array as a query parameter in the successUrl like this:
successUrl: `http://localhost:3000/success?pdf=${imageUrls.join(',')}`,
You can then access the query parameter in the success page and convert the imageUrls array into png images.
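For example, a small sketch of reading the comma-separated list back out on the success page:

// Inside the success page (e.g. in a useEffect)
const params = new URLSearchParams(window.location.search);
const imageUrls = (params.get('pdf') || '').split(',');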

Adding files to stripe connect

I'm trying to add some files (identity documents) to stripe to create a connected account, but I'm having trouble with uploading them from client side to stripe. My backend is in Node.js and the Stripe documentation says it should use this format:
const Stripe = require('stripe');
const fs = require('fs');
const stripe = Stripe('stripeAPIKEY');
var fp = fs.readFileSync('/path/to/a/file.jpg');
var file = await stripe.files.create({
purpose: 'identity_document',
file: {
data: fp,
name: 'file.jpg',
type: 'image/jpg',
},
});
I need to upload the file data (the variable fp), but I can't seem to get the relevant path for when the user uploads their document in the client side in Javascript. Here is my function call to Stripe:
export const uploadPersonIdFile = async (identityDocument: any) => {
const fp = fs.readFileSync(identityDocument);
const personId = await stripe.files.create({
purpose: 'identity_document',
file: {
data: fp,
name: 'idDocument.jpg',
type: 'image/jpg',
},
});
return personId;
}
My client side looks like this:
const inpFileU = $("#utilityButton");
const previewImage = $("#image-preview__image-U");
const previewDefaultText = $("#image-preview__default-text-U");
inpFileU.change(async function(){
const file = this.files[0];
if(file){
const reader = new FileReader();
previewImage.css("display", "block");
reader.addEventListener("load", function(){
previewImage.attr("src", this.result);
});
reader.readAsDataURL(file);
utilityFileName = file.name;
await uploadMerchantUtilityDocument({
utilityDocument: utilityFileName
}).then((result) => {
/** @type {any} */
const data = result.data;
const textData = data.text;
console.log(JSON.stringify(data));
console.log(JSON.stringify(textData));
}).catch((error) => {
console.log("Error message: " + error.message);
console.log("Error details: " + error.details);
});
} else {
console.log('no file');
}
});
I upload the file and then the error message response I keep getting is this:
Error: ENOENT: no such file or directory, open 'insuranceImage.jpeg'
How should I upload my file? I think my fp variable is wrong, but I don't know what to replace it with.
As far as I understood, you are uploading files from the client to the server, and from the server you want to upload them to the Stripe API. In this case, pay attention to how the file is encoded: if the uploaded file arrived base64-encoded, you need to read it with that encoding (fs.readFileSync without an encoding option returns a raw Buffer). I don't know much about jQuery. Check how the file was encoded, and use the correct encoding.
const image= fs.readFileSync('/path/to/file.jpg', {encoding: 'base64'});
I think the image is somehow broken. If the image path is correct, just manually place an image in that directory and then read from it with utf8 encoding.
Since you got this error, the base64 string is too large to process in fs.readFileSync, which means your path is correct.
reader = fs.createReadStream('imagePath', {
  flags: 'a+',
  // then try changing this encoding to base64
  encoding: 'UTF-8',
  start: 5,
  end: 64,
  highWaterMark: 16
});
// Read and display the file data on console
reader.on('data', function (chunk) {
console.log(chunk);
});
Based on the error and the comment it looks like the path you're providing to readFileSync isn't pointing to where the file you're trying to read exists. From the error it looks like you're passing insuranceImage.jpeg and that the system can't find that file.
Try confirming whether the relative path you're providing is correct, or construct and provide an absolute path instead.
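For example, a minimal sketch of building and checking an absolute path (the uploads directory here is hypothetical):

const path = require('path');
const fs = require('fs');

// Resolve relative to this module instead of the process working directory
const absolutePath = path.join(__dirname, 'uploads', 'insuranceImage.jpeg');
console.log(fs.existsSync(absolutePath)); // verify the file is actually there
const fp = fs.readFileSync(absolutePath);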
OK, so after a while I was helped by a pro online. Here's what he did (because Stripe's documentation isn't great on file uploading).
I created a base64 value with FileReader() by first creating an array, base64String = []. This array holds the result from the reader's load event, like so:
reader.addEventListener("load", function(){
base64String.push(this.result);
});
This base64String array was then used as the identityDocument variable for the backend function.
My base64 string on the client side kept starting with a prefix like 'data:image/jpeg;base64,' and I needed to get rid of that. There's no point using fs.readFileSync, since we're not uploading a file from disk to Stripe, we're uploading raw base64 data. So the following Node.js code solves this:
const parts = utilityDocument.split(",");
const base64 = parts[1];
const buffer = Buffer.from(base64, "base64");
So the full function that calls Stripe is:
export const uploadPersonIdFile = async (uid: any, identityDocument: any) => {
const parts = identityDocument.split(",");
const base64 = parts[1];
const buffer = Buffer.from(base64, "base64");
const personId = await stripe.files.create({
purpose: 'identity_document',
file: {
data: buffer,
name: 'identity.jpg',
type: 'image/jpg',
},
});
await updateMerchantId(uid, { idDocument: personId.id });
return personId;
}

How to post original file as formdata using api with filestack in reactjs?

I am using Filestack in React to upload files. First I browse the images and upload them; on upload done I call a function, and in this function I want to post the original file as form data to my API with the POST method. So please help me post the original file to my server after getting the response from the Filestack upload.
const options = {
maxFiles: 5,
onUploadDone: handleUploadFunction // callback function on upload done
};
const filestack = client.init("key", options);
const picker = filestack.picker(options);
function handleUploadFunction(result, board_id) {
const fileData = result.filesUploaded[0];
console.log(result);
const getValue = sessionStorage.getItem("user_id");
const token = sessionStorage.getItem("userToken");
let imageData = new FormData();
imageData.append("v_code", "1.0");
imageData.append("apikey", "41bbf547d64c309749b613f16323b762");
imageData.append("token", token);
imageData.append("userid", getValue);
imageData.append("board_id", 362);
imageData.append("img_text", "Test Image 362");
imageData.append("img_data", "image data 362");
imageData.append("card_id", 8854);
imageData.append("image", fileData.originalFile);
axios.post("http://160.153.247.88:3000/add_file", imageData).then(res => {
console.log(res.data);
});
}

Is there a way to get the previous version of a deleted s3 object with aws-sdk?

I have a S3 bucket with versioning enabled, configured to send notification events to Lambda. I need to process deleted objects from that bucket when the s3:ObjectRemoved:* event is received.
The event contains the versionId of the deleted object.
Is there a way to discover the versionId of the immediately previous version of the deleted object and fetch that version using the aws-sdk?
Or, alternatively, is there a way to get the deleted object using aws-sdk?
(I'm using the JavaScript aws-sdk)
It can be done with a 3-step process:
Get the list of versions with listObjectVersions
Get the wanted version from the list
Get the specific object, passing VersionId as an argument to getObject
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
async function getDeletedObject (event, context) {
let params = {
Bucket: 'my-bucket',
Prefix: 'my-file'
};
try {
const previousVersion = await s3.listObjectVersions(params)
.promise()
.then(result => {
const versions = result.Versions;
// get previous versionId
return versions[0].VersionId;
});
params = {
Bucket: 'my-bucket',
Key: 'my-file',
VersionId: previousVersion
};
const deletedObject = await s3.getObject(params)
.promise()
.then(response => response.Body.toString('utf8'));
return deletedObject;
}
catch (error) {
console.log(error);
return;
}
}
Getting the below error with the solution mentioned by @andreswebs:
UnhandledPromiseRejectionWarning: MethodNotAllowed: The specified method is not allowed against this resource.

How to get response from S3 getObject in Node.js?

In a Node.js project I am attempting to get data back from S3.
When I use getSignedURL, everything works:
aws.getSignedUrl('getObject', params, function(err, url){
console.log(url);
});
My params are:
var params = {
  Bucket: "test-aws-imagery",
  Key: "TILES/Level4/A3_B3_C2/A5_B67_C59_Tiles.par"
};
If I take the URL output to the console and paste it in a web browser, it downloads the file I need.
However, if I try to use getObject I get all sorts of odd behavior. I believe I am just using it incorrectly. This is what I've tried:
aws.getObject(params, function(err, data){
console.log(data);
console.log(err);
});
Outputs:
{
AcceptRanges: 'bytes',
LastModified: 'Wed, 06 Apr 2016 20:04:02 GMT',
ContentLength: '1602862',
ETag: '9826l1e5725fbd52l88ge3f5v0c123a4"',
ContentType: 'application/octet-stream',
Metadata: {},
Body: <Buffer 01 00 00 00 ... > }
null
So it appears that this is working properly. However, when I put a breakpoint on one of the console.logs, my IDE (NetBeans) throws an error and refuses to show the value of data. While this could just be the IDE, I decided to try other ways to use getObject.
aws.getObject(params).on('httpData', function(chunk){
console.log(chunk);
}).on('httpDone', function(data){
console.log(data);
});
This does not output anything. Putting a breakpoint in shows that the code never reaches either of the console.logs. I also tried:
aws.getObject(params).on('success', function(data){
console.log(data);
});
However, this also does not output anything and placing a breakpoint shows that the console.log is never reached.
What am I doing wrong?
@aws-sdk/client-s3 (2022 Update)
Since I wrote this answer in 2016, Amazon has released a new JavaScript SDK, @aws-sdk/client-s3. This new version improves on the original getObject() by always returning a promise instead of opting in via .promise() chained to getObject(). In addition, response.Body is no longer a Buffer but one of Readable | ReadableStream | Blob. This changes the handling of response.Body a bit. It should be more performant, since we can stream the returned data instead of holding all of the contents in memory, with the trade-off that it is a bit more verbose to implement.
In the below example the response.Body data will be streamed into an array and then returned as a string. This is the equivalent example of my original answer. Alternatively, the response.Body could use stream.Readable.pipe() to an HTTP Response, a File or any other type of stream.Writeable for further usage, this would be the more performant way when getting large objects.
If you wanted to use a Buffer, like the original getObject() response, this can be done by wrapping responseDataChunks in a Buffer.concat() instead of using Array#join(), this would be useful when interacting with binary data. To note, since Array#join() returns a string, each Buffer instance in responseDataChunks will have Buffer.toString() called implicitly and the default encoding of utf8 will be used.
const { GetObjectCommand, S3Client } = require('@aws-sdk/client-s3')
const client = new S3Client() // Pass in opts to S3 if necessary
function getObject (Bucket, Key) {
return new Promise(async (resolve, reject) => {
const getObjectCommand = new GetObjectCommand({ Bucket, Key })
try {
const response = await client.send(getObjectCommand)
// Store all of data chunks returned from the response data stream
// into an array then use Array#join() to use the returned contents as a String
let responseDataChunks = []
// Handle an error while streaming the response body
response.Body.once('error', err => reject(err))
// Attach a 'data' listener to add the chunks of data to our array
// Each chunk is a Buffer instance
response.Body.on('data', chunk => responseDataChunks.push(chunk))
// Once the stream has no more data, join the chunks into a string and return the string
response.Body.once('end', () => resolve(responseDataChunks.join('')))
} catch (err) {
// Handle the error or throw
return reject(err)
}
})
}
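As mentioned above, if you need a Buffer instead of a string, a sketch of the Buffer.concat() variant looks like this (it reuses the client and GetObjectCommand from the example above):

// Same flow as above, but collect the Buffer chunks and concatenate them
// instead of joining strings, which is what you want for binary data
function getObjectAsBuffer (Bucket, Key) {
  return new Promise(async (resolve, reject) => {
    const getObjectCommand = new GetObjectCommand({ Bucket, Key })
    try {
      const response = await client.send(getObjectCommand)
      const responseDataChunks = []
      response.Body.once('error', err => reject(err))
      response.Body.on('data', chunk => responseDataChunks.push(chunk))
      response.Body.once('end', () => resolve(Buffer.concat(responseDataChunks)))
    } catch (err) {
      return reject(err)
    }
  })
}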
Comments on using Readable.toArray()
Using Readable.toArray() instead of working with the stream events directly might be more convenient, but it performs worse. It works by reading all response data chunks into memory before moving on. Since this removes all the benefits of streaming, this approach is discouraged per the Node.js docs.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams. Documentation Link
@aws-sdk/client-s3 Documentation Links
GetObjectCommand
GetObjectCommandInput
GetObjectCommandOutput
aws-sdk (Original Answer)
When doing a getObject() from the S3 API, per the docs the contents of your file are located in the Body property, which you can see from your sample output. You should have code that looks something like the following
const aws = require('aws-sdk');
const s3 = new aws.S3(); // Pass in opts to S3 if necessary
var getParams = {
Bucket: 'abc', // your bucket name,
Key: 'abc.txt' // path to the object you're looking for
}
s3.getObject(getParams, function(err, data) {
// Handle any error and exit
if (err)
return err;
// No error happened
// Convert Body from a Buffer to a String
let objectData = data.Body.toString('utf-8'); // Use the encoding necessary
});
You may not need to create a new buffer from the data.Body object but if you need you can use the sample above to achieve that.
Based on the answer by @peteb, but using Promises and Async/Await:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
async function getObject (bucket, objectKey) {
try {
const params = {
Bucket: bucket,
Key: objectKey
}
const data = await s3.getObject(params).promise();
return data.Body.toString('utf-8');
} catch (e) {
throw new Error(`Could not retrieve file from S3: ${e.message}`)
}
}
// To retrieve you need to use `await getObject()` or `getObject().then()`
const myObject = await getObject('my-bucket', 'path/to/the/object.txt');
Updated (2022)
Node.js v17.5.0 added Readable.toArray(). If this API is available in your Node version, the code can be very short:
const buffer = Buffer.concat(
await (
await s3Client
.send(new GetObjectCommand({
Key: '<key>',
Bucket: '<bucket>',
}))
).Body.toArray()
)
If you are using TypeScript, it is safe to cast the .Body part as Readable (the other types, ReadableStream and Blob, are only returned in a browser environment; moreover, in the browser, Blob is only used by the legacy fetch API when response.body is not supported).
(response.Body as Readable).toArray()
Note that Readable.toArray is an experimental (yet handy) feature; use it with caution.
=============
Original answer
If you are using AWS SDK v3, the SDK returns a Node.js Readable (precisely, an IncomingMessage, which extends Readable) instead of a Buffer.
Here is a TypeScript version. Note that this is for Node only; if you send the request from a browser, check the longer answer in the blog post mentioned below.
import {GetObjectCommand, S3Client} from '@aws-sdk/client-s3'
import type {Readable} from 'stream'
const s3Client = new S3Client({
apiVersion: '2006-03-01',
region: 'us-west-2',
credentials: {
accessKeyId: '<access key>',
secretAccessKey: '<access secret>',
}
})
const response = await s3Client
.send(new GetObjectCommand({
Key: '<key>',
Bucket: '<bucket>',
}))
const stream = response.Body as Readable
return new Promise<Buffer>((resolve, reject) => {
const chunks: Buffer[] = []
stream.on('data', chunk => chunks.push(chunk))
stream.once('end', () => resolve(Buffer.concat(chunks)))
stream.once('error', reject)
})
// if readable.toArray() is support
// return Buffer.concat(await stream.toArray())
Why do we have to cast response.Body as Readable? The answer is too long. Interested readers can find more information on my blog post.
For someone looking for a NestJS TypeScript version of the above:
/**
* to fetch a signed URL of a file
* @param key key of the file to be fetched
* @param bucket name of the bucket containing the file
*/
public getFileUrl(key: string, bucket?: string): Promise<string> {
var scopeBucket: string = bucket ? bucket : this.defaultBucket;
var params: any = {
Bucket: scopeBucket,
Key: key,
Expires: signatureTimeout // const value: 30
};
return this.account.getSignedUrlPromise(getSignedUrlObject, params); // presumably the operation name 'getObject', defined elsewhere
}
/**
* to get the downloadable file buffer of the file
* @param key key of the file to be fetched
* @param bucket name of the bucket containing the file
*/
public async getFileBuffer(key: string, bucket?: string): Promise<Buffer> {
var scopeBucket: string = bucket ? bucket : this.defaultBucket;
var params: GetObjectRequest = {
Bucket: scopeBucket,
Key: key
};
var fileObject: GetObjectOutput = await this.account.getObject(params).promise();
return Buffer.from(fileObject.Body.toString());
}
/**
* to upload a file stream onto AWS S3
* @param file file buffer to be uploaded
* @param key key of the file to be uploaded
* @param bucket name of the bucket
*/
public async saveFile(file: Buffer, key: string, bucket?: string): Promise<any> {
var scopeBucket: string = bucket ? bucket : this.defaultBucket;
var params: any = {
Body: file,
Bucket: scopeBucket,
Key: key,
ACL: 'private'
};
var uploaded: any = await this.account.upload(params).promise();
if (uploaded && uploaded.Location && uploaded.Bucket === scopeBucket && uploaded.Key === key)
return uploaded;
else {
throw new HttpException("Error occurred while uploading a file stream", HttpStatus.BAD_REQUEST);
}
}
Converting GetObjectOutput.Body to Promise<string> using node-fetch
In aws-sdk-js-v3 @aws-sdk/client-s3, GetObjectOutput.Body is a subclass of Readable in nodejs (specifically an instance of http.IncomingMessage) instead of a Buffer as it was in aws-sdk v2, so resp.Body.toString('utf-8') will give you the wrong result “[object Object]”. Instead, the easiest way to turn GetObjectOutput.Body into a Promise<string> is to construct a node-fetch Response, which takes a Readable subclass (or Buffer instance, or other types from the fetch spec) and has conversion methods .json(), .text(), .arrayBuffer(), and .blob().
This should also work in the other variants of aws-sdk and platforms (@aws-sdk v3 node Buffer, v3 browser Uint8Array subclass, v2 node Readable, v2 browser ReadableStream or Blob)
npm install node-fetch
import { Response } from 'node-fetch';
import * as s3 from '@aws-sdk/client-s3';
const client = new s3.S3Client({})
const s3Response = await client.send(new s3.GetObjectCommand({Bucket: '…', Key: '…'}));
const response = new Response(s3Response.Body);
const obj = await response.json();
// or
const text = await response.text();
// or
const buffer = Buffer.from(await response.arrayBuffer());
// or
const blob = await response.blob();
Reference: GetObjectOutput.Body documentation, node-fetch Response documentation, node-fetch Body constructor source, minipass-fetch Body constructor source
Thanks to kennu's comment in the GetObjectCommand usability issue.
Extremely similar answer to @ArianAcosta above, except I'm using import (for Node 12.x and up), adding AWS config, sniffing for an image payload, and applying base64 processing to the return.
// using v2.x of aws-sdk
import aws from 'aws-sdk'
aws.config.update({
accessKeyId: process.env.YOUR_AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.YOUR_AWS_SECRET_ACCESS_KEY,
region: "us-east-1" // or whatever
})
const s3 = new aws.S3();
/**
* getS3Object()
*
* @param { string } bucket - the name of your bucket
* @param { string } objectKey - object you are trying to retrieve
* @returns { string } - data, formatted
*/
export async function getS3Object (bucket, objectKey) {
try {
const params = {
Bucket: bucket,
Key: objectKey
}
const data = await s3.getObject(params).promise();
// Check for image payload and formats appropriately
if( data.ContentType === 'image/jpeg' ) {
return data.Body.toString('base64');
} else {
return data.Body.toString('utf-8');
}
} catch (e) {
throw new Error(`Could not retrieve file from S3: ${e.message}`)
}
}
At first glance it doesn't look like you are doing anything wrong but you don't show all your code. The following worked for me when I was first checking out S3 and Node:
var AWS = require('aws-sdk');
if (typeof process.env.API_KEY == 'undefined') {
var config = require('./config.json');
for (var key in config) {
if (config.hasOwnProperty(key)) process.env[key] = config[key];
}
}
var s3 = new AWS.S3({accessKeyId: process.env.AWS_ID, secretAccessKey:process.env.AWS_KEY});
var objectPath = process.env.AWS_S3_FOLDER +'/test.xml';
s3.putObject({
Bucket: process.env.AWS_S3_BUCKET,
Key: objectPath,
Body: "<rss><data>hello Fred</data></rss>",
ACL:'public-read'
}, function(err, data){
if (err) console.log(err, err.stack); // an error occurred
else {
console.log(data); // successful response
s3.getObject({
Bucket: process.env.AWS_S3_BUCKET,
Key: objectPath
}, function(err, data){
console.log(data.Body.toString());
});
}
});
Alternatively, you could use the minio-js client library; see get-object.js:
var Minio = require('minio')
var s3Client = new Minio.Client({
endPoint: 's3.amazonaws.com',
accessKey: 'YOUR-ACCESSKEYID',
secretKey: 'YOUR-SECRETACCESSKEY'
})
var size = 0
// Get a full object.
s3Client.getObject('my-bucketname', 'my-objectname', function(e, dataStream) {
if (e) {
return console.log(e)
}
dataStream.on('data', function(chunk) {
size += chunk.length
})
dataStream.on('end', function() {
console.log("End. Total size = " + size)
})
dataStream.on('error', function(e) {
console.log(e)
})
})
Disclaimer: I work for Minio. It's open source, S3-compatible object storage written in Golang, with client libraries available in Java, Python, JavaScript, and Golang.
Just as an alternate solution:
As per this issue on the same subject, it seems that as of October 2022 there is a way of handling the body returned from an S3 GetObject request. Assuming you are using AWS SDK V3, you can take advantage of the @aws-sdk/util-stream-node package in the official AWS SDK:
import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3';
import { sdkStreamMixin } from '@aws-sdk/util-stream-node';
const s3Client = new S3Client({});
const { Body } = await s3Client.send(
new GetObjectCommand({
Bucket: 'your-bucket',
Key: 'your-key',
}),
);
// Throws error if Body is undefined
const body = await sdkStreamMixin(Body).transformToString();
You can also transform the body into a byte array or web stream using the .transformToByteArray() and .transformToWebStream() functions.
Keep in mind that the package says that you shouldn't be using it directly, but it seems to be the most straightforward way to handle the body from the request.
This was found in this reply that highlighted a PR that added this feature.
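For instance, a small sketch of the byte-array variant mentioned above (reusing the Body from the GetObjectCommand response; note the stream can only be consumed once, so pick one transform per response):

// Turn the streamed Body into a Node Buffer via transformToByteArray()
const bytes = await sdkStreamMixin(Body).transformToByteArray();
const buffer = Buffer.from(bytes);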
This is the async / await version
var getObjectAsync = async function(bucket,key) {
try {
const data = await s3
.getObject({ Bucket: bucket, Key: key })
.promise();
var contents = data.Body.toString('utf-8');
return contents;
} catch (err) {
console.log(err);
}
}
var getObject = async function(bucket,key) {
const contents = await getObjectAsync(bucket,key);
console.log(contents.length);
return contents;
}
getObject(bucket,key);
The Body.toString() method no longer works with the latest version of the s3 api. Use the following instead:
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const streamToString = (stream) =>
new Promise((resolve, reject) => {
const chunks = [];
stream.on("data", (chunk) => chunks.push(chunk));
stream.on("error", reject);
stream.on("end", () => resolve(Buffer.concat(chunks).toString("utf8")));
});
(async () => {
const region = "us-west-2";
const client = new S3Client({ region });
const command = new GetObjectCommand({
Bucket: "test-aws-sdk-js-1877",
Key: "readme.txt",
});
const { Body } = await client.send(command);
const bodyContents = await streamToString(Body);
console.log(bodyContents);
})();
Copied and pasted from here: https://github.com/aws/aws-sdk-js-v3/issues/1877#issuecomment-755387549
Not sure why this solution hasn't already been added, as I think it is cleaner than the top answer.
Using express and AWS SDK v3:
public downloadFeedFile = (req: IFeedUrlRequest, res: Response) => {
const downloadParams: GetObjectCommandInput = parseS3Url(req.s3FileUrl.replace(/\s/g, ''));
logger.info("requesting S3 file " + JSON.stringify(downloadParams));
const run = async () => {
try {
const fileStream = await this.s3Client.send(new GetObjectCommand(downloadParams));
if (fileStream.Body instanceof Readable){
fileStream.Body.once('error', err => {
console.error("Error downloading s3 file")
console.error(err);
});
fileStream.Body.pipe(res);
}
} catch (err) {
logger.error("Error", err);
}
};
run();
};
