Cloudinary Signed Uploads with Widget - javascript

Documentation is extremely frustrating.
I'm using the upload widget to try to allow users to upload multiple pictures for their profile. I can't use unsigned uploads because of the potential for abuse.
I would much rather upload the file through the upload widget than through the server, since it seems like it should be so simple.
I've pieced together what I think should work, but it still fails with: "Upload preset must be whitelisted for unsigned uploads".
Server:
// grab a current UNIX timestamp
const millisecondsToSeconds = 1000;
const timestamp = Math.round(Date.now() / millisecondsToSeconds);
// generate the signature using the current timestamp and any other desired Cloudinary params
const signature = cloudinaryV2.utils.api_sign_request({ timestamp }, CLOUDINARY_SECRET_KEY);
// craft a signature payload to send to the client (timestamp and signature required)
return signature;
also tried
return {
  signature,
  timestamp,
};
also tried
const signature = cloudinaryV2.utils.api_sign_request(
  data.params_to_sign,
  CLOUDINARY_SECRET_KEY,
);
Client:
const generateSignature = async (callback: Function, params_to_sign: object): Promise<void> => {
  try {
    const signature = await generateSignatureCF({ slug: 'xxxx' });
    // also tried { slug: 'xxxx', params_to_sign }
    callback(signature);
  } catch (err) {
    console.log(err);
  }
};
cloudinary.openUploadWidget(
  {
    cloudName: 'xxx',
    uploadPreset: 'xxxx',
    sources: ['local', 'url', 'facebook', 'dropbox', 'google_photos'],
    folder: 'xxxx',
    apiKey: ENV.CLOUDINARY_PUBLIC_KEY,
    uploadSignature: generateSignature,
  },
  function(error, result) {
    console.log(error);
  },
);

Let's all take a moment to point out how horrible Cloudinary's documentation is. It's easily the worst I've ever seen. Nightmare fuel.
Now that I've got that off my chest... I really needed to be able to do this, and I spent way too long banging my head against walls for what should be extremely simple. Here it is...
Server (Node.js)
You'll need an endpoint that returns a signature-timestamp pair to the frontend:
import cloudinary from 'cloudinary'

export async function createImageUpload() {
  const timestamp = new Date().getTime()
  const signature = await cloudinary.utils.api_sign_request(
    {
      timestamp,
    },
    process.env.CLOUDINARY_SECRET
  )
  return { timestamp, signature }
}
Client (Browser)
The client requests a signature-timestamp pair from the server and then uses it to upload a file. The file in the example should come from something like an <input type='file'/> change event.
const CLOUD_NAME = process.env.CLOUDINARY_CLOUD_NAME
const API_KEY = process.env.CLOUDINARY_API_KEY

async function uploadImage(file) {
  const { signature, timestamp } = await api.post('/image-upload')
  const form = new FormData()
  form.append('file', file)
  const res = await fetch(
    `https://api.cloudinary.com/v1_1/${CLOUD_NAME}/image/upload?api_key=${API_KEY}&timestamp=${timestamp}&signature=${signature}`,
    {
      method: 'POST',
      body: form,
    }
  )
  const data = await res.json()
  return data.secure_url
}
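For completeness, a minimal sketch of wiring this up to a file input's change event (the selector and element are hypothetical, not part of the original answer):
// Hypothetical wiring: call uploadImage() whenever the user picks a file
document.querySelector('input[type="file"]').addEventListener('change', async (event) => {
  const file = event.target.files[0]
  if (!file) return
  const url = await uploadImage(file)
  console.log('Uploaded image URL:', url)
})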
That's it. That's all it takes. If only Cloudinary had this in their docs.

Man. I hate my life. I finally figured it out. It literally took beautifying the upload widget JS to understand that the signature handed back to the widget's callback must be a string, not an object, even though the docs make it seem otherwise.
Here is how to implement a signed upload with a Firebase Cloud Function
import * as functions from 'firebase-functions';
import cloudinary from 'cloudinary';

const CLOUDINARY_SECRET_KEY = functions.config().cloudinary.key;
const cloudinaryV2 = cloudinary.v2;

module.exports.main = functions.https.onCall(async (data, context: functions.https.CallableContext) => {
  // Checking that the user is authenticated.
  if (!context.auth) {
    // Throwing an HttpsError so that the client gets the error details.
    throw new functions.https.HttpsError(
      'failed-precondition',
      'The function must be called while authenticated.',
    );
  }
  try {
    // Return the raw signature string (not an object) for the widget's callback
    return cloudinaryV2.utils.api_sign_request(data.params_to_sign, CLOUDINARY_SECRET_KEY);
  } catch (error) {
    throw new functions.https.HttpsError('failed-precondition', error.message);
  }
});
// CLIENT
const uploadWidget = () => {
  const generateSignature = async (callback: Function, params_to_sign: object): Promise<void> => {
    try {
      const signature = await generateImageUploadSignatureCF({ params_to_sign });
      callback(signature.data);
    } catch (err) {
      console.log(err);
    }
  };
  cloudinary.openUploadWidget(
    {
      cloudName: 'xxxxxx',
      uploadSignature: generateSignature,
      apiKey: ENV.CLOUDINARY_PUBLIC_KEY,
    },
    function(error, result) {
      console.log(error);
    },
  );
};

Related

Agora start method error: post method API body check failed

I'm building a video-calling app using Next.js and Agora.io 4, and I followed the steps mentioned in the docs:
I enabled Agora cloud recording,
called the acquire method and got the resourceId,
then called the start method, but it always fails with the error "post method API body check failed!"
However, it works perfectly in Postman.
Here's the code:
import axios from "axios";
import chalk from "chalk";

// AWS S3 storage bucket credentials
const secretKey = process.env.S3_SECRET_KEY;
const accessKey = process.env.S3_ACCESS_KEY;
const bucket = process.env.S3_BUCKET_NAME;
const region = process.env.S3_BUCKET_REGION;
const vendor = process.env.S3_VENDOR;

// agora credentials
const appId = process.env.APP_ID;
const key = process.env.KEY;
const secret = process.env.SECRET;

export default async function startHandler(req, res) {
  // call agora start method
  const { uid, cname, resourceId, token } = req.body;
  const plainCredential = `${key}:${secret}`;
  const encodedCredential = Buffer.from(plainCredential).toString("base64"); // Encode with base64
  const authorizationField = `Basic ${encodedCredential}`;
  const data = {
    uid,
    cname,
    clientRequest: {
      recordingConfig: {
        streamMode: "standard",
        channelType: 0,
        subscribeUidGroup: 0,
      },
      storageConfig: {
        accessKey,
        region,
        bucket,
        secretKey,
        vendor,
      },
    },
  };
  const headers = {
    "Content-Type": "application/json",
    Authorization: authorizationField,
  };
  const startUrl = `https://api.agora.io/v1/apps/${appId}/cloud_recording/resourceid/${resourceId}/mode/individual/start`;
  try {
    const response = await axios.post(startUrl, data, {
      headers,
    });
    res.status(200).send(response.data);
  } catch (error) {
    console.error(error);
    res.send(error);
  }
}
Any help/hint would be much appreciated
I found the fix!
First, you may be tricked by the uid returned from the Agora join method: it returns a Number, surprisingly! The start method expects the uid to be a string, so don't forget to do a uid.toString().
Second, in the storageConfig object you should check the type of each of its attributes: region and vendor are each expected to be of type Number. If you're storing this info in a .env file, remember that environment files only store strings, so you need to convert them to Numbers!
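For illustration, a minimal sketch of the corrected request body, reusing the variables from the question's handler:
const data = {
  uid: uid.toString(), // the start API expects a string uid
  cname,
  clientRequest: {
    recordingConfig: {
      streamMode: "standard",
      channelType: 0,
      subscribeUidGroup: 0,
    },
    storageConfig: {
      accessKey,
      secretKey,
      bucket,
      region: Number(region), // .env values are strings, so convert to Numbers
      vendor: Number(vendor),
    },
  },
};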
This problem took me 2 days, so I hope this will be useful for you!

Call external API in Firebase Cloud Function returns null

I'm trying to call a simple currency-conversion API in a Firebase Cloud Function in TypeScript, but it always returns 'null'.
import { https } from 'firebase-functions';
import * as axios from 'axios';
import * as cors from 'cors';

export const createTransfer = async (amount: number) => {
  https.onRequest((req, res) => {
    cors({ origin: true })(req, res, () => {
      const config = {
        headers: {
          apikey: 'APIKEY',
        },
        params: {
          to: 'USD',
          from: 'ILS',
          amount: amount
        }
      };
      const convert = axios.default.get('https://api.apilayer.com/exchangerates_data/convert', config)
        .then((resp) => {
          res.send(resp.data);
        })
        .catch((error) => {
          res.sendStatus(error);
        });
      return convert;
    });
  });
};

/////// DEPLOYABLE FUNCTION ////////
export const stripeTransferPayment = https.onCall(async (data, context) => {
  const amount = assert(data, 'amount');
  return createTransfer(amount);
});
It should return the converted amount. Where am I going wrong and how can I solve this?
It looks like your return convert; line returns the promise before the response has arrived. Try awaiting your request:
const convert = await axios.default.get('https://api.apilayer.com/exchangerates_data/convert', config);
res.send(convert.data)
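For context, a minimal sketch of the same handler with the request awaited (reusing the config, cors, and axios setup from the question; error handling kept simple):
https.onRequest((req, res) => {
  cors({ origin: true })(req, res, async () => {
    try {
      // Wait for the conversion API response before sending anything back
      const resp = await axios.default.get('https://api.apilayer.com/exchangerates_data/convert', config);
      res.send(resp.data);
    } catch (error) {
      res.status(500).send(error.message);
    }
  });
});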
According to the docs at https://apilayer.com/marketplace/exchangerates_data-api I don't think you are providing the proper parameters in the request. First, it looks like you are attempting to convert the value in today's terms, but the endpoint you're using is for historical data. This probably isn't really a problem, and I'd expect you can continue using this endpoint if you also send it today's date. However, if you are trying to get a conversion for today, I would suggest using the https://api.apilayer.com/exchangerates_data/live endpoint instead.
According to the docs, the proper parameter names are "base" and "symbols", not "from" and "to" (respectively). Try this:
// ...
https.onRequest((req, res) => {
  cors({ origin: true })(req, res, () => {
    const config = {
      headers: {
        apikey: 'APIKEY', // ensure this is actually your API key in your code, good on you for not exposing it publicly ;)
      },
      params: {
        base: 'USD',
        symbols: 'ILS',
        amount: amount
      }
    };
    const convert = axios.default.get('https://api.apilayer.com/exchangerates_data/live', config)
      .then((resp) => {
        res.send(resp.data);
      })
      .catch((error) => {
        res.sendStatus(error);
      });
    return convert;
  });
});
// ...

Streaming video from AWS Kinesis to JS client

I'm trying to consume the video from an AWS Kinesis stream. The stream is visible in the AWS console, but I cannot consume it in the JS application I'm trying to create.
I've been following this tutorial, but cannot get the streaming URL.
My code is here:
import React, { Component } from 'react'
import ReactPlayer from 'react-player'
import AWS from "aws-sdk";
import { STREAM_NAME, ACCESS_KEY_ID, SECRET_ACCESS_KEY, REGION } from '../secrets'

var streamName = STREAM_NAME;

// Step 1: Configure SDK Clients
var options = {
  accessKeyId: ACCESS_KEY_ID,
  secretAccessKey: SECRET_ACCESS_KEY,
  region: REGION
}
var kinesisVideo = new AWS.KinesisVideo(options);
var kinesisVideoArchivedContent = new AWS.KinesisVideoArchivedMedia(options);

// Step 2: Get a data endpoint for the stream
kinesisVideo.getDataEndpoint({
  StreamName: streamName,
  APIName: "GET_HLS_STREAMING_SESSION_URL"
}, function(err, response) {
  if (err) { return console.error(err); }
  console.log('Data endpoint: ' + response.DataEndpoint);
  kinesisVideoArchivedContent.endpoint = new AWS.Endpoint(response.DataEndpoint);
});

// Step 3: Get an HLS Streaming Session URL
console.log('Fetching HLS Streaming Session URL');
var playbackMode = 'LIVE'; // 'LIVE' or 'ON_DEMAND'
//var startTimestamp = new Date('START_TIMESTAMP'); // For ON_DEMAND only
//var endTimestamp = new Date('END_TIMESTAMP'); // For ON_DEMAND only
var fragmentSelectorType = 'SERVER_TIMESTAMP'; // 'SERVER_TIMESTAMP' or 'PRODUCER_TIMESTAMP'
const SESSION_EXPIRATION_SECONDS = 60*60

console.log(kinesisVideo)
const hlsUrl = kinesisVideoArchivedContent.getHLSStreamingSessionURL({
  StreamName: streamName,
  //StreamARN: "arn:aws:kinesisvideo:us-east-1:635420739373:stream/mr-pinchers-dot-org/1561848963391",
  PlaybackMode: playbackMode,
  HLSFragmentSelector: {
    FragmentSelectorType: fragmentSelectorType,
    TimestampRange: playbackMode === 'LIVE' ? undefined : {
      // StartTimestamp: startTimestamp,
      // EndTimestamp: endTimestamp
    }
  },
  Expires: parseInt(SESSION_EXPIRATION_SECONDS)
}, function(err, response) {
  if (err) { return console.error("Darn", err); }
  console.log('HLS Streaming Session URL: ' + response.HLSStreamingSessionURL, response);
})

console.log("here", hlsUrl)

class Home extends Component {
  render () {
    return <ReactPlayer url={hlsUrl} playing={true} />
  }
}

export default Home
The response I'm getting in Step 3 (response.HLSStreamingSessionURL) is undefined.
Step 2 runs fine, and I get an endpoint back, so I'm confident that it's not a permissions problem.
Part of me thinks that I should be using some async/await calls but I'm not sure, still pretty new to JS and all that async stuff so didn't know how to incorporate it into this.
I've spent quite a bit of time trying to figure this out but the documentation on Kinesis is still pretty light, although if someone has a good resource for it, please let me know.
This is basic JavaScript async behavior. You're executing step 3 before step 2 is complete. You can't use the response before it's happened.
You can fix this by starting step 3 when step 2 has completed, as follows:
kinesisVideo.getDataEndpoint({
  StreamName: streamName,
  APIName: "GET_HLS_STREAMING_SESSION_URL"
}, function(err, response) {
  if (err) { return console.error(err); }
  console.log('Data endpoint: ' + response.DataEndpoint);
  kinesisVideoArchivedContent.endpoint = new AWS.Endpoint(response.DataEndpoint);

  var playbackMode = 'LIVE';
  var fragmentSelectorType = 'SERVER_TIMESTAMP';
  const SESSION_EXPIRATION_SECONDS = 60*60

  kinesisVideoArchivedContent.getHLSStreamingSessionURL({...});
  // remainder of code here
});
Or you can use async/await and promise variants of the AWS SDK methods like so:
(async () => {
  const kv_response = await kv.getDataEndpoint({...}).promise();
  // ...
  const hls_response = await kvac.getHLSStreamingSessionURL({...}).promise();
})();
Note that await may only be used inside an async function, hence the anonymous async wrapper.
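Filled in with the client and variable names from the question, that might look like the following sketch (the stream name and Step 1 client setup are assumed to be unchanged):
async function fetchHlsUrl() {
  // Step 2: get the data endpoint and point the archived-media client at it
  const { DataEndpoint } = await kinesisVideo
    .getDataEndpoint({ StreamName: streamName, APIName: 'GET_HLS_STREAMING_SESSION_URL' })
    .promise();
  kinesisVideoArchivedContent.endpoint = new AWS.Endpoint(DataEndpoint);
  // Step 3: only now request the HLS streaming session URL
  const { HLSStreamingSessionURL } = await kinesisVideoArchivedContent
    .getHLSStreamingSessionURL({
      StreamName: streamName,
      PlaybackMode: 'LIVE',
      HLSFragmentSelector: { FragmentSelectorType: 'SERVER_TIMESTAMP' },
      Expires: 60 * 60,
    })
    .promise();
  return HLSStreamingSessionURL;
}
The player should then only be rendered once this promise resolves, for example by storing the URL in component state.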

By using Ledger Nano S, I want to sign a transaction and send it

I'm trying to send an Ethereum transaction that transfers ERC20 tokens to someone with a Ledger Nano S through Node.js, but I'm not able to successfully sign and send the transaction.
First of all, I signed the transaction through the ledgerhq API's signTransaction method, and after signing it I sent it to the main net using sendSignedTransaction. When I execute the code below, the Ledger receives the request and shows the details of the transaction. However, after pressing the Ledger's confirm button, the console returns the error 'Returned error: Invalid signature: Crypto error (Invalid EC signature)'.
import AppEth from "@ledgerhq/hw-app-eth";
import TransportU2F from "@ledgerhq/hw-transport-u2f";
import TransportNodeHid from "@ledgerhq/hw-transport-node-hid";
import EthereumTx from "ethereumjs-tx"
const Web3 = require('web3');
import { addHexPrefix, bufferToHex, toBuffer } from 'ethereumjs-util';

const web3 = new Web3(new Web3.providers.HttpProvider('http://localhost:8545'));
var destAddresses = ['0xa6acFa18468786473269Dc1521fd4ff40F6481D9'];
var amount = 1000000000000;
var i = 0;

var contract = new web3.eth.Contract([token contract ABI... ], '0x74a...');
const data1 = contract.methods.transfer(destAddresses[0], amount).encodeABI();

const exParams = {
  gasLimit: 6e6,
  gasPrice: 3e9,
  from: '0x1A...',
  data: data1,
  to: '0x74a...',
  value: '0x00',
  nonce: "0x0",
  chainId: 1,
  v: "0x01",
  r: "0x00",
  s: "0x00"
}

async function makeSign(txParams) {
  const tx = new EthereumTx(txParams);
  const txHex = tx.serialize().toString("hex");
  const signedTransaction = '0x' + txHex;
  let transport;
  try {
    transport = await TransportNodeHid.create();
    let eth2 = new AppEth(transport);
    const result = await eth2.signTransaction("m/44'/60'/0'/0", txHex).then(result => {
      web3.eth.sendSignedTransaction('0x' + txHex)
        .then(res => {
          console.log(res);
        }).catch(err => {
          console.log('sendSignedTransaction');
          console.log(err);
        });
    }).catch(err => {
      console.log('signTransaction');
      console.log(err);
    });
    txParams.r = `0x${result.r, 'hex'}`;
    txParams.s = `0x${result.s, 'hex'}`;
    txParams.v = `0x${result.v, 'hex'}`;
    return result;
  } catch (e) {
    console.log(e);
  }
}

makeSign(exParams).then(function () {
  console.log("Promise Resolved2");
}).catch(function () {
  console.log("Promise Rejected2");
});
When I only use the signTransaction function, I can confirm the transaction on the Ledger device and get a tx hash back on the console. However, ultimately I want to broadcast the transaction to the main net. Could you please give me any ideas? I welcome any feedback. Also, if there are any examples of creating and broadcasting a raw transaction using the Ledger, please point me to them.
Your code already sends the transaction to the network. However, just awaiting the "send" promise only gives you the transaction hash, not the receipt. You need to treat it as an event emitter and wait for the 'confirmation' event.
const serializedTx = tx.serialize();
web3.eth.sendSignedTransaction('0x' + serializedTx.toString('hex'))
  .once('transactionHash', hash => console.log('Tx hash', hash))
  .on('confirmation', (confNumber, receipt) => {
    console.log(`Confirmation #${confNumber}`, receipt);
  })
  .on('error', console.error);
To send it to mainnet as you mention, you can either run a local geth node on port 8545 and use your code unchanged, or point web3 at infura or similar.
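For instance, a one-line sketch of the latter (the Infura project ID is a placeholder you would supply yourself):
const web3 = new Web3('https://mainnet.infura.io/v3/YOUR_PROJECT_ID'); // hosted mainnet node instead of localhost:8545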

How to get response from S3 getObject in Node.js?

In a Node.js project I am attempting to get data back from S3.
When I use getSignedURL, everything works:
aws.getSignedUrl('getObject', params, function(err, url){
  console.log(url);
});
My params are:
var params = {
  Bucket: "test-aws-imagery",
  Key: "TILES/Level4/A3_B3_C2/A5_B67_C59_Tiles.par"
};
If I take the URL output to the console and paste it in a web browser, it downloads the file I need.
However, if I try to use getObject I get all sorts of odd behavior. I believe I am just using it incorrectly. This is what I've tried:
aws.getObject(params, function(err, data){
  console.log(data);
  console.log(err);
});
Outputs:
{
  AcceptRanges: 'bytes',
  LastModified: 'Wed, 06 Apr 2016 20:04:02 GMT',
  ContentLength: '1602862',
  ETag: '9826l1e5725fbd52l88ge3f5v0c123a4"',
  ContentType: 'application/octet-stream',
  Metadata: {},
  Body: <Buffer 01 00 00 00 ... >
}
null
So it appears that this is working properly. However, when I put a breakpoint on one of the console.logs, my IDE (NetBeans) throws an error and refuses to show the value of data. While this could just be the IDE, I decided to try other ways to use getObject.
aws.getObject(params).on('httpData', function(chunk){
  console.log(chunk);
}).on('httpDone', function(data){
  console.log(data);
});
This does not output anything. Putting a breakpoint in shows that the code never reaches either of the console.logs. I also tried:
aws.getObject(params).on('success', function(data){
  console.log(data);
});
However, this also does not output anything and placing a breakpoint shows that the console.log is never reached.
What am I doing wrong?
@aws-sdk/client-s3 (2022 Update)
Since I wrote this answer in 2016, Amazon has released a new JavaScript SDK, @aws-sdk/client-s3. This new version improves on the original getObject() by always returning a promise instead of opting in via .promise() chained to getObject(). In addition to that, response.Body is no longer a Buffer but one of Readable | ReadableStream | Blob. This changes the handling of the response data a bit. It should be more performant, since we can stream the data returned instead of holding all of the contents in memory, with the trade-off being that it is a bit more verbose to implement.
In the example below, the response.Body data is streamed into an array and then returned as a string. This is the equivalent of my original answer. Alternatively, the response.Body could be piped with stream.Readable.pipe() to an HTTP response, a file, or any other kind of stream.Writable for further usage; that would be the more performant approach when getting large objects.
If you want a Buffer, like the original getObject() response, you can wrap responseDataChunks in Buffer.concat() instead of using Array#join(); this is useful when interacting with binary data (a short sketch of that variant follows the example below). Note that since Array#join() returns a string, each Buffer instance in responseDataChunks will have Buffer.toString() called implicitly and the default encoding of utf8 will be used.
const { GetObjectCommand, S3Client } = require('@aws-sdk/client-s3')
const client = new S3Client() // Pass in opts to S3 if necessary

function getObject (Bucket, Key) {
  return new Promise(async (resolve, reject) => {
    const getObjectCommand = new GetObjectCommand({ Bucket, Key })
    try {
      const response = await client.send(getObjectCommand)
      // Store all of data chunks returned from the response data stream
      // into an array then use Array#join() to use the returned contents as a String
      let responseDataChunks = []
      // Handle an error while streaming the response body
      response.Body.once('error', err => reject(err))
      // Attach a 'data' listener to add the chunks of data to our array
      // Each chunk is a Buffer instance
      response.Body.on('data', chunk => responseDataChunks.push(chunk))
      // Once the stream has no more data, join the chunks into a string and return the string
      response.Body.once('end', () => resolve(responseDataChunks.join('')))
    } catch (err) {
      // Handle the error or throw
      return reject(err)
    }
  })
}
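As noted above, if you want a Buffer (for example for binary data) rather than a string, a minimal variant resolves with the concatenated chunks instead of joining them:
// Same listeners as above, but resolve with a Buffer built from the collected chunks
response.Body.once('end', () => resolve(Buffer.concat(responseDataChunks)))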
Comments on using Readable.toArray()
Using Readable.toArray() instead of working with the stream events directly might be more convenient, but it performs worse. It works by reading all of the response data chunks into memory before moving on. Since this removes all the benefits of streaming, the approach is discouraged per the Node.js docs:
"As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams." (Documentation Link)
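For reference, a minimal sketch of that discouraged approach, assuming Node.js >= 17.5 and the same response as in the example above:
const chunks = await response.Body.toArray() // reads the entire stream into memory
const contents = Buffer.concat(chunks).toString('utf8')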
@aws-sdk/client-s3 Documentation Links
GetObjectCommand
GetObjectCommandInput
GetObjectCommandOutput
aws-sdk (Original Answer)
When doing a getObject() from the S3 API, per the docs the contents of your file are located in the Body property, which you can see from your sample output. You should have code that looks something like the following
const aws = require('aws-sdk');
const s3 = new aws.S3(); // Pass in opts to S3 if necessary

var getParams = {
  Bucket: 'abc', // your bucket name,
  Key: 'abc.txt' // path to the object you're looking for
}

s3.getObject(getParams, function(err, data) {
  // Handle any error and exit
  if (err)
    return err;
  // No error happened
  // Convert Body from a Buffer to a String
  let objectData = data.Body.toString('utf-8'); // Use the encoding necessary
});
You may not need to create a new Buffer from the data.Body object, but if you do, you can use the sample above to achieve that.
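For instance, a one-line sketch (data.Body is already a Buffer in the v2 SDK, so this simply copies it):
const bodyBuffer = Buffer.from(data.Body); // only needed if you want your own copy of the bytes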
Based on the answer by @peteb, but using Promises and Async/Await:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function getObject (bucket, objectKey) {
  try {
    const params = {
      Bucket: bucket,
      Key: objectKey
    }
    const data = await s3.getObject(params).promise();
    return data.Body.toString('utf-8');
  } catch (e) {
    throw new Error(`Could not retrieve file from S3: ${e.message}`)
  }
}

// To retrieve you need to use `await getObject()` or `getObject().then()`
const myObject = await getObject('my-bucket', 'path/to/the/object.txt');
Updated (2022)
Node.js v17.5.0 added Readable.toArray. If this API is available in your Node version, the code will be very short:
const buffer = Buffer.concat(
  await (
    await s3Client
      .send(new GetObjectCommand({
        Key: '<key>',
        Bucket: '<bucket>',
      }))
  ).Body.toArray()
)
If you are using TypeScript, you can safely cast the .Body part as Readable (the other types, ReadableStream and Blob, are only returned in a browser environment; moreover, in the browser, Blob is only used in the legacy fetch API when response.body is not supported):
(response.Body as Readable).toArray()
Note: Readable.toArray is an experimental (yet handy) feature, so use it with caution.
=============
Original answer
If you are using AWS SDK v3, the SDK returns a Node.js Readable (precisely, an IncomingMessage, which extends Readable) instead of a Buffer.
Here is a TypeScript version. Note that this is for Node only; if you send the request from a browser, check the longer answer in the blog post mentioned below.
import {GetObjectCommand, S3Client} from '@aws-sdk/client-s3'
import type {Readable} from 'stream'

const s3Client = new S3Client({
  apiVersion: '2006-03-01',
  region: 'us-west-2',
  credentials: {
    accessKeyId: '<access key>',
    secretAccessKey: '<access secret>',
  }
})

const response = await s3Client
  .send(new GetObjectCommand({
    Key: '<key>',
    Bucket: '<bucket>',
  }))

const stream = response.Body as Readable

return new Promise<Buffer>((resolve, reject) => {
  const chunks: Buffer[] = []
  stream.on('data', chunk => chunks.push(chunk))
  stream.once('end', () => resolve(Buffer.concat(chunks)))
  stream.once('error', reject)
})
// if readable.toArray() is supported
// return Buffer.concat(await stream.toArray())
Why do we have to cast response.Body as Readable? The answer is too long. Interested readers can find more information on my blog post.
For someone looking for a NestJS TypeScript version of the above:
/**
 * to fetch a signed URL of a file
 * @param key key of the file to be fetched
 * @param bucket name of the bucket containing the file
 */
public getFileUrl(key: string, bucket?: string): Promise<string> {
  var scopeBucket: string = bucket ? bucket : this.defaultBucket;
  var params: any = {
    Bucket: scopeBucket,
    Key: key,
    Expires: signatureTimeout // const value: 30
  };
  return this.account.getSignedUrlPromise('getObject', params);
}

/**
 * to get the downloadable file buffer of the file
 * @param key key of the file to be fetched
 * @param bucket name of the bucket containing the file
 */
public async getFileBuffer(key: string, bucket?: string): Promise<Buffer> {
  var scopeBucket: string = bucket ? bucket : this.defaultBucket;
  var params: GetObjectRequest = {
    Bucket: scopeBucket,
    Key: key
  };
  var fileObject: GetObjectOutput = await this.account.getObject(params).promise();
  return Buffer.from(fileObject.Body.toString());
}

/**
 * to upload a file stream onto AWS S3
 * @param file file buffer to be uploaded
 * @param key key of the file to be uploaded
 * @param bucket name of the bucket
 */
public async saveFile(file: Buffer, key: string, bucket?: string): Promise<any> {
  var scopeBucket: string = bucket ? bucket : this.defaultBucket;
  var params: any = {
    Body: file,
    Bucket: scopeBucket,
    Key: key,
    ACL: 'private'
  };
  var uploaded: any = await this.account.upload(params).promise();
  if (uploaded && uploaded.Location && uploaded.Bucket === scopeBucket && uploaded.Key === key)
    return uploaded;
  else {
    throw new HttpException("Error occurred while uploading a file stream", HttpStatus.BAD_REQUEST);
  }
}
Converting GetObjectOutput.Body to Promise<string> using node-fetch
In aws-sdk-js-v3 @aws-sdk/client-s3, GetObjectOutput.Body is a subclass of Readable in nodejs (specifically an instance of http.IncomingMessage) instead of a Buffer as it was in aws-sdk v2, so resp.Body.toString('utf-8') will give you the wrong result “[object Object]”. Instead, the easiest way to turn GetObjectOutput.Body into a Promise<string> is to construct a node-fetch Response, which takes a Readable subclass (or Buffer instance, or other types from the fetch spec) and has conversion methods .json(), .text(), .arrayBuffer(), and .blob().
This should also work in the other variants of aws-sdk and platforms (@aws-sdk v3 node Buffer, v3 browser Uint8Array subclass, v2 node Readable, v2 browser ReadableStream or Blob).
npm install node-fetch
import { Response } from 'node-fetch';
import * as s3 from '@aws-sdk/client-s3';

const client = new s3.S3Client({})
const s3Response = await client.send(new s3.GetObjectCommand({ Bucket: '…', Key: '…' }));
const response = new Response(s3Response.Body);

const obj = await response.json();
// or
const text = await response.text();
// or
const buffer = Buffer.from(await response.arrayBuffer());
// or
const blob = await response.blob();
Reference: GetObjectOutput.Body documentation, node-fetch Response documentation, node-fetch Body constructor source, minipass-fetch Body constructor source
Thanks to kennu's comment on the GetObjectCommand usability issue.
Extremely similar answer to @ArianAcosta above, except I'm using import (for Node 12.x and up), adding AWS config, sniffing for an image payload, and applying base64 processing to the return.
// using v2.x of aws-sdk
import aws from 'aws-sdk'

aws.config.update({
  accessKeyId: process.env.YOUR_AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.YOUR_AWS_SECRET_ACCESS_KEY,
  region: "us-east-1" // or whatever
})

const s3 = new aws.S3();

/**
 * getS3Object()
 *
 * @param { string } bucket - the name of your bucket
 * @param { string } objectKey - object you are trying to retrieve
 * @returns { string } - data, formatted
 */
export async function getS3Object (bucket, objectKey) {
  try {
    const params = {
      Bucket: bucket,
      Key: objectKey
    }
    const data = await s3.getObject(params).promise();
    // Check for image payload and formats appropriately
    if( data.ContentType === 'image/jpeg' ) {
      return data.Body.toString('base64');
    } else {
      return data.Body.toString('utf-8');
    }
  } catch (e) {
    throw new Error(`Could not retrieve file from S3: ${e.message}`)
  }
}
At first glance it doesn't look like you are doing anything wrong but you don't show all your code. The following worked for me when I was first checking out S3 and Node:
var AWS = require('aws-sdk');

if (typeof process.env.API_KEY == 'undefined') {
  var config = require('./config.json');
  for (var key in config) {
    if (config.hasOwnProperty(key)) process.env[key] = config[key];
  }
}

var s3 = new AWS.S3({accessKeyId: process.env.AWS_ID, secretAccessKey: process.env.AWS_KEY});
var objectPath = process.env.AWS_S3_FOLDER + '/test.xml';

s3.putObject({
  Bucket: process.env.AWS_S3_BUCKET,
  Key: objectPath,
  Body: "<rss><data>hello Fred</data></rss>",
  ACL: 'public-read'
}, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else {
    console.log(data); // successful response
    s3.getObject({
      Bucket: process.env.AWS_S3_BUCKET,
      Key: objectPath
    }, function(err, data) {
      console.log(data.Body.toString());
    });
  }
});
Alternatively, you could use the minio-js client library; see get-object.js:
var Minio = require('minio')

var s3Client = new Minio({
  endPoint: 's3.amazonaws.com',
  accessKey: 'YOUR-ACCESSKEYID',
  secretKey: 'YOUR-SECRETACCESSKEY'
})

var size = 0
// Get a full object.
s3Client.getObject('my-bucketname', 'my-objectname', function(e, dataStream) {
  if (e) {
    return console.log(e)
  }
  dataStream.on('data', function(chunk) {
    size += chunk.length
  })
  dataStream.on('end', function() {
    console.log("End. Total size = " + size)
  })
  dataStream.on('error', function(e) {
    console.log(e)
  })
})
Disclaimer: I work for Minio. It's open-source, S3-compatible object storage written in Golang, with client libraries available in Java, Python, JS, and Golang.
Just as an alternate solution:
As per this issue on the same subject, it seems that as of October 2022 there is a way of handling the body returned from an S3 GetObject request. Assuming you are using AWS SDK v3, you can take advantage of the @aws-sdk/util-stream-node package in the official AWS SDK:
import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3';
import { sdkStreamMixin } from '@aws-sdk/util-stream-node';

const s3Client = new S3Client({});

const { Body } = await s3Client.send(
  new GetObjectCommand({
    Bucket: 'your-bucket',
    Key: 'your-key',
  }),
);

// Throws error if Body is undefined
const body = await sdkStreamMixin(Body).transformToString();
You can also transform the body into a byte array or web stream using the .transformToByteArray() and .transformToWebStream() functions.
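For instance, a minimal sketch of the byte-array variant, reusing the Body from the example above:
const bytes = await sdkStreamMixin(Body).transformToByteArray(); // resolves to a Uint8Array of the object's contents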
Keep in mind that the package says that you shouldn't be using it directly, but it seems to be the most straightforward way to handle the body from the request.
This was found in this reply that highlighted a PR that added this feature.
This is the async / await version
var getObjectAsync = async function(bucket, key) {
  try {
    const data = await s3
      .getObject({ Bucket: bucket, Key: key })
      .promise();
    var contents = data.Body.toString('utf-8');
    return contents;
  } catch (err) {
    console.log(err);
  }
}

var getObject = async function(bucket, key) {
  const contents = await getObjectAsync(bucket, key);
  console.log(contents.length);
  return contents;
}

getObject(bucket, key);
The Body.toString() method no longer works with the latest version of the S3 API. Use the following instead:
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

const streamToString = (stream) =>
  new Promise((resolve, reject) => {
    const chunks = [];
    stream.on("data", (chunk) => chunks.push(chunk));
    stream.on("error", reject);
    stream.on("end", () => resolve(Buffer.concat(chunks).toString("utf8")));
  });

(async () => {
  const region = "us-west-2";
  const client = new S3Client({ region });
  const command = new GetObjectCommand({
    Bucket: "test-aws-sdk-js-1877",
    Key: "readme.txt",
  });
  const { Body } = await client.send(command);
  const bodyContents = await streamToString(Body);
  console.log(bodyContents);
})();
Copied and pasted from here: https://github.com/aws/aws-sdk-js-v3/issues/1877#issuecomment-755387549
Not sure why this solution hasn't already been added as I think it is cleaner than the top answer.
Using express and AWS SDK v3:
public downloadFeedFile = (req: IFeedUrlRequest, res: Response) => {
  const downloadParams: GetObjectCommandInput = parseS3Url(req.s3FileUrl.replace(/\s/g, ''));
  logger.info("requesting S3 file " + JSON.stringify(downloadParams));
  const run = async () => {
    try {
      const fileStream = await this.s3Client.send(new GetObjectCommand(downloadParams));
      if (fileStream.Body instanceof Readable) {
        fileStream.Body.once('error', err => {
          console.error("Error downloading s3 file")
          console.error(err);
        });
        fileStream.Body.pipe(res);
      }
    } catch (err) {
      logger.error("Error", err);
    }
  };
  run();
};
