I'm trying to use S3 multipart upload to send data to S3 from a stream on the client side.
I'm using Browserify to convert the Nodejs code into a single file that can be loaded by the Chrome Extension.
Here is my code:
const AWS = require('aws-sdk');
const Stream = require('stream');

var inputBytesReadable = new Stream.Readable();
// add data to the Stream

var s3 = new AWS.S3({
    params: {Bucket: bucketName}
});

var params = {
    Bucket: bucketName,
    Key: fileName,
    PartNumber: partNumber,
    UploadId: uploadId,
    Body: inputBytesReadable
};

s3.uploadPart(params, function(err, data) {
    if (err) {
        appendMessage("Error in uploading " + fileName + " part " + partNumber);
        console.log(err, err.stack); // an error occurred
    }
});
However, this results in the error: InvalidParameterType: Expected params.Body to be a string, Buffer, Stream, Blob, or typed array object
What am I doing incorrectly? Is there any way that I can pass a Stream to S3 in client-side JavaScript?
It seems that the AWS JavaScript SDK does not support stream inputs unless it is running in a Node.js environment; the SDK performs an explicit environment check before it will accept a Node stream as a request body.
Therefore it appears that a Node.js stream cannot be passed to the SDK on the front end; in the browser the Body has to be one of the other accepted types (string, Buffer, Blob, or typed array).
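One workaround under that constraint, sketched below, is to gather the part's bytes into a Blob (or a typed array) and pass that as the Body instead. The bucket, key, part number and upload ID are the same placeholders as in the question; `chunks` (an array of Uint8Array pieces assumed to have been collected client-side) is an assumption for illustration.

// Sketch: build the part body from in-memory bytes instead of a Node stream.
// `chunks` is an assumed array of Uint8Array pieces gathered client-side.
var partBody = new Blob(chunks, { type: 'application/octet-stream' });

var partParams = {
    Bucket: bucketName,
    Key: fileName,
    PartNumber: partNumber,
    UploadId: uploadId,
    Body: partBody // Blob and typed arrays are accepted by the browser build of the SDK
};

s3.uploadPart(partParams, function(err, data) {
    if (err) {
        console.log(err, err.stack);
        return;
    }
    // data.ETag must be kept and passed later to completeMultipartUpload
    console.log('Uploaded part ' + partNumber + ', ETag: ' + data.ETag);
});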
Related
I'm trying to upload an image to S3 from an external URL.
Currently, I can successfully upload a file, but after I download and open it I see that it is corrupted.
Here is the code I'm using (got is just what I use to fetch the resource):
const got = require('got');
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
    accessKeyId: AWS_ACCESS_KEY_ID,
    secretAccessKey: AWS_SECRET_ACCESS_KEY,
});

const response = await got('https://example.com/image.jpg');

const uploadedFile = await s3
    .upload({
        Bucket: 'my_bucket',
        Key: 'images/',
        Body: response.body,
        ContentType: 'image/jpeg',
    })
    .promise();
I tried to create a buffer and use putObject instead of upload, but I end up with files on S3 that are only a few bytes.
The request that fetches the image is converting it to a string. Pretty much whatever encoding you pick to do that will corrupt it, since a JPG is binary data not meant to be represented with a string's encoding.
The documentation for the got library states:
encoding
Type: string
Default: 'utf8'
Encoding to be used on setEncoding of the response data.
To get a Buffer, you need to set responseType to buffer instead. Don't set this option to null.
In other words, if you change your download to:
const response = await got('https://example.com/image.jpg', { responseType: 'buffer' });
You'll receive and upload a Buffer object, without it being altered by a string encoding.
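For illustration, here is a minimal sketch of the corrected flow, reusing the question's placeholder credentials, bucket and URL (and assuming, as in the question's snippet, that this runs inside an async context):

const got = require('got');
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
    accessKeyId: AWS_ACCESS_KEY_ID,
    secretAccessKey: AWS_SECRET_ACCESS_KEY,
});

// responseType: 'buffer' keeps the response body as raw bytes
const response = await got('https://example.com/image.jpg', { responseType: 'buffer' });

const uploadedFile = await s3
    .upload({
        Bucket: 'my_bucket',
        Key: 'images/image.jpg', // a key that names a file, not a "folder"
        Body: response.body,     // a Buffer, uploaded byte-for-byte
        ContentType: 'image/jpeg',
    })
    .promise();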
Your key is wrong when you are uploading the file to S3:
Key: 'images/'
It cannot be images/ because that would upload the image to an object that represents a folder. While that might work with the local file system on your Windows/Mac laptop, it doesn't work with object stores. It needs to be a key that represents a file, for example:
Key: 'images/image.jpg'
Doing it through streams as mentioned by Ermiya Eskandary seems to work:
const response = got.stream('https://example.com/image.jpg');
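A rough sketch of that streaming variant, under the same placeholder names: s3.upload accepts a readable stream as Body, so the got stream can be handed over directly.

// Sketch: pipe the remote image into S3 without buffering it all in memory.
const imageStream = got.stream('https://example.com/image.jpg');

const uploadedFile = await s3
    .upload({
        Bucket: 'my_bucket',
        Key: 'images/image.jpg',
        Body: imageStream,        // a readable stream; s3.upload handles streaming bodies
        ContentType: 'image/jpeg',
    })
    .promise();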
I am trying to read an avatar file uploaded via PostMan or any other client to my ExpressJS API.
So far, the recommendations I have been getting have all been Multer.
I don't want to use Multer as I am having some issues with it. I want to be able to read the file directly and upload it to a remote location of my choice.
Here is the code I have, but it is not working:
const getS3Params = (file) => {
    let fileName = file.name;
    let fileType = file.mimetype;
    let fileContent = fs.readFileSync(file); /** Getting an error that says: TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string or an instance of Buffer or URL. Received an instance of Object **/
    return {
        Bucket: process.env.S3_BUCKET_NAME,
        Key: fileName,
        ACL: 'public-read',
        ContentType: fileType,
        Body: fileContent
    };
};
Is there a way to read the content of the file without using Multer?
Thanks.
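One possible direction, shown only as a hedged sketch: if the file object comes from a middleware such as express-fileupload (its .name and .mimetype properties suggest something like that), the raw bytes are already in memory on file.data, so there is no path to hand to fs.readFileSync and the buffer can be passed to S3 directly. The route path and field name below are assumptions.

// Sketch, assuming express-fileupload populates req.files with in-memory buffers.
const getS3Params = (file) => {
    return {
        Bucket: process.env.S3_BUCKET_NAME,
        Key: file.name,
        ACL: 'public-read',
        ContentType: file.mimetype,
        Body: file.data // Buffer exposed by express-fileupload (assumption)
    };
};

app.post('/avatar', (req, res) => {
    const file = req.files.avatar; // "avatar" field name is an assumption
    s3.upload(getS3Params(file), (err, data) => {
        if (err) return res.status(500).json({ error: err.message });
        res.json({ url: data.Location });
    });
});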
I'm trying to make an endpoint in NodeJS/Express for downloading content from my AWS S3 Bucket.
It works well: I can download the file on the client side, but I can also see the stream preview in the Network tab, which is annoying...
QUESTION
I'm wondering if what I'm doing is correct and a good practice.
I would also like to know if it's normal to see the output stream in the Network tab.
How should I properly send a file from S3 to my client application using NodeJS/Express?
I'm pretty sure other websites' requests don't let you preview the content and instead show "Failed to load response data".
This is what I do in my NodeJS application to get the file stream from AWS S3:
download(fileId) {
    const fileObjectStream = app.s3
        .getObject({
            Key: fileId
        })
        .createReadStream();

    this.res.set("Content-Type", "application/octet-stream");
    this.res.set(
        "Content-Disposition",
        'attachment; filename="' + fileId + '"'
    );

    fileObjectStream.pipe(this.res);
}
And on the client side I can see the stream preview in the Network tab.
I think the issue is with the headers:
// this line will set the proper header for the file and make it downloadable in the client's browser
res.attachment(key);

// this will execute the download
s3.getObject(bucketParams)
    .createReadStream()
    .pipe(res);
So the code should look like this (this is what I do in my project, returning the file with res.attachment, or res.json in case of an error so the client can display it to the end user):
router.route("/downloadFile").get((req, res) => {
const query = req.query; //param from client
const key = query.key;//param from client
const bucketName = query.bucket//param from client
var bucketParams = {
Bucket: bucketName,
Key: key
};
//I assume you are using AWS SDK
s3 = new AWS.S3({ apiVersion: "2006-03-01" });
s3.getObject(bucketParams, function(err, data) {
if (err) {
// cannot get file, err = AWS error response,
// return json to client
return res.json({
success: false,
error: err
});
} else {
res.attachment(key); //sets correct header (fixes your issue )
//if all is fine, bucket and file exist, it will return file to client
s3.getObject(bucketParams)
.createReadStream()
.pipe(res);
}
});
});
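For completeness, a hypothetical client-side call against that route; the key and bucket values below are placeholders. Navigating to the URL (rather than fetching it with XHR) lets the Content-Disposition header set by res.attachment trigger a plain download.

// Trigger the download in the browser; the attachment header makes it save instead of preview.
window.location.href =
    '/downloadFile?key=' + encodeURIComponent('reports/file.pdf') +
    '&bucket=' + encodeURIComponent('my-bucket');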
I'm trying to use the ssh2 module to take a file from a server and add it to an S3 bucket in AWS. I would like to be able to stream the file so that I don't have to have it in memory. I tried the following:
const Client = require('ssh2').Client;
const aws = require('aws-sdk');
const s3 = new aws.S3();

exports.handler = function(event, context, callback) {
    let connSettings = {
        host: event.serverHost,
        port: event.port,
        username: event.username,
        password: event.password
    };

    let conn = new Client();
    conn.on('ready', function() {
        conn.sftp(true, function(err, sftp) {
            if (err) throw err;

            let stream = sftp.createReadStream(filename);

            let putParams = {
                Bucket: s3Bucket,
                Key: s3Key,
                Body: stream
            };

            s3.putObject(putParams, function (err) {
                if (err) throw err;
                console.log("Uploaded!");
            });
        });
    }).connect(connSettings);
};
However, the method sftp.createReadStream(filename) seems to be looking at my local directory and not at the server. Other than that, it works.
Is there a way I can stream a file from a server to S3?
I know I could use the sftp.fastGet method to download the file from the server, save it locally, and then upload it to S3. But I would prefer not to have to save the file locally. The s3 SDK accepts a stream, so it would be much more convenient to just stream it.
UPDATE: the method sftp.createReadStream(filename) is correctly reading from the server and not locally. It is the s3.putObject method that is for some reason trying to get the file locally even though I'm giving it a stream.
For some reason the s3.putObject method looks for the file locally even though I give it a stream. The stream references the path on the server, but when s3.putObject reads the stream, it tries to read the file locally.
I fixed this by instead using the s3.upload method.
s3.upload(putParams, function (err) {
    if (err) throw err;
    console.log("Uploaded!");
});
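For context: s3.upload is the SDK's managed uploader and accepts streaming bodies of unknown length (it splits them into parts internally), whereas putObject wants the full content length up front, which is likely why it misbehaved with the SFTP stream. A compact sketch of the working call, reusing the question's placeholder names:

// Same params shape as before; only the method changes.
let putParams = {
    Bucket: s3Bucket,                      // placeholders from the question
    Key: s3Key,
    Body: sftp.createReadStream(filename)  // remote SFTP read stream, nothing saved locally
};

s3.upload(putParams, function (err, data) {
    if (err) throw err;
    console.log("Uploaded to " + data.Location);
});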
Trying to upload an mp4 file with the AWS JS SDK by initiating a multipart upload, I keep getting a "file corrupt" error when I download the file and try to play it on my local machine.
Gists of my code:
Initiating the multipart upload with params:
const createMultipartUploadParams = {
    Bucket: bucketname,
    Key: fileHash.file_name,
    ContentType: 'video/mp4' // TODO: Change hardcode
};
Call:
s3Instance.createMultipartUpload(createMultipartUploadParams, function(err, data) {
    // ...
});
Doing the chunking:
Params:
const s3ChunkingParams = {
    chunkSize,
    noOfIterations,
    lastChunk,
    UploadId: data.UploadId
};
Reading the file:
const reader = new FileReader();
reader.readAsArrayBuffer(file)
Uploading each chunk:
reader.onloadend = function onloadend() {
    console.log('onloadend');

    const partUploadParams = {
        Bucket: bucketname,
        Key: file_name,
        PartNumber: i, // Iterating over all parts
        UploadId: s3ChunkingParams.UploadId,
        Body: reader.result.slice(start, stop) // Chunking up the file
    };

    s3Instance.uploadPart(partUploadParams, function(err, data1) {
        // ...
    });
};
Finally, completing the multipart upload:
s3Instance.completeMultipartUpload(completeMultipartParams, function(err, data) {
    // ...
});
I am guessing the problem is how I am reading the file, so I have tried content-encoding it as base64, but that makes the size unusually large. Any help is greatly appreciated!
Tried this too
The only thing that could cause corruption is that you may be uploading additionally padded content for your individual parts, which leads to the final object being wrong. I do not believe S3 is doing anything fishy here.
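One way to rule out padding or re-encoding, sketched below, is to slice the original File object itself (File inherits Blob.slice) and hand each slice to uploadPart, instead of going through FileReader and base64; the names reuse the question's placeholders, and 1-based part numbering is assumed.

// Sketch: upload part `i` as an untouched byte range of the original File.
const start = (i - 1) * chunkSize;                   // assumes 1-based part numbers
const stop = Math.min(start + chunkSize, file.size);

const partUploadParams = {
    Bucket: bucketname,
    Key: file_name,
    PartNumber: i,
    UploadId: s3ChunkingParams.UploadId,
    Body: file.slice(start, stop) // a Blob slice: exact bytes, no padding, no base64
};

s3Instance.uploadPart(partUploadParams, function(err, data) {
    if (err) return console.log(err);
    // Collect { ETag: data.ETag, PartNumber: i } for completeMultipartUpload.
});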
After uploading the file, you can verify the final size of the object; if it doesn't match your local copy, then you know you have a problem somewhere.
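A quick way to do that check from code, assuming the same bucket and key used for the upload:

// Compare the uploaded object's size with the local file's size.
s3Instance.headObject({ Bucket: bucketname, Key: file_name }, function(err, data) {
    if (err) return console.log(err);
    console.log('S3 reports ' + data.ContentLength + ' bytes; local file is ' + file.size + ' bytes');
});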
Are you trying to upload from the browser?
Alternatively, you can look at https://github.com/minio/minio-js. It has a minimal set of abstracted APIs implementing the most commonly used S3 calls.
Here is a Node.js example of a streaming upload.
$ npm install minio
$ cat >> put-object.js << EOF
var Minio = require('minio')
var fs = require('fs')

// find out your s3 end point here:
// http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
var s3Client = new Minio({
    url: 'https://<your-s3-endpoint>',
    accessKey: 'YOUR-ACCESSKEYID',
    secretKey: 'YOUR-SECRETACCESSKEY'
})

var localFile = 'your_localfile.zip'
var fileStream = fs.createReadStream(localFile)

fs.stat(localFile, function(e, stat) {
    if (e) {
        return console.log(e)
    }
    s3Client.putObject('mybucket', 'hello/remote_file.zip', 'application/octet-stream', stat.size, fileStream, function(e) {
        return console.log(e) // should be null
    })
})
EOF
putObject() here is a fully managed single function call; for file sizes over 5MB it automatically does a multipart upload internally. You can resume a failed upload as well, and it will start from where it left off by verifying previously uploaded parts.
So you don't necessarily have to go through the trouble of writing lower-level multipart calls.
Additionally, this library is isomorphic and can be used in browsers as well.