File not uploading completely to Google Bucket NodeJs - javascript

The file does get uploaded, but instead of being hundreds of KBs or a few MBs it's just a couple of bytes, and when I try to open it it shows blank or a "file not exist" error. The same issue occurs with text files and images.
I believe the problem is that the stream does not wait to read the whole file before uploading it to the bucket.
The code sample is the one provided by Google in their "Google Cloud Storage: Node.js Client" documentation:
function main(
  bucketName = 'myBucket',
  destFileName = 'MyUploadedFile',
  contents = 'testFile.pdf'
) {
  // Imports the Google Cloud client library
  const {Storage} = require('@google-cloud/storage');
  // Import Node.js stream
  const stream = require('stream');
  // Creates a client
  const storage = new Storage();
  // Get a reference to the bucket
  const myBucket = storage.bucket(bucketName);
  // Create a reference to a file object
  const file = myBucket.file(destFileName);

  const passthroughStream = new stream.PassThrough();
  passthroughStream.write(contents);
  //console.log(passthroughStream.write(contents))
  passthroughStream.end();

  async function streamFileUpload() {
    passthroughStream.pipe(file.createWriteStream({resumable: true, gzip: true})).on('finish', () => {
      // The file upload is complete
    });
    console.log(`${destFileName} uploaded to ${bucketName}`);
  }

  streamFileUpload().catch(console.error);
  // [END storage_stream_file_upload]
}

main(...process.argv.slice(2));
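
Note that in this sample contents is the literal string 'testFile.pdf', so passthroughStream.write(contents) only pushes those few bytes into the bucket, which matches the couple-of-bytes objects described above. A minimal sketch of streaming the actual file from disk instead (bucket and file names here are placeholders):

const {Storage} = require('@google-cloud/storage');
const fs = require('fs');

async function uploadLocalFile() {
  const storage = new Storage();
  const file = storage.bucket('myBucket').file('MyUploadedFile');

  // Pipe the file from disk and resolve only when the upload has finished.
  await new Promise((resolve, reject) => {
    fs.createReadStream('testFile.pdf')
      .pipe(file.createWriteStream({resumable: true, gzip: true}))
      .on('error', reject)
      .on('finish', resolve);
  });

  console.log('Upload complete');
}

uploadLocalFile().catch(console.error);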

Related

How to read a large csv as a stream

I am using @aws-sdk/client-s3 to read a JSON file from S3, take the contents, and dump it into DynamoDB. This all currently works fine using:
const data = await (await new S3Client(region).send(new GetObjectCommand(bucketParams)));
And then deserialising the response body etc.
However, I'm looking to migrate to the jsonlines format (effectively CSV, in the sense that it needs to be streamed in line by line, or in chunks of lines, and processed). I can't seem to find a way of doing this that doesn't load the entire file into memory (using response.text() etc.).
Ideally, I would like to pipe the response into a createReadStream, and go from there.
I found this example with createReadStream() from the fs module in Node.js:
import fs from 'fs';

function read() {
  let data = '';
  const readStream = fs.createReadStream('business_data.csv', 'utf-8');
  readStream.on('error', (error) => console.log(error.message));
  readStream.on('data', (chunk) => data += chunk);
  readStream.on('end', () => console.log('Reading complete'));
}

read();
You can modify it for your use. Hope this helps.
You can connect to S3 like this:
var s3 = new AWS.S3({apiVersion: '2006-03-01'});
var params = {Bucket: 'myBucket', Key: 'myImageFile.jpg'};
var file = require('fs').createWriteStream('/path/to/file.jpg');
s3.getObject(params).createReadStream().pipe(file);
see here
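
For the @aws-sdk/client-s3 (v3) setup in the question, one way to process the object line by line without buffering it all in memory is to feed the response Body (a Readable stream in Node.js) into readline. A rough sketch, with the DynamoDB write left as a placeholder:

const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const readline = require('readline');

async function processJsonLines(bucketParams, region) {
  const client = new S3Client({ region });
  const { Body } = await client.send(new GetObjectCommand(bucketParams));

  // In Node.js the Body is a Readable stream, so readline can consume it directly.
  const rl = readline.createInterface({ input: Body, crlfDelay: Infinity });

  for await (const line of rl) {
    if (!line.trim()) continue;
    const record = JSON.parse(line); // jsonlines: one JSON document per line
    // ...put `record` into DynamoDB here
  }
}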

How to upload a img to firebase storage correctly (javascript)?

I'm having a problem uploading an image to Firebase Storage: it keeps uploading a 9 B file to storage even if the selected file is a 100 MB file, and it shows the progress as NaN%. I once successfully uploaded an image to Firebase Storage, but now I'm failing 😭 Here is the code:
const app = initializeApp(firebaseConfig);
const analytics = getAnalytics(app);
const storage = getStorage();

var picker = document.getElementById('img');
picker.onchange = function() {
  var file = window.URL.createObjectURL(picker.files[0]);
  var filename = picker.files[0].name;
  const storageRef = ref(storage, 'icons/' + filename);

  // 'file' comes from the Blob or File API
  uploadBytes(storageRef, file).then((snapshot) => {
    console.log('Uploaded a blob or file!');
  });
}
I tried many options and I don't know why it is not working. I want to upload the image and get its download URL.
You have to pass the actual File object to uploadBytes and not the object URL. Try:
picker.onchange = function() {
  const file = picker.files[0];
  if (!file) {
    alert("No file selected");
    return;
  }
  const storageRef = ref(storage, 'icons/' + file.name);
  uploadBytes(storageRef, file).then((snapshot) => {
    console.log('Uploaded a blob or file!');
  }).catch(e => console.log(e));
}
It seems you are providing a url to the image blob/file instead of passing the file itself. Try changing line 8 to just var file = picker.files[0].
If that doesn't work, try logging file after it is initialized to make sure it exists.
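
Since the question also asks for the download URL, here is a sketch of chaining getDownloadURL (imported from 'firebase/storage' alongside ref and uploadBytes) onto the upload; the 'icons/' path follows the question's code:

picker.onchange = async function() {
  const file = picker.files[0];
  if (!file) return;

  const storageRef = ref(storage, 'icons/' + file.name);

  // Upload the File object itself, then request a download URL for the uploaded object.
  const snapshot = await uploadBytes(storageRef, file);
  const url = await getDownloadURL(snapshot.ref);
  console.log('File available at', url);
};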

How to set file destination in React?

I'm following up on this article to download objects from a GCP Cloud Storage bucket: https://cloud.google.com/storage/docs/downloading-objects#storage-download-object-nodejs In the code, I want to set the destination where the file needs to be saved dynamically. How can I set the file destination in React?
Just set destFileName:
const destFileName = '/local/path/to/file.txt';

async function downloadFile() {
  const options = {
    destination: destFileName,
  };

  // Downloads the file
  await storage.bucket(bucketName).file(fileName).download(options);

  console.log(
    `gs://${bucketName}/${fileName} downloaded to ${destFileName}.`
  );
}

downloadFile().catch(console.error);
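
If the destination has to be built dynamically, you can assemble it with path.join on the Node.js side (reusing the storage client, bucketName and fileName from the snippet above); note this only works where the code has filesystem access, since a React app running in the browser cannot choose where a download is saved. A small sketch with a placeholder downloadDir argument:

const path = require('path');

async function downloadTo(downloadDir, fileName) {
  // Build the local destination path at call time.
  const options = { destination: path.join(downloadDir, fileName) };
  await storage.bucket(bucketName).file(fileName).download(options);
  console.log(`gs://${bucketName}/${fileName} downloaded to ${options.destination}.`);
}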

Multiple file stream instead of download to disk and then zip?

I have an API method that when called and passed an array of file keys, downloads them from S3. I'd like to stream them, rather than download to disk, followed by zipping the files and returning that to the client.
This is what my current code looks like:
reports.get('/xxx/:filenames ', async (req, res) => {
  var AWS = require('aws-sdk');
  var s3 = new AWS.S3();
  var str_array = filenames.split(',');

  for (var i = 0; i < str_array.length; i++) {
    var filename = str_array[i].trim();
    localFileName = './' + filename;
    var params = {
      Bucket: config.reportBucket,
      Key: filename
    }

    s3.getObject(params, (err, data) => {
      if (err) console.error(err)
      var file = require('fs').createWriteStream(localFileName);
      s3.getObject(params).createReadStream().pipe(file);
      console.log(file);
    })
  }
});
How would I stream the files rather than downloading them to disk and how would I zip them to return that to the client?
Main problem is to zip multiple files.
More specifically, download them from AWS S3 in bulk.
I've searched through AWS SDK and didn't find bulk s3 operations.
Which brings us to one possible solution:
Load the files one by one and store them in a folder
Zip the folder (with a package like zip-folder, used in the example below)
Send the zipped folder
This is a raw and untested example, but it might give you the idea:
// Always import packages at the beginning of the file.
const AWS = require('aws-sdk');
const fs = require('fs');
const zipFolder = require('zip-folder');

const s3 = new AWS.S3();

reports.get('/xxx/:filenames', async (req, res) => {
  const filesArray = req.params.filenames.split(',');

  for (const fileName of filesArray) {
    const localFileName = './' + fileName.trim();
    const params = {
      Bucket: config.reportBucket,
      Key: fileName
    };

    // Probably you'll need some Promise logic here, to handle the end of the stream operation.
    const fileStream = fs.createWriteStream(localFileName);
    s3.getObject(params).createReadStream().pipe(fileStream);
  }

  // After that, all required files would be in some target folder.
  // Now you need to compress the folder and send it back to the user.
  // We wrap the callback in a promise, to make the code read in a "sync" way.
  await new Promise((resolve, reject) =>
    zipFolder('/path/to/the/folder', '/path/to/archive.zip', (err) => (err ? reject(err) : resolve()))
  );

  // And now you can send the zipped folder to the user (also using streams).
  fs.createReadStream('/path/to/archive.zip').pipe(res);
});
Attention: you could run into problems with async behaviour, given the nature of streams, so please first check that all files are stored in the folder before zipping.
Just a note: I've not tested this code, so if any questions appear, let's debug together.
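
If you want to avoid touching the disk at all, one alternative is to stream each S3 object straight into a zip that is piped to the response. This is only a sketch, assuming the archiver package and the same AWS SDK v2 client and Express-style router as above, and it is untested:

const AWS = require('aws-sdk');
const archiver = require('archiver');

const s3 = new AWS.S3();

reports.get('/xxx/:filenames', (req, res) => {
  const keys = req.params.filenames.split(',').map((k) => k.trim());

  res.attachment('reports.zip');  // Express helper: sets the Content-Disposition header
  const archive = archiver('zip');
  archive.on('error', (err) => res.status(500).end(err.message));
  archive.pipe(res);              // Stream the zip directly to the client.

  for (const key of keys) {
    // Each S3 object is streamed into the archive under its own key name.
    const objectStream = s3.getObject({ Bucket: config.reportBucket, Key: key }).createReadStream();
    archive.append(objectStream, { name: key });
  }

  archive.finalize();
});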

Upload a file stream to S3 without a file and from memory

I'm trying to create a CSV from a string and upload it to my S3 bucket. I don't want to write a file; I want it all to be in memory.
I don't want to read from a file to get my stream. I would like to make a stream without a file. I like the createReadStream method, but instead of a file I would like to pass a string with my stream's contents.
var AWS = require('aws-sdk'),
    zlib = require('zlib'),
    fs = require('fs'),
    s3Stream = require('s3-upload-stream')(new AWS.S3());

// Set the client to be used for the upload.
AWS.config.loadFromPath('./config.json');

// Create the streams
var read = fs.createReadStream('/path/to/a/file');
var upload = s3Stream.upload({
  "Bucket": "bucket-name",
  "Key": "key-name"
});

// Handle errors.
upload.on('error', function (error) {
  console.log(error);
});

upload.on('part', function (details) {
  console.log(details);
});

upload.on('uploaded', function (details) {
  console.log(details);
});

read.pipe(upload);
You can create a Readable stream and push your string directly into it, which can then be consumed by your s3Stream instance.
const Readable = require('stream').Readable
let data = 'this is your data'
let read = new Readable()
read.push(data) // Push your data string
read.push(null) // Signal that you're done writing
// Create upload s3Stream instance and attach listeners go here
read.pipe(upload)
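
On more recent Node versions the same thing can be written with Readable.from, which builds a stream from a string or any iterable (a small variation on the answer above, not part of it):

const { Readable } = require('stream');

// One chunk; an array or other iterable of chunks also works.
const read = Readable.from(['this is your data']);
read.pipe(upload);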
