I'm following this article on downloading objects from a GCP Cloud Storage bucket: https://cloud.google.com/storage/docs/downloading-objects#storage-download-object-nodejs In the code, I want to set the destination where the file should be saved dynamically. How can I set the file destination in React?
Just set destFileName:
const destFileName = '/local/path/to/file.txt';

async function downloadFile() {
  const options = {
    destination: destFileName,
  };

  // Downloads the file to the given local destination
  await storage.bucket(bucketName).file(fileName).download(options);

  console.log(
    `gs://${bucketName}/${fileName} downloaded to ${destFileName}.`
  );
}
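Note that the download call above uses the Node.js client and writes to the server's local filesystem, so the dynamic destination has to be chosen in your Node code (for example in an API route your React app calls), not in the browser. A minimal sketch of parameterizing it, assuming storage, bucketName and the object name are set up as in the linked sample; the './downloads' directory is purely illustrative:

const path = require('path');

// Sketch: take the destination directory as a parameter instead of hard-coding it.
// `storage` and `bucketName` are assumed to be defined as in the linked Google sample.
async function downloadFileTo(destDir, objectName) {
  const destFileName = path.join(destDir, objectName);
  await storage.bucket(bucketName).file(objectName).download({
    destination: destFileName,
  });
  console.log(`gs://${bucketName}/${objectName} downloaded to ${destFileName}.`);
}

// Usage: await downloadFileTo('./downloads', 'file.txt');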
I'm having a problem uploading an image to Firebase Storage: it keeps uploading a 9 B file even when the selected file is a 100 MB file, and it shows the progress as NaN%. I once successfully uploaded an image to Firebase Storage, but now it's failing 😠. Here is the code:
const app = initializeApp(firebaseConfig);
const analytics = getAnalytics(app);
const storage = getStorage();

var picker = document.getElementById('img');

picker.onchange = function() {
  var file = window.URL.createObjectURL(picker.files[0]);
  var filename = picker.files[0].name;
  const storageRef = ref(storage, 'icons/' + filename);

  // 'file' comes from the Blob or File API
  uploadBytes(storageRef, file).then((snapshot) => {
    console.log('Uploaded a blob or file!');
  });
}
I tried many options but I don't know why it is not working. I want to upload the image and get its download URL.
You have to pass the actual File object to uploadBytes and not the object URL. Try:
picker.onchange = function() {
  const file = picker.files[0];
  if (!file) {
    alert("No file selected");
    return;
  }
  const storageRef = ref(storage, 'icons/' + file.name);
  uploadBytes(storageRef, file).then((snapshot) => {
    console.log('Uploaded a blob or file!');
  }).catch(e => console.log(e));
}
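The question also asks for the download URL; assuming the same modular (v9) SDK imports used above, a minimal follow-up sketch chains getDownloadURL onto the upload result:

import { ref, uploadBytes, getDownloadURL } from "firebase/storage";

// Upload the selected File, then resolve its download URL.
const file = picker.files[0];
const storageRef = ref(storage, 'icons/' + file.name);

uploadBytes(storageRef, file)
  .then((snapshot) => getDownloadURL(snapshot.ref))
  .then((url) => console.log('Download URL:', url))
  .catch((e) => console.error(e));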
It seems you are providing a URL to the image blob/file instead of passing the file itself. Try changing var file = window.URL.createObjectURL(picker.files[0]) to just var file = picker.files[0].
If that doesn't work, try logging file after it is initialized to make sure it exists.
The file does get uploaded, but instead of being hundreds of KBs or a few MBs it's just a couple of bytes, and when trying to open it, it shows blank or a "file not exist" error. The same issue happens with text files and images.
I believe the problem is that the stream does not wait to read the whole file before uploading it to the bucket.
The code sample is the one provided by Google in their "Google Cloud Storage: Node.js Client" documentation:
function main(
  bucketName = 'myBucket',
  destFileName = 'MyUploadedFile',
  contents = 'testFile.pdf'
) {
  // Imports the Google Cloud client library
  const {Storage} = require('@google-cloud/storage');

  // Import Node.js stream
  const stream = require('stream');

  // Creates a client
  const storage = new Storage();

  // Get a reference to the bucket
  const myBucket = storage.bucket(bucketName);

  // Create a reference to a file object
  const file = myBucket.file(destFileName);

  const passthroughStream = new stream.PassThrough();
  passthroughStream.write(contents);
  //console.log(passthroughStream.write(contents))
  passthroughStream.end();

  async function streamFileUpload() {
    passthroughStream
      .pipe(file.createWriteStream({resumable: true, gzip: true}))
      .on('finish', () => {
        // The file upload is complete
      });

    console.log(`${destFileName} uploaded to ${bucketName}`);
  }

  streamFileUpload().catch(console.error);
  // [END storage_stream_file_upload]
}

main(...process.argv.slice(2));
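A possible explanation for the tiny uploads (a sketch, not a confirmed fix): in the sample above, contents is the literal string 'testFile.pdf', so only those few bytes ever reach the PassThrough stream, and the console.log also runs before the 'finish' event fires. If the goal is to upload the contents of a local file, piping a read stream of the file itself and resolving only on 'finish' would look roughly like this (the bucket and file names reuse the defaults from the sample):

const fs = require('fs');
const {Storage} = require('@google-cloud/storage');

const storage = new Storage();

// Sketch: stream the real file bytes from disk and resolve only when the upload finishes.
function streamFileUpload(bucketName, destFileName, localPath) {
  return new Promise((resolve, reject) => {
    fs.createReadStream(localPath)
      .on('error', reject)
      .pipe(storage.bucket(bucketName).file(destFileName).createWriteStream({
        resumable: true,
        gzip: true,
      }))
      .on('finish', () => {
        console.log(`${destFileName} uploaded to ${bucketName}`);
        resolve();
      })
      .on('error', reject);
  });
}

streamFileUpload('myBucket', 'MyUploadedFile', 'testFile.pdf').catch(console.error);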
This is what I am trying to achieve: implement Firebase's Resize Images extension, upload an image, and then, when the resize is complete, add the thumbnail's download URL to a Cloud Firestore document. This question helps me, but I still cannot identify the thumbnails and get the download URL. This is what I have been trying so far.
Note: I set my thumbnails to be stored at root/thumbs.
const functions = require('firebase-functions');
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

exports.thumbsUrl = functions.storage.object().onFinalize(async object => {
  const fileBucket = object.bucket;
  const filePath = object.name;
  const contentType = object.contentType;

  if (fileBucket && filePath && contentType) {
    console.log('Complete data');
    if (!contentType.startsWith('thumbs/')) {
      console.log('This is not a thumbnails');
      return true;
    }
    console.log('This is a thumbnails');
  } else {
    console.log('Incomplete data');
    return null;
  }
});
Method 1: Client Side
Don't change the access token when creating the thumbnail.
Edit the function from the gcloud Cloud Functions console:
Go to the function code by clicking detailed usage stats
Then click on code
Edit the following lines, then redeploy the function:
// If the original image has a download token, add a
// new token to the image being resized #323
if (metadata.metadata.firebaseStorageDownloadTokens) {
  // metadata.metadata.firebaseStorageDownloadTokens = uuidv4_1.uuid();
}
Fetch the uploaded image using the getDownloadURL function; the returned URL will look like:
https://firebasestorage.googleapis.com/v0/b/<project_id>/o/<FolderName>%2F<Filename>.jpg?alt=media&token=xxxxxx-xxx-xxx-xxx-xxxxxxxxxxxxx
Because the access token will be similar, the thumbnail's URL will be:
https://firebasestorage.googleapis.com/v0/b/<project_id>/o/<FolderName>%2Fthumbnails%2F<Filename>_300x300.jpg?alt=media&token=xxxxxx-xxx-xxx-xxx-xxxxxxxxxxxxx
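In other words, because the token is shared, the thumbnail URL can be derived from the original download URL. A rough illustrative helper (assuming the thumbnails/ subfolder and the _300x300 suffix shown in the URLs above; this is not part of the extension's API):

// Illustrative only: rewrite the original download URL into the thumbnail URL,
// assuming the resize extension keeps the original token and uses the
// <name>_300x300 naming in a thumbnails/ subfolder, as shown above.
function thumbnailUrlFrom(originalUrl, size = '300x300') {
  return originalUrl.replace(
    /%2F([^%?]+)\.(\w+)\?/,
    (match, name, ext) => `%2Fthumbnails%2F${name}_${size}.${ext}?`
  );
}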
Method 2: Server Side
Call this function after the thumbnail is created:
var storage = firebase.storage();
var pathReference = storage.ref('users/' + userId + '/avatar.jpg');

pathReference.getDownloadURL().then(function (url) {
  $("#large-avatar").attr('src', url);
}).catch(function (error) {
  // Handle any errors
});
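Adapted to this question's layout (a sketch only: the thumbs/ prefix comes from the question, the _300x300 suffix from the extension's default naming, and the Firestore collection/document names are hypothetical):

// Sketch: resolve the thumbnail's download URL and store it on a Firestore document.
// `imageName` and `imageDocId` are placeholders for your own values.
var storage = firebase.storage();
var thumbRef = storage.ref('thumbs/' + imageName + '_300x300.jpg'); // assumed naming

thumbRef.getDownloadURL().then(function (url) {
  return firebase.firestore()
    .collection('images')   // hypothetical collection
    .doc(imageDocId)        // hypothetical document id
    .update({ thumbUrl: url });
}).catch(function (error) {
  console.error(error);
});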
You need to use filePath for checking the thumbnails:
if (filePath.startsWith('thumbs/')) { ... }
contentType holds the file's metadata, such as the image type, etc.
filePath will have the full object path.
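Applied to the Cloud Function from the question, the corrected check would look roughly like this (a sketch; only the condition changes, the rest mirrors the question's code):

exports.thumbsUrl = functions.storage.object().onFinalize(async object => {
  const fileBucket = object.bucket;
  const filePath = object.name;            // e.g. 'thumbs/photo_300x300.jpg'
  const contentType = object.contentType;  // e.g. 'image/jpeg'

  if (!fileBucket || !filePath || !contentType) {
    console.log('Incomplete data');
    return null;
  }

  // Check the object path, not the content type, to detect thumbnails.
  if (!filePath.startsWith('thumbs/')) {
    console.log('This is not a thumbnail');
    return true;
  }

  console.log('This is a thumbnail:', filePath);
  return true;
});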
I have an API method that, when called and passed an array of file keys, downloads them from S3. I'd like to stream them rather than download them to disk, then zip the files and return the archive to the client.
This is what my current code looks like:
reports.get('/xxx/:filenames ', async (req, res) => {
  var AWS = require('aws-sdk');
  var s3 = new AWS.S3();
  var str_array = filenames.split(',');

  for (var i = 0; i < str_array.length; i++) {
    var filename = str_array[i].trim();
    localFileName = './' + filename;

    var params = {
      Bucket: config.reportBucket,
      Key: filename
    }

    s3.getObject(params, (err, data) => {
      if (err) console.error(err)

      var file = require('fs').createWriteStream(localFileName);
      s3.getObject(params).createReadStream().pipe(file);
      console.log(file);
    })
  }
});
How would I stream the files rather than downloading them to disk, and how would I zip them to return to the client?
The main problem is zipping multiple files.
More specifically, downloading them from AWS S3 in bulk.
I've searched through the AWS SDK and didn't find bulk S3 operations.
Which brings us to one possible solution:
1. Load the files one by one and store them in a folder
2. Zip the folder (with some package like this)
3. Send the zipped folder
This is a raw and untested example, but it might give you the idea:
// Always import packages at the beginning of the file.
const AWS = require('aws-sdk');
const fs = require('fs');
const zipFolder = require('zip-folder');

const s3 = new AWS.S3();

reports.get('/xxx/:filenames', async (req, res) => {
  // The file keys come in via the route parameter.
  const filesArray = req.params.filenames.split(',');

  for (const fileName of filesArray) {
    const localFileName = './' + fileName.trim();

    const params = {
      Bucket: config.reportBucket,
      Key: fileName.trim()
    };

    // Probably you'll need some Promise logic here, to handle the end of the stream operation.
    const fileStream = fs.createWriteStream(localFileName);
    s3.getObject(params).createReadStream().pipe(fileStream);
  }

  // After that, all required files would be in some target folder.
  // Now you need to compress the folder and send it back to the user.
  // We wrap the callback in a promise, to make the code read in a "sync" way.
  await new Promise((resolve, reject) => {
    zipFolder('/path/to/the/folder', '/path/to/archive.zip', (err) => {
      if (err) return reject(err);
      resolve();
    });
  });

  // And now you can send the zipped folder to the user (also using streams).
  fs.createReadStream('/path/to/archive.zip').pipe(res);
});
Info about streams: link and link.
Attention: you could run into problems with async behaviour due to the nature of streams, so please, first of all, check that all files have been stored in the folder before zipping.
Just a note: I haven't tested this code, so if any questions come up, let's debug together.
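For the "Promise logic" mentioned in the comments above, a minimal sketch (reusing the names from the example) could wrap each download stream in a promise and await them all before zipping:

// Sketch: wait for every S3 download to finish writing to disk before zipping.
const downloads = filesArray.map((fileName) => new Promise((resolve, reject) => {
  const params = { Bucket: config.reportBucket, Key: fileName.trim() };
  s3.getObject(params)
    .createReadStream()
    .on('error', reject)                               // S3 read error
    .pipe(fs.createWriteStream('./' + fileName.trim()))
    .on('finish', resolve)                             // local write completed
    .on('error', reject);                              // local write error
}));

await Promise.all(downloads);  // all files are on disk, safe to zip the folder now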
When updating an image in a Google Cloud bucket, even if the image update is successful, the URL serves the old version for a while (a few minutes, e.g. 5 min or so).
The link we are using looks like:
https://storage.googleapis.com/<bucket-name>/path/to/images/1.jpg
The relevant part of the code which updates the image is:
var storageFile = bucket.file(imageToUpdatePath);

var storageFileStream = storageFile.createWriteStream({
  metadata: {
    contentType: req.file.mimetype
  }
});

storageFileStream.on('error', function(err) {
  ...
});

storageFileStream.on('finish', function() {
  // cloudFile.makePublic after the upload has finished, because otherwise the file is only accessible to the owner:
  storageFile.makePublic(function(err, data) {
    //if(err)
    //console.log(err);
    if (err) {
      return res.render("error", {
        err: err
      });
    }
    ...
  });
});

fs.createReadStream(filePath).pipe(storageFileStream);
It looks like a caching issue on the Google Cloud side. How can I solve it? How do I get the updated image at the requested URL right after it has been updated?
In the Google Cloud console, the new image does appear correctly.
By default, public objects get cached for up to 60 minutes - see Cache Control and Consistency. To fix this, you should set the cache-control property of the object to private when you create/upload the object. In your code above, this would go in the metadata block, like so:
var storageFileStream = storageFile.createWriteStream({
  metadata: {
    contentType: req.file.mimetype,
    cacheControl: 'private'
  }
});
Reference: https://cloud.google.com/storage/docs/viewing-editing-metadata#code-samples_1
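For objects that were already uploaded with the default caching, the cache-control can also be changed after the fact with setMetadata (a sketch; the object path is taken from the question's example URL, and 'private, max-age=0' is one possible value, not the only one):

// Sketch: update cache-control on an object that is already in the bucket.
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

async function disableCaching(bucketName, objectPath) {
  await storage
    .bucket(bucketName)
    .file(objectPath)
    .setMetadata({ cacheControl: 'private, max-age=0' });
}

// Usage: disableCaching('<bucket-name>', 'path/to/images/1.jpg');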
// 1. Delete the existing file with that name.
await bucket
  .file(filePath)
  .delete({ ignoreNotFound: true });

// 2. Save the new file under the same name.
const blob = bucket.file(filePath);
await blob.save(fil?.buffer);

// 3. Read back the metadata to get the fresh link.
const [metadata] = await storage
  .bucket(bucketName)
  .file(filePath)
  .getMetadata();
newDocObj.location = metadata.mediaLink;

I have used metadata.mediaLink to get the latest download link of the image from Google Bucket Storage.