When updating an image in a Google Cloud Storage bucket, even if the update succeeds, the URL keeps serving the old version for a while (a few minutes, e.g. around 5 minutes).
The link we are using looks like:
https://storage.googleapis.com/<bucket-name>/path/to/images/1.jpg
The relevant part of the code which updates the image is:
var storageFile = bucket.file(imageToUpdatePath);
var storageFileStream = storageFile.createWriteStream({
  metadata: {
    contentType: req.file.mimetype
  }
});

storageFileStream.on('error', function(err) {
  ...
});

storageFileStream.on('finish', function() {
  // Call storageFile.makePublic after the upload has finished, because otherwise the file is only accessible to the owner:
  storageFile.makePublic(function(err, data) {
    if (err) {
      return res.render("error", {
        err: err
      });
    }
    ...
  });
});

fs.createReadStream(filePath).pipe(storageFileStream);
It looks like a caching issue on the Google Cloud side. How can I solve it? How can I get the updated image at the requested URL right after it is updated?
In the Google Cloud console, the new image does appear correctly.
By default, public objects get cached for up to 60 minutes - see Cache Control and Consistency. To fix this, you should set the cache-control property of the object to private when you create/upload the object. In your code above, this would go in the metadata block, like so:
var storageFileStream = storageFile.createWriteStream({
  metadata: {
    contentType: req.file.mimetype,
    cacheControl: 'private'
  }
});
Reference: https://cloud.google.com/storage/docs/viewing-editing-metadata#code-samples_1
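If the object has already been uploaded with the default caching, here is a minimal sketch of updating its metadata in place rather than re-uploading. It assumes the same storageFile object as in the question; the exact cache value is up to you:

storageFile.setMetadata({
  cacheControl: 'no-cache' // or 'private', as above; this is just a sketch
}, function(err, metadata) {
  if (err) {
    console.log(err);
  }
});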
1. Delete the existing file (by name):

   await bucket
     .file(filePath)
     .delete({ ignoreNotFound: true });

   const blob = bucket.file(filePath);

2. Save the new file under the same name:

   await blob.save(fil?.buffer);

3. Read the metadata to get the latest download link:

   const [metadata] = await storage
     .bucket(bucketName)
     .file(filePath)
     .getMetadata();

   newDocObj.location = metadata.mediaLink;

I have used metadata.mediaLink to get the latest download link of the image from Google Cloud Storage. A consolidated sketch of these steps follows.
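For readability, here is a consolidated sketch of the three steps above. It assumes bucket, filePath, the uploaded file object, and newDocObj come from the surrounding code, as in the snippets:

async function replaceImage(bucket, filePath, file, newDocObj) {
  const blob = bucket.file(filePath);

  // 1. Delete the existing object (ignore if it does not exist yet).
  await blob.delete({ ignoreNotFound: true });

  // 2. Save the new content under the same name.
  await blob.save(file.buffer);

  // 3. Read back the metadata and store the fresh download link.
  const [metadata] = await blob.getMetadata();
  newDocObj.location = metadata.mediaLink;

  return newDocObj;
}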
I'm following this article to download objects from a GCP Cloud Storage bucket: https://cloud.google.com/storage/docs/downloading-objects#storage-download-object-nodejs. In the code, I want to set the destination where the file needs to be saved dynamically. How can I set the file destination in React?
Just set destFileName:
const destFileName = '/local/path/to/file.txt';

async function downloadFile() {
  const options = {
    destination: destFileName,
  };

  // Downloads the file
  await storage.bucket(bucketName).file(fileName).download(options);

  console.log(
    `gs://${bucketName}/${fileName} downloaded to ${destFileName}.`
  );
}
I want to add the image reference of my uploaded image to the Firestore database. My code uploads the image to Firebase Storage but does not save the data to the database. I am getting the error:
Uncaught FirebaseError: Function DocumentReference.set() called with invalid data. Unsupported field value: a custom je object (found in field Image)
What is the correct way to do this?
Please see the code below:
console.log("Initialisation Successful!");
var db = firebase.firestore();
var storageRef = firebase.storage().ref().child('Images');
function addExercise(){
var exerciseName = document.getElementById("ename").value;
var exercisePart = document.getElementById("body_part").value;
var exerciseLevel = document.getElementById("elevel").value;
var file = document.getElementById("eimage").files[0];
var thisRef = storageRef.child(file.name);
thisRef.put(file).then(function(snapshot) {
console.log('done!' + thisRef );
});
db.collection("Exercises").add({
Name: exerciseName,
BodyPart: exercisePart,
Level: exerciseLevel,
Image: thisRef
})
.then(function(){
console.log("Data entered successfully!");
})
.catch(function(error){
console.error("Error!", error);
});
}
You probably just want to save the full path of the reference, with thisRef.fullPath, as this is a simple string and not a full object.
To later translate that to an object that you could read, you'd do something like:
firebase.storage().ref().child(fullPath)
This obviously assumes the bucket hasn't changed -- otherwise you'd need to store the bucket name as well.
See more details here: https://firebase.google.com/docs/storage/web/create-reference.
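A minimal sketch based on the question's code, assuming you store the path string and resolve it to a download URL when you need to display the image (savedFullPath and the img element id are hypothetical):

thisRef.put(file).then(function(snapshot) {
  // Only write to Firestore once the upload has finished.
  return db.collection("Exercises").add({
    Name: exerciseName,
    BodyPart: exercisePart,
    Level: exerciseLevel,
    Image: thisRef.fullPath // a plain string, which Firestore accepts
  });
}).then(function() {
  console.log("Data entered successfully!");
}).catch(function(error) {
  console.error("Error!", error);
});

// Later, to display the image (savedFullPath is the string read back from Firestore):
firebase.storage().ref().child(savedFullPath).getDownloadURL().then(function(url) {
  document.getElementById("exercise-img").src = url; // hypothetical element id
});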
This is what I am trying to achieve: implement Firebase's Resize Images extension, upload an image, and then, when the resize is completed, add the thumbnail's download URL to a Cloud Firestore document. This question helped me, but I still cannot identify the thumbs and get the download URL. This is what I have been trying so far.
Note: I set my thumbnail to be at root/thumbs
const functions = require('firebase-functions');
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

exports.thumbsUrl = functions.storage.object().onFinalize(async object => {
  const fileBucket = object.bucket;
  const filePath = object.name;
  const contentType = object.contentType;

  if (fileBucket && filePath && contentType) {
    console.log('Complete data');
    if (!contentType.startsWith('thumbs/')) {
      console.log('This is not a thumbnail');
      return true;
    }
    console.log('This is a thumbnail');
  } else {
    console.log('Incomplete data');
    return null;
  }
});
Method 1: Client Side

Don't change the access token when creating the thumbnail:
1. Edit the function from the GCloud Cloud Functions console.
2. Go to the function code by clicking detailed usage stats, then click on code.
3. Edit the lines below.
4. Redeploy the function.
// If the original image has a download token, add a
// new token to the image being resized #323
if (metadata.metadata.firebaseStorageDownloadTokens) {
  // metadata.metadata.firebaseStorageDownloadTokens = uuidv4_1.uuid();
}
Fetch the uploaded image using the getDownloadURL function:
https://firebasestorage.googleapis.com/v0/b/<project_id>/o/<FolderName>%2F<Filename>.jpg?alt=media&token=xxxxxx-xxx-xxx-xxx-xxxxxxxxxxxxx
Because the access token will be the same, the thumbnail URL will be:
https://firebasestorage.googleapis.com/v0/b/<project_id>/o/<FolderName>%2Fthumbnails%2F<Filename>_300x300.jpg?alt=media&token=xxxxxx-xxx-xxx-xxx-xxxxxxxxxxxxx
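As a rough client-side sketch, the thumbnail URL can be derived from the original download URL by inserting the thumbnails folder and the size suffix. This assumes the file lives inside a folder, the default _300x300 naming of the resize extension, and that the token was left unchanged as described above:

function thumbnailUrlFrom(originalUrl, size) {
  size = size || '300x300';
  // The query string (alt=media&token=...) stays the same.
  var parts = originalUrl.split('?');
  var path = parts[0];
  var query = parts[1];
  var lastSep = path.lastIndexOf('%2F'); // URL-encoded "/" before the file name
  var dot = path.lastIndexOf('.');
  var thumbPath = path.slice(0, lastSep) + '%2Fthumbnails%2F' +
                  path.slice(lastSep + 3, dot) + '_' + size + path.slice(dot);
  return thumbPath + '?' + query;
}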
Method 2: Server Side
Call this function after the thumbnail is created:
var storage = firebase.storage();
var pathReference = storage.ref('users/' + userId + '/avatar.jpg');

pathReference.getDownloadURL().then(function(url) {
  $("#large-avatar").attr('src', url);
}).catch(function(error) {
  // Handle any errors
});
You need to use filePath to check for the thumbs:

if (filePath.startsWith('thumbs/')) { ... }

contentType holds the file's metadata, such as the image type; filePath holds the full path.
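Putting that together, here is a minimal corrected sketch of the question's function, checking the object path instead of its content type (whether you then write the path or a URL to Firestore is up to you):

exports.thumbsUrl = functions.storage.object().onFinalize(async object => {
  const fileBucket = object.bucket;
  const filePath = object.name;            // e.g. "thumbs/photo_300x300.jpg"
  const contentType = object.contentType;  // e.g. "image/jpeg"

  if (!fileBucket || !filePath || !contentType) {
    console.log('Incomplete data');
    return null;
  }
  if (!filePath.startsWith('thumbs/')) {
    console.log('This is not a thumbnail');
    return null;
  }
  console.log('This is a thumbnail:', filePath);
  // From here you can look up the file and store its path/URL in Firestore.
  return true;
});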
The case:
On the Salesforce platform I use Google Drive to store files (images in this case) with the Apex Google Drive API Framework configured, so the Google Drive API handles the authToken and so on. I can upload and browse images in my application. In this case I want to select multiple files and download them in a single zip file. So far I'm trying to do that using the JSZip and FileSaver libraries. With the same code below I can zip and download multiple files stored elsewhere (with the proper response headers), but not from GDrive, because of a CORS error:
https://xxx.salesforce.com/contenthub/download/XXXXXXXXXX%3Afile%XXXXXX_XXXXXXXXX. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://xxx.visual.force.com' is therefore not allowed access. If I just click on this link, the file starts to download.
Is there any way to configure GDrive to return the response header Access-Control-Allow-Origin: * or Access-Control-Allow-Origin: https://*.mydomain.com, or do I just have to use something else, maybe server-side compression? Right now I am using the download link provided by the Apex Google Drive API (it looks like this:
https://xxx.salesforce.com/contenthub/download/XXXXXXXXXXX%3Afile%XXXXXXXX); it works fine when used as src="fileURL" or when pasted directly into the browser. The GDrive connector adds the 'accessToken' and so on.
My code:
// ajax request to get files using JSZipUtils
let urlToPromise = (url) => {
  return new Promise(function(resolve, reject) {
    JSZipUtils.getBinaryContent(url, function(err, data) {
      if (err) {
        reject(err);
      } else {
        resolve(data);
      }
    });
  });
};

this.downloadAssets = () => {
  let zip = new JSZip();

  // 'selectedAssets' is an array of objects, each of which has 'assetFiles'
  // with a fileURL. Download the files and add them to 'zip' one by one.
  for (let a of this.selectedAssets) {
    for (let f of a.assetFiles) {
      let url = f.fileURL;
      let name = a.assetName + "." + f.fileType;
      let filename = name.replace(/ /g, "");
      zip.file(filename, urlToPromise(url), { binary: true });
    }
  }

  // generate the zip and download it using 'FileSaver.js'
  zip.generateAsync({ type: "blob" })
    .then(function callback(blob) {
      saveAs(blob, "test.zip");
    });
};
I also tried changing let url = f.fileURL to let url = f.fileURL + '?alt=media'; with &access_token=CURRENT_TOKEN added by the GDrive connector.
This link is handled by the GDrive connector, so if I just enter it in the browser it downloads the image. However, for multiple downloads using JS I get a CORS error.
I think this feature is not yet supported. If you check the Download files guide from the Drive API, there's no mention of downloading multiple files at once. That's because you have to make an individual API request for each file. This is confirmed in this SO thread.
But converting the selected files into a single zip file and downloading that single zip file should be possible with the Google Drive API. So how can I combine them into a single zip file? Please tell me.
My suggestion: download all the files and store them in a temporary directory, then add that directory to a zip file and save that zip to the device.
public Entity.Result<Entity.GoogleDrive> DownloadMultipleFile(string[] fileidList)
{
    var result = new Entity.Result<Entity.GoogleDrive>();
    ZipFile zip = new ZipFile();
    try
    {
        var service = new DriveService(new BaseClientService.Initializer()
        {
            HttpClientInitializer = credential,
            ApplicationName = "Download File",
        });

        FilesResource.ListRequest listRequest = service.Files.List();
        //listRequest.PageSize = 10;
        listRequest.Fields = "nextPageToken, files(id, name, mimeType, fullFileExtension)";
        IList<File> files = listRequest.Execute().Files;

        if (files != null && files.Count > 0)
        {
            foreach (var fileid in fileidList)
            {
                foreach (var file in files)
                {
                    if (file.Id == fileid)
                    {
                        result.Data = new Entity.GoogleDrive { FileId = fileid };
                        FilesResource.GetRequest request = service.Files.Get(fileid);
                        var stream = new System.IO.FileStream(HttpContext.Current.Server.MapPath(@"~\TempFiles") + "\\" + file.Name, System.IO.FileMode.Create, System.IO.FileAccess.Write);
                        request.MediaDownloader.ProgressChanged += (IDownloadProgress progress) =>
                        {
                            switch (progress.Status)
                            {
                                case DownloadStatus.Downloading:
                                    break;
                                case DownloadStatus.Completed:
                                    break;
                                case DownloadStatus.Failed:
                                    break;
                            }
                        };
                        request.Download(stream);
                        stream.Close();
                        break;
                    }
                }
            }
        }

        zip.AddDirectory(HttpContext.Current.Server.MapPath(@"~\TempFiles"), "GoogleDrive");
        string pathUser = Environment.GetFolderPath(Environment.SpecialFolder.UserProfile);
        string pathDownload = System.IO.Path.Combine(pathUser, "Downloads");
        zip.Save(pathDownload + "\\GoogleDrive.zip");

        System.IO.DirectoryInfo di = new System.IO.DirectoryInfo(HttpContext.Current.Server.MapPath(@"~\TempFiles"));
        foreach (var file in di.GetFiles())
        {
            file.Delete();
        }

        result.IsSucceed = true;
        result.Message = "File downloaded successfully";
    }
    catch (Exception ex)
    {
        result.IsSucceed = false;
        result.Message = ex.ToString();
    }
    return result;
}
My previously published code works; I forgot to post a solution. Instead of using the content hub link, I started using the direct link to Google Drive, and the CORS issue was solved. I'm still not sure whether CORS could be solved somehow on the Salesforce side; I tried different setups with no luck.
The direct download link to GDrive works OK in my case. The only thing I had to change was the prefix to the GDrive file ID.
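For illustration only, the change amounts to building the URL from the Drive file ID instead of the content hub prefix. The exact endpoint and the fileId property are assumptions here; the alt=media and access token parts come from the GDrive connector as mentioned in the question:

// Hypothetical: f.fileId is the Google Drive file ID of the asset file.
let url = 'https://www.googleapis.com/drive/v3/files/' + f.fileId +
          '?alt=media&access_token=' + CURRENT_TOKEN;
zip.file(filename, urlToPromise(url), { binary: true });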
I've come across a problem uploading a large CSV file to Azure Table Storage: the data appears to stream from the file so fast that it doesn't upload properly, or it throws a lot of timeout errors.
This is my current code:
var fs = require('fs');
var csv = require('csv');
var azure = require('azure');

var AZURE_STORAGE_ACCOUNT = "my storage account";
var AZURE_STORAGE_ACCESS_KEY = "my access key";

var tableService = azure.createTableService(AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_ACCESS_KEY);

var count = 150000;
var uploadCount = 1;
var counterror = 1;

tableService.createTableIfNotExists('newallactorstable', function(error) {
  if (!error) {
    console.log("Table created / located");
  } else {
    console.log("error");
  }
});

csv()
  .from.path(__dirname + '/actorsb-c.csv', { delimiter: '\t' })
  .transform(function(row) {
    row.unshift(row.pop());
    return row;
  })
  .on('record', function(row, index) {
    // Output plane carrier, arrival delay and departure delay
    //console.log('Actor:' + row[0]);
    var actorsUpload = {
      PartitionKey: 'actors',
      RowKey: count.toString(),
      Actors: row[0]
    };
    tableService.insertEntity('newallactorstable', actorsUpload, function(error) {
      if (!error) {
        console.log("Added: " + uploadCount);
      } else {
        console.log(error);
      }
    });
    count++;
  })
  .on('close', function(count) {
    console.log('Number of lines: ' + count);
  })
  .on('error', function(error) {
    console.log(error.message);
  });
The CSV file is roughly 800 MB.
I know that to fix this I probably need to send the data in batches, but I have literally no idea how to do this.
I have no knowledge of the azure package or the csv package, but I would suggest uploading the file using a stream. If you have the file saved on your drive, you can create a read stream from it and then use that stream to upload to Azure with createBlockBlobFromStream. That question redirects me here; I suggest you take a look at it, as it handles the encoding. The code provides a way to convert the file to a base64 string, but I suspect that can be done more efficiently using Node. I will have to look into that, though.
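A rough sketch of that stream approach, assuming the azure-storage package (the successor to the old azure module used in the question) and a made-up container name:

var fs = require('fs');
var azureStorage = require('azure-storage');

var blobService = azureStorage.createBlobService(AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_ACCESS_KEY);

var filePath = __dirname + '/actorsb-c.csv';
var stream = fs.createReadStream(filePath);
var streamLength = fs.statSync(filePath).size;

// 'csv-uploads' is a hypothetical container name.
blobService.createBlockBlobFromStream('csv-uploads', 'actorsb-c.csv', stream, streamLength, function(error) {
  if (error) {
    console.log(error);
  } else {
    console.log('Blob uploaded');
  }
});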
Hmm, what I would suggest is uploading your file to Blob Storage; you can then keep a reference to the blob URI in your Table Storage. The block blob option gives you an easy way to do a batch upload.
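If you still want the rows in Table Storage itself, here is a minimal sketch of batched inserts, again assuming the azure-storage package; note that a batch is limited to 100 operations and a single PartitionKey:

var azureStorage = require('azure-storage');
var entGen = azureStorage.TableUtilities.entityGenerator;
var tableService = azureStorage.createTableService(AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_ACCESS_KEY);

// rows: up to 100 parsed CSV rows; startCount: the running RowKey counter.
function insertBatch(rows, startCount, callback) {
  var batch = new azureStorage.TableBatch();
  rows.forEach(function(row, i) {
    batch.insertEntity({
      PartitionKey: entGen.String('actors'),
      RowKey: entGen.String(String(startCount + i)),
      Actors: entGen.String(row[0])
    }, {});
  });
  tableService.executeBatch('newallactorstable', batch, callback);
}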